From david at irdta.eu Sat Jan 1 04:57:53 2022 From: david at irdta.eu (David Silva - IRDTA) Date: Sat, 1 Jan 2022 10:57:53 +0100 (CET) Subject: Connectionists: DeepLearn 2022 Spring - DeepLearn 2022 Summer Message-ID: <1050670961.1937898.1641031073585@webmail.strato.com>

Dear all, DeepLearn, the International School on Deep Learning, has been running successfully since 2017. Please note the next editions of the program in 2022: https://irdta.eu/deeplearn/2022sp/ https://irdta.eu/deeplearn/2022su/ Best regards, DeepLearn organizing team

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From fmschleif at googlemail.com Sat Jan 1 11:27:26 2022 From: fmschleif at googlemail.com (Frank-Michael Schleif) Date: Sat, 1 Jan 2022 17:27:26 +0100 Subject: Connectionists: Open position - professorship - for AI / Semantic / NLP / machine learning at FHWS in Würzburg, Germany Message-ID:

Open position - professorship - for AI / Semantic / NLP / machine learning at FHWS in Würzburg, Germany (deadline: 19.01.2022)

We are creating a new center on artificial intelligence (CAIRO) in Würzburg, Germany, with multiple open positions (one professor position has been announced so far): https://www.fhws.de/service/stellenausschreibungen-der-fhws/online-stellenportal-fuer-professuren-und-lehrpersonal/ Direct link: https://stellen.fhws.de/jobposting/56c82f2a26d49b9569b5da563ab29681edaa87c70?ref=homepage (English translation available - just scroll down.)

The positions are research professorships (German W2 level: well-paid, tenured, lifelong positions) and will establish a center for AI (CAIRO) in Würzburg. Successful candidates will - do research and attract projects - have some minimal administrative duties - have only 9 x 45 min of teaching duties per week (9 SWS) during the terms - be involved in the newly created MSc program on Artificial Intelligence (MAI): https://mai.fhws.de/en/ Additional funding to establish a group is also available.
This is an exciting opportunity. The positions are located here in Würzburg and the curriculum will (so far) be in English only (it may be necessary to learn some German in the first years). To be eligible, it is mandatory to have five years of working experience after the MSc, including at least three years of industrial experience (which can be spread out; industry-related research institutes also count). The position announced right now has the following topic:

- 61.1.296 Semantic Data Processing and Cognitive Computing - Artificial Cognitive Perception and Speech https://stellen.fhws.de/jobposting/56c82f2a26d49b9569b5da563ab29681edaa87c70?ref=homepage

Please spread the word - we would be happy to see many applications.

Frank

------------------------------------------------------- Prof. Dr. rer. nat. habil. Frank-Michael Schleif School of Computer Science University of Applied Sciences Würzburg-Schweinfurt Sanderheinrichsleitenweg 20, Raum I-3.35, 97074 Würzburg Tel.: +49 (0)931 351 18127 Honorary Research Fellow, The University of Birmingham, Edgbaston, Birmingham B15 2TT, United Kingdom - email: frank-michael.schleif at fhws.de http://promos-science.blogspot.de/ https://www.techfak.uni-bielefeld.de/~fschleif/ -------------------------------------------------------

From ASIM.ROY at asu.edu Sat Jan 1 19:43:17 2022 From: ASIM.ROY at asu.edu (Asim Roy) Date: Sun, 2 Jan 2022 00:43:17 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: References: <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> Message-ID:

And, by the way, Paul Werbos was also there at the same debate. And so was Teuvo Kohonen. Asim

-----Original Message----- From: Asim Roy Sent: Saturday, January 1, 2022 3:19 PM To: Schmidhuber Juergen ; connectionists at cs.cmu.edu Subject: RE: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.

In fairness to Geoffrey Hinton, he did acknowledge the work of Amari in a debate about connectionism at ICNN'97 (the International Conference on Neural Networks) in Houston. He literally said "Amari invented back propagation", and Amari was sitting next to him. I still have a recording of that debate. Asim Roy Professor, Information Systems Arizona State University https://isearch.asu.edu/profile/9973 https://lifeboat.com/ex/bios.asim.roy

-----Original Message----- From: Connectionists On Behalf Of Schmidhuber Juergen Sent: Friday, December 31, 2021 11:00 AM To: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.

Sure, Steve, perceptron/Adaline/other similar methods of the 1950s/60s are not quite the same, but the obvious origin and ancestor of all those single-layer "shallow learning" architectures/methods is indeed linear regression; today's simplest NNs minimizing mean squared error are exactly what they had 2 centuries ago. And the first working deep learning methods of the 1960s did NOT really require "modern" backprop (published in 1970 by Linnainmaa [BP1-5]).
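The claimed equivalence between a one-layer linear network minimizing mean squared error and Gauss/Legendre-style least squares can be checked in a few lines. This is an illustrative sketch, not from the report; the data and hyperparameters are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # synthetic inputs
y = X @ np.array([1.5, -2.0, 0.5]) + 0.01 * rng.normal(size=100)

# Least squares, closed form (Gauss/Legendre, ca. 1800): the "beta_i"
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# The same model as a one-layer linear "NN": weights w_i trained by
# gradient descent on mean squared error
w = np.zeros(3)
for _ in range(2000):
    grad = 2 * X.T @ (X @ w - y) / len(y)          # d(MSE)/dw
    w -= 0.1 * grad

# Both procedures recover the same weights, only the names differ
print(np.allclose(beta, w, atol=1e-4))
```

Deep learning differs precisely in stacking further adaptive layers on top of this, which the 1960s methods trained without modern backprop.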
For example, Ivakhnenko & Lapa (1965) [DEEP1-2] incrementally trained and pruned their deep networks layer by layer to learn internal representations, using regression and a separate validation set. Amari (1967-68) [GD1] used stochastic gradient descent [STO51-52] to learn internal representations WITHOUT "modern" backprop in his multilayer perceptrons. Jürgen > On 31 Dec 2021, at 18:24, Stephen José Hanson wrote: > > Well, the perceptron is closer to logistic regression... but the Heaviside function of course is <0,1>, so technically not related to linear regression, which uses covariance to estimate betas... > > does that matter? Yes, if you want to be hyper-correct--as this appears to be-- Berkson (1944) coined the logit.. as log odds.. for probabilistic classification.. this was formally developed by Cox in the early 60s, so it is unlikely even in this case to be a precursor to the perceptron. > > My point was that DL requires both a learning algorithm (BP) and an > architecture.. which seems to me much more responsible for the success of DL. > > S > > > > On 12/31/21 4:03 AM, Schmidhuber Juergen wrote: >> Steve, this is not about machine learning in general, just about deep >> learning vs shallow learning. However, I added the Pandemonium - >> thanks for that! You ask: how is a linear regressor of 1800 >> (Gauss/Legendre) related to a linear neural network? It's formally >> equivalent, of course! (The only difference is that the weights are >> often called beta_i rather than w_i.) Shallow learning: one adaptive >> layer. Deep learning: many adaptive layers. Cheers, Jürgen >> >> >> >> >>> On 31 Dec 2021, at 00:28, Stephen José Hanson >>> >>> wrote: >>> >>> Despite the comprehensive feel of this, it still appears to me to be too focused on Back-propagation per se..
(except for that pesky Gauss/Legendre ref--which still baffles me, at least how this is related to a "neural network"), and at the same time it appears to be missing other, more general, epoch-conceptually relevant cases, say: >>> >>> Oliver Selfridge and his Pandemonium model.. which was a hierarchical feature analysis system.. which certainly was in the air during the neural network learning heyday... in fact, Minsky cites Selfridge as one of his mentors. >>> >>> Arthur Samuel: checker-playing system.. which learned an evaluation function from a hierarchical search. >>> >>> Rosenblatt's advisor was Egon Brunswik.. a gestalt perceptual psychologist who introduced the concept that the world is stochastic and the organism has to adapt to this variance somehow.. he called it "probabilistic functionalism", which brought attention to learning, perception, and decision theory, certainly all piece parts of what we call neural networks. >>> >>> There are many other such examples that influenced or provided context for the yeasty mix that was the 1940s and 1950s, where neural networks first appeared, partly due to Pitts and McCulloch, which entangled the human brain with computation and early computers themselves. >>> >>> I just don't see this as didactic, in the sense of a conceptual view of the multidimensional history of the field, as opposed to a 1-dimensional exegesis of mathematical threads through various statistical algorithms. >>> >>> Steve >>> >>> On 12/30/21 1:03 PM, Schmidhuber Juergen wrote: >>> >>>> Dear connectionists, >>>> >>>> in the wake of massive open online peer review, public comments on the connectionists mailing list [CONN21] and many additional private comments (some by well-known deep learning pioneers) helped to update and improve upon version 1 of the report. The essential statements of the text remain unchanged as their accuracy remains unchallenged.
I'd like to thank everyone from the bottom of my heart for their feedback up until this point and hope everyone will be satisfied with the changes. Here is the revised version 2 with over 300 references: >>>> >>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>> >>>> In particular, Sec. II has become a brief history of deep learning up to the 1970s: >>>> >>>> Some of the most powerful NN architectures (i.e., recurrent NNs) were discussed in 1943 by McCulloch and Pitts [MC43] and formally analyzed in 1956 by Kleene [K56] - the closely related prior work in physics by Lenz, Ising, Kramers, and Wannier dates back to the 1920s [L20][I25][K41][W45]. In 1948, Turing wrote up ideas related to artificial evolution [TUR1] and learning NNs. He failed to formally publish his ideas though, which explains the obscurity of his thoughts here. Minsky's simple neural SNARC computer dates back to 1951. Rosenblatt's perceptron with a single adaptive layer learned in 1958 [R58] (Joseph [R61] mentions an earlier perceptron-like device by Farley & Clark); Widrow & Hoff's similar Adaline learned in 1962 [WID62]. Such single-layer "shallow learning" actually started around 1800 when Gauss & Legendre introduced linear regression and the method of least squares [DL1-2] - a famous early example of pattern recognition and generalization from training data through a parameterized predictor is Gauss' rediscovery of the asteroid Ceres based on previous astronomical observations.
Deeper multilayer perceptrons (MLPs) were discussed by Steinbuch [ST61-95] (1961), Joseph [R61] (1961), and Rosenblatt [R62] (1962), who wrote about "back-propagating errors" in an MLP with a hidden layer [R62], but did not yet have a general deep learning algorithm for deep MLPs (what's now called backpropagation is quite different and was first published by Linnainmaa in 1970 [BP1-BP5][BPA-C]). Successful learning in deep architectures started in 1965 when Ivakhnenko & Lapa published the first general, working learning algorithms for deep MLPs with arbitrarily many hidden layers (already containing the now popular multiplicative gates) [DEEP1-2][DL1-2]. A paper of 1971 [DEEP2] already described a deep learning net with 8 layers, trained by their highly cited method which was still popular in the new millennium [DL2], especially in Eastern Europe, where much of Machine Learning was born [MIR](Sec. 1)[R8]. LBH failed to cite this, just like they failed to cite Amari [GD1], who in 1967 proposed stochastic gradient descent [STO51-52] (SGD) for MLPs and whose implementation [GD2,GD2a] (with Saito) learned internal representations at a time when compute was billions of times more expensive than today (see also Tsypkin's work [GDa-b]). (In 1972, Amari also published what was later sometimes called the Hopfield network or Amari-Hopfield Network [AMH1-3].) Fukushima's now widely used deep convolutional NN architecture was first introduced in the 1970s [CNN1]. >>>> Jürgen >>>> >>>> ****************************** >>>> >>>> On 27 Oct 2021, at 10:52, Schmidhuber Juergen wrote: >>>> >>>> Hi, fellow artificial neural network enthusiasts! >>>> >>>> The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field.
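As an aside on the SGD-without-backprop point above: gradient descent on an MLP's mean squared error does not presuppose reverse-mode backpropagation, since the gradient can also be obtained by generic central finite differences. The toy sketch below is purely illustrative (this is not Amari's actual 1967 construction; the network, data, and step sizes are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(params, X):
    """Tiny two-layer perceptron; the hidden layer is the 'internal representation'."""
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(params, X, y):
    return float(np.mean((mlp(params, X) - y) ** 2))

# Toy regression task: learn y = sin(x) on [-2, 2]
X = rng.uniform(-2, 2, size=(64, 1))
y = np.sin(X)

params = [rng.normal(scale=0.5, size=(1, 8)), np.zeros(8),
          rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)]
loss_before = mse(params, X, y)

eps, lr = 1e-5, 0.05
for step in range(2000):
    idx = rng.integers(0, len(X), size=16)      # stochastic minibatch
    Xb, yb = X[idx], y[idx]
    grads = []
    for p in params:                            # central finite differences,
        g = np.zeros_like(p)                    # one coordinate at a time
        for i in np.ndindex(p.shape):
            old = p[i]
            p[i] = old + eps
            up = mse(params, Xb, yb)
            p[i] = old - eps
            dn = mse(params, Xb, yb)
            p[i] = old
            g[i] = (up - dn) / (2 * eps)
        grads.append(g)
    for p, g in zip(params, grads):
        p -= lr * g

loss_after = mse(params, X, y)
print(loss_before, "->", loss_after)
```

The cost is a pair of forward passes per weight per step, i.e., the O(#weights) overhead that reverse-mode backprop later removed; the point of the sketch is only that descent itself, and hence learned internal representations, are independent of how the gradient is computed.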
>>>> >>>> Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch : >>>> >>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>> >>>> The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. >>>> >>>> I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. >>>> >>>> Thank you all in advance for your help! >>>> >>>> Jürgen Schmidhuber

From ASIM.ROY at asu.edu Sat Jan 1 17:18:47 2022 From: ASIM.ROY at asu.edu (Asim Roy) Date: Sat, 1 Jan 2022 22:18:47 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> References: <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> Message-ID:

In fairness to Geoffrey Hinton, he did acknowledge the work of Amari in a debate about connectionism at ICNN'97 (the International Conference on Neural Networks) in Houston. He literally said "Amari invented back propagation", and Amari was sitting next to him. I still have a recording of that debate. Asim Roy Professor, Information Systems Arizona State University https://isearch.asu.edu/profile/9973 https://lifeboat.com/ex/bios.asim.roy

From jose at rubic.rutgers.edu Sat Jan 1 18:31:31 2022 From: jose at rubic.rutgers.edu (Stephen José Hanson) Date: Sat, 1 Jan 2022 18:31:31 -0500 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> References: <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> Message-ID: <824b508b-3fd2-3418-9f83-348e95987dd9@rubic.rutgers.edu>

Juergen: Happy New Year!
"are not quite the same".. I understand that its expedient sometimes to use linear regression to approximate the Perceptron.(i've had other connectionist friends tell me the same thing) which has its own incremental update rule..that is doing <0,1> classification.??? So I guess if you don't like the analogy to logistic regression.. maybe Fisher's LDA?? This whole thing still doesn't scan for me. So, again the point here is context.?? Do you really believe that Frank Rosenblatt didn't reference Gauss/Legendre/Laplace? because it slipped his mind???? He certainly understood modern statistics (of the 1940s and 1950s) Certainly you'd agree that FR could have referenced linear regression as a precursor, or "pretty similar" to what he was working on, it seems disingenuous to imply he was plagiarizing Gauss et al.--right?? Why would he? Finally then, in any historical reconstruction, I can think of,? it just doesn't make sense.??? Sorry. Steve On 12/31/21 12:59 PM, Schmidhuber Juergen wrote: > Sure, Steve, perceptron/Adaline/other similar methods of the 1950s/60s are not quite the same, but the obvious origin and ancestor of all those single-layer ?shallow learning? architectures/methods is indeed linear regression; today?s simplest NNs minimizing mean squared error are exactly what they had 2 centuries ago. And the first working deep learning methods of the 1960s did NOT really require ?modern? backprop (published in 1970 by Linnainmaa [BP1-5]). For example, Ivakhnenko & Lapa (1965) [DEEP1-2] incrementally trained and pruned their deep networks layer by layer to learn internal representations, using regression and a separate validation set. Amari (1967-68)[GD1] used stochastic gradient descent [STO51-52] to learn internal representations WITHOUT ?modern" backprop in his multilayer perceptrons. J?rgen > > >> On 31 Dec 2021, at 18:24, Stephen Jos? Hanson wrote: >> >> Well the perceptron is closer to logistic regression... 
but the heaviside function of course is <0,1> so technically not related to linear regression which is using covariance to estimate betas... >> >> does that matter? Yes, if you want to be hyper correct--as this appears to be-- Berkson (1944) coined the logit.. as log odds.. for probabilistic classification.. this was formally developed by Cox in the early 60s, so unlikely even in this case to be a precursor to perceptron. >> >> My point was that DL requires both Learning algorithm (BP) and an architecture.. which seems to me much more responsible for the >> the success of Dl. >> >> S >> >> >> >> On 12/31/21 4:03 AM, Schmidhuber Juergen wrote: >>> Steve, this is not about machine learning in general, just about deep learning vs shallow learning. However, I added the Pandemonium - thanks for that! You ask: how is a linear regressor of 1800 (Gauss/Legendre) related to a linear neural network? It's formally equivalent, of course! (The only difference is that the weights are often called beta_i rather than w_i.) Shallow learning: one adaptive layer. Deep learning: many adaptive layers. Cheers, J?rgen >>> >>> >>> >>> >>>> On 31 Dec 2021, at 00:28, Stephen Jos? Hanson >>>> wrote: >>>> >>>> Despite the comprehensive feel of this it still appears to me to be too focused on Back-propagation per se.. (except for that pesky Gauss/Legendre ref--which still baffles me at least how this is related to a "neural network"), and at the same time it appears to be missing other more general epoch-conceptually relevant cases, say: >>>> >>>> Oliver Selfridge and his Pandemonium model.. which was a hierarchical feature analysis system.. which certainly was in the air during the Neural network learning heyday...in fact, Minsky cites Selfridge as one of his mentors. >>>> >>>> Arthur Samuels: Checker playing system.. which learned a evaluation function from a hierarchical search. >>>> >>>> Rosenblatt's advisor was Egon Brunswick.. 
who was a gestalt perceptual psychologist who introduced the concept that the world was stochastic and the the organism had to adapt to this variance somehow.. he called it "probabilistic functionalism" which brought attention to learning, perception and decision theory, certainly all piece parts of what we call neural networks. >>>> >>>> There are many other such examples that influenced or provided context for the yeasty mix that was 1940s and 1950s where Neural Networks first appeared partly due to PItts and McCulloch which entangled the human brain with computation and early computers themselves. >>>> >>>> I just don't see this as didactic, in the sense of a conceptual view of the multidimensional history of the field, as opposed to a 1-dimensional exegesis of mathematical threads through various statistical algorithms. >>>> >>>> Steve >>>> >>>> On 12/30/21 1:03 PM, Schmidhuber Juergen wrote: >>>> >>>>> Dear connectionists, >>>>> >>>>> in the wake of massive open online peer review, public comments on the connectionists mailing list [CONN21] and many additional private comments (some by well-known deep learning pioneers) helped to update and improve upon version 1 of the report. The essential statements of the text remain unchanged as their accuracy remains unchallenged. I'd like to thank everyone from the bottom of my heart for their feedback up until this point and hope everyone will be satisfied with the changes. Here is the revised version 2 with over 300 references: >>>>> >>>>> >>>>> >>>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>>> >>>>> >>>>> >>>>> In particular, Sec. 
II has become a brief history of deep learning up to the 1970s: >>>>> >>>>> Some of the most powerful NN architectures (i.e., recurrent NNs) were discussed in 1943 by McCulloch and Pitts [MC43] and formally analyzed in 1956 by Kleene [K56] - the closely related prior work in physics by Lenz, Ising, Kramers, and Wannier dates back to the 1920s [L20][I25][K41][W45]. In 1948, Turing wrote up ideas related to artificial evolution [TUR1] and learning NNs. He failed to formally publish his ideas though, which explains the obscurity of his thoughts here. Minsky's simple neural SNARC computer dates back to 1951. Rosenblatt's perceptron with a single adaptive layer learned in 1958 [R58] (Joseph [R61] mentions an earlier perceptron-like device by Farley & Clark); Widrow & Hoff's similar Adaline learned in 1962 [WID62]. Such single-layer "shallow learning" actually started around 1800 when Gauss & Legendre introduced linear regression and the method of least squares [DL1-2] - a famous early example of pattern recognition and generalization from training d! > at! >>> a through a parameterized predictor is Gauss' rediscovery of the asteroid Ceres based on previous astronomical observations. Deeper multilayer perceptrons (MLPs) were discussed by Steinbuch [ST61-95] (1961), Joseph [R61] (1961), and Rosenblatt [R62] (1962), who wrote about "back-propagating errors" in an MLP with a hidden layer [R62], but did not yet have a general deep learning algorithm for deep MLPs (what's now called backpropagation is quite different and was first published by Linnainmaa in 1970 [BP1-BP5][BPA-C]). Successful learning in deep architectures started in 1965 when Ivakhnenko & Lapa published the first general, working learning algorithms for deep MLPs with arbitrarily many hidden layers (already containing the now popular multiplicative gates) [DEEP1-2][DL1-2]. 
A paper of 1971 [DEEP2] already described a deep learning net with 8 layers, trained by their highly cited method which was still popular in the new millennium [DL2], especially in Eastern Europe, where much of Machine Learning was born [MIR](Sec. 1)[R8]. LBH failed to cite this, just like they failed to cite Amari [GD1], who in 1967 proposed stochastic gradient descent [STO51-52] (SGD) for MLPs and whose implementation [GD2,GD2a] (with Saito) learned internal representations at a time when compute was billions of times more expensive than today (see also Tsypkin's work [GDa-b]). (In 1972, Amari also published what was later sometimes called the Hopfield network or Amari-Hopfield Network [AMH1-3].) Fukushima's now widely used deep convolutional NN architecture was first introduced in the 1970s [CNN1]. >>> >>>>> Jürgen >>>>> >>>>> >>>>> >>>>> ****************************** >>>>> >>>>> On 27 Oct 2021, at 10:52, Schmidhuber Juergen >>>>> >>>>> >>>>> >>>>> wrote: >>>>> >>>>> Hi, fellow artificial neural network enthusiasts! >>>>> >>>>> The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field. >>>>> >>>>> Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it.
Please send additional relevant references and suggestions for improvements for the following draft directly to me at >>>>> >>>>> juergen at idsia.ch >>>>> >>>>> : >>>>> >>>>> >>>>> >>>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>>> >>>>> >>>>> >>>>> The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. >>>>> >>>>> I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. >>>>> >>>>> Thank you all in advance for your help! >>>>> >>>>> Jürgen Schmidhuber >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>> -- >>>> >>>> >> -- >> > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.png Type: image/png Size: 19957 bytes Desc: not available URL: From hussain.doctor at gmail.com Sat Jan 1 18:42:50 2022 From: hussain.doctor at gmail.com (Amir Hussain) Date: Sat, 1 Jan 2022 23:42:50 +0000 Subject: Connectionists: UK EPSRC funded COG-MHEAR Research Fellowship available (closing date: 14 Jan 2022) Message-ID: Dear connectionists, **Please help forward to potentially interested candidates** Happy New Year! The School of Computing at Edinburgh Napier University (ENU) has an immediate opening for a full-time research fellow. The post is funded as part of the UK Engineering and Physical Sciences Research Council (EPSRC) funded Programme Grant: COG-MHEAR (https://cogmhear.org).
COG-MHEAR is a world-leading cross-disciplinary research programme funded under the EPSRC Transformative Healthcare Technologies 2050 Call. It comprises academics from seven UK Universities (led by ENU and including Edinburgh, Heriot-Watt, Glasgow, Manchester, Wolverhampton, and Nottingham) and a strong User-Group of industrial and clinical collaborators and end-user engagement organisations (including Sonova, Nokia-Bell Lab, Deaf Scotland and RNID UK). The ambitious COG-MHEAR programme aims to develop the world's first multi-modal hearing-aid demonstrators by radically exploiting and integrating the transformative potential of privacy-assuring and explainable AI, 5G, IoT, and cybersecurity, coupled with flexible electronics. Full details of the research fellow position (which is being offered at salary grade 5: £33,309 - £39,739 per annum/pro-rata) can be found here: https://www.timeshighereducation.com/unijobs/listing/275428/cog-mhear-research-fellow/ (closing date: 14 Jan 2022) Many thanks Amir -- Professor Amir Hussain Programme Director: EPSRC COG-MHEAR (https://cogmhear.org) Editor-in-Chief: Cognitive Computation (Springer Nature - http://springer.com/12559) Director: Centre for AI & Data Science, School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, Scotland, UK https://www.napier.ac.uk/people/amir-hussain -------------- next part -------------- An HTML attachment was scrubbed... URL: From juergen at idsia.ch Sun Jan 2 08:43:47 2022 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Sun, 2 Jan 2022 13:43:47 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: References: <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> Message-ID: Asim wrote: "In fairness to Jeffrey Hinton, he did acknowledge the work of Amari in a debate about connectionism at the ICNN'97 .... He literally said 'Amari invented back propagation'..." when he sat next to Amari and Werbos. Later, however, he failed to cite Amari's stochastic gradient descent (SGD) for multilayer NNs (1967-68) [GD1-2a] in his 2015 survey [DL3], his 2021 ACM lecture [DL3a], and other surveys. Furthermore, SGD [STO51-52] (Robbins, Monro, Kiefer, Wolfowitz, 1951-52) is not even backprop. Backprop is just a particularly efficient way of computing gradients in differentiable networks, known as the reverse mode of automatic differentiation, due to Linnainmaa (1970) [BP1] (see also Kelley's precursor of 1960 [BPa]). Hinton did not cite these papers either, and in 2019 embarrassingly did not hesitate to accept an award for having "created ... the backpropagation algorithm" [HIN]. All references and more on this can be found in the report, especially in Sec. XII. The deontology of science requires: If one "re-invents" something that was already known, and only becomes aware of it later, one must at least clarify it later [DLC], and correctly give credit in all follow-up papers and presentations. Also, ACM's Code of Ethics and Professional Conduct [ACM18] states: "Computing professionals should therefore credit the creators of ideas, inventions, work, and artifacts, and respect copyrights, patents, trade secrets, license agreements, and other methods of protecting authors' works."
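The distinction drawn here between gradient descent and backprop proper can be made concrete. The toy Python sketch below (illustrative only, not taken from any of the cited works; all values are hypothetical) computes the gradients of a small two-layer net in one backward sweep, i.e. by the reverse mode of automatic differentiation, and checks one entry against a finite difference:

```python
import numpy as np

# Toy sketch of reverse-mode automatic differentiation (illustrative only).
# One backward sweep yields ALL partial derivatives of the scalar loss,
# at a cost comparable to a single forward evaluation.
rng = np.random.default_rng(0)
x = rng.normal(size=3)          # input vector
W1 = rng.normal(size=(4, 3))    # first-layer weights
W2 = rng.normal(size=(1, 4))    # second-layer weights
y = np.array([1.0])             # target

# Forward pass: two-layer net with tanh hidden units.
h_pre = W1 @ x
h = np.tanh(h_pre)
out = W2 @ h
loss = 0.5 * np.sum((out - y) ** 2)

# Backward sweep (reverse mode): propagate dloss/d(node) from output to input.
d_out = out - y                              # dloss/dout
d_W2 = np.outer(d_out, h)                    # dloss/dW2
d_h = W2.T @ d_out                           # dloss/dh
d_h_pre = d_h * (1.0 - np.tanh(h_pre) ** 2)  # back through tanh
d_W1 = np.outer(d_h_pre, x)                  # dloss/dW1

# Sanity check: compare one entry with a finite-difference estimate.
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
loss_p = 0.5 * np.sum((W2 @ np.tanh(W1p @ x) - y) ** 2)
num = (loss_p - loss) / eps
print(d_W1[0, 0], num)
```

A plain (stochastic) gradient descent step would then be W1 -= lr * d_W1 and so on; the descent rule and the efficient gradient computation are separate ingredients, which is the point being made above.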
LBH didn't. Steve still doesn't believe that linear regression of 200 years ago is equivalent to linear NNs. In a mature field such as math we would not have such a discussion. The math is clear. And even today, many students are taught NNs like this: let's start with a linear single-layer NN (activation = sum of weighted inputs). Now minimize mean squared error on the training set. That's good old linear regression (method of least squares). Now let's introduce multiple layers and nonlinear but differentiable activation functions, and derive backprop for deeper nets in 1960-70 style (still used today, half a century later). Sure, an important new variation of the 1950s (emphasized by Steve) was to transform linear NNs into binary classifiers with threshold functions. Nevertheless, the first adaptive NNs (still widely used today) are 1.5 centuries older except for the name. Happy New Year! Jürgen > On 2 Jan 2022, at 03:43, Asim Roy wrote: > > And, by the way, Paul Werbos was also there at the same debate. And so was Teuvo Kohonen. > > Asim > > -----Original Message----- > From: Asim Roy > Sent: Saturday, January 1, 2022 3:19 PM > To: Schmidhuber Juergen ; connectionists at cs.cmu.edu > Subject: RE: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. > > In fairness to Jeffrey Hinton, he did acknowledge the work of Amari in a debate about connectionism at the ICNN'97 (International Conference on Neural Networks) in Houston. He literally said "Amari invented back propagation" and Amari was sitting next to him. I still have a recording of that debate. > > Asim Roy > Professor, Information Systems > Arizona State University > https://isearch.asu.edu/profile/9973 > https://lifeboat.com/ex/bios.asim.roy On 2 Jan 2022, at 02:31, Stephen José Hanson wrote: Juergen: Happy New Year! "are not quite the same"..
I understand that it's expedient sometimes to use linear regression to approximate the Perceptron (I've had other connectionist friends tell me the same thing) which has its own incremental update rule..that is doing <0,1> classification. So I guess if you don't like the analogy to logistic regression.. maybe Fisher's LDA? This whole thing still doesn't scan for me. So, again the point here is context. Do you really believe that Frank Rosenblatt didn't reference Gauss/Legendre/Laplace because it slipped his mind?? He certainly understood modern statistics (of the 1940s and 1950s). Certainly you'd agree that FR could have referenced linear regression as a precursor, or "pretty similar" to what he was working on; it seems disingenuous to imply he was plagiarizing Gauss et al.--right? Why would he? Finally then, in any historical reconstruction I can think of, it just doesn't make sense. Sorry. Steve > -----Original Message----- > From: Connectionists On Behalf Of Schmidhuber Juergen > Sent: Friday, December 31, 2021 11:00 AM > To: connectionists at cs.cmu.edu > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. > > Sure, Steve, perceptron/Adaline/other similar methods of the 1950s/60s are not quite the same, but the obvious origin and ancestor of all those single-layer "shallow learning" architectures/methods is indeed linear regression; today's simplest NNs minimizing mean squared error are exactly what they had 2 centuries ago. And the first working deep learning methods of the 1960s did NOT really require "modern" backprop (published in 1970 by Linnainmaa [BP1-5]). For example, Ivakhnenko & Lapa (1965) [DEEP1-2] incrementally trained and pruned their deep networks layer by layer to learn internal representations, using regression and a separate validation set. Amari (1967-68)[GD1] used stochastic gradient descent [STO51-52] to learn internal representations WITHOUT "modern" backprop in his multilayer perceptrons.
Jürgen > > >> On 31 Dec 2021, at 18:24, Stephen José Hanson wrote: >> >> Well the perceptron is closer to logistic regression... but the heaviside function of course is <0,1> so technically not related to linear regression which is using covariance to estimate betas... >> >> does that matter? Yes, if you want to be hyper correct--as this appears to be-- Berkson (1944) coined the logit.. as log odds.. for probabilistic classification.. this was formally developed by Cox in the early 60s, so unlikely even in this case to be a precursor to perceptron. >> >> My point was that DL requires both Learning algorithm (BP) and an >> architecture.. which seems to me much more responsible for the success of DL. >> >> S >> >> >> >> On 12/31/21 4:03 AM, Schmidhuber Juergen wrote: >>> Steve, this is not about machine learning in general, just about deep >>> learning vs shallow learning. However, I added the Pandemonium - >>> thanks for that! You ask: how is a linear regressor of 1800 >>> (Gauss/Legendre) related to a linear neural network? It's formally >>> equivalent, of course! (The only difference is that the weights are >>> often called beta_i rather than w_i.) Shallow learning: one adaptive >>> layer. Deep learning: many adaptive layers. Cheers, Jürgen >>> >>> >>> >>> >>>> On 31 Dec 2021, at 00:28, Stephen José Hanson >>>> >>>> wrote: >>>> >>>> Despite the comprehensive feel of this it still appears to me to be too focused on Back-propagation per se.. (except for that pesky Gauss/Legendre ref--which still baffles me at least how this is related to a "neural network"), and at the same time it appears to be missing other more general epoch-conceptually relevant cases, say: >>>> >>>> Oliver Selfridge and his Pandemonium model.. which was a hierarchical feature analysis system.. which certainly was in the air during the Neural network learning heyday...in fact, Minsky cites Selfridge as one of his mentors. >>>> >>>> Arthur Samuels: Checker playing system..
which learned an evaluation function from a hierarchical search. >>>> >>>> Rosenblatt's advisor was Egon Brunswick.. who was a gestalt perceptual psychologist who introduced the concept that the world was stochastic and the organism had to adapt to this variance somehow.. he called it "probabilistic functionalism" which brought attention to learning, perception and decision theory, certainly all piece parts of what we call neural networks. >>>> >>>> There are many other such examples that influenced or provided context for the yeasty mix that was the 1940s and 1950s where Neural Networks first appeared partly due to Pitts and McCulloch which entangled the human brain with computation and early computers themselves. >>>> >>>> I just don't see this as didactic, in the sense of a conceptual view of the multidimensional history of the field, as opposed to a 1-dimensional exegesis of mathematical threads through various statistical algorithms. >>>> >>>> Steve >>>> >> -- >> > > From jose at rubic.rutgers.edu Sun Jan 2 09:25:12 2022 From: jose at rubic.rutgers.edu (=?UTF-8?Q?Stephen_Jos=c3=a9_Hanson?=) Date: Sun, 2 Jan 2022 09:25:12 -0500 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> Message-ID: Geoffrey. we should at least get his name right, as his citation proclivities are bandied about.
On 1/1/22 5:18 PM, Asim Roy wrote: > In fairness to Jeffrey Hinton, he did acknowledge the work of Amari in a debate about connectionism at the ICNN'97 (International Conference on Neural Networks) in Houston. He literally said "Amari invented back propagation" and Amari was sitting next to him. I still have a recording of that debate. > > Asim Roy > Professor, Information Systems > Arizona State University > https://isearch.asu.edu/profile/9973 > https://lifeboat.com/ex/bios.asim.roy -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.png Type: image/png Size: 19957 bytes Desc: not available URL: From jose at rubic.rutgers.edu Sun Jan 2 09:55:42 2022 From: jose at rubic.rutgers.edu (=?UTF-8?Q?Stephen_Jos=c3=a9_Hanson?=) Date: Sun, 2 Jan 2022 09:55:42 -0500 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> Message-ID: Juergen: "And even today, many students are taught NNs like this: let's start with a linear single-layer NN (activation = sum of weighted inputs). Now minimize mean squared error on the training set. That's good old linear regression (method of least squares)." Indeed there are many things students are taught that are wrong or misleading, for simplification or from ignorance.. That doesn't justify more of them. So this is multiple linear regression you are talking about..
but, again, a different model from a Neural Network. Not a matter of math.. not talking eigenvectors here, we are still talking about a model of a biological neuron. Maybe not a great model, but a step in the direction of brain-like modeling. Multiple regression is not such a model. Minimizing sums of squares and taking partials wrt parameters will result in formulae for beta weights and intercepts. A useful model for interpreting the linear effects of non-collinear variables on a response, widely useful in many scientific fields. But not a Neural Network--not a model of neurons and synapses and dendrites. Nonetheless, a useful pragmatic model developed for matrices of data with multiple variables and observations. There was simply no reason that Frank Rosenblatt should have referenced this math, as it had nothing whatsoever to do with the Perceptron, since no partials of sums of squares could be computed. It's the math, and it should be clear now. Steve On 1/2/22 8:43 AM, Schmidhuber Juergen wrote: > And even today, many students are taught NNs like this: let's start with a linear single-layer NN (activation = sum of weighted inputs). Now minimize mean squared error on the training set. That's good old linear regression (method of least squares -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.png Type: image/png Size: 19957 bytes Desc: not available URL: From terry at snl.salk.edu Sun Jan 2 15:29:36 2022 From: terry at snl.salk.edu (Terry Sejnowski) Date: Sun, 2 Jan 2022 12:29:36 -0800 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: References: <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> Message-ID: We would be remiss not to acknowledge that backprop would not be possible without the calculus, so Isaac Newton should also have been given credit, at least as much credit as Gauss. All these threads will be sorted out by historians one hundred years from now. Our precious time is better spent moving the field forward. There is much more to discover. A new generation with better computational and mathematical tools than we had back in the last century has joined us, so let us be good role models and mentors to them. Terry ----- On 1/2/2022 5:43 AM, Schmidhuber Juergen wrote: > Asim wrote: "In fairness to Geoffrey Hinton, he did acknowledge the work of Amari in a debate about connectionism at the ICNN'97 .... He literally said 'Amari invented back propagation'..." when he sat next to Amari and Werbos. Later, however, he failed to cite Amari's stochastic gradient descent (SGD) for multilayer NNs (1967-68) [GD1-2a] in his 2015 survey [DL3], his 2021 ACM lecture [DL3a], and other surveys. Furthermore, SGD [STO51-52] (Robbins, Monro, Kiefer, Wolfowitz, 1951-52) is not even backprop. Backprop is just a particularly efficient way of computing gradients in differentiable networks, known as the reverse mode of automatic differentiation, due to Linnainmaa (1970) [BP1] (see also Kelley's precursor of 1960 [BPa]). Hinton did not cite these papers either, and in 2019 embarrassingly did not hesitate to accept an award for having "created ... the backpropagation algorithm" [HIN]. All references and more on this can be found in the report, especially in Sec. XII.
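The distinction drawn above is worth making concrete: backprop is the reverse mode of automatic differentiation, i.e., one particularly efficient ordering of chain-rule applications over a network of differentiable nodes. Below is a minimal didactic sketch in the spirit of modern autodiff tutorials, not Linnainmaa's 1970 formulation; the `Var` class and its methods are illustrative inventions:

```python
import math

class Var:
    """A scalar node in a computation graph; grad is filled in by backward()."""
    def __init__(self, value, parents=()):
        self.value = value        # forward value
        self.parents = parents    # list of (parent_node, local_derivative)
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def tanh(self):
        t = math.tanh(self.value)
        return Var(t, [(self, 1.0 - t * t)])

    def backward(self):
        # One reverse sweep: visit nodes in reverse topological order and
        # give each parent its chain-rule share exactly once. This is what
        # makes reverse mode cheap compared to naive chain-rule expansion.
        order, seen = [], set()
        def topo(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p, _ in v.parents:
                    topo(p)
                order.append(v)
        topo(self)
        self.grad = 1.0
        for v in reversed(order):
            for p, d in v.parents:
                p.grad += d * v.grad

# Gradient of tanh(w*x + b) with respect to all inputs, at w=2, x=0.5, b=0.1:
w, x, b = Var(2.0), Var(0.5), Var(0.1)
y = (w * x + b).tanh()
y.backward()
```

One reverse sweep yields the partials with respect to every input at once, which is exactly why this mode suits networks with many weights and one scalar loss.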
> > The deontology of science requires: If one "re-invents" something that was already known, and only becomes aware of it later, one must at least clarify it later [DLC], and correctly give credit in all follow-up papers and presentations. Also, ACM's Code of Ethics and Professional Conduct [ACM18] states: "Computing professionals should therefore credit the creators of ideas, inventions, work, and artifacts, and respect copyrights, patents, trade secrets, license agreements, and other methods of protecting authors' works." LBH didn't. > > Steve still doesn't believe that linear regression of 200 years ago is equivalent to linear NNs. In a mature field such as math we would not have such a discussion. The math is clear. And even today, many students are taught NNs like this: let's start with a linear single-layer NN (activation = sum of weighted inputs). Now minimize mean squared error on the training set. That's good old linear regression (method of least squares). Now let's introduce multiple layers and nonlinear but differentiable activation functions, and derive backprop for deeper nets in 1960-70 style (still used today, half a century later). > > Sure, an important new variation of the 1950s (emphasized by Steve) was to transform linear NNs into binary classifiers with threshold functions. Nevertheless, the first adaptive NNs (still widely used today) are 1.5 centuries older except for the name. > > Happy New Year! > > Jürgen > > >> On 2 Jan 2022, at 03:43, Asim Roy wrote: >> >> And, by the way, Paul Werbos was also there at the same debate. And so was Teuvo Kohonen. >> >> Asim >> >> -----Original Message----- >> From: Asim Roy >> Sent: Saturday, January 1, 2022 3:19 PM >> To: Schmidhuber Juergen ; connectionists at cs.cmu.edu >> Subject: RE: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
>> >> In fairness to Geoffrey Hinton, he did acknowledge the work of Amari in a debate about connectionism at the ICNN'97 (International Conference on Neural Networks) in Houston. He literally said "Amari invented back propagation" and Amari was sitting next to him. I still have a recording of that debate. >> >> Asim Roy >> Professor, Information Systems >> Arizona State University >> https://isearch.asu.edu/profile/9973 >> https://lifeboat.com/ex/bios.asim.roy > > On 2 Jan 2022, at 02:31, Stephen José Hanson wrote: > > Juergen: Happy New Year! > > "are not quite the same".. > > I understand that it's expedient sometimes to use linear regression to approximate the Perceptron (I've had other connectionist friends tell me the same thing), which has its own incremental update rule.. that is doing <0,1> classification. So I guess if you don't like the analogy to logistic regression.. maybe Fisher's LDA? This whole thing still doesn't scan for me. > > So, again, the point here is context. Do you really believe that Frank Rosenblatt didn't reference Gauss/Legendre/Laplace because it slipped his mind? He certainly understood modern statistics (of the 1940s and 1950s). > > Certainly you'd agree that FR could have referenced linear regression as a precursor, or "pretty similar" to what he was working on; it seems disingenuous to imply he was plagiarizing Gauss et al.--right? Why would he? > > Finally then, in any historical reconstruction I can think of, it just doesn't make sense. Sorry. > > Steve > > >> -----Original Message----- >> From: Connectionists On Behalf Of Schmidhuber Juergen >> Sent: Friday, December 31, 2021 11:00 AM >> To: connectionists at cs.cmu.edu >> Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. >> >> Sure, Steve, perceptron/Adaline/other similar methods of the 1950s/60s are not quite the same, but the obvious origin and ancestor of all those single-layer "shallow learning"
architectures/methods is indeed linear regression; today's simplest NNs minimizing mean squared error are exactly what they had 2 centuries ago. And the first working deep learning methods of the 1960s did NOT really require "modern" backprop (published in 1970 by Linnainmaa [BP1-5]). For example, Ivakhnenko & Lapa (1965) [DEEP1-2] incrementally trained and pruned their deep networks layer by layer to learn internal representations, using regression and a separate validation set. Amari (1967-68) [GD1] used stochastic gradient descent [STO51-52] to learn internal representations WITHOUT "modern" backprop in his multilayer perceptrons. Jürgen >> >> >>> On 31 Dec 2021, at 18:24, Stephen José Hanson wrote: >>> >>> Well the perceptron is closer to logistic regression... but the Heaviside function of course is <0,1>, so technically not related to linear regression, which is using covariance to estimate betas... >>> >>> does that matter? Yes, if you want to be hyper correct--as this appears to be-- Berkson (1944) coined the logit.. as log odds.. for probabilistic classification.. this was formally developed by Cox in the early 60s, so unlikely even in this case to be a precursor to perceptron. >>> >>> My point was that DL requires both a learning algorithm (BP) and an >>> architecture.. which seems to me much more responsible for the success of DL. >>> >>> S >>> >>> On 12/31/21 4:03 AM, Schmidhuber Juergen wrote: >>>> Steve, this is not about machine learning in general, just about deep >>>> learning vs shallow learning. However, I added the Pandemonium - >>>> thanks for that! You ask: how is a linear regressor of 1800 >>>> (Gauss/Legendre) related to a linear neural network? It's formally >>>> equivalent, of course! (The only difference is that the weights are >>>> often called beta_i rather than w_i.) Shallow learning: one adaptive >>>> layer. Deep learning: many adaptive layers. Cheers, Jürgen >>>> >>>> >>>>> On 31 Dec 2021, at 00:28, Stephen José
Hanson >>>>> >>>>> wrote: >>>>> >>>>> Despite the comprehensive feel of this, it still appears to me to be too focused on Back-propagation per se.. (except for that pesky Gauss/Legendre ref--which still baffles me, at least how this is related to a "neural network"), and at the same time it appears to be missing other more general epoch-conceptually relevant cases, say: >>>>> >>>>> Oliver Selfridge and his Pandemonium model.. which was a hierarchical feature analysis system.. which certainly was in the air during the Neural network learning heyday... in fact, Minsky cites Selfridge as one of his mentors. >>>>> >>>>> Arthur Samuel: Checker playing system.. which learned an evaluation function from a hierarchical search. >>>>> >>>>> Rosenblatt's advisor was Egon Brunswik.. who was a gestalt perceptual psychologist who introduced the concept that the world was stochastic and the organism had to adapt to this variance somehow.. he called it "probabilistic functionalism", which brought attention to learning, perception and decision theory, certainly all piece parts of what we call neural networks. >>>>> >>>>> There are many other such examples that influenced or provided context for the yeasty mix that was the 1940s and 1950s, where Neural Networks first appeared partly due to Pitts and McCulloch, which entangled the human brain with computation and early computers themselves. >>>>> >>>>> I just don't see this as didactic, in the sense of a conceptual view of the multidimensional history of the field, as opposed to a 1-dimensional exegesis of mathematical threads through various statistical algorithms. >>>>> >>>>> Steve >>>>> >>>>> On 12/30/21 1:03 PM, Schmidhuber Juergen wrote: >>>>> >>>>>> Dear connectionists, >>>>>> >>>>>> in the wake of massive open online peer review, public comments on the connectionists mailing list [CONN21] and many additional private comments (some by well-known deep learning pioneers) helped to update and improve upon version 1 of the report.
The essential statements of the text remain unchanged as their accuracy remains unchallenged. I'd like to thank everyone from the bottom of my heart for their feedback up until this point and hope everyone will be satisfied with the changes. Here is the revised version 2 with over 300 references: >>>>>> >>>>>> https://urldefense.com/v3/__https://people.idsia.ch/*juergen/scientific-integrity-turing-award-deep-learning.html__;fg!!IKRxdwAv5BmarQ!NsJ4lf4yO2BDIBzlUVfGKvTtf_QXY8dpZaHzCSzHCvEhXGJUTyRTzZybDQg-DZY$ >>>>>> >>>>>> In particular, Sec. II has become a brief history of deep learning up to the 1970s: >>>>>> >>>>>> Some of the most powerful NN architectures (i.e., recurrent NNs) were discussed in 1943 by McCulloch and Pitts [MC43] and formally analyzed in 1956 by Kleene [K56] - the closely related prior work in physics by Lenz, Ising, Kramers, and Wannier dates back to the 1920s [L20][I25][K41][W45]. In 1948, Turing wrote up ideas related to artificial evolution [TUR1] and learning NNs. He failed to formally publish his ideas though, which explains the obscurity of his thoughts here. Minsky's simple neural SNARC computer dates back to 1951. Rosenblatt's perceptron with a single adaptive layer learned in 1958 [R58] (Joseph [R61] mentions an earlier perceptron-like device by Farley & Clark); Widrow & Hoff's similar Adaline learned in 1962 [WID62]. Such single-layer "shallow learning" actually started around 1800 when Gauss & Legendre introduced linear regression and the method of least squares [DL1-2] - a famous early example of pattern recognition and generalization from training data through a parameterized predictor is Gauss' rediscovery of the asteroid Ceres based on previous astronomical observations.
Deeper multilayer perceptrons (MLPs) were discussed by Steinbuch [ST61-95] (1961), Joseph [R61] (1961), and Rosenblatt [R62] (1962), who wrote about "back-propagating errors" in an MLP with a hidden layer [R62], but did not yet have a general deep learning algorithm for deep MLPs (what's now called backpropagation is quite different and was first published by Linnainmaa in 1970 [BP1-BP5][BPA-C]). Successful learning in deep architectures started in 1965 when Ivakhnenko & Lapa published the first general, working learning algorithms for deep MLPs with arbitrarily many hidden layers (already containing the now popular multiplicative gates) [DEEP1-2][DL1-2]. A paper of 1971 [DEEP2] already described a deep learning net with 8 layers, trained by their highly cited method which was still popular in the new millennium [DL2], especially in Eastern Europe, where much of Machine Learning was born [MIR](Sec. 1)[R8]. LBH failed to cite this, just like they failed to cite Amari [GD1], who in 1967 proposed stochastic gradient descent [STO51-52] (SGD) for MLPs and whose implementation [GD2,GD2a] (with Saito) learned internal representations at a time when compute was billions of times more expensive than today (see also Tsypkin's work [GDa-b]). (In 1972, Amari also published what was later sometimes called the Hopfield network or Amari-Hopfield Network [AMH1-3].) Fukushima's now widely used deep convolutional NN architecture was first introduced in the 1970s [CNN1]. >>>> >>>>>> Jürgen >>>>>> >>>>>> ****************************** >>>>>> >>>>>> On 27 Oct 2021, at 10:52, Schmidhuber Juergen >>>>>> wrote: >>>>>> >>>>>> Hi, fellow artificial neural network enthusiasts! >>>>>> >>>>>> The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it.
I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field. >>>>>> >>>>>> Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch : >>>>>> >>>>>> https://urldefense.com/v3/__https://people.idsia.ch/*juergen/scientific-integrity-turing-award-deep-learning.html__;fg!!IKRxdwAv5BmarQ!NsJ4lf4yO2BDIBzlUVfGKvTtf_QXY8dpZaHzCSzHCvEhXGJUTyRTzZybDQg-DZY$ >>>>>> >>>>>> The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. >>>>>> >>>>>> I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. >>>>>> >>>>>> Thank you all in advance for your help! >>>>>> >>>>>> Jürgen Schmidhuber >>>>> -- >>> -- From juergen at idsia.ch Sun Jan 2 10:47:23 2022 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Sun, 2 Jan 2022 15:47:23 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: References: <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> Message-ID: <54CB34D3-F098-4664-B00B-6460CF1A4587@supsi.ch> Steve, almost all of deep learning is about engineering and problem solving, not about explaining or modeling biological neurons/synapses/dendrites. The most successful deep learners of today generalize the linear regressors of 1800. Same error. Same objective. Same basic architecture per unit or "neuron." Same weights per neuron. But now with nonlinear differentiable activation functions and deeper global architectures. Jürgen > On 2 Jan 2022, at 17:55, Stephen José Hanson wrote: > > Juergen: > > 'And even today, many students are taught NNs like this: let's start with a linear single-layer NN (activation = sum of weighted inputs). Now minimize mean squared error on the training set. That's good old linear regression (method of least squares' > > Indeed there are many things students are taught that are wrong or misleading for simplification or from ignorance.. That doesn't justify more of them. > > So this is multiple linear regression you are talking about.. but, again, a different model from a Neural Network. > > Not a matter of math.. not talking eigenvectors here, we are still talking about a model of a biological neuron. Maybe not a great model, but a step in the direction of brain-like modeling. Multiple regression is not such a model. > > Minimizing sums of squares and taking partials wrt parameters will result in formulae for beta weights and intercepts. A useful model for interpreting the linear effects of non-collinear variables on a response, widely useful in many scientific fields. But not a Neural Network--not a model of neurons and synapses and dendrites.
Nonetheless, a useful pragmatic model developed for matrices of data with multiple variables and observations. > > There was simply no reason that Frank Rosenblatt should have referenced this math, as it had nothing whatsoever to do with the Perceptron, since no partials of sums of squares could be computed. It's the math, and it should be clear now. > > Steve > > On 1/2/22 8:43 AM, Schmidhuber Juergen wrote: >> And even today, many students are taught NNs like this: let's start with a linear single-layer NN (activation = sum of weighted inputs). Now minimize mean squared error on the training set. That's good old linear regression (method of least squares > -- > From juergen at idsia.ch Mon Jan 3 03:38:05 2022 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Mon, 3 Jan 2022 08:38:05 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> Message-ID: <3155202C-080E-4BE7-84B6-A567E306AC1D@supsi.ch> Terry, please don't throw smoke candles like that! This is not about basic math such as Calculus (actually first published by Leibniz; later Newton was also credited for his unpublished work; Archimedes already had special cases thereof over 2000 years ago; the Indian Kerala school made essential contributions around 1400). In fact, my report addresses such smoke candles in Sec. XII: "Some claim that 'backpropagation' is just the chain rule of Leibniz (1676) & L'Hopital (1696). No, it is the efficient way of applying the chain rule to big networks with differentiable nodes (there are also many inefficient ways of doing this). It was not published until 1970 [BP1]."
You write: "All these threads will be sorted out by historians one hundred years from now." To answer that, let me just cut and paste the last sentence of my conclusions: "However, today's scientists won't have to wait for AI historians to establish proper credit assignment. It is easy enough to do the right thing right now." You write: "let us be good role models and mentors" to the new generation. Then please do what's right! Your recent survey [S20] does not help. It's mentioned in my report as follows: "ACM seems to be influenced by a misleading 'history of deep learning' propagated by LBH & co-authors, e.g., Sejnowski [S20] (see Sec. XIII). It goes more or less like this: 'In 1969, Minsky & Papert [M69] showed that shallow NNs without hidden layers are very limited and the field was abandoned until a new generation of neural network researchers took a fresh look at the problem in the 1980s [S20].' However, as mentioned above, the 1969 book [M69] addressed a 'problem' of Gauss & Legendre's shallow learning (~1800)[DL1-2] that had already been solved 4 years prior by Ivakhnenko & Lapa's popular deep learning method [DEEP1-2][DL2] (and then also by Amari's SGD for MLPs [GD1-2]). Minsky was apparently unaware of this and failed to correct it later [HIN](Sec. I).... deep learning research was alive and kicking also in the 1970s, especially outside of the Anglosphere." Just follow ACM's Code of Ethics and Professional Conduct [ACM18] which states: "Computing professionals should therefore credit the creators of ideas, inventions, work, and artifacts, and respect copyrights, patents, trade secrets, license agreements, and other methods of protecting authors' works." No need to wait for 100 years. Jürgen > On 2 Jan 2022, at 23:29, Terry Sejnowski wrote: > > We would be remiss not to acknowledge that backprop would not be possible without the calculus, > so Isaac Newton should also have been given credit, at least as much credit as Gauss.
> > All these threads will be sorted out by historians one hundred years from now. > Our precious time is better spent moving the field forward. There is much more to discover. > > A new generation with better computational and mathematical tools than we had back > in the last century has joined us, so let us be good role models and mentors to them. > > Terry > > ----- > > On 1/2/2022 5:43 AM, Schmidhuber Juergen wrote: >> Asim wrote: "In fairness to Geoffrey Hinton, he did acknowledge the work of Amari in a debate about connectionism at the ICNN'97 .... He literally said 'Amari invented back propagation'..." when he sat next to Amari and Werbos. Later, however, he failed to cite Amari's stochastic gradient descent (SGD) for multilayer NNs (1967-68) [GD1-2a] in his 2015 survey [DL3], his 2021 ACM lecture [DL3a], and other surveys. Furthermore, SGD [STO51-52] (Robbins, Monro, Kiefer, Wolfowitz, 1951-52) is not even backprop. Backprop is just a particularly efficient way of computing gradients in differentiable networks, known as the reverse mode of automatic differentiation, due to Linnainmaa (1970) [BP1] (see also Kelley's precursor of 1960 [BPa]). Hinton did not cite these papers either, and in 2019 embarrassingly did not hesitate to accept an award for having "created ... the backpropagation algorithm" [HIN]. All references and more on this can be found in the report, especially in Sec. XII. >> >> The deontology of science requires: If one "re-invents" something that was already known, and only becomes aware of it later, one must at least clarify it later [DLC], and correctly give credit in all follow-up papers and presentations. Also, ACM's Code of Ethics and Professional Conduct [ACM18] states: "Computing professionals should therefore credit the creators of ideas, inventions, work, and artifacts, and respect copyrights, patents, trade secrets, license agreements, and other methods of protecting authors' works." LBH didn't.
>> >> Steve still doesn't believe that linear regression of 200 years ago is equivalent to linear NNs. In a mature field such as math we would not have such a discussion. The math is clear. And even today, many students are taught NNs like this: let's start with a linear single-layer NN (activation = sum of weighted inputs). Now minimize mean squared error on the training set. That's good old linear regression (method of least squares). Now let's introduce multiple layers and nonlinear but differentiable activation functions, and derive backprop for deeper nets in 1960-70 style (still used today, half a century later). >> >> Sure, an important new variation of the 1950s (emphasized by Steve) was to transform linear NNs into binary classifiers with threshold functions. Nevertheless, the first adaptive NNs (still widely used today) are 1.5 centuries older except for the name. >> >> Happy New Year! >> >> Jürgen >> >> >>> On 2 Jan 2022, at 03:43, Asim Roy wrote: >>> >>> And, by the way, Paul Werbos was also there at the same debate. And so was Teuvo Kohonen. >>> >>> Asim >>> >>> -----Original Message----- >>> From: Asim Roy >>> Sent: Saturday, January 1, 2022 3:19 PM >>> To: Schmidhuber Juergen ; connectionists at cs.cmu.edu >>> Subject: RE: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. >>> >>> In fairness to Geoffrey Hinton, he did acknowledge the work of Amari in a debate about connectionism at the ICNN'97 (International Conference on Neural Networks) in Houston. He literally said "Amari invented back propagation" and Amari was sitting next to him. I still have a recording of that debate. >>> >>> Asim Roy >>> Professor, Information Systems >>> Arizona State University >>> https://isearch.asu.edu/profile/9973 >>> https://lifeboat.com/ex/bios.asim.roy >> >> On 2 Jan 2022, at 02:31, Stephen José Hanson wrote: >> >> Juergen: Happy New Year! >> >> "are not quite the same"..
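For what it's worth, the equivalence debated above (a single linear unit minimizing mean squared error versus Gauss/Legendre least squares) is easy to check numerically. A small sketch on synthetic data; the learning rate and iteration count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # inputs
X1 = np.hstack([X, np.ones((200, 1))])   # append a bias column
true_w = np.array([1.5, -2.0, 0.5, 0.3])
y = X1 @ true_w + 0.01 * rng.normal(size=200)

# Closed-form least squares (Gauss/Legendre, ~1800): solve the normal equations.
w_ols = np.linalg.lstsq(X1, y, rcond=None)[0]

# The same model viewed as a single linear "neuron" (activation = weighted sum),
# trained by gradient descent on mean squared error.
w_nn = np.zeros(4)
lr = 0.1
for _ in range(2000):
    err = X1 @ w_nn - y
    w_nn -= lr * (2.0 / len(y)) * (X1.T @ err)

print(np.max(np.abs(w_nn - w_ols)))  # -> essentially 0: the solutions coincide
```

The MSE surface is quadratic with a unique minimizer, so gradient descent on the linear unit converges to exactly the least-squares coefficients; only the parameter names differ (w_i vs. beta_i).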
>> >> I understand that it's expedient sometimes to use linear regression to approximate the Perceptron (I've had other connectionist friends tell me the same thing), which has its own incremental update rule.. that is doing <0,1> classification. So I guess if you don't like the analogy to logistic regression.. maybe Fisher's LDA? This whole thing still doesn't scan for me. >> >> So, again, the point here is context. Do you really believe that Frank Rosenblatt didn't reference Gauss/Legendre/Laplace because it slipped his mind? He certainly understood modern statistics (of the 1940s and 1950s). >> >> Certainly you'd agree that FR could have referenced linear regression as a precursor, or "pretty similar" to what he was working on; it seems disingenuous to imply he was plagiarizing Gauss et al.--right? Why would he? >> >> Finally then, in any historical reconstruction I can think of, it just doesn't make sense. Sorry. >> >> Steve >> >> >>> -----Original Message----- >>> From: Connectionists On Behalf Of Schmidhuber Juergen >>> Sent: Friday, December 31, 2021 11:00 AM >>> To: connectionists at cs.cmu.edu >>> Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. >>> >>> Sure, Steve, perceptron/Adaline/other similar methods of the 1950s/60s are not quite the same, but the obvious origin and ancestor of all those single-layer "shallow learning" architectures/methods is indeed linear regression; today's simplest NNs minimizing mean squared error are exactly what they had 2 centuries ago. And the first working deep learning methods of the 1960s did NOT really require "modern" backprop (published in 1970 by Linnainmaa [BP1-5]). For example, Ivakhnenko & Lapa (1965) [DEEP1-2] incrementally trained and pruned their deep networks layer by layer to learn internal representations, using regression and a separate validation set.
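The Ivakhnenko & Lapa procedure described above (layers fit incrementally by regression, pruned with a separate validation set, no backprop anywhere) can be caricatured in a few lines. A heavily simplified GMDH-flavored sketch; the quadratic two-input units, the three-layer depth, and the keep-the-best-4 pruning rule are illustrative assumptions, not the original method:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 4))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]      # target with an interaction term
Xtr, Xva, ytr, yva = X[:200], X[200:], y[:200], y[200:]

def fit_unit(a, b, t):
    """Least-squares fit of one quadratic two-input unit."""
    F = np.stack([np.ones_like(a), a, b, a * b, a * a, b * b], axis=1)
    coef, *_ = np.linalg.lstsq(F, t, rcond=None)
    return coef

def unit_out(coef, a, b):
    F = np.stack([np.ones_like(a), a, b, a * b, a * a, b * b], axis=1)
    return F @ coef

train, valid = Xtr.T.copy(), Xva.T.copy()    # rows = current layer's outputs
best_err = np.inf
for layer in range(3):                       # grow up to 3 layers
    cands = []
    for i, j in combinations(range(len(train)), 2):
        c = fit_unit(train[i], train[j], ytr)          # regression on training set
        err = np.mean((unit_out(c, valid[i], valid[j]) - yva) ** 2)
        cands.append((err, i, j, c))                   # scored on validation set
    cands.sort(key=lambda t: t[0])
    keep = cands[:4]                                   # prune: keep the 4 best units
    train = np.stack([unit_out(c, train[i], train[j]) for _, i, j, c in keep])
    valid = np.stack([unit_out(c, valid[i], valid[j]) for _, i, j, c in keep])
    best_err = min(best_err, keep[0][0])
print(best_err)  # validation MSE of the best unit found
```

Every weight is set by a closed-form regression per unit; the held-out set decides which units survive, which is how internal representations emerge without any gradient propagated through layers.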
Amari (1967-68) [GD1] used stochastic gradient descent [STO51-52] to learn internal representations WITHOUT "modern" backprop in his multilayer perceptrons. Jürgen >>> >>> >>>> On 31 Dec 2021, at 18:24, Stephen José Hanson wrote: >>>> >>>> Well the perceptron is closer to logistic regression... but the Heaviside function of course is <0,1>, so technically not related to linear regression, which is using covariance to estimate betas... >>>> >>>> does that matter? Yes, if you want to be hyper correct--as this appears to be-- Berkson (1944) coined the logit.. as log odds.. for probabilistic classification.. this was formally developed by Cox in the early 60s, so unlikely even in this case to be a precursor to perceptron. >>>> >>>> My point was that DL requires both a learning algorithm (BP) and an >>>> architecture.. which seems to me much more responsible for the success of DL. >>>> >>>> S >>>> >>>> On 12/31/21 4:03 AM, Schmidhuber Juergen wrote: >>>>> Steve, this is not about machine learning in general, just about deep >>>>> learning vs shallow learning. However, I added the Pandemonium - >>>>> thanks for that! You ask: how is a linear regressor of 1800 >>>>> (Gauss/Legendre) related to a linear neural network? It's formally >>>>> equivalent, of course! (The only difference is that the weights are >>>>> often called beta_i rather than w_i.) Shallow learning: one adaptive >>>>> layer. Deep learning: many adaptive layers. Cheers, Jürgen >>>>> >>>>> >>>>>> On 31 Dec 2021, at 00:28, Stephen José Hanson >>>>>> >>>>>> wrote: >>>>>> >>>>>> Despite the comprehensive feel of this, it still appears to me to be too focused on Back-propagation per se.. (except for that pesky Gauss/Legendre ref--which still baffles me, at least how this is related to a "neural network"), and at the same time it appears to be missing other more general epoch-conceptually relevant cases, say: >>>>>> >>>>>> Oliver Selfridge and his Pandemonium model..
which was a hierarchical feature analysis system.. which certainly was in the air during the Neural network learning heyday... in fact, Minsky cites Selfridge as one of his mentors. >>>>>> >>>>>> Arthur Samuel: Checker playing system.. which learned an evaluation function from a hierarchical search. >>>>>> >>>>>> Rosenblatt's advisor was Egon Brunswik.. who was a gestalt perceptual psychologist who introduced the concept that the world was stochastic and the organism had to adapt to this variance somehow.. he called it "probabilistic functionalism", which brought attention to learning, perception and decision theory, certainly all piece parts of what we call neural networks. >>>>>> >>>>>> There are many other such examples that influenced or provided context for the yeasty mix that was the 1940s and 1950s, where Neural Networks first appeared partly due to Pitts and McCulloch, which entangled the human brain with computation and early computers themselves. >>>>>> >>>>>> I just don't see this as didactic, in the sense of a conceptual view of the multidimensional history of the field, as opposed to a 1-dimensional exegesis of mathematical threads through various statistical algorithms. >>>>>> >>>>>> Steve >>>>>> >>>>>> On 12/30/21 1:03 PM, Schmidhuber Juergen wrote: >>>>>> >>>>>>> Dear connectionists, >>>>>>> >>>>>>> in the wake of massive open online peer review, public comments on the connectionists mailing list [CONN21] and many additional private comments (some by well-known deep learning pioneers) helped to update and improve upon version 1 of the report. The essential statements of the text remain unchanged as their accuracy remains unchallenged. I'd like to thank everyone from the bottom of my heart for their feedback up until this point and hope everyone will be satisfied with the changes.
Here is the revised version 2 with over 300 references: >>>>>>> >>>>>>> https://urldefense.com/v3/__https://people.idsia.ch/*juergen/scientific-integrity-turing-award-deep-learning.html__;fg!!IKRxdwAv5BmarQ!NsJ4lf4yO2BDIBzlUVfGKvTtf_QXY8dpZaHzCSzHCvEhXGJUTyRTzZybDQg-DZY$ >>>>>>> >>>>>>> In particular, Sec. II has become a brief history of deep learning up to the 1970s: >>>>>>> >>>>>>> Some of the most powerful NN architectures (i.e., recurrent NNs) were discussed in 1943 by McCulloch and Pitts [MC43] and formally analyzed in 1956 by Kleene [K56] - the closely related prior work in physics by Lenz, Ising, Kramers, and Wannier dates back to the 1920s [L20][I25][K41][W45]. In 1948, Turing wrote up ideas related to artificial evolution [TUR1] and learning NNs. He failed to formally publish his ideas though, which explains the obscurity of his thoughts here. Minsky's simple neural SNARC computer dates back to 1951. Rosenblatt's perceptron with a single adaptive layer learned in 1958 [R58] (Joseph [R61] mentions an earlier perceptron-like device by Farley & Clark); Widrow & Hoff's similar Adaline learned in 1962 [WID62]. Such single-layer "shallow learning" actually started around 1800 when Gauss & Legendre introduced linear regression and the method of least squares [DL1-2] - a famous early example of pattern recognition and generalization from training data through a parameterized predictor is Gauss' rediscovery of the asteroid Ceres based on previous astronomical observations. Deeper multilayer perceptrons (MLPs) were discussed by Steinbuch [ST61-95] (1961), Joseph [R61] (1961), and Rosenblatt [R62] (1962), who wrote about "back-propagating errors" in an MLP with a hidden layer [R62], but did not yet have a general deep learning algorithm for deep MLPs (what's now called backpropagation is quite different and was first published by Linnainmaa in 1970 [BP1-BP5][BPA-C]).
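The parenthetical above distinguishes what is now called backpropagation (the reverse mode of automatic differentiation) from the earlier layer-wise schemes. A minimal sketch of a reverse sweep for a two-layer net follows; the network shape, the tanh nonlinearity, and the finite-difference check are illustrative assumptions, not anyone's historical code:

```python
import numpy as np

def forward(x, W1, W2):
    # Forward pass: cache the intermediates needed by the reverse sweep.
    h_pre = W1 @ x          # hidden pre-activation
    h = np.tanh(h_pre)      # hidden activation
    y = W2 @ h              # linear output
    return y, (x, h_pre, h)

def backward(dy, W1, W2, cache):
    # Reverse mode: propagate the output sensitivity dy backwards,
    # reusing the cached forward quantities (the chain rule in reverse order).
    x, h_pre, h = cache
    dW2 = np.outer(dy, h)
    dh = W2.T @ dy
    dh_pre = dh * (1.0 - np.tanh(h_pre) ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = np.outer(dh_pre, x)
    return dW1, dW2

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
x, t = rng.normal(size=2), np.array([0.5])

y, cache = forward(x, W1, W2)
dW1, dW2 = backward(y - t, W1, W2, cache)   # gradient of 0.5 * ||y - t||^2

# Check one entry of dW1 against a finite difference.
eps = 1e-6
Wp = W1.copy()
Wp[0, 0] += eps
yp, _ = forward(x, Wp, W2)
num = (0.5 * np.sum((yp - t) ** 2) - 0.5 * np.sum((y - t) ** 2)) / eps
print(abs(num - dW1[0, 0]) < 1e-4)
```

The point of the sketch is only the control flow: one forward pass storing intermediates, one backward pass whose cost is of the same order as the forward pass, regardless of the number of parameters.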
Successful learning in deep architectures started in 1965 when Ivakhnenko & Lapa published the first general, working learning algorithms for deep MLPs with arbitrarily many hidden layers (already containing the now popular multiplicative gates) [DEEP1-2][DL1-2]. A paper of 1971 [DEEP2] already described a deep learning net with 8 layers, trained by their highly cited method which was still popular in the new millennium [DL2], especially in Eastern Europe, where much of Machine Learning was born [MIR](Sec. 1)[R8]. LBH failed to cite this, just like they failed to cite Amari [GD1], who in 1967 proposed stochastic gradient descent [STO51-52] (SGD) for MLPs and whose implementation [GD2,GD2a] (with Saito) learned internal representations at a time when compute was billions of times more expensive than today (see also Tsypkin's work [GDa-b]). (In 1972, Amari also published what was later sometimes called the Hopfield network or Amari-Hopfield Network [AMH1-3].) Fukushima's now widely used deep convolutional NN architecture was first introduced in the 1970s [CNN1]. >>>>> >>>>>>> Jürgen >>>>>>> >>>>>>> >>>>>>> ****************************** >>>>>>> >>>>>>> On 27 Oct 2021, at 10:52, Schmidhuber Juergen >>>>>>> >>>>>>> wrote: >>>>>>> >>>>>>> Hi, fellow artificial neural network enthusiasts! >>>>>>> >>>>>>> The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field. >>>>>>> >>>>>>> Following the great success of massive open online peer review >>>>>>> (MOOR) for my 2015 survey of deep learning (now the most cited >>>>>>> article ever published in the journal Neural Networks), I've >>>>>>> decided to put forward another piece for MOOR.
I want to thank the >>>>>>> many experts who have already provided me with comments on it. >>>>>>> Please send additional relevant references and suggestions for >>>>>>> improvements for the following draft directly to me at >>>>>>> juergen at idsia.ch : >>>>>>> >>>>>>> https://urldefense.com/v3/__https://people.idsia.ch/*juergen/scientific-integrity-turing-award-deep-learning.html__;fg!!IKRxdwAv5BmarQ!NsJ4lf4yO2BDIBzlUVfGKvTtf_QXY8dpZaHzCSzHCvEhXGJUTyRTzZybDQg-DZY$ >>>>>>> >>>>>>> The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. >>>>>>> >>>>>>> I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. >>>>>>> >>>>>>> Thank you all in advance for your help!
>>>>>>> >>>>>>> Jürgen Schmidhuber >>>>>>> >>>>>> -- >>>>>> >>>> -- >>>> >>> >> > > From talking2sandip at gmail.com Mon Jan 3 06:00:28 2022 From: talking2sandip at gmail.com (Sandip Paul) Date: Mon, 3 Jan 2022 16:30:28 +0530 Subject: Connectionists: Fwd: Winter School on Deep Learning From Perceptrons to Transformers Organized by The Electronics and Communication Sciences Unit Indian Statistical Institute Kolkata In-Reply-To: References: Message-ID: ** Please ignore if you received multiple copies and cross-posting of this mail ** Kindly circulate this mail to a suitable audience

Winter School on Deep Learning: From Perceptrons to Transformers
21st January - 12th March 2022 (Fridays and Saturdays)
Electronics and Communication Sciences Unit
Indian Statistical Institute, Kolkata

Important Dates:
Submit Application on Website: Dec 28, 2021 - Jan 12, 2022
Notification to Selected Applicants: Jan 13, 2022
Registration: Jan 14 - Jan 17, 2022
Course Duration: Jan 21 - Mar 12, 2022

Call for Participation

The Objective: The Electronics and Communication Sciences Unit, Indian Statistical Institute, Kolkata is organizing the Winter School on Deep Learning: From Perceptrons to Transformers. This winter school will focus heavily on imparting hands-on experience in developing a wide range of classical and advanced deep learning models, in addition to making the associated theory easy to understand. Participants will learn from the basics of machine learning to advanced deep learning-based approaches with applications to Computer Vision and Natural Language Processing. Theoretical lectures will be delivered by renowned professors and scientists (from the Indian Statistical Institute and other esteemed organizations) who have made significant contributions in their areas of research. The lectures will be supplemented by extremely detailed hands-on sessions instructed by post-docs and research scholars.
Course coverage: The winter school will have the following course structure (theory and associated hands-on):
- Basics of Python
- Basics of the Deep Learning Library: PyTorch
- Essentials of Vector Calculus and Linear Algebra for Machine Learning
- Conceptual Fundamentals of Machine Learning, Image Processing, Computer Vision, Natural Language Processing
- Perceptrons and Backpropagation
- Ingredients of Deep Learning: Gradient Descent, Batch Normalization, Regularization, Dropout
- Convolutional Neural Networks (CNN), Convolutional Autoencoders
- CNN for Object Classification, Detection, and Segmentation
- Recurrent Neural Networks, LSTM, Word Embedding
- Attention Models and Transformers (BERT and Vision Transformer)
- Deep Generative Models (GAN and VAE)
- Weakly Supervised Deep Learning, Self-Supervised Learning
- Meta-Learning and Few-Shot Learning
- Deep Reinforcement Learning
- Explainable Artificial Intelligence
- Geometric Deep Learning

Mode of tutorials: Lectures and hands-on sessions will be conducted in online mode only. All sessions will be on Fridays and Saturdays, and the recordings will be shared with all the participants.

Who can apply? Professionals from academia and industry, research/project scholars, masters and final-year bachelors students. Interested candidates must submit an online application. Selected applicants will be informed to register for the school.

For application, registration fees and other details: www.sites.google.com/view/wsdl2022/

-- Electronics and Communication Sciences Unit Indian Statistical Institute 203 B T Road Kolkata 700108 Email: wsdl2022 at isical.ac.in Web: Winter School

From achler at gmail.com Mon Jan 3 05:56:42 2022 From: achler at gmail.com (Tsvi Achler) Date: Mon, 3 Jan 2022 02:56:42 -0800 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> Message-ID: I have been resistant to comment because we are ultimately talking about bad position-jockeying behavior in this field. The reality is that everyone here who is successful has had to display some sort of skill in it. This is the downfall of academia and will, in my opinion, be its ultimate undoing. With this discussion we have an opportunity to raise awareness and work towards not repeating it, or at least discussing what can be done about it. However, I am really triggered when it is actively obscured in combination with claims to be above it by somehow focusing on "new ideas" while instead normalizing and actively doing it. We all know this jockeying and bad behavior unjustifiably helps inhibit new ideas and then mis-credit when the new ideas eventually get through. This inhibition and bad behavior is occurring today. For example, because of jockeying and turf, this community is especially resistant to looking at models which primarily use self-inhibitory presynaptic feedback (25 years). Please don't be dismissive of the problems displayed here and act like we are above it all when we are contributing to the problems right as we speak. I suggest instead embracing the problem and discussing how not to repeat it for future generations.
Sincerely, -Tsvi On Sun, Jan 2, 2022 at 11:12 PM Terry Sejnowski wrote: > We would be remiss not to acknowledge that backprop would not be > possible without the calculus, > so Isaac Newton should also have been given credit, at least as much > credit as Gauss. > > All these threads will be sorted out by historians one hundred years > from now. > Our precious time is better spent moving the field forward. There is > much more to discover. > > A new generation with better computational and mathematical tools than > we had back > in the last century have joined us, so let us be good role models and > mentors to them. > > Terry > > ----- > > On 1/2/2022 5:43 AM, Schmidhuber Juergen wrote: > > Asim wrote: "In fairness to Geoffrey Hinton, he did acknowledge the work > of Amari in a debate about connectionism at the ICNN'97 .... He literally > said 'Amari invented back propagation'..." when he sat next to Amari and > Werbos. Later, however, he failed to cite Amari's stochastic gradient > descent (SGD) for multilayer NNs (1967-68) [GD1-2a] in his 2015 survey > [DL3], his 2021 ACM lecture [DL3a], and other surveys. Furthermore, SGD > [STO51-52] (Robbins, Monro, Kiefer, Wolfowitz, 1951-52) is not even > backprop. Backprop is just a particularly efficient way of computing > gradients in differentiable networks, known as the reverse mode of > automatic differentiation, due to Linnainmaa (1970) [BP1] (see also > Kelley's precursor of 1960 [BPa]). Hinton did not cite these papers either, > and in 2019 embarrassingly did not hesitate to accept an award for having > "created ... the backpropagation algorithm" [HIN]. All references and more > on this can be found in the report, especially in Sec. XII. > > The deontology of science requires: If one "re-invents" something that > was already known, and only becomes aware of it later, one must at least > clarify it later [DLC], and correctly give credit in all follow-up papers > and presentations.
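The distinction quoted above between SGD and backprop is worth making concrete: SGD is only the parameter-update rule and consumes a gradient estimate however it was obtained, while backprop is one particular way of computing that gradient. A toy sketch with Robbins-Monro style step sizes follows; the quadratic objective and the noise level are illustrative assumptions:

```python
import numpy as np

# SGD is just the update rule: it consumes a gradient estimate,
# however that estimate was produced (backprop is one such way).
def sgd_step(w, grad, lr):
    return w - lr * grad

# Toy objective f(w) = 0.5 * ||w - w_star||^2 with gradient w - w_star.
w_star = np.array([1.0, -2.0])
grad_f = lambda w: w - w_star

rng = np.random.default_rng(1)
w = np.zeros(2)
for t in range(1, 5001):
    noisy_grad = grad_f(w) + 0.1 * rng.normal(size=2)  # stochastic estimate
    w = sgd_step(w, noisy_grad, lr=1.0 / t)            # decaying step sizes

print(np.linalg.norm(w - w_star))  # small: the iterates approach the minimizer
```

Nothing in the loop knows how `noisy_grad` was computed, which is exactly why crediting SGD and crediting backprop are separate questions.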
Also, ACM's Code of Ethics and Professional Conduct > [ACM18] states: "Computing professionals should therefore credit the > creators of ideas, inventions, work, and artifacts, and respect copyrights, > patents, trade secrets, license agreements, and other methods of protecting > authors' works." LBH didn't. > > Steve still doesn't believe that linear regression of 200 years ago is > equivalent to linear NNs. In a mature field such as math we would not have > such a discussion. The math is clear. And even today, many students are > taught NNs like this: let's start with a linear single-layer NN (activation > = sum of weighted inputs). Now minimize mean squared error on the training > set. That's good old linear regression (method of least squares). Now let's > introduce multiple layers and nonlinear but differentiable activation > functions, and derive backprop for deeper nets in 1960-70 style (still used > today, half a century later). > > Sure, an important new variation of the 1950s (emphasized by Steve) was > to transform linear NNs into binary classifiers with threshold functions. > Nevertheless, the first adaptive NNs (still widely used today) are 1.5 > centuries older except for the name. > > Happy New Year! > > Jürgen > > > >> On 2 Jan 2022, at 03:43, Asim Roy wrote: > >> > >> And, by the way, Paul Werbos was also there at the same debate. And so > was Teuvo Kohonen. > >> > >> Asim > >> > >> -----Original Message----- > >> From: Asim Roy > >> Sent: Saturday, January 1, 2022 3:19 PM > >> To: Schmidhuber Juergen ; connectionists at cs.cmu.edu > >> Subject: RE: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc. > >> > >> In fairness to Geoffrey Hinton, he did acknowledge the work of Amari in > a debate about connectionism at the ICNN'97 (International Conference on > Neural Networks) in Houston. He literally said "Amari invented back > propagation" and Amari was sitting next to him.
I still have a recording of > that debate. > >> > >> Asim Roy > >> Professor, Information Systems > >> Arizona State University > >> https://isearch.asu.edu/profile/9973 > >> https://lifeboat.com/ex/bios.asim.roy > > > > On 2 Jan 2022, at 02:31, Stephen José Hanson > wrote: > > > > Juergen: Happy New Year! > > > > "are not quite the same".. > > > > I understand that it's expedient sometimes to use linear regression to > approximate the Perceptron (I've had other connectionist friends tell me > the same thing), which has its own incremental update rule..that is doing > <0,1> classification. So I guess if you don't like the analogy to > logistic regression.. maybe Fisher's LDA? This whole thing still doesn't > scan for me. > > > > So, again the point here is context. Do you really believe that Frank > Rosenblatt didn't reference Gauss/Legendre/Laplace because it slipped his > mind? He certainly understood modern statistics (of the 1940s and 1950s). > > > > Certainly you'd agree that FR could have referenced linear regression as > a precursor, or "pretty similar" to what he was working on; it seems > disingenuous to imply he was plagiarizing Gauss et al.--right? Why would > he? > > > > Finally then, in any historical reconstruction I can think of, it just > doesn't make sense. Sorry. > > > > Steve > > > > > >> -----Original Message----- > >> From: Connectionists > On Behalf Of Schmidhuber Juergen > >> Sent: Friday, December 31, 2021 11:00 AM > >> To: connectionists at cs.cmu.edu > >> Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc. > >> > >> Sure, Steve, perceptron/Adaline/other similar methods of the 1950s/60s > are not quite the same, but the obvious origin and ancestor of all those > single-layer "shallow learning" architectures/methods is indeed linear > regression; today's simplest NNs minimizing mean squared error are exactly > what they had 2 centuries ago.
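The equivalence asserted here is easy to check numerically: a linear single-layer net fitted by gradient descent on mean squared error recovers the least-squares coefficients beta exactly. A short sketch, where the synthetic data, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # inputs
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)   # noisy targets

# Closed-form least squares (Gauss/Legendre): beta minimizes ||X beta - y||^2.
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# Linear single-layer "NN": the same model, with weights w fitted by
# gradient descent on the mean squared error instead of a closed form.
w = np.zeros(3)
for _ in range(5000):
    grad = 2.0 * X.T @ (X @ w - y) / len(y)   # d/dw of mean((Xw - y)^2)
    w -= 0.1 * grad

print(np.allclose(w, beta, atol=1e-6))        # the two solutions coincide
```

Only the names differ: the statistician's beta_i are the network's w_i, as the thread puts it.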
And the first working deep learning methods > of the 1960s did NOT really require "modern" backprop (published in 1970 by > Linnainmaa [BP1-5]). For example, Ivakhnenko & Lapa (1965) [DEEP1-2] > incrementally trained and pruned their deep networks layer by layer to > learn internal representations, using regression and a separate validation > set. Amari (1967-68)[GD1] used stochastic gradient descent [STO51-52] to > learn internal representations WITHOUT "modern" backprop in his multilayer > perceptrons. Jürgen

From pfbaldi at ics.uci.edu Mon Jan 3 09:55:06 2022 From: pfbaldi at ics.uci.edu (Baldi,Pierre) Date: Mon, 3 Jan 2022 06:55:06 -0800 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> Message-ID: <1ac9ee85-39db-3289-b5fd-88136f7c3b72@ics.uci.edu> Terry: We can all agree on the importance of mentoring the next generation. However, given that: 1) you have been in full and sole control of the NIPS/NeurIPS foundation since the 1980s; 2) you have been in full and sole control of Neural Computation since the 1980s; 3) you have extensively published in Neural Computation (and now also PNAS); 4) you have made sure, year after year, that you and your BHL/CIFAR friends were able to control and subtly manipulate NIPS/NeurIPS (misleading the field in wrong directions, preventing new ideas and outsiders from flourishing, and distorting credit attribution). Can you please explain to this mailing list how this serves as being "a good role model" (to use your own words) for the next generation? Or did you mean it in a more cynical way--indeed this is one of the possible ways for a scientist to be "successful"?
--Pierre On 1/2/2022 12:29 PM, Terry Sejnowski wrote: > We would be remiss not to acknowledge that backprop would not be > possible without the calculus, > so Isaac newton should also have been given credit, at least as much > credit as Gauss. > > All these threads will be sorted out by historians one hundred years > from now. > Our precious time is better spent moving the field forward.? There is > much more to discover. > > A new generation with better computational and mathematical tools than > we had back > in the last century have joined us, so let us be good role models and > mentors to them. > > Terry > > ----- > > On 1/2/2022 5:43 AM, Schmidhuber Juergen wrote: >> Asim wrote: "In fairness to Jeffrey Hinton, he did acknowledge the >> work of Amari in a debate about connectionism at the ICNN?97 .... He >> literally said 'Amari invented back propagation'..." when he sat next >> to Amari and Werbos. Later, however, he failed to cite Amari?s >> stochastic gradient descent (SGD) for multilayer NNs (1967-68) >> [GD1-2a] in his 2015 survey [DL3], his 2021 ACM lecture [DL3a], and >> other surveys.? Furthermore, SGD [STO51-52] (Robbins, Monro, Kiefer, >> Wolfowitz, 1951-52) is not even backprop. Backprop is just a >> particularly efficient way of computing gradients in differentiable >> networks, known as the reverse mode of automatic differentiation, due >> to Linnainmaa (1970) [BP1] (see also Kelley's precursor of 1960 >> [BPa]). Hinton did not cite these papers either, and in 2019 >> embarrassingly did not hesitate to accept an award for having >> "created ... the backpropagation algorithm? [HIN]. All references and >> more on this can be found in the report, especially in ! > Se! >> ? c. XII. >> >> The deontology of science requires: If one "re-invents" something >> that was already known, and only becomes aware of it later, one must >> at least clarify it later [DLC], and correctly give credit in all >> follow-up papers and presentations. 
Also, ACM's Code of Ethics and >> Professional Conduct [ACM18] states: "Computing professionals should >> therefore credit the creators of ideas, inventions, work, and >> artifacts, and respect copyrights, patents, trade secrets, license >> agreements, and other methods of protecting authors' works." LBH didn't. >> >> Steve still doesn't believe that linear regression of 200 years ago >> is equivalent to linear NNs. In a mature field such as math we would >> not have such a discussion. The math is clear. And even today, many >> students are taught NNs like this: let's start with a linear >> single-layer NN (activation = sum of weighted inputs). Now minimize >> mean squared error on the training set. That's good old linear >> regression (method of least squares). Now let's introduce multiple >> layers and nonlinear but differentiable activation functions, and >> derive backprop for deeper nets in 1960-70 style (still used today, >> half a century later). >> >> Sure, an important new variation of the 1950s (emphasized by Steve) >> was to transform linear NNs into binary classifiers with threshold >> functions. Nevertheless, the first adaptive NNs (still widely used >> today) are 1.5 centuries older except for the name. >> >> Happy New Year! >> >> J?rgen >> >> >>> On 2 Jan 2022, at 03:43, Asim Roy wrote: >>> >>> And, by the way, Paul Werbos was also there at the same debate. And >>> so was Teuvo Kohonen. >>> >>> Asim >>> >>> -----Original Message----- >>> From: Asim Roy >>> Sent: Saturday, January 1, 2022 3:19 PM >>> To: Schmidhuber Juergen ; connectionists at cs.cmu.edu >>> Subject: RE: Connectionists: Scientific Integrity, the 2021 Turing >>> Lecture, etc. >>> >>> In fairness to Jeffrey Hinton, he did acknowledge the work of Amari >>> in a debate about connectionism at the ICNN?97 (International >>> Conference on Neural Networks) in Houston. He literally said "Amari >>> invented back propagation" and Amari was sitting next to him. 
I >>> still have a recording of that debate. >>> >>> Asim Roy >>> Professor, Information Systems >>> Arizona State University >>> https://isearch.asu.edu/profile/9973 >>> https://lifeboat.com/ex/bios.asim.roy >> >> On 2 Jan 2022, at 02:31, Stephen Jos? Hanson >> wrote: >> >> Juergen:? Happy New Year! >> >> "are not quite the same".. >> >> I understand that its expedient sometimes to use linear regression to >> approximate the Perceptron.(i've had other connectionist friends tell >> me the same thing) which has its own incremental update rule..that is >> doing <0,1> classification.??? So I guess if you don't like the >> analogy to logistic regression.. maybe Fisher's LDA?? This whole >> thing still doesn't scan for me. >> >> So, again the point here is context.?? Do you really believe that >> Frank Rosenblatt didn't reference Gauss/Legendre/Laplace because it >> slipped his mind???? He certainly understood modern statistics (of >> the 1940s and 1950s) >> >> Certainly you'd agree that FR could have referenced linear regression >> as a precursor, or "pretty similar" to what he was working on, it >> seems disingenuous to imply he was plagiarizing Gauss et al.--right?? >> Why would he? >> >> Finally then, in any historical reconstruction, I can think of, it >> just doesn't make sense.??? Sorry. >> >> Steve >> >> >>> -----Original Message----- >>> From: Connectionists >>> On Behalf Of Schmidhuber Juergen >>> Sent: Friday, December 31, 2021 11:00 AM >>> To: connectionists at cs.cmu.edu >>> Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing >>> Lecture, etc. >>> >>> Sure, Steve, perceptron/Adaline/other similar methods of the >>> 1950s/60s are not quite the same, but the obvious origin and >>> ancestor of all those single-layer? ?shallow learning? >>> architectures/methods is indeed linear regression; today?s simplest >>> NNs minimizing mean squared error are exactly what they had 2 >>> centuries ago. 
And the first working deep learning methods of the >>> 1960s did NOT really require "modern" backprop (published in 1970 by >>> Linnainmaa [BP1-5]). For example, Ivakhnenko & Lapa (1965) [DEEP1-2] >>> incrementally trained and pruned their deep networks layer by layer >>> to learn internal representations, using regression and a separate >>> validation set. Amari (1967-68) [GD1] used stochastic gradient >>> descent [STO51-52] to learn internal representations WITHOUT >>> "modern" backprop in his multilayer perceptrons. Jürgen >>> >>> >>>> On 31 Dec 2021, at 18:24, Stephen José Hanson >>>> wrote: >>>> >>>> Well the perceptron is closer to logistic regression... but the >>>> Heaviside function of course is <0,1>, so technically not related >>>> to linear regression, which is using covariance to estimate betas... >>>> >>>> does that matter? Yes, if you want to be hyper correct--as this >>>> appears to be-- Berkson (1944) coined the logit.. as log odds.. for >>>> probabilistic classification.. this was formally developed by Cox >>>> in the early 60s, so unlikely even in this case to be a precursor >>>> to the perceptron. >>>> >>>> My point was that DL requires both a learning algorithm (BP) and an >>>> architecture.. which seems to me much more responsible for the >>>> success of DL. >>>> >>>> S >>>> >>>> >>>> >>>> On 12/31/21 4:03 AM, Schmidhuber Juergen wrote: >>>>> Steve, this is not about machine learning in general, just about deep >>>>> learning vs shallow learning. However, I added the Pandemonium - >>>>> thanks for that! You ask: how is a linear regressor of 1800 >>>>> (Gauss/Legendre) related to a linear neural network? It's formally >>>>> equivalent, of course! (The only difference is that the weights are >>>>> often called beta_i rather than w_i.) Shallow learning: one adaptive >>>>> layer. Deep learning: many adaptive layers. Cheers, Jürgen >>>>> >>>>> >>>>> >>>>> >>>>>> On 31 Dec 2021, at 00:28, Stephen José
Hanson >>>>>> >>>>>> wrote: >>>>>> >>>>>> Despite the comprehensive feel of this it still appears to me to >>>>>> be too focused on Back-propagation per se.. (except for that >>>>>> pesky Gauss/Legendre ref--which still baffles me at least how >>>>>> this is related to a "neural network"), and at the same time it >>>>>> appears to be missing other more general epoch-conceptually >>>>>> relevant cases, say: >>>>>> >>>>>> Oliver Selfridge and his Pandemonium model.. which was a >>>>>> hierarchical feature analysis system.. which certainly was in the >>>>>> air during the Neural network learning heyday... in fact, Minsky >>>>>> cites Selfridge as one of his mentors. >>>>>> >>>>>> Arthur Samuel: checker-playing system.. which learned an >>>>>> evaluation function from a hierarchical search. >>>>>> >>>>>> Rosenblatt's advisor was Egon Brunswik.. who was a gestalt >>>>>> perceptual psychologist who introduced the concept that the world >>>>>> was stochastic and the organism had to adapt to this variance >>>>>> somehow.. he called it "probabilistic functionalism", which >>>>>> brought attention to learning, perception and decision theory, >>>>>> certainly all piece parts of what we call neural networks. >>>>>> >>>>>> There are many other such examples that influenced or provided >>>>>> context for the yeasty mix that was the 1940s and 1950s, where Neural >>>>>> Networks first appeared, partly due to Pitts and McCulloch, which >>>>>> entangled the human brain with computation and early computers >>>>>> themselves. >>>>>> >>>>>> I just don't see this as didactic, in the sense of a conceptual >>>>>> view of the multidimensional history of the field, as >>>>>> opposed to a 1-dimensional exegesis of mathematical threads >>>>>> through various statistical algorithms.
>>>>>> >>>>>> Steve >>>>>> >>>>>> On 12/30/21 1:03 PM, Schmidhuber Juergen wrote: >>>>>> >>>>>>> Dear connectionists, >>>>>>> >>>>>>> in the wake of massive open online peer review, public comments >>>>>>> on the connectionists mailing list [CONN21] and many additional >>>>>>> private comments (some by well-known deep learning pioneers) >>>>>>> helped to update and improve upon version 1 of the report. The >>>>>>> essential statements of the text remain unchanged as their >>>>>>> accuracy remains unchallenged. I'd like to thank everyone from >>>>>>> the bottom of my heart for their feedback up until this point >>>>>>> and hope everyone will be satisfied with the changes. Here is >>>>>>> the revised version 2 with over 300 references: >>>>>>> >>>>>>> >>>>>>> >>>>>>> https://urldefense.com/v3/__https://people.idsia.ch/*juergen/scient >>>>>>> ific-integrity-turing-award-deep-learning.html__;fg!!IKRxdwAv5BmarQ >>>>>>> !NsJ4lf4yO2BDIBzlUVfGKvTtf_QXY8dpZaHzCSzHCvEhXGJUTyRTzZybDQg-DZY$ >>>>>>> >>>>>>> >>>>>>> >>>>>>> In particular, Sec. II has become a brief history of deep >>>>>>> learning up to the 1970s: >>>>>>> >>>>>>> Some of the most powerful NN architectures (i.e., recurrent NNs) >>>>>>> were discussed in 1943 by McCulloch and Pitts [MC43] and >>>>>>> formally analyzed in 1956 by Kleene [K56] - the closely related >>>>>>> prior work in physics by Lenz, Ising, Kramers, and Wannier dates >>>>>>> back to the 1920s [L20][I25][K41][W45]. In 1948, Turing wrote up >>>>>>> ideas related to artificial evolution [TUR1] and learning NNs. >>>>>>> He failed to formally publish his ideas though, which explains >>>>>>> the obscurity of his thoughts here. Minsky's simple neural SNARC >>>>>>> computer dates back to 1951. Rosenblatt's perceptron with a >>>>>>> single adaptive layer learned in 1958 [R58] (Joseph [R61] >>>>>>> mentions an earlier perceptron-like device by Farley & Clark); >>>>>>> Widrow & Hoff's similar Adaline learned in 1962 [WID62]. 
Such >>>>>>> single-layer "shallow learning" actually started around 1800 >>>>>>> when Gauss & Legendre introduced linear regression and the >>>>>>> method of least squares [DL1-2] - a famous early example of >>>>>>> pattern recognition and generalization from training data >>>>> through a parameterized predictor is Gauss' rediscovery of the >>>>> asteroid Ceres based on previous astronomical observations. Deeper >>>>> multilayer perceptrons (MLPs) were discussed by Steinbuch >>>>> [ST61-95] (1961), Joseph [R61] (1961), and Rosenblatt [R62] >>>>> (1962), who wrote about "back-propagating errors" in an MLP with a >>>>> hidden layer [R62], but did not yet have a general deep learning >>>>> algorithm for deep MLPs (what's now called backpropagation is >>>>> quite different and was first published by Linnainmaa in 1970 >>>>> [BP1-BP5][BPA-C]). Successful learning in deep architectures >>>>> started in 1965 when Ivakhnenko & Lapa published the first >>>>> general, working learning algorithms for deep MLPs with >>>>> arbitrarily many hidden layers (already containing the now popular >>>>> multiplicative gates) [DEEP1-2][DL1-2]. A paper of 1971 [DEEP2] >>>>> already described a deep learning net with 8 layers, trained by >>>>> their highly cited method which was still popular in the new >>>>> millennium [DL2], especially in Eastern Europe, >>>>> where much of Machine Learning was born [MIR](Sec. 1)[R8]. LBH failed to >>>>> cite this, just like they failed to cite Amari [GD1], who in 1967 >>>>> proposed stochastic gradient descent [STO51-52] (SGD) for MLPs and >>>>> whose implementation [GD2,GD2a] (with Saito) learned internal >>>>> representations at a time when compute was billions of times more >>>>> expensive than today (see also Tsypkin's work [GDa-b]). (In 1972, >>>>> Amari also published what was later sometimes called the Hopfield >>>>> network or Amari-Hopfield Network [AMH1-3].)
Fukushima's now >>>>> widely used deep convolutional NN architecture was first >>>>> introduced in the 1970s [CNN1]. >>>>> >>>>>>> Jürgen >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> ****************************** >>>>>>> >>>>>>> On 27 Oct 2021, at 10:52, Schmidhuber Juergen >>>>>>> >>>>>>> >>>>>>> >>>>>>> wrote: >>>>>>> >>>>>>> Hi, fellow artificial neural network enthusiasts! >>>>>>> >>>>>>> The connectionists mailing list is perhaps the oldest mailing >>>>>>> list on ANNs, and many neural net pioneers are still subscribed >>>>>>> to it. I am hoping that some of them - as well as their >>>>>>> contemporaries - might be able to provide additional valuable >>>>>>> insights into the history of the field. >>>>>>> >>>>>>> Following the great success of massive open online peer review >>>>>>> (MOOR) for my 2015 survey of deep learning (now the most cited >>>>>>> article ever published in the journal Neural Networks), I've >>>>>>> decided to put forward another piece for MOOR. I want to thank the >>>>>>> many experts who have already provided me with comments on it. >>>>>>> Please send additional relevant references and suggestions for >>>>>>> improvements for the following draft directly to me at >>>>>>> >>>>>>> juergen at idsia.ch >>>>>>> >>>>>>> : >>>>>>> >>>>>>> >>>>>>> >>>>>>> https://urldefense.com/v3/__https://people.idsia.ch/*juergen/scient >>>>>>> ific-integrity-turing-award-deep-learning.html__;fg!!IKRxdwAv5BmarQ >>>>>>> !NsJ4lf4yO2BDIBzlUVfGKvTtf_QXY8dpZaHzCSzHCvEhXGJUTyRTzZybDQg-DZY$ >>>>>>> >>>>>>> >>>>>>> >>>>>>> The above is a point-for-point critique of factual errors in >>>>>>> ACM's justification of the ACM A. M. Turing Award for deep >>>>>>> learning and a critique of the Turing Lecture published by ACM >>>>>>> in July 2021. This work can also be seen as a short history of >>>>>>> deep learning, at least as far as ACM's errors and the Turing >>>>>>> Lecture are concerned. >>>>>>> >>>>>>> I know that some view this as a controversial topic.
However, it >>>>>>> is the very nature of science to resolve controversies through >>>>>>> facts. Credit assignment is as core to scientific history as it >>>>>>> is to machine learning. My aim is to ensure that the true >>>>>>> history of our field is preserved for posterity. >>>>>>> >>>>>>> Thank you all in advance for your help! >>>>>>> >>>>>>> Jürgen Schmidhuber >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>> -- >>>>>> >>>>>> >>>> -- >>>> >>> >> > > -- Pierre Baldi, Ph.D. Distinguished Professor, Department of Computer Science Director, Institute for Genomics and Bioinformatics Associate Director, Center for Machine Learning and Intelligent Systems University of California, Irvine Irvine, CA 92697-3435 (949) 824-5809 (949) 824-9813 [FAX] Assistant: Janet Ko jko at uci.edu From calendarsites at insticc.org Mon Jan 3 10:12:48 2022 From: calendarsites at insticc.org (calendarsites at insticc.org) Date: Mon, 3 Jan 2022 15:12:48 -0000 Subject: Connectionists: =?iso-8859-1?q?IMPROVE_2022_-_New_Submission_Oppo?= =?iso-8859-1?q?rtunity?= Message-ID: <007301d800b4$5f99d520$1ecd7f60$@insticc.org> CALL FOR PAPERS 2nd International Conference on Image Processing and Vision Engineering **Submission Deadline: January 20, 2022** https://improve.scitevents.org/ April 22 - 24, 2022 Online Streaming Dear Colleagues, We would be very pleased to receive a regular or position paper submission from you, with recent results, to be presented at IMPROVE 2022; submissions are open until the 20th of January 2022. The conference registration fees have been strongly reduced, in order to give the community a unique opportunity to contribute and submit an original research paper to this conference. IMPROVE is a comprehensive conference of academic and technical nature, focused on image processing and computer vision practical applications.
It brings together researchers, engineers and practitioners working either in fundamental areas of image processing, developing new methods and techniques, including innovative machine learning approaches, as well as multimedia communications technology and applications of image processing and artificial vision in diverse areas. The conference will include in its technical program remarkable distinguished speakers, such as: Jiri Matas, Czech Technical University in Prague, Faculty of Electrical Engineering, Czech Republic Michael Bronstein, Imperial College London, United Kingdom René Vidal, The Johns Hopkins University, United States Proceedings will be submitted for indexation by: SCOPUS, Google Scholar, The DBLP Computer Science Bibliography, Semantic Scholar, Microsoft Academic, Engineering Index (EI), Web of Science / Conference Proceedings Citation Index. A short list of presented papers will be invited for a post-conference special issue of the Springer Nature Computer Science journal. All papers presented at the conference venue will also be available at the SCITEPRESS Digital Library. We hope this interests you, as it would be a great pleasure to count on your participation at our conference. Kind regards, Monica Saramago IMPROVE Secretariat Web: http://improve.scitevents.org e-mail: improve.secretariat at insticc.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongzhi.kuai at gmail.com Mon Jan 3 08:35:05 2022 From: hongzhi.kuai at gmail.com (H.Z.
Kuai) Date: Mon, 3 Jan 2022 22:35:05 +0900 Subject: Connectionists: Call for Papers: Brain Informatics 2022 Message-ID: +++++++++++++++++++++++++++++++++++++++++++++++++++++++++ CALL FOR PAPERS The 15th International Conference on Brain Informatics (BI'22) July 15-17, 2022 A Hybrid Conference with both Online and Offline Modes Co-hosted by University of Padua & University of Queensland Padova, Italy (In-Person) & Queensland, Australia (Online) Homepage: wi-consortium.org/conferences/bi2022/ The key theme: Brain Science meets Artificial Intelligence Celebrating the University of Padua's 800th anniversary +++++++++++++++++++++++++++++++++++++++++++++++++++++++++ *** IMPORTANT DATES *** - January 15, 2022: Workshop/Special Session proposal deadline - February 28, 2022: Paper submission deadline - March 15, 2022: Abstract submission deadline The International Conference on Brain Informatics (BI) series has established itself as the world's premier research conference on Brain Informatics, which is an emerging interdisciplinary and multidisciplinary research field that combines the efforts of Cognitive Science, Neuroscience, Machine Learning, Data Science, Artificial Intelligence (AI), and Information and Communication Technology (ICT) to explore the main problems that lie in the interplay between human brain studies and informatics research. The 15th International Conference on Brain Informatics (BI'22) provides a premier international forum to bring together researchers and practitioners from diverse fields for presentation of original research results, as well as exchange and dissemination of innovative and practical development experiences on Brain Informatics research, brain-inspired technologies and brain/mental health applications.
Relevant topics include but are not limited to: Track 1: Cognitive and Computational Foundations of Brain Science Track 2: Human Information Processing Systems Track 3: Brain Big Data Analytics, Curation and Management Track 4: Informatics Paradigms for Brain and Mental Health Research Track 5: Brain-Machine Intelligence and Brain-Inspired Computing *** Paper Submission and Publications *** Paper Submission: ----------------- Full papers should be limited to a maximum of 10 pages including figures and references in Springer LNCS Proceedings format ( https://www.springer.com/us/computer-science/lncs/conference-proceedings-guidelines ). Additional pages will be charged. All papers will be peer-reviewed and accepted based on originality, significance of contribution, technical merit, and presentation quality. All papers accepted (and all workshop & special sessions' full-length papers) will be published by Springer as a volume of the Springer-Nature LNAI Brain Informatics Book Series ( https://link.springer.com/conference/brain). Abstract Submission: -------------------- Research abstracts are encouraged and will be accepted for presentations in an oral presentation format and/or poster presentation format. Each abstract submission should include the title of the paper and an abstract body within 500 words. Journal Opportunities: ---------------------- High-quality BI conference papers will be nominated for a fast-track review and publication at the Brain Informatics Journal ( https://braininformatics.springeropen.com/), an international, peer-reviewed, interdisciplinary Open Access journal published by Springer Nature. 
Special Issues & Books: ----------------------- Workshop/special session organizers and BI conference session chairs may consider and can be invited to prepare a book proposal of special topics for possible book publication in the Springer-Nature Brain Informatics & Health Book Series (https://www.springer.com/series/15148), or a special issue at the Brain Informatics Journal. *** Workshop & Special Sessions *** Proposal Submissions: --------------------- BI'22 will be hosting a series of workshops and special sessions featuring topics relevant to the brain informatics community on the latest research and industry applications. Papers & Presentations: ----------------------- A workshop/special session typically takes a half-day (or full-day) and includes a mix of regular and invited presentations including regular papers, abstracts, invited papers as well as invited presentations. The paper and abstract submissions to workshops/special sessions will follow the same format as the BI conference papers and abstracts. Proposal Guidelines: -------------------- Each proposal should include 1) workshop/special session title; 2) length of the workshop (half/full day); 3) names, main contact, and a short bio of the workshop organizers; 4) brief description of the workshop scope and timeline; 5) prior history of the workshop (if any); 6) potential program committee members and invited speakers; 7) any other relevant information. Publications: ------------- Accepted workshop and special session full papers will be published at the same BI proceedings at the Springer-Nature LNAI Brain Informatics Book Series ( https://link.springer.com/conference/brain). Workshop organizers can be invited to contribute a book publication in the Springer-Nature Brain Informatics & Health Book Series, or a special issue at the Brain Informatics Journal. 
*** IMPORTANT DATES *** - January 15, 2022: Workshop/Special Session proposal deadline - February 28, 2022: Paper submission deadline - March 15, 2022: Abstract submission deadline - April 15, 2022: Paper acceptance notification - April 20, 2022: Notification of abstract acceptance - April 30, 2022: Final paper and abstract submission deadline - May 5, 2022: Accepted paper and abstract registration deadline - July 15-17, 2022: Conference Organizing Committee ++++++++++++++++++++++ Advisory Board Chair: Ning Zhong (Maebashi Institute of Technology, Japan) * Maurizio Corbetta (Padua Neuroscience Center & University of Padova, Italy) * Tianzi Jiang (Institute of Automation, CAS, China) * Nikola Kasabov (Auckland University of Technology, New Zealand) * Peipeng Liang (CNU School of Psychology, China) * Hesheng Liu (Harvard Medical School & Massachusetts General Hospital, USA) * Guoming Luan (Sanbo Brain Hospital, China) * Stefano Panzeri (University Medical Center Hamburg-Eppendorf, Germany) * Hanchuan Peng (SEU-Allen Institute for Brain & Intelligence, China) * Shinsuke Shimojo (California Institute of Technology, USA) General Chairs * Mufti Mahmud (Nottingham Trent University, UK) * Stefano Vassanelli (University of Padova, Italy) * Andre van Zundert (University of Queensland, Australia) Program Chairs * Alessandra Bertoldo (University of Padova, Italy) * Gopikrishna Deshpande (Auburn University, USA) * Jing He (University of Queensland, Australia) Publication Chair * Can Wang (Griffith University, Australia) Workshop/Special Session/Tutorial Chairs * Alessia Sarica (Magna Graecia University, Italy) * Xiaohui Tao (University of Southern Queensland, Australia) * Alberto Testolin (University of Padova, Italy) * Vassiliy Tsytsarev (University of Maryland, USA) * Juan Velasquez (University of Chile, Chile) * Vicky Yamamoto (USC Keck School of Medicine, USA) * Yang Yang (BFU Department of Psychology, China) Local Organization Chairs * Michele Allegra (University of 
Padova, Italy) * Claudia Cecchetto (University of Padova, Italy) * Daniela Pietrobon (University of Padova, Italy) * Samir Suweis (University of Padova, Italy) * Mattia Tambaro (University of Padova, Italy) Publicity Chairs * Abzetdin Adamov (ADA University, Azerbaijan) * M Shamim Kaiser (Jahangirnagar University, Bangladesh) * Hongzhi Kuai (Maebashi Institute of Technology, Japan) * Francesco Morabito (Mediterranean University of Reggio Calabria, Italy) * Yanqing Zhang (Georgia State University, USA) Contact Us: http://wi-consortium.org/conferences/bi2022/contact.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From juyang.weng at gmail.com Mon Jan 3 19:53:21 2022 From: juyang.weng at gmail.com (Juyang Weng) Date: Mon, 3 Jan 2022 19:53:21 -0500 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Message-ID: Schmidhuber Juergen wrote: "Steve, almost all of deep learning is about engineering and problem solving, not about explaining or modeling biological neurons/synapses/dendrites." You would be surprised: Any model about modeling biological brains needs CONSCIOUSNESS as a necessary condition.
My model for biological brains has been rejected by AAAI 2021, ICDL 2021, and finally accepted by an IEEE Electronics Conference! Human nature is blocking our capability to understand biological brains. We often mention biological brains. But when a model is presented, we fail to understand it by a huge margin. As correctly challenged by Michael Jordan \cite{Gome14}, the present model must holistically solve some open problems. The present work holistically solves the following 20 "million-dollar problems": 1. the image-annotation problem (e.g., giving the retina a bounding box to learn as in ImageNet \cite{Russakovsky15}), 2. the sensorimotor recurrence problem (e.g., any big data sets are invalid \cite{WengPSUTS21}), 3. the motor-supervision problem (e.g., impractical to supervise motors all the time), 4. the sensor calibration problem (e.g., a life calibrates the eyes automatically), 5. the inverse kinematics problem (e.g., a life calibrates all redundant limbs automatically), 6. the government-free problem (i.e., no intelligent governments inside the brain), 7. the closed-skull problem (e.g., supervising hidden neurons is not biologically plausible), 8. the nonlinear controller problem (e.g., a brain is a highly nonlinear controller), 9. the curse of dimensionality problem (e.g., too many receptors on the retina), 10. the under-sample problem (i.e., few available events in a life \cite{WengLCA09}), 11. the distributed vs. local representations problem (i.e., both representations emerge), 12. the frame problem (also called the symbol grounding problem; thus must be free from any symbols), 13. the local minima problem (so, must avoid error-backprop learning \cite{Krizhevsky17,LeCun15}), 14. the abstraction problem (i.e., require various invariances and transfers) \cite{WengIEEE-IS2014}, 15. the rule-like manipulation problem (e.g., not just fitting big data \cite{Harnad90,WengIJHR2020}), 16.
the smooth representations problem (e.g., so as to recruit neurons under brain injuries \cite{Elman97,Wu2019DN-2}), 17. the motivation problem (e.g., including reinforcement and various emotions \cite{Dreyfus92,WengNAI2e}), 18. the global optimality problem (e.g., comparisons under the Three Learning Conditions below \cite{WengPSUTS-ICDL21}), 19. the auto-programming for general purposes problem (e.g., writing a complex program \cite{WengIJHR2020}) and 20. the brain-thinking problem (e.g., planning and discovery \cite{Turing50,WuThink21}). Best regards, -John -- Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan.fuertinger at esi-frankfurt.de Tue Jan 4 06:43:01 2022 From: stefan.fuertinger at esi-frankfurt.de (=?UTF-8?Q?Stefan_F=c3=bcrtinger?=) Date: Tue, 4 Jan 2022 12:43:01 +0100 Subject: Connectionists: Open Position: Scientific Software Developer (f/m/d) at ESI Frankfurt Message-ID: Dear colleagues, Happy New Year and apologies for cross-postings. The IT department at the Ernst Strängmann Institute (ESI) for Neuroscience in cooperation with Max Planck Society in Frankfurt, Germany, is looking for a *Software Developer (f/m/d)* to closely collaborate with resident research groups developing custom-tailored software applications for experimental data acquisition and analysis. Data processing is performed on premises using a local high-performance computing (HPC) cluster comprising multiple hardware architectures (x86, IBM Power, GPU).
/Responsibilities:/ - Development of scientific software applications in Python - Administration of on-premise software development platforms (GitLab, SVN, Perforce) - Platform-specific code modifications and patch development for existing open-source analysis software - HPC cluster operation support /Requirements:/ - Profound experience with application development in Python - Experience with application development in C/C++ - Knowledge of and interest in common DevOps techniques (git and CI/CD pipelines) - Strong interest and experience in the Linux and open-source ecosystem The successful candidate will join a welcoming team at a dynamic international research institute. We provide a secure position in an exciting work environment offering a variety of interesting tasks and the possibility to grow and learn. Equal opportunities and diversity are important to us! All potential candidates are equally welcome and encouraged to apply before _January 31st 2022_; the full job ad is available on our website: https://www.esi-frankfurt.de/jobs/2021_12_07_softwareentwicklerin/ Please feel free to forward this message to interested candidates. All the best, Stefan -- Dr. Stefan Fürtinger Scientific Software Developer, IT Ernst Strängmann Institute (ESI) gGmbH for Neuroscience in Cooperation with Max Planck Society Deutschordenstraße 46 60528 Frankfurt am Main Germany E-Mail: stefan.fuertinger at esi-frankfurt.de Mobile: +49 151 688 174 23 Office: +49 69 96769 586 GitHub: https://github.com/esi-neuroscience Registered at Local Court Frankfurt am Main - HRB 84266 CEO: Prof. David Poeppel, PhD www.esi-frankfurt.de -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 5399 bytes Desc: S/MIME Cryptographic Signature URL: From beckers at mcmaster.ca Tue Jan 4 10:47:20 2022 From: beckers at mcmaster.ca (Sue Becker) Date: Tue, 04 Jan 2022 10:47:20 -0500 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> Message-ID: <14ffb6b0bc0902bb19bc1fc27b42eef8@mcmaster.ca> Pierre, I'm responding to your comment here: > Terry: ... you have made sure, year after year, that you and your > BHL/CIFAR > friends were able to control and subtly manipulate NIPS/NeurIPS > (misleading the field in wrong directions, preventing new ideas and > outsiders from flourishing, and distorting credit attribution). > > Can you please explain to this mailing list how this serves as being "a > good role model" (to use your own words) for the next generation? As loath as I am to wade into what has become a cesspool of a debate, you have gone way outside the bounds of accuracy, not to mention civility and decency, in directing your mudslinging at Terry Sejnowski. If anything, Terry deserves recognition and thanks for his many years of service to this community. If you think that NeurIPS is run by a bunch of insiders, try stepping up and volunteering your service to this conference, be a longtime committed reviewer, then become an Area Chair, do an outstanding job and be selected as the next program chair and then general chair. That is one path to influencing the future of the conference.
Much more importantly, the hundreds of dedicated reviewers are the ones who actually determine the content of the meeting, by identifying the very best papers out of the thousands of submissions received each year. There is no top-down control or manipulation over that process. Cheers, Sue --- Sue Becker, Professor Neurotechnology and Neuroplasticity Lab, PI Dept. of Psychology, Neuroscience & Behaviour, McMaster University www.science.mcmaster.ca/pnb/department/becker On 2022-01-03 09:55, Baldi, Pierre wrote: > Terry: > > We can all agree on the importance of mentoring the next generation. > However, given that: > > 1) you have been in full and sole control of the NIPS/NeurIPS > foundation > since the 1980s; > > 2) you have been in full and sole control of Neural Computation since > the 1980s; > > 3) you have extensively published in Neural Computation (and now also > PNAS); > > 4) you have made sure, year after year, that you and your BHL/CIFAR > friends were able to control and subtly manipulate NIPS/NeurIPS > (misleading the field in wrong directions, preventing new ideas and > outsiders from flourishing, and distorting credit attribution). > > Can you please explain to this mailing list how this serves as being "a > good role model" (to use your own words) for the next generation? > > Or did you mean it in a more cynical way--indeed this is one of the > possible ways for a scientist to be "successful"? > > --Pierre > > > > On 1/2/2022 12:29 PM, Terry Sejnowski wrote: >> We would be remiss not to acknowledge that backprop would not be >> possible without the calculus, >> so Isaac Newton should also have been given credit, at least as much >> credit as Gauss. >> >> All these threads will be sorted out by historians one hundred years >> from now. >> Our precious time is better spent moving the field forward. There is >> much more to discover.
>> >> A new generation with better computational and mathematical tools than >> we had back >> in the last century have joined us, so let us be good role models and >> mentors to them. >> >> Terry >> >> ----- >> >> On 1/2/2022 5:43 AM, Schmidhuber Juergen wrote: >>> Asim wrote: "In fairness to Geoffrey Hinton, he did acknowledge the >>> work of Amari in a debate about connectionism at the ICNN'97 .... He >>> literally said 'Amari invented back propagation'..." when he sat next >>> to Amari and Werbos. Later, however, he failed to cite Amari's >>> stochastic gradient descent (SGD) for multilayer NNs (1967-68) >>> [GD1-2a] in his 2015 survey [DL3], his 2021 ACM lecture [DL3a], and >>> other surveys. Furthermore, SGD [STO51-52] (Robbins, Monro, Kiefer, >>> Wolfowitz, 1951-52) is not even backprop. Backprop is just a >>> particularly efficient way of computing gradients in differentiable >>> networks, known as the reverse mode of automatic differentiation, due >>> to Linnainmaa (1970) [BP1] (see also Kelley's precursor of 1960 >>> [BPa]). Hinton did not cite these papers either, and in 2019 >>> embarrassingly did not hesitate to accept an award for having >>> "created ... the backpropagation algorithm" [HIN]. All references and >>> more on this can be found in the report, especially in Sec. XII. >>> >>> The deontology of science requires: if one "re-invents" something >>> that was already known, and only becomes aware of it later, one must >>> at least clarify it later [DLC], and correctly give credit in all >>> follow-up papers and presentations. Also, ACM's Code of Ethics and >>> Professional Conduct [ACM18] states: "Computing professionals should >>> therefore credit the creators of ideas, inventions, work, and >>> artifacts, and respect copyrights, patents, trade secrets, license >>> agreements, and other methods of protecting authors' works." LBH >>> didn't.
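To make the claim above concrete - that backprop is nothing more than the reverse mode of automatic differentiation applied to a network - here is a minimal sketch for a tiny two-layer scalar net, checked against finite differences. This is an illustrative example only, not code from any of the cited papers; all names are made up for the sketch.

```python
import math

def forward_backward(x, w1, w2):
    """Scalar two-layer net y = w2 * tanh(w1 * x), with a hand-written reverse pass."""
    # Forward pass: store the intermediate h for reuse in the reverse pass.
    h = math.tanh(w1 * x)
    y = w2 * h
    # Reverse pass: propagate dy/dy = 1 backwards through the computation
    # graph, reusing the forward intermediates -- this is reverse-mode AD.
    dy_dw2 = h                              # y = w2 * h
    dy_dh = w2
    dy_dw1 = dy_dh * (1.0 - h * h) * x      # tanh'(u) = 1 - tanh(u)^2
    return y, dy_dw1, dy_dw2

x, w1, w2 = 0.7, 1.3, -0.5
y, g1, g2 = forward_backward(x, w1, w2)

# Check the reverse-mode gradients against central finite differences.
eps = 1e-6
fd1 = (forward_backward(x, w1 + eps, w2)[0] - forward_backward(x, w1 - eps, w2)[0]) / (2 * eps)
fd2 = (forward_backward(x, w1, w2 + eps)[0] - forward_backward(x, w1, w2 - eps)[0]) / (2 * eps)
assert abs(g1 - fd1) < 1e-6 and abs(g2 - fd2) < 1e-6
```

The reverse pass costs roughly one forward pass regardless of the number of parameters, which is exactly what makes this mode of differentiation efficient for training networks.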
>>> >>> Steve still doesn't believe that linear regression of 200 years ago >>> is equivalent to linear NNs. In a mature field such as math we would >>> not have such a discussion. The math is clear. And even today, many >>> students are taught NNs like this: let's start with a linear >>> single-layer NN (activation = sum of weighted inputs). Now minimize >>> mean squared error on the training set. That's good old linear >>> regression (method of least squares). Now let's introduce multiple >>> layers and nonlinear but differentiable activation functions, and >>> derive backprop for deeper nets in 1960-70 style (still used today, >>> half a century later). >>> >>> Sure, an important new variation of the 1950s (emphasized by Steve) >>> was to transform linear NNs into binary classifiers with threshold >>> functions. Nevertheless, the first adaptive NNs (still widely used >>> today) are 1.5 centuries older except for the name. >>> >>> Happy New Year! >>> >>> Jürgen >>> >>> >>>> On 2 Jan 2022, at 03:43, Asim Roy wrote: >>>> >>>> And, by the way, Paul Werbos was also there at the same debate. And >>>> so was Teuvo Kohonen. >>>> >>>> Asim >>>> >>>> -----Original Message----- >>>> From: Asim Roy >>>> Sent: Saturday, January 1, 2022 3:19 PM >>>> To: Schmidhuber Juergen ; >>>> connectionists at cs.cmu.edu >>>> Subject: RE: Connectionists: Scientific Integrity, the 2021 Turing >>>> Lecture, etc. >>>> >>>> In fairness to Geoffrey Hinton, he did acknowledge the work of Amari >>>> in a debate about connectionism at the ICNN'97 (International >>>> Conference on Neural Networks) in Houston. He literally said "Amari >>>> invented back propagation" and Amari was sitting next to him. I >>>> still have a recording of that debate. >>>> >>>> Asim Roy >>>> Professor, Information Systems >>>> Arizona State University >>>> https://isearch.asu.edu/profile/9973 >>>> https://lifeboat.com/ex/bios.asim.roy >>> >>> On 2 Jan 2022, at 02:31, Stephen José Hanson >>> wrote: >>> >>> Juergen:
Happy New Year! >>> >>> "are not quite the same".. >>> >>> I understand that it's expedient sometimes to use linear regression to >>> approximate the Perceptron (I've had other connectionist friends tell >>> me the same thing), which has its own incremental update rule.. that is >>> doing <0,1> classification. So I guess if you don't like the >>> analogy to logistic regression.. maybe Fisher's LDA? This whole >>> thing still doesn't scan for me. >>> >>> So, again the point here is context. Do you really believe that >>> Frank Rosenblatt didn't reference Gauss/Legendre/Laplace because it >>> slipped his mind? He certainly understood modern statistics (of >>> the 1940s and 1950s). >>> >>> Certainly you'd agree that FR could have referenced linear regression >>> as a precursor, or "pretty similar" to what he was working on; it >>> seems disingenuous to imply he was plagiarizing Gauss et al.--right? >>> Why would he? >>> >>> Finally then, in any historical reconstruction I can think of, it >>> just doesn't make sense. Sorry. >>> >>> Steve >>> >>> >>>> -----Original Message----- >>>> From: Connectionists >>>> On Behalf Of Schmidhuber Juergen >>>> Sent: Friday, December 31, 2021 11:00 AM >>>> To: connectionists at cs.cmu.edu >>>> Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing >>>> Lecture, etc. >>>> >>>> Sure, Steve, perceptron/Adaline/other similar methods of the >>>> 1950s/60s are not quite the same, but the obvious origin and >>>> ancestor of all those single-layer "shallow learning" >>>> architectures/methods is indeed linear regression; today's simplest >>>> NNs minimizing mean squared error are exactly what they had 2 >>>> centuries ago. And the first working deep learning methods of the >>>> 1960s did NOT really require "modern" backprop (published in 1970 by >>>> Linnainmaa [BP1-5]).
For example, Ivakhnenko & Lapa (1965) [DEEP1-2] >>>> incrementally trained and pruned their deep networks layer by layer >>>> to learn internal representations, using regression and a separate >>>> validation set. Amari (1967-68) [GD1] used stochastic gradient >>>> descent [STO51-52] to learn internal representations WITHOUT >>>> "modern" backprop in his multilayer perceptrons. Jürgen >>>> >>>> >>>>> On 31 Dec 2021, at 18:24, Stephen José Hanson >>>>> wrote: >>>>> >>>>> Well, the perceptron is closer to logistic regression... but the >>>>> Heaviside function of course is <0,1>, so technically not related >>>>> to linear regression, which is using covariance to estimate betas... >>>>> >>>>> does that matter? Yes, if you want to be hyper correct--as this >>>>> appears to be-- Berkson (1944) coined the logit.. as log odds.. for >>>>> probabilistic classification.. this was formally developed by Cox >>>>> in the early 60s, so unlikely even in this case to be a precursor >>>>> to the perceptron. >>>>> >>>>> My point was that DL requires both a learning algorithm (BP) and an >>>>> architecture.. which seems to me much more responsible for the >>>>> success of DL. >>>>> >>>>> S >>>>> >>>>> >>>>> >>>>> On 12/31/21 4:03 AM, Schmidhuber Juergen wrote: >>>>>> Steve, this is not about machine learning in general, just about >>>>>> deep >>>>>> learning vs shallow learning. However, I added the Pandemonium - >>>>>> thanks for that! You ask: how is a linear regressor of 1800 >>>>>> (Gauss/Legendre) related to a linear neural network? It's formally >>>>>> equivalent, of course! (The only difference is that the weights >>>>>> are >>>>>> often called beta_i rather than w_i.) Shallow learning: one >>>>>> adaptive >>>>>> layer. Deep learning: many adaptive layers. Cheers, Jürgen >>>>>> >>>>>> >>>>>> >>>>>> >>>>>>> On 31 Dec 2021, at 00:28, Stephen José Hanson >>>>>>> >>>>>>> wrote: >>>>>>> >>>>>>> Despite the comprehensive feel of this, it still appears to me to >>>>>>> be
too focused on Back-propagation per se.. (except for that >>>>>>> pesky Gauss/Legendre ref--which still baffles me, at least how >>>>>>> this is related to a "neural network"), and at the same time it >>>>>>> appears to be missing other more general epoch-conceptually >>>>>>> relevant cases, say: >>>>>>> >>>>>>> Oliver Selfridge and his Pandemonium model.. which was a >>>>>>> hierarchical feature analysis system.. which certainly was in the >>>>>>> air during the Neural network learning heyday... in fact, Minsky >>>>>>> cites Selfridge as one of his mentors. >>>>>>> >>>>>>> Arthur Samuel: Checker playing system.. which learned an >>>>>>> evaluation function from a hierarchical search. >>>>>>> >>>>>>> Rosenblatt's advisor was Egon Brunswik.. who was a gestalt >>>>>>> perceptual psychologist who introduced the concept that the world >>>>>>> was stochastic and the organism had to adapt to this variance >>>>>>> somehow.. he called it "probabilistic functionalism", which >>>>>>> brought attention to learning, perception and decision theory, >>>>>>> certainly all piece parts of what we call neural networks. >>>>>>> >>>>>>> There are many other such examples that influenced or provided >>>>>>> context for the yeasty mix that was the 1940s and 1950s, where Neural >>>>>>> Networks first appeared, partly due to Pitts and McCulloch, which >>>>>>> entangled the human brain with computation and early computers >>>>>>> themselves. >>>>>>> >>>>>>> I just don't see this as didactic, in the sense of a conceptual >>>>>>> view of the multidimensional history of the field, as >>>>>>> opposed to a 1-dimensional exegesis of mathematical threads >>>>>>> through various statistical algorithms.
>>>>>>> >>>>>>> Steve >>>>>>> >>>>>>> On 12/30/21 1:03 PM, Schmidhuber Juergen wrote: >>>>>>> >>>>>>>> Dear connectionists, >>>>>>>> >>>>>>>> in the wake of massive open online peer review, public comments >>>>>>>> on the connectionists mailing list [CONN21] and many additional >>>>>>>> private comments (some by well-known deep learning pioneers) >>>>>>>> helped to update and improve upon version 1 of the report. The >>>>>>>> essential statements of the text remain unchanged as their >>>>>>>> accuracy remains unchallenged. I'd like to thank everyone from >>>>>>>> the bottom of my heart for their feedback up until this point >>>>>>>> and hope everyone will be satisfied with the changes. Here is >>>>>>>> the revised version 2 with over 300 references: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> In particular, Sec. II has become a brief history of deep >>>>>>>> learning up to the 1970s: >>>>>>>> >>>>>>>> Some of the most powerful NN architectures (i.e., recurrent NNs) >>>>>>>> were discussed in 1943 by McCulloch and Pitts [MC43] and >>>>>>>> formally analyzed in 1956 by Kleene [K56] - the closely related >>>>>>>> prior work in physics by Lenz, Ising, Kramers, and Wannier dates >>>>>>>> back to the 1920s [L20][I25][K41][W45]. In 1948, Turing wrote up >>>>>>>> ideas related to artificial evolution [TUR1] and learning NNs. >>>>>>>> He failed to formally publish his ideas though, which explains >>>>>>>> the obscurity of his thoughts here. Minsky's simple neural SNARC >>>>>>>> computer dates back to 1951.
Rosenblatt's perceptron with a >>>>>>>> single adaptive layer learned in 1958 [R58] (Joseph [R61] >>>>>>>> mentions an earlier perceptron-like device by Farley & Clark); >>>>>>>> Widrow & Hoff's similar Adaline learned in 1962 [WID62]. Such >>>>>>>> single-layer "shallow learning" actually started around 1800 >>>>>>>> when Gauss & Legendre introduced linear regression and the >>>>>>>> method of least squares [DL1-2] - a famous early example of >>>>>>>> pattern recognition and generalization from training data >>>>>>>> through a parameterized predictor is Gauss' rediscovery of the >>>>>>>> asteroid Ceres based on previous astronomical observations. Deeper >>>>>>>> multilayer perceptrons (MLPs) were discussed by Steinbuch >>>>>>>> [ST61-95] (1961), Joseph [R61] (1961), and Rosenblatt [R62] >>>>>>>> (1962), who wrote about "back-propagating errors" in an MLP with a >>>>>>>> hidden layer [R62], but did not yet have a general deep learning >>>>>>>> algorithm for deep MLPs (what's now called backpropagation is >>>>>>>> quite different and was first published by Linnainmaa in 1970 >>>>>>>> [BP1-BP5][BPA-C]). Successful learning in deep architectures >>>>>>>> started in 1965 when Ivakhnenko & Lapa published the first >>>>>>>> general, working learning algorithms for deep MLPs with >>>>>>>> arbitrarily many hidden layers (already containing the now popular >>>>>>>> multiplicative gates) [DEEP1-2][DL1-2]. A paper of 1971 [DEEP2] >>>>>>>> already described a deep learning net with 8 layers, trained by >>>>>>>> their highly cited method which was still popular in the new >>>>>>>> millennium [DL2], especially in Eastern Europe, >>>>>>>> where much of Machine Learning was born [MIR](Sec. 1)[R8]. LBH
>>>>>>>> failed to >>>>>>>> cite this, just like they failed to cite Amari [GD1], who in 1967 >>>>>>>> proposed stochastic gradient descent [STO51-52] (SGD) for MLPs and >>>>>>>> whose implementation [GD2,GD2a] (with Saito) learned internal >>>>>>>> representations at a time when compute was billions of times more >>>>>>>> expensive than today (see also Tsypkin's work [GDa-b]). (In 1972, >>>>>>>> Amari also published what was later sometimes called the Hopfield >>>>>>>> network or Amari-Hopfield Network [AMH1-3].) Fukushima's now >>>>>>>> widely used deep convolutional NN architecture was first >>>>>>>> introduced in the 1970s [CNN1]. >>>>>>>> >>>>>>>> Jürgen >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> ****************************** >>>>>>>> >>>>>>>> On 27 Oct 2021, at 10:52, Schmidhuber Juergen >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> wrote: >>>>>>>> >>>>>>>> Hi, fellow artificial neural network enthusiasts! >>>>>>>> >>>>>>>> The connectionists mailing list is perhaps the oldest mailing >>>>>>>> list on ANNs, and many neural net pioneers are still subscribed >>>>>>>> to it. I am hoping that some of them - as well as their >>>>>>>> contemporaries - might be able to provide additional valuable >>>>>>>> insights into the history of the field. >>>>>>>> >>>>>>>> Following the great success of massive open online peer review >>>>>>>> (MOOR) for my 2015 survey of deep learning (now the most cited >>>>>>>> article ever published in the journal Neural Networks), I've >>>>>>>> decided to put forward another piece for MOOR. I want to thank >>>>>>>> the >>>>>>>> many experts who have already provided me with comments on it.
>>>>>>>> Please send additional relevant references and suggestions for >>>>>>>> improvements for the following draft directly to me at >>>>>>>> >>>>>>>> juergen at idsia.ch >>>>>>>> >>>>>>>> : >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> The above is a point-for-point critique of factual errors in >>>>>>>> ACM's justification of the ACM A. M. Turing Award for deep >>>>>>>> learning and a critique of the Turing Lecture published by ACM >>>>>>>> in July 2021. This work can also be seen as a short history of >>>>>>>> deep learning, at least as far as ACM's errors and the Turing >>>>>>>> Lecture are concerned. >>>>>>>> >>>>>>>> I know that some view this as a controversial topic. However, it >>>>>>>> is the very nature of science to resolve controversies through >>>>>>>> facts. Credit assignment is as core to scientific history as it >>>>>>>> is to machine learning. My aim is to ensure that the true >>>>>>>> history of our field is preserved for posterity. >>>>>>>> >>>>>>>> Thank you all in advance for your help! >>>>>>>> >>>>>>>> Jürgen Schmidhuber >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> -- >>>>>>> >>>>>>> >>>>> -- >>>>> >>>> >>> >> >> From doya at oist.jp Tue Jan 4 11:54:31 2022 From: doya at oist.jp (Kenji Doya) Date: Tue, 4 Jan 2022 16:54:31 +0000 Subject: Connectionists: Neuro2022 in Okinawa: Abstract submission by January 20th Message-ID: <916FC83D-F915-43D0-87CB-631685D397C2@oist.jp> NEURO2022 June 30 - July 3, 2022, Okinawa, Japan https://neuro2022.jnss.org/en/ Plenary Lectures: Anne Churchland, University of California, Los Angeles Martin Schwab, University of Zurich Erin M.
Schuman, Max Planck Institute for Brain Research Brain Prize Lecture: Adrian Peter Bird, University of Edinburgh Neuro2022 is a joint conference of Japanese Neuroscience, Neurochemistry and Neural Networks Societies, held at Okinawa Convention Center and adjacent seaside facilities: https://www.oki-conven.jp The conference aims to be a forum for neuroscientists, clinical researchers, machine learning and AI engineers from around the world. All scientific programs will be in English and the conference will be in a hybrid format to allow online participation. Abstract submissions are open till January 20th, 8am UTC. Please visit the web site for details: https://neuro2022.jnss.org/en/abstract.html Neuro2022 is preceded by OIST Computational Neuroscience Course https://groups.oist.jp/ocnc and followed by the second edition of International Symposium on AI and Brain Science http://www.brain-ai.jp/symposium2020/ both held at Okinawa Institute of Science and Technology https://www.oist.jp https://groups.oist.jp/conference-venues We hope to see many of you on this beautiful semitropical island in this coming summer. Best wishes, Kenji Doya, Japan Neuroscience Society Kohtaro Takei, Japanese Society for Neurochemistry Kazushi Ikeda, Japanese Neural Network Society ---- Kenji Doya Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University 1919-1 Tancha, Onna, Okinawa 904-0495, Japan Phone: +81-98-966-8594; Fax: +81-98-966-2891 https://groups.oist.jp/ncu From g.brostow at cs.ucl.ac.uk Tue Jan 4 13:40:54 2022 From: g.brostow at cs.ucl.ac.uk (Gabriel J. Brostow) Date: Tue, 4 Jan 2022 18:40:54 +0000 Subject: Connectionists: Niantic R&D: 5 London openings and 6 internships in Computer Vision Message-ID: Our Research & Development team works on core problems around 3D modeling, understanding, perception, and human interaction. We publish code, libraries, and papers. 
Where possible, we also help bring research to millions of end-users through Niantic products like PokemonGo and through the publicly available ARDK. We're based in central London. Eventually, full-timers will be required to work from the office a few days per week. Current openings: *Internships for PhD students (full or part-time):* https://careers.nianticlabs.com/openings/software-engineering-phd-intern-research-and-development/ Interns for 2022 can be hired while living in the UK, Switzerland, Germany, California + other US states, regardless of where you're doing your PhD. We have desks for you in London too! *Research Software Engineer (PhD-level experience recommended):* https://careers.nianticlabs.com/openings/research-software-engineer/ (Senior version of this role is here: https://careers.nianticlabs.com/openings/senior-research-software-engineer-r-d-infrastructure/ ) *AR Software Engineer:* https://careers.nianticlabs.com/openings/ar-software-engineer/ *Engineering Manager in R&D* https://careers.nianticlabs.com/openings/engineering-manager-r-d/ If you're unsure which role to apply for, please reach out to Georgia Ind for help in applying: gind at nianticlabs.com.
From terry at salk.edu Tue Jan 4 23:40:08 2022 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 04 Jan 2022 20:40:08 -0800 Subject: Connectionists: NEURAL COMPUTATION - January 1, 2022 In-Reply-To: Message-ID: Neural Computation - Volume 34, Number 1 - January 1, 2022 available online for download now: http://www.mitpressjournals.org/toc/neco/34/1 http://cognet.mit.edu/content/neural-computation ----- Review Predictive Coding, Variational Autoencoders, and Biological Connections Joseph Marino Article Confidence-controlled Hebbian Learning Efficiently Extracts Category Membership From Stimuli Encoded in View of a Categorization Task Kevin Berlemont, Jean-Pierre Nadal, and Christopher Summerfield Letters Traveling Waves in Quasi One-dimensional Neuronal Minicolumns Vincent Baker, Luis Cruz Bridging the Functional and Wiring Properties of V1 Neurons Through Sparse Coding Xiaolin Hu, Zhigang Zeng Modeling the Ventral and Dorsal Cortical Visual Pathways Using Artificial Neural Networks Zhixian Han, Anne B.
Sereno Towards a Brain-inspired Developmental Neural Network Based on Dendritic Spine Dynamics Feifei Zhao, Yi Zeng, and Jun Bai Spatial Attention Enhances Crowded Stimulus Encoding Across Modeled Receptive Fields by Increasing Redundancy of Feature Representations Justin Theiss, Joel Donald Bowen, and Michael Andrew Silver A Double-Layer Multi-Resolution Classification Model for Decoding Spatio-Temporal Patterns of Spikes With Small Sample Size Xiwei She, Theodore W Berger, and Dong Song Neural Networks With Disabilities: An Introduction to Complementary Artificial Intelligence Vagan Terziyan, Olena Kaikova ----- ON-LINE -- http://www.mitpressjournals.org/neco MIT Press Journals, One Rogers Street, Cambridge, MA 02142-1209 Tel: (617) 253-2889 FAX: (617) 577-1545 journals-cs at mit.edu ----- From jose at rubic.rutgers.edu Tue Jan 4 12:49:22 2022 From: jose at rubic.rutgers.edu (Stephen José Hanson) Date: Tue, 4 Jan 2022 12:49:22 -0500 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: <14ffb6b0bc0902bb19bc1fc27b42eef8@mcmaster.ca> References: <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> <14ffb6b0bc0902bb19bc1fc27b42eef8@mcmaster.ca> Message-ID: <7fef2d7d-5594-4597-001d-bd16290a8550@rubic.rutgers.edu> +1 So I was the 4th program chair of NIPS back in 1991, the 5th General Chair. I have been to every advisory/exec board meeting since that time and almost every NIPS till Covid hit--last one in 2019. I have never seen or experienced a "cabal" of Terry "cohorts", just the opposite. Terry has maintained a high-integrity, honest environment and listened to all input and concerns.
NIPS is truly a democratic, serious and fair enterprise and organization, which is due to Terry's careful and light touch on the rudder. Sue's points are obvious to anyone who has curated this conference. It's really impossible to "subtly" or otherwise change the direction of the conference or prevent good research from being accepted. Is there a huge MISS rate.. no doubt. But the conference has been a success because of its transparency and its scientific diversity. I really don't understand where this is coming from, but certainly not from the well-documented concerns that Juergen has raised. He and I disagree on historical interpretation.. but I don't think this should be taken as evidence of some larger paranoid view of the field and the invisible hand that is controlling it. Steve On 1/4/22 10:47 AM, Sue Becker wrote: > Pierre, I'm responding to your comment here: > >> Terry: ... you have made sure, year after year, that you and your >> BHL/CIFAR >> friends were able to control and subtly manipulate NIPS/NeurIPS >> (misleading the field in wrong directions, preventing new ideas and >> outsiders from flourishing, and distorting credit attribution). >> >> Can you please explain to this mailing list how this serves as being "a >> good role model" (to use your own words) for the next generation? > > As loath as I am to wade into what has become a cesspool of a > debate, you have gone way outside the bounds of accuracy, not to > mention civility and decency, in directing your mudslinging at Terry > Sejnowski. If anything, Terry deserves recognition and thanks for his > many years of service to this community. > > If you think that NeurIPS is run by a bunch of insiders, try stepping > up and volunteering your service to this conference: be a longtime > committed reviewer, then become an Area Chair, do an outstanding job > and be selected as the next program chair and then general chair. That > is one path to influencing the future of the conference.
Much more > importantly, the hundreds of dedicated reviewers are the ones who > actually determine the content of the meeting, by identifying the very > best papers out of the thousands of submissions received each year.? > There is no top-down control or manipulation over that process. > > Cheers, > Sue > > --- > Sue Becker, Professor > Neurotechnology and Neuroplasticity Lab, PI > Dept. of Psychology Neuroscience & Behaviour, McMaster University > www.science.mcmaster.ca/pnb/department/becker > > > On 2022-01-03 09:55, Baldi,Pierre wrote: >> Terry: >> >> We can all agree on the importance of mentoring the next generation. >> However, given that: >> >> 1) you have been in full and sole control of the NIPS/NeurIPS foundation >> since the 1980s; >> >> 2) you have been in full and sole control of Neural Computation since >> the 1980s; >> >> 3) you have extensively published in Neural Computation (and now also >> PNAS); >> >> 4) you have made sure, year after year,? that you and your BHL/CIFAR >> friends were able to control and subtly manipulate NIPS/NeurIPS >> (misleading the field in wrong directions, preventing news ideas and >> outsiders from flourishing, and distorting credit attribution). >> >> Can you please explain to this mailing list how this serves as being "a >> good role model" (to use your own words) for the next generation? >> >> Or did you mean it in a more cynical way--indeed this is one of the >> possible ways for a scientist to be "successful"? >> >> --Pierre >> >> >> >> On 1/2/2022 12:29 PM, Terry Sejnowski wrote: >>> We would be remiss not to acknowledge that backprop would not be >>> possible without the calculus, >>> so Isaac newton should also have been given credit, at least as much >>> credit as Gauss. >>> >>> All these threads will be sorted out by historians one hundred years >>> from now. >>> Our precious time is better spent moving the field forward. There is >>> much more to discover. 
>>> >>> A new generation with better computational and mathematical tools than >>> we had back >>> in the last century have joined us, so let us be good role models and >>> mentors to them. >>> >>> Terry >>> >>> ----- >>> >>> On 1/2/2022 5:43 AM, Schmidhuber Juergen wrote: >>>> Asim wrote: "In fairness to Jeffrey Hinton, he did acknowledge the >>>> work of Amari in a debate about connectionism at the ICNN?97 .... He >>>> literally said 'Amari invented back propagation'..." when he sat next >>>> to Amari and Werbos. Later, however, he failed to cite Amari?s >>>> stochastic gradient descent (SGD) for multilayer NNs (1967-68) >>>> [GD1-2a] in his 2015 survey [DL3], his 2021 ACM lecture [DL3a], and >>>> other surveys.? Furthermore, SGD [STO51-52] (Robbins, Monro, Kiefer, >>>> Wolfowitz, 1951-52) is not even backprop. Backprop is just a >>>> particularly efficient way of computing gradients in differentiable >>>> networks, known as the reverse mode of automatic differentiation, due >>>> to Linnainmaa (1970) [BP1] (see also Kelley's precursor of 1960 >>>> [BPa]). Hinton did not cite these papers either, and in 2019 >>>> embarrassingly did not hesitate to accept an award for having >>>> "created ... the backpropagation algorithm? [HIN]. All references and >>>> more on this can be found in the report, especially in ! >>> Se! >>>> ? c. XII. >>>> >>>> The deontology of science requires: If one "re-invents" something >>>> that was already known, and only becomes aware of it later, one must >>>> at least clarify it later [DLC], and correctly give credit in all >>>> follow-up papers and presentations. Also, ACM's Code of Ethics and >>>> Professional Conduct [ACM18] states: "Computing professionals should >>>> therefore credit the creators of ideas, inventions, work, and >>>> artifacts, and respect copyrights, patents, trade secrets, license >>>> agreements, and other methods of protecting authors' works." LBH >>>> didn't. 
>>>> >>>> Steve still doesn't believe that linear regression of 200 years ago >>>> is equivalent to linear NNs. In a mature field such as math we would >>>> not have such a discussion. The math is clear. And even today, many >>>> students are taught NNs like this: let's start with a linear >>>> single-layer NN (activation = sum of weighted inputs). Now minimize >>>> mean squared error on the training set. That's good old linear >>>> regression (method of least squares). Now let's introduce multiple >>>> layers and nonlinear but differentiable activation functions, and >>>> derive backprop for deeper nets in 1960-70 style (still used today, >>>> half a century later). >>>> >>>> Sure, an important new variation of the 1950s (emphasized by Steve) >>>> was to transform linear NNs into binary classifiers with threshold >>>> functions. Nevertheless, the first adaptive NNs (still widely used >>>> today) are 1.5 centuries older except for the name. >>>> >>>> Happy New Year! >>>> >>>> J?rgen >>>> >>>> >>>>> On 2 Jan 2022, at 03:43, Asim Roy wrote: >>>>> >>>>> And, by the way, Paul Werbos was also there at the same debate. And >>>>> so was Teuvo Kohonen. >>>>> >>>>> Asim >>>>> >>>>> -----Original Message----- >>>>> From: Asim Roy >>>>> Sent: Saturday, January 1, 2022 3:19 PM >>>>> To: Schmidhuber Juergen ; connectionists at cs.cmu.edu >>>>> Subject: RE: Connectionists: Scientific Integrity, the 2021 Turing >>>>> Lecture, etc. >>>>> >>>>> In fairness to Jeffrey Hinton, he did acknowledge the work of Amari >>>>> in a debate about connectionism at the ICNN?97 (International >>>>> Conference on Neural Networks) in Houston. He literally said "Amari >>>>> invented back propagation" and Amari was sitting next to him. I >>>>> still have a recording of that debate. 
>>>>> >>>>> Asim Roy >>>>> Professor, Information Systems >>>>> Arizona State University >>>>> https://isearch.asu.edu/profile/9973 >>>>> https://lifeboat.com/ex/bios.asim.roy >>>> >>>> On 2 Jan 2022, at 02:31, Stephen Jos? Hanson >>>> wrote: >>>> >>>> Juergen:? Happy New Year! >>>> >>>> "are not quite the same".. >>>> >>>> I understand that its expedient sometimes to use linear regression to >>>> approximate the Perceptron.(i've had other connectionist friends tell >>>> me the same thing) which has its own incremental update rule..that is >>>> doing <0,1> classification.??? So I guess if you don't like the >>>> analogy to logistic regression.. maybe Fisher's LDA?? This whole >>>> thing still doesn't scan for me. >>>> >>>> So, again the point here is context.?? Do you really believe that >>>> Frank Rosenblatt didn't reference Gauss/Legendre/Laplace because it >>>> slipped his mind???? He certainly understood modern statistics (of >>>> the 1940s and 1950s) >>>> >>>> Certainly you'd agree that FR could have referenced linear regression >>>> as a precursor, or "pretty similar" to what he was working on, it >>>> seems disingenuous to imply he was plagiarizing Gauss et al.--right? >>>> Why would he? >>>> >>>> Finally then, in any historical reconstruction, I can think of, it >>>> just doesn't make sense.??? Sorry. >>>> >>>> Steve >>>> >>>> >>>>> -----Original Message----- >>>>> From: Connectionists >>>>> On Behalf Of Schmidhuber Juergen >>>>> Sent: Friday, December 31, 2021 11:00 AM >>>>> To: connectionists at cs.cmu.edu >>>>> Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing >>>>> Lecture, etc. >>>>> >>>>> Sure, Steve, perceptron/Adaline/other similar methods of the >>>>> 1950s/60s are not quite the same, but the obvious origin and >>>>> ancestor of all those single-layer? ?shallow learning? >>>>> architectures/methods is indeed linear regression; today?s simplest >>>>> NNs minimizing mean squared error are exactly what they had 2 >>>>> centuries ago. 
And the first working deep learning methods of the >>>>> 1960s did NOT really require "modern" backprop (published in 1970 by >>>>> Linnainmaa [BP1-5]). For example, Ivakhnenko & Lapa (1965) [DEEP1-2] >>>>> incrementally trained and pruned their deep networks layer by layer >>>>> to learn internal representations, using regression and a separate >>>>> validation set. Amari (1967-68) [GD1] used stochastic gradient >>>>> descent [STO51-52] to learn internal representations WITHOUT >>>>> "modern" backprop in his multilayer perceptrons. Jürgen >>>>> >>>>> >>>>>> On 31 Dec 2021, at 18:24, Stephen José Hanson >>>>>> wrote: >>>>>> >>>>>> Well the perceptron is closer to logistic regression... but the >>>>>> Heaviside function of course is <0,1>, so technically not related >>>>>> to linear regression, which is using covariance to estimate betas... >>>>>> >>>>>> does that matter? Yes, if you want to be hyper correct--as this >>>>>> appears to be-- Berkson (1944) coined the logit.. as log odds.. for >>>>>> probabilistic classification.. this was formally developed by Cox >>>>>> in the early 60s, so unlikely even in this case to be a precursor >>>>>> to the perceptron. >>>>>> >>>>>> My point was that DL requires both a learning algorithm (BP) and an >>>>>> architecture.. which seems to me much more responsible for the >>>>>> success of DL. >>>>>> >>>>>> S >>>>>> >>>>>> >>>>>> >>>>>> On 12/31/21 4:03 AM, Schmidhuber Juergen wrote: >>>>>>> Steve, this is not about machine learning in general, just about >>>>>>> deep >>>>>>> learning vs shallow learning. However, I added the Pandemonium - >>>>>>> thanks for that! You ask: how is a linear regressor of 1800 >>>>>>> (Gauss/Legendre) related to a linear neural network? It's formally >>>>>>> equivalent, of course! (The only difference is that the weights are >>>>>>> often called beta_i rather than w_i.) Shallow learning: one >>>>>>> adaptive >>>>>>> layer. Deep learning: many adaptive layers.
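Hanson's distinction above, a Heaviside threshold unit trained with the perceptron's error-driven update rule rather than by least-squares fitting, can be sketched as follows (illustrative only; tiny made-up data; not part of the original messages):

```python
# Illustrative sketch of Rosenblatt's perceptron rule on a Heaviside unit:
# the weights change only on misclassified examples, which is an
# error-driven procedure, not a least-squares fit.

def heaviside(z):
    return 1 if z >= 0 else 0

# Linearly separable 2-D points, each with a constant bias input of 1.0,
# and labels in {0, 1}. The data are made up for the demonstration.
data = [((1.0, 2.0, 1.0), 1), ((2.0, 3.0, 1.0), 1),
        ((-1.0, -1.5, 1.0), 0), ((-2.0, -1.0, 1.0), 0)]

w = [0.0, 0.0, 0.0]
for _ in range(100):                      # epochs
    errors = 0
    for x, target in data:
        out = heaviside(sum(wi * xi for wi, xi in zip(w, x)))
        if out != target:                 # update only on mistakes
            errors += 1
            w = [wi + (target - out) * xi for wi, xi in zip(w, x)]
    if errors == 0:                       # converged: all points correct
        break

assert errors == 0                        # separable data => convergence
```

On separable data the perceptron convergence theorem guarantees this loop terminates, whereas a least-squares fit would adjust the weights on every example regardless of classification.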
Cheers, Jürgen >>>>>>> >>>>>>>> On 31 Dec 2021, at 00:28, Stephen José Hanson >>>>>>>> >>>>>>>> wrote: >>>>>>>> >>>>>>>> Despite the comprehensive feel of this, it still appears to me to >>>>>>>> be too focused on back-propagation per se.. (except for that >>>>>>>> pesky Gauss/Legendre ref--which still baffles me, at least how >>>>>>>> this is related to a "neural network"), and at the same time it >>>>>>>> appears to be missing other more general, epoch-conceptually >>>>>>>> relevant cases, say: >>>>>>>> >>>>>>>> Oliver Selfridge and his Pandemonium model.. which was a >>>>>>>> hierarchical feature analysis system.. which certainly was in the >>>>>>>> air during the neural network learning heyday... in fact, Minsky >>>>>>>> cites Selfridge as one of his mentors. >>>>>>>> >>>>>>>> Arthur Samuel: checkers-playing system.. which learned an >>>>>>>> evaluation function from a hierarchical search. >>>>>>>> >>>>>>>> Rosenblatt's advisor was Egon Brunswik.. who was a gestalt >>>>>>>> perceptual psychologist who introduced the concept that the world >>>>>>>> was stochastic and the organism had to adapt to this variance >>>>>>>> somehow.. he called it "probabilistic functionalism", which >>>>>>>> brought attention to learning, perception and decision theory, >>>>>>>> certainly all piece parts of what we call neural networks. >>>>>>>> >>>>>>>> There are many other such examples that influenced or provided >>>>>>>> context for the yeasty mix that was the 1940s and 1950s, where neural >>>>>>>> networks first appeared, partly due to Pitts and McCulloch, which >>>>>>>> entangled the human brain with computation and early computers >>>>>>>> themselves. >>>>>>>> >>>>>>>> I just don't see this as didactic, in the sense of a conceptual >>>>>>>> view of the multidimensional history of the field, as >>>>>>>> opposed to a 1-dimensional exegesis of mathematical threads >>>>>>>> through various statistical algorithms.
>>>>>>>> >>>>>>>> Steve >>>>>>>> >>>>>>>> On 12/30/21 1:03 PM, Schmidhuber Juergen wrote: >>>>>>>> >>>>>>>>> Dear connectionists, >>>>>>>>> >>>>>>>>> in the wake of massive open online peer review, public comments >>>>>>>>> on the connectionists mailing list [CONN21] and many additional >>>>>>>>> private comments (some by well-known deep learning pioneers) >>>>>>>>> helped to update and improve upon version 1 of the report. The >>>>>>>>> essential statements of the text remain unchanged as their >>>>>>>>> accuracy remains unchallenged. I'd like to thank everyone from >>>>>>>>> the bottom of my heart for their feedback up until this point >>>>>>>>> and hope everyone will be satisfied with the changes. Here is >>>>>>>>> the revised version 2 with over 300 references: >>>>>>>>> >>>>>>>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>>>>>>> >>>>>>>>> In particular, Sec. II has become a brief history of deep >>>>>>>>> learning up to the 1970s: >>>>>>>>> >>>>>>>>> Some of the most powerful NN architectures (i.e., recurrent NNs) >>>>>>>>> were discussed in 1943 by McCulloch and Pitts [MC43] and >>>>>>>>> formally analyzed in 1956 by Kleene [K56] - the closely related >>>>>>>>> prior work in physics by Lenz, Ising, Kramers, and Wannier dates >>>>>>>>> back to the 1920s [L20][I25][K41][W45]. In 1948, Turing wrote up >>>>>>>>> ideas related to artificial evolution [TUR1] and learning NNs. >>>>>>>>> He failed to formally publish his ideas though, which explains >>>>>>>>> the obscurity of his thoughts here. Minsky's simple neural SNARC >>>>>>>>> computer dates back to 1951.
Rosenblatt's perceptron with a >>>>>>>>> single adaptive layer learned in 1958 [R58] (Joseph [R61] >>>>>>>>> mentions an earlier perceptron-like device by Farley & Clark); >>>>>>>>> Widrow & Hoff's similar Adaline learned in 1962 [WID62]. Such >>>>>>>>> single-layer "shallow learning" actually started around 1800 >>>>>>>>> when Gauss & Legendre introduced linear regression and the >>>>>>>>> method of least squares [DL1-2] - a famous early example of >>>>>>>>> pattern recognition and generalization from training data >>>>>>> through a parameterized predictor is Gauss' rediscovery of the >>>>>>> asteroid Ceres based on previous astronomical observations. Deeper >>>>>>> multilayer perceptrons (MLPs) were discussed by Steinbuch >>>>>>> [ST61-95] (1961), Joseph [R61] (1961), and Rosenblatt [R62] >>>>>>> (1962), who wrote about "back-propagating errors" in an MLP with a >>>>>>> hidden layer [R62], but did not yet have a general deep learning >>>>>>> algorithm for deep MLPs (what's now called backpropagation is >>>>>>> quite different and was first published by Linnainmaa in 1970 >>>>>>> [BP1-BP5][BPA-C]). Successful learning in deep architectures >>>>>>> started in 1965 when Ivakhnenko & Lapa published the first >>>>>>> general, working learning algorithms for deep MLPs with >>>>>>> arbitrarily many hidden layers (already containing the now popular >>>>>>> multiplicative gates) [DEEP1-2][DL1-2]. A paper of 1971 [DEEP2] >>>>>>> already described a deep learning net with 8 layers, trained by >>>>>>> their highly cited method which was still popular in the new >>>>>>> millennium [DL2], especially in Eastern Europe, where much of >>>>>>> Machine Learning was born [MIR](Sec. 1)[R8]. LBH
>>>>>>> failed to >>>>>>> cite this, just like they failed to cite Amari [GD1], who in 1967 >>>>>>> proposed stochastic gradient descent [STO51-52] (SGD) for MLPs and >>>>>>> whose implementation [GD2,GD2a] (with Saito) learned internal >>>>>>> representations at a time when compute was billions of times more >>>>>>> expensive than today (see also Tsypkin's work [GDa-b]). (In 1972, >>>>>>> Amari also published what was later sometimes called the Hopfield >>>>>>> network or Amari-Hopfield Network [AMH1-3].) Fukushima's now >>>>>>> widely used deep convolutional NN architecture was first >>>>>>> introduced in the 1970s [CNN1]. >>>>>>> >>>>>>>>> Jürgen >>>>>>>>> >>>>>>>>> >>>>>>>>> ****************************** >>>>>>>>> >>>>>>>>> On 27 Oct 2021, at 10:52, Schmidhuber Juergen >>>>>>>>> >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>> Hi, fellow artificial neural network enthusiasts! >>>>>>>>> >>>>>>>>> The connectionists mailing list is perhaps the oldest mailing >>>>>>>>> list on ANNs, and many neural net pioneers are still subscribed >>>>>>>>> to it. I am hoping that some of them - as well as their >>>>>>>>> contemporaries - might be able to provide additional valuable >>>>>>>>> insights into the history of the field. >>>>>>>>> >>>>>>>>> Following the great success of massive open online peer review >>>>>>>>> (MOOR) for my 2015 survey of deep learning (now the most cited >>>>>>>>> article ever published in the journal Neural Networks), I've >>>>>>>>> decided to put forward another piece for MOOR. I want to thank >>>>>>>>> the >>>>>>>>> many experts who have already provided me with comments on it.
>>>>>>>>> Please send additional relevant references and suggestions for >>>>>>>>> improvements for the following draft directly to me at >>>>>>>>> juergen at idsia.ch : >>>>>>>>> >>>>>>>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>>>>>>> >>>>>>>>> The above is a point-for-point critique of factual errors in >>>>>>>>> ACM's justification of the ACM A. M. Turing Award for deep >>>>>>>>> learning and a critique of the Turing Lecture published by ACM >>>>>>>>> in July 2021. This work can also be seen as a short history of >>>>>>>>> deep learning, at least as far as ACM's errors and the Turing >>>>>>>>> Lecture are concerned. >>>>>>>>> >>>>>>>>> I know that some view this as a controversial topic. However, it >>>>>>>>> is the very nature of science to resolve controversies through >>>>>>>>> facts. Credit assignment is as core to scientific history as it >>>>>>>>> is to machine learning. My aim is to ensure that the true >>>>>>>>> history of our field is preserved for posterity. >>>>>>>>> >>>>>>>>> Thank you all in advance for your help! >>>>>>>>> >>>>>>>>> Jürgen Schmidhuber >>>>>>>> -- >>>>>> -- From g.brostow at cs.ucl.ac.uk Tue Jan 4 13:40:52 2022 From: g.brostow at cs.ucl.ac.uk (Gabriel J.
Brostow) Date: Tue, 4 Jan 2022 18:40:52 +0000 Subject: Connectionists: ~10+ PhD studentships in computer vision/learning at University College London (UCL) Message-ID: Happy New Year! I'm looking for 1 or 2 new PhD students next year, but my Vision / Learning colleagues at UCL are amazing too, and the application process is centralized. Overview: There are two paths into doing a PhD in vision-related areas at UCL. They have separate deadlines. ( https://www.ucl.ac.uk/computer-science/study/postgraduate-research) Applying for PhD programs means both getting accepted and getting funding, which are sometimes separate things. Below, they are lumped together. Note: Irish applicants can study in the UK as "home" students. Everyone else now needs an "overseas" studentship - which exist, but in smaller numbers. 1) UCL's Computer Science Department has two PhD admissions periods. One ends on 14th January 2022. Some edits can often be made to a submitted application within a week after the submission deadline. The other deadline is 16th April 2022. This UCL Computer Science PhD admission process is separate from the Foundational AI admissions process below. (Two ways in, to get the same CS PhD). While the CS Department process also accepts non-UK applicants, it has even fewer slots for overseas applicants than the Foundational AI one. I don't like it either. 2) Foundational AI: We (Vision, ML, Robotics, NLP, Graphics, and more within the CS Department) are recruiting for a cohort of star PhD students to work on Foundational AI, with exact numbers to be confirmed, but with funding from multiple sources (e.g. the UK government, DeepMind, Adobe, and Niantic, to name a few). 
The idea is that this cohort works on AI innovations for a "normal" PhD, embedded within a research group in the department, but also that members of the cohort will learn, publish, and work together, to get the benefits of being in a cross-disciplinary group, selected especially to be diverse in terms of people, specialties, and aims. In later years of your PhD, you will be encouraged to attend talks and workshops that prepare you particularly for either academia or entrepreneurship. To maximize your chances as an overseas student, apply by 3rd January 2022 (ok, that's today, so either your materials are ready and you apply within 24 hours, or see option (1) above). For Home students (no longer includes EU - grr), it's 20th February 2022. The final deadline for rolling places to be considered is 1st June 2022. To apply through the Foundational AI CDT, visit https://www.ucl.ac.uk/ai-centre/study/cdt-foundational-ai/applying-foundational-artificial-intelligence-mphilphd. Be sure to follow the advice there, though you may have to list PhD supervisor names based on whom you're most aligned with, without having met them first. I don't know why they give the advice to contact PhD supervisors in advance - that's hard to set up in a normal year, and even harder with covid. The point is to get your application in front of the supervisor(s) who align with your research passion. We particularly welcome female and non-binary applicants and those from an ethnic minority, as they are under-represented within UCL. My own research preferences lean heavily toward human-in-the-loop computer vision and ML, where a system learns from small numbers of examples to get better with use, and helps a person, e.g. to do their job, to learn faster, or to overcome a disability. The core research must be publishable in CVPR- or CHI- like conferences. Some students from my group have gone on to startups or faculty positions - but that's later on - after you do some great research in your PhD! 
Best of luck with your applications! -Gabe From achler at gmail.com Tue Jan 4 13:48:59 2022 From: achler at gmail.com (Tsvi Achler) Date: Tue, 4 Jan 2022 20:48:59 +0200 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: <14ffb6b0bc0902bb19bc1fc27b42eef8@mcmaster.ca> References: <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> <14ffb6b0bc0902bb19bc1fc27b42eef8@mcmaster.ca> Message-ID: Hi Sue, I am sorry but I am having trouble understanding the logic behind the suggestion. The suggestion is that if an organization is corrupt, then one should volunteer their time to that organization, because that will somehow change the organization. I don't see it. With that logic one should volunteer in car dealerships to be a used car salesman. Unfortunately the status quo is a bit of a cesspool: there is a huge repeatability crisis; there are politics in peer review, awards, citations and governance; and it is important that complaining about the status quo does not get labeled as a "cesspool of debate". In fact one could argue there is a "cesspool of normalcy". My experience with NIPS is that there is only an interest in feedforward models and a complete dismissal of the non-plausible rehearsal that is needed if such a model is to serve as a model of the brain. Moreover there is also a complete lack of interest in a model that captures the most brain phenomena with the least amount of parameters. Instead there is an interest in the status quo: in making increments and in specific metrics that flatter existing feedforward models.
This is quite unfortunate because there are also many computer science conferences that essentially do the same thing. PS: Feedforward models in this context are any models that primarily take the input, multiply by weights, and give the answer. Sincerely Tsvi On Tue, Jan 4, 2022 at 9:23 AM Sue Becker wrote: > Pierre, I'm responding to your comment here: > > > Terry: ... you have made sure, year after year, that you and your > > BHL/CIFAR > > friends were able to control and subtly manipulate NIPS/NeurIPS > > (misleading the field in wrong directions, preventing new ideas and > > outsiders from flourishing, and distorting credit attribution). > > > > Can you please explain to this mailing list how this serves as being "a > > good role model" (to use your own words) for the next generation? > > As loath as I am to wade into what has become a cesspool of a debate, > you have gone way outside the bounds of accuracy, not to mention > civility and decency, in directing your mudslinging at Terry Sejnowski. > If anything, Terry deserves recognition and thanks for his many years > of service to this community. > > If you think that NeurIPS is run by a bunch of insiders, try stepping up > and volunteering your service to this conference, be a longtime > committed reviewer, then become an Area Chair, do an outstanding job and > be selected as the next program chair and then general chair. That is > one path to influencing the future of the conference. Much more > importantly, the hundreds of dedicated reviewers are the ones who > actually determine the content of the meeting, by identifying the very > best papers out of the thousands of submissions received each year. > There is no top-down control or manipulation over that process. > > Cheers, > Sue > > --- > Sue Becker, Professor > Neurotechnology and Neuroplasticity Lab, PI > Dept.
of Psychology Neuroscience & Behaviour, McMaster University > www.science.mcmaster.ca/pnb/department/becker > > > On 2022-01-03 09:55, Baldi,Pierre wrote: > > Terry: > > > > We can all agree on the importance of mentoring the next generation. > > However, given that: > > > > 1) you have been in full and sole control of the NIPS/NeurIPS > > foundation > > since the 1980s; > > > > 2) you have been in full and sole control of Neural Computation since > > the 1980s; > > > > 3) you have extensively published in Neural Computation (and now also > > PNAS); > > > > 4) you have made sure, year after year, that you and your BHL/CIFAR > > friends were able to control and subtly manipulate NIPS/NeurIPS > > (misleading the field in wrong directions, preventing new ideas and > > outsiders from flourishing, and distorting credit attribution). > > > > Can you please explain to this mailing list how this serves as being "a > > good role model" (to use your own words) for the next generation? > > > > Or did you mean it in a more cynical way--indeed this is one of the > > possible ways for a scientist to be "successful"? > > > > --Pierre > > > > > > > > On 1/2/2022 12:29 PM, Terry Sejnowski wrote: > >> We would be remiss not to acknowledge that backprop would not be > >> possible without the calculus, > >> so Isaac Newton should also have been given credit, at least as much > >> credit as Gauss. > >> > >> All these threads will be sorted out by historians one hundred years > >> from now. > >> Our precious time is better spent moving the field forward. There is > >> much more to discover. > >> > >> A new generation with better computational and mathematical tools than > >> we had back > >> in the last century has joined us, so let us be good role models and > >> mentors to them.
> >> > >> Terry > >> > >> ----- > >> > >> On 1/2/2022 5:43 AM, Schmidhuber Juergen wrote: > >>> Asim wrote: "In fairness to Geoffrey Hinton, he did acknowledge the > >>> work of Amari in a debate about connectionism at the ICNN'97 .... He > >>> literally said 'Amari invented back propagation'..." when he sat next > >>> to Amari and Werbos. Later, however, he failed to cite Amari's > >>> stochastic gradient descent (SGD) for multilayer NNs (1967-68) > >>> [GD1-2a] in his 2015 survey [DL3], his 2021 ACM lecture [DL3a], and > >>> other surveys. Furthermore, SGD [STO51-52] (Robbins, Monro, Kiefer, > >>> Wolfowitz, 1951-52) is not even backprop. Backprop is just a > >>> particularly efficient way of computing gradients in differentiable > >>> networks, known as the reverse mode of automatic differentiation, due > >>> to Linnainmaa (1970) [BP1] (see also Kelley's precursor of 1960 > >>> [BPa]). Hinton did not cite these papers either, and in 2019 > >>> embarrassingly did not hesitate to accept an award for having > >>> "created ... the backpropagation algorithm" [HIN]. All references and > >>> more on this can be found in the report, especially in Sec. XII. > >>> > >>> The deontology of science requires: if one "re-invents" something > >>> that was already known, and only becomes aware of it later, one must > >>> at least clarify it later [DLC], and correctly give credit in all > >>> follow-up papers and presentations. Also, ACM's Code of Ethics and > >>> Professional Conduct [ACM18] states: "Computing professionals should > >>> therefore credit the creators of ideas, inventions, work, and > >>> artifacts, and respect copyrights, patents, trade secrets, license > >>> agreements, and other methods of protecting authors' works." LBH > >>> didn't. > >>> > >>> Steve still doesn't believe that linear regression of 200 years ago > >>> is equivalent to linear NNs. In a mature field such as math we would > >>> not have such a discussion. The math is clear.
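The point that backprop is the reverse mode of automatic differentiation can be made concrete by unrolling the forward and reverse passes by hand for a small composite function. The sketch below is illustrative only and is not part of the original messages:

```python
# Illustrative sketch: reverse-mode automatic differentiation (the
# mechanism behind what is now called backprop), unrolled by hand for
# f(x) = sin(x**2) * x. The reverse pass reuses the forward intermediates.
import math

def f_and_grad(x):
    # forward pass: record intermediate values
    a = x * x           # a = x^2
    b = math.sin(a)     # b = sin(a)
    y = b * x           # y = b * x
    # reverse pass: propagate dy/d(.) from the output back to the input
    db = x              # dy/db, from y = b * x
    dx = b              # direct path dy/dx through the product
    da = db * math.cos(a)   # dy/da = dy/db * db/da
    dx += da * 2 * x        # add the path through a = x^2
    return y, dx

x = 0.7
y, g = f_and_grad(x)

# Sanity check against a central finite difference:
h = 1e-6
numeric = (math.sin((x + h)**2) * (x + h)
           - math.sin((x - h)**2) * (x - h)) / (2 * h)
assert abs(g - numeric) < 1e-5
```

One forward pass plus one reverse pass yields the exact gradient; the cost does not grow with the number of inputs, which is why the reverse mode is the efficient choice for training networks with many weights.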
And even today, many > >>> students are taught NNs like this: let's start with a linear > >>> single-layer NN (activation = sum of weighted inputs). Now minimize > >>> mean squared error on the training set. That's good old linear > >>> regression (method of least squares). Now let's introduce multiple > >>> layers and nonlinear but differentiable activation functions, and > >>> derive backprop for deeper nets in 1960-70 style (still used today, > >>> half a century later). > >>> > >>> Sure, an important new variation of the 1950s (emphasized by Steve) > >>> was to transform linear NNs into binary classifiers with threshold > >>> functions. Nevertheless, the first adaptive NNs (still widely used > >>> today) are 1.5 centuries older, except for the name. > >>> > >>> Happy New Year! > >>> > >>> Jürgen > >>> > >>> > >>>> On 2 Jan 2022, at 03:43, Asim Roy wrote: > >>>> > >>>> And, by the way, Paul Werbos was also there at the same debate. And > >>>> so was Teuvo Kohonen. > >>>> > >>>> Asim > >>>> > >>>> -----Original Message----- > >>>> From: Asim Roy > >>>> Sent: Saturday, January 1, 2022 3:19 PM > >>>> To: Schmidhuber Juergen ; > >>>> connectionists at cs.cmu.edu > >>>> Subject: RE: Connectionists: Scientific Integrity, the 2021 Turing > >>>> Lecture, etc. > >>>> > >>>> In fairness to Geoffrey Hinton, he did acknowledge the work of Amari > >>>> in a debate about connectionism at the ICNN'97 (International > >>>> Conference on Neural Networks) in Houston. He literally said "Amari > >>>> invented back propagation" and Amari was sitting next to him. I > >>>> still have a recording of that debate. > >>>> > >>>> Asim Roy > >>>> Professor, Information Systems > >>>> Arizona State University > >>>> https://isearch.asu.edu/profile/9973 > >>>> https://lifeboat.com/ex/bios.asim.roy > >>> > >>> On 2 Jan 2022, at 02:31, Stephen José Hanson > >>> wrote: > >>> > >>> Juergen: Happy New Year! > >>> > >>> "are not quite the same"..
> >>> > >>> I understand that its expedient sometimes to use linear regression to > >>> approximate the Perceptron.(i've had other connectionist friends tell > >>> me the same thing) which has its own incremental update rule..that is > >>> doing <0,1> classification. So I guess if you don't like the > >>> analogy to logistic regression.. maybe Fisher's LDA? This whole > >>> thing still doesn't scan for me. > >>> > >>> So, again the point here is context. Do you really believe that > >>> Frank Rosenblatt didn't reference Gauss/Legendre/Laplace because it > >>> slipped his mind?? He certainly understood modern statistics (of > >>> the 1940s and 1950s) > >>> > >>> Certainly you'd agree that FR could have referenced linear regression > >>> as a precursor, or "pretty similar" to what he was working on, it > >>> seems disingenuous to imply he was plagiarizing Gauss et al.--right? > >>> Why would he? > >>> > >>> Finally then, in any historical reconstruction, I can think of, it > >>> just doesn't make sense. Sorry. > >>> > >>> Steve > >>> > >>> > >>>> -----Original Message----- > >>>> From: Connectionists > >>>> On Behalf Of Schmidhuber Juergen > >>>> Sent: Friday, December 31, 2021 11:00 AM > >>>> To: connectionists at cs.cmu.edu > >>>> Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing > >>>> Lecture, etc. > >>>> > >>>> Sure, Steve, perceptron/Adaline/other similar methods of the > >>>> 1950s/60s are not quite the same, but the obvious origin and > >>>> ancestor of all those single-layer ?shallow learning? > >>>> architectures/methods is indeed linear regression; today?s simplest > >>>> NNs minimizing mean squared error are exactly what they had 2 > >>>> centuries ago. And the first working deep learning methods of the > >>>> 1960s did NOT really require ?modern? backprop (published in 1970 by > >>>> Linnainmaa [BP1-5]). 
For example, Ivakhnenko & Lapa (1965) [DEEP1-2] > >>>> incrementally trained and pruned their deep networks layer by layer > >>>> to learn internal representations, using regression and a separate > >>>> validation set. Amari (1967-68)[GD1] used stochastic gradient > >>>> descent [STO51-52] to learn internal representations WITHOUT > >>>> ?modern" backprop in his multilayer perceptrons. J?rgen > >>>> > >>>> > >>>>> On 31 Dec 2021, at 18:24, Stephen Jos? Hanson > >>>>> wrote: > >>>>> > >>>>> Well the perceptron is closer to logistic regression... but the > >>>>> heaviside function of course is <0,1> so technically not related > >>>>> to linear regression which is using covariance to estimate betas... > >>>>> > >>>>> does that matter? Yes, if you want to be hyper correct--as this > >>>>> appears to be-- Berkson (1944) coined the logit.. as log odds.. for > >>>>> probabilistic classification.. this was formally developed by Cox > >>>>> in the early 60s, so unlikely even in this case to be a precursor > >>>>> to perceptron. > >>>>> > >>>>> My point was that DL requires both Learning algorithm (BP) and an > >>>>> architecture.. which seems to me much more responsible for the the > >>>>> success of Dl. > >>>>> > >>>>> S > >>>>> > >>>>> > >>>>> > >>>>> On 12/31/21 4:03 AM, Schmidhuber Juergen wrote: > >>>>>> Steve, this is not about machine learning in general, just about > >>>>>> deep > >>>>>> learning vs shallow learning. However, I added the Pandemonium - > >>>>>> thanks for that! You ask: how is a linear regressor of 1800 > >>>>>> (Gauss/Legendre) related to a linear neural network? It's formally > >>>>>> equivalent, of course! (The only difference is that the weights > >>>>>> are > >>>>>> often called beta_i rather than w_i.) Shallow learning: one > >>>>>> adaptive > >>>>>> layer. Deep learning: many adaptive layers. Cheers, J?rgen > >>>>>> > >>>>>> > >>>>>> > >>>>>> > >>>>>>> On 31 Dec 2021, at 00:28, Stephen Jos? 
Hanson > >>>>>>> > >>>>>>> wrote: > >>>>>>> > >>>>>>> Despite the comprehensive feel of this it still appears to me to > >>>>>>> be too focused on Back-propagation per se.. (except for that > >>>>>>> pesky Gauss/Legendre ref--which still baffles me at least how > >>>>>>> this is related to a "neural network"), and at the same time it > >>>>>>> appears to be missing other more general epoch-conceptually > >>>>>>> relevant cases, say: > >>>>>>> > >>>>>>> Oliver Selfridge and his Pandemonium model.. which was a > >>>>>>> hierarchical feature analysis system.. which certainly was in the > >>>>>>> air during the Neural network learning heyday...in fact, Minsky > >>>>>>> cites Selfridge as one of his mentors. > >>>>>>> > >>>>>>> Arthur Samuels: Checker playing system.. which learned a > >>>>>>> evaluation function from a hierarchical search. > >>>>>>> > >>>>>>> Rosenblatt's advisor was Egon Brunswick.. who was a gestalt > >>>>>>> perceptual psychologist who introduced the concept that the world > >>>>>>> was stochastic and the the organism had to adapt to this variance > >>>>>>> somehow.. he called it "probabilistic functionalism" which > >>>>>>> brought attention to learning, perception and decision theory, > >>>>>>> certainly all piece parts of what we call neural networks. > >>>>>>> > >>>>>>> There are many other such examples that influenced or provided > >>>>>>> context for the yeasty mix that was 1940s and 1950s where Neural > >>>>>>> Networks first appeared partly due to PItts and McCulloch which > >>>>>>> entangled the human brain with computation and early computers > >>>>>>> themselves. > >>>>>>> > >>>>>>> I just don't see this as didactic, in the sense of a conceptual > >>>>>>> view of the multidimensional history of the field, as > >>>>>>> opposed to a 1-dimensional exegesis of mathematical threads > >>>>>>> through various statistical algorithms. 
> >>>>>>> > >>>>>>> Steve > >>>>>>> > >>>>>>> On 12/30/21 1:03 PM, Schmidhuber Juergen wrote: > >>>>>>> > >>>>>>>> Dear connectionists, > >>>>>>>> > >>>>>>>> in the wake of massive open online peer review, public comments > >>>>>>>> on the connectionists mailing list [CONN21] and many additional > >>>>>>>> private comments (some by well-known deep learning pioneers) > >>>>>>>> helped to update and improve upon version 1 of the report. The > >>>>>>>> essential statements of the text remain unchanged as their > >>>>>>>> accuracy remains unchallenged. I'd like to thank everyone from > >>>>>>>> the bottom of my heart for their feedback up until this point > >>>>>>>> and hope everyone will be satisfied with the changes. Here is > >>>>>>>> the revised version 2 with over 300 references: > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> > https://urldefense.com/v3/__https://people.idsia.ch/*juergen/scient > >>>>>>>> > ific-integrity-turing-award-deep-learning.html__;fg!!IKRxdwAv5BmarQ > >>>>>>>> !NsJ4lf4yO2BDIBzlUVfGKvTtf_QXY8dpZaHzCSzHCvEhXGJUTyRTzZybDQg-DZY$ > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> In particular, Sec. II has become a brief history of deep > >>>>>>>> learning up to the 1970s: > >>>>>>>> > >>>>>>>> Some of the most powerful NN architectures (i.e., recurrent NNs) > >>>>>>>> were discussed in 1943 by McCulloch and Pitts [MC43] and > >>>>>>>> formally analyzed in 1956 by Kleene [K56] - the closely related > >>>>>>>> prior work in physics by Lenz, Ising, Kramers, and Wannier dates > >>>>>>>> back to the 1920s [L20][I25][K41][W45]. In 1948, Turing wrote up > >>>>>>>> ideas related to artificial evolution [TUR1] and learning NNs. > >>>>>>>> He failed to formally publish his ideas though, which explains > >>>>>>>> the obscurity of his thoughts here. Minsky's simple neural SNARC > >>>>>>>> computer dates back to 1951. 
Rosenblatt's perceptron with a > >>>>>>>> single adaptive layer learned in 1958 [R58] (Joseph [R61] > >>>>>>>> mentions an earlier perceptron-like device by Farley & Clark); > >>>>>>>> Widrow & Hoff's similar Adaline learned in 1962 [WID62]. Such > >>>>>>>> single-layer "shallow learning" actually started around 1800 > >>>>>>>> when Gauss & Legendre introduced linear regression and the > >>>>>>>> method of least squares [DL1-2] - a famous early example of > >>>>>>>> pattern recognition and generalization from training! > >> ! > >>> d! > >>>> at! > >>>>>> a through a parameterized predictor is Gauss' rediscovery of the > >>>>>> asteroid Ceres based on previous astronomical observations. Deeper > >>>>>> multilayer perceptrons (MLPs) were discussed by Steinbuch > >>>>>> [ST61-95] (1961), Joseph [R61] (1961), and Rosenblatt [R62] > >>>>>> (1962), who wrote about "back-propagating errors" in an MLP with a > >>>>>> hidden layer [R62], but did not yet have a general deep learning > >>>>>> algorithm for deep MLPs (what's now called backpropagation is > >>>>>> quite different and was first published by Linnainmaa in 1970 > >>>>>> [BP1-BP5][BPA-C]). Successful learning in deep architectures > >>>>>> started in 1965 when Ivakhnenko & Lapa published the first > >>>>>> general, working learning algorithms for deep MLPs with > >>>>>> arbitrarily many hidden layers (already containing the now popular > >>>>>> multiplicative gates) [DEEP1-2][DL1-2]. A paper of 1971 [DEEP2] > >>>>>> already described a deep learning net with 8 layers, trained by > >>>>>> their highly cited method which was still popular in the new > >>>>>> millennium [DL2], especially in Eastern Europ! > >> e! > >>>> , w! > >>>>>> here much of Machine Learning was born [MIR](Sec. 1)[R8]. LBH ! 
> >>>>>> failed to > >>>>>> cite this, just like they failed to cite Amari [GD1], who in 1967 > >>>>>> proposed stochastic gradient descent [STO51-52] (SGD) for MLPs and > >>>>>> whose implementation [GD2,GD2a] (with Saito) learned internal > >>>>>> representations at a time when compute was billions of times more > >>>>>> expensive than today (see also Tsypkin's work [GDa-b]). (In 1972, > >>>>>> Amari also published what was later sometimes called the Hopfield > >>>>>> network or Amari-Hopfield Network [AMH1-3].) Fukushima's now > >>>>>> widely used deep convolutional NN architecture was first > >>>>>> introduced in the 1970s [CNN1]. > >>>>>> > >>>>>>>> Jürgen > >>>>>>>> > >>>>>>>> ****************************** > >>>>>>>> > >>>>>>>> On 27 Oct 2021, at 10:52, Schmidhuber Juergen > >>>>>>>> wrote: > >>>>>>>> > >>>>>>>> Hi, fellow artificial neural network enthusiasts! > >>>>>>>> > >>>>>>>> The connectionists mailing list is perhaps the oldest mailing > >>>>>>>> list on ANNs, and many neural net pioneers are still subscribed > >>>>>>>> to it. I am hoping that some of them - as well as their > >>>>>>>> contemporaries - might be able to provide additional valuable > >>>>>>>> insights into the history of the field. > >>>>>>>> > >>>>>>>> Following the great success of massive open online peer review > >>>>>>>> (MOOR) for my 2015 survey of deep learning (now the most cited > >>>>>>>> article ever published in the journal Neural Networks), I've > >>>>>>>> decided to put forward another piece for MOOR. I want to thank the > >>>>>>>> many experts who have already provided me with comments on it.
> >>>>>>>> Please send additional relevant references and suggestions for > >>>>>>>> improvements for the following draft directly to me at > >>>>>>>> juergen at idsia.ch : > >>>>>>>> > >>>>>>>> https://urldefense.com/v3/__https://people.idsia.ch/*juergen/scientific-integrity-turing-award-deep-learning.html__;fg!!IKRxdwAv5BmarQ!NsJ4lf4yO2BDIBzlUVfGKvTtf_QXY8dpZaHzCSzHCvEhXGJUTyRTzZybDQg-DZY$ > >>>>>>>> > >>>>>>>> The above is a point-for-point critique of factual errors in > >>>>>>>> ACM's justification of the ACM A. M. Turing Award for deep > >>>>>>>> learning and a critique of the Turing Lecture published by ACM > >>>>>>>> in July 2021. This work can also be seen as a short history of > >>>>>>>> deep learning, at least as far as ACM's errors and the Turing > >>>>>>>> Lecture are concerned. > >>>>>>>> > >>>>>>>> I know that some view this as a controversial topic. However, it > >>>>>>>> is the very nature of science to resolve controversies through > >>>>>>>> facts. Credit assignment is as core to scientific history as it > >>>>>>>> is to machine learning. My aim is to ensure that the true > >>>>>>>> history of our field is preserved for posterity. > >>>>>>>> > >>>>>>>> Thank you all in advance for your help! > >>>>>>>> > >>>>>>>> Jürgen Schmidhuber
From ioannakoroni at csd.auth.gr Wed Jan 5 03:06:36 2022 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Wed, 5 Jan 2022 10:06:36 +0200 Subject: Connectionists: Live e-Lecture by Prof. Bernhard Rinner: "Self-awareness for autonomous systems", 11th January 2022 17:00-18:00 CET. Upcoming AIDA AI excellence lectures References: <15a501d80203$6d58c040$480a40c0$@csd.auth.gr> <001801d80206$1fc9d550$5f5d7ff0$@csd.auth.gr> Message-ID: <16df01d8020b$29aed890$7d0c89b0$@csd.auth.gr> Dear AI scientist/engineer/student/enthusiast, Prof. Bernhard Rinner (Alpen-Adria-Universität Klagenfurt, Austria), an internationally prominent AI researcher, will deliver the e-lecture "Self-awareness for autonomous systems" on Tuesday 11th January 2022, 17:00-18:00 CET (8:00-9:00 am PST, 12:00 am-1:00 am CST); see details at: http://www.i-aida.org/event_cat/ai-lectures/ You can join for free using the zoom link: https://authgr.zoom.us/s/99775795702 & Passcode: 148148 The International AI Doctoral Academy (AIDA), a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR and VISION, is very pleased to offer you top-quality scientific lectures on several current hot AI topics. Lectures are typically held once per week, Tuesdays 17:00-18:00 CET (8:00-9:00 am PST, 12:00 am-1:00 am CST). Attendance is free. The lectures are disseminated through multiple channels and email lists (we apologize if you received this through several of them). If you want to stay informed about future lectures, you can register in the AIDA email list and CVML email list. Best regards Profs. M. Chetouani, P. Flach, B. O'Sullivan, I. Pitas, N. Sebe
From lep.mik at gmail.com Wed Jan 5 02:07:45 2022 From: lep.mik at gmail.com (Mikkel Elle Lepperød) Date: Wed, 5 Jan 2022 08:07:45 +0100 Subject: Connectionists: Hiring - CompSci - Call 2 Marie Skłodowska-Curie PhD Fellowships in Natural Sciences - University of Oslo In-Reply-To: References: Message-ID: Do you want to work with the most talented scientists? 17 PhD scholarships in areas of astronomy, physics, chemistry, geoscience, bioscience, and mathematics and statistics are available in the CompSci programme - a programme co-funded by the Marie Skłodowska-Curie Actions. Two positions are within Bio-Inspired AI, combining machine learning with physics-based computational neuroscience and causal inference. See the attached flyer or visit www.mn.uio.no/compsci/english Hope to see your application! -------------- next part -------------- A non-text attachment was scrubbed... Name: 2021.11.01 - Call 2 - CompSci - Flyer - Web.pdf Type: application/pdf Size: 2181624 bytes Desc: not available From alessio.ferone at uniparthenope.it Wed Jan 5 02:55:12 2022 From: alessio.ferone at uniparthenope.it (ALESSIO FERONE) Date: Wed, 5 Jan 2022 07:55:12 +0000 Subject: Connectionists: [CfP] ICIAP2021 - Special Session: Computer Vision for Coastal and Marine Environment Monitoring Message-ID: <57D5CBC7-2A52-45EB-AE18-EDC3B4F3F5C8@uniparthenope.it> ******Apologies for multiple posting****** _________________________________________ ICIAP2021 Special Session Computer Vision for Coastal and Marine Environment Monitoring https://www.iciap2021.org/specialsession/ _________________________________________ The coastal and marine environment represents a vital part of the world, resulting in a complex ecosystem tightly linked to many human activities.
For this reason, monitoring coastal and marine ecosystems is of critical importance for gaining a better understanding of their complexity, with the goal of protecting such a fundamental resource. Coastal and marine environmental monitoring aims to employ leading technologies and methodologies to monitor and evaluate the marine environment both near the coast and underwater. This monitoring can be performed either on site, using sensors for collecting data, or remotely through seafloor cabled observatories, AUVs or ROVs, resulting in a huge amount of data that requires advanced intelligent methodologies to extract useful information and knowledge on environmental conditions. A large part of this data consists of images and videos produced by fixed and PTZ cameras either on the coast, on the marine surface or underwater. For this reason, the analysis of such a volume of imagery data poses a series of unique challenges, which need to be tackled by the computer vision community. The aim of the special session is to host recent research advances in the field of computer vision and image processing techniques applied to the monitoring of the coastal and marine environment and to highlight research issues and still open questions. Full CfP at https://neptunia.uniparthenope.it/cfp/cv-cmem/ Important Dates: Paper Submission Deadline: January 17, 2022 Decision Notification: February 19, 2022 Camera Ready: March 6, 2022 Organizers: Angelo Ciaramella Sajid Javed Alessio Ferone From watanabe at sys.t.u-tokyo.ac.jp Wed Jan 5 06:38:01 2022 From: watanabe at sys.t.u-tokyo.ac.jp (=?UTF-8?B?5rih6YKJIOato+WzsA==?=) Date: Wed, 5 Jan 2022 20:38:01 +0900 Subject: Connectionists: Final Call for Free Online Participation: Mechanism of Brain and Mind Message-ID: Dear Colleague, We are pleased to announce our 21st international workshop, Mechanism of Brain and Mind "Complex cognition and decision-making"
January 12, 2022 webpage: http://brainmind.umin.jp/eng-wt21.html free registration: https://oist.zoom.us/webinar/register/WN_EAwmbeF6SVShSlfsEH6w6w registration deadline: January 7, 2022 Following our tradition of holding the workshop at a cozy ski resort in Hokkaido, Japan, you will have plenty of time to interact with our speakers (10 minutes of questions plus a 30-minute breakout session per talk). I hope to see you there and also at our physical meetings in the following years! January 12 Morning Session 9:10-10:00 (JST) Shogo Ohmae (Baylor College of Medicine) 10:00-10:30 (JST) Breakout Session 10:40-11:30 (JST) Hidenori Tanaka (Physics & Informatics Lab, NTT Research) 11:30-12:00 (JST) Breakout Session Afternoon Session 14:00-14:50 (JST) Mingbo Cai (The University of Tokyo International Research Center for Neurointelligence) 14:50-15:20 (JST) Breakout Session 15:30-16:20 (JST) Tianming Yang (Chinese Academy of Sciences) 16:20-16:50 (JST) Breakout Session Best Regards, Masataka Watanabe Vice Chairman, Workshop on the Mechanism of Brain and Mind Associate Professor, School of Engineering, University of Tokyo From jorgecbalmeida at gmail.com Wed Jan 5 09:36:41 2022 From: jorgecbalmeida at gmail.com (Jorge Almeida) Date: Wed, 5 Jan 2022 14:36:41 +0000 Subject: Connectionists: 1 Tenured Assistant Professor Position in the area of Forensic Psychology (Cognitive Forensics; Forensic Cognitive Neuroscience) at the Faculty of Psychology and Educational Sciences of the University of Coimbra Portugal Message-ID: The Faculty of Psychology and Educational Sciences of the University of Coimbra, Portugal, will be opening a position for Assistant Professor in the area of Forensic Psychology. This could include basic research in forensic cognition and forensic cognitive neuroscience. This is a tenured position with a competitive salary for Portugal.
The university has a 3T MRI scanner, and the faculty has EEG facilities and other labs in the field. The Faculty of Psychology and Educational Sciences has several ongoing funded projects, including an ERC Starting Grant in cognitive neuroscience and object recognition. The University of Coimbra is a 700-year-old university and has been selected as a UNESCO World Heritage site. Coimbra is one of the liveliest university cities in the world, and it is a beautiful city with easy access to the beach and mountains. Applicants should have their diplomas registered in Portugal. Depending on where you obtained your PhD, and if the diploma is not in English (or Portuguese), you may need a notarized translation and an Apostille. You can register your diplomas online at the Ministry of Education website: https://www.dges.gov.pt/pt/content/recautomatico This process takes some time and is required for all applications in Portugal, so you should start it right away. In terms of the language requirements, if you are not a native speaker of Portuguese or English, you need to prove that you have a C1 level in English or Portuguese (or both, obviously, but one is enough). One option is to provide a statement under oath that you have the required level of English (or Portuguese). Although I am not on the jury, and will not be involved in any decisions, if you have questions about this, I can try to help. Hope to see you in Coimbra!
From jorgecbalmeida at gmail.com Wed Jan 5 09:28:18 2022 From: jorgecbalmeida at gmail.com (Jorge Almeida) Date: Wed, 5 Jan 2022 14:28:18 +0000 Subject: Connectionists: Cognitive Neuroscience Research positions in Coimbra Portugal - open call for 2022 Message-ID: The Proaction Laboratory (proactionlab.fpce.uc.pt) at the University of Coimbra, Portugal, is looking for researchers at different career levels, from new PhD graduates to assistant-, associate- or full-level researchers, to join the lab in a joint competitive application to a Portuguese Science Foundation (FCT) independent researcher call. We particularly encourage applications from women and from groups underrepresented in academia. Applicants should hold a PhD and have an interest in cognitive neuroscience, vision science and preferably (but not limited to) object recognition, conceptual processing, and action and perception. We are particularly interested in motivated and independent researchers addressing these topics with strong expertise in behavioral, motion tracking, neuromodulation, fMRI and/or EEG approaches (in particular multivariate EEG decoding, and EEG/fMRI multimodal approaches). Good programming skills, great communication and mentoring skills, and a great command of English are a plus. The applicant and the lab will work on a competitive project to be submitted. Results from the application are expected in October/November 2022. The application deadline is around March 3. The positions are as independent researchers in the Proaction Lab, are for 6 years, and the salary follows the Portuguese payroll for university professors (net values for junior, assistant, and associate positions are approximately 1500, 1900 and 2100 euros per month, respectively, in a 14-month salary per year; these are competitive salaries for the cost of living in Portugal and especially in Coimbra).
The Proaction Lab is currently very well funded, with a set of ongoing funded projects including an ERC Starting Grant to Jorge Almeida and several FCT projects. We have access to a 3T MRI scanner with a 64-channel coil (with EEG inside the scanner), to tDCS, and to a fully equipped psychophysics lab. We have a 256-channel EEG, motion tracking and eye tracking on site. We also have a science communication office dedicated to the lab. Finally, the University of Coimbra is a 700-year-old university and has been selected as a UNESCO World Heritage site. Coimbra is one of the liveliest university cities in the world, and it is a beautiful city with easy access to the beach and mountains. The deadline for this pre-application is February 15, but you should apply as soon as you can - the sooner the better, so that we can prepare the application. If interested, send an email to jorgecbalmeida at gmail.com with a CV and a motivation/scientific proposal letter. If there is a fit, we will jointly apply to these positions - we have had a high success rate as a lab in past applications (in four previous editions, several of our applicants were offered a position). From rpaudel142 at gmail.com Wed Jan 5 10:30:42 2022 From: rpaudel142 at gmail.com (Ramesh Paudel) Date: Wed, 5 Jan 2022 10:30:42 -0500 Subject: Connectionists: CFP (Deadline Jan 7): SaT-CPS 2022 - ACM Workshop on Secure and Trustworthy Cyber-Physical Systems Message-ID: Dear Colleagues, *** *This is a reminder that the CFP deadline is Jan 07, 2022* *** Please consider submitting, and/or forwarding to the appropriate groups/personnel, the opportunity to submit to the ACM Workshop on Secure and Trustworthy Cyber-Physical Systems (SaT-CPS 2022), which will be held in the Baltimore-Washington DC area (or virtually) on April 26, 2022, in conjunction with the 12th ACM Conference on Data and Application Security and Privacy (CODASPY 2022).
*** Paper submission deadline: *December 30, 2021 (Jan 07, 2022)* *** *** Website: https://sites.google.com/view/sat-cps-2022/ *** SaT-CPS aims to represent a forum for researchers and practitioners from industry and academia interested in various areas of CPS security. SaT-CPS seeks novel submissions describing practical and theoretical solutions for cyber security challenges in CPS. Submissions can be from different application domains in CPS. Example topics of interest are given below, but are not limited to: Secure CPS architectures - Authentication mechanisms for CPS - Access control for CPS - Key management in CPS - Attack detection for CPS - Threat modeling for CPS - Forensics for CPS - Intrusion and anomaly detection for CPS - Trusted-computing in CPS - Energy-efficient and secure CPS - Availability, recovery, and auditing for CPS - Distributed secure solutions for CPS - Metrics and risk assessment approaches - Privacy and trust - Blockchain for CPS security - Data security and privacy for CPS - Digital twins for CPS - Wireless sensor network security - CPS/IoT malware analysis - CPS/IoT firmware analysis - Economics of security and privacy - Securing CPS in medical devices/systems - Securing CPS in civil engineering systems/devices - Physical layer security for CPS - Security on heterogeneous CPS - Securing CPS in automotive systems - Securing CPS in aerospace systems - Usability security and privacy of CPS - Secure protocol design in CPS - Vulnerability analysis of CPS - Anonymization in CPS - Embedded systems security - Formal security methods in CPS - Industrial control system security - Securing Internet-of-Things - Securing smart agriculture and related domains The workshop is planned for one day, April 26, 2022, on the last day of the conference. *Instructions for Paper Authors* All submissions must describe original research, not published nor currently under review for another workshop, conference, or journal. 
All papers must be submitted electronically via the Easychair system: https://easychair.org/conferences/?conf=acmsatcps2022 Full-length papers Papers must be at most 10 pages in length in double-column ACM format (as specified at https://www.acm.org/publications/proceedings-template). Submission implies the willingness of at least one author to attend the workshop and present the paper. Accepted papers will be included in the ACM Digital Library. The presenter must register for the workshop before the deadline for author registration. *Position papers and Work-in-progress papers* We also invite short position papers and work-in-progress papers. Such papers can be of length up to 6 pages in double-column ACM format (as specified at https://www.acm.org/publications/proceedings-template), and must clearly state "Position Paper" or "Work in progress," as the case may be in the title section of the paper. These papers will be reviewed and accepted papers will be published in the conference proceedings. *Important Dates* Due date for full workshop submissions: *December 30, 2021 (Jan 07, 2022)* Notification of acceptance to authors: February 10, 2022 Camera-ready of accepted papers: February 20, 2022 Workshop day: April 26, 2022 *- - - - - - - - - - -* *Ramesh Paudel, Ph.D.* Publicity and Web Co-Chair Research Scientist George Washington University Washington, DC. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Samuel.Neymotin at nki.rfmh.org Wed Jan 5 12:07:15 2022 From: Samuel.Neymotin at nki.rfmh.org (Neymotin, Samuel (NKI)) Date: Wed, 5 Jan 2022 17:07:15 +0000 Subject: Connectionists: Summer Undergraduate Computational Neuroscience Apprenticeship Message-ID: Several Army Research Office (ARO) undergraduate research apprenticeship program (URAP) positions are available for the summer of 2022 at the Nathan Kline Institute for Psychiatric Research (https://www.nki.rfmh.org/). 
Research will be conducted to help achieve the goals of an ARO grant that aims to train neocortical circuit models to play video games using biologically plausible learning rules, and to compare model performance against commonly used deep reinforcement learning algorithms. Students will have the opportunity to learn cutting-edge methods in computational neuroscience, machine learning, and Python software development, and to publish their results. Several manuscripts under review that include previous URAP students are available on bioRxiv: https://www.biorxiv.org/content/10.1101/2021.07.29.454361v1 https://www.biorxiv.org/content/10.1101/2021.11.20.469405v1 Candidates will be considered based on their experience in computational neuroscience, machine learning, and Python programming. Candidates in the NYC metropolitan area will be preferred. Apply at https://www.usaeop.com/program/undergraduate-apprenticeships/ Make sure to use code U803 (Nathan Kline Institute) and apply by the deadline (February 28, 2022). Letters of recommendation are optional. For questions, email Samuel.Neymotin at nki.rfmh.org ________________________________ IMPORTANT NOTICE: This e-mail is meant only for the use of the intended recipient. It may contain confidential information which is legally privileged or otherwise protected by law. If you received this e-mail in error or from someone who was not authorized to send it to you, you are strictly prohibited from reviewing, using, disseminating, distributing or copying the e-mail. PLEASE NOTIFY US IMMEDIATELY OF THE ERROR BY RETURN E-MAIL AND DELETE THIS MESSAGE FROM YOUR SYSTEM. Thank you for your cooperation.
From claudio.gallicchio at unipi.it Wed Jan 5 12:46:11 2022 From: claudio.gallicchio at unipi.it (Claudio Gallicchio) Date: Wed, 5 Jan 2022 17:46:11 +0000 Subject: Connectionists: [CFP] Special Session on Reservoir Computing at WCCI IJCNN 2022 In-Reply-To: References: Message-ID: <22676af7ca1a45c9b0d3a5c20ab0b0e3@unipi.it> Special Session on Reservoir Computing: algorithms, implementations, and applications IEEE World Congress on Computational Intelligence (WCCI 2022) International Joint Conference on Neural Networks (IJCNN 2022) 18 - 23rd July 2022, Padua (Italy) Papers submission deadline: 31 January 2022 More info at: https://sites.google.com/view/reservoir-computing-tf/activities/wcci-2022-special-session Submission link: https://cmt3.research.microsoft.com/IEEEWCCI2022/ Organizers Claudio Gallicchio (University of Pisa, Italy), Fatemeh Hadaeghi (Universitätsklinikum Hamburg-Eppendorf (UKE), Germany), Miguel Cornelles Soriano (University of the Balearic Islands, Spain) Description The Reservoir Computing (RC) approach to designing and training Recurrent Neural Networks (RNNs) has attracted a lot of attention in various fields of research due to its straightforward and efficient training process. In addition, RC neural networks are distinctively amenable to hardware implementations (including in unconventional neuromorphic substrates, such as photonics), enable clean mathematical analysis (rooted, e.g., in the field of random matrix theory), and find natural applications in resource-constrained contexts, such as edge AI systems. Moreover, in the broader picture of Deep Learning development, RC is a breeding ground for testing innovative ideas, e.g. biologically plausible training algorithms beyond gradient back-propagation.
Notably, although established in the Machine Learning field, RC lends itself naturally to interdisciplinarity, where ideas coming from diverse areas such as computational neuroscience, complex systems, and non-linear physics can lead to further developments and new applications. This session intends to strongly encourage RC research within the international neural networks community, bringing in ideas from other connected interdisciplinary fields. We therefore invite researchers to submit papers on all aspects of RC research, targeting contributions on new algorithms, implementations, and applications. Topics of Interest A list of relevant topics for this session includes, without being limited to, the following: - New Reservoir Computing models and architectures, including Echo State Networks and Liquid State Machines - Hardware, physical and neuromorphic implementations, including spintronics, photonics, and quantum Reservoir Computing - Reservoir Computing in Computational Neuroscience - Reservoir Computing on edge systems - Novel learning algorithms rooted in Reservoir Computing concepts - Novel applications of Reservoir Computing, e.g., to images, video, and structured data - Federated and Continual Learning in Reservoir Computing - Deep Reservoir Computing Neural Networks - Theory of complex and dynamical systems in Reservoir Computing - Extensions of the Reservoir Computing framework, such as Conceptors - Reservoir dimensionality reduction - Efficient reservoir hyper-parameter search Important Dates - Papers submission deadline: January 31, 2022 (11:59 pm AoE) strict deadline - Decision notification: April 26, 2022 - Camera-ready paper: May 23, 2022 Submission Guidelines and Instructions Papers submission for this Special Session follows the same process as for the regular sessions of WCCI 2022 (see https://wcci2022.org/submission/), which uses Microsoft CMT.
Submit your paper at the following link: https://cmt3.research.microsoft.com/IEEEWCCI2022/ 1) In the Author Console, click on Create Submission 2) In the drop-down menu, choose "IJCNN" 3) In the Subject Areas section, please indicate "IJCNN-SS-12 Reservoir Computing: algorithms, implementations, and applications" as the primary area. * Important note: unlike previous editions, the review process for WCCI/IJCNN 2022 will be double-blind. It is therefore mandatory for prospective authors to anonymize their manuscripts. Paper Publication Accepted papers will be published in the IJCNN conference proceedings (to be published by IEEE). Sincerely, Organizing Team From cvernall at stanford.edu Wed Jan 5 21:38:09 2022 From: cvernall at stanford.edu (Carol S Vernallis) Date: Thu, 6 Jan 2022 02:38:09 +0000 Subject: Connectionists: New book: Cybermedia: Explorations in Science, Sound, and Vision In-Reply-To: References: Message-ID: Carol Vernallis, Holly Rogers and Lisa Perrott are happy to announce the fourth book in our Bloomsbury series, New Approaches to Sound, Music and Media (https://www.bloomsbury.com/uk/series/new-approaches-to-sound-music-and-media/). Cybermedia: Explorations in Science, Sound, and Vision (edited by Carol Vernallis, Holly Rogers, Selmin Kara, and Jonathan Leal) traces how contemporary media engage with new technologies like robotics, psychometrics, big data, and AI. It pairs humanists' close readings of contemporary media (like Westworld and Black Mirror) with scientists' discussions of the science and math that inform them. Cybermedia bridges one of the gaps between science and the humanities. This text includes contributions by scholars from many disciplines (music, media, philosophy, computer science and neuroscience - as well as directors and other industry practitioners) to consider a range of films and TV shows including Ex Machina, Mr.
Robot, Under the Skin, Sorry to Bother You, Black Mirror, and Westworld. Through a variety of critical, theoretical, and speculative approaches, the collection facilitates interdisciplinary thinking and collaboration as well as provides readers with the means to respond to these new technologies.

REVIEWS

"The membrane between media and mind has been dissolving for a century. Cybermedia turns the membrane into an irrigation system. A new kind of practice as much as a book, Cybermedia brings makers, scientists and scholars into dialogues that pass through old borders, subtly transformed and transforming. From comic books to paranoia, neurotransmitters to Radiohead, Cybermedia opens a new landscape of social-technical minds and media as things to study and ways of studying them." - Sean Cubitt, Professor of Screen Studies, University of Melbourne, Australia

"Cybermedia testifies to the ways in which practitioners, scientists and scholars are keeping track of, or, indeed, anticipate, the resulting, emergent web of interrelations, and how the porosity between culture and science affects our sensorium. Relying on a two-way approach, a look at science through popular culture, and a science-informed exploration of popular culture, Cybermedia is both a critical toolbox and an invitation to navigate the wondrous territories of media culture in the era of accelerated technologization." - Martine Beugnet, Professor in Visual Studies, Université de Paris, France

"The advent of artificial intelligence and cyberspace has a dark and light side, with many challenges and alluring opportunities. This collection engages - literally - with the "light side," drilling down on how our scientific understanding of sentience underwrites our experience of the lived world, an experience that now rests so heavily on the media and its accompanying technologies. The multilateral perspective offered by this book is so timely, especially for the many of us who will have to transcend the worlds of science and media in the future." - Karl J. Friston, FRS, FMedSci, FRSB, Neurology, University College London, UK

"A thrilling exploration of the resonances between circuits of creativity, software, and the brain." - Steven Shaviro, DeRoy Professor of English, Wayne State University, USA

CYBERMEDIA: EXPLORATIONS IN SCIENCE, SOUND, AND VISION

TABLE OF CONTENTS

1. Introduction - Jonathan Leal and Carol Vernallis

Part I: AI and Robotics
2. "Who's Better at Maximizing Objective Functions, Real or Fictional AIs?" - Jay McClelland (Stanford University)
3. "Director Alex Garland Converses with Cybermedia's Scientists and Media Scholars" - Jonathan Leal (USC) and Carol Vernallis (Stanford University)
4. "(S)Ex Machina and the Cartesian Theater of the Absurd" - Simon D. Levy and Charles W. Lowney (Washington and Lee University)
5. "Epiphany, Infinity and Transcendent AI" - Zachary Mason (Amplitude Analytics)

Part II: Big Data, Sentience, and the Universe
6. "A MASSIVE Swirl of Pixels" - Steen Ledet Christiansen (Aalborg University)
7. "Body-Knowing and Neural Nets: Is a Machine's Ability to Learn Human Skills a Victory for Reductionism?" - Charles W. Lowney (Washington and Lee University)
8. "The Quantum Computer as Sci-Fi's Favorite Character: Devs's Approach to Quantum Physics" - Leonardo De Assis (Stanford University)
9. "Ex Machina as a Movie about Consciousness" - Murray Shanahan (Imperial College London)

Part III: The Neuroscience of Affect and Event Perception
10. "'A Solid Popularity Arc': Affective Economies in Black Mirror's 'Nosedive'" - Dale Chapman (Bates University)
11. "Cognitive Boundaries, 'Nosedive' and Under the Skin: Interview with Jeffrey Zacks" - Carol Vernallis (Stanford University) and Jonathan Leal (USC)
12. "Why Comics?: Toward an Affective Approach" - Frederick Aldama and Laura Wagner (OSU)

Part IV: The Digital West
13. "Westworld: Some Philosophical Puzzles about Android Experience" - Paul Skokowski (Stanford University)
14. "A.I., Self, and Other: Westworld's New Visions of the Old West" - Christopher Minz (Georgia State University)
15. "Automata and Player Pianos: A Close-Reading of Westworld's Score (Then and Now)" - Annabel J. Cohen (University of Prince Edward Island)
16. "Color and Conservatism in Cybermedia" - Alex Byrne (MIT) & David Hilbert (University of Illinois at Chicago)

Part V: Interface, Desire, Collectivity
17. "Veiled Sonics: Interface and Black" - Liz Reich (Connecticut College)
18. "Technology, Chaos, and the Nimble Subversion of Random Acts of Flyness" - Eric Lyon (Virginia Tech)
19. "Expecting the Twist: How Media Navigate the Intersections Among Different Sources of Prior Knowledge" - Noah Fram (Stanford)

Part VI: Productive Neuropathologies
20. "Digital Vitalism" - Marta Figlerowicz (Yale University)
21. "Neuroplasticity, Closure, and the Brain" - Sara Ferrando Colomer (Northwestern University)
22. "Where is My Mind? Mr. Robot and the Digital Neuropolis" - Patricia Pisters (University of Amsterdam)
23. "Dopamine Circuits: Wanting, Liking, Habits, and Goals" - An Interview about Mr. Robot with Neuroscientist Talia Lerner (Northwestern University) - Carol Vernallis (Stanford University) and Jonathan Leal (USC)
24. "Taste as Aesthetics and Biological Constraints" - An Interview with Neuroscientist Hojoon Lee (Northwestern University) - Julia Peres Guimaraes (Northwestern University), Selmin Kara (OCAD University), and Carol Vernallis (Stanford University)

We'd like to share that Bloomsbury is offering 10-20% off book titles. Please feel free to send Holly, Lisa, or me proposals for manuscripts and collected volumes. Carol Vernallis, Ph.D. Department of Music Stanford University Stanford, CA 94305 (650)326-1705
URL: From valle at ime.unicamp.br Thu Jan 6 10:23:17 2022 From: valle at ime.unicamp.br (Marcos Eduardo Valle (IMECC-Unicamp)) Date: Thu, 6 Jan 2022 12:23:17 -0300 Subject: Connectionists: [CFP] Special Session on Hypercomplex-valued neural networks: Theory and Applications Message-ID:

Special Session on Hypercomplex-valued Neural Networks: Theory and Applications at the IEEE World Congress on Computational Intelligence (WCCI 2022) / International Joint Conference on Neural Networks (IJCNN 2022), 18-23 July 2022, Padua, Italy.

Webpages (for further information):
WCCI 2022: https://wcci2022.org/
Special Session: https://www.ime.unicamp.br/~valle/wcci2022/

Aim and Scope: Complex-valued, quaternion-valued, and more generally hypercomplex-valued neural networks (HVNNs) constitute a rapidly growing research area that has attracted continued interest for the last decade. Besides their natural ability to treat multidimensional data, hypercomplex-valued neural networks can benefit from the geometric and algebraic properties of hypercomplex numbers. For example, complex-valued neural networks are essential for the proper treatment of phase and the information contained in phase, including the treatment of wave- and rotation-related phenomena such as electromagnetism, light waves, quantum waves, and oscillatory phenomena. Quaternion-valued neural networks, which have potential applications in three- and four-dimensional data modeling, have been effectively used to process and analyze multivariate images such as color and polarimetric SAR images. Despite significant theoretical development and successful applications, there are still many open research directions in HVNNs, including a formal generalization of the commonly used real-valued network architectures and training algorithms to the hypercomplex-valued case.
There are also many exciting applications in pattern recognition and classification, nonlinear filtering, intelligent image processing, brain-computer interfaces, time-series prediction, bioinformatics, robotics, etc. This special session aims to be the proper forum for a systematic and comprehensive exchange of ideas, for presenting recent research results, and for discussing future trends in HVNNs. We hope the session will attract speakers and researchers interested in joining the HVNN community, and we expect it to benefit and inspire computational intelligence researchers as well as other specialties that need sophisticated neural network tools.

List of Topics: This special session welcomes papers that are, or might be, related to all aspects of hypercomplex-valued neural networks, including complex-valued and quaternion-valued neural networks. Papers on theoretical advances as well as contributions of an applied nature are appreciated. We also welcome interdisciplinary contributions from other areas on the borders of the proposed scope.
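As a toy illustration of the phase handling described in the scope above, here is a minimal complex-valued neuron sketch in NumPy. It is not part of the CFP: the "split-type" activation (a real nonlinearity applied separately to real and imaginary parts) is one common choice in the complex-valued NN literature, and all weights and inputs below are illustrative assumptions.

```python
import numpy as np

def split_tanh(z):
    # Split-type activation: apply tanh separately to the real and
    # imaginary parts of a complex pre-activation (illustrative choice).
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

rng = np.random.default_rng(0)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)  # complex weights
x = np.exp(1j * np.array([0.1, 0.5, 1.0]))                # unit-magnitude inputs carrying phase only
z = np.dot(w, x)                                          # complex-valued pre-activation
y = split_tanh(z)                                         # complex-valued neuron output
print(y.real, y.imag)
```

Because the inputs have unit magnitude, all the information they carry is in their phase, which a real-valued neuron would discard; the complex inner product preserves it.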
Topics include, but are not limited to:
- Theoretical aspects of HVNNs
- Hypercomplex-valued activation functions
- Learning algorithms for HVNNs
- Hypercomplex-valued associative memories
- Pattern recognition, classification, and time series prediction using HVNNs
- HVNNs in nonlinear filtering
- Dynamics of hypercomplex-valued neurons
- Chaos in the complex domain
- Hypercomplex-valued deep neural networks
- Spatiotemporal processing using HVNNs
- Frequency domain processing using complex-valued neural networks
- Phase-sensitive signal processing
- Applications of HVNNs in image processing, speech processing, and bioinformatics
- Quantum computation and quantum neural networks
- HVNNs in brain-computer interfaces
- HVNNs in robotics
- HVNN-based hardware/devices such as lightwave and spin-wave neural devices

Past Special Sessions: The special session on HVNNs has become a traditional event of the IJCNN conference. Eleven special sessions have been organized since 2006 (WCCI-IJCNN 2006, Vancouver; WCCI-IJCNN 2008, Hong Kong; IJCNN 2009, Atlanta; WCCI-IJCNN 2010, Barcelona; IJCNN 2011, San Jose; WCCI-IJCNN 2012, Brisbane; IJCNN 2013, Dallas; WCCI-IJCNN 2014, Beijing; WCCI-IJCNN 2016, Vancouver; WCCI-IJCNN 2018, Rio de Janeiro; WCCI-IJCNN 2020, Glasgow). These special sessions attracted numerous submissions and had large audiences; they featured many engaging presentations and very productive discussions. Given the recent advances in HVNNs, we expect this special session to be as exciting and fruitful as the past ones.

Organizers:
- Marcos Eduardo Valle (University of Campinas, Brazil)
- Danilo Mandic (Imperial College London, UK)
- Igor Aizenberg (Manhattan College, USA)
- Akira Hirose (University of Tokyo, Japan)

Important Dates:
- Paper Submission: January 31, 2022
- Notification of Acceptance: April 26, 2022
- Final Paper Submission: May 23, 2022
- IEEE WCCI 2022, Padua, Italy.
18-23 July 2022

Instructions: Submit your paper at https://cmt3.research.microsoft.com/IEEEWCCI2022/ and select the special session "IJCNN-SS-6: Hypercomplex-valued neural networks - Theory and Applications" under the list of research topics in the submission system. Papers submitted to this special session must follow the WCCI 2022 submission requirements, including:
- The paper should not explicitly reveal the authors' identities (double-blind review process).
- Submissions should be original and not currently under review at another venue.
- Submissions should be typed using the IEEE-style files for conference proceedings. Each paper is limited to 8 pages, including figures, tables, and references.
Further information on the submission guidelines is available at https://wcci2022.org/submission/
-------------- next part -------------- An HTML attachment was scrubbed... URL: From m.plumbley at surrey.ac.uk Thu Jan 6 06:31:22 2022 From: m.plumbley at surrey.ac.uk (Mark Plumbley) Date: Thu, 6 Jan 2022 11:31:22 +0000 Subject: Connectionists: PhD Studentships at University of Surrey: Deadlines approaching (12 Jan, 17 Jan) Message-ID: Dear Connectionist, There are a number of open opportunities for PhD studentships at the University of Surrey with approaching application deadlines (12 Jan or 17 Jan), including for PhD study in the Centre for Vision, Speech and Signal Processing (CVSSP).
For more information about CVSSP research in AI and machine perception, including in topics related to audio and sound, see https://www.surrey.ac.uk/centre-vision-speech-signal-processing and https://www.surrey.ac.uk/centre-vision-speech-signal-processing/research/creative For more information about PhD study in CVSSP, see https://www.surrey.ac.uk/centre-vision-speech-signal-processing/postgraduate-research-study, and explore potential PhD supervisors here https://www.surrey.ac.uk/centre-vision-speech-signal-processing/people/academic-staff Links to the individual funding schemes, with deadlines, are below. I would particularly like to highlight the Shine Scholars and Breaking Barriers studentships, which aim to support applicants from under-represented groups. Best wishes, Mark Plumbley [Research area: AI & machine learning for analysis and recognition of sounds] --- Deadline 12 Jan 2022: * China Scholarship Council-Surrey Awards are open to outstanding Chinese students. Deadline 12 Jan 2022. https://www.surrey.ac.uk/fees-and-funding/studentships/china-scholarship-council-surrey-awards Deadline 17 Jan 2022: * Shine Scholars Studentships aim to help address the lack of representation of Black British Students at Doctoral level, and are open to Black British UK-permanent residents. Deadline: 17 January 2022. https://www.surrey.ac.uk/fees-and-funding/studentships/shine-scholars-studentship-award-2022 * Breaking Barriers Studentships aim to help advance gender equality and diversity, and are open to UK and International applicants who meet the specific eligibility criteria (in CVSSP: applicants who identify as women or non-binary). Deadline: 17 January 2022. https://www.surrey.ac.uk/fees-and-funding/studentships/breaking-barriers-studentship-award-2022 * Vice-Chancellor's Studentships aim to attract exceptional global research talent, and are open to exceptional International applicants. Deadline: 17 January 2022. 
https://www.surrey.ac.uk/fees-and-funding/studentships/vice-chancellors-studentship-award-2022 Ongoing: CVSSP PhD studentships are open for Home/EU candidates. https://www.surrey.ac.uk/fees-and-funding/studentships/cvssp-studentship-opportunities UK applicants for PhD study may also wish to consider a Postgraduate Doctoral Loan, open to applicants from England & Wales. More information at: https://www.gov.uk/doctoral-loan -- Prof Mark D Plumbley Head of School of Computer Science and Electronic Engineering Email: Head-of-School-CSEE at surrey.ac.uk Professor of Signal Processing University of Surrey, Guildford, Surrey, GU2 7XH, UK Email: m.plumbley at surrey.ac.uk PA: Kelly Green Email: k.d.green at surrey.ac.uk -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tom.Hanika at cs.uni-kassel.de Thu Jan 6 10:16:44 2022 From: Tom.Hanika at cs.uni-kassel.de (Tom Hanika) Date: Thu, 6 Jan 2022 16:16:44 +0100 Subject: Connectionists: PhD Position in High-Dimensional Machine Learning -- LOEWE-Project "Dimension Curse Detector" Message-ID: <5f3af9b9-4be3-2040-74d1-33456b0c13c8@cs.uni-kassel.de> The research group Knowledge & Data Engineering at the University of Kassel announces a *PhD position (salary scale TV-H E13, 100%)* in the context of the LOEWE-Project "Dimension Curse Detector" [1]. The position [2] is supervised by PI Dr. Tom Hanika [3] in cooperation with Prof. Dr. Gerd Stumme and Prof. Dr. Martin Schneider (TU Freiberg). The group's emphasis is on researching and developing methods and algorithms at the confluence of Knowledge Discovery, Artificial Intelligence and Mathematics. The research project aims at exhibiting concentration phenomena in the realm of high-dimensional machine learning and data science applications. We welcome talented and highly motivated candidates with knowledge in discrete structures or mathematical foundations of machine learning, as well as good programming skills in Python, GNU R, Fortran, or Java.
Furthermore, a high social competency and a very good command of the English language are expected. Applicants to the PhD position must have a relevant *Master's degree* (or equivalent) in Computer Science, Mathematics, Physics or a related field. The position is (at first) limited to *2 years* (cf. Qualifikationsstelle gem. § 65 HHG i. V. m. § 2 Abs. 1 Satz 1 WissZeitVG; Promotionsmöglichkeit). Please submit your complete application (including all necessary documents and a letter of intent) via the application portal [4]. Applications are accepted as of now until *2022-02-15*. [1] https://wissenschaft.hessen.de/Presse/Vom-Fluch-der-Dimension-Tigermuecken-und-einem-Mikroskop-mit-Teilchenbeschleuniger [2] http://www.uni-kassel.de/go/dcdposition/ [3] https://www.kde.cs.uni-kassel.de/hanika [4] https://stellen.uni-kassel.de/en/jobposting/064cea431f7a4a00172788d51a6b0b9f331193da0/apply -- Dr. Tom Hanika Universität Kassel Tel.: +49 (0) 561 804 6350 https://www.kde.cs.uni-kassel.de/hanika --------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From victorpitron at yahoo.fr Thu Jan 6 09:19:45 2022 From: victorpitron at yahoo.fr (Victor Pitron) Date: Thu, 6 Jan 2022 15:19:45 +0100 Subject: Connectionists: job announcement: 2 PhD and 1 post-doc, Paris References: <9b653604-413d-ba87-6c71-388ac699719e.ref@yahoo.fr> Message-ID: <9b653604-413d-ba87-6c71-388ac699719e@yahoo.fr> Dear colleague, It is my great pleasure to announce that we are launching a research project about Idiopathic Environmental Intolerance at the Hôtel-Dieu hospital of Paris. The project will combine behavioural experiments, computational modelling, and the development of a dedicated treatment program.
Together with an international and interdisciplinary team of physicians and scientists led by Prof. Cédric Lemogne, we will recruit 2 PhD students and 1 post-doc fellow for 3 years from September 2022 on. The job announcement is attached to this mail. Would you please post this job announcement on the mailing list? Kind regards, Victor Pitron

Dr Victor Pitron MD, PhD Psychiatre, Chef de Clinique Assistant, Service de pathologies professionnelles et environnementales, Hôtel-Dieu, Paris -------------- next part -------------- A non-text attachment was scrubbed... Name: job announcement_new deadline.pdf Type: application/pdf Size: 594220 bytes Desc: not available URL: From juyang.weng at gmail.com Thu Jan 6 20:38:55 2022 From: juyang.weng at gmail.com (Juyang Weng) Date: Thu, 6 Jan 2022 20:38:55 -0500 Subject: Connectionists: Call for Proposals: BMI Machine Conscious Learning Project Message-ID: Ever since humankind came into being, the holistic mechanisms of Natural General Intelligence (NGI) and Artificial General Intelligence (AGI) have been elusive. For example, the Third World Science and Technology Development Forum, Nov. 6-7, 2021, published "The Ten Scientific Problems for the Development of Human Society for 2021". The No. 1 problem in the information domain is "what are the mechanisms for human brains to process information and for generating human intelligence?" Many machine learning experts hoped that NGI and AGI could be modeled, or achieved, by training increasingly larger neural networks on increasingly larger static data sets. Unfortunately, such approaches are categorically hopeless for AGI, not only because of the alleged Post-Selection protocol flaws, but also because of something much deeper and more fundamental.
The recent discovery of Conscious Learning (Weng, 2022) revealed a surprising principle, namely, that consciousness is recursively necessary at every time instant of learning by humans and machines in order for them to reach NGI and AGI at each corresponding mental age. BMI, the Brain-Mind Institute, is pleased to announce a funded project, called the BMI Conscious Machine Learning Project, for all those who are interested. This announcement calls for professors, graduate students, and undergraduate students to apply for an appropriate position in the Project. The open positions include the following three categories:

1. *Research advisors*: There are four categories, assistant professors, associate professors, full professors, and retired professors, corresponding to your current rank. The responsibilities include advising local students. It is desirable that each professor recruits a few of his or her students locally. Send your CV to BMI with the names, affiliations, and contact information of the students who will submit applications in association with you. Each BMI-paid student will correspond to a part of the budget for his or her research advisor.

2. *Graduate students*: There are two categories, PhD program and MS program. Each student is expected to spend at least 10 hours per week during university semesters and 40 hours per week during summer. The student's time spent on the projects will be paid by BMI at a rate suited to his or her own country. Each applicant should identify a local research advisor who supervises the project on a weekly basis. If you are a graduate student at a university and are interested in applying for the Project, find a professor at your local university who can supervise you. Ask him or her to jointly apply for a professor position in the Project. You two should name each other in your applications. Send your CV and official transcripts from the undergraduate and graduate years.

3.
*Undergraduate students*: There are four categories, freshman, sophomore, junior, and senior, corresponding to your year in your home university. Other requirements are similar to the Graduate student category. Admission terms: summer session 2022 or fall 2022. Specify your preferred summer and fall starting dates, as each country has different dates. Send your completed application form, your application, and supporting material to juyang.weng at gmail.com with the subject: Application: BMI Conscious Machine Learning Project. *Important dates:* *January 15, 2022:* Deadline for application *March 15, 2022:* Notice of admission http://www.brain-mind-institute.org/program-summer.html -- Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: From hussain.doctor at gmail.com Fri Jan 7 05:22:43 2022 From: hussain.doctor at gmail.com (Amir Hussain) Date: Fri, 7 Jan 2022 10:22:43 +0000 Subject: Connectionists: UK EPSRC funded (3-year) COG-MHEAR Research Fellowship (Deadline approaching: 14 Jan) In-Reply-To: References: Message-ID: Dear connectionists, **Please help forward to potentially interested candidates, many thanks in advance!** The School of Computing at Edinburgh Napier University (ENU) in Edinburgh, Scotland, UK, has an immediate opening for a full-time postdoctoral research fellow. The post is funded as part of the UK Engineering and Physical Sciences Research Council (EPSRC) funded Programme Grant: COG-MHEAR (https://cogmhear.org). One of the current COG-MHEAR priority areas of interest is the innovative design of fully-decentralized, privacy-preserving federated learning algorithms that can also leverage the hardware and latency constraints of multi-modal (audio-visual) hearing-assistive technologies, e.g., by specialized compression techniques for latency optimization and self-supervised techniques to decrease the dependency on remote machines.
Details of other priority areas for the research fellow position (which is being offered at salary grade 5: £33,309 - £39,739 per annum/pro-rata, for up to 3 years in the first instance) can be found here: https://www.timeshighereducation.com/unijobs/listing/275428/cog-mhear-research-fellow/ (closing date: 14 Jan 2022) Please get in touch if you would like further information. Many thanks and kindest regards Amir -- Professor Amir Hussain Programme Director: EPSRC COG-MHEAR (https://cogmhear.org) Editor-in-Chief: Cognitive Computation (Springer Nature - http://springer.com/12559) Director: Centre for AI & Data Science, School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, Scotland, UK https://www.napier.ac.uk/people/amir-hussain == "What you seek is seeking you", Rumi -------------- next part -------------- An HTML attachment was scrubbed... URL: From marinella.petrocchi at iit.cnr.it Fri Jan 7 10:02:35 2022 From: marinella.petrocchi at iit.cnr.it (Marinella Petrocchi) Date: Fri, 07 Jan 2022 16:02:35 +0100 Subject: Connectionists: [3rd CfP - Extended Deadline!][ECIR 2022] ROMCIR 2022: The 2nd International Workshop on Reducing Online Misinformation through Credible Information Retrieval Message-ID: <4ba49eaf5b201e2ae37de63bb47abd8e@iit.cnr.it> [Apologies for multiple postings] [Extended Deadline!
Abstract: January 16, 2022 - Paper: January 23, 2022] ******************************************************************************************************************** ROMCIR 2022: The 2nd International Workshop on Reducing Online Misinformation through Credible Information Retrieval Stavanger, Norway, April 10, 2022 Conference website: https://romcir2022.disco.unimib.it/ Submission link: https://easychair.org/conferences/?conf=romcir2022 ********************************************************************************************************************

***AIM AND THEMES***

Within the ECIR 2022 conference (https://ecir2022.org/), the second edition of the ROMCIR workshop is particularly focused on discussing and addressing issues related to reducing misinformation through Information Retrieval solutions. Hence, the central topic of the workshop concerns providing users with access to credible and/or verified information, to mitigate the information disorder phenomenon. By "information disorder" we mean all forms of communication pollution, from misinformation shared out of ignorance to the intentional sharing of false content. In this context, all approaches that can serve the assessment of the credibility of information circulating online, and in social media in particular, find their place. This topic is very broad, as it concerns different contents (e.g., Web pages, news, reviews, medical information, online accounts, etc.), different Web and social media platforms (e.g., microblogging platforms, social networking services, social question-answering systems, etc.), and different purposes (e.g., identifying false information, accessing information based on its credibility, retrieving credible information, etc.).
For this reason, the themes of interest include, but are not limited to, the following:
- Access to credible information
- Bias detection
- Bot/Spam/Troll detection
- Computational fact-checking
- Crowdsourcing for credibility
- Deep fakes
- Disinformation/Misinformation detection
- Evaluation strategies to assess information credibility
- Fake news detection
- Fake reviews detection
- Filter bubbles and echo chambers
- Harassment/bullying
- Hate-speech detection
- Information polarization in online communities
- Propaganda identification/analysis
- Retrieval of credible information
- Security, privacy, and credibility
- Sentiment/Emotional analysis
- Stance detection
- Trust and reputation systems (to mitigate the effects of disinformation)
- Understanding and guiding the societal reaction in the presence of disinformation

Data-driven approaches in the IR field or related fields, supported by publicly available datasets, are more than welcome.

***CONTRIBUTIONS***

The workshop solicits two types of contributions, both relevant to the workshop and suitable to generate discussion:
- Original, unpublished contributions (pre-prints submitted to ArXiv are eligible) that will be included in an open-access post-proceedings volume of CEUR Workshop Proceedings (http://ceur-ws.org/), indexed by both Scopus and DBLP.
- Already published or preliminary work that will not be included in the post-proceedings volume.

All submissions will undergo double-blind peer review by the program committee.
Submissions are to be made electronically through EasyChair at: https://easychair.org/conferences/?conf=romcir2022

***SUBMISSION INSTRUCTIONS***

Submissions must be:
- no more than 10 pages long (regular papers)
- between 5 and 9 pages long (short papers)

We recommend that authors use the new CEUR-ART style for writing papers to be published:
- An Overleaf page for LaTeX users is available at: https://www.overleaf.com/read/gwhxnqcghhdt
- An offline version with the style files, including DOCX template files, is available at: http://ceur-ws.org/Vol-XXX/CEURART.zip
- The paper must contain, as the name of the conference: ROMCIR 2022: The 2nd Workshop on Reducing Online Misinformation through Credible Information Retrieval, held as part of ECIR 2022: the 44th European Conference on Information Retrieval, April 10-14, 2022, Stavanger, Norway
- The title of the paper should follow the regular capitalization of English
- Please choose the single-column template
- According to CEUR-WS policy, the papers will be published under a CC BY 4.0 license: https://creativecommons.org/licenses/by/4.0/deed.en

If the paper is accepted, authors will be asked to sign (by hand) an author agreement with CEUR:
- If you do not employ Third-Party Material (TPM) in your draft, sign the document at http://ceur-ws.org/ceur-author-agreement-ccby-ntp.pdf?ver=2020-03-02
- If you do use TPM, the agreement can be found at http://ceur-ws.org/ceur-author-agreement-ccby-tp.pdf?ver=2020-03-02

Please submit an anonymized version of the submission (do not indicate the names of authors and institutions, and cite your own work in an impersonal way).

***IMPORTANT DATES***
- Abstract Submission Deadline: January 16, 2022
- Paper Submission Deadline: January 23, 2022
- Decision Notifications: February 18, 2022
- Workshop day: April 10, 2022

***ORGANIZERS***

The following people contribute to the workshop in various capacities and roles:

*Workshop Chairs*
- Marinella Petrocchi
(https://www.iit.cnr.it/en/marinella.petrocchi/), IIT-CNR, Pisa, Italy
- Marco Viviani (https://ikr3.disco.unimib.it/people/marco-viviani/), University of Milano-Bicocca

*Proceedings Chair*
- Rishabh Upadhyay, University of Milano-Bicocca

*Program Committee*
- Rino Falcone, Institute of Cognitive Sciences and Technologies-CNR, Rome, Italy
- Carlos A. Iglesias, Universidad Politécnica de Madrid, Madrid, Spain
- Petr Knoth, The Open University, London, UK
- Udo Kruschwitz, University of Regensburg, Regensburg, Germany
- Yelena Mejova, ISI Foundation, Turin, Italy
- Preslav Nakov, Qatar Computing Research Institute, HBKU, Doha, Qatar
- Symeon Papadopoulos, Information Technologies Institute (ITI), Thessaloniki, Greece
- Gabriella Pasi, University of Milano-Bicocca, Milan, Italy
- Marinella Petrocchi, IIT-CNR, Istituto di Informatica e Telematica, Pisa, Italy
- Adrian Popescu, CEA LIST, Gif-sur-Yvette, France
- Paolo Rosso, Universitat Politècnica de València, València, Spain
- Fabio Saracco, IMT School for Advanced Studies, Lucca, Italy
- Marco Viviani, University of Milano-Bicocca, Milan, Italy
- Xinyi Zhou, Syracuse University, Syracuse, NY, USA
- Arkaitz Zubiaga, Queen Mary University of London, London, UK

-- Marinella Petrocchi Senior Researcher @Institute of Informatics and Telematics (IIT) National Research Council (CNR) Pisa (Italy) Mobile: +39 348 8260773 Skype: m_arinell_a Web: https://www.iit.cnr.it/en/marinella.petrocchi/ "Luck is a matter of geography" (Bandabardò) From antonioj.rodriguezsanchez at gmail.com Fri Jan 7 04:56:11 2022 From: antonioj.rodriguezsanchez at gmail.com (Antonio Rodriguez-Sanchez) Date: Fri, 7 Jan 2022 10:56:11 +0100 Subject: Connectionists: Job - PhD position Message-ID: <39589B8B-3ED1-4569-B205-4DFB3E986389@gmail.com> Please share with the mailing list subscribers the following PhD position: A Ph.D. Student Position in Deep Learning for Computer and Robot Vision at U.
Innsbruck, Austria. The Intelligent and Interactive Systems group (Prof. Justus Piater and Asst. Prof. Antonio Rodríguez Sánchez) is looking for a talented Ph.D. candidate. This position is aimed at students who are enthusiastic about Artificial Intelligence. 3D sensing has become a major research field for machine learning thanks to its applicability in areas like robotics, autonomous cars, and augmented reality. The PhD candidate will develop new methodologies and algorithms in deep learning aimed at computer and robot vision tasks, leading to new architectures and algorithms for vision-based robotic agents. The goal of this PhD position is to develop deep learning algorithms that, combined with neuroscience-inspired structures, can be used as a guiding principle to build core components of artificial systems with human-like capabilities. Areas of interest include 3D point clouds, Capsule Networks, Spiking Neural Networks, and other state-of-the-art deep learning strategies. Knowledge of neuromorphic computation is a plus. This is a university assistant position that includes minor teaching requirements.

Your Profile: Applicants must have earned, or be about to earn, an M.Sc. degree or equivalent in computer science or another relevant area, and should have an excellent academic record; a strong background in machine learning, computer vision, and robotics; excellent mathematical and coding skills (C/C++, Matlab, ROS, Python); excellent written and oral communication skills in English; enthusiasm for leading-edge research; a team spirit; and independent problem-solving skills.

How to Apply: Applications must include a letter of motivation, a curriculum vitae including URLs of English-language theses and dissertations, scanned transcripts (including grades) and diplomas, a list of projects you have worked on with brief descriptions of your contributions, and contact information of at least two references.
Applicants please apply at https://lfuonline.uibk.ac.at/public/karriereportal.details?asg_id_in=12337. Applications must be received by January 31, 2022. The starting date is March 15, 2022. The University of Innsbruck, Austria: The University of Innsbruck dates back to 1669. It offers a complete set of academic curricula and currently counts 28000 students. Founded in 2001, our young Department of Computer Science is highly productive in diverse research domains, and is internationally very well connected. Innsbruck is home to 35000 students who imprint a distinctive, international student atmosphere upon this lively city of 130000. Beautifully located in the Tyrolean Alps, on the Inn river and surrounded by summits of up to 2718m, Innsbruck offers outstanding opportunities and quality of life all around the year. -------------- next part -------------- An HTML attachment was scrubbed... URL: From TYAMANE at jp.ibm.com Fri Jan 7 01:10:46 2022 From: TYAMANE at jp.ibm.com (Toshiyuki Yamane) Date: Fri, 7 Jan 2022 06:10:46 +0000 Subject: Connectionists: [CFP] Special Session on Edge AI at WCCI 2022 Message-ID: Dear connectionist, This is an invitation to the special session Prospects of Edge AI at the IEEE World Congress on Computational Intelligence (WCCI 2022), 18-23 July 2022, Padua, Italy. Please take a look at the CFP below and consider submitting your work if you are interested in this topic. We are looking forward to your contributions. Call for Papers for the Special Session at WCCI 2022 Prospects of edge AI: algorithms, devices, and applications Aim and scope of the special session: The Internet of Things (IoT) era will enable a wide range of activities with a high impact on the ways we work and live, based on intelligent computing within the information network system.
With the remarkable progress in machine-learning algorithms on traditional large-scale processors with GPU acceleration, cloud AI technologies have allowed us to enjoy the benefits of intelligent computation targeting a broad range of data. However, the recent surge in unstructured data obtained in edge domains brings about heavy network traffic, and it is becoming infeasible to rely on cloud computing with the present network system. Thus, developing disruptive technologies for this challenge is indispensable for implementing next-generation information network systems. The most promising approach is an emerging research field, called "edge computing" or "edge AI", that reduces network traffic by performing intelligent computing largely within edge domains. Edge AI requires computational performance completely different from that of cloud computing technologies: suitable and prompt computing of unstructured data with restricted hardware resources in terms of circuit scale, processing speed, power supply, and memory storage. In other words, higher priority lies in real-time computation performed in highly energy-efficient ways, while the machine learning algorithm and the required computing accuracy change according to the application. The objective of this special session is to discuss challenges and future directions of AI systems based on novel computing paradigms specialized to edge AI. The special session covers various aspects: 1) novel machine learning models and algorithms, 2) novel AI hardware and neuromorphic devices, and natural computing for hardware innovation, and 3) emerging AI applications in edge environments.

List of candidate topics: The topics of interest include (but are not limited to):
1. Adaptation of existing cloud AI models and algorithms to edge environments:
- Model compression, model pruning, lower-precision numerical MAC operations
- New AI models and algorithms specialized for edge environments
2.
Novel AI/neuromorphic devices and natural computing for hardware innovation:
- Digital and analog AI/neuromorphic devices
- Optical/photonic computing
- Physical reservoir computing (optics/photonics, materials, mechanics, etc.)
- Probabilistic computing, stochastic computing, reversible computing
- Any other topics related to natural computing
3. Applications of edge AI utilizing the technologies specialized to edge environments:
- Internet-of-Things, sensor data analytics
- Surveillance, anomaly detection
- Autonomous vehicles, robots and drones
- Intelligent networking systems
- Machine-to-machine (M2M) communications
- Any other topics related to edge AI

Special session organizers (* primary contact):
*Toshiyuki Yamane, IBM Research - Tokyo, tyamane at jp.ibm.com
Ryosho Nakane, University of Tokyo, nakane at cryst.t.u-tokyo.ac.jp
Nikola Kasabov, Auckland University of Technology, nkasabov at aut.ac.nz
Akira Hirose, University of Tokyo, ahirose at ee.t.u-tokyo.ac.jp

Submission instructions and important dates: Prospective authors should follow the guidelines of WCCI 2022.
Paper Submission: January 31, 2022 (11:59 PM AoE)
Notification of Acceptance: April 26, 2022
Final Paper Submission: May 23, 2022
Conference: IEEE WCCI 2022 (IJCNN), Padua, Italy, 18-23 July 2022
-------------- next part -------------- An HTML attachment was scrubbed... URL: From roland.nasser at agroscope.admin.ch Fri Jan 7 04:22:17 2022 From: roland.nasser at agroscope.admin.ch (roland.nasser at agroscope.admin.ch) Date: Fri, 7 Jan 2022 09:22:17 +0000 Subject: Connectionists: Data Scientist Position at Johann Heinrich von Thünen Institute In-Reply-To: <56e9e070-fffe-b598-cc23-38106b08278c@gmail.com> References: <56e9e070-fffe-b598-cc23-38106b08278c@gmail.com> Message-ID: Data scientist (permanent position) available at the Johann Heinrich von Thünen Institute in Germany.

Main duties of post holder:
- Statistical consulting in the field of experimental design and analysis.
- Statistical method development in the field of digitalisation on farms
- Preparation of scientific publications
- Presentation of results at national and international conferences
- Design and acquisition of projects in collaborative projects
This job description is intended as a guideline for the general range of duties and is neither conclusive nor restrictive. It will be reviewed with the post holder from time to time.
Qualifications/Skills/Knowledge
Essential:
- University degree (M.Sc. or equivalent) in the field of statistics or related subjects
- Ability to work independently and scientifically, as demonstrated by a doctoral degree
- Ability to present complex issues clearly and comprehensibly, both orally and in writing
- In-depth knowledge of or interest in time series analysis for the development of monitoring systems
- Proven scientific publications
- Goal-oriented and independent working style, high degree of initiative
- Enjoyment of scientific work in an interdisciplinary environment and of supporting colleagues
- Very good knowledge of English and, if no knowledge of German is available, willingness to learn German; a good knowledge of German is an advantage; the official language is German
Desirable:
- Knowledge in the field of agricultural sciences is an advantage
We offer you an interesting and versatile position with a high degree of personal responsibility in a supportive environment. You will be given a high degree of personal freedom and can pursue your own ideas in the field of statistical method development. As your professional and personal development is important to us, we offer a family-friendly working environment, flexible working time models and a comprehensive range of further training opportunities. You will work on a large park-like research site with leisure facilities (tennis, volleyball) and a kindergarten (parent initiative). The employment is governed by the Wage Agreement for Public Services (TVöD-Bund).
Remuneration is paid according to tariff category 14 TVöD. A part-time position is also possible. The Thünen Institute promotes the professional equality of women and men and is thus especially interested in applications from women. Severely disabled applicants with equal qualifications will be given particular consideration; only a minimum physical aptitude is expected from them. For technical questions, please contact Prof. Dr. Christina Umstätter (Tel. 0531/596 4101; e-mail: christina.umstaetter at thuenen.de). Interested candidates should send their applications (including motivation letter, CV, list of publications, copies of relevant certificates, names and addresses of personal references) via e-mail in one PDF file referring to "2022-297-AT" by 16.01.2022 to at-bewerbungen at thuenen.de
Prof. Dr. Christina Umstätter
Thünen-Institut für Agrartechnologie
Germany
Information about Artikel 13 DSGVO: www.thuenen.de/datenschutzhinweis-bewerbungen
From victorpitron at yahoo.fr Fri Jan 7 03:47:34 2022
From: victorpitron at yahoo.fr (Victor Pitron)
Date: Fri, 7 Jan 2022 09:47:34 +0100
Subject: Connectionists: open positions: 2 PhD and 1 post-doc in Paris to begin in September 2022, submission deadline January 31st, about the cognitive investigation of idiopathic environmental intolerance
References:
Message-ID:
The Research Project: Symptoms that patients attribute to the environment while medical examination shows no bodily malfunction are labeled as "idiopathic environmental intolerance" (IEI).
People suffering from IEI single out several agents from the environment, including chemical substances and electromagnetic fields, which they blame for a wide range of chronic and unspecific symptoms such as diffuse pain, fatigue, dizziness, dyspnea, or palpitations. IEI is an emerging health issue, and specific diagnostic tools as well as evidence-based treatment programs are still lacking. In recent years, several studies have suggested that cognitive biases contribute to IEI. In this research project, funded by the French Fondation pour la Recherche Médicale and the Agence Nationale de Sécurité Sanitaire, we will test the relevance of a cognitive model based on the assumption that symptoms of IEI result from impairments in interoceptive awareness. The project will combine behavioral experiments, computational modeling of behavior and beliefs, and the development and testing of a dedicated treatment program with Cognitive Behavioral Therapy (CBT).
Three positions available: Two PhD students and one post-doc fellow will be recruited in September 2022 for 3 years. Ideal candidates are highly motivated to work on this project, are good team players open to an interdisciplinary approach between medicine, cognitive science, and computational approaches, and speak and write English fluently. Three complementary profiles are proposed:
- One candidate with a good knowledge of scientific methods and statistics for behavioral experiments in cognitive science, psychology, or a related field. Experience with psychometric testing of patients, data analysis, and programming in Matlab or similar software is advantageous. The candidate's main missions will be to program, run and analyze behavioral tests with patients suffering from IEI, involving interoceptive tasks and tests of cognitive biases.
- One candidate with previous experience in data analysis, programming in Matlab or Python, and experience in developing computational models of psychopathological conditions (computational psychiatry) and in the model-based analysis of behavioral data, using methods such as Bayesian inference, reinforcement learning, and deep learning. The candidate's main missions will be to build, simulate, fit and test computational models of human behavior for patients with IEI.
- One candidate needs to be a French-speaking, CBT-trained psychologist with extensive clinical experience and a strong interest in innovative CBT programs addressing environmental issues. Experience in qualitative analysis is a plus. The candidate's main missions will be to build, run and test the CBT treatment program with patients suffering from IEI.
A high level of proactive involvement will be expected from all members of the team, who will be expected to be physically present for the term of the project. The postdoc position moreover offers the opportunity to train in soft skills crucial for becoming a PI, since the postdoctoral candidate is expected to contribute to leading the core team composed of him/her and the two PhD candidates, together with our supervising team.
The supervising team: International medical and scientific supervision with complementary skills is organized for this interdisciplinary project, which targets an emerging field of medicine. The main medical and scientific supervisor is Pr Cédric Lemogne, assisted by Dr Victor Pitron (both psychiatrists, MD, PhD, Hôtel-Dieu, Paris). Dr Liane Schmidt and Dr Leonie Koban (both PI researchers at the Control-Interoception-Attention team at the Paris Brain Institute, Pitié-Salpêtrière hospital, Paris) will provide additional scientific supervision for computational modelling.
Pr Damien Léger and Dr Lynda Bensefa-Colas (both occupational and environmental physicians, MD, PhD, Hôtel-Dieu, Paris) will provide additional medical supervision regarding IEI. Three senior European researchers will offer monthly supervision: Pr Omer Van den Bergh (Leuven) and Pr Michael Witthöft (Mainz) for the work on behavioral testing and the treatment program, and Pr Giovanni Pezzulo (Rome) for the work on computational modelling.
The work environment: The research team will be based at the VIFASOM lab of the Hôtel-Dieu, a beautiful hospital in the heart of the ancient neighborhoods of Paris, where patients will come for testing and treatment. The lab currently houses 3 PIs and more than 10 PhD students and engineers working on various fields of cognitive science. This will offer the opportunity for fruitful discussions and collaborations and a stimulating workplace. Nearby, the Paris Brain Institute (Pitié-Salpêtrière hospital, Paris) and the Ecole Normale Supérieure also offer many opportunities for exciting scientific training and conferences in cognitive science. The PhD students will have courses and scientific supervision at the Doctorate School Bio SPC of Paris. All supervisors endorse values of equity and diversity, and are committed to ensuring a safe, welcoming, and inclusive workplace. Everyone is therefore strongly encouraged to apply.
Application: CV, motivation and recommendation letters should be sent to Dr Victor Pitron: victor.pitron at aphp.fr. Applications are reviewed on a rolling basis and all candidates will receive full consideration. Deadline for application is January 31st, 2022.
From papaleon at sch.gr Fri Jan 7 13:19:41 2022
From: papaleon at sch.gr (Papaleonidas Antonis)
Date: Fri, 7 Jan 2022 20:19:41 +0200
Subject: Connectionists: 2nd Artificial Intelligence & Ethics Hybrid Workshop
Message-ID: <088601d803f3$23d00040$6b7000c0$@sch.gr>
The 2nd AI & Ethics workshop will be part of the joint events of the 18th International Conference on Artificial Intelligence Applications and Innovations and the 23rd International Conference on Engineering Applications of Neural Networks. The 2nd AIETH workshop aims at responsible global AI: scientists must be prepared to act preemptively and ensure that our societies avoid the negative effects of AI, and of the 4th Industrial Revolution in general. The workshop on AI Ethics will be organized by the University of Sunderland, United Kingdom, and will discuss potential major ethical issues that will arise in the near future.
Coordinator: Professor John Macintyre, University of Sunderland, UK
Submission details can be found on the AIAI conference submission page. Extended versions of selected workshop papers will be considered for publication in Springer's Journal on AI and Ethics. More info can be found at https://ifipaiai.org/2022/special-issues/
You can submit your AI & Ethics paper at http://www.easyacademia.org/aiai2022 or https://www.easyacademia.org/eann2022
More info can be found at https://ifipaiai.org/2022/workshops/#aiethics https://ifipaiai.org/2022/ https://eannconf.org/2022/
*** Apologies for cross-posting ***
Dr Papaleonidas Antonios
Organizing - Publication & Publicity co-Chair of 23rd EANN 2022 & 18th AIAI 2022
Civil Engineering Department, Democritus University of Thrace
papaleon at civil.duth.gr papaleon at sch.gr
From nasim.sonboli at gmail.com Thu Jan 6 20:25:33 2022
From: nasim.sonboli at gmail.com (Nasim Sonboli)
Date: Thu, 6 Jan 2022 18:25:33 -0700
Subject: Connectionists: 2nd CfP: 30th ACM Conference on User Modeling, Adaptation and Personalization (UMAP '22)
Message-ID:
Call for Papers - ACM UMAP 2022
--- Please forward to anyone who might be interested ---
--- Apologies for cross-posting ---
-----------------------------------------------------------------------------
**30th ACM Conference on User Modeling, Adaptation and Personalization (UMAP '22)**
Barcelona*, Spain, July 4-7, 2022
http://www.um.org/umap2022/
* Due to the ongoing COVID-19 pandemic, we are planning for a hybrid conference and will accommodate online presentations where needed.
Submission Deadlines:
* Abstracts due: February 10, 2022 (mandatory)
* Full paper due: February 17, 2022
--------------------------------------------
**BACKGROUND AND SCOPE**
============================================
**ACM UMAP** - ***User Modeling, Adaptation and Personalization*** - is the premier international conference for researchers and practitioners working on systems that adapt to individual users or to groups of users, and that collect, represent, and model user information. **ACM UMAP** is sponsored by ACM SIGCHI (https://sigchi.org) and SIGWEB (https://www.sigweb.org), and organized with User Modeling Inc. (https://um.org) as the core Steering Committee, extended with past years' chairs. The proceedings are published by the **ACM** and will be part of the ACM Digital Library (https://dl.acm.org). **ACM UMAP** covers a wide variety of research areas where personalization and adaptation may be applied. The main theme of **UMAP 2022** is ***"User control in personalized systems"***.
Specifically, we welcome submissions related to user modeling, personalization, and adaptation in all areas of personalized systems, with an emphasis on how to balance adaptivity and user control. Below we present a short (but not prescriptive) list of topics of importance to the conference. ACM UMAP is co-located and collaborates with the ACM Hypertext conference (https://ht.acm.org/ht2022/). UMAP takes place one week after Hypertext and uses the same submission dates and formats. We expect authors to submit research on personalized systems to UMAP and invite authors to submit their Web-related work without a focus on personalization to the Hypertext conference. The two conferences will organize one shared track on **personalized recommender systems** (same track chairs and PC; see the track description).
--------------------------------------------
**IMPORTANT DATES**
============================================
- Paper Abstracts: February 10, 2022 (mandatory)
- Full paper: February 17, 2022
- Notification: April 11, 2022
- Conference: July 4-7, 2022
**Note**: The submission deadlines are at 11:59 pm AoE time (Anywhere on Earth)
--------------------------------------------
**CONFERENCE TOPICS**
============================================
We welcome submissions related to *user modeling, personalization, and adaptation in any area*. The topics listed below are not intended to limit possible contributions. **Detailed descriptions and the suggested topics for each track are reported in the online version of the CFP on the UMAP 2022 website.**
### **Personalized Recommender Systems**
**Track Chairs: Osnat Mokryn (University of Haifa), Eva Zangerle (University of Innsbruck, Austria) and Markus Zanker (University of Bolzano, Italy, and University of Klagenfurt, Austria)**
(*) This is a joint track between ACM UMAP and ACM Hypertext (same track chairs, overlapping PC).
Authors planning to contribute to this track can submit to either conference, depending on their broader interest in either Hypertext or UMAP. The track chairs organize a special issue in the journal New Review of Hypermedia and Multimedia. This track aims to provide a forum for researchers and practitioners to discuss open challenges, the latest solutions, and novel research approaches in the field of recommender systems. In addition to mature research works addressing technical aspects of recommendation, we particularly welcome research contributions that address questions related to user perception and the business value of recommender systems.
### **Adaptive Hypermedia, Semantic, and Social Web**
**Track Chairs: Alexandra I. Cristea (Durham University, UK) and Peter Brusilovsky (University of Pittsburgh, US)**
This track aims to provide a forum for researchers to discuss open research problems, solid solutions, the latest challenges, novel applications, and innovative research approaches in adaptive hypermedia and the semantic and social web. We invite original submissions addressing all aspects of personalization, user model building, and personal experience in online social systems.
### **Intelligent User Interfaces**
**Track chairs: Elisabeth Lex (Graz University of Technology, Austria) and Marko Tkalcic (University of Primorska, Slovenia)**
This track explores how to make the interaction between computers and people smarter and more productive, leveraging solutions from human-computer interaction, data mining, natural language processing, information visualization, and knowledge representation and reasoning.
### **Technology-Enhanced Adaptive Learning**
**Track chairs: Judy Kay (University of Sydney, Australia) and Sharon Hsiao (Santa Clara University, US)**
This track invites researchers, developers, and practitioners from various disciplines to present their innovative learning solutions, share acquired experiences, and discuss their modeling challenges for personalized adaptive learning.
### **Fairness, Transparency, Accountability, and Privacy**
**Track chairs: Bamshad Mobasher (DePaul University College of Computing and Digital Media, US) and Munindar P. Singh (NC State University, US)**
Adaptive systems researchers and developers have a social responsibility to care about the impact of their technologies on individual people (users, providers, and other stakeholders) and on society. This track invites work that pertains to the science of building, maintaining, evaluating, and studying adaptive systems that are fair, transparent, respectful of users' privacy, and beneficial to society.
### **Personalization for Persuasive and Behavior Change Systems**
**Track chairs: Julita Vassileva (University of Saskatchewan, Canada) and Panagiotis Germanakos (SAP SE, Germany)**
This track invites original submissions addressing the areas of personalization and tailoring for persuasive technologies, including but not limited to personalization models, user models, computational personalization, design and evaluation methods, and personal experience in designing personalized and adaptive behaviour change technologies.
### **Virtual Assistants and Personalized Human-robot Interaction**
**Track chairs: Radhika Garg (Syracuse University, US) and Cristina Gena (University of Torino, Italy)**
This track aims at investigating new models and techniques for the adaptation of synthetic companions (e.g., virtual assistants, chatbots, social robots) to the individual user.
### **Research Methods and Reproducibility**
**Track chairs: Odd Erik Gundersen (Norwegian University of Science and Technology, Norway) and Dietmar Jannach (University of Klagenfurt, Austria)**
This track accepts works on methodologies for the evaluation of personalized systems, benchmarks, and measurement scales, with particular attention to the reproducibility of results and of techniques.
--------------------------------------------
**SUBMISSION AND REVIEW PROCESS**
============================================
Please consult the conference website for the submission link: http://www.um.org/umap2022/. The maximum length is **14 pages (excluding references) in the ACM new single-column format**. We encourage papers of any length up to 14 pages; reviewers will be asked to comment on whether the length is appropriate for the contribution. **Additional review criteria are available in the online version of the CFP on the UMAP 2022 website.** Each accepted paper will be included in the conference proceedings and presented at the conference. UMAP uses a **double-blind** review process. Authors must omit their names and affiliations from submissions, and avoid obvious identifying statements. For instance, citations to the authors' own prior work should be made in the third person. Failure to anonymize your submission results in the desk rejection of your paper.
--------------------------------------------
**ORGANIZERS**
============================================
**General chairs**
* Ludovico Boratto, University of Cagliari, Italy
* Alejandro Bellogín, Universidad Autónoma de Madrid, Spain
* Olga C.
Santos, Spanish National University for Distance Education, Spain
**Program Chairs**
- Liliana Ardissono, University of Torino, Italy
- Bart Knijnenburg, Clemson University, US
--------------------------------------------
**RELATED EVENTS**
============================================
Separate calls will be sent for Workshops and Tutorials, Doctoral Consortium, and Demo/Late-Breaking Results, as these have different deadlines and submission requirements.
Nasim Sonboli (Researcher, University of Colorado Boulder), Helma Torkamaan (Researcher, University of Duisburg-Essen, Germany)
UMAP'22 Publicity Chairs
From k.wong-lin at ulster.ac.uk Sun Jan 9 06:16:54 2022
From: k.wong-lin at ulster.ac.uk (Wong-Lin, Kongfatt)
Date: Sun, 9 Jan 2022 11:16:54 +0000
Subject: Connectionists: Fully funded PhD studentship in computational modelling of decision uncertainty
Message-ID:
Applications are invited for a fully funded 3-year Ph.D. studentship in "Emergent Computation of Decision Uncertainty Monitoring, Awareness and Learning" at Ulster University, UK. This PhD project aims to model and understand decision uncertainty monitoring computation using recurrent neural networks with "self-awareness", and to develop novel neuro-inspired AI algorithms. For more information, please refer to: https://www.ulster.ac.uk/doctoralcollege/find-a-phd/1044166 The application process for the Ph.D. studentship is open, with a closing date for applications of 7 February 2022. The latest Postgraduate Research Experience Survey (PRES 2021) has placed Ulster University second in the UK for postgraduate researcher satisfaction. Anyone (UK or international applicant) who wishes to discuss the Ph.D. studentship application or enquire further about this Ph.D. project should contact: Dr.
KongFatt Wong-Lin (e-mail: k.wong-lin at ulster.ac.uk)
Reader, Intelligent Systems Research Centre, School of Computing, Engineering & Intelligent Systems, Ulster University, UK
https://www.ulster.ac.uk/staff/k-wong-lin
This email and any attachments are confidential and intended solely for the use of the addressee and may contain information which is covered by legal, professional or other privilege. If you have received this email in error please notify the system manager at postmaster at ulster.ac.uk and delete this email immediately. Any views or opinions expressed are solely those of the author and do not necessarily represent those of Ulster University. The University's computer systems may be monitored and communications carried out on them may be recorded to secure the effective operation of the system and for other lawful purposes. Ulster University does not guarantee that this email or any attachments are free from viruses or 100% secure. Unless expressly stated in the body of a separate attachment, the text of email is not intended to form a binding contract. Correspondence to and from the University may be subject to requests for disclosure by 3rd parties under relevant legislation. The Ulster University was founded by Royal Charter in 1984 and is registered with company number RC000726 and VAT registered number GB672390524. The primary contact address for Ulster University in Northern Ireland is Cromore Road, Coleraine, Co. Londonderry BT52 1SA
From nasim.sonboli at gmail.com Sun Jan 9 00:29:22 2022
From: nasim.sonboli at gmail.com (Nasim Sonboli)
Date: Sat, 8 Jan 2022 22:29:22 -0700
Subject: Connectionists: ACM UMAP'22: Second Call for Workshop and Tutorial Proposals
Message-ID:
Please accept our apologies if you receive multiple copies. Please forward to interested colleagues.
--------------------------------------------------------
# **30th ACM International Conference on User Modeling, Adaptation and Personalization (ACM UMAP'22)**
Barcelona, Spain, and Online
4 - 7 July 2022
https://www.um.org/umap2022/
Proposal Submission Deadline: January 27, 2022
--------------------------------------------------------
**BACKGROUND AND SCOPE**
============================
**ACM UMAP'22** is pleased to invite proposals for workshops and tutorials to be held in conjunction with the conference. ACM UMAP is the premier international conference for researchers and practitioners working on systems that adapt to individual users or to groups of users, and which collect, represent, and model user information. We encourage both researchers and industry practitioners to submit workshop and tutorial proposals. We strongly suggest involving organizers from different institutions, bringing different perspectives to the workshop or tutorial topic. We welcome workshops and tutorials with a creative structure that may attract various types of attendees and ensure rich interactions. All tutorials and workshops should support both virtual and physical attendance (although we hope physical attendance will be the preferred option).
--------------------------------------------------------
**Call for Workshop Proposals**
============================
The workshops provide a venue to discuss and explore emerging areas of User Modeling and Adaptive Hypermedia research with like-minded researchers and practitioners from industry and academia.
--------------------------------------------------------
**Important Dates**
============================
- Proposals due: January 27, 2022
- Notification to proposers: February 10, 2022
- Workshop day(s): July TBD, 2022
All deadlines are 11:59 pm, AoE time (Anywhere on Earth).
--------------------------------------------------------
**Workshop Formats**
============================
In this edition, our goal is to have a balanced workshop program, comprising workshops with different formats and addressing newly emerging, currently evolving and established research topics. Different schemas to organize the workshop are possible, such as:
- Working group meetings around a problem or topic.
- Mini-conferences on special topics, having their own paper submission and review processes.
- Mini-competitions or challenges around selected topics with individual or team participation.
- Interactive discussion meetings focusing on subtopics of the UMAP general research topics.
- Joint panels for different workshops.
The detailed instructions for the proposal content, the submission, the responsibilities, the proceedings, and the registration are provided at https://www.um.org/umap2022/call-for-workshops/.
--------------------------------------------------------
**Call for Tutorial Proposals**
============================
Tutorials are intensive instructional sessions that provide a comprehensive introduction to established or emerging research topics of interest for the UMAP community.
--------------------------------------------------------
**Important Dates**
============================
- Proposals due: January 27, 2022
- Notification to proposers: February 10, 2022
- Tutorial day: July TBD, 2022
All deadlines are 11:59 pm, AoE time (Anywhere on Earth).
--------------------------------------------------------
**Tutorial Topics**
============================
An ideal tutorial should be broad enough to provide a basic introduction to the chosen area, but it should also cover the most important topics in depth. Topics of interest include, but are not limited to:
- New user modeling technologies, methods, techniques, and trends (e.g.
exploiting data mining and big data analytics for user modeling, evaluation methodologies, data visualization, etc.).
- User modeling and personalization techniques for specific domains (e.g., health, e-government, e-commerce, cultural heritage, education, internet of things, mobile, music, information retrieval, human-robot interaction, etc.).
- Application and impact of user modeling and personalization techniques for information retrieval and recommender systems, including beyond-accuracy aspects (e.g., fairness).
- Eliciting and learning user preferences by taking into account users' emotional state, physical state, personality, trust, and cognitive factors.
The detailed instructions for the proposal content, the submission, the responsibilities, the proceedings, and the registration are provided at https://www.um.org/umap2022/call-for-tutorials/.
--------------------------------------------------------
**Workshop and Tutorial Chairs**
============================
- Mirko Marras, University of Cagliari, Italy
- Elvira Popescu, University of Craiova, Romania
- Contact: umap2022-wt at um.org
From pubconference at gmail.com Sat Jan 8 16:12:48 2022
From: pubconference at gmail.com (Pub Conference)
Date: Sat, 8 Jan 2022 16:12:48 -0500
Subject: Connectionists: [journals] Neural Computing and Applications (NCAA) Special Issue CFP (Deadline: March 31, 2022)
Message-ID:
Neural Computing and Applications
Topical Collection on Interpretation of Deep Learning: Prediction, Representation, Modeling and Utilization
https://www.springer.com/journal/521/updates/19187658
Aims, Scope and Objective
While Big Data offers great potential for revolutionizing all aspects of our society, harvesting valuable knowledge from Big Data is an extremely challenging task.
The large-scale and rapidly growing information hidden in unprecedented volumes of non-traditional data requires the development of decision-making algorithms. Recent successes in machine learning, particularly deep learning, have led to breakthroughs in real-world applications such as autonomous driving, healthcare, cybersecurity, speech and image recognition, personalized news feeds, and financial markets. While these models may provide state-of-the-art prediction accuracy, they usually offer little insight into the inner workings of the model and how a decision is made. Decision-makers cannot obtain human-intelligible explanations for the decisions of such models, which impedes their application in mission-critical areas. This situation is even worse in complex data analytics. It is therefore imperative to develop explainable computational intelligence learning models with excellent predictive accuracy that provide a safe, reliable, and scientific basis for decision-making. Numerous recent works have presented various endeavors on this issue but have left many important questions unresolved. The first challenging problem is how to construct self-explanatory models, or how to improve the explicit understanding and explainability of a model without loss of accuracy. In addition, high-dimensional or ultra-high-dimensional data are common in large and complex data analytics; in these cases, the construction of interpretable models becomes quite difficult and complex. Further, how to evaluate and quantify the explainability of a model still lacks a consistent and clear description. Moreover, an auditable, repeatable, and reliable modeling process is crucial to decision-makers. For example, decision-makers need explicit explanation and analysis of the intermediate features produced in a model, so interpretation of intermediate processes is required.
Finally, efficient optimization remains a problem for explainable computational intelligence models. These observations raise many essential issues in developing explainable data analytics in computational intelligence. This Topical Collection aims to bring together original research articles and review articles that present the latest theoretical and technical advancements in machine and deep learning models. We hope that this Topical Collection will: 1) improve the understanding and explainability of machine learning and deep neural networks; 2) enhance the mathematical foundations of deep neural networks; and 3) increase the computational efficiency and stability of the machine and deep learning training process with new algorithms that will scale.
Potential topics include but are not limited to the following:
- Interpretability of deep learning models
- Quantifying or visualizing the interpretability of deep neural networks
- Neural network, fuzzy logic, and evolutionary based interpretable control systems
- Supervised, unsupervised, and reinforcement learning
- Extracting understanding from large-scale and heterogeneous data
- Dimensionality reduction of large-scale and complex data and sparse modeling
- Stability improvement of deep neural network optimization
- Optimization methods for deep learning
- Privacy-preserving machine learning (e.g., federated machine learning, learning over encrypted data)
- Novel deep learning approaches in applications such as image/signal processing, business intelligence, games, healthcare, bioinformatics, and security
Guest Editors
Nian Zhang (Lead Guest Editor), University of the District of Columbia, Washington, DC, USA, nzhang at udc.edu
Jian Wang, China University of Petroleum (East China), Qingdao, China, wangjiannl at upc.edu.cn
Leszek Rutkowski, Czestochowa University of Technology, Poland, leszek.rutkowski at pcz.pl
Important Dates
Deadline for Submissions: March 31, 2022
First Review Decision: May 31, 2022
Revisions Due: June 30, 2022 Deadline for 2nd Review: July 31, 2022 Final Decisions: August 31, 2022 Final Manuscript: September 30, 2022 Peer Review Process All the papers will go through peer review, and will be reviewed by at least three reviewers. A thorough check will be completed, and the guest editors will check any significant similarity between the manuscript under consideration and any published paper or submitted manuscripts of which they are aware. In such case, the article will be directly rejected without proceeding further. Guest editors will make all reasonable effort to receive the reviewer?s comments and recommendation on time. The submitted papers must provide original research that has not been published nor currently under review by other venues. Previously published conference papers should be clearly identified by the authors at the submission stage and an explanation should be provided about how such papers have been extended to be considered for this special issue (with at least 30% difference from the original works). Submission Guidelines Paper submissions for the special issue should strictly follow the submission format and guidelines ( https://www.springer.com/journal/521/submission-guidelines ). Each manuscript should not exceed 16 pages in length (inclusive of figures and tables). Manuscripts must be submitted to the journal online system at https://www.editorialmanager.com/ncaa/default.aspx . Authors should select ?TC: Interpretation of Deep Learning? during the submission step ?Additional Information?. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ludovico.montalcini at gmail.com Sun Jan 9 15:28:51 2022 From: ludovico.montalcini at gmail.com (Ludovico Montalcini) Date: Sun, 9 Jan 2022 21:28:51 +0100 Subject: Connectionists: 1st CfP ACDL 2022, 5th Online & Onsite Advanced Course on Data Science & Machine Learning | August 22-26, 2022 | Certosa di Pontignano, Italy - Early Registration: by March 23 In-Reply-To: References: Message-ID:

#ACDL2022, An Interdisciplinary Course: #BigData, #DeepLearning & #ArtificialIntelligence without Borders

ACDL 2022, A Unique Experience: #DataScience, #MachineLearning & #ArtificialIntelligence with the World's Leaders in the fascinating atmosphere of the ancient Certosa di Pontignano (online attendance available)

Certosa di Pontignano, Castelnuovo Berardenga (Siena), #Tuscany, Italy
August 22-26
https://acdl2022.icas.cc
acdl at icas.cc

ACDL 2022 (like ACDL 2021 and ACDL 2020): an #OnlineAndOnsiteCourse
https://acdl2022.icas.cc/acdl-2022-as-acdl-2021-and-acdl-2020-an-online-onsite-course/

REGISTRATION: Early Registration: by March 23
https://acdl2022.icas.cc/registration/

DEADLINES:
Early Registration: by Wednesday March 23 (AoE)
Oral/Poster Presentation Submission Deadline: Wednesday March 23 (AoE)
Late Registration: from Thursday March 24
Accommodation Reservation at the Certosa di Pontignano: by Monday May 23
Notification of Decision for Oral/Poster Presentation: by Thursday June 23

LECTURERS: Each lecturer will hold three/four lessons on a specific topic.
https://acdl2022.icas.cc/lecturers/
* Silvio Savarese, Salesforce & Stanford University (Institute for Human-Centered Artificial Intelligence), USA
* Mihaela van der Schaar, University of Cambridge, UK
More keynote speakers to be announced soon.
PAST LECTURERS: https://acdl2022.icas.cc/past-lecturers/
Ioannis Antonoglou, Google DeepMind, UK
Igor Babuschkin, DeepMind - Google, London, UK
Pierre Baldi, University of California Irvine, USA
Roman Belavkin, Middlesex University London, UK
Yoshua Bengio, Head of the Montreal Institute for Learning Algorithms (MILA) & University of Montreal, Canada
Bettina Berendt, TU Berlin, Weizenbaum Institute, and KU Leuven
Jacob D. Biamonte, Skolkovo Institute of Science and Technology, Russian Federation
Chris Bishop, Laboratory Director, Microsoft Research Cambridge, UK & University of Edinburgh
Michael Bronstein, Twitter & Imperial College London, UK
Sergiy Butenko, Texas A&M University, USA
Silvia Chiappa, DeepMind, London, UK
Giuseppe Di Fatta, University of Reading, UK
Oren Etzioni, CEO, Allen Institute for AI, USA
Aleskerov Z. Fuad, National Research University Higher School of Economics, Russia
Marco Gori, University of Siena, Italy
Georg Gottlob, Computer Science Dept, University of Oxford, UK
Yi-Ke Guo, Imperial College London, UK
Phillip Isola, MIT, USA
Michael I. Jordan, University of California, Berkeley, USA
Leslie Kaelbling, MIT - Computer Science & Artificial Intelligence Lab, USA
Diederik P. Kingma, Google Brain, San Francisco, CA, USA
Ilias S. Kotsireas, Wilfrid Laurier University, Canada
Marta Kwiatkowska, Computer Science Dept., University of Oxford, UK
Risto Miikkulainen, University of Texas at Austin, USA
Peter Norvig, Director of Research, Google
Panos Pardalos, University of Florida, USA
Alex 'Sandy' Pentland, MIT & Director of MIT's Human Dynamics Laboratory, USA
José C. Principe, University of Florida, USA
Marc'Aurelio Ranzato, Facebook AI Research Lab, New York, USA
Dolores Romero Morales, Copenhagen Business School, Denmark
Daniela Rus, MIT, USA, and Director of CSAIL
Ruslan Salakhutdinov, Carnegie Mellon University, and AI Research at Apple, USA
Guido Sanguinetti, The University of Edinburgh, UK
Cristina Savin, New York University, Center for Neural Science & Center for Data Science, USA
Josh Tenenbaum, MIT, USA
Naftali Tishby, Hebrew University, Israel
Isabel Valera, Saarland University, Germany, and Max Planck Institute for Intelligent Systems, Tübingen, Germany
Mihaela van der Schaar, University of Cambridge, and Director of the Cambridge Centre for AI in Medicine
Joaquin Vanschoren, Eindhoven University of Technology, The Netherlands
Oriol Vinyals, Google DeepMind, UK

SCOPE: MSc students, PhD students, postdocs, junior/senior academics, and industry practitioners will be typical profiles of the attendants. In fact, the Advanced Course is not a summer school suited only for younger scholars. Rather, a significant proportion of seasoned investigators are regularly present among the attendees, often senior and junior faculty at their own institutions. The balanced audience that we strive to maintain in each Advanced Course greatly contributes to the development of intense cross-disciplinary debates among faculty and participants that typically address the most advanced and emerging areas of each topic. Each faculty member presents lectures and discusses with the participants for one entire day. Such long interaction, together with the small, exclusive Course size, provides the uncommon opportunity to fully explore the expertise of each faculty member, often through one-to-one mentoring. This is unparalleled and priceless. The Certosa di Pontignano provides the perfect setting for a relaxed yet intense learning atmosphere, with the stunning backdrop of the Tuscan landscapes.
World-class wines and traditional foods will make the Advanced Course on Data Science and Machine Learning the experience of a lifetime.

VENUE: The venue of ACDL 2022 will be the Certosa di Pontignano, Siena
The Certosa di Pontignano
Località Pontignano, 5 - 53019, Castelnuovo Berardenga (Siena), Tuscany, Italy
phone: +39-0577-1521104 fax: +39-0577-1521098
info at lacertosadipontignano.com
https://www.lacertosadipontignano.com/en/index.php
Contact person: Dr. Lorenzo Pasquinuzzi

A few kilometers from Siena, on a hill dominating the town, stands the ancient Certosa di Pontignano, a unique place where nature, history and hospitality blend together in memorable harmony. Built in the 1300s, its medieval structure remains intact, with additions from the following centuries. The Certosa is centered on its historic cloisters and gardens. https://acdl2022.icas.cc/venue/

PAST EDITIONS: https://acdl2022.icas.cc/past-editions/
https://acdl2018.icas.xyz
https://acdl2019.icas.xyz
https://acdl2020.icas.xyz
https://acdl2021.icas.cc/

REGISTRATION: https://acdl2022.icas.cc/registration/

CERTIFICATE: A certificate of successful participation in the event will be delivered, indicating the number of hours of lectures.

ACDL 2022 Poster: https://acdl2022.icas.cc/wp-content/uploads/sites/19/2021/12/poster-ACDL-2022-1.png

Anyone interested in participating in ACDL 2022 should register as soon as possible. Similarly, for accommodation at the Certosa di Pontignano (the School Venue), book your full-board accommodation at the Certosa as soon as possible. All course participants must stay at the Certosa di Pontignano. See you in 3D or 2D :) in Tuscany in August!

ACDL 2022 Directors.
https://acdl2022.icas.cc/category/news/
https://acdl2022.icas.cc/faq/
acdl at icas.cc
https://acdl2022.icas.cc
https://www.facebook.com/groups/204310640474650/
https://twitter.com/TaoSciences

* Apologies for multiple copies.
Please forward to anybody who might be interested * -------------- next part -------------- An HTML attachment was scrubbed... URL: From bhammer at techfak.uni-bielefeld.de Sun Jan 9 16:42:26 2022 From: bhammer at techfak.uni-bielefeld.de (Barbara Hammer) Date: Sun, 9 Jan 2022 22:42:26 +0100 Subject: Connectionists: Talk by Kristian Kersting on Neuro-symbolic AI Message-ID: <84e58aee-e704-e0de-6b0c-5a9a1d809ad5@techfak.uni-bielefeld.de>

Dear colleagues, The Joint Artificial Intelligence Institute (JAII) was set up by the universities of Bielefeld and Paderborn to promote joint research activities in the field of AI. The JAII has started a lecture series to invite interesting guests from this research field, virtually and/or in person, to Paderborn or Bielefeld. On Thursday, January 13th, 2022, 16:15-17:45, virtually via Zoom, Prof. Dr. Kristian Kersting, TU Darmstadt, will give a lecture on "Making Deep Machines Right for the Right Reasons".

Abstract: Deep neural networks have shown excellent performance in many real-world applications. Unfortunately, they may show "Clever Hans"-like behavior, making use of confounding factors within datasets to achieve high performance. In this talk, I shall touch upon explanatory interactive learning (XIL). XIL adds the expert into the training loop such that she interactively revises the original model via providing feedback on its explanations. Since "visual" explanations may not be sufficient to grapple with the model's true concept, I shall also touch upon revising a model on the semantic level, e.g. "never focus on the color to make your decision". Our experimental results demonstrate that XIL can help avoid Clever Hans moments in machine learning. Overall, this illustrates the benefits of a hybrid AI, the combination of neural and symbolic AI.

To register for the virtual lecture, visit https://jaii.eu. Best wishes Barbara Hammer -- Prof. Dr.
Barbara Hammer Machine Learning Group, CITEC Bielefeld University D-33594 Bielefeld Phone: +49 521 / 106 12115

From aihuborg at gmail.com Mon Jan 10 04:51:12 2022 From: aihuborg at gmail.com (AIhub) Date: Mon, 10 Jan 2022 09:51:12 +0000 Subject: Connectionists: Stephen Hanson in conversation with Terry Sejnowski Message-ID:

Stephen Hanson in conversation with Terry Sejnowski. In the latest episode of this video series for AIhub.org, Stephen Hanson talks to Terry Sejnowski about the history of neural networks, neural modelling, biophysics, explainable AI, language modelling, deep learning, protein folding, and much more. You can watch the discussion, and read the transcript, here: https://aihub.org/2022/01/07/what-is-ai-stephen-hanson-in-conversation-with-terry-sejnowski/

About AIhub: AIhub is a non-profit dedicated to connecting the AI community to the public by providing free, high-quality information through AIhub.org ( https://aihub.org/ ). We help researchers publish the latest AI news, summaries of their work, opinion pieces, tutorials and more. We are supported by many leading scientific organizations in AI, namely AAAI, NeurIPS, ICML, AIJ/IJCAI, ACM SIGAI, EurAI/AICOMM, CLAIRE and RoboCup. Twitter: @aihuborg -------------- next part -------------- An HTML attachment was scrubbed... URL:

From ioannakoroni at csd.auth.gr Mon Jan 10 06:10:20 2022 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Mon, 10 Jan 2022 13:10:20 +0200 Subject: Connectionists: Live e-Lecture by Prof. Bernhard Rinner: "Self-awareness for autonomous systems", 11th January 2022 17:00-18:00 CET. Upcoming AIDA AI excellence lectures References: <15a501d80203$6d58c040$480a40c0$@csd.auth.gr> <001801d80206$1fc9d550$5f5d7ff0$@csd.auth.gr> Message-ID: <1d1e01d80612$a8cf8560$fa6e9020$@csd.auth.gr>

Dear AI scientist/engineer/student/enthusiast, Lecture by Prof.
Bernhard Rinner (Alpen-Adria-Universität Klagenfurt, Austria), a prominent AI researcher internationally, will deliver the e-lecture "Self-awareness for autonomous systems" on Tuesday 11th January 2022, 17:00-18:00 CET (8:00-9:00 am PST, 12:00 am-1:00 am CST); see details in: http://www.i-aida.org/event_cat/ai-lectures/ You can join for free using the Zoom link: https://authgr.zoom.us/s/99775795702 & Passcode: 148148

The International AI Doctoral Academy (AIDA), a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR and VISION, is very pleased to offer you top-quality scientific lectures on several current hot AI topics. Lectures are typically held once per week, Tuesdays 17:00-18:00 CET (8:00-9:00 am PST, 12:00 am-1:00 am CST). Attendance is free. The lectures are disseminated through multiple channels and email lists (we apologize if you received it through various channels). If you want to stay informed on future lectures, you can register in the AIDA email list and the CVML email list.

Best regards, Profs. M. Chetouani, P. Flach, B. O'Sullivan, I. Pitas, N. Sebe

-- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... URL:

From ioannakoroni at csd.auth.gr Mon Jan 10 09:17:31 2022 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Mon, 10 Jan 2022 16:17:31 +0200 Subject: Connectionists: Free 2022 January SPRINGEROPEN EURASIP JIVP Webinar 'Drone Vision and Deep Learning for Infrastructure Inspection' by Prof. I. Pitas (Jan. 13th, 2022 at 12:30 p.m. CET) References: <006b01d80611$9a743f70$cf5cbe50$@csd.auth.gr> Message-ID: <20cf01d8062c$cec36830$6c4a3890$@csd.auth.gr>

Date&Time: January 13th, 2022 at 12:30 p.m. CET [06:30 a.m. New York] - [12:30 p.m. Paris] - [1:30 p.m. Thessaloniki] - [6:30 p.m.
Beijing] Title: Drone Vision and Deep Learning for Infrastructure Inspection Speaker: Ioannis Pitas

To join the free 1-hour webinar, pre-registration is required at https://forms.gle/HZ9SAL9KXD2ReD9X6 or through the journal website at https://jivp-eurasipjournals.springeropen.com/ Contact: Jana Palinkas

Abstract: This lecture overviews the use of drones for infrastructure inspection and maintenance. Various types of inspection, e.g., using visual cameras, LIDAR or thermal cameras, are reviewed. Drone vision plays a pivotal role in drone perception/control for infrastructure inspection and maintenance, because it: a) enhances flight safety through drone localization/mapping, obstacle detection and emergency landing detection; b) performs quality visual data acquisition; and c) allows powerful drone/human interactions, e.g., through automatic event detection and gesture control. The drone should have: a) increased multiple-drone decisional autonomy and b) improved multiple-drone robustness and safety mechanisms (e.g., communication robustness/safety, embedded flight regulation compliance, enhanced crowd avoidance and emergency landing mechanisms). Therefore, it must be contextually aware and adaptive. Drone vision and machine learning play a very important role towards this end, covering the following topics: a) semantic world mapping, b) drone and target localization, c) drone visual analysis for target/obstacle/crowd/point-of-interest detection, d) 2D/3D target tracking. Finally, embedded on-drone vision (e.g., tracking) and machine learning algorithms are extremely important, as they facilitate drone autonomy, e.g., in communication-denied environments. The primary application area is electric line inspection. Line detection and tracking and drone perching are examined. Human action recognition and co-working assistance are overviewed.
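The line detection step mentioned above for electric line inspection is classically bootstrapped with a Hough transform over an edge map. The following is a minimal, illustrative numpy sketch, not the actual pipeline used in MULTIDRONE or the lecture; the helper name `hough_peak` and the toy image are our own assumptions.

```python
import numpy as np

def hough_peak(edges, n_theta=180):
    """Locate the dominant straight line in a binary edge map via a
    minimal Hough transform: every edge pixel votes for all (rho, theta)
    lines passing through it; the accumulator peak is the best line."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))                 # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)  # (rho, theta) votes
    ys, xs = np.nonzero(edges)
    for theta_idx, t in enumerate(thetas):
        rhos = (xs * np.cos(t) + ys * np.sin(t)).round().astype(int) + diag
        np.add.at(acc, (rhos, theta_idx), 1)             # unbuffered accumulation
    rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    return rho_idx - diag, thetas[theta_idx]

# Toy edge map: a horizontal "power line" at row 20.
img = np.zeros((64, 64), dtype=bool)
img[20, :] = True
rho, theta = hough_peak(img)
# The detected line satisfies x*cos(theta) + y*sin(theta) = rho;
# a horizontal line at y = 20 yields theta = pi/2 and rho = 20.
```

Real pipelines add edge detection, non-maximum suppression in the accumulator, and temporal tracking of the detected lines across frames, but the voting scheme above is the core idea.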
The lecture will offer: a) an overview of all the above plus other related topics, and will stress the related algorithmic aspects, such as: b) drone localization and world mapping, c) target detection, d) target tracking and 3D localization, e) gesture control and co-working with humans. Some issues on embedded CNNs and fast convolution computing will be overviewed as well.

Short bio: Prof. Ioannis Pitas (IEEE Fellow, IEEE Distinguished Lecturer, EURASIP Fellow) received the Diploma and PhD degree in Electrical Engineering, both from the Aristotle University of Thessaloniki (AUTH), Greece. Since 1994, he has been a Professor at the Department of Informatics of AUTH and Director of the Artificial Intelligence and Information Analysis (AIIA) lab. He has served as a Visiting Professor at several universities. His current interests are in the areas of computer vision, machine learning, autonomous systems, intelligent digital media, image/video processing, human-centred computing, affective computing, 3D imaging and biomedical imaging. He has published over 920 papers, contributed to 45 books in his areas of interest, and edited or (co-)authored another 11 books. He has also been a member of the program committee of many scientific conferences and workshops. In the past he served as Associate Editor or co-Editor of 13 international journals and General or Technical Chair of 5 international conferences. He has delivered 98 keynote/invited speeches worldwide. He co-organized 33 conferences and participated in the technical committees of 291 conferences. He participated in 71 R&D projects, primarily funded by the European Union, and is/was principal investigator in 43 such projects. Prof. Pitas led the big European H2020 R&D project MULTIDRONE: https://multidrone.eu/. He is the AUTH principal investigator in the H2020 R&D projects AerialCore and AI4Media. He was chair and initiator of the Autonomous Systems Initiative https://ieeeasi.signalprocessingsociety.org/.
He is chair of the International AI Doctoral Academy (AIDA) https://www.i-aida.org/ and is PI in the Horizon 2020 EU-funded R&D projects AI4Media (one of the 4 AI flagship projects in Europe) and AerialCore. He has 34400+ citations to his work and an h-index of 87+. -- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... URL:

From alessio.ferone at uniparthenope.it Mon Jan 10 12:23:26 2022 From: alessio.ferone at uniparthenope.it (ALESSIO FERONE) Date: Mon, 10 Jan 2022 17:23:26 +0000 Subject: Connectionists: [CfP - NEWS] ICIAP2021 - Special Session: Computer Vision for Coastal and Marine Environment Monitoring Message-ID:

******Apologies for multiple posting******

NEWS: Submission deadline extended. In response to requests from many authors, we are pleased to announce an extension of the paper submission deadline. The new submission deadline is January 31, 2022.

_________________________________________
ICIAP2021 Special Session
Computer Vision for Coastal and Marine Environment Monitoring
https://www.iciap2021.org/specialsession/
_________________________________________

The coastal and marine environment represents a vital part of the world, resulting in a complex ecosystem tightly linked to many human activities. For this reason, monitoring coastal and marine ecosystems is of critical importance for gaining a better understanding of their complexity, with the goal of protecting such a fundamental resource. Coastal and marine environmental monitoring aims to employ leading technologies and methodologies to monitor and evaluate the marine environment both near the coast and underwater.
This monitoring can be performed either on site, using sensors for collecting data, or remotely through seafloor cabled observatories, AUVs or ROVs, resulting in a huge amount of data that requires advanced intelligent methodologies to extract useful information and knowledge about environmental conditions. A large part of this data consists of images and videos produced by fixed and PTZ cameras either on the coast, on the marine surface or underwater. For this reason, the analysis of such a volume of imagery data imposes a series of unique challenges, which need to be tackled by the computer vision community. The aim of the special session is to host recent research advances in the field of computer vision and image processing techniques applied to the monitoring of the coastal and marine environment, and to highlight research issues and still open questions. Full CfP at https://neptunia.uniparthenope.it/cfp/cv-cmem/

Important Dates:
Paper Submission Deadline: January 31, 2022
Decision Notification: February 19, 2022
Camera Ready: March 6, 2022

Organizers: Angelo Ciaramella, Sajid Javed, Alessio Ferone -------------- next part -------------- An HTML attachment was scrubbed... URL:

From oliver at roesler.co.uk Mon Jan 10 11:41:04 2022 From: oliver at roesler.co.uk (Oliver Roesler) Date: Mon, 10 Jan 2022 16:41:04 +0000 Subject: Connectionists: CFP Special Issue on Socially Acceptable Robot Behavior: Approaches for Learning, Adaptation and Evaluation Message-ID: <0843b8cc-beb7-c1fc-b909-c3a250283ff6@roesler.co.uk>

*CALL FOR PAPERS* **Apologies for cross-posting**

*Special Issue* on *Socially Acceptable Robot Behavior: Approaches for Learning, Adaptation and Evaluation* in Interaction Studies

*I. Aim and Scope*
A key factor for the acceptance of robots as regular partners in human-centered environments is the appropriateness and predictability of their behavior.
Behavior in human-human interactions is governed by customary rules that define how people should behave in different situations, thereby shaping their expectations. Socially compliant behavior is usually rewarded by group acceptance, while non-compliant behavior might have consequences including isolation from a social group. Making robots able to understand human social norms allows for improving the naturalness and effectiveness of human-robot interaction and collaboration. Since social norms can differ greatly between cultures and social groups, it is essential that robots are able to learn and adapt their behavior based on feedback and observations from the environment. This special issue in Interaction Studies aims to attract the latest research on learning, producing, and evaluating human-aware robot behavior, thereby following the recent RO-MAN 2021 Workshop on Robot Behavior Adaptation to Human Social Norms (TSAR) in providing a venue to discuss the limitations of current approaches and future directions towards intelligent human-aware robot behaviors.

*II. Submission*
1. Before submitting, please check the official journal guidelines.
2. For paper submission, please use the online submission system.
3. After logging into the submission system, please click on "Submit a manuscript" and select "Original article".
4. Please ensure that you select "Special Issue: Socially Acceptable Robot Behavior" under "General information".
The primary list of topics covers the following points (but is not limited to):
* Human-human vs human-robot social norms
* Influence of cultural and social background on robot behavior perception
* Learning of socially accepted behavior
* Behavior adaptation based on social feedback
* Transfer learning of social norms experience
* The role of robot appearance in applied social norms
* Perception of socially normative robot behavior
* Human-aware collaboration and navigation
* Social norms and trust in human-robot interaction
* Representation and modeling techniques for social norms
* Metrics and evaluation criteria for socially compliant robot behavior

*III. Timeline*
1. Deadline for paper submission: *January 31, 2022*
2. First notification for authors: *April 15, 2022*
3. Deadline for revised papers submission: *May 31, 2022*
4. Final notification for authors: *July 15, 2022*
5. Deadline for submission of camera-ready manuscripts: *August 15, 2022*
Please note that these deadlines are only indicative and that all submitted papers will be reviewed as soon as they are received.

*IV. Guest Editors*
1. *Oliver Roesler*, Vrije Universiteit Brussel, Belgium
2. *Elahe Bagheri*, Vrije Universiteit Brussel, Belgium
3. *Amir Aly*, University of Plymouth, UK
4. *Silvia Rossi*, University of Naples Federico II, Italy
5. *Rachid Alami*, CNRS-LAAS, France
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From llong at simonsfoundation.org Mon Jan 10 13:13:01 2022 From: llong at simonsfoundation.org (Laura Long) Date: Mon, 10 Jan 2022 13:13:01 -0500 Subject: Connectionists: EARLY CAREER SCIENTIST GRANT OPPORTUNITIES - Open January 2022 Message-ID:

The Simons Foundation is invested in supporting the next generation of researchers. Our Independence Awards programs promote talented early-career scientists by facilitating their transition to research independence and providing grant funding at the start of their professorships.
We are offering such awards through three of our programs: the Simons Foundation Autism Research Initiative (SFARI), the Simons Collaboration on the Global Brain (SCGB) and the Simons Collaboration on Plasticity and the Aging Brain (SCPAB). These requests for applications (RFAs) are aimed at Ph.D.- and M.D.-holding scientists who are currently in training positions and intend to seek tenure-track faculty positions over the next two years (i.e., 2022-2024). Successful applicants will receive a commitment of $495,000 over three years, activated upon assumption of a tenure-track professorship. Visit bit.ly/3yqWILX for more information on all three programs. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From malini.vinita.samarasinghe at ini.ruhr-uni-bochum.de Tue Jan 11 01:58:01 2022 From: malini.vinita.samarasinghe at ini.ruhr-uni-bochum.de (Vinita Samarasinghe) Date: Tue, 11 Jan 2022 07:58:01 +0100 Subject: Connectionists: POSTPONED - Women in Memory Research - Workshop - March 7-9, 2022 In-Reply-To: <6ec46c4e-28ba-ad7c-70ee-0e557f7b16a1@ini.rub.de> References: <6ec46c4e-28ba-ad7c-70ee-0e557f7b16a1@ini.rub.de> Message-ID: <18e29881-2e82-ea46-c772-41055ae24c85@ini.ruhr-uni-bochum.de>

Dear all, due to the increasing incidence numbers here in Germany, we have decided to postpone the WiMR workshop. It will instead be held in summer 2023, in conjunction with the conference GEM 2023. The call for papers and conference and workshop details will be posted here and on our website https://for2812.rub.de once they have been finalized. We look forward to greeting you in Bochum, Germany in 2023 and apologize for any inconvenience caused. Kind regards, Vinita Samarasinghe M.Sc., M.A. Science Manager Arbeitsgruppe Computational Neuroscience Institut für Neuroinformatik Ruhr-Universität Bochum, NB 3/73 Postfachnummer 110 Universitätsstr.
150 D-44801 Bochum Tel: +49 (0)234 32 27996 Email: samarasinghe at ini.rub.de

On 23.12.21 14:17, Vinita Samarasinghe wrote:
> In conjunction with International Women's Day, the FOR 2812 is organising its first "*Women in Memory Research*" event from *March 7-9, 2022.*
>
> The percentage of senior female university researchers in our field (in Germany) lies between 21% and 29%. Our goal is to increase these numbers! So come and learn what an academic career looks like at the Ruhr University Bochum and discover its advantages. You'll be introduced to the university and its support structures, be able to participate in university-wide programs, meet with female faculty, listen to some amazing scientific talks, present your research, and see what collaborative research looks like in the RUB memory research community.
>
> Sounds exciting?
>
> *Who can apply:* Female master's students in their final year of study and recently graduated master's students who are looking into an academic career in the area of memory research/neuroscience. Applicants must have excellent grades and be able to communicate in English. Selection of participants is competitive. We only have space for 12 participants!
>
> *How to apply:* Send your application including a one-page letter of motivation, a current CV, master's transcripts and a letter of recommendation from one of your professors. Your application should be sent, as a single PDF document, to Vinita Samarasinghe at for2812 at rub.de by January 23, 2022. If you need child care or any other support, please note this in your application.
>
> *What to expect:* We will provide accommodation and cover travel costs of up to 500 Euro (some meals are included). The program will be offered in English; however, certain programs offered by the Ruhr University in conjunction with International Women's Day may only be available in German.
> You will be asked to present your current research in the form of a poster.
>
> *Contact:* Vinita Samarasinghe, for2812 at rub.de, Tel: +49 234 32 27996, https://for2812.rub.de
>
> --
> Vinita Samarasinghe M.Sc., M.A.
> Science Manager
> Arbeitsgruppe Computational Neuroscience
> Institut für Neuroinformatik
> Ruhr-Universität Bochum, NB 3/73
> Postfachnummer 110
> Universitätsstr. 150
> 44801 Bochum
> Tel: +49 (0) 234 32 27996
> Email: samarasinghe at ini.rub.de
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From el-ghazali.talbi at univ-lille.fr Tue Jan 11 03:19:18 2022 From: el-ghazali.talbi at univ-lille.fr (El-ghazali Talbi) Date: Tue, 11 Jan 2022 09:19:18 +0100 Subject: Connectionists: OLA'2022 deadline approaching Message-ID:

Apologies for cross-posting. We would appreciate it if you could distribute this CFP to your network.

****************************************************************************************
OLA'2022
International Conference on Optimization and Learning
18-20 July 2022
Syracuse (Sicilia), Italy
http://ola2022.sciencesconf.org/
SCOPUS Springer Proceedings
****************************************************************************************

OLA is a conference focusing on the future challenges of optimization and learning methods and their applications. The conference OLA'2022 will provide an opportunity for the international research community in optimization and learning to discuss recent research results and to develop new ideas and collaborations in a friendly and relaxed atmosphere.
OLA'2022 welcomes presentations that cover any aspect of optimization and learning research, such as: big optimization and learning, optimization for learning, learning for optimization, optimization and learning under uncertainty, deep learning, new high-impact applications, parameter tuning, the 4th industrial revolution, computer vision, hybridization issues, optimization-simulation, meta-modeling, high-performance computing, parallel and distributed optimization and learning, surrogate modeling, multi-objective optimization, and more.

Submission of papers: We will accept two different types of submissions:
- S1: Extended abstracts of work-in-progress and position papers of a maximum of 3 pages
- S2: Original research contributions of a maximum of 10 pages

Important dates:
===============
Invited session organization: Dec 20, 2021
Paper submission deadline: Jan 28, 2022
Notification of acceptance: March 25, 2022

Proceedings: Accepted papers in categories S1 and S2 will be published in the proceedings. A SCOPUS- and DBLP-indexed Springer book will be published for accepted long papers. Proceedings will be available at the conference.

--
**********************************************************************
OLA'2022 International Conference on Optimization and Learning (SCOPUS, Springer)
18-20 July 2022, Syracuse, Sicilia, Italy
http://ola2022.sciencesconf.org
***********************************************************************
Prof. El-ghazali TALBI
Polytech'Lille, University Lille - INRIA
CRISTAL - CNRS

From poirazi at imbb.forth.gr Tue Jan 11 04:19:54 2022 From: poirazi at imbb.forth.gr (Yiota Poirazi) Date: Tue, 11 Jan 2022 11:19:54 +0200 Subject: Connectionists: DENDRITES 2022 call for abstracts - Feb.
1st, 2022
In-Reply-To:
References:
Message-ID:

DENDRITES 2022
EMBO Workshop on Dendritic Anatomy, Molecules and Function
Heraklion, Crete, Greece, 23-26 May 2022
http://meetings.embo.org/event/20-dendrites

Dear Colleagues,

We are pleased to announce the solicitation of abstracts for short oral or poster presentations at the EMBO Workshop DENDRITES 2022, which will take place in Heraklion, Crete, on 23-26 May 2022. This is the 4th of a very successful series of meetings on the island of Crete dedicated to dendrites. The meeting will bring together scientific leaders from around the globe to present their theoretical and experimental work on dendrites. The meeting program is designed to facilitate discussion of new ideas and discoveries, in a relaxed atmosphere that emphasizes interaction.

Please register (no payment required) and submit your abstract online at: http://meetings.embo.org/event/20-dendrites

Submission of abstracts is due by *February 1st, 2022*
Notifications will be provided by February 28th, 2022
Registration payment is due by April 15th, 2022

Potential attendees are strongly encouraged to submit an abstract, as presenters will have registration priority. For more information about the conference, please refer to our web site or send email to info at mitos.com.gr

We look forward to seeing you in person at DENDRITES 2022!

The organizers,
Yiota Poirazi, Kristen Harris, Matthew Larkum, Michael Häusser

--
Panayiota Poirazi, Ph.D.
Research Director
Institute of Molecular Biology and Biotechnology (IMBB)
Foundation of Research and Technology-Hellas (FORTH)
Vassilika Vouton, P.O. Box 1385, GR 70013, Heraklion, Crete, GREECE
Tel: +30 2810-391139 / -391238
Fax: +30 2810-391101
Email: poirazi at imbb.forth.gr
Lab site: www.dendrites.gr

From J.Spencer at uea.ac.uk Wed Jan 12 03:54:39 2022
From: J.Spencer at uea.ac.uk (John Spencer (PSY - Staff))
Date: Wed, 12 Jan 2022 08:54:39 +0000
Subject: Connectionists: Post-doctoral posts in England / Germany...
In-Reply-To:
References:
Message-ID:

Three-year postdoctoral positions in England / Germany

Profs. John Spencer and Gregor Schoener are looking for two postdoctoral candidates to work on a joint project funded by the Leverhulme Trust. The goal of the project is to construct a theory that explains how the brain flexibly integrates lower-level processes with higher-level language and executive functions. The project will use the framework of Dynamic Field Theory (www.dynamicfieldtheory.org) to both simulate human behaviours and embody the theory on an autonomous robot to explore application to real-world scenarios.

One postdoctoral fellow will be housed in the School of Psychology at the University of East Anglia in Norwich, UK. The other fellow will be housed at the Institute for Neurocomputing at the Ruhr University in Bochum, Germany. Details can be found here: https://myview.uea.ac.uk/webrecruitment/pages/vacancy.jsf?vacancyRef=RA1931

Inquiries can be addressed to John Spencer (j.spencer at uea.ac.uk) or Gregor Schoener (gregor.schoener at ini.rub.de). Applications are due by January 20, 2022. The posts will be available starting March 7, 2022.

John P.
Spencer, PhD
Professor
Developmental Dynamics Lab
https://www.facebook.com/DDPSYUEA
http://www.uea.ac.uk/developmental-dynamics-lab/home
School of Psychology, Room 0.09 Lawrence Stenhouse Building, University of East Anglia, Norwich Research Park, Norwich NR4 7TJ, United Kingdom
Telephone 01603 593968

From joanna.zapisek at eng.ox.ac.uk Wed Jan 12 04:45:22 2022
From: joanna.zapisek at eng.ox.ac.uk (Joanna Zapisek)
Date: Wed, 12 Jan 2022 09:45:22 +0000
Subject: Connectionists: University of Oxford, 2 x Postdoctoral Research Assistants in Machine Learning
In-Reply-To:
References:
Message-ID:

Hello,

We are seeking two Postdoctoral Research Assistants in Machine Learning to join Professor Torr's research group at the Department of Engineering Science (central Oxford). The group is an internationally leading research group that has won numerous scientific awards and has close links with some of the top industrial research labs; more information can be found at https://torrvision.com/. The posts are fixed-term for 2 years in the first instance, with funding provided by the EPSRC.

You will be responsible for the development and implementation of novel computer vision and learning algorithms for reliable, robust, and efficient deep neural networks. You should hold a PhD or DPhil (or be near completion of one) in Computer Vision or Machine Learning. You should also have excellent communication skills, including the ability to write for publication, present research proposals and results, and represent the research group at meetings. Informal enquiries may be addressed to philip.torr at eng.ox.ac.uk.

The University offers a comprehensive range of childcare services and has very generous maternity, adoption, paternity, and shared parental leave schemes in operation. Requests for flexible working are always taken into consideration. We offer an enhanced entitlement to 38 days' annual leave per year (pro rata for part-time staff), inclusive of bank holidays and fixed closure days. Additional long service leave is available after 5 years' service.
An additional scheme enables staff to request to purchase up to ten additional days' annual leave in each holiday year. Other staff benefits can be found here: https://hr.admin.ox.ac.uk/staff-benefits

Only applications received before midday on 16th February 2022 can be considered. You will be required to upload a covering letter/supporting statement, including a brief statement of research interests (describing how past experience and future plans fit with the advertised position), a CV and the details of two referees as part of your online application. The Department holds an Athena Swan Bronze award, highlighting its commitment to promoting women in Science, Engineering and Technology.

To apply go to https://my.corehr.com/pls/uoxrecruit/erq_jobspec_version_4.display_form?p_company=10&p_internal_external=E&p_display_in_irish=N&p_process_type=&p_applicant_no=&p_form_profile_detail=&p_display_apply_ind=Y&p_refresh_search=Y&p_recruitment_id=155286

Thank you,
Joanna

Joanna Zapisek
Senior Research Manager
Professor Torr Vision Group
University of Oxford, Department of Engineering Science
Parks Road, Oxford, OX1 3PJ
t: +44 (0) 1865 273130
e: Joanna.zapisek at eng.ox.ac.uk
w: https://torrvision.com/
I'm working part time (Mon, Wed, Fri full day; Tue and Thurs mornings)

From mpavone at dmi.unict.it Wed Jan 12 11:24:36 2022
From: mpavone at dmi.unict.it (Mario Pavone)
Date: Wed, 12 Jan 2022 17:24:36 +0100
Subject: Connectionists: OLA'2022 deadline approaching
Message-ID: <20220112172436.Horde.DmT0Geph4B9h3wDEp1yzcUA@mbox.dmi.unict.it>

Apologies for cross-posting. We would appreciate it if you could distribute this CFP to your network.

****************************************************************************************
OLA'2022
International Conference on Optimization and Learning
18-20 July 2022
Syracuse (Sicilia), Italy
http://ola2022.sciencesconf.org/
SCOPUS Springer Proceedings
****************************************************************************************

OLA is a conference focusing on the future challenges of optimization and learning methods and their applications. OLA'2022 will provide an opportunity for the international research community in optimization and learning to discuss recent research results and to develop new ideas and collaborations in a friendly and relaxed atmosphere.

OLA'2022 welcomes presentations that cover any aspect of optimization and learning research, such as: big optimization and learning, optimization for learning, learning for optimization, optimization and learning under uncertainty, deep learning, new high-impact applications, parameter tuning, the 4th industrial revolution, computer vision, hybridization issues, optimization-simulation, meta-modeling, high-performance computing, parallel and distributed optimization and learning, surrogate modeling, multi-objective optimization, ...
Paper submissions:
We will accept two different types of submissions:
- S1: Extended abstracts of work-in-progress and position papers of a maximum of 3 pages
- S2: Original research contributions of a maximum of 10 pages

Important dates:
===============
Invited session organization   Dec 20, 2021
Paper submission deadline      Jan 28, 2022
Notification of acceptance     March 25, 2022

Proceedings:
Accepted papers in categories S1 and S2 will be published in the proceedings. A SCOPUS- and DBLP-indexed Springer book will be published for accepted long papers. Proceedings will be available at the conference.

--
Mario F. Pavone, PhD
Associate Professor
Dept of Mathematics and Computer Science
University of Catania
V.le A. Doria 6 - 95125 Catania, Italy
---------------------------------------------
tel: +39 095 7383034
mobile: +39 3384342147
Email: mpavone at dmi.unict.it
http://www.dmi.unict.it/mpavone/
FB: https://www.facebook.com/mfpavone
Skype: mpavone
=========================================================
MIC 2022 - 14th International Metaheuristics Conference
11-14 July 2022, Ortigia-Syracuse, Italy
https://www.ants-lab.it/mic2022/
=========================================================

From ioannakoroni at csd.auth.gr Wed Jan 12 07:39:03 2022
From: ioannakoroni at csd.auth.gr (Ioanna Koroni)
Date: Wed, 12 Jan 2022 14:39:03 +0200
Subject: Connectionists: Live AIDA e-Lecture by Dr. Sebastian Lapuschkin: "Towards Actionable XAI", 25th January 2022 17:00-18:00 CET
References: <003401d80794$5c14f060$143ed120$@csd.auth.gr>
Message-ID: <015e01d807b1$626a9e70$273fdb50$@csd.auth.gr>

Dear AI scientist/engineer/student/enthusiast,

Dr.
Sebastian Lapuschkin, a prominent AI researcher internationally, will deliver the e-lecture "Towards Actionable XAI" on Tuesday 25th January 2022, 17:00-18:00 CET (8:00-9:00 am PST; 12:00 am-1:00 am CST); see details at: http://www.i-aida.org/event_cat/ai-lectures/

You can join for free using the Zoom link: https://authgr.zoom.us/j/91473198783 (Passcode: 148148)

The International AI Doctoral Academy (AIDA), a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR and VISION, is very pleased to offer you top-quality scientific lectures on several current hot AI topics. Lectures are typically held once per week, Tuesdays 17:00-18:00 CET (8:00-9:00 am PST; 12:00 am-1:00 am CST). Attendance is free.

The lectures are disseminated through multiple channels and email lists (we apologize if you received this through several channels). If you want to stay informed about future lectures, you can register in the AIDA email list and the CVML email list.

Best regards,
Profs. M. Chetouani, P. Flach, B. O'Sullivan, I. Pitas, N. Sebe

From maanakg at gmail.com Thu Jan 13 00:11:01 2022
From: maanakg at gmail.com (Maanak Gupta)
Date: Wed, 12 Jan 2022 23:11:01 -0600
Subject: Connectionists: Call for Papers: 27th ACM Symposium on Access Control Models and Technologies
Message-ID:

ACM SACMAT 2022
New York City, New York
-----------------------------------------------
| Hybrid Conference (Online + In-person) |
-----------------------------------------------

Call for Research Papers
==============================================================
Papers offering novel research contributions are solicited for submission. Accepted papers will be presented at the symposium and published by the ACM in the symposium proceedings.
In addition to the regular research track, this year SACMAT will again host the special track "Blue Sky/Vision Track". Researchers are invited to submit papers describing promising new ideas and challenges of interest to the community, as well as access control needs emerging from other fields. We are particularly looking for potentially disruptive new ideas which can shape the research agenda for the next 10 years. We also encourage submissions to the "Work-in-progress Track" to present ideas that may not have been completely developed and experimentally evaluated.

Topics of Interest
==============================================================
Submissions to the regular track covering any relevant area of access control are welcomed. Areas include, but are not limited to, the following:

* Systems:
  * Operating systems
  * Cloud systems and their security
  * Distributed systems
  * Fog and Edge-computing systems
  * Cyber-physical and Embedded systems
  * Mobile systems
  * Autonomous systems (e.g., UAV security, autonomous vehicles, etc.)
  * IoT systems (e.g., home-automation systems)
  * WWW
  * Design for resiliency
  * Designing systems with zero-trust architecture
* Network:
  * Network systems (e.g., Software-defined network, Network function virtualization)
  * Corporate and Military-grade Networks
  * Wireless and Cellular Networks
  * Opportunistic Network (e.g., delay-tolerant network, P2P)
  * Overlay Network
  * Satellite Network
* Privacy and Privacy-enhancing Technologies:
  * Mixers and Mixnets
  * Anonymous protocols (e.g., Tor)
  * Online social networks (OSN)
  * Anonymous communication and censorship resistance
  * Access control and identity management with privacy
  * Cryptographic tools for privacy
  * Data protection technologies
  * Attacks on Privacy and their defenses
* Authentication:
  * Password-based Authentication
  * Biometric-based Authentication
  * Location-based Authentication
  * Identity management
  * Usable authentication
* Mechanisms:
  * Blockchain Technologies
  * AI/ML Technologies
  * Cryptographic Technologies
  * Programming-language based Technologies
  * Hardware-security Technologies (e.g., Intel SGX, ARM TrustZone)
  * Economic models and game theory
  * Trust Management
  * Usable mechanisms
* Data Security:
  * Big data
  * Databases and data management
  * Data leakage prevention
  * Data protection on untrusted infrastructure
* Policies and Models:
  * Novel policy language design
  * New Access Control Models
  * Extension of policy languages
  * Extension of Models
  * Analysis of policy languages
  * Analysis of Models
  * Policy engineering and policy mining
  * Verification of policy languages
  * Efficient enforcement of policies
  * Usable access control policy

New in ACM SACMAT 2022
==============================================================
We are moving ACM SACMAT 2022 to two submission cycles. Authors submitting papers in the first submission cycle will have the opportunity to receive a major revision verdict in addition to the usual accept and reject verdicts. Authors can decide to prepare a revised version of the paper and submit it to the second submission cycle for consideration. Major revision papers will be reviewed by the program committee members based on the criteria set forward by them in the first submission cycle.

Regular Track Paper Submission and Format
==============================================================
Papers must be written in English. Authors are required to use the ACM format for papers, using the two-column SIG Proceedings Template (the sigconf template for LaTeX) available at the following link: https://www.acm.org/publications/authors/submissions

The length of the paper in the proceedings format must not exceed twelve US letter pages formatted for 8.5" x 11" paper and be no more than 5MB in size. It is the responsibility of the authors to ensure that their submissions will print easily on simple default configurations.
The submission must be anonymous, so information that might identify the authors - including author names, affiliations, acknowledgments, or obvious self-citations - must be excluded. It is the authors' responsibility to ensure that their anonymity is preserved when citing their work.

Submissions should be made to the EasyChair conference management system by the paper submission deadline of:
November 15th, 2021 (Submission Cycle 1)
February 18th, 2022 (Submission Cycle 2)

Submission Link: https://easychair.org/conferences/?conf=acmsacmat2022

All submissions must contain a significant original contribution. That is, submitted papers must not substantially overlap papers that have been published or that are simultaneously submitted to a journal, conference, or workshop. In particular, simultaneous submission of the same work is not allowed. Wherever appropriate, relevant related work, including that of the authors, must be cited. Submissions that are not accepted as full papers may be invited to appear as short papers. At least one author from each accepted paper must register for the conference before the camera-ready deadline.

Blue Sky Track Paper Submission and Format
==============================================================
All submissions to this track should be in the same format as for the regular track, but the length must not exceed ten US letter pages, and the submissions are not required to be anonymized (optional). Submissions to this track should be submitted to the EasyChair conference management system by the same deadline as for the regular track.

Work-in-progress Track Paper Submission and Format
==============================================================
Authors are invited to submit papers in the newly introduced work-in-progress track. This track is introduced for (junior) authors, ideally Ph.D. and Master's students, to obtain early, constructive feedback on their work.
Submissions in this track should follow the same format as for the regular track papers while limiting the total number of pages to six US letter pages. Papers submitted in this track should be anonymized and can be submitted to the EasyChair conference management system by the same deadline as for the regular track.

Call for Lightning Talks
==============================================================
Participants are invited to submit proposals for 5-minute lightning talks describing recently published results, work in progress, wild ideas, etc. Lightning talks are a new feature of SACMAT, introduced this year to partially replace the informal sharing of ideas at in-person meetings. Submissions are expected by May 27, 2022. Notification of acceptance will be on June 3, 2022.

Call for Posters
==============================================================
SACMAT 2022 will include a poster session to promote discussion of ongoing projects among researchers in the field of access control and computer security. Posters can cover preliminary or exploratory work with interesting ideas, or research projects in the early stages with promising results, in all aspects of access control and computer security.

Authors interested in displaying a poster must submit a poster abstract in the same format as for the regular track, but the length must not exceed three US letter pages, and the submission should not be anonymized. The title should start with "Poster:". Accepted poster abstracts will be included in the conference proceedings. Submissions should be emailed to the poster chair by Apr 15th, 2022. The subject line should include "SACMAT 2022 Poster:" followed by the poster title.
Call for Demos
==============================================================
A demonstration proposal should clearly describe (1) the overall architecture of the system or technology to be demonstrated, and (2) one or more demonstration scenarios that describe how the audience, interacting with the demonstration system or the demonstrator, will gain an understanding of the underlying technology. Submissions will be evaluated based on the motivation of the work behind the use of the system or technology to be demonstrated and its novelty. The subject line should include "SACMAT 2022 Demo:" followed by the demo title.

Demonstration proposals should be in the same format as for the regular track, but the length must not exceed four US letter pages, and the submission should not be anonymized. A two-page description of the demonstration will be included in the conference proceedings. Submissions should be emailed to the Demonstrations Chair by Apr 15th, 2022.

Financial Conflict of Interest (COI) Disclosure
==============================================================
In the interests of transparency and to help readers form their own judgments of potential bias, ACM SACMAT requires authors and PC members to declare any competing financial and/or non-financial interests in relation to the work described.

Definition
-------------------------
For the purposes of this policy, competing interests are defined as financial and non-financial interests that could directly undermine, or be perceived to undermine, the objectivity, integrity, and value of a publication, through a potential influence on the judgments and actions of authors with regard to objective data presentation, analysis, and interpretation.

Financial competing interests include any of the following:

Funding: Research support (including salaries, equipment, supplies, and other expenses) by organizations that may gain or lose financially through this publication.
A specific role for the funding provider in the conceptualization, design, data collection, analysis, decision to publish, or preparation of the manuscript should be disclosed.

Employment: Recent (while engaged in the research project), present, or anticipated employment by any organization that may gain or lose financially through this publication.

Personal financial interests: Ownership or contractual interest in stocks or shares of companies that may gain or lose financially through publication; consultation fees or other forms of remuneration (including reimbursements for attending symposia) from organizations that may gain or lose financially; patents or patent applications (awarded or pending) filed by the authors or their institutions whose value may be affected by publication. For patents and patent applications, disclosure of the following information is requested: patent applicant (whether author or institution), name of the inventor(s), application number, status of the application, and the specific aspect of the manuscript covered in the patent application.

It is difficult to specify a threshold at which a financial interest becomes significant, but note that many US universities require faculty members to disclose interests exceeding $10,000 or 5% equity in a company. Any such figure is necessarily arbitrary, so we offer as one possible practical alternative guideline: "Any undeclared competing financial interests that could embarrass you were they to become publicly known after your work was published."

We do not consider diversified mutual funds or investment trusts to constitute a competing financial interest. Also, for employees in non-executive or leadership positions, we do not consider financial interest related to stocks or shares in their company to constitute a competing financial interest, as long as they are publishing under their company affiliation.
Non-financial competing interests: Non-financial competing interests can take different forms, including personal or professional relations with organizations and individuals. We encourage authors and PC members to declare any unpaid roles or relationships that might have a bearing on the publication process. Examples of non-financial competing interests include (but are not limited to):

* Unpaid membership in a government or non-governmental organization
* Unpaid membership in an advocacy or lobbying organization
* Unpaid advisory position in a commercial organization
* Writing or consulting for an educational company
* Acting as an expert witness

Conference Code of Conduct and Etiquette
==============================================================
ACM SACMAT will follow the ACM Policy Against Harassment at ACM Activities. Please familiarize yourself with the ACM Policy Against Harassment (available at https://www.acm.org/special-interest-groups/volunteer-resources/officers-manual/policy-against-discrimination-and-harassment) and the guide to Reporting Unacceptable Behavior (available at https://www.acm.org/about-acm/reporting-unacceptable-behavior).

AUTHORS TAKE NOTE
==============================================================
The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks before the first day of your conference. The official publication date affects the deadline for any patent filings related to published work. (For those rare conferences whose proceedings are published in the ACM Digital Library after the conference is over, the official publication date remains the first day of the conference.)
Important dates
==============================================================
**Note that these dates are currently tentative and subject to change.**

* Paper submission: November 15th, 2021 (Submission Cycle 1); February 18th, 2022 (Submission Cycle 2)
* Rebuttal: December 16th - December 20th, 2021 (Submission Cycle 1); March 24th - March 28th, 2022 (Submission Cycle 2)
* Notifications: January 14th, 2022 (Submission Cycle 1); April 8th, 2022 (Submission Cycle 2)
* Systems demo and Poster submissions: April 15th, 2022
* Systems demo and Poster notifications: April 22nd, 2022
* Panel Proposal: March 18th, 2022
* Camera-ready paper submission: April 29th, 2022
* Conference date: June 8 - June 10, 2022

From a.pucyk at icm.edu.pl Thu Jan 13 03:57:11 2022
From: a.pucyk at icm.edu.pl (Alicja Pucyk)
Date: Thu, 13 Jan 2022 09:57:11 +0100
Subject: Connectionists: [Call for Participation] Jan 20, 4pm CET | Free Virtual ICM Seminar on reconstructing all neurons in a fly brain at nanometer resolution
Message-ID: <058a01d8085b$8e02bd60$aa083820$@icm.edu.pl>

=============================================================
21. Virtual ICM Seminar with Sven Dorkenwald from Princeton University
=============================================================

TITLE: Towards whole-brain Connectomes: Reconstructing all neurons in a fly brain at nanometer resolution
DATE: Thursday, January 20, 2022 | 4pm CET
FREE registration: https://supercomputingfrontiers.eu/2022/seminars/

ICM University of Warsaw, together with the creator of this series, Dr. Marek Michalewicz, is proud to invite everyone to this #VirtualICMSeminar with Sven Dorkenwald, who is developing systems, infrastructure and machine learning methods to facilitate the analysis of large-scale connectomics datasets, and who runs FlyWire.ai. Don't miss it! Register NOW.
_Abstract
Comprehensive neuronal wiring diagrams derived from Electron Microscopy images allow researchers to test models of how brain circuits give rise to neuronal activity and drive behavior. Due to advances in automated image acquisition and analysis, whole-brain connectomes with thousands of neurons are finally on the horizon. However, many person-years of manual proofreading are still required to correct errors in these automated reconstructions. We created FlyWire to facilitate the proofreading of neuronal circuits in an entire fly brain by a community of researchers distributed across the world. While FlyWire is dedicated to the fly brain, its methods will be generally applicable to whole-brain connectomics and are already in use to proofread multiple datasets. In this talk I will describe how FlyWire's computational and social structures are organized to scale up to whole-brain connectomics and present our progress towards the generation of a proofread whole-brain connectome of the fruit fly.

_BIOSKETCH
Sven Dorkenwald is currently a PhD student in the Seung Lab at Princeton University. In his PhD he is developing systems, infrastructure and machine learning methods to facilitate the analysis of large-scale connectomics datasets. Together with collaborators at the Allen Institute for Brain Science, he developed proofreading and annotation infrastructure that is used to host multiple large-scale connectomics datasets and runs FlyWire. FlyWire.ai is an online community for proofreading neural circuits in a whole fly brain based on the FAFB EM dataset.
From heather at incf.org Thu Jan 13 04:01:36 2022
From: heather at incf.org (Heather Topple)
Date: Thu, 13 Jan 2022 10:01:36 +0100
Subject: Connectionists: Working group publication: Active Segmentation for ImageJ
Message-ID:

INCF's Active Segmentation for ImageJ working group - including GSoC students - has just published a paper in Brain Sciences as part of a special issue on neuroinformatics and signal processing. Congratulations to mentors Dimiter Prodanov & Sumit Vohra, and GSoC students Mukesh Gupta, Sanjeev Dubay, Joanna Stachera, Raghavendra Singh Chauhan, & Piyumal Demotte!

Learn more about it here: bit.ly/INCFwgActiveSegblog
Read the paper here: bit.ly/ActiveSegmentationpaper

Please feel free to forward this email to anyone in your network you feel would be interested.

All the best,
----------------------------
The INCF Secretariat
International Neuroinformatics Coordinating Facility
Secretariat, Karolinska Institutet, Nobels väg 15A, SE-171 77 Stockholm, Sweden
Phone: +46 085 248 70 65
incf.org
neuroinformatics.incf.org

From gianluca.baldassarre at gmail.com Thu Jan 13 04:19:33 2022
From: gianluca.baldassarre at gmail.com (Gianluca Baldassarre)
Date: Thu, 13 Jan 2022 10:19:33 +0100
Subject: Connectionists: Event on "REAL 2021-2022 - Robot open-Ended Learning competition", 18/01/2022 15:00-16:30 (CET)
In-Reply-To:
References:
Message-ID:

*** Event on the "REAL 2021-2022 - Robot open-Ended Learning competition" (free participation) ***

On 18 January 2022, there will be an online presentation of the competition followed by a hands-on demonstration. The free event will be organised as follows (times refer to CET - Central European Time):

*15:00-15:45*:
- objectives and scope of the competition;
- tour of the simulation: rules, organisation.
*15:45-16:30* (and longer if needed or requested): - installation of the competition software 'starting kit' on your computer; - run of the 'baseline model' as a first solution to the competition challenge; - modifications of the baseline model; - demo on how to develop new algorithms to test. The event will be streamed openly via Google Meet at: meet.google.com/qtj-wexv-pxo ** *Features of the REAL 2021-2022 - Robot open-Ended Learning competition* ** The third edition of the REAL 2021-2022 competition ("Robot open-Ended Autonomous Learning"), which started in 2021 and will end in 2022, aims to develop a benchmark in the field of open-ended learning robots. Moreover, the competition aims to form a community focused on comparing models able to face the several challenges posed by open-ended learning. In the competition, a simulated camera-arm-gripper robot goes through two phases: (1) "Intrinsic phase": in a first long (15M simulation steps) learning phase the robot should acquire sensorimotor competence in a fully autonomous way (no extrinsic reward functions, pre-wired knowledge on objects and actions, etc.), on the basis of mechanisms such as free exploration, curiosity, autonomous curriculum learning, intrinsic motivations, and self-generated goals; (2) "Extrinsic phase": the robot is tested with 50 "extrinsic goals", unknown to the robot during the intrinsic phase, used to measure the quality of the knowledge autonomously acquired in the first phase. 
The key features of the competition as a benchmark for open-ended learning are that: (a) no information on tasks or on the specific domain can be given to the robot during the first phase (full autonomy); (b) the extrinsic phase allows a rigorous measure of the quality of the knowledge that the robot autonomously acquired in the intrinsic phase on the basis of the extrinsic goals representing a sample of all possible goals/tasks that might be randomly drawn in the given environment; this unique measure of the autonomously acquired knowledge facilitates the comparison and improvement of models as it abstracts over the possible "autonomy tricks" that different competing robots might use during the intrinsic phase. The benchmark is very challenging because during the intrinsic phase it requires the robot to be able to do all these things at the same time in a fully autonomous way: (a) learn what objects are (e.g., location and identity) from raw pixel images; (b) learn motor skills, in particular, to get in contact with the objects and move them (without moving the objects, it is difficult to distinguish them from the rest of the environment); (c) since during the intrinsic phase the environment is never reset (unless objects fall off the table) the robot has to face continuously changing environmental conditions. The competition is based on a fully open-source software kit that relies on a very fast 3D simulator (PyBullet) of the Kuka robotic arm with a gripper. Moreover, it includes a fully functioning and modifiable "baseline" robot architecture, based on modular well-commented software, to facilitate the initial development of your own models given the multiple challenges that have to be faced at the same time. The kit thus allows convenient development of your models on your own computer before submitting them to the competition website, and it can be used to do research on robot open-ended learning. 
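The two-phase structure described above can be sketched schematically. The snippet below is a deliberately toy illustration of the protocol (the 1-D environment, the controller, and the goal score are hypothetical stand-ins, not the starting kit's actual API): the point is only the separation between reward-free autonomous learning and goal-based evaluation.

```python
import random

class Controller:
    """Toy stand-in for a competitor's agent (hypothetical, not the kit's API)."""
    def __init__(self):
        self.memory = []

    def act(self, obs):
        return random.uniform(-1.0, 1.0)

    def learn(self, obs, action, next_obs):
        # Intrinsic phase: no extrinsic reward is ever observed.
        self.memory.append((obs, action, next_obs))

def env_step(state, action):
    """Trivial 1-D dynamics standing in for the PyBullet camera-arm-gripper sim."""
    return state + 0.1 * action

def intrinsic_phase(controller, steps):
    """Fully autonomous exploration: no tasks, no rewards, no resets."""
    obs = 0.0
    for _ in range(steps):
        action = controller.act(obs)
        next_obs = env_step(obs, action)
        controller.learn(obs, action, next_obs)
        obs = next_obs

def extrinsic_phase(controller, goals, horizon=20):
    """Score the controller on goals it never saw during the intrinsic phase."""
    total = 0.0
    for goal in goals:
        obs = 0.0
        for _ in range(horizon):
            obs = env_step(obs, controller.act(obs))
        total += max(0.0, 1.0 - abs(obs - goal))  # closeness to the target state
    return total / len(goals)

controller = Controller()
intrinsic_phase(controller, steps=1000)
score = extrinsic_phase(controller, goals=[0.5, -0.5, 1.0])
```

In the real benchmark the toy environment is replaced by the Kuka simulation and the closeness score by the competition's own metric over the 50 extrinsic goals.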
The top 3 winning teams will receive prizes, as indicated on the competition website. Important dates of the competition: - 23/08/2021: competition started - 18/01/2022: hands-on presentation of the competition - 04/04/2022: hands-on micro-workshop at the Intrinsically Motivated Open-ended Learning Workshop (IMOL 2022) - 24/06/2022: competition ends - 12-15/09/2022: presentation of winners at the International Conference on Development and Learning (ICDL 2022; also launching REAL 2022) For further details, please refer to the competition website: https://eval.ai/web/challenges/challenge-page/1134/overview .|.CS...|.......|...............|..|......US.|||.|||||.||.||||..|...|....... Gianluca Baldassarre, Ph.D., Director of Research, Laboratory of Embodied Natural and Artificial Intelligence, Istituto di Scienze e Tecnologie della Cognizione, Consiglio Nazionale delle Ricerche (LENAI-ISTC-CNR), Via San Martino della Battaglia 44, I-00185 Roma, Italy. Coordinator of LENAI Research Group: https://www.istc.cnr.it/it/group/locen President of "Advanced School in AI": www.as-ai.org President of "Associazione culturale science2mind": www.science2mind.org Co-founder and R&D Officer of Spin-off CNR - "Startup innovativa AI2Life s.r.l.": https://ai2life.com E-mail: gianluca.baldassarre at istc.cnr.it Web: http://www.istc.cnr.it/people/gianluca-baldassarre Tel: +39 06 44 595 231 Skype: gianluca.baldassarre View of life: 'Learn from the past, live in(tensely) the present, dream for the future' Ultimate life mission: 'Serve humanity through core knowledge' ...CS.|||.||.|||.||..|.......|........|...US.|.|....||..|..|......|......... -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From t.kosmala at qmul.ac.uk Thu Jan 13 10:56:04 2022 From: t.kosmala at qmul.ac.uk (Tomasz Kosmala) Date: Thu, 13 Jan 2022 15:56:04 +0000 Subject: Connectionists: =?windows-1252?q?Final_call=3A_Pathways_to_Net_Ze?= =?windows-1252?q?ro=92_Reinforcement_Learning_challenge_starts_on_Monday?= Message-ID: Reinforcement Learning Challenge, 17th-31st January 2022 The Net Zero Technology Centre, Alan Turing Institute, RangL project and Oxquant announce the "Pathways to Net Zero" Reinforcement Learning challenge, which will take place from 17th-31st January 2022. As showcased at COP26, the challenge is to control the rate of deployment of zero-carbon technologies towards net zero UK carbon emissions in 2050. Sign up to compete as an individual or team. More info and sign up: https://rangl.org/ COP26 video: https://vimeo.com/632748761 Best wishes, Tomasz Kosmala RangL ---------------- Tomasz Kosmala School of Mathematical Sciences Queen Mary University of London t.kosmala at qmul.ac.uk -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at bu.edu Thu Jan 13 11:14:27 2022 From: steve at bu.edu (Grossberg, Stephen) Date: Thu, 13 Jan 2022 16:14:27 +0000 Subject: Connectionists: Two online interviews about my work in neural networks over the years In-Reply-To: References: Message-ID: Dear Connectionist colleagues, I have listed below URLs for two recent interviews with me about my life and work, including my recently published Magnum Opus called Conscious Mind, Resonant Brain: How Each Brain Makes a Mind: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Interview with Stephen Grossberg in January 2021 in the Brain Stream series of podcasts by The BCI Guys, Harrison Canning and Colin Fausnaght. 
The interview reviews Grossberg's work from when he began modeling how each brain makes a mind as a college Freshman in 1957 to the present, including his new book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. https://lnkd.in/ejTaXaWA Interview with Stephen Grossberg in December 2021 for IEEE Spectrum by Kathy Pretz. The topic is: Deep Learning Can't Be Trusted. The interview discusses computational weaknesses of Deep Learning and how Adaptive Resonance Theory overcomes them. https://lnkd.in/ewKKAJhF Best, Steve Grossberg Stephen Grossberg http://en.wikipedia.org/wiki/Stephen_Grossberg http://scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en https://youtu.be/9n5AnvFur7I https://www.youtube.com/watch?v=_hBye6JQCh4 https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Wang Professor of Cognitive and Neural Systems Director, Center for Adaptive Systems Professor Emeritus of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering Boston University sites.bu.edu/steveg steve at bu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From bingham at cs.utexas.edu Thu Jan 13 16:46:19 2022 From: bingham at cs.utexas.edu (bingham at cs.utexas.edu) Date: Thu, 13 Jan 2022 21:46:19 +0000 Subject: Connectionists: AutoInit software for model initialization Message-ID: AutoInit is a weight initialization method that automatically adapts to different neural network architectures. It tracks the mean and variance of signals as they propagate through the network and initializes the weights at each layer to avoid exploding or vanishing signals. AutoInit can be used to improve performance of feedforward, convolutional, and residual networks; configured with different activation function, dropout, weight decay, learning rate, and normalizer settings; and applied to vision, language, tabular, multi-task, and transfer learning domains. 
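The mean-and-variance tracking idea behind AutoInit can be illustrated with a small NumPy sketch. This is only a schematic of the principle (rescale each layer's weights so the empirical variance of the propagated signal stays near 1); the function name and layer setup are hypothetical, and it is not the AutoInit package's actual algorithm or API.

```python
import numpy as np

rng = np.random.default_rng(0)

def variance_aware_init(layer_sizes, batch):
    """Illustrative init: scale each weight matrix so the empirical variance
    of its pre-activations (estimated on a sample batch) stays near 1."""
    weights, h = [], batch
    for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.normal(size=(fan_in, fan_out))
        z = h @ W
        W = W / (z.std() + 1e-8)     # rescale: signal neither explodes nor vanishes
        weights.append(W)
        h = np.maximum(h @ W, 0.0)   # ReLU; its effect is folded into the next scale
    return weights

batch = rng.normal(size=(256, 64))
weights = variance_aware_init([64, 128, 128, 10], batch)
```

Because the scale is estimated from the propagated signal itself, the same scheme adapts automatically to different depths, activations, and layer widths, which is the intuition behind architecture-agnostic initialization.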
The software package provides a simple wrapper that makes it possible to apply AutoInit to existing TensorFlow models as-is. We invite you to try it out and see if it can improve the performance of your neural network models! For further details, see - GitHub repo: https://github.com/cognizant-ai-labs/autoinit - arXiv paper: https://arxiv.org/abs/2109.08958 AutoInit is also available through the Cognizant AI Labs Software page, together with related software on estimating model uncertainty, multitasking, loss-function metalearning, decision making, and model management, at - https://evolution.ml/software -- Garrett & Risto -------------- next part -------------- An HTML attachment was scrubbed... URL: From papaleon at sch.gr Thu Jan 13 12:04:32 2022 From: papaleon at sch.gr (Papaleonidas Antonios) Date: Thu, 13 Jan 2022 19:04:32 +0200 Subject: Connectionists: 18th AIAI 2022 Hybrid @ Crete, Greece - Call for Papers Message-ID: <049201d8089f$a2d4ffc0$e87eff40$@sch.gr> 18th AIAI 2022, 17 - 20 June 2022 Hybrid@ Web & Aldemar Knossos Royal, Crete, Greece www.ifipaiai.org/2022 CALL FOR PAPERS for 18th AIAI 2022 Hybrid @ Web & Crete, Greece Dear Colleagues We would like to invite you to submit your work at the 18th International Conference on Artificial Intelligence Applications and Innovations ( AIAI2022) 18th International Conference on Artificial Intelligence Applications and Innovations, AIAI 2022, is technically sponsored by IFIP Artificial Intelligence Applications WG12.5. It is going to be co-organized as a Joint event with 23rd Conference on Engineering Applications of Neural Networks, EANN 2022, which is technically sponsored by the INNS (International Neural Network Society). 
SPECIAL ISSUES - PROCEEDINGS: Selected papers will be published in 4 special issues of high-quality international scientific journals: * World Scientific journal, International Journal of Neural Systems, Impact factor 5.87 * Springer journal, Neural Computing and Applications, Impact Factor 5.61 * IEEE journal, International Journal of Biomedical and Health Informatics, Impact factor 5.772 * Springer journal, AI & Ethics PROCEEDINGS will be published in the SPRINGER IFIP AICT Series and they are INDEXED BY SCOPUS, DBLP, Google Scholar, ACM Digital Library, IO-Port, MathSciNet, CPCI, Zentralblatt MATH and EI Engineering Index Paper submissions should be between 6 and 12 pages long. BIBLIOMETRIC DETAILS: We proudly announce that according to Springer's statistics, the last 15 AIAI conferences have been downloaded 1,719,00 times! The IFIP AIAI series has reached an h-index of 29 and published papers have been cited more than 6000 times! For more Bibliometric Details please see the AIAI BIBLIOMETRIC DETAILS page IMPORTANT DATES: * Paper Submission Deadline: 25th of February 2022 * Notification of Acceptance: 26th of March 2022 * Camera ready Submission: 22nd of April 2022 * Early / Authors Registration Deadline: 22nd of April 2022 * Conference: 17 - 20 of June 2022 WORKSHOPS & SPECIAL SESSIONS: So far, the following 7 high-quality Workshops & Special Sessions have been accepted and scheduled: * 11th Mining Humanistic Data Workshop (MHDW 2022) * 7th Workshop on "5G - Putting Intelligence to the Network Edge" 
(5G-PINE 2021) * 2nd Distributed AI for Resource-Constrained Platforms Workshop (DARE 2022) * 2nd Artificial Intelligence in Biomedical Engineering and Informatics (AI-BEI 2022) * 2nd Artificial Intelligence & Ethics Workshop (AIETH 2022) * AI in Energy, Buildings and Micro-Grids Workshop (AIBMG) * Machine Learning and Big Data in Health Care (ML@HC) For more info please visit the AIAI 2022 workshop info page KEYNOTE SPEAKERS: So far two Plenary Lectures have been announced, both by distinguished Professors with an important imprint in AI and Machine Learning. * Professor Hojjat Adeli, Ohio State University, Columbus, USA, Fellow of the Institute of Electrical and Electronics Engineers (IEEE), Honorary Professor, Southeast University, Nanjing, China, Member, Polish and Lithuanian Academy of Sciences, Elected corresponding member of the Spanish Royal Academy of Engineering. * Professor Riitta Salmelin, Department of Neuroscience and Biomedical Engineering, Aalto University, Finland For more info please visit the AIAI 2022 Keynote info page VENUE: ALDEMAR KNOSSOS ROYAL Beach Resort on the Hersonissos peninsula, Crete, Greece. Special Half Board prices have been arranged for the conference delegates in the Aldemar Knossos Royal Beach Resort. For details please see: https://ifipaiai.org/2022/venue/ Conference topics, CFPs, Submissions & Registration details can be found at: * ifipaiai.org/2022/calls-for-papers/ * ifipaiai.org/2022/paper-submission/ * ifipaiai.org/2022/registration/ We are expecting submissions on all topics related to Artificial and Computational Intelligence and their Applications. 
Detailed Guidelines on the Topics and the submission details can be found at the links above General co-Chairs: * Ilias Maglogiannis, University of Piraeus, Greece * John Macintyre, University of Sunderland, United Kingdom Program co-Chairs: * Lazaros Iliadis, School of Engineering, Democritus University of Thrace, Greece * Konstantinos Votis, Information Technologies Institute, ITI Thessaloniki, Greece * Vangelis Metsis, Texas State University, USA *** Apologies for cross-posting *** Dr Papaleonidas Antonios Organizing - Publication & Publicity co-Chair of 23rd EANN 2022 & 18th AIAI 2022 Civil Engineering Department Democritus University of Thrace papaleon at civil.duth.gr papaleon at sch.gr -------------- next part -------------- An HTML attachment was scrubbed... URL: From heather at incf.org Fri Jan 14 05:37:33 2022 From: heather at incf.org (Heather Topple) Date: Fri, 14 Jan 2022 11:37:33 +0100 Subject: Connectionists: INCF IC preprint: Recommendations for repositories and scientific gateways from a neuroscience perspective Message-ID: The INCF Infrastructure Committee (IC) - which provides recommendations for the strategic direction of INCF's infrastructure strategy and development - has a preprint available on arXiv. The preprint, titled *Recommendations for repositories and scientific gateways from a neuroscience perspective*, is a set of recommendations that can apply to a wide and diverse range of digital services, both repositories and science gateways, for data as well as software. These recommendations have neuroscience as their primary use case but are often general. Congratulations to authors and members Malin Sandström, Mathew Abrams, Jan Bjaalie, Mona Hicks, David Kennedy, Arvind Kumar, JB Poline, Prasun Roy, Paul Tiesinga, Thomas Wachtler, and Wojtek Goscinski! 
Learn more about it here: http://bit.ly/INCFICpreprintblog Read the paper here: bit.ly/INCFICpreprintarXiv Please feel free to forward this email to anyone in your network you feel would be interested. All the best, /Heather ---------------------------- Heather Topple, BSc *Project Assistant* *Development and Communications* International Neuroinformatics Coordinating Facility Secretariat Karolinska Institutet. Nobels väg 15A, SE-171 77 Stockholm. Sweden Email: heather at incf.org Phone: +46 085 248 70 65 incf.org neuroinformatics.incf.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From angelo.cangelosi at manchester.ac.uk Fri Jan 14 05:41:11 2022 From: angelo.cangelosi at manchester.ac.uk (Angelo Cangelosi) Date: Fri, 14 Jan 2022 10:41:11 +0000 Subject: Connectionists: Call for Papers - IEEE International Conference on Development and Learning ICDL 2022, (12th edition of the ICDL-EPIROB Conference). Message-ID: <5C99756C-EB20-4899-92EA-4A3135FDB57C@manchester.ac.uk> IEEE International Conference on Development and Learning ICDL 2022 (12th edition of the ICDL-EPIROB Conference). Conference dates: September 12-15, 2022 Web page: http://icdl-2022.org An IEEE conference sponsored by CIS (Computational Intelligence Society) Location: London, UK Deadline for Submission of Regular Papers, Tutorial and Workshops Proposals, and Journal Track Papers: March 18th, 2022 (24:00 PST) ==== Overview ==== The IEEE International Conference on Development and Learning (ICDL), previously referred to as ICDL-EpiRob, is the premier gathering of professionals dedicated to the advancement of cognitive and developmental learning. As such, ICDL is a unique conference gathering researchers from computer science, robotics, psychology and developmental science. 
We invite submissions for the conference in 2022 to explore, extend, and consolidate the interdisciplinary boundaries of this exciting research field, in the form of full papers, workshops and tutorials, and submissions to the journal track. We also invite your participation in the SmartBot Challenge and in the REAL 2022 competition. More details on the call can be found at: https://icdl2022.qmul.ac.uk/?page_id=107 ==== Topics ==== The primary list of topics of interest includes (but is not limited to): - General principles of development and learning; - Development of skills in biological systems and robots; - Nature vs. nurture, critical periods and developmental stages; - Models of active learning; - Architectures for cognitive development and life-long learning; - Emergence of body knowledge and affordance perception; - Analysis and modelling of human motion and state; - Models for prediction, planning and problem solving; - Models of human-human and human-robot interaction; - Emergence of verbal and non-verbal communication skills; - Epistemological foundations and philosophical issues; - Models of child development from experimental psychology. ==== Full Papers ==== Papers of at most 6 pages in IEEE double column format will undergo peer-review, and accepted and presented submissions will be included in the conference proceedings published by IEEE Xplore. The authors of the best conference papers will be invited to extend their contributions for a Special Issue of IEEE Transactions on Cognitive and Developmental Systems (IEEE TCDS). A maximum of two extra pages is acceptable for a publication fee of $100 per page. Contributed papers can participate in the SmartBot Challenge, if they meet the eligibility requirements (see below). Authors should tick the appropriate box during submission if they want their paper to be considered for the SmartBot Challenge. 
The detailed instructions and templates for the submission can be found here: https://icdl2022.qmul.ac.uk/?page_id=209 ==== Workshops and Tutorials ==== We invite experts in different areas to organize either a tutorial or a workshop to be held on the first day of the conference (September 12, 2022). Tutorials are meant to provide insights into specific topics through hands-on training and interactive experiences. Workshops are exciting opportunities to present a focused research topic cumulatively. Tutorials and workshops can be half- or full-day in duration, including oral presentations, posters and live demonstrations. Submission format: two double-column pages in standard IEEE format including title, duration (half day or full day), concept, target audience, list of speakers, open to paper/poster submission, website link. The detailed instructions and templates for the submission can be found here: https://icdl2022.qmul.ac.uk/?page_id=108 ==== Journal Track ==== Authors are encouraged to submit applications for Journal Track talks. The application must be about a journal paper that has been published already, in the March 2021 - March 2022 period, on a topic relevant to ICDL. All submissions must be made by email before March 18th 2022, including (in a single PDF): - Title of the original journal paper. - Abstract of the original journal paper. - A complete reference to the original paper in APA format. - URL where the paper can be shown to be formally published by the publisher (even if early access). - URL where the paper with its final camera-ready contents can be freely downloaded for the evaluation process. - A brief description (no more than half a page) explaining why the authors believe that the paper is relevant to ICDL. The ICDL committee will select a limited number of applications through an expedited evaluation process that will consider the quality of the journal paper and the relevance to ICDL. 
The authors of the selected applications will be invited to present their work (oral presentation) during a dedicated session at the conference. The detailed instructions for the submission can be found here: https://icdl2022.qmul.ac.uk/?page_id=109 ==== Competitions ==== We also strongly encourage your participation in the SmartBot and the REAL competitions. The SmartBot Challenge is designed to help strengthen the bridge between two research communities: those who study learning and development in humans and those who study comparable processes in artificial systems. When submitting your 6-page paper to ICDL 2022 you can specify that you want it to participate in the challenge. To be eligible, your paper should describe a computational or robotic model that explains one or several studies from the infant development literature. More details on the criteria to be eligible for the challenge here: https://icdl2022.qmul.ac.uk/?page_id=241 The REAL competition addresses open-ended learning with a focus on "Robot open-Ended Autonomous Learning" (REAL), that is, on systems that: (a) acquire sensorimotor competence that allows them to interact with objects and physical environments; (b) learn in a fully autonomous way, i.e. with no human intervention. The competition has a two-phase structure: during the first "intrinsic phase" your model has a certain time to explore and learn in the environment freely. Then during the second "extrinsic phase" the quality of the knowledge acquired in the intrinsic phase is measured with tasks unknown to the robot during this autonomous phase. Find all the details to participate here: https://icdl2022.qmul.ac.uk/?page_id=243 ==== Important Dates ==== Conference dates: September 12-15, 2022 Workshops/Tutorials Date: September 12th, 2022 Submission deadline: March 18th, 2022 (24:00 PST) (applies to Papers, Workshop and Tutorial proposals and Journal Track submissions). 
Paper Author Notification: May 15th, 2022 Paper Camera Ready Due: June 12th, 2022 Workshop and Tutorial Notification: April 24th, 2022 Workshop and Tutorial Camera Ready Due: May 22nd, 2022 Journal Track Papers Notification: May 15th, 2022 SmartBot Challenge Submission Deadline: March 18th, 2022 (24:00 PST) REAL Competition: See dedicated website - https://eval.ai/web/challenges/challenge-page/1134/overview ================= Best regards, The ICDL 2022 Organizing Committee. https://icdl2022.qmul.ac.uk/?page_id=37 General Chairs Lorenzo Jamone (Queen Mary Univ. of London, UK) Yukie Nagai (Univ. of Tokyo, Japan) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mm at di.ku.dk Fri Jan 14 05:57:04 2022 From: mm at di.ku.dk (Maria Maistro) Date: Fri, 14 Jan 2022 10:57:04 +0000 Subject: Connectionists: 2-year fully funded postdoc on "speed of light search engines" Message-ID: <7353DBBD-3E78-4EB4-ABDB-A87CBF001ED6@ku.dk> The Machine Learning section of the Department of Computer Science at the Faculty of Science at the University of Copenhagen (DIKU) is offering a 2-year fully-funded postdoctoral position in applied Machine Learning and Information Retrieval, commencing 1 May 2022 or as soon as possible thereafter. The deadline for applications is February 9, 2022, 23:59 GMT +1. Description of the scientific environment The postdoctoral fellow will join the Machine Learning Section at DIKU. The Machine Learning section is among the leading research environments in Artificial Intelligence and Web & Information Retrieval in Europe (in the top 5 for 2020, according to csrankings.org), with a strong presence at top-tier conferences, continuous collaboration in international & national research networks, and solid synergies with big tech, small tech, and industry. 
The Machine Learning section consists of a vibrant selection of approximately 65 talented researchers (40 of whom are PhD and postdoctoral fellows) from around the world, with a diverse set of backgrounds and a shared, incessant scientific curiosity and openness to innovation. DIKU is one of the oldest Computer Science departments in Europe, founded by Turing Award Winner Peter Naur. The postdoctoral fellow will conduct research on the use and development of photonic deep neural architectures for search engines. Inquiries about the position can be made to Christina Lioma at c.lioma at di.ku.dk. Job description Supervision of bachelor and master theses and teaching are encouraged, but not required. Travel funding and computing resources are covered. The University of Copenhagen is currently expanding strongly in computer science. We expect to have tenure-track openings in the very near future, and welcome postdoctoral researchers interested in exploring such opportunities. Formal requirements Postdoctoral applicants should have or be about to receive a PhD degree in a subject relevant to the research area. The successful candidate is expected to have a solid background in applied Machine Learning and/or Information Retrieval. The candidate should have a strong research record as evidenced by publications in top venues. The University of Copenhagen The University of Copenhagen was founded in 1479 and is the oldest and largest institution of research and education in Denmark. It is a member of the International Alliance of Research Universities, alongside the Universities of Cambridge, Oxford and Yale, and has produced 9 Nobel prize winners. 
Various academic rankings see the University of Copenhagen as one of the leading institutions in Europe and the world, and its study programs meet the most stringent international standards for higher education based on the Standards and Guidelines for Quality Assurance in the European Higher Education Area and the Danish Accreditation Institution guidelines. Life in Copenhagen, Denmark International surveys repeatedly rank Denmark among the world's happiest countries, with a high level of social and gender equality, community spirit, and a strong sense of common responsibility for social welfare. Copenhagen is the capital of Denmark, buzzing with culture, sustainable living, modern architecture, royal history, and a mouthwatering restaurant scene. Terms of employment The position is covered by the Memorandum on Job Structure for Academic Staff. Terms of appointment and payment accord with the agreement between the Ministry of Finance and The Danish Confederation of Professional Associations on Academics in the State. Negotiation for salary supplement is possible. Application Procedure The application, in English, should be submitted electronically by clicking APPLY NOW below. Please include - Curriculum vitae. - Diploma and transcripts of records (PhD, MSc and BSc, or equivalent). - List of publications, including links to the applicant's pages at DBLP and Google Scholar. - Brief research statement with description of research so far and future research goals and interests (maximum 1 page). - Copies of 3 selected research papers. - Reference letters (if available). The University wishes our staff to reflect the diversity of society and thus welcomes applications from all qualified candidates regardless of personal background. The deadline for applications is February 9, 2022, 23:59 GMT +1. Questions Inquiries about the positions can be made to Christina Lioma at c.lioma at di.ku.dk. 
You can read about the recruitment process at https://employment.ku.dk/faculty/recruitment-process/. Maria Maistro, PhD Tenure-track Assistant Professor Department of Computer Science University of Copenhagen Universitetsparken 5, 2100 Copenhagen, Denmark -------------- next part -------------- An HTML attachment was scrubbed... URL: From alberto.nogales at ceiec.es Fri Jan 14 06:40:05 2022 From: alberto.nogales at ceiec.es (Alberto Nogales Moyano) Date: Fri, 14 Jan 2022 12:40:05 +0100 Subject: Connectionists: EEGraph new version. Python library to model brain signals as graphs. Message-ID: Dear all, we have updated this library that can help scientists involved in biosignal processing. This library lets you model your EEGs as graphs, taking into account brain connectivity and different measures. Once you obtain these graphs you can work with convolutional neural networks or graph convolutional neural networks. Please give us a star so we can make it more popular. https://github.com/ufvceiec/EEGRAPH Hope you find it useful. -- G -------------- next part -------------- An HTML attachment was scrubbed... URL: From Pavis at iit.it Fri Jan 14 08:20:05 2022 From: Pavis at iit.it (Pavis) Date: Fri, 14 Jan 2022 13:20:05 +0000 Subject: Connectionists: Hackathon on covid-19 prognosis from images and clinical data Message-ID: The 2nd Covid CXR Hackathon - Artificial Intelligence for Covid-19 prognosis - will start February 1 2022 with an official launch at the Dubai Expo 2020 during a joint event on AI for health endorsed by the Italian and Israeli governments. The Covid CXR Hackathon challenges the participants to implement and deploy machine learning-based solutions to aid Covid-19 prognosis from early chest X-rays and clinical data. 
One of the main challenges of this hackathon will be the need to process real-world data collected from several different healthcare facilities upon first hospitalization in near-emergency conditions during the first outbreak in Northern Italy. Hence, image quality (and format) is highly variable and clinical data, despite best intentions, is incomplete. Therefore, any developed approach will have to deal with missing data. This hackathon aims at finding solutions relying on both sets of data, with a heavy emphasis on image analysis and algorithm explainability, as deploying machine learning techniques in a clinical setting requires ease of understanding by medical personnel. The participants are expected to find and submit solutions; the submissions will be evaluated on performance as well as explainability by a panel of clinicians and computer scientists. Prizes will be available for the best-performing teams, subject to the open source release of their technique. More information about the hackathon registration and schedule will be available soon at this webpage: https://ai4covid-hackathon.it/ The event is endorsed by the ELLIS Genoa and Modena units, the Italian Institute of Technology (IIT), and Fondazione Bruno Kessler (FBK). For any information please email: info at ai4covid-hackathon.it -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.j.palmeri at vanderbilt.edu Fri Jan 14 11:14:07 2022 From: thomas.j.palmeri at vanderbilt.edu (Palmeri, Thomas J) Date: Fri, 14 Jan 2022 16:14:07 +0000 Subject: Connectionists: Postdoctoral Fellowships in Applied Deep Learning at Vanderbilt University's Data Science Institute Message-ID: <1B4C7CB2-08AE-4A47-AC91-DF236F5FD162@vanderbilt.edu> The Data Science Institute (DSI) at Vanderbilt University invites applications for its first cohort of applied deep learning postdoctoral fellows. 
Fellows will be part of an interdisciplinary Deep Learning Initiative to explore the application of transformers and other models to problems from multiple disciplines. Fellows will serve as collaborators on research projects as part of a team of researchers, data scientists, graduate students studying data science, and undergraduates. Fellows will have access to compute infrastructure including two DGX A100s for model fitting and inference. Strong candidates will have backgrounds in deep learning and a strong interest in exploring applications of their research in multiple fields. Appropriate interests range from research in model behavior, to model training methodologies, to novel architectures. Fellows will play an important part in the Deep Learning Initiative and will be part of a strong team of data scientists and data science graduate students. This team will engage with research faculty from multiple domains. The goal is for Fellows to collaborate with faculty within a domain, and to coauthor model-centric research within the domain and in deep learning journals. In addition, Fellows will be part of a community at the DSI where we share research, collaborate on problem solving, mentor students, and support reproducible, open research. The cohort of fellows will also have the opportunity to contribute to the educational mission of the DSI by assisting in the creation and instruction of an advanced research-based graduate course involving deep learning. Fellowship Details Fellowships will begin in September of 2022 and will be renewable for up to three years. Fellows will receive annual salary support of $70,000, a competitive benefits package, and an annual research budget of $10,000 that can be used for travel, equipment, software, or other research expenses. Eligibility Candidates should have a PhD or be on track to earn a PhD before they begin their tenure as DSI fellows. 
Successful candidates will have a strong record in deep learning related research and an interest in exploring broader application of their work in varied research domains. Candidates should have an interest in engaging in interdisciplinary work. Application Procedure See: https://www.vanderbilt.edu/datascience/jobs/postdoctoral-fellowships/ Application review begins February 1, 2022 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 1620 bytes Desc: not available URL: From jpezaris at gmail.com Fri Jan 14 10:42:44 2022 From: jpezaris at gmail.com (John Pezaris) Date: Fri, 14 Jan 2022 10:42:44 -0500 Subject: Connectionists: AREADNE 2022 2nd Call for Abstracts Message-ID: AREADNE 2022 Research in Encoding and Decoding of Neural Ensembles Nomikos Conference Centre, Santorini, Greece 28 June - 2 July 2022 http://areadne.org info at areadne.org * * * * SECOND CALL FOR ABSTRACTS * * * * Dear Colleague, We would like to remind you that abstracts for poster presentation at AREADNE 2022 are coming due shortly. AREADNE 2022 will once again bring scientific leaders from around the world to present their theoretical and experimental work on the functioning of neuronal ensembles in an informal yet spectacular setting, and with a relaxed pace that emphasizes interaction. Please see the Call for Abstracts for additional details, including links for templates, at https://areadne.org/call-for-abstracts Submissions of abstracts for poster presentations are due by 21 January 2022; notifications will be provided by 22 February 2022. We strongly encourage potential attendees to submit an abstract as presenters have registration priority. For information about the conference, please refer to the main web page https://areadne.org or send email to us at info at areadne.org. 
We hope to see you at AREADNE 2022, Nicholas Hatsopoulos and John Pezaris AREADNE 2022 Co-Chairs --- John S. Pezaris, Ph.D. AREADNE 2022 Co-Chair Harvard Medical School Massachusetts General Hospital 55 Fruit Street Boston, MA 02114, USA john at areadne.org From alessandra.sciutti at gmail.com Fri Jan 14 12:33:06 2022 From: alessandra.sciutti at gmail.com (alessandra.sciutti at gmail.com) Date: Fri, 14 Jan 2022 18:33:06 +0100 Subject: Connectionists: [journals] Extended deadline - CfP Special Issue in IEEE Transactions on Cognitive and Developmental Systems (TCDS) - Second Edition Message-ID: <007901d8096c$ca77dac0$5f679040$@gmail.com> ======================================================================= Extended Deadline - Special Issue in IEEE Transactions on Cognitive and Developmental Systems ======================================================================== Dear Colleagues, Following several requests, we are granting an extension to the deadline for the submission of papers to the second edition on the special issue in the IEEE Transactions on Cognitive and Developmental Systems (TCDS) to 31st January 2022. Below are the relevant details. Quick Links ================= *Second Edition of the Special Issue on Emerging Topics on Development and Learning* Journal: IEEE Transactions on Cognitive and Developmental Systems (TCDS) Journal Link: https://cis.ieee.org/publications/t-cognitive-and-developmental-systems Special Issue: https://wanweiwei07.github.io/files/ICDL2021_Special_Issue_Proposal_IEEE_For mat.pdf ***Submission deadline (extended): 31 January 2022*** Overview ======== This special issue aims to track the state-of-the-art progress on development and learning in natural and artificial systems. It concentrates on development and learning from a multidisciplinary perspective. 
Researchers from computer science, robotics, psychology, and developmental studies are solicited to share their knowledge and research on how humans and animals develop sensing, reasoning and actions, and how to exploit robots as research tools to test models of development and learning. We expect the submitted contributions to emphasize the interaction with social and physical environments and how cognitive and developmental capabilities can be transferred to computing systems and robotics. This approach goes hand in hand with the goals of both understanding human and animal development and applying this knowledge to improve future intelligent technology, including robots that will be in close interaction with humans. Topics of interest include, but are not limited to:
- Principles and theories of development and learning;
- Development of skills in biological systems and robots;
- Models on the contributions of interaction to learning;
- Non-verbal and multi-modal interaction;
- Nature vs. nurture, developmental stages;
- Models on active learning;
- Architectures for lifelong learning;
- Emergence of body and affordance perception;
- Analysis and modelling of human motion and state;
- Models for prediction, planning and problem solving;
- Models of human-human and human-robot interaction;
- Emergence of verbal and non-verbal communication;
- Epistemological foundations and philosophical issues;
- Robot prototyping of human and animal skills;
- Ethics and trust in computational intelligence and robotics;
- Social learning in humans, animals, and robots.
Contributions ============ The special issue is open to novel contributions. Extended versions of published conference papers (such as those from ICDL 2021) are also welcome, but they must have at least 30% new technical/scientific material in the submitted journal version, and there should be less than 50% verbatim similarity as reported by a tool (such as CrossRef).
Additionally, the original conference paper and a detailed summary of the differences must be included as part of the journal submission to TCDS. All submissions will be reviewed as regular TCDS papers before acceptance. Guest Editors =========== - Dingsheng Luo (Lead guest editor) (dsluo at pku.edu.cn) - Angelo Cangelosi (angelo.cangelosi at manchester.ac.uk) - Alessandra Sciutti (alessandra.sciutti at iit.it) - Weiwei Wan (wan at sys.es.osaka-u.ac.jp) - Ana Tanevska (ana.tanevska at iit.it) For further information, please contact the editors. Regards, the Guest Editors -------------- next part -------------- An HTML attachment was scrubbed... URL: From juyang.weng at gmail.com Fri Jan 14 12:42:52 2022 From: juyang.weng at gmail.com (Juyang Weng) Date: Fri, 14 Jan 2022 12:42:52 -0500 Subject: Connectionists: Deadline tomorrow: BMI Machine Conscious Learning Project Message-ID: *BMI Machine Conscious Learning Project* http://www.brain-mind-institute.org/program-summer.html Ever since humankind came into being, the holistic mechanisms of Natural General Intelligence (NGI) and Artificial General Intelligence (AGI) have been elusive. For example, the Third World Science and Technology Development Forum, Nov. 6-7, 2021, published "The Ten Scientific Problems for the Development of Human Society for 2021". The No. 1 Problem in the information domain is "what are the mechanisms for human brains to process information and for generating human intelligence?" Many machine learning experts hoped that NGI and AGI could be modeled by, or achieved by, training increasingly larger neural networks on increasingly larger data sets that are static, as in the well-known projects AlphaGo, AlphaZero, AlphaFold, the IBM Debater and many other similarly large neural network projects elsewhere.
Unfortunately, such approaches are categorically hopeless for AGI, not only because of the alleged Post-Selection protocol flaws [WengNatureProtocol21, WengScienceProtocol21] but also because of something much deeper and more fundamental. The recent discovery of Conscious Learning by Weng 2022 [WengCLICCE22, WengCLAIEE22] revealed a surprising principle, namely that consciousness is recursively necessary across every time instant of learning by humans and machines in order to reach their NGI and AGI at each corresponding mental age. Consciousness, in the full sense as we know it and as defined in dictionaries, will never arise as an outcome of feeding static data sets, regardless of how large the data sets are and what kinds of neural network we use. Instead, consciousness is a necessary capability of a learner, natural or artificial, for NGI or AGI, so that conscious thinking takes place while the learner processes information and learns across space and time on the fly. Weng proposes that the algorithmic theory of Conscious Learning in [WengCLICCE22, WengCLAIEE22], supported by the Developmental Networks, is the first holistic solution to the above No. 1 Problem in the information domain. Therefore, NGI appears to be computationally modelable and AGI seems to be machine achievable. The remaining challenges toward modeling NGI and achieving AGI are still great but exciting. They include education on the Conscious Learning theory and algorithms; research on hardware design for real-time, brain-size Conscious Learning; development of practical Conscious Learning products; and applications of the Conscious Learning theory and algorithms. BMI, the Brain-Mind Institute, is pleased to announce a funded Project, called the BMI Conscious Machine Learning Project, for all those who are interested. This announcement calls for professors, graduate students, and undergraduate students to apply for an appropriate position in the Project. The open positions include the following three categories: 1.
*Research advisors*: There are four categories, assistant professors, associate professors, full professors and retired professors, corresponding to your current rank. The responsibilities include advising local students. It is desirable that each professor recruits a few of his students locally. Send your CV to BMI with the names, affiliations and contact information of the students who will submit applications in association with you. Each BMI-paid student will correspond to a part of the budget for his research advisor. 2. *Graduate students:* There are two categories, the PhD program and the MS program. Each student is expected to spend 10 hours each week during his university semesters and 40 hours each week during summer. The student's time spent on the projects will be paid by BMI at a rate suited to his own country. Each applicant should identify a local research advisor who supervises the project on a weekly basis. If you are a graduate student in a university and are interested in applying for the Project, find a professor in your local university who can supervise you. Ask him to jointly apply for a professor position at the Project. You two should name each other in the applications. Send your CV and official transcripts from the undergraduate and graduate years. 3. *Undergraduate students:* There are four categories, freshman, sophomore, junior and senior, corresponding to your year in your home university. Other requirements are similar to the Graduate student category. Admission terms: summer session 2022 or fall 2022. Specify your preferred summer and fall starting dates, as each country has different dates. Send your completed application form, your application and supporting material to juyang.weng at gmail.com with the subject: Application: BMI Conscious Machine Learning Project. *Important dates:* *January 15, 2022:* Deadline for application *March 15, 2022:* Notice of admission For further details and questions, contact juyang.weng at gmail.com.
PDF file -- Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgf at isep.ipp.pt Fri Jan 14 15:14:56 2022 From: cgf at isep.ipp.pt (Carlos) Date: Fri, 14 Jan 2022 20:14:56 +0000 Subject: Connectionists: CFP: HLPP 2022: International Symposium on High-Level Parallel Programming and Applications Message-ID: --------------- CALL FOR PAPERS --------------- HLPP 2022 The 15th International Symposium on High-level Parallel Programming and Applications Porto, Portugal, 7-8 July, 2022 https://hlpp2022.dcc.fc.up.pt/ ---------------------- Aims and scope of HLPP ---------------------- As processor and system manufacturers increase the amount of both inter- and intra-chip parallelism, it becomes crucial to provide the software industry with high-level, clean and efficient tools for parallel programming. Parallel and distributed programming methodologies are currently dominated by low-level techniques such as send/receive message passing, or equivalently unstructured shared memory mechanisms. Higher-level, structured approaches offer many possible advantages and have a key role to play in the scalable exploitation of ubiquitous parallelism. Since 2001 the HLPP series of workshops/symposia has been a forum for researchers developing state-of-the-art concepts, tools and applications for high-level parallel programming. The general emphasis is on software quality, programming productivity and high-level performance models. The 15th Symposium on High-Level Parallel Programming and Applications will be held in Porto, Portugal. ------ Topics ------ HLPP 2022 invites papers on all topics in high-level parallel programming, its tools and applications including, but not limited to, the following aspects: * High-level programming, performance models (BSP, CGM, LogP, MPM, etc.)
and tools * Declarative parallel programming methodologies * Algorithmic skeletons and constructive methods * Declarative parallel programming languages and libraries: semantics and implementation * Verification of declarative parallel and distributed programs * Software synthesis, automatic code generation for parallel programming * Model-driven software engineering with parallel programs * High-level programming models for heterogeneous/hierarchical platforms * High-level parallel methods for large structured and semi-structured datasets * Applications of parallel systems using high-level languages and tools * Formal models of timing and real-time verification for parallel systems ------------------ Program Chairs ------------------ Inês Dutra, University of Porto, Portugal Jorge Barbosa, University of Porto, Portugal Miguel Areias, University of Porto, Portugal ------------------ Publicity Chair ------------------ Carlos Ferreira, Polytechnic Institute of Porto ----------------- Program Committee ----------------- TBA ---------------- Important dates ---------------- Submission deadline: April 1, 2022 (AoE) Author notification: June 3, 2022 Camera-ready for draft proceedings: July 1, 2022 Early registration deadline: June 8, 2022 Symposium: July 7-8 (Thursday/Friday) IJPP (HLPP special issue) submission deadline: October 28, 2022 IJPP (HLPP special issue) camera-ready for journal publication: December 2, 2022 ---------------- Paper submission ---------------- Papers submitted to HLPP 2022 must describe original research results and must not have been published or simultaneously submitted anywhere else. Manuscripts must be prepared with the Springer IJSS LaTeX macro package using the single-column option (\documentclass[smallextended]{svjour3}) and submitted via the EasyChair Conference Management System as one PDF file. The strict page limit for the initial submission and the camera-ready version is 20 pages in the aforementioned format.
Each paper will receive a minimum of three reviews by members of the international technical program committee. Papers will be selected based on their originality, relevance, technical clarity and quality of presentation. After the symposium, the authors of the accepted papers will have ample time to revise their papers and to incorporate the comments and remarks of their colleagues. We expect the HLPP 2022 special issue of the International Journal of Parallel Programming (IJPP) to appear online-first by the end of the year and the printed edition in mid-2023. ----------- Proceedings ----------- Accepted papers will be distributed as informal draft proceedings during the symposium and will be published by Springer in a special issue of the International Journal of Parallel Programming (IJPP). ----- Venue ----- HLPP 2022 will be hosted by the Dept. of Computer Science (GPS coords 41.152545, -8.640758) of the Faculty of Sciences of the University of Porto (FCUP). Participants may reserve rooms in several of the nearby hotels. As the symposium will be held in the tourist season, the organizers recommend a timely reservation of rooms. Carlos Ferreira ISEP | Instituto Superior de Engenharia do Porto Rua Dr. António Bernardino de Almeida, 431 4249-015 Porto - PORTUGAL tel. +351 228 340 500 | fax +351 228 321 159 mail at isep.ipp.pt | www.isep.ipp.pt From ludovico.montalcini at gmail.com Sun Jan 16 10:07:47 2022 From: ludovico.montalcini at gmail.com (Ludovico Montalcini) Date: Sun, 16 Jan 2022 16:07:47 +0100 Subject: Connectionists: 1st CfP ACAIN 2022, 2nd Int.
Online & Onsite Advanced Course & Symposium on Artificial Intelligence & Neuroscience, Sept 19-22, Certosa di Pontignano, Tuscany - Italy In-Reply-To: <53A633D7-58FC-4F87-986E-1EB7449E8754@unh.edu> References: <53A633D7-58FC-4F87-986E-1EB7449E8754@unh.edu> Message-ID: _______________________________________________________________ Call for Participation & Call for Papers (apologies for cross-postings) Please distribute this call to interested parties, thanks _______________________________________________________________ The 2nd International Online & Onsite Advanced Course & Symposium on #ArtificialIntelligence & #Neuroscience - #ACAIN2022 September 19-22, 2022 Certosa di Pontignano, Castelnuovo Berardenga (Siena), #Tuscany - Italy LECTURERS: * Marvin M. Chun, Yale University, USA * Ila Fiete, MIT, USA * Karl Friston, University College London, UK & Wellcome Trust Centre for Neuroimaging * Wulfram Gerstner, EPFL, Switzerland * Christopher Summerfield, Oxford University, UK * Max Erik Tegmark, MIT, USA & Future of Life Institute More Lecturers and Speakers to be announced soon! W: https://acain2022.artificial-intelligence-sas.org E: acain at icas.cc NEWS: https://acain2022.artificial-intelligence-sas.org/category/news/ Past Edition: https://acain2021.artificial-intelligence-sas.org Early Registration (Course): by March 23, 2022 (AoE) https://acain2022.artificial-intelligence-sas.org/registration/ Paper Submission (Symposium) : by Saturday April 23, 2022 (AoE) https://acain2022.artificial-intelligence-sas.org/symposium-call-for-papers/ https://easychair.org/conferences/?conf=acain2022 SCOPE & MOTIVATION: The ACAIN 2022 symposium is an interdisciplinary event featuring leading scientists from AI and Neuroscience, providing a special opportunity to learn about cutting-edge research in the fields. 
The Advanced Course and Symposium on Artificial Intelligence & Neuroscience (ACAIN) is a full-immersion residential (or online) Course and Symposium at the Certosa di Pontignano (Tuscany - Italy) on cutting-edge advances in Artificial Intelligence and Neuroscience, with lectures delivered by world-renowned experts. The Course provides a stimulating environment for academics, early career researchers, Post-Docs, PhD students and industry leaders. Participants will also have the chance to present their results with oral talks or posters, and to interact with their peers, in a friendly and constructive environment. Two days of keynote talks and oral presentations, the ACAIN Symposium (September 21-22), will be preceded by lectures of leading scientists, the ACAIN Course (September 19-20). Bringing together AI and neuroscience promises to yield benefits for both fields. The future impact and progress in both AI and Neuroscience will strongly depend on continuous synergy and efficient cooperation between the two research communities. These are the goals of the International Course and Symposium ACAIN 2022, which is aimed both at AI experts with interests in Neuroscience and at neuroscientists with an interest in AI. ACAIN 2022 accepts rigorous research that promotes and fosters multidisciplinary interactions between artificial intelligence and neuroscience. For four days you will work alongside faculty that are undisputed leaders in their own field. The result is a profound experience that fosters professional and personal development. In a proven, unique format you will be exposed to a high-impact learning experience, taking you outside the comfort zone of your own technical expertise, that will empower you with new analytical and strategic skills across areas of Artificial Intelligence and Neuroscience.
Through an increased awareness of the challenges in Artificial Intelligence and Neuroscience, you will gain a place within an elite global network of experts from both fields and learn how their skills apply to your own discipline. The Advanced Course is suited for younger scholars, academics, early career researchers, Post-Docs, PhD students and industry leaders. Moreover, a significant proportion of seasoned investigators are regularly present among the attendees, often senior faculty at their own institutions. The balanced audience that we strive to maintain in each Advanced Course greatly contributes to the development of intense cross-disciplinary debates among faculty and participants that typically address the most advanced and emerging areas of each topic. For four days, the faculty members will present lectures, and discuss with participants in a smaller, more focused setting. This longer interaction, with an exclusive course size, provides the best opportunity to explore the unique expertise of each distinguished faculty mentor, often through one-on-one mentoring. This is unparalleled and priceless. The Event (Course and Symposium) will involve a total of 36-40 hours of lectures. Academically, this will be equivalent to 8 ECTS points for the PhD Students and the Master Students attending the Event. The Certosa di Pontignano provides the perfect learning atmosphere that is both relaxing, and intellectually stimulating, with a stunning backdrop of the Tuscany landscapes. Arts, Landscapes, world-class wines and traditional foods will make the Advanced Course on Artificial Intelligence and Neuroscience the experience of a lifetime! 
COURSE DESCRIPTION: https://acain2022.artificial-intelligence-sas.org/course-description/ LECTURERS: https://acain2022.artificial-intelligence-sas.org/course-lecturers/ * Ila Fiete, MIT, USA * Karl Friston, University College London, UK & Wellcome Trust Centre for Neuroimaging * Wulfram Gerstner, EPFL, Switzerland * Christopher Summerfield, Oxford University, UK * Max Erik Tegmark, MIT, USA & Future of Life Institute More Lecturers and Speakers to be announced soon! ORGANIZING COMMITTEE: https://acain2022.artificial-intelligence-sas.org/organizing-committee/ VENUE & ACCOMMODATION: https://acain2022.artificial-intelligence-sas.org/venue/ https://acain2022.artificial-intelligence-sas.org/accommodation/ The venue of ACAIN 2022 will be The Certosa di Pontignano - Siena The Certosa di Pontignano Località Pontignano, 5 - 53019, Castelnuovo Berardenga (Siena) - Tuscany - Italy phone: +39-0577-1521104 fax: +39-0577-1521098 info at lacertosadipontignano.com https://www.lacertosadipontignano.com/en/index.php Contact person: Dr. Lorenzo Pasquinuzzi You need to book your accommodation at the venue and pay for accommodation and meals directly to the Certosa di Pontignano. ACTIVITIES: https://acain2022.artificial-intelligence-sas.org/activities/ REGISTRATION: https://acain2022.artificial-intelligence-sas.org/registration/ See you in 3D or 2D :) in Tuscany in September! Giuseppe Nicosia & Panos Pardalos - ACAIN 2022 Directors. POSTER: https://acain2022.artificial-intelligence-sas.org/wp-content/uploads/sites/21/2021/12/poster-ACAIN-2022.png NEWS: https://acain2022.artificial-intelligence-sas.org/category/news/ E: acain at icas.cc W: https://acain2022.artificial-intelligence-sas.org Past Edition, ACAIN 2021: https://acain2021.artificial-intelligence-sas.org * Apologies for multiple copies. Please forward to anybody who might be interested * -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mpavone at dmi.unict.it Sat Jan 15 03:18:12 2022 From: mpavone at dmi.unict.it (Mario Pavone) Date: Sat, 15 Jan 2022 09:18:12 +0100 Subject: Connectionists: MIC 2022 - 14th Metaheuristics International Conference, Ortigia-Syracuse, Italy Message-ID: <20220115091812.Horde.RjE3Nuph4B9h4oNEexry5eA@mbox.dmi.unict.it> Apologies for cross-posting. Appreciate if you can distribute this CFP to your network. ********************************************************* MIC 2022 - 14th Metaheuristics International Conference 11-14 July 2022, Ortigia-Syracuse, Italy https://www.ANTs-lab.it/mic2022/ mic2022 at ANTs-lab.it ********************************************************* ** Submission deadline: 30th March 2022 ** NEWS ** Proceedings will be published in LNCS Volume (Springer) ** Special Issue in ITOR journal *Scope of the Conference ======================== The Metaheuristics International Conference (MIC) series was established in 1995, and this is its 14th edition! MIC is nowadays the main event focusing on the progress of the area of Metaheuristics and their applications. As in all previous editions, it provides an opportunity for the international research community in Metaheuristics to discuss recent research results, to develop new ideas and collaborations, and to meet old friends and make new ones in a friendly and relaxed atmosphere. Considering the particular moment, the conference will be held in presence and in online mode. Should the conference be held in presence, the organizing committee will ensure compliance with all safety conditions. MIC 2022 focuses on presentations that cover different aspects of metaheuristic research such as new algorithmic developments, high-impact and original applications, new research challenges, theoretical developments, implementation issues, and in-depth experimental studies. MIC 2022
strives for a high-quality program that will be completed by a number of invited talks, tutorials, workshops and special sessions. *Plenary Speakers ======================== + Christian Blum, Artificial Intelligence Research Institute (IIIA), Spanish National Research Council (CSIC) + Salvatore Greco, University of Catania, Italy + Kalyanmoy Deb, Michigan State University, USA + Holger H. Hoos, Leiden University, The Netherlands + El-Ghazali Talbi, University of Lille, France Important Dates ================ Submission deadline: March 30th, 2022 Notification of acceptance: May 10th, 2022 Camera-ready copy: May 25th, 2022 Early registration: May 25th, 2022 Submission Details =================== MIC 2022 accepts submissions in three different formats: S1) Regular paper: novel and original research contributions of a maximum of 15 pages (LNCS format) S2) Short paper: extended abstract of novel research work of 6 pages (LNCS format) S3) Oral/Poster presentation: high-quality manuscripts that have recently, within the last year, been submitted or accepted for journal publication. All papers must be prepared using the Lecture Notes in Computer Science (LNCS) template, and must be submitted in PDF at the link: https://www.easychair.org/conferences/?conf=mic2022 Proceedings and special issue ============================ Accepted papers in categories S1 and S2 will be published as post-proceedings in the Lecture Notes in Computer Science series by Springer. Accepted contributions in category S3 will be considered for oral or poster presentations at the conference, based on the number received and the slots available, and will not be included in the LNCS proceedings. Instead, an electronic book will be prepared by the MIC 2022 organizing committee and made available on the website.
In addition, a post-conference special issue in International Transactions in Operational Research (ITOR) will be considered for significantly extended and revised versions of selected accepted papers from categories S1 and S2. Conference Location ==================== MIC 2022 will be held on the beautiful island of Ortigia, the historical centre of the city of Syracuse, Sicily, Italy. Syracuse is very famous for its ancient ruins, with particular reference to the Roman Amphitheater, the Greek Theatre, and the Orecchio di Dionisio (Ear of Dionysius), a limestone cave shaped like a human ear. Syracuse is also the city where the great mathematician Archimedes was born. https://www.siracusaturismo.net/multimedia_lista.asp MIC'2022 Conference Chairs ============================== Conference Chairs - Luca Di Gaspero, University of Udine, Italy - Paola Festa, University of Naples, Italy - Amir Nakib, Université Paris Est Créteil, France - Mario Pavone, University of Catania, Italy -- Mario F. Pavone, PhD Associate Professor Dept of Mathematics and Computer Science University of Catania V.le A. Doria 6 - 95125 Catania, Italy --------------------------------------------- tel: +39 095 7383034 mobile: +39 3384342147 Email: mpavone at dmi.unict.it http://www.dmi.unict.it/mpavone/ FB: https://www.facebook.com/mfpavone Skype: mpavone ============================================= From vassilisvas at gmail.com Mon Jan 17 05:40:32 2022 From: vassilisvas at gmail.com (Vassilis Vassiliades) Date: Mon, 17 Jan 2022 12:40:32 +0200 Subject: Connectionists: [jobs] Internship program at CYENS Centre of Excellence in Cyprus Message-ID: Dear colleagues, At CYENS Centre of Excellence, we announced our 2022 year-round internship program for undergraduate- and graduate-level students and early career individuals.
The topics include: robot learning, computer vision, machine/deep learning, image processing, preference elicitation, eHealth, virtual reality, augmented reality, computer animation, smart networks and others. For the application process and eligibility criteria please visit: https://www.cyens.org.cy/en-gb/vacancies/placement-opportunities/internships/cyens-coe-year-round-internship-program/ Best wishes, Vassilis -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at irdta.eu Sat Jan 15 09:41:07 2022 From: david at irdta.eu (David Silva - IRDTA) Date: Sat, 15 Jan 2022 15:41:07 +0100 (CET) Subject: Connectionists: DeepLearn 2022 Summer: early registration January 17 Message-ID: <419364131.4571228.1642257667756@webmail.strato.com> ****************************************************************** 6th INTERNATIONAL GRAN CANARIA SCHOOL ON DEEP LEARNING DeepLearn 2022 Summer Las Palmas de Gran Canaria, Spain July 25-29, 2022 https://irdta.eu/deeplearn/2022su/ ***************** Co-organized by: University of Las Palmas de Gran Canaria Institute for Research Development, Training and Advice - IRDTA Brussels/London ****************************************************************** Early registration: January 17, 2022 ****************************************************************** SCOPE: DeepLearn 2022 Summer will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast-developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, Bournemouth, and Guimarães.
Deep learning is a branch of artificial intelligence covering a spectrum of current frontier research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, biomedical informatics, image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, biometrics, communications, climate sciences, etc. Renowned academics and industry pioneers will lecture and share their views with the audience. Most deep learning subareas will be covered, and the main challenges identified, through 24 four-and-a-half-hour courses and 3 keynote lectures tackling the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be main ingredients of the event. It will also be possible to participate fully live online. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles. ADDRESSED TO: Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal prerequisites for attendance in terms of academic degrees, so people at any stage of their career are welcome as well. Since there will be a variety of levels, specific background knowledge may be assumed for some of the courses. Overall, DeepLearn 2022 Summer is addressed to students, researchers and practitioners who want to keep themselves updated on recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators.
VENUE: DeepLearn 2022 Summer will take place in Las Palmas de Gran Canaria, on the Atlantic Ocean, with a mild climate throughout the year, sandy beaches and a renowned carnival. The venue will be: Institución Ferial de Canarias Avenida de la Feria, 1 35012 Las Palmas de Gran Canaria https://www.infecar.es/index.php?option=com_k2&view=item&layout=item&id=360&Itemid=896 STRUCTURE: 3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend, as well as to move from one to another. Full live online participation will be possible. However, the organizers highlight the importance of face-to-face interaction and networking in this kind of research training event. KEYNOTE SPEAKERS: Wahid Bhimji (Lawrence Berkeley National Laboratory), Deep Learning on Supercomputers for Fundamental Science Joachim M. Buhmann (Swiss Federal Institute of Technology Zurich), Machine Learning -- A Paradigm Shift in Human Thought!? Kate Saenko (Boston University), Overcoming Dataset Bias in Deep Learning PROFESSORS AND COURSES: Tülay Adalı
(University of Maryland Baltimore County), [intermediate] Data Fusion Using Matrix and Tensor Factorizations Pierre Baldi (University of California Irvine), [intermediate/advanced] Deep Learning: From Theory to Applications in the Natural Sciences Arindam Banerjee (University of Illinois Urbana-Champaign), [intermediate/advanced] Deep Generative and Dynamical Models Mikhail Belkin (University of California San Diego), [intermediate/advanced] Modern Machine Learning and Deep Learning through the Prism of Interpolation Dumitru Erhan (Google), [intermediate/advanced] Visual Self-supervised Learning and World Models Arthur Gretton (University College London), [intermediate/advanced] Probability Divergences and Generative Models Phillip Isola (Massachusetts Institute of Technology), [intermediate] Deep Generative Models Mohit Iyyer (University of Massachusetts Amherst), [intermediate/advanced] Natural Language Generation Irwin King (Chinese University of Hong Kong), [intermediate/advanced] Deep Learning on Graphs Vincent Lepetit (Paris Institute of Technology), [intermediate] Deep Learning and 3D Reasoning for 3D Scene Understanding Yan Liu (University of Southern California), [introductory/intermediate] Deep Learning for Time Series Dimitris N. Metaxas (Rutgers, The State University of New Jersey), [intermediate/advanced] Model-based, Explainable, Semisupervised and Unsupervised Machine Learning for Dynamic Analytics in Computer Vision and Medical Image Analysis Sean Meyn (University of Florida), [introductory/intermediate] Reinforcement Learning: Fundamentals, and Roadmaps for Successful Design Louis-Philippe Morency (Carnegie Mellon University), [intermediate/advanced] Multimodal Machine Learning Wojciech Samek (Fraunhofer Heinrich Hertz Institute), [introductory/intermediate] Explainable AI: Concepts, Methods and Applications Clara I. 
Sánchez (University of Amsterdam), [introductory/intermediate] Mechanisms for Trustworthy AI in Medical Image Analysis and Healthcare Björn W. Schuller (Imperial College London), [introductory/intermediate] Deep Multimedia Processing Jonathon Shlens (Apple), [introductory/intermediate] An Introduction to Computer Vision and Convolutional Neural Networks Johan Suykens (KU Leuven), [introductory/intermediate] Deep Learning, Neural Networks and Kernel Machines Csaba Szepesvári (University of Alberta), [intermediate/advanced] Tools and Techniques of Reinforcement Learning to Overcome Bellman's Curse of Dimensionality A. Murat Tekalp (Koç University), [intermediate/advanced] Deep Learning for Image/Video Restoration and Compression Alexandre Tkatchenko (University of Luxembourg), [introductory/intermediate] Machine Learning for Physics and Chemistry Li Xiong (Emory University), [introductory/intermediate] Differential Privacy and Certified Robustness for Deep Learning Ming Yuan (Columbia University), [intermediate/advanced] Low Rank Tensor Methods in High Dimensional Data Analysis OPEN SESSION: An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and a summary of the research to david at irdta.eu by July 17, 2022. INDUSTRIAL SESSION: A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 17, 2022. EMPLOYER SESSION: Firms searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts.
It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles sought, to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 17, 2022. ORGANIZING COMMITTEE: Marisol Izquierdo (Las Palmas de Gran Canaria, local chair) Carlos Martín-Vide (Tarragona, program chair) Sara Morales (Brussels) David Silva (London, organization chair) REGISTRATION: It has to be done at https://irdta.eu/deeplearn/2022su/registration/ The selection of 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish. Since the capacity of the venue is limited, registration requests will be processed on a first come, first served basis. The registration period will be closed and the online registration tool disabled when the capacity of the venue is exhausted. It is highly recommended to register prior to the event. FEES: Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline. The fees for on-site and for online participation are the same. ACCOMMODATION: Accommodation suggestions will be available in due time at https://irdta.eu/deeplearn/2022su/accommodation/ CERTIFICATE: A certificate of successful participation in the event will be delivered indicating the number of hours of lectures. QUESTIONS AND FURTHER INFORMATION: david at irdta.eu ACKNOWLEDGMENTS: Cabildo de Gran Canaria Universidad de Las Palmas de Gran Canaria Universitat Rovira i Virgili Institute for Research Development, Training and Advice - IRDTA, Brussels/London From david at irdta.eu Sat Jan 15 09:39:33 2022 From: david at irdta.eu (David Silva - IRDTA) Date: Sat, 15 Jan 2022 15:39:33 +0100 (CET) Subject: Connectionists: DeepLearn 2022 Spring: early registration February 14 Message-ID: <1517273657.4571135.1642257573898@webmail.strato.com> ****************************************************************** 5th INTERNATIONAL SCHOOL ON DEEP LEARNING DeepLearn 2022 Spring Guimarães, Portugal April 18-22, 2022 https://irdta.eu/deeplearn/2022sp/ ***************** Co-organized by: Algoritmi Center University of Minho, Guimarães Institute for Research Development, Training and Advice - IRDTA Brussels/London ****************************************************************** Early registration: February 14, 2022 ****************************************************************** SCOPE: DeepLearn 2022 Spring will be a research training event with a global scope, aiming to update participants on the most recent advances in the critical and fast-developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, and Bournemouth. Deep learning is a branch of artificial intelligence covering a spectrum of current frontier research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, biomedical informatics, image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, etc. Renowned academics and industry pioneers will lecture and share their views with the audience. Most deep learning subareas will be covered, and the main challenges identified, through 23 four-and-a-half-hour courses and 3 keynote lectures tackling the most active and promising topics.
The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be main ingredients of the event. It will also be possible to participate fully live online. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles. ADDRESSED TO: Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal prerequisites for attendance in terms of academic degrees, so people at any stage of their career are welcome as well. Since there will be a variety of levels, specific background knowledge may be assumed for some of the courses. Overall, DeepLearn 2022 Spring is addressed to students, researchers and practitioners who want to keep themselves updated on recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators. VENUE: DeepLearn 2022 Spring will take place in Guimarães, in the north of Portugal, a UNESCO World Heritage Site often referred to as the birthplace of the country. The venue will be: Hotel de Guimarães Eduardo Manuel de Almeida 202 4810-440 Guimarães http://www.hotel-guimaraes.com/ STRUCTURE: 3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend, as well as to move from one to another. Full live online participation will be possible. However, the organizers highlight the importance of face-to-face interaction and networking in this kind of research training event.
KEYNOTE SPEAKERS: Kate Smith-Miles (University of Melbourne), Stress-testing Algorithms via Instance Space Analysis Mihai Surdeanu (University of Arizona), Explainable Deep Learning for Natural Language Processing Zhongming Zhao (University of Texas, Houston), Deep Learning Approaches for Predicting Virus-Host Interactions and Drug Response PROFESSORS AND COURSES: Eneko Agirre (University of the Basque Country), [introductory/intermediate] Natural Language Processing in the Pretrained Language Model Era Mohammed Bennamoun (University of Western Australia), [intermediate/advanced] Deep Learning for 3D Vision Altan Çakır (Istanbul Technical University), [introductory] Introduction to Deep Learning with Apache Spark Rylan Conway (Amazon), [introductory/intermediate] Deep Learning for Digital Assistants Jianfeng Gao (Microsoft Research), [introductory/intermediate] An Introduction to Conversational Information Retrieval Daniel George (JPMorgan Chase), [introductory] An Introductory Course on Machine Learning and Deep Learning with Mathematica/Wolfram Language Bohyung Han (Seoul National University), [introductory/intermediate] Robust Deep Learning Lina J.
Karam (Lebanese American University), [introductory/intermediate] Deep Learning for Quality Robust Visual Recognition Xiaoming Liu (Michigan State University), [intermediate] Deep Learning for Trustworthy Biometrics Jennifer Ngadiuba (Fermi National Accelerator Laboratory), [intermediate] Ultra Low-latency and Low-area Machine Learning Inference at the Edge Lucila Ohno-Machado (University of California, San Diego), [introductory] Use of Predictive Models in Medicine and Biomedical Research Bhiksha Raj (Carnegie Mellon University), [introductory] Quantum Computing and Neural Networks Bart ter Haar Romeny (Eindhoven University of Technology), [intermediate] Deep Learning and Perceptual Grouping Kaushik Roy (Purdue University), [intermediate] Re-engineering Computing with Neuro-inspired Learning: Algorithms, Architecture, and Devices Walid Saad (Virginia Polytechnic Institute and State University), [intermediate/advanced] Machine Learning for Wireless Communications: Challenges and Opportunities Yvan Saeys (Ghent University), [introductory/intermediate] Interpreting Machine Learning Models Martin Schultz (Jülich Research Centre), [intermediate] Deep Learning for Air Quality, Weather and Climate Richa Singh (Indian Institute of Technology, Jodhpur), [introductory/intermediate] Trusted AI Sofia Vallecorsa (European Organization for Nuclear Research), [introductory/intermediate] Deep Generative Models for Science: Example Applications in Experimental Physics Michalis Vazirgiannis (École Polytechnique), [intermediate/advanced] Machine Learning with Graphs and Applications Guowei Wei (Michigan State University), [introductory/advanced] Integrating AI and Advanced Mathematics with Experimental Data for Forecasting Emerging SARS-CoV-2 Variants Xiaowei Xu (University of Arkansas, Little Rock), [intermediate/advanced] Deep Learning for NLP and Causal Inference Guoying Zhao (University of Oulu), [introductory/intermediate] Vision-based Emotion AI OPEN SESSION: An open session
will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and a summary of the research to david at irdta.eu by April 10, 2022. INDUSTRIAL SESSION: A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by April 10, 2022. EMPLOYER SESSION: Firms searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles sought, to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by April 10, 2022. ORGANIZING COMMITTEE: Dalila Durães (Braga, co-chair) José Machado (Braga, co-chair) Carlos Martín-Vide (Tarragona, program chair) Sara Morales (Brussels) Paulo Novais (Braga, co-chair) David Silva (London, co-chair) REGISTRATION: It has to be done at https://irdta.eu/deeplearn/2022sp/registration/ The selection of 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish. Since the capacity of the venue is limited, registration requests will be processed on a first come, first served basis. The registration period will be closed and the online registration tool disabled when the capacity of the venue is exhausted.
It is highly recommended to register prior to the event. FEES: Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline. ACCOMMODATION: Accommodation suggestions are available at https://irdta.eu/deeplearn/2022sp/accommodation/ CERTIFICATE: A certificate of successful participation in the event will be delivered indicating the number of hours of lectures. QUESTIONS AND FURTHER INFORMATION: david at irdta.eu ACKNOWLEDGMENTS: Centro Algoritmi, University of Minho, Guimarães School of Engineering, University of Minho Intelligent Systems Associate Laboratory, University of Minho Rovira i Virgili University Municipality of Guimarães Institute for Research Development, Training and Advice - IRDTA, Brussels/London From bill.stine at unh.edu Fri Jan 14 15:57:54 2022 From: bill.stine at unh.edu (William Stine) Date: Fri, 14 Jan 2022 20:57:54 +0000 Subject: Connectionists: UNH: Assistant Professor in Computational Modeling of Cognition and/or Perception Message-ID: <53A633D7-58FC-4F87-986E-1EB7449E8754@unh.edu> Would you please post the advertisement below? Thanks, Bill I apologize for cross-postings. University of New Hampshire Department of Psychology Assistant Professor in Computational Modeling of Cognition and/or Perception The Department of Psychology at the University of New Hampshire invites applications for a tenure-track position at the Assistant Professor level, to begin fall 2022, with a focus on the computational modeling of the mechanisms underlying cognition and/or perception.
Area of expertise within psychology is open; however, we are particularly interested in a faculty member who will enhance competitiveness for extramural funding through collaborations, by taking an interdisciplinary approach that bridges psychology and another discipline within cognitive science, such as computer science, philosophy, neuroscience, or biology. A strong quantitative background is highly desirable. Applicants should show a commitment to sustain and advance the institution's goals regarding the diversity of students, faculty, and staff. Applicants' research interests should complement those of current faculty in Psychology. A history of, or strong potential for, external funding is desirable. Requirements: Ph.D. in psychology or a related field and a strong record of research and teaching. The successful applicant will teach courses in their area of specialization, and a course in Introductory Psychology, Statistics, or Research Methods; supervise doctoral and undergraduate student research; and advise undergraduate and doctoral students. Our standard teaching load is two courses per semester. Review of applications begins January 1, 2022 and will continue until February 1, or until the position is filled. Upload a cover letter, curriculum vitae, a statement describing research and teaching interests, reprints, and teaching evaluations to https://jobs.usnh.edu/postings/44553, and have three referees submit letters to jobs.usnh.edu. Questions may be sent via email to Search Co-Chairs Brett Gibson (Brett.Gibson at unh.edu) and/or Caitlin Mills (Caitlin.Mills at unh.edu). The University of New Hampshire, an R1 Carnegie Classification research university, provides comprehensive, high-quality undergraduate programs and graduate programs of distinction. UNH is located in Durham, NH (population 16,500), on a 188-acre campus, 65 miles north of Boston/Cambridge, 70 miles south of the White Mountain National Forest, and 15 miles from the Atlantic coast.
The university has an enrollment of 16,000 students from all 50 states and 71 countries, a full-time faculty of over 900, and offers more than 200 undergraduate and graduate degree programs. The Department offers B.A. and Ph.D. degrees in psychology and hosts an inter-college B.S. major in neuroscience and behavior. The University seeks excellence through diversity among its administrators, faculty, staff, and students; applicants are expected to sustain and advance these goals. The University prohibits discrimination on the basis of race, color, religion, sex, age, national origin, sexual orientation, gender identity or expression, disability, veteran status, or marital status. Application by members of all underrepresented groups is encouraged. Hiring is contingent upon eligibility to work in the U.S.A. UNH is a Federal contractor within the meaning of the Executive Order on Ensuring Adequate COVID Safety Protocols for Federal Contractors. This position requires that you be vaccinated against COVID-19, unless you apply for and receive a religious or medical exemption. Wm Wren Stine Chair, Department of Psychology Program in Neuroscience & Behavior McConnell Hall University of New Hampshire Durham, NH 03824 USA +01 (603) 862-2823 bill.stine at unh.edu TTY: 7-1-1 or +01 (800) 735-2964 (Relay NH) https://unh.zoom.us/j/9372229656 From michael.schaub at rwth-aachen.de Sat Jan 15 13:43:18 2022 From: michael.schaub at rwth-aachen.de (Michael Schaub) Date: Sat, 15 Jan 2022 19:43:18 +0100 Subject: Connectionists: Call for papers: Signal Processing over Higher Order Networks Message-ID: Call for papers: Signal Processing over Higher Order Networks We are living in the era of data where there is a fast-growing need for the analysis and processing of data generated by complex networks such as biological, social and communication networks, to name a few.
The development of models and tools for analysing data and capturing their complex relationships represents one of the most prominent research fields. Graph signal processing has emerged as a powerful tool to analyse signals defined over the vertices of a graph by encoding the pairwise relationships among data through the presence of edges. However, to grasp multiway relations among the constitutive elements of a network, such as in protein-to-protein interaction networks or brain networks, we need to go beyond graphs by resorting to more complex topological descriptors. A promising new research direction is the development of signal processing tools over higher order structures such as hypergraphs and simplicial complexes. Signal processing on topological spaces is a novel and promising research direction merging signal processing and topological tools to provide a powerful framework for the analysis of complex, multiway relationships among data. Higher order representations of data have recently paved the way for new research directions in the area of machine learning and neural networks. This special issue aims at presenting the latest research advances in signal processing over higher order networks by gathering papers providing new methods, models and applications. The main goal is to identify ongoing research directions and new perspectives in this young and vibrant research field. Topics of interest include (but are not limited to): - Processing over higher order networks such as hypergraphs and simplicial complexes: filtering, sampling, transforms, spectral analysis - Recent advances in Graph Signal Processing: multi-layer graphs, multigraphs - Topological data analysis on higher order networks - Nonlinear, statistical and robust signal processing over higher order networks - Topology inference from data - Signal processing over higher order networks for machine learning - Higher order neural networks and deep learning - Applications to neuroscience, bioengineering and bioinformatics - Applications to finance, economics and social networks - Applications to image, speech and video processing - Applications to transport, power and communication networks For more information visit the following page https://asp-eurasipjournals.springeropen.com/signalnetworks Submission deadline: 15 March 2022 From m.biehl at rug.nl Mon Jan 17 01:24:36 2022 From: m.biehl at rug.nl (Michael Biehl) Date: Mon, 17 Jan 2022 07:24:36 +0100 Subject: Connectionists: Fully funded PhD position Message-ID: A *fully funded PhD position* (4 years) in the *Statistical* *Physics of Neural Networks* is available at the University of Groningen, The Netherlands; see https://www.rug.nl/about-ug/work-with-us/job-opportunities/?details=00347-02S0008WFP for details and application instructions. Applications (before March 1) are only possible through this webpage. The title of the project is "The role of the activation function for feedforward learning systems (RAFFLES)". For further information contact M. Biehl. ---------------------------------------------------------- Prof. Dr. Michael Biehl Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence P.O. Box 407, 9700 AK Groningen The Netherlands Tel. +31 50 363 3997 https://www.cs.rug.nl/~biehl m.biehl at rug.nl From marius.pedersen at ntnu.no Mon Jan 17 03:32:26 2022 From: marius.pedersen at ntnu.no (Marius Pedersen) Date: Mon, 17 Jan 2022 08:32:26 +0000 Subject: Connectionists: Postdoctoral Fellow in Deep Learning-Based Image Quality Assessment Message-ID: We have an open 41-month Postdoctoral position in deep learning-based image quality assessment at the Norwegian Colour and Visual Computing Laboratory (Colourlab) at the Norwegian University of Science and Technology (NTNU).
The postdoc position is part of the research project "Quality and Content: understanding the influence of content on subjective and objective image quality assessment". The project aims to advance the understanding of image quality and to develop more precise and better-performing image quality metrics based on an improved understanding of how content influences image quality. The Postdoctoral position will focus on developing more precise and better-performing image quality metrics using deep learning, and on understanding how content influences image quality. Deadline: 22nd of January 2022 More information and the application process can be found at https://www.jobbnorge.no/en/available-jobs/job/217799/postdoctoral-fellow-in-deep-learning-based-image-quality-assessment Best regards Marius Pedersen Professor of Colour Imaging Director of the Norwegian Colour and Visual Computing Laboratory www.colourlab.no Department of Computer Science NTNU marius.pedersen at ntnu.no - (+47) 93 63 43 85 From bogdanlapi at gmail.com Mon Jan 17 06:21:43 2022 From: bogdanlapi at gmail.com (Bogdan Ionescu) Date: Mon, 17 Jan 2022 13:21:43 +0200 Subject: Connectionists: Call-for-papers: ACM ICMR 2022 Workshop on Multimedia AI against Disinformation (MAD'22) Message-ID: [Apologies for multiple postings] MAD 2022 1st Workshop on Multimedia AI against Disinformation @ ACM International Conference on Multimedia Retrieval - ICMR 2022 Newark, NJ, USA, June 27-30, 2022 https://mad2022.aimultimedialab.ro/ https://easychair.org/conferences/?conf=icmr20221 *** Call for papers *** * Paper submission due: February 17, 2022 * Acceptance notification: March 30, 2022 * Camera-ready papers due: TBD * Workshop @ ICMR 2022: TBD/June 27-30, 2022 Disinformation spreads easily in online social networks and is propagated by social media actors and network communities to achieve specific (mostly malevolent) objectives. Disinformation has deleterious effects on users' real lives, since it distorts their points of view regarding societally sensitive topics, such as politics, health or religion. Ultimately, it has a negative effect on the very fabric of democratic societies and should be fought against via an effective combination of human and technical means. Disinformation campaigns are increasingly powered by advanced AI techniques, and a lot of effort has been put into the detection of fake content. While important, this is only a piece of the puzzle if one wants to address the phenomenon in a comprehensive manner. Whether a piece of information is considered fake or true often depends on the temporal and cultural contexts in which it is interpreted. This is for instance the case for scientific knowledge, which evolves at a fast pace, and whose usage in mainstream content should be updated accordingly. Multimedia content is often at the core of AI-assisted disinformation campaigns.
Their impact is directly related to the perceived credibility of their content. Significant advances related to the automatic generation/manipulation of each modality were obtained with the introduction of dedicated deep learning techniques. Visual content can be tampered with in order to produce manipulated but realistic versions of it. Synthesized speech has attained a high quality level and is more and more difficult to distinguish from the actual voice. Deep language models learned on top of huge corpora allow the generation of text which resembles that written by humans. Combining these advances has the potential to boost the effectiveness of disinformation campaigns. This combination is an open research topic which needs to be addressed in order to reduce the effects of disinformation campaigns. This workshop welcomes contributions related to different aspects of AI-powered disinformation. Topics of interest include but are not limited to: - Disinformation detection in multimedia content (video, audio, texts, images) - Multimodal verification methods - Synthetic and manipulated media detection - Multimedia forensics - Disinformation spread and effects in social media - Analysis of disinformation campaigns in societally-sensitive domains (e.g., politics, health) - Explaining disinformation to non-experts - Disinformation detection technologies for non-expert users - Dataset sharing and governance in AI for disinformation - Temporal and cultural aspects of disinformation - Datasets for disinformation detection and multimedia verification - Multimedia verification systems and applications - System fusion, ensembling and late fusion techniques - Benchmarking and evaluation frameworks - Open resources, e.g., datasets, tools *** Submission guidelines *** To submit your contribution, please adhere strictly to the ACM ICMR instructions available here https://www.icmr2022.org/authors/submissions/. 
*** Organizing committee *** Bogdan Ionescu, Politehnica University of Bucharest, Romania Giorgos Kordopatis-Zilos, Centre for Research and Technology Hellas, Thessaloniki, Greece Symeon Papadopoulos, Centre for Research and Technology Hellas, Thessaloniki, Greece Adrian Popescu, CEA LIST, Saclay, France Luca Cuccovillo, Fraunhofer IDMT, Germany On behalf of the organizers, Bogdan Ionescu https://bionescu.aimultimedialab.ro/ From coralie.gregoire at insa-lyon.fr Mon Jan 17 10:19:11 2022 From: coralie.gregoire at insa-lyon.fr (Coralie Gregoire) Date: Mon, 17 Jan 2022 16:19:11 +0100 (CET) Subject: Connectionists: [CFP] The ACM Web Conference 2022 - CFPs still open Message-ID: <727870828.1279509.1642432751681.JavaMail.zimbra@insa-lyon.fr> [Apologies for the cross-posting, this call is sent to numerous lists you may have subscribed to] [CFP] The ACM Web Conference 2022 - CFP Posters & Demos + Web Developer & W3C + PhD Symposium + Workshops papers We invite contributions to the Posters and Demos track / Web Developer & W3C track, PhD Symposium track as well as to the 22 Workshops colocated at The ACM Web Conference 2022 (formerly known as WWW). The conference will take place online, hosted by Lyon, France, on April 25-29, 2022. * Workshops: https://www2022.thewebconf.org/workshops/ * Posters & Demos: https://www2022.thewebconf.org/cfp/posters-demos/ * Web Developer & W3C: https://www2022.thewebconf.org/cfp/web-dev-w3c/ * PhD Symposium: https://www2022.thewebconf.org/cfp/phd-symposium/ *Same deadlines for them all: (All submission deadlines are end-of-day in the Anywhere on Earth (AoE) time zone).* - Papers submission: February 3rd, 2022 - Notification to authors: March 3rd, 2022 - Camera ready: March 10th, 2022 ------------------------------------------------------------ Call for Workshop papers All the selected workshops for TheWebConf 2022 are listed at https://www2022.thewebconf.org/workshops/. 
Check out the following web sites for additional instructions regarding submission guidelines. W1 - 3rd International Workshop on Data Literacy: https://datalit.itd.cnr.it/en/3rd-Data-Literacy-Workshop W2 - Beyond Facts: 2nd International Workshop on Knowledge Graphs for Online Discourse Analysis: https://knod22.wordpress.com/ W3 - CAAW: International Workshop on Cryptoasset Analytics Workshop: https://caaw.io/ W4 - CLEOPATRA: 3rd International Workshop on Cross-lingual Event-centric Open Analytics: http://cleopatra-workshop.l3s.uni-hannover.de/ W5 - COnSeNT: 2nd International Workshop on Consent Management in Online Services, Networks and Things: https://consentworkshop.com W6 - EMDC: 2nd International Workshop on the Efficiency of Modern Datacenters: https://sites.google.com/view/emdc W7 - FinWeb: 2nd International Workshop on Financial Technology on the Web: https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2022/ W8 - GLB 2022: 2nd International Workshop on Graph Learning Benchmark: https://graph-learning-benchmarks.github.io W9 - International Workshop on AI in Health: Explainable AI for Better Health: https://aihealth.ischool.utexas.edu/AIHealthWWW2022/index.html W10 - LocWeb2022: 12th International Workshop on Location and the Web: https://dhere.de/locweb/locweb2022/ W11 - MAISoN: 8th International Workshop on Mining Actionable Insights from Social Networks, Special Edition on Mental Health and Social Media: https://2022.maisonworkshop.org W12 - MUWS'2022: 1st International Workshop on Multimodal Understanding for the Web and Social Media: https://muws-workshop.github.io/ W13 - Sci-K: 2nd International Workshop on Scientific Knowledge Representation, Discovery, and Assessment: https://sci-k.github.io/2022/ W14 - SeBiLAn: International Workshop on Semantics-enabled Biomedical Literature Analytics: https://www.sebilanworkshop.com/ W15 - SocialNLP: 10th International Workshop on Natural Language Processing for Social Media: 
https://sites.google.com/view/socialnlp2022/ W16 - TempWeb: 12th International Workshop on Temporal Web Analytics: http://temporalweb.net/ W17 - The 2nd International Workshop on Deep Learning for the Web of Things: https://rail.fzu.edu.cn/info/1014/1211.htm W18 - The First International Workshop on Graph Learning: http://www.graphlearning.net/ W19 - UserNLP: International Workshop on User-centered Natural Language Processing: https://caisa.informatik.uni-marburg.de/user_nlp.html W20 - WANDER: International Workshop on Web Acceleration for Developing Regions: https://wander2022.com/ W21 - WebAndTheCity: 8th International Workshop on Web and Smart Cities: https://webandthecity.home.blog/ W22 - Wiki Workshop: https://wikiworkshop.org/ ----------------------------------------------------------------------------------------------------- Call for TheWebConf Posters & Demos Track *Posters and Demos chairs: (www2022-poster-demo at easychair.org)* - Anna Lisa Gentile (IBM Research) - Pasquale Lisena (EURECOM) The Web Conference is the premier conference focused on understanding the current state and the evolution of the Web through the lens of different disciplines, including computing, computational social science, economics and political sciences. The Posters and Demos Track is a forum to foster interactions among researchers and practitioners by allowing them to present and demonstrate their new and innovative work. In addition, the Posters and Demos track will give conference attendees an opportunity to learn about novel ongoing research projects through informal interactions. Demo submissions must be based on an implemented and tested system. Posters and Demos papers will be peer-reviewed by members of the Poster Committee based on originality, significance, quality, and clarity. Accepted papers will appear in the Companion conference proceedings. 
In addition, authors of accepted work will be asked to create a digital poster to present their work during the Posters and Demos track at the conference. Submitted posters are expected to be aligned with one or more of the relevant topics of TheWebConf community, including (but not limited to): - Web-related Economics, Monetization, and Online Markets - Web Search - Web Security, Privacy, and Trust - Semantics and Knowledge - Social Network Analysis and Graph Algorithms - Social Web - Systems and Infrastructure - User Modeling, Personalization and Accessibility - Web and Society - Web Mining and Content Analysis - Web of Things, Ubiquitous and Mobile Computing - Esports and Online Gaming - History of the Web - Web for good *Submission guidelines* Posters and Demos papers are limited to four pages, including references. Submissions are NOT anonymous. It is the authors' responsibility to ensure that their submissions adhere strictly to the required format. In particular, the format cannot be modified with the objective of squeezing in more material. Submissions that do not comply with the formatting guidelines will be rejected without review. Submissions will be handled via Easychair, at: https://easychair.org/conferences/?conf=thewebconf2022, selecting the Poster-Demo track. We also highly encourage authors to include external material related to the poster or demo (e.g., code repository on Github or equivalent) in the submission. *Formatting the submissions* Submissions must adhere to the ACM template and format published in the ACM guidelines at https://www.acm.org/publications/proceedings-template. Please remember to add Concepts and Keywords. Please use the template in traditional double-column format to prepare your submissions. For example, Word users may use the Word Interim template, and LaTeX users may use the sample-sigconf template. 
For Overleaf users, you may want to use https://www.overleaf.com/latex/templates/association-for-computing-machinery-acm-sig-proceedings-template/bmvfhcdnxfty Submissions for review must be in PDF format. They must be self-contained and written in English. Submissions that do not follow these guidelines, or do not view or print properly, will be rejected without review. *Ethical use of data and informed consent* As a published ACM author, you and your co-authors are subject to all ACM Publications Policies, including ACM's new Publications Policy on Research Involving Human Participants and Subjects. When appropriate, authors are encouraged to include a section on the ethical use of data and/or informed consent in their paper. Note that submitting your research for approval by the author(s)' institutional ethics review body (IRB) may not always be sufficient. Even if such research has been signed off by your IRB, the programme committee might raise additional concerns about the ethical implications of the work and include these concerns in its review. *Publication policy* Accepted papers will require a further revision in order to meet the requirements and page limits of the camera-ready format required by ACM. Instructions for the preparation of the camera-ready versions of the papers will be provided after acceptance. All accepted papers will be published by ACM and will be available via the ACM Digital Library. To be included in the Companion Proceedings, at least one author of each accepted paper must register for the conference and present the paper. -------------------------------------------------------------------------------------------------- Call for TheWebConf 2022 Developer and W3C Track *Web Developer and W3C Track Chairs:* Contact us at - Dominique Hazael-Massieux (W3C) - Tom Steiner (Google LLC) The Web Conference 2022 Developer and W3C Track is part of The Web Conference 2022 in Lyon, France. 
Participation in the developers track will require registration of at least one author for the conference. The Web Conference 2022 Developer and W3C Track presents an opportunity to share the latest developments across the technical community, both in terms of technologies and in terms of tooling. We will be running as a regular track with live (preferred) or pre-recorded (as an alternative format) presentations during the conference, showcasing community expertise and progress. The event will take place according to the CET time zone (Paris time). We will do our best to accommodate speakers in time slots as convenient as possible for their local time zones. While we are open to any contributions that are relevant for the Web space, here are a few areas that we are particularly interested in: - Tools and methodologies to measure and reduce the environmental impact of the Web. - New usage patterns enabled by Progressive Web Apps and new browser APIs. - Work-arounds, data-based quantification and identification of Web compatibility issues. - Web tooling and developer experience, in particular towards reducing the complexity for newcomers: How can we get closer to the magic of hitting "view source" as a way to get people started? - Tools and frameworks that enable the convergence of real-time communication and streaming media. - Decentralized architectures for the Web, such as those emerging from projects and movements such as Solid, or "Web3". - Peer to peer architectures and protocols. - Identity management (DID, WebAuthN, Federated Credential Management). *Submission guidelines* Submissions can take several forms and authors can choose one or multiple submission entries among the following choices: - Papers: papers are limited to 6 pages, including references. Submissions are NOT anonymous. It is the authors' responsibility to ensure that their submissions adhere strictly to the required format. 
In particular, the format cannot be modified with the objective of squeezing in more material. Papers will be published in the ACM The Web Conference Companion Proceedings archived by the ACM Digital Library, as open access, if the authors wish so. - Links to code repositories on GitHub (with sufficient description and documentation). - Links to recorded demos (as a complement to the above, ideally following established best practices as proposed by the W3C). - Any other resource reachable on the Web. Submissions will be handled via Easychair, at https://easychair.org/conferences/?conf=thewebconf2022, selecting the Web Developer and W3C track. *Formatting the submissions* Submissions must adhere to the ACM template and format published in the ACM guidelines at https://www.acm.org/publications/proceedings-template. Please remember to add Concepts and Keywords and use the template in traditional double-column format to prepare your submissions. For example, Word users may use the Word Interim template, and LaTeX users may use the sample-sigconf template. For Overleaf users, you may want to use https://www.overleaf.com/latex/templates/association-for-computing-machinery-acm-sig-proceedings-template/bmvfhcdnxfty. Submissions for review must be in PDF format. They must be self-contained and written in English. Submissions that do not follow these guidelines, or do not view or print properly, will be rejected without review. *Ethical use of data and informed consent* As a published ACM author, you and your co-authors are subject to all ACM Publications Policies, including ACM's new Publications Policy on Research Involving Human Participants and Subjects. When appropriate, authors are encouraged to include a section on the ethical use of data and/or informed consent in their paper. Note that submitting your research for approval by the author(s)' institutional ethics review body (IRB) may not always be sufficient. 
Even if such research has been signed off by your IRB, the programme committee might raise additional concerns about the ethical implications of the work and include these concerns in its review. *Publication policy* Accepted papers will require a further revision in order to meet the requirements and page limits of the camera-ready format required by ACM. Instructions for the preparation of the camera-ready versions of the papers will be provided after acceptance. ---------------------------------------------------------------------------------------------------------- Call for PhD Symposium Papers *PhD Symposium chairs: (www2022-phd-symposium at easychair.org)* - Hala Skaf-Molli (University of Nantes, France) - Elena Demidova (University of Bonn, Germany) The PhD Symposium of The Web Conference 2022 welcomes submissions from PhD students on their ongoing research related to the main conference topics. These topics include: Semantics and Knowledge, Web Search, Web Systems and Infrastructure, Web Mining and Content Analysis, Economics, Monetization, and Online Markets, User Modeling and Personalization, Web and Society, Web of Things, Ubiquitous and Mobile Computing, Social Network Analysis and Graph Algorithms, Security, Privacy, Trust and Social Web (see also the Research Tracks). We are particularly interested in submissions aimed at enhancing our understanding of the Web, providing intuitive access to Web information and knowledge, strengthening the positive impact of the Web on society, taking advantage of the Web of Things, and enhancing security, privacy protection, and trust. The goal of the PhD Symposium is to provide a platform for PhD students to present and receive feedback on their ongoing research. Students at different stages of their research will have the opportunity to present and discuss their research questions, goals, methods, and results. 
The symposium aims to provide students guidance on various aspects of their research from established researchers and other PhD students working in research areas related to the World Wide Web. Finally, the symposium aims to enable PhD students to interact with other participants of The Web Conference and potential collaborators by stimulating the exchange of ideas and experiences. *Eligibility* The PhD Symposium is open to all PhD students. PhD students at the beginning stages of their doctoral work are particularly welcome when they have a well-defined problem statement and ideas about the solutions they would like to discuss. PhD students in a more advanced stage of their work are also welcome to share and discuss their research results and experiences. *Submission Guidelines* Submissions should be written based on the following structure, which focuses on the key methodological components required for a sound research synthesis: - Abstract: A self-sustained short description of the paper. - Introduction/Motivation: Provide a general introduction to the topic and indicate its importance/impact on Web research and real-world applications. - Problem: Describe the core problem of the PhD thesis. - State of the art: Briefly describe the most relevant related work. - Proposed approach: Briefly present the approach taken and motivate how this is novel regarding existing works. - Methodology: Sketch the methodology that is (or will be) adopted and, in particular, the approach to be taken for evaluating the results of the work. - Results: Describe the current status of the work and the most significant results that have been obtained so far. - Conclusions and future work: Conclude and specify the major items of future work. Submissions should be written in English and must be no longer than five (5) pages in length (according to the ACM format acmart.cls, using the "sigconf" option). 
Submissions must be in PDF and must be made through the EasyChair system at https://easychair.org/conferences/?conf=thewebconf2022 (select the PhD Symposium track). Submissions must be single-author and be on the topic of the doctoral work. The supervisor's name must be clearly marked ("supervised by ...") on the paper, under the author's name. Submissions that do not comply with the formatting guidelines will be rejected without review. Selected papers will be published in the Companion Proceedings of The Web Conference 2022 and made available through the ACM Digital Library. *Review Process* All submissions will be reviewed by the members of the Program Committee of the PhD Symposium, who are experienced researchers in the relevant areas. Students of accepted submissions will have the opportunity to discuss their submissions in more detail and receive additional feedback from mentors. ============================================================ Contact us: contact at thewebconf.org - Facebook: https://www.facebook.com/TheWebConf - Twitter: https://twitter.com/TheWebConf - LinkedIn: https://www.linkedin.com/showcase/18819430/admin/ - Website: https://www2022.thewebconf.org/ ============================================== From triesch at fias.uni-frankfurt.de Mon Jan 17 08:05:40 2022 From: triesch at fias.uni-frankfurt.de (Jochen Triesch) Date: Mon, 17 Jan 2022 14:05:40 +0100 Subject: Connectionists: Josh Tenenbaum speaking in Developing Minds global online lecture series, January 27, 16:00 UTC Message-ID: <937A7D11-2EB9-4941-8C3B-D86A0DB586C5@fias.uni-frankfurt.de> Dear colleagues, you are cordially invited to the following talk in the Developing Minds global online lecture series (https://sites.google.com/view/developing-minds-series/home): Josh Tenenbaum, MIT "Reverse Engineering Human Cognitive Development: What do we start with, and how do we learn the rest?" 
This live event will take place on: January 27, 2022 16:00 UTC (Coordinated Universal Time) 17:00 CET (Central European Time) 11:00 EST (Eastern Standard Time) 01:00 JST, January 28 (Japan Standard Time) Abstract: What would it take to build a machine that grows into intelligence the way a person does - that starts like a baby, and learns like a child? AI researchers have long debated the relative value of building systems with strongly pre-specified knowledge representations versus learning representations from scratch, driven by data. However, in cognitive science, it is now widely accepted that the analogous "nature versus nurture" question is a false choice: explaining the origins of human intelligence will most likely require both powerful learning mechanisms and a powerful foundation of built-in representational structure and inductive biases. I will talk about our efforts to build models of the starting state of the infant mind, as well as the learning algorithms that grow knowledge through early childhood and beyond. These models are expressed as probabilistic programs, defined on top of simulation engines that capture the basic dynamics of objects and agents interacting in space and time. Learning algorithms draw on techniques from program synthesis and probabilistic program induction. I will show how these models are beginning to capture core aspects of human cognition and cognitive development, in terms that can be useful for building more human-like AI. I will also talk about some of the major outstanding challenges facing these and other models of human learning. Bio: Josh Tenenbaum is Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences, the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds and Machines (CBMM). He received his PhD from MIT in 1999, and taught at Stanford from 1999 to 2002. 
His long-term goal is to reverse-engineer intelligence in the human mind and brain, and use these insights to engineer more human-like machine intelligence. His current research focuses on the development of common sense in children and machines, the neural basis of common sense, and models of learning as Bayesian program synthesis. His research group's papers have been recognized with awards at multiple conferences in Cognitive Science, Computer Vision, AI, Reinforcement Learning and Decision Making, and Robotics. He is the recipient of the Distinguished Scientific Award for Early Career Contributions in Psychology from the American Psychological Association (2008), the Troland Research Award from the National Academy of Sciences (2011), the Howard Crosby Warren Medal from the Society of Experimental Psychologists (2016), the R&D Magazine Innovator of the Year award (2018), and a MacArthur Fellowship (2019). He is a fellow of the Cognitive Science Society, the Society for Experimental Psychologists, and a member of the American Academy of Arts and Sciences. Web: https://web.mit.edu/cocosci/josh.html To attend the talk, please register at: https://sites.google.com/view/developing-minds-series/home Kind regards, Jochen Triesch -- Prof. Dr. Jochen Triesch Johanna Quandt Chair for Theoretical Life Sciences Frankfurt Institute for Advanced Studies and Goethe University Frankfurt http://fias.uni-frankfurt.de/~triesch/ Tel: +49 (0)69 798-47531 Fax: +49 (0)69 798-47611 From ludovico.montalcini at gmail.com Tue Jan 18 04:09:10 2022 From: ludovico.montalcini at gmail.com (Ludovico Montalcini) Date: Tue, 18 Jan 2022 10:09:10 +0100 Subject: Connectionists: 1st CfP: The 8th Int. Online & Onsite Conf. on Machine Learning, Optimization & Data Science - LOD 2022, September 19-22, Certosa di Pontignano, Tuscany - Italy - Paper Submission Deadline: March 23 Message-ID: Dear Colleague, Apologies if you receive multiple copies of this announcement. 
Please kindly help forward it to potentially interested authors/attendees, thanks! The 8th International Online & Onsite Conference on Machine Learning, Optimization, and Data Science - #LOD2022 - September 19-22, Certosa di Pontignano, #Tuscany - Italy LOD 2022, An Interdisciplinary Conference: #MachineLearning, #Optimization, #BigData & #ArtificialIntelligence, #DeepLearning without Borders https://lod2022.icas.cc lod at icas.cc PAPERS SUBMISSION: March 23 (Anywhere on Earth) All papers must be submitted using EasyChair: https://easychair.org/conferences/?conf=lod2022 LOD 2022 KEYNOTE SPEAKER(S): * Pierre Baldi, University of California Irvine, USA LOD 2022 TUTORIAL SPEAKER: * Simone Scardapane, University of Rome "La Sapienza", Italy ACAIN 2022 KEYNOTE SPEAKERS: * Marvin M. Chun, Yale University, USA * Ila Fiete, MIT, USA * Karl Friston, University College London, UK & Wellcome Trust Centre for Neuroimaging * Wulfram Gerstner, EPFL, Switzerland * Christopher Summerfield, Oxford University, UK * Max Erik Tegmark, MIT, USA & Future of Life Institute More Lecturers and Speakers to be announced soon! https://acain2022.artificial-intelligence-sas.org/course-lecturers/ PAPER FORMAT: Please prepare your paper in English using the Springer Nature Lecture Notes in Computer Science (LNCS) template, which is available here. Papers must be submitted in PDF. TYPES OF SUBMISSIONS: When submitting a paper to LOD 2022, authors are required to select one of the following four types of papers: * long paper: original novel and unpublished work (max. 15 pages in Springer LNCS format); * short paper: an extended abstract of novel work (max. 5 pages); * work for oral presentation only (no page restriction; any format). For example, work already published elsewhere, which is relevant, and which may solicit fruitful discussion at the conference; * abstract for poster presentation only (max 2 pages; any format). 
The poster format for the presentation is A0 (118.9 cm high and 84.1 cm wide, i.e. 46.8 x 33.1 inches). For research work which is relevant, and which may solicit fruitful discussion at the conference. Each paper submitted will be rigorously evaluated. The evaluation will ensure the high interest and expertise of reviewers. Following the tradition of LOD, we expect high-quality papers in terms of their scientific contribution, rigor, correctness, novelty, clarity, quality of presentation and reproducibility of experiments. Accepted papers must contain significant novel results. Results can be either theoretical or empirical. Results will be judged on the degree to which they have been objectively established and/or their potential for scientific and technological impact. It is also possible to present the talk virtually (Zoom). Special Sessions: Special session on "Graph Machine Learning" Gianfranco Lombardo, Ph.D., University of Parma, Italy gianfranco.lombardo at unipr.it Special session on "Machine Learning for Fintech" 
Gianfranco Lombardo, Ph.D., University of Parma, Italy gianfranco.lombardo at unipr.it https://easychair.org/my/conference?conf=lod2022 https://lod2022.icas.cc/special-sessions/ PAST LOD KEYNOTE SPEAKERS: https://lod2022.icas.cc/past-keynote-speakers/ Yoshua Bengio, Head of the Montreal Institute for Learning Algorithms (MILA) & University of Montreal, Canada Bettina Berendt, TU Berlin, Germany & KU Leuven, Belgium, and Weizenbaum Institute for the Networked Society, Germany Jörg Bornschein, DeepMind, London, UK Michael Bronstein, Imperial College London, UK Nello Cristianini, University of Bristol, UK Peter Flach, University of Bristol, UK, and EiC of the Machine Learning Journal Marco Gori, University of Siena, Italy Arthur Gretton, UCL, UK Arthur Guez, Google DeepMind, Montreal, UK Yi-Ke Guo, Imperial College London, UK George Karypis, University of Minnesota, USA Vipin Kumar, University of Minnesota, USA Marta Kwiatkowska, University of Oxford, UK George Michailidis, University of Florida, USA Kaisa Miettinen, University of Jyväskylä, Finland Stephen Muggleton, Imperial College London, UK Panos Pardalos, University of Florida, USA Jan Peters, Technische Universitaet Darmstadt & Max-Planck Institute for Intelligent Systems, Germany Tomaso Poggio, MIT, USA Andrey Raygorodsky, Moscow Institute of Physics and Technology, Russia Mauricio G. C. Resende, Amazon.com Research and University of Washington Seattle, Washington, USA Ruslan Salakhutdinov, Carnegie Mellon University, USA, and AI Research at Apple Maria Schuld, Xanadu & University of KwaZulu-Natal, South Africa Richard E. 
Turner, Department of Engineering, University of Cambridge, UK Ruth Urner, York University, Toronto, Canada Isabel Valera, Saarland University, Saarbrücken & Max Planck Institute for Intelligent Systems, Tübingen, Germany TRACKS & SPECIAL SESSIONS: https://lod2022.icas.cc/special-sessions/ *) Special Session on AI for Sustainability We welcome contributions on AI for Sustainable Development, AI for Sustainable Urban Mobility, AI for Food Security, AI to fight Deforestation, cutting-edge technology AI to create Inclusive and Sustainable development that leaves no one behind. *) Special Session on AI to help to fight Climate Change AI is a new tool to help us better manage the impacts of climate change and protect the planet. AI can be a "game-changer" for climate change and environmental issues. AI refers to computer systems that "can sense their environment, think, learn, and act in response to what they sense and their programmed objectives," World Economic Forum report, Harnessing Artificial Intelligence for the Earth. We accept papers/short papers/talks at the intersection of climate change, AI, machine learning and data science. AI, Machine Learning and Data Science can be invaluable tools both in reducing greenhouse gas emissions and in helping society adapt to the effects of climate change. We invite submissions using AI, Machine Learning and/or Data Science to address problems in climate mitigation/adaptation including but not limited to the following topics: * Industrial Session Chair: Giovanni Giuffrida - Neodata. * Special Session on Explainable Artificial Intelligence Explainability is essential for users to effectively understand, trust, and manage powerful artificial intelligence applications. 
BEST PAPER AWARD: Springer sponsors the LOD 2022 Best Paper Award https://lod2022.icas.cc/best-paper-award/ PROGRAM COMMITTEE: https://lod2022.icas.cc/program-committee/ VENUE: https://lod2022.icas.cc/venue/ The venue of LOD 2022 will be The Certosa di Pontignano - Siena The Certosa di Pontignano Località Pontignano, 5 - 53019, Castelnuovo Berardenga (Siena) - Tuscany - Italy phone: +39-0577-1521104 fax: +39-0577-1521098 info at lacertosadipontignano.com https://www.lacertosadipontignano.com/en/index.php Contact person: Dr. Lorenzo Pasquinuzzi You need to book your accommodation at the venue and pay the amount for accommodation, meals directly to the Certosa di Pontignano. ACTIVITIES: https://lod2022.icas.cc/activities/ POSTER: https://lod2022.icas.cc/wp-content/uploads/sites/20/2021/12/poster-LOD-2022-1.png Submit your research work today! https://easychair.org/conferences/?conf=lod2022 See you in the beautiful Tuscany in September! Best regards, LOD 2022 Organizing Committee LOD 2022 NEWS: https://lod2022.icas.cc/category/news/ Past Editions https://lod2022.icas.cc/past-editions/ LOD 2021, The Seventh International Conference on Machine Learning, Optimization and Big Data Grasmere - Lake District - England, UK. Nature Springer - LNCS volumes 13163 and 13164. LOD 2020, The Sixth International Conference on Machine Learning, Optimization and Big Data Certosa di Pontignano - Siena - Tuscany - Italy. Nature Springer - LNCS volumes 12565 and 12566. LOD 2019, The Fifth International Conference on Machine Learning, Optimization and Big Data Certosa di Pontignano - Siena - Tuscany - Italy. Nature Springer - LNCS volume 11943. LOD 2018, The Fourth International Conference on Machine Learning, Optimization and Big Data Volterra - Tuscany - Italy. Nature Springer - LNCS volume 11331. MOD 2017, The Third International Conference on Machine Learning, Optimization and Big Data Volterra - Tuscany - Italy. Springer - LNCS volume 10710. 
MOD 2016, The Second International Workshop on Machine learning, Optimization and big Data Volterra - Tuscany - Italy. Springer - LNCS volume 10122. MOD 2015, International Workshop on Machine learning, Optimization and big Data Taormina - Sicily - Italy. Springer - LNCS volume 9432. https://www.facebook.com/groups/2236577489686309/ https://twitter.com/TaoSciences https://www.linkedin.com/groups/12092025/ lod at icas.cc https://lod2022.icas.cc * Apologies for multiple copies. Please forward to anybody who might be interested * -------------- next part -------------- An HTML attachment was scrubbed... URL: From malchiodi at di.unimi.it Tue Jan 18 05:29:20 2022 From: malchiodi at di.unimi.it (malchiodi) Date: Tue, 18 Jan 2022 11:29:20 +0100 Subject: Connectionists: [CfP - extended deadline] (Special Session @ IPMU 2022) MIIXAI: Managing Imprecise Information for XAI In-Reply-To: <53d00e6c5ce2b1f8a90affbb82422e6b@di.unimi.it> References: <53d00e6c5ce2b1f8a90affbb82422e6b@di.unimi.it> Message-ID: <79eade9502099ef14d8fe03be933e9e9@di.unimi.it> (Special Session @ IPMU 2022) MIIXAI: Managing Imprecise Information for XAI EXTENDED DEADLINE FOR SUBMISSION: Friday, 18 February 2022 (apologies for cross postings) People have an exceptional ability in managing imprecise information in forms that are well captured by several theories within the Granular Computing paradigm, such as Fuzzy Set Theory, Rough Set Theory, Interval Computing and hybrid theories among others. Endowing XAI systems with the ability to deal with the many forms of imprecision is therefore a key challenge that can push forward current XAI technologies towards more trustworthy systems based on imprecise information (II) and full collaborative intelligence. 
The Special Session will gather recent advancements in topics such as foundational, theoretical and methodological aspects of imprecision management in XAI, new technologies for representing and processing imprecision in XAI systems, as well as real-world applications that demonstrate explainability improvements through imprecision management. Topics include but are not limited to: - Design of explainable II-based systems - Evaluation of explainability in models for II - Hybrid systems dealing with different forms of imprecision - Successful applications of explainable II-based systems - Induction of explainable models from II - Theoretical aspects of explainability in II-based systems Important dates: - Submission of Full Papers (EXTENDED): Friday, 18 February 2022 - Notification of Acceptance (EXTENDED): Friday, 1 April 2022 - Camera-ready Submission (EXTENDED): Friday, 22 April 2022 - Conference: 11-15 July 2022, Milan, Italy Submission instructions and more information at the IPMU website: https://ipmu2022.disco.unimib.it/ Organizers: - Dario Malchiodi, Dept. of Computer Science, Università degli Studi di Milano, Italy, dario.malchiodi at unimi.it - Corrado Mencar, Dept. of Computer Science, University of Bari Aldo Moro, Italy, corrado.mencar at uniba.it From a.pucyk at icm.edu.pl Tue Jan 18 04:30:43 2022 From: a.pucyk at icm.edu.pl (Alicja Pucyk) Date: Tue, 18 Jan 2022 10:30:43 +0100 Subject: Connectionists: [Call for Participation] Jan 20, 4pm CET | Free Virtual ICM Seminar on reconstructing all neurons in a fly brain at nanometer resolution Message-ID: <028d01d80c4e$12fa8520$38ef8f60$@icm.edu.pl> ============================================================= 21. 
Virtual ICM Seminar with Sven Dorkenwald from Princeton University ============================================================= TITLE: Towards whole-brain Connectomes: Reconstructing all neurons in a fly brain at nanometer resolution DATE: Thursday, January 20, 2022 | 4pm CET FREE registration: https://supercomputingfrontiers.eu/2022/seminars/ ICM University of Warsaw, alongside the creator of this series, Dr. Marek Michalewicz, is proud to invite everyone to the #VirtualICMSeminar with Sven Dorkenwald, who is developing systems, infrastructure and machine learning methods to facilitate the analysis of large-scale connectomics datasets, including FlyWire.ai. Don't miss it! Register NOW. Abstract: Comprehensive neuronal wiring diagrams derived from Electron Microscopy images allow researchers to test models of how brain circuits give rise to neuronal activity and drive behavior. Due to advances in automated image acquisition and analysis, whole-brain connectomes with thousands of neurons are finally on the horizon. However, many person-years of manual proofreading are still required to correct errors in these automated reconstructions. We created FlyWire to facilitate the proofreading of neuronal circuits in an entire fly brain by a community of researchers distributed across the world. While FlyWire is dedicated to the fly brain, its methods will be generally applicable to whole-brain connectomics and are already in use to proofread multiple datasets. In this talk I will describe how FlyWire's computational and social structures are organized to scale up to whole-brain connectomics, and present our progress towards the generation of a proofread whole-brain connectome of the fruit fly. Biosketch: Sven Dorkenwald is currently a PhD student in the Seung Lab at Princeton University. In his PhD he is developing systems, infrastructure and machine learning methods to facilitate the analysis of large-scale connectomics datasets. 
Together with collaborators at the Allen Institute for Brain Science, he developed proofreading and annotation infrastructure that is used to host multiple large-scale connectomics datasets and runs FlyWire. FlyWire.ai is an online community for proofreading neural circuits in a whole fly brain based on the FAFB EM dataset. From ludovico.montalcini at gmail.com Tue Jan 18 04:33:47 2022 From: ludovico.montalcini at gmail.com (Ludovico Montalcini) Date: Tue, 18 Jan 2022 10:33:47 +0100 Subject: Connectionists: 1st CfP ACAIN 2022, 2nd Int. Online & Onsite Advanced Course & Symposium on Artificial Intelligence & Neuroscience, Sept 19-22, Certosa di Pontignano, Tuscany - Italy Message-ID: _______________________________________________________________ Call for Participation & Call for Papers (apologies for cross-postings) Please distribute this call to interested parties, thanks _______________________________________________________________ The 2nd International Online & Onsite Advanced Course & Symposium on #ArtificialIntelligence & #Neuroscience - #ACAIN2022 September 19-22, 2022 Certosa di Pontignano, Castelnuovo Berardenga (Siena), #Tuscany - Italy LECTURERS: * Marvin M. Chun, Yale University, USA * Ila Fiete, MIT, USA * Karl Friston, University College London, UK & Wellcome Trust Centre for Neuroimaging * Wulfram Gerstner, EPFL, Switzerland * Christopher Summerfield, Oxford University, UK * Max Erik Tegmark, MIT, USA & Future of Life Institute More Lecturers and Speakers to be announced soon! 
W: https://acain2022.artificial-intelligence-sas.org E: acain at icas.cc NEWS: https://acain2022.artificial-intelligence-sas.org/category/news/ Past Edition: https://acain2021.artificial-intelligence-sas.org Early Registration (Course): by March 23, 2022 (AoE) https://acain2022.artificial-intelligence-sas.org/registration/ Paper Submission (Symposium): by Saturday April 23, 2022 (AoE) https://acain2022.artificial-intelligence-sas.org/symposium-call-for-papers/ https://easychair.org/conferences/?conf=acain2022 SCOPE & MOTIVATION: The ACAIN 2022 symposium is an interdisciplinary event featuring leading scientists from AI and Neuroscience, providing a special opportunity to learn about cutting-edge research in both fields. The Advanced Course and Symposium on Artificial Intelligence & Neuroscience (ACAIN) is a full-immersion residential (or online) Course and Symposium at the Certosa di Pontignano (Tuscany - Italy) on cutting-edge advances in Artificial Intelligence and Neuroscience, with lectures delivered by world-renowned experts. The Course provides a stimulating environment for academics, early career researchers, Post-Docs, PhD students and industry leaders. Participants will also have the chance to present their results with oral talks or posters, and to interact with their peers, in a friendly and constructive environment. Two days of keynote talks and oral presentations, the ACAIN Symposium (September 21-22), will be preceded by lectures from leading scientists, the ACAIN Course (September 19-20). Bringing together AI and neuroscience promises to yield benefits for both fields. The future impact and progress in both AI and Neuroscience will strongly depend on continuous synergy and efficient cooperation between the two research communities. These are the goals of the International Course and Symposium - ACAIN 2022, which is aimed both at AI experts with interests in Neuroscience and at neuroscientists with an interest in AI. 
ACAIN 2022 accepts rigorous research that promotes and fosters multidisciplinary interactions between artificial intelligence and neuroscience. For four days you will work alongside faculty who are undisputed leaders in their fields. The result is a profound experience that fosters professional and personal development. In a proven, unique format you will be exposed to a high-impact learning experience that takes you outside the comfort zone of your own technical expertise and empowers you with new analytical and strategic skills across areas of Artificial Intelligence and Neuroscience. Through an increased awareness of the challenges in Artificial Intelligence and Neuroscience, you will gain a place within an elite global network of experts from both fields and learn how their skills apply to your own discipline. The Advanced Course is suited for younger scholars, academics, early career researchers, Post-Docs, PhD students and industry leaders. Moreover, a significant proportion of seasoned investigators are regularly present among the attendees, often senior faculty at their own institutions. The balanced audience that we strive to maintain in each Advanced Course greatly contributes to the development of intense cross-disciplinary debates among faculty and participants that typically address the most advanced and emerging areas of each topic. For four days, the faculty members will present lectures and discuss with participants in a smaller, more focused setting. This longer interaction, with an exclusive course size, provides the best opportunity to explore the unique expertise of each distinguished faculty mentor, often through one-on-one mentoring. This is unparalleled and priceless. The Event (Course and Symposium) will involve a total of 36-40 hours of lectures. Academically, this will be equivalent to 8 ECTS points for the PhD Students and the Master Students attending the Event. 
The Certosa di Pontignano provides the perfect learning atmosphere, both relaxing and intellectually stimulating, with a stunning backdrop of Tuscan landscapes. Art, landscapes, world-class wines and traditional foods will make the Advanced Course on Artificial Intelligence and Neuroscience the experience of a lifetime! COURSE DESCRIPTION: https://acain2022.artificial-intelligence-sas.org/course-description/ LECTURERS: https://acain2022.artificial-intelligence-sas.org/course-lecturers/ * Ila Fiete, MIT, USA * Karl Friston, University College London, UK & Wellcome Trust Centre for Neuroimaging * Wulfram Gerstner, EPFL, Switzerland * Christopher Summerfield, Oxford University, UK * Max Erik Tegmark, MIT, USA & Future of Life Institute More Lecturers and Speakers to be announced soon! ORGANIZING COMMITTEE: https://acain2022.artificial-intelligence-sas.org/organizing-committee/ VENUE & ACCOMMODATION: https://acain2022.artificial-intelligence-sas.org/venue/ https://acain2022.artificial-intelligence-sas.org/accommodation/ The venue of ACAIN 2022 will be The Certosa di Pontignano - Siena The Certosa di Pontignano Località Pontignano, 5 - 53019, Castelnuovo Berardenga (Siena) - Tuscany - Italy phone: +39-0577-1521104 fax: +39-0577-1521098 info at lacertosadipontignano.com https://www.lacertosadipontignano.com/en/index.php Contact person: Dr. Lorenzo Pasquinuzzi You need to book your accommodation at the venue and pay the amount for accommodation and meals directly to the Certosa di Pontignano. ACTIVITIES: https://acain2022.artificial-intelligence-sas.org/activities/ REGISTRATION: https://acain2022.artificial-intelligence-sas.org/registration/ See you in 3D or 2D :) in Tuscany in September! Giuseppe Nicosia & Panos Pardalos - ACAIN 2022 Directors. 
POSTER: https://acain2022.artificial-intelligence-sas.org/wp-content/uploads/sites/21/2021/12/poster-ACAIN-2022.png NEWS: https://acain2022.artificial-intelligence-sas.org/category/news/ E: acain at icas.cc W: https://acain2022.artificial-intelligence-sas.org Past Edition, ACAIN 2021: https://acain2021.artificial-intelligence-sas.org * Apologies for multiple copies. Please forward to anybody who might be interested * -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephan.petrone at gmail.com Tue Jan 18 06:30:27 2022 From: stephan.petrone at gmail.com (Stephan Petrone) Date: Tue, 18 Jan 2022 12:30:27 +0100 Subject: Connectionists: MIC 2022 - 14th Metaheuristics International Conference, Ortigia-Syracuse, Italy Message-ID: Apologies for cross-posting. We would appreciate it if you could distribute this CFP to your network. ********************************************************* MIC 2022 - 14th Metaheuristics International Conference 11-14 July 2022, Ortigia-Syracuse, Italy https://www.ANTs-lab.it/mic2022/ mic2022 at ANTs-lab.it ********************************************************* ** Submission deadline: 30th March 2022 ** NEWS ** Proceedings will be published in an LNCS Volume (Springer) ** Special Issue in the ITOR journal *Scope of the Conference ======================== The Metaheuristics International Conference (MIC) series was established in 1995, and this is its 14th edition! MIC is nowadays the main event focusing on the progress of the area of Metaheuristics and their applications. As in all previous editions, it provides an opportunity for the international research community in Metaheuristics to discuss recent research results, to develop new ideas and collaborations, and to meet old friends and make new ones in a friendly and relaxed atmosphere. Given the current circumstances, the conference will be held both in presence and online. 
Should the conference be held in presence, the organizing committee will of course ensure compliance with all safety regulations. MIC 2022 focuses on presentations that cover different aspects of metaheuristic research, such as new algorithmic developments, high-impact and original applications, new research challenges, theoretical developments, implementation issues, and in-depth experimental studies. MIC 2022 strives for a high-quality program that will be complemented by a number of invited talks, tutorials, workshops and special sessions. *Plenary Speakers ======================== + Christian Blum, Artificial Intelligence Research Institute (IIIA), Spanish National Research Council (CSIC) + Salvatore Greco, University of Catania, Italy + Kalyanmoy Deb, Michigan State University, USA + Holger H. Hoos, Leiden University, The Netherlands + El-Ghazali Talbi, University of Lille, France Important Dates ================ Submission deadline March 30th, 2022 Notification of acceptance May 10th, 2022 Camera ready copy May 25th, 2022 Early registration May 25th, 2022 Submission Details =================== MIC 2022 accepts submissions in three different formats: S1) Regular paper: novel and original research contributions of a maximum of 15 pages (LNCS format) S2) Short paper: extended abstract of novel research work of 6 pages (LNCS format) S3) Oral/Poster presentation: high-quality manuscripts that have recently (within the last year) been submitted or accepted for journal publication. All papers must be prepared using the Lecture Notes in Computer Science (LNCS) template and must be submitted in PDF via: https://www.easychair.org/conferences/?conf=mic2022 Proceedings and special issue ============================ Accepted papers in categories S1 and S2 will be published as post-proceedings in the Lecture Notes in Computer Science series by Springer. 
Accepted contributions of category S3 will be considered for oral or poster presentation at the conference, based on the number received and the slots available, and will not be included in the LNCS proceedings. Instead, an electronic book will be prepared by the MIC 2022 organizing committee and made available on the website. In addition, a post-conference special issue of the International Transactions in Operational Research (ITOR) will be considered for significantly extended and revised versions of selected accepted papers from categories S1 and S2. Conference Location ==================== MIC 2022 will be held on the beautiful island of Ortigia, the historical centre of the city of Syracuse, Sicily, Italy. Syracuse is famous for its ancient ruins, in particular the Roman Amphitheatre, the Greek Theatre, and the Orecchio di Dionisio (Ear of Dionysius), a limestone cave shaped like a human ear. Syracuse is also the city where the great mathematician Archimedes was born. https://www.siracusaturismo.net/multimedia_lista.asp MIC 2022 Conference Chairs ============================== - Luca Di Gaspero, University of Udine, Italy - Paola Festa, University of Naples, Italy - Amir Nakib, Université Paris-Est Créteil, France - Mario Pavone, University of Catania, Italy -------------- next part -------------- An HTML attachment was scrubbed... URL: From vcutsuridis at gmail.com Tue Jan 18 06:46:04 2022 From: vcutsuridis at gmail.com (Vassilis Cutsuridis) Date: Tue, 18 Jan 2022 11:46:04 +0000 Subject: Connectionists: CALL FOR STUDENT APPLICATIONS - ATHENS INTERNATIONAL MASTERS PROGRAMME IN NEUROSCIENCES Message-ID: The Athens International Master's Programme in Neurosciences has announced a call for student applications for the academic year 2022-2023. Please note that students will be evaluated through their CV, exams in cell biology and neuronal physiology (multiple-choice questions via online platforms) and an interview. 
Exams are introduced for the first time to evaluate the knowledge of students in Cell Biology and Physiology of the Nervous System. Basic knowledge of these subjects is considered necessary for students to achieve maximum performance during their studies and to ensure homogeneity in the level of knowledge. Applicants who have not been taught these subjects should make sure to attend the relevant undergraduate courses and/or online seminars, or to read relevant books. For more information, please visit the webpage of the programme: http://masterneuroscience.biol.uoa.gr/ Regards, --- Vassilis Cutsuridis, PhD, MSc, MA, FHEA Programme Leader, MSc in Intelligent Vision Senior Lecturer School of Computer Science University of Lincoln UK Tel: +44 (0) 1522 83 5701 Email: vcutsuridis at lincoln.ac.uk Web: http://staff.lincoln.ac.uk/vcutsuridis Personal website: http://www.vassiliscutsuridis.org/ "It is better to have nine of your ideas be completely disproved, and the tenth one spark off a revolution, than to have all ten be correct but unimportant discoveries that satisfy the skeptics" - Francis Crick, 1916-2004 "The important thing in science is not so much to obtain new facts as to discover new ways of thinking about them" - Sir William Lawrence Bragg, 1890-1971 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ludovico.montalcini at gmail.com Tue Jan 18 09:22:29 2022 From: ludovico.montalcini at gmail.com (Ludovico Montalcini) Date: Tue, 18 Jan 2022 15:22:29 +0100 Subject: Connectionists: 1st CfP: The 8th Int. Online & Onsite Conf. on Machine Learning, Optimization & Data Science - LOD 2022, September 19-22, Certosa di Pontignano, Tuscany - Italy - Paper Submission Deadline: March 23 In-Reply-To: References: Message-ID: Dear Colleague, Apologies if you receive multiple copies of this announcement. Please kindly help forward it to potentially interested authors/attendees, thanks! 
-- The 8th International Online & Onsite Conference on Machine Learning, Optimization, and Data Science - #LOD2022 - September 19-22, Certosa di Pontignano, #Tuscany - Italy LOD 2022, An Interdisciplinary Conference: #MachineLearning, #Optimization, #BigData & #ArtificialIntelligence, #DeepLearning without Borders https://lod2022.icas.cc lod at icas.cc PAPER SUBMISSION: March 23 (Anywhere on Earth) All papers must be submitted using EasyChair: https://easychair.org/conferences/?conf=lod2022 LOD 2022 KEYNOTE SPEAKER(S): * Pierre Baldi, University of California Irvine, USA LOD 2022 TUTORIAL SPEAKER: * Simone Scardapane, University of Rome "La Sapienza", Italy ACAIN 2022 KEYNOTE SPEAKERS: * Marvin M. Chun, Yale University, USA * Ila Fiete, MIT, USA * Karl Friston, University College London, UK & Wellcome Trust Centre for Neuroimaging * Wulfram Gerstner, EPFL, Switzerland * Christopher Summerfield, Oxford University, UK * Max Erik Tegmark, MIT, USA & Future of Life Institute More Lecturers and Speakers to be announced soon! https://acain2022.artificial-intelligence-sas.org/course-lecturers/ PAPER FORMAT: Please prepare your paper in English using the Springer Nature Lecture Notes in Computer Science (LNCS) template, which is available here. Papers must be submitted in PDF. TYPES OF SUBMISSIONS: When submitting a paper to LOD 2022, authors are required to select one of the following four types of papers: * long paper: original novel and unpublished work (max. 15 pages in Springer LNCS format); * short paper: an extended abstract of novel work (max. 5 pages); * work for oral presentation only (no page restriction; any format), e.g. work already published elsewhere which is relevant and may solicit fruitful discussion at the conference; * abstract for poster presentation only (max. 2 pages; any format). The poster format for the presentation is A0 (118.9 cm high and 84.1 cm wide, i.e. 46.8 x 33.1 inches). 
For research work which is relevant and may solicit fruitful discussion at the conference. Each submitted paper will be rigorously evaluated. The evaluation process will ensure reviewers with a high level of interest and expertise. Following the tradition of LOD, we expect high-quality papers in terms of their scientific contribution, rigor, correctness, novelty, clarity, quality of presentation and reproducibility of experiments. Accepted papers must contain significant novel results. Results can be either theoretical or empirical. Results will be judged on the degree to which they have been objectively established and/or their potential for scientific and technological impact. It is also possible to present the talk virtually (Zoom). Special Sessions: Special session on "Graph Machine Learning" Gianfranco Lombardo, Ph.D., University of Parma, Italy gianfranco.lombardo at unipr.it Special session on "Machine Learning for Fintech" Gianfranco Lombardo, Ph.D., University of Parma, Italy gianfranco.lombardo at unipr.it https://easychair.org/my/conference?conf=lod2022 https://lod2022.icas.cc/special-sessions/ PAST LOD KEYNOTE SPEAKERS: https://lod2022.icas.cc/past-keynote-speakers/ Yoshua Bengio, Head of the Montreal Institute for Learning Algorithms (MILA) & University of Montreal, Canada Bettina Berendt, TU Berlin, Germany & KU Leuven, Belgium, and Weizenbaum Institute for the Networked Society, Germany Jörg Bornschein, DeepMind, London, UK Michael Bronstein, Imperial College London, UK Nello Cristianini, University of Bristol, UK Peter Flach, University of Bristol, UK, and EiC of the Machine Learning Journal Marco Gori, University of Siena, Italy Arthur Gretton, UCL, UK Arthur Guez, Google DeepMind, Montreal, UK Yi-Ke Guo, Imperial College London, UK George Karypis, University of Minnesota, USA Vipin Kumar, University of Minnesota, USA Marta Kwiatkowska, University of Oxford, UK George Michailidis, University of Florida, USA Kaisa Miettinen, University of Jyväskylä, Finland 
Stephen Muggleton, Imperial College London, UK Panos Pardalos, University of Florida, USA Jan Peters, Technische Universitaet Darmstadt & Max-Planck Institute for Intelligent Systems, Germany Tomaso Poggio, MIT, USA Andrey Raygorodsky, Moscow Institute of Physics and Technology, Russia Mauricio G. C. Resende, Amazon.com Research and University of Washington Seattle, Washington, USA Ruslan Salakhutdinov, Carnegie Mellon University, USA, and AI Research at Apple Maria Schuld, Xanadu & University of KwaZulu-Natal, South Africa Richard E. Turner, Department of Engineering, University of Cambridge, UK Ruth Urner, York University, Toronto, Canada Isabel Valera, Saarland University, Saarbrücken & Max Planck Institute for Intelligent Systems, Tübingen, Germany TRACKS & SPECIAL SESSIONS: https://lod2022.icas.cc/special-sessions/ *) Special Session on AI for Sustainability We welcome contributions on AI for Sustainable Development, AI for Sustainable Urban Mobility, AI for Food Security, AI to fight Deforestation, and cutting-edge AI technology to create Inclusive and Sustainable development that leaves no one behind. *) Special Session on AI to help fight Climate Change AI is a new tool to help us better manage the impacts of climate change and protect the planet. AI can be a "game-changer" for climate change and environmental issues. AI refers to computer systems that "can sense their environment, think, learn, and act in response to what they sense and their programmed objectives," World Economic Forum report, Harnessing Artificial Intelligence for the Earth. We accept papers/short papers/talks at the intersection of climate change, AI, machine learning and data science. AI, Machine Learning and Data Science can be invaluable tools both in reducing greenhouse gas emissions and in helping society adapt to the effects of climate change. 
We invite submissions using AI, Machine Learning and/or Data Science to address problems in climate mitigation/adaptation, including but not limited to the following topics: * Industrial Session Chair: Giovanni Giuffrida - Neodata. * Special Session on Explainable Artificial Intelligence Explainability is essential for users to effectively understand, trust, and manage powerful artificial intelligence applications. BEST PAPER AWARD: Springer sponsors the LOD 2022 Best Paper Award https://lod2022.icas.cc/best-paper-award/ PROGRAM COMMITTEE: https://lod2022.icas.cc/program-committee/ VENUE: https://lod2022.icas.cc/venue/ The venue of LOD 2022 will be The Certosa di Pontignano - Siena The Certosa di Pontignano Località Pontignano, 5 - 53019, Castelnuovo Berardenga (Siena) - Tuscany - Italy phone: +39-0577-1521104 fax: +39-0577-1521098 info at lacertosadipontignano.com https://www.lacertosadipontignano.com/en/index.php Contact person: Dr. Lorenzo Pasquinuzzi You need to book your accommodation at the venue and pay the amount for accommodation and meals directly to the Certosa di Pontignano. ACTIVITIES: https://lod2022.icas.cc/activities/ POSTER: https://lod2022.icas.cc/wp-content/uploads/sites/20/2021/12/poster-LOD-2022-1.png Submit your research work today! https://easychair.org/conferences/?conf=lod2022 See you in beautiful Tuscany in September! Best regards, LOD 2022 Organizing Committee LOD 2022 NEWS: https://lod2022.icas.cc/category/news/ Past Editions https://lod2022.icas.cc/past-editions/ LOD 2021, The Seventh International Conference on Machine Learning, Optimization and Big Data Grasmere - Lake District - England, UK. Nature Springer - LNCS volumes 13163 and 13164. LOD 2020, The Sixth International Conference on Machine Learning, Optimization and Big Data Certosa di Pontignano - Siena - Tuscany - Italy. Nature Springer - LNCS volumes 12565 and 12566. 
LOD 2019, The Fifth International Conference on Machine Learning, Optimization and Big Data Certosa di Pontignano - Siena - Tuscany - Italy. Nature Springer - LNCS volume 11943. LOD 2018, The Fourth International Conference on Machine Learning, Optimization and Big Data Volterra - Tuscany - Italy. Nature Springer - LNCS volume 11331. MOD 2017, The Third International Conference on Machine Learning, Optimization and Big Data Volterra - Tuscany - Italy. Springer - LNCS volume 10710. MOD 2016, The Second International Workshop on Machine Learning, Optimization and Big Data Volterra - Tuscany - Italy. Springer - LNCS volume 10122. MOD 2015, International Workshop on Machine Learning, Optimization and Big Data Taormina - Sicily - Italy. Springer - LNCS volume 9432. https://www.facebook.com/groups/2236577489686309/ https://twitter.com/TaoSciences https://www.linkedin.com/groups/12092025/ lod at icas.cc https://lod2022.icas.cc * Apologies for multiple copies. Please forward to anybody who might be interested * -------------- next part -------------- An HTML attachment was scrubbed... URL: From ludovico.montalcini at gmail.com Tue Jan 18 09:24:00 2022 From: ludovico.montalcini at gmail.com (Ludovico Montalcini) Date: Tue, 18 Jan 2022 15:24:00 +0100 Subject: Connectionists: 1st CfP ACDL 2022, 5th Online & Onsite Advanced Course on Data Science & Machine Learning | August 22-26, 2022 | Certosa di Pontignano, Italy - Early Registration: by March 23 In-Reply-To: References: Message-ID: #ACDL2022, An Interdisciplinary Course: #BigData, #DeepLearning & #ArtificialIntelligence without Borders ACDL 2022 - 
A Unique Experience: #DataScience, #MachineLearning & #ArtificialIntelligence with the World's Leaders in the fascinating atmosphere of the ancient Certosa di Pontignano (Online attendance available) Certosa di Pontignano, Castelnuovo Berardenga (Siena) - #Tuscany, Italy August 22-26 https://acdl2022.icas.cc acdl at icas.cc ACDL 2022 (like ACDL 2021 and ACDL 2020): an #OnlineAndOnsiteCourse https://acdl2022.icas.cc/acdl-2022-as-acdl-2021-and-acdl-2020-an-online-onsite-course/ REGISTRATION: Early Registration: by March 23 https://acdl2022.icas.cc/registration/ DEADLINES: Early Registration: by Wednesday March 23 (AoE) Oral/Poster Presentation Submission Deadline: Wednesday March 23 (AoE) Late Registration: from Thursday March 24 Accommodation Reservation at the Certosa di Pontignano: by Monday May 23 Notification of Decision for Oral/Poster Presentation: by Thursday June 23 LECTURERS: Each Lecturer will hold three/four lessons on a specific topic. https://acdl2022.icas.cc/lecturers/ * Alex Davies, DeepMind, London, UK * Panos Pardalos, University of Florida, USA * Silvio Savarese, Salesforce & Stanford University, USA, Institute for Human-Centered Artificial Intelligence * Mihaela van der Schaar, University of Cambridge, UK More Keynote Speakers to be announced soon. PAST LECTURERS: https://acdl2022.icas.cc/past-lecturers/ Ioannis Antonoglou, Google DeepMind, UK Igor Babuschkin, DeepMind - Google, London, UK Pierre Baldi, University of California Irvine, USA Roman Belavkin, Middlesex University London, UK Yoshua Bengio, Head of the Montreal Institute for Learning Algorithms (MILA) & University of Montreal, Canada Bettina Berendt, TU Berlin, Weizenbaum Institute, and KU Leuven Jacob D. 
Biamonte, Skolkovo Institute of Science and Technology, Russian Federation Chris Bishop, Microsoft, Cambridge, UK, and Laboratory Director at Microsoft Research Cambridge & University of Edinburgh Michael Bronstein, Twitter & Imperial College London, UK Sergiy Butenko, Texas A&M University, USA Silvia Chiappa, DeepMind, London, UK Giuseppe Di Fatta, University of Reading, UK Oren Etzioni, Allen Institute for AI, USA, and CEO at Allen Institute for AI Aleskerov Z. Fuad, National Research University Higher School of Economics, Russia Marco Gori, University of Siena, Italy Georg Gottlob, Computer Science Dept, University of Oxford, UK Yi-Ke Guo, Imperial College London, UK Phillip Isola, MIT, USA Michael I. Jordan, University of California, Berkeley, USA Leslie Kaelbling, MIT - Computer Science & Artificial Intelligence Lab, USA Diederik P. Kingma, Google Brain, San Francisco, CA, USA Ilias S. Kotsireas, Wilfrid Laurier University, Canada Marta Kwiatkowska, Computer Science Dept., University of Oxford, UK Risto Miikkulainen, University of Texas at Austin, USA Peter Norvig, Director of Research, Google Panos Pardalos, University of Florida, USA Alex 'Sandy' Pentland, MIT & Director of MIT's Human Dynamics Laboratory, USA José C. 
Principe, University of Florida, USA Marc'Aurelio Ranzato, Facebook AI Research Lab, New York, USA Dolores Romero Morales, Copenhagen Business School, Denmark Daniela Rus, MIT, USA, and Director of CSAIL Ruslan Salakhutdinov, Carnegie Mellon University, and AI Research at Apple, USA Guido Sanguinetti, The University of Edinburgh, UK Cristina Savin, New York University, Center for Neural Science & Center for Data Science, USA Josh Tenenbaum, MIT, USA Naftali Tishby, Hebrew University, Israel Isabel Valera, Saarland University, Germany, and Max Planck Institute for Intelligent Systems, Tübingen, Germany Mihaela van der Schaar, University of Cambridge, and Director of Cambridge Centre for AI in Medicine Joaquin Vanschoren, Eindhoven University of Technology, The Netherlands Oriol Vinyals, Google DeepMind, UK SCOPE: MSc students, PhD students, postdocs, junior/senior academics, and industry practitioners will be typical profiles of the attendants. In fact, the Advanced Course is not a summer school suited only for younger scholars. Rather, a significant proportion of seasoned investigators are regularly present among the attendees, often senior and junior faculty at their own institutions. The balanced audience that we strive to maintain in each Advanced Course greatly contributes to the development of intense cross-disciplinary debates among faculty and participants that typically address the most advanced and emerging areas of each topic. Each faculty member presents lectures and discusses with the participants for one entire day. Such long interaction, together with the small, exclusive Course size, provides the uncommon opportunity to fully explore the expertise of each faculty member, often through one-to-one mentoring. This is unparalleled and priceless. The Certosa di Pontignano provides the perfect setting for a relaxed yet intense learning atmosphere, with the stunning backdrop of the Tuscan landscapes. 
World-class wines and traditional foods will make the Advanced Course on Data Science and Machine Learning the experience of a lifetime. VENUE: The venue of ACDL 2022 will be The Certosa di Pontignano - Siena The Certosa di Pontignano Località Pontignano, 5 - 53019, Castelnuovo Berardenga (Siena) - Tuscany - Italy phone: +39-0577-1521104 fax: +39-0577-1521098 info at lacertosadipontignano.com https://www.lacertosadipontignano.com/en/index.php Contact person: Dr. Lorenzo Pasquinuzzi A few kilometers from Siena, on a hill dominating the town, stands the ancient Certosa di Pontignano, a unique place where nature, history and hospitality blend together in memorable harmony. Built in the 1300s, its medieval structure remains intact, with additions from the following centuries. The Certosa is centered on its historic cloisters and gardens. https://acdl2022.icas.cc/venue/ PAST EDITIONS: https://acdl2022.icas.cc/past-editions/ https://acdl2018.icas.xyz https://acdl2019.icas.xyz https://acdl2020.icas.xyz/ https://acdl2021.icas.cc/ REGISTRATION: https://acdl2022.icas.cc/registration/ CERTIFICATE: A certificate of successful participation in the event will be delivered indicating the number of hours of lectures. ACDL 2022 Poster: https://acdl2022.icas.cc/wp-content/uploads/sites/19/2021/12/poster-ACDL-2022-1.png Anyone interested in participating in ACDL 2022 should register as soon as possible. Similarly for accommodation at the Certosa di Pontignano (the School Venue): book your full board accommodation at the Certosa as soon as possible. All course participants must stay at the Certosa di Pontignano. See you in 3D or 2D :) in Tuscany in August! ACDL 2022 Directors. https://acdl2022.icas.cc/category/news/ https://acdl2022.icas.cc/faq/ acdl at icas.cc https://acdl2022.icas.cc https://www.facebook.com/groups/204310640474650/ https://twitter.com/TaoSciences * Apologies for multiple copies.
Please forward to anybody who might be interested. * From p.opazo at ed.ac.uk Tue Jan 18 12:39:03 2022 From: p.opazo at ed.ac.uk (OPAZO Patricio) Date: Tue, 18 Jan 2022 17:39:03 +0000 Subject: Connectionists: Postdoc position in synaptic/neuronal in-vivo two-photon imaging Message-ID: Dear all, We are looking for a Postdoctoral Research Fellow to join our lab at the UK Dementia Research Institute in The University of Edinburgh. The Opportunity: To investigate the synaptic and neuronal compensatory mechanisms that may underlie cognitive reserve in Alzheimer's disease mouse models, using state-of-the-art in-vivo two-photon imaging and optogenetics. Please find more details here: https://elxw.fa.em3.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1001/job/2827/?utm_medium=jobshare We are looking for a Postdoctoral Research Fellow with experience with in-vivo two-photon microscopy, optogenetics, stereotaxic surgeries and animal behaviour to join the Opazo lab at the UK Dementia Research Institute. Many thanks, Kind regards, Pato Dr Patricio Opazo Group Leader UK Dementia Research Institute at the University of Edinburgh Centre for Discovery Brain Sciences Web | www.ukdri.ac.uk | https://ukdri.ac.uk/team/patricio-opazo-olavarria Address | Chancellor's Building, 49 Little France Crescent, Edinburgh, EH16 4SB The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
From bogdanlapi at gmail.com Tue Jan 18 16:50:37 2022 From: bogdanlapi at gmail.com (Bogdan Ionescu) Date: Tue, 18 Jan 2022 23:50:37 +0200 Subject: Connectionists: Call-for-Participation: ImageCLEF 2022 Tasks Message-ID: [Apologies for multiple postings] ImageCLEF 2022 Multimedia Retrieval in CLEF http://www.imageclef.org/2022/ https://www.facebook.com/ImageClef/ https://twitter.com/imageclef/ *** CALL FOR PARTICIPATION *** ImageCLEF 2022 is an evaluation campaign that is being organized as part of the CLEF (Conference and Labs of the Evaluation Forum) labs. The campaign offers several research tasks that welcome participation from teams around the world. The results of the campaign appear in the working notes proceedings, published by CEUR Workshop Proceedings (CEUR-WS.org) and are presented in the CLEF conference. Selected contributions among the participants will be invited for submission to a special section "Best of CLEF'21 Labs" in the Springer Lecture Notes in Computer Science (LNCS) of CLEF'22, together with the annual lab overviews. Target communities involve (but are not limited to): - information retrieval (text, vision, audio, multimedia, social media, sensor data, etc.) - machine learning, deep learning - data mining - natural language processing - image and video processing - computer vision with special attention to the challenges of multi-modality, multi-linguality, and interactive search. *** 2022 TASKS *** *ImageCLEFcoral* (4th edition) https://www.imageclef.org/2022/coral The increasing use of structure-from-motion photogrammetry for modelling large-scale environments from action cameras attached to drones has driven the next generation of visualisation techniques that can be used in augmented and virtual reality headsets. The task addresses this particular issue for monitoring coral reef structure and composition, in support of their conservation.
Organizers: Jon Chamberlain, Adrian Clark, and Alba García Seco de Herrera (University of Essex, UK), and Antonio Campello (Wellcome Trust, UK). *ImageCLEFmedical* (4th edition) https://www.imageclef.org/2022/medical Medical images can be used in a variety of scenarios and this task will combine the most popular medical tasks of ImageCLEF and continue the idea of 2020 by mixing various applications, namely: (i) automatic image captioning with medical visual question answering, and (ii) analysis of tuberculosis patients by finding cavities where the disease possibly remains even after a first treatment. Organizers: Johannes Rückert, Christoph M. Friedrich, Louise Bloch, Raphael Brüngel, Ahmad Idrissi-Yaghir, and Henning Schäfer (University of Applied Sciences and Arts Dortmund, Germany), Asma Ben Abacha (National Library of Medicine, USA), Alba García Seco de Herrera (University of Essex, UK), Henning Müller (University of Applied Sciences Western Switzerland, Sierre, Switzerland), Serge Kozlovski, Vitali Liauchuk, and Vassili Kovalev (Institute for Informatics, Minsk, Belarus), and Yashin Dicente Cid (University of Warwick, Coventry, England, UK). *ImageCLEFaware2022* (2nd edition) https://www.imageclef.org/2022/aware The images available on social networks can be exploited in ways users are unaware of when initially shared, including situations that have serious consequences for the users' real lives. For instance, it is common practice for prospective employers to search online for information about their future employees. This task addresses the development of algorithms which raise the users' awareness about the real-life impact of online image sharing. Organizers: Adrian Popescu, Jérôme Deshayes-Chossart, and Hugo Schindler (CEA LIST, France), and Bogdan Ionescu (Politehnica University of Bucharest, Romania).
*ImageCLEFfusion2022* (new) https://www.imageclef.org/2022/fusion Despite the current advances in knowledge discovery, single learners do not produce satisfactory performance when dealing with complex data, such as class imbalance, high-dimensionality, concept drift, noisy data, multimodal data, etc. The task aims to fill this gap by exploiting novel and innovative late fusion techniques for producing a powerful learner based on the expertise of the pool of classifiers it integrates. The task requires participants to develop aggregation mechanisms for the outputs of the supplied systems and to generate ensemble predictions with significantly higher performance than the individual systems. Organizers: Liviu-Daniel Stefan, Mihai Gabriel Constantin, Mihai Dogariu, and Bogdan Ionescu (Politehnica University of Bucharest, Romania). *** IMPORTANT DATES *** (may vary depending on the task) - Task registration opens: November 15, 2021 - Run submission: May 6, 2022 - Working notes submission: May 27, 2022 - CLEF 2022 conference: September 5-8, Bologna, Italy *** REGISTRATION *** Follow the instructions here https://www.imageclef.org/2022. *** OVERALL COORDINATION *** Bogdan Ionescu, Politehnica University of Bucharest, Romania Henning Müller, HES-SO, Sierre, Switzerland Renaud Péteri, University of La Rochelle, France On behalf of the organizers, Bogdan Ionescu https://www.aimultimedialab.ro/ From angelo.ciaramella at uniparthenope.it Tue Jan 18 16:18:21 2022 From: angelo.ciaramella at uniparthenope.it (ANGELO CIARAMELLA) Date: Tue, 18 Jan 2022 21:18:21 +0000 Subject: Connectionists: CfP: Special Issue on Soft Computing Journal Message-ID: ... Apologies for cross-posting ... ------------------------------------------------------------------------------------------------------ Special issue on "Human-Centered Intelligent Systems"
Soft Computing - A Fusion of Foundations, Methodologies and Applications ------------------------------------------------------------------------------------------------------ Website of the call for papers https://www.springer.com/journal/500/updates/19724572 Aims and scope Nowadays, Artificial Intelligence has become an enabling technology that pervades many aspects of our daily life. At the forefront of this advancement are data-driven technologies like machine learning. However, as the role of Artificial Intelligence becomes more and more important, so does the need for reliable solutions to several issues that go well beyond technological aspects: How can we make automated agents justify their actions, and how can we make them accountable for these actions? What will be the social acceptance of intelligent systems, possibly embodied (e.g. in robots), in their interaction with people? How will automated agents be made aware of the whole spectrum of human nonverbal communication, so as to take it into account and avoid missing crucial messages? Is it possible to avoid amplifying human biases and ensure fairness in decisions that have been taken automatically? How can we enable collaborative intelligence amongst humans and machines? Purely data-driven technologies are showing their limits precisely in these areas. There is a growing need for methods that, in tight interaction with data-driven technologies, allow different degrees of control over the several facets of automated knowledge processing. The diversity and complementarity of Soft Computing techniques play a crucial role in addressing these issues.
This Special Issue aims to collect the most recent advancements in research on Human-Centered Intelligent Systems, with special focus on Soft Computing methods, techniques and applications on the following and related topics: * Trustworthiness, explainability, accountability and social acceptance of intelligent systems; * Human-computer interaction to foster collaboration with intelligent systems; * Affective computing and sentiment analysis to promote nonverbal communication in intelligent systems; * Fighting algorithmic bias and ensuring fairness in intelligent systems; * Real-world applications in health-care, justice, education, digital marketing, biology, hard and natural sciences, autonomous vehicles, etc. Proposals related to further topics are welcome as long as they fall within the general scope of this special issue, which is computational intelligence methods in human-centered computing. Submission guidelines and review process Papers must be submitted according to the standard procedure of Soft Computing, selecting the S.I. "Human-Centered Intelligent Systems". All submitted papers should report original work and make a meaningful contribution to the state of the art. Each submitted paper will undergo a first screening by the Guest Editors. If the submission falls within the scope of the SI, it will undergo a regular revision process. Acceptance criteria are the same as for regular issues of the journal. Important dates * Submissions open: December 1, 2021 * Paper submission deadline: February 1, 2022 * Final decision: May 1, 2022 * Tentative period for final publication: Fall 2022 Authors guidelines and journal information can be found at https://www.springer.com/journal/500 Guest Editors - Angelo Ciaramella - Università degli Studi di Napoli Parthenope, Italy - Corrado Mencar - Università degli Studi di Bari Aldo Moro, Italy - Susana Montes - Universidad de Oviedo, Spain - Stefano Rovetta - Università
degli Studi di Genova, Italy For any information, please contact Angelo Ciaramella ------------------------------------------------------- Prof. Angelo Ciaramella, Ph.D - School of Science, Engineering and Health, Department of Science and Technology, University of Naples Parthenope - Room 431, C4 Island, Centro Direzionale di Napoli - I-80143, Naples, Italy - tel.: 0815476674 - e-mail: angelo.ciaramella at uniparthenope.it From bogdanlapi at gmail.com Tue Jan 18 18:25:55 2022 From: bogdanlapi at gmail.com (Bogdan Ionescu) Date: Wed, 19 Jan 2022 01:25:55 +0200 Subject: Connectionists: Call-for-papers: 4th International Workshop on Research & Innovation for Secure Societies (RISS 2022) Message-ID: [Apologies for multiple postings] RISS 2022 4th International Workshop on Research & Innovation for Secure Societies @ 14th International Conference on Communications - COMM 2022 Bucharest, Romania, 16-18 June 2022 https://www.comms.ro/RISS2022/ https://www.comms.ro/paper-submission.html *** Call for papers *** * Paper submission due: March 14, 2022 * Acceptance notification: April 29, 2022 * Camera-ready papers due: May 14, 2022 * Workshop @ COMM 2022: June 16-18 (TBD), 2022 The rapid expansion of the urban population and recent world events, together with an alarming increase in the number of threats to infrastructure safety, have mobilized Authorities to redesign societal security concepts. Law Enforcement Authorities are consistently focusing their actions on preventing crimes and protecting people, property, and critical infrastructures.
With the accelerated advances of communications and storage technologies, access to critical information acquired from various sensors and sources, e.g., land cameras, satellite data, drones, personal devices, has been significantly eased. Manipulation and processing of such a high amount of diverse data remains a steady challenge, and most of the existing solutions involve the use of human resources. However, threats are now hybrid and at a very large scale, requiring very different security solutions which can make use of interdisciplinary approaches. In this context, computer-assisted or automated technologies are now becoming more and more attractive to substitute expensive human resources in decision systems. RISS 2022 focuses on discussing solutions provided by Intelligent Systems to these challenges. It aims to bring together researchers from academia and industry, end-users, law-enforcing agencies, and citizen groups to share experiences and explore multi- and inter-disciplinary areas where additional research and development are needed, identify possible collaboration and consider the societal impact of such technologies.
Authors are invited to submit original, previously unpublished work, reporting on novel and significant research contributions, on-going research projects, experimental results and recent developments related to Intelligent Techniques for Secure Societies on the following topics (but not limited to): - Computer vision (e.g., crowd monitoring, scene understanding) - Multimedia information retrieval (e.g., indexing, searching, and browsing, Big Data) - Information fusion from various sensors (e.g., visual, infrared, depth, sound) - Machine learning (e.g., very large-scale deep learning, neural networks) - Embedded systems, IoT, and low energy footprint computing - Social media, cybercrimes, and fake news - Surveillance systems and interactive solutions - Forensics and crime scene reconstruction - Unmanned aerial, terrestrial, underwater vehicles, and robots - Biometric systems and algorithms (e.g., body, fingerprint, gesture, voice recognition) - Case studies, practical systems, and testbeds - Ethics, data protection, privacy protection, civil liberties, and social exclusion issues - Benchmarking, evaluation, and data sets *** Submission guidelines *** Prospective authors are invited to submit original, previously unpublished technical papers for presentation in the conference sections and for publication in the COMM2022 Conference Proceedings. Since 2010, the papers presented at the conference have been published in IEEE Xplore and the ISI database. IEEE reserves the right to exclude a paper from distribution after the conference, including the IEEE Xplore® Digital Library, if the paper is not presented by the author at the conference. To submit your contribution, please adhere strictly to the conference guidelines available here https://www.comms.ro/paper-submission.html.
*** Organizing committee *** Bogdan Ionescu, Politehnica University of Bucharest, Romania Cristian Molder, Ferdinand I Military Technical Academy, Romania Dragos Sburlan, Ovidius University of Constanța, Romania Razvan Roman, Protection and Guard Service, Romania On behalf of the organizers, Bogdan Ionescu https://www.aimultimedialab.ro/ From liufengchaos at gmail.com Wed Jan 19 02:07:19 2022 From: liufengchaos at gmail.com (Feng Liu) Date: Wed, 19 Jan 2022 02:07:19 -0500 Subject: Connectionists: CfP: Graph Learning for Brain Imaging in Frontiers of Neuroscience (deadline in 10 days) Message-ID: Dear Colleagues, We are writing to let you know that we are organizing a special issue "Graph Learning for Brain Imaging" in Frontiers in Neuroscience (impact factor 4.7). We believe this is a timely special issue to showcase the new developments using graph representation and deep learning on graph-structured data to address important brain imaging and computational neuroscience problems. *Link*: https://www.frontiersin.org/research-topics/23683/graph-learning-for-brain-imaging *Keywords*: Brain Networks, Graph Neural Networks, Brain Imaging, Graph Embedding, Multi-Modal Imaging. *Topics*: We are looking for original, high-quality submissions on innovative research and developments in the analysis of brain imaging using graph learning techniques. Topics of interest include (but are not limited to): - Graph neural networks (GNN) for network neuroscience applications - Graph neural network for brain mapping and data integration - Graph convolution network (GCN) for brain disorder classification - (Dynamic) Functional brain networks - Brain networks development trajectories - Graphical model for brain imaging data analysis - Spatial-temporal brain network modeling - Graph embedding and graph representation learning - Information fusion for brain networks from multiple modalities or scales (fMRI, M/EEG, DTI, PET, genetics) - Generative graph models in brain imaging -
Brain network inference: scalable, online, and from non-linear relationships - Machine learning over graphs: kernel-based techniques, clustering methods, scalable algorithms for brain imaging - Few-shot learning for learning from limited brain data - Graph federated learning for brain imaging *Important Dates*: Full paper deadline: 31-Jan-2022 (submission of an abstract is not required but highly preferred) *Background:* Unprecedented collections of large-scale brain imaging data, such as MRI, PET, fMRI, M/EEG, DTI, etc., provide a unique opportunity to deepen our understanding of the brain's working mechanisms, improve prognostic predictions for mental disorders, and tailor personalized treatment plans for brain diseases. Recent advances in machine learning and large-scale brain imaging data collection, storage, and sharing lead to a series of novel interdisciplinary approaches among the fields of computational neuroscience, signal processing, deep learning, brain imaging, cognitive science, and computational psychiatry, among which graph learning provides a valuable means to address important questions in brain imaging. Graph learning refers to designing effective machine learning and deep learning methods that extract important information from graphs or exploit the graph structure in the data to guide knowledge discovery. Given the complex data structure in different imaging modalities as well as the networked organizational structure of the human brain, novel learning methods based on graphs inferred from imaging data, graph regularizations for the data, and graph embedding of the recorded data have shown great promise in modeling the interactions of multiple brain regions, information fusion among networks derived from different brain imaging modalities, latent space modeling of the high-dimensional brain networks, and quantifying topological neurobiomarkers.
The goal of this Research Topic is to synergize the state-of-the-art discoveries in terms of new computational brain imaging models and insights into brain mechanisms through the lens of brain networks and graph learning. --On Behalf of all the Guest Editors Feng Liu, Stevens Institute of Technology, Hoboken, NJ, USA Yu Zhang, Lehigh University, Bethlehem, PA, USA Jordi Solé-Casals, Universitat de Vic - Universitat Central de Catalunya, Barcelona, Spain Islem Rekik, Istanbul Technical University, Istanbul, Turkey Yehia Massoud, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia Thank you From axel.hutt at inria.fr Wed Jan 19 03:23:18 2022 From: axel.hutt at inria.fr (Axel Hutt) Date: Wed, 19 Jan 2022 09:23:18 +0100 (CET) Subject: Connectionists: Neuromodulation by Digital and Analogue Drugs in Consciousness Research Message-ID: <745284563.33641469.1642580598258.JavaMail.zimbra@inria.fr> -------------------------------------------------------------------------- Call for Contributions to Frontiers Research Topic on "Neuromodulation by Digital and Analogue Drugs in Consciousness Research" (https://www.frontiersin.org/research-topics/29684/neuromodulation-by-digital-and-analogue-drugs-in-consciousness-research) ---------------------- Frontiers Research Topics (RT) are a specific kind of Open Access article collection with the option to choose the journal of publication among a set of pre-selected journals.
In our RT, you may choose between Frontiers in Psychology (https://www.frontiersin.org/journals/psychology/sections/consciousness-research), Frontiers in Medicine (https://www.frontiersin.org/journals/medicine/sections/intensive-care-medicine-and-anesthesiology), or Frontiers in Systems Neuroscience (https://www.frontiersin.org/journals/systems-neuroscience) Guest Editors are Anthony Hudetz (U Michigan), Darren Hight (Inselspital Bern) and Axel Hutt (INRIA Nancy Grand Est) Description: Neuromodulation attracts increasing attention in today's pre-clinical and clinical practice. It includes diverse medical procedures such as neurostimulation, general anesthesia, or drug treatments for mental disorders. These procedures are known to affect and even control the patient's state of consciousness. Stimulations generated externally by electronic devices, such as transcranial and intracranial electric stimulations, are known as Digital Drugs. Conversely, pharmacological and thus non-digital drugs are important elements in the medical treatment of, e.g., mental disorders, and can be termed Analogue Drugs. Interestingly, in certain cases, their neural actions resemble Digital Drug actions, although their origin and neurophysiological action is different. Both the similarities and the differences between Analogue (pharmacological) Drugs and Digital Drugs make their joint study particularly instructive. Well-known pairs of Digital and Analogue Drugs are visual flicker and psychedelic drugs (which both induce hallucinations) and isochronic auditory beats and anesthetics (with both inducing sedation). The RT is the follow-up issue of a previous RT on "General Anaesthesia: From theory to experiments" (https://www.frontiersin.org/research-topics/2345/general-anesthesia-from-theory-to-experiments). Submission deadline is March 24. Further information on - Frontiers Research Topics can be found under https://www.frontiersin.org/about/research-topics - the different article types can be found under "For authors"
on the corresponding journal web sites - the Article Processing Charges and the corresponding fee waiver program can be found under https://www.frontiersin.org/about/publishing-fees -- Axel Hutt Directeur de Recherche Equipe MIMESIS INRIA Nancy Grand Est Bâtiment IHU 1, Place de l'Hôpital 67000 Strasbourg, France https://mimesis.inria.fr/speaker/axel-hutt/ From yokoya at k.u-tokyo.ac.jp Wed Jan 19 08:37:56 2022 From: yokoya at k.u-tokyo.ac.jp (Naoto Yokoya) Date: Wed, 19 Jan 2022 22:37:56 +0900 Subject: Connectionists: [Conferences] EARTHVISION 2022 @ CVPR - Call for Papers Message-ID: Apologies for cross-posting ******************************* CALL FOR PARTICIPANTS & PAPERS *EarthVision 2022* *Large Scale Computer Vision for Remote Sensing Imagery IEEE GRSS Workshop,* *in conjunction with CVPR 2022, 19 June 2022, New Orleans, Louisiana, hybrid/virtual* Website: https://www.grss-ieee.org/events/earthvision-2022/ *AIMS AND SCOPE* Earth Observation (EO)/Remote Sensing is an ever-growing field of investigation where computer vision, machine learning, and signal/image processing meet. The general objective of the domain is to provide large-scale, homogeneous information about processes occurring at the surface of the Earth by exploiting data collected by airborne and spaceborne sensors. Earth Observation covers a broad range of tasks, ranging from detection to registration, data mining, multi-sensor, multi-resolution, multi-temporal, and multi-modality fusion, and regression, to name just a few. It is motivated by numerous applications such as location-based services, online mapping services, large-scale surveillance, 3D urban modeling, navigation systems, natural hazard forecast and response, climate change monitoring, virtual habitat modeling, etc. The sheer amount of data calls for highly automated scene interpretation workflows.
Earth Observation, and in particular the analysis of spaceborne data, directly connects to 34 indicators out of 40 (29 targets and 11 goals) of the Sustainable Development Goals defined by the United Nations. The aim of EarthVision, to advance the state of the art in machine learning-based analysis of remote sensing data, is thus of high relevance. It also connects to other immediate societal challenges such as monitoring of forest fires and other natural hazards, urban growth, deforestation, and climate change. This workshop, held for its sixth edition at CVPR 2022, aims at fostering collaboration between the computer vision and EO communities to, on the one hand, boost automated interpretation of EO data, and, on the other hand, raise awareness inside the computer vision and machine learning communities of this highly challenging and quickly evolving field of research with an extensive impact on human society, economy, industry, and the environment. Submissions are invited from all areas of computer vision and image analysis relevant for, or applied to, environmental remote sensing.
Topics of interest include, but are not limited to: - Super-resolution in the spectral and spatial domain - Hyperspectral and multispectral image processing - 3D reconstruction from aerial optical and LiDAR acquisitions - Feature extraction and learning from spatio-temporal data - Semantic classification of UAV / aerial and satellite images and videos - Deep learning tailored for large-scale Earth observation - Domain adaptation, concept drift, and the detection of out-of-distribution data - Self-, weakly, and unsupervised approaches for learning with spatial data - Human-in-the-loop and active learning - Multi-resolution, multi-temporal, multi-sensor, multi-modal processing - Fusion of machine learning and physical models - Explainable and interpretable machine learning in Earth Observation applications - Applications for climate change, sustainable development goals, and geoscience - Public benchmark datasets: Training data standards, testing & evaluation metrics, as well as open-source research and development. *IMPORTANT DATES* Full paper submission: *March 9, 2022* Notification of acceptance: April 1, 2022 Camera-ready paper: April 8, 2022 Workshop (full day): June 19, 2022 *SUBMISSION GUIDELINES* A complete paper should be submitted using the EarthVision templates provided on the workshop website. The paper length must not exceed 8 pages (excluding references) and formatting follows CVPR 2022 instructions. All manuscripts will be subject to a double-blind review process, i.e. authors must not identify themselves on the submitted papers. Papers are to be submitted using the dedicated submission platform on the Workshop website (https://www.grss-ieee.org/earthvision2022/submission.html). By submitting a manuscript, the authors guarantee that it has not been previously published or accepted for publication in a substantially similar form. CVPR rules regarding plagiarism, double submission, etc. apply. 
*WORKSHOP ORGANIZERS* - Ronny Hänsch, German Aerospace Center, Germany - Devis Tuia, EPFL, Switzerland - Jan Dirk Wegner, University of Zurich & ETH Zurich, Switzerland - Bertrand Le Saux, ESA/ESRIN, Italy - Naoto Yokoya, Uni. of Tokyo & RIKEN, Japan - Nathan Jacobs, Uni. of Kentucky, USA - Fabio Pacifici, Maxar, USA - Mariko Burgin, NASA JPL, USA - Loïc Landrieu, IGN, France - Charlotte Pelletier, UBS Vannes, France *CHALLENGE* EarthVision 2022 will again feature interesting challenges addressing modern problems of Remote Sensing and Earth Observation. Stay tuned for details! *SPONSORING* The event is co-organized by the Image Analysis and Data Fusion Technical Committee of the IEEE-GRSS, and it is sponsored by SpaceNet. From muftimahmud at gmail.com Wed Jan 19 19:41:06 2022 From: muftimahmud at gmail.com (Mufti Mahmud) Date: Thu, 20 Jan 2022 00:41:06 +0000 Subject: Connectionists: [AII 2022] Call for Special Sessions, Tutorials and Workshops. Message-ID: Dear Colleague, The Call for Papers and for Special Session, Tutorial and Workshop Proposals for the second edition of the International Conference of Applied Intelligence and Informatics 2022 (AII 2022) is now open! The AII 2022 conference will be held in Reggio Calabria, a beautiful seaside city located at the southernmost tip of Italy, from 1 to 3 September 2022.
- Submit your Special Session / Tutorial / Workshop proposal by email at info at aii2022.org (deadline 15th February 2022) - Submit your paper (full/short) using this link: https://proconf.org/conferences/aii-2022/app/login.php (deadline 1st May 2022) For further information, www.aii2022.org or contact us: info at aii2022.org **** IMPORTANT DATES **** *15th Feb 2022:* Special Sessions, Tutorial and Workshop Proposal Deadline *1st May 2022:* Paper Submission Deadline for both Full and Short papers *1st June 2022:* Paper Acceptance Notification Due *1st July 2022:* Final Paper Submission & Early Registration Deadline *1st - 3rd September 2022:* AII 2022 in Reggio Calabria, Italy **** TOPICS & AREAS **** Relevant topics include but are not limited to: *Track 1:* Emerging Applications of AI and Informatics *Track 2:* Application of AI and Informatics in Healthcare *Track 3:* Application of AI and Informatics in Pattern Recognition *Track 4:* Application of AI and Informatics in Network, Security, and Analytics Details: https://aii2022.org/tracks_topics.php **** PUBLICATIONS **** - For the AII 2022 edition, we have applied to *Springer* for approval of the conference proceedings. - *Top AII conference papers will be invited to extend and submit* for a fast-track review and publication *at a number of ISI and Scopus indexed journals* (more information will be made available later). - Workshop/special session organisers and AII conference session chairs may consider and can be invited to prepare a *book proposal* of special topics for possible book publication in the *Springer-Nature Applied Intelligence & Informatics book series* **** SPECIAL SESSION, TUTORIAL & WORKSHOP PROPOSALS **** - *Proposal Submissions:* AII 2022 will be hosting a series of special sessions, tutorials and workshops featuring topics relevant to the intelligence and informatics community on the latest research and industry applications.
Special Session/Tutorial/Workshop proposals should be submitted via email to info at aii2022.org.
- *Proposal Guidelines:* Each proposal should be concise (1-3 pages) and should include: 1) special session/tutorial/workshop title; 2) length in hours; 3) names, main contact, and a short bio (150 words) of the organisers; 4) a brief description of the scope and timeline; 5) prior history (if any); 6) potential program committee members and invited speakers; 7) any other relevant information.
- *Papers and Presentations:* A special session/tutorial is expected to be completed within 1 to 2 hours (without break), while a workshop should last between 2 and 4 hours (including breaks). Tutorials include a mix of invited presentations from academic or industrial experts, while special sessions/workshops may consist of invited speakers and may also include regular, short and invited papers. Paper submissions to workshops/special sessions follow the same format as the conference, and the review process will be coordinated centrally by the technical programme committee of the conference.
- *Publications:* Accepted workshop and special session full papers will be published in the AII 2022 proceedings (subject to the review process).

**** PAPER SUBMISSION ****
- *Full papers:* submissions should be between 12 and 15 pages long, including figures and references. Additional pages will be charged.
- *Short papers:* high-quality papers of fewer than 12 pages are also welcome and will be accepted as short papers based on their originality, significance of the contribution to the field, technical merit, and presentation quality.
- *Journal Opportunities:* Top AII conference papers will be invited for extension and submission to a fast-track review and publication at a number of ISI- and Scopus-indexed journals.
- *Paper Submission Site:* https://proconf.org/conferences/aii-2022/app/login.php
- *Paper Format:* All paper submissions MUST follow the Springer LNCS Proceedings format: https://www.springer.com/us/computer-science/lncs/conference-proceedings-guidelines
- *Microsoft Word template:* https://aii2022.org/download/Author-template-Springer-LNCS.docx
- *LaTeX template (Overleaf):* https://www.overleaf.com/latex/templates/springer-lecture-notes-in-computer-science/kzwwpvhwnvfj
- *Questions and Suggestions:* Concerning paper submissions, conference registration, suggestions for the program, and other inquiries, please feel free to contact the AII 2022 chairs.

**** ORGANIZING COMMITTEE ****
*Honorary Chairs*
Hojjat Adeli, Ohio State University, USA
Amir Hussain, Edinburgh Napier University, UK
Nikola Kasabov, Auckland University of Technology, New Zealand
Francesco Carlo Morabito, University Mediterranea of Reggio Calabria, Italy
*General Chairs*
Cosimo Ieracitano, University Mediterranea of Reggio Calabria, Italy
Mufti Mahmud, Nottingham Trent University, UK
*Advisors*
Anirban Bandyopadhyay, National Institute for Materials Science, Japan
David Brown, Nottingham Trent University, UK
Marcos Faundez-Zanuy, Escola Superior Politècnica Tecnocampus, Spain
Hamido Fujita, Iwate Prefectural University, Japan
Khan Iftekharuddin, Old Dominion University, USA
Shariful Islam, Deakin University, Australia
Kanad Ray, Amity University, India
Roberto Tagliaferri, University of Salerno, Italy
Marley Vellasco, Pontifícia Universidade Católica do Rio de Janeiro, Brazil
Yu-Dong Zhang (Eugene), University of Leicester, UK
Ning Zhong, Maebashi Institute of Technology, Japan
*Program Chairs*
Anna Esposito, University of Salerno, Italy
M Shamim Kaiser, Jahangirnagar University, Bangladesh
Nadia Mammone, University Mediterranea of Reggio Calabria, Italy
Ning Zhong, Maebashi Institute of Technology, Japan
*Track Chairs*
Tingwen Huang, Texas A&M University, Qatar
Joarder Kamruzzaman, Federation University, Australia
Gianluca Lax, University Mediterranea of Reggio Calabria, Italy
M Murugappan, Kuwait College of Science & Technology, Kuwait
Nelishia Pillay, University of Pretoria, South Africa
Giuseppe Maria Sarné, University Bicocca of Milan, Italy
Domenico Ursino, Università Politecnica delle Marche, Italy
Salvatore Vitabile, University of Palermo, Italy
*Special Session, Tutorial and Workshop Chairs*
Tianhua Chen, University of Huddersfield, UK
Massimiliano Ferrara, University Mediterranea of Reggio Calabria, Italy
Giancarlo Fortino, University of Calabria, Italy
Alessio Micheli, University of Pisa, Italy
Massimo Panella, University Sapienza of Rome, Italy
M Arifur Rahman, Nottingham Trent University, UK
Domenico Rosaci, University Mediterranea of Reggio Calabria, Italy
Marta Savino, Aubay, Italy
Simone Scardapane, University Sapienza of Rome, Italy
Noushath Shaffi, College of Applied Sciences, Oman
*Local Organizing Chairs*
Cosimo Ieracitano, University Mediterranea of Reggio Calabria, Italy
Nadia Mammone, University Mediterranea of Reggio Calabria, Italy
Francesco Carlo Morabito, University Mediterranea of Reggio Calabria, Italy
Mario Versaci, University Mediterranea of Reggio Calabria, Italy
*Publicity Chairs*
Abzetdin Adamov, ADA University, Azerbaijan
Manjunath Aradhya, JSS Science and Technology University, India
Nilanjan Dey, JSS University, India
Ramani Kannan, Universiti Teknologi PETRONAS, Malaysia
Juan P. Amezquita-Sanchez, Universidad Autonoma de Queretaro, Mexico
KC Santosh, University of South Dakota, USA
Stefano Squartini, Università Politecnica delle Marche, Italy
*Conference Secretaries*
Shamim Al Mamun, Jahangirnagar University, Bangladesh
Michele Lo Giudice, University Mediterranea of Reggio Calabria, Italy
*Webmaster*
Md Asif Ur Rahman, PropertyPro Plus, Australia
From yaser.amd at gmail.com Thu Jan 20 01:09:13 2022
From: yaser.amd at gmail.com (Yaser Jararweh)
Date: Thu, 20 Jan 2022 01:09:13 -0500
Subject: Connectionists: Special Issue on Emerging Information Processing and Management Paradigms: Edge Intelligence, Federated Learning, and Blockchain, IP&M Journal Elsevier
Message-ID:

*Special Issue on Emerging Information Processing and Management Paradigms: Edge Intelligence, Federated Learning, and Blockchain*
*A Special Issue for Information Processing & Management (IP&M), Elsevier*

*Note:* This special issue is a Thematic Track at the IP&MC2022 conference. The authors of accepted papers will be obligated to participate in IP&MC 2022 and present their paper to the community to receive feedback. *For more information about IP&MC2022, please visit* https://www.elsevier.com/events/conferences/information-processing-and-management-conference
*IP&MC2022 will take place during 20-23 October 2022 in Xiamen, China.*

*Call for Papers*

*Aims and Scope:*
Our ever-increasing ability to allocate, process, and extract valuable information at the network's edge has enabled many modern applications, such as autonomous vehicles, network softwarization, smart city applications, connected health systems, and industrial IoT. However, such applications require low communication latency, real-time response, and trustworthy models. Decentralizing data analytics beyond the traditional cloud silos is therefore critical, with several requirements to be accommodated. The recently emerging edge/fog capacities, as a supporting and complementary infrastructure for centralized cloud systems, provide a golden opportunity to harness decentralized machine intelligence to make decisions in the right place at the right time. Moreover, the emergence of distributed machine learning techniques, with specific applications of Federated Learning, improves user data privacy and trust throughout the complete system being applied.
A futuristic paradigm known as Edge Intelligence (EI) is taking shape, in which AI/ML services occur close to where data is captured. EI is expected to improve the agility of big data services and leverage resources located at the edge of the network and along the continuum between the cloud and the IoT. Nevertheless, addressing the deployment complexity, security, privacy, and trust of edge resources is of paramount importance. Achieving this vision also requires synergizing broader advances in communication systems, including big data, distributed machine learning, Blockchain technology, and privacy-preserving federated learning. The main objective of this track is to solicit papers at the intersection of these technologies. This track will provide a venue for researchers, scientists, industry experts, and practitioners to share novel research results on recent advances in Edge Intelligence, Federated Learning, and Blockchain architectures and applications. High-quality, original, and unpublished constructive, empirical, experimental, and theoretical contributions in EI are invited; authors are encouraged to submit their timely findings.
*Recommended Topics:*
Topics to be discussed in this track include (but are not limited to) architectures and applications in the following:
- Distributed and federated machine learning in edge computing
- Theory and applications of EI
- Middleware and runtime systems for EI
- Programming models compliant with EI
- Scheduling and resource management for EI
- Data allocation and application placement strategies for EI
- Osmotic computing with the edge continuum, microservices and MicroData architectures
- ML/AI models and algorithms for load balancing
- Theory and applications of federated learning
- Federated learning and privacy-preserving large-scale data analytics
- MLOps and ML pipelines at edge computing
- Transfer learning, interactive learning, and reinforcement learning for edge computing
- Modeling and simulation of EI and edge-to-cloud environments
- Security, privacy, trust, and provenance issues in edge computing
- Distributed consensus and blockchains at edge architecture
- Blockchain networking for edge computing architecture
- Blockchain technology for edge computing security
- Blockchain-based access controls for the edge-to-cloud continuum
- Blockchain-enabled solutions for cloud and edge/fog IoT systems
- Forensic data analytics compliant with EI

*Important Dates*
Thematic track manuscript submission due date (authors are welcome to submit early, as reviews will be rolling): June 15, 2022
Author notification: July 31, 2022
IP&MC conference presentation and feedback: October 20-23, 2022
Post-conference revision due date (authors are welcome to submit earlier): January 1, 2023

*Track Editors:*
- Yaser Jararweh, Duquesne University, USA (yaser.amd at gmail.com) (Managing Editor)
- Feras Awaysheh, University of Tartu, Estonia (feras.awaysheh at ut.ee)
- Moayad Aloqaily, MBZUAI, UAE (maloqaily at ieee.org)
- Nadra Guizani, University of Texas Arlington, USA (nadra.guizani at uta.edu)
- Yuli Yang, University of Lincoln, United Kingdom (yyang at lincoln.ac.uk)

*Submission Guidelines*
Submit your manuscript to the Special Issue category (*VSI: IPMC2022 EMERGING*) through the online submission system of Information Processing & Management: https://www.editorialmanager.com/ipm/
Authors should prepare their submission following the Guide for Authors of the IP&M journal (https://www.elsevier.com/journals/information-processing-and-management/0306-4573/guide-for-authors). All papers will be peer-reviewed following the IP&MC2022 reviewing procedures. The authors of accepted papers will be obligated to participate in IP&MC 2022 and present their paper to the community to receive feedback. The accepted papers will be invited for revision after receiving feedback at the IP&MC 2022 conference. The submissions will be given premium handling at IP&M following its peer-review procedure and, if accepted, published in IP&M as full journal articles, with an option for a short conference version at IP&MC2022. Please see this infographic for the manuscript flow: https://www.elsevier.com/__data/assets/pdf_file/0003/1211934/IPMC2022Timeline10Oct2022.pdf
For more information about IP&MC2022, please visit: https://www.elsevier.com/events/conferences/information-processing-and-management-conference

From antona at alleninstitute.org Wed Jan 19 18:36:06 2022
From: antona at alleninstitute.org (Anton Arkhipov)
Date: Wed, 19 Jan 2022 23:36:06 +0000
Subject: Connectionists: Allen Institute Modeling Software Workshop
Message-ID: <51018C02-9303-471D-A7E9-0D2B43DA5E74@alleninstitute.org>

Join us for the inaugural Allen Institute Modeling Software Workshop in beautiful Seattle, on July 25-26, 2022!
https://alleninstitute.org/what-we-do/brain-science/events-training/2022-modeling-workshop/

Sponsored by the NIH BRAIN program, this workshop is organized by the Allen Institute and our collaborators at the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign. This 2-day in-person* workshop will consist of interactive seminars and hands-on computational work. It will focus on teaching the skills for building and simulating complex and heterogeneous network models grounded in real biological data. The tools covered by the workshop include:
* The SONATA file format for multiscale neuronal network models and simulation output, supporting standardized and computationally efficient storage and exchange of models.
* The Brain Modeling ToolKit (BMTK), a Python-based software package for building and simulating large-scale neural network models at multiple levels of resolution.
* Visual Neuronal Dynamics (VND), a program for displaying, animating, and analyzing neural network models using 3D graphics and built-in scripting.

Workshop Topics:
* Building heterogeneous neural networks at different levels of resolution
* Simulating networks of biophysically detailed, compartmental neuronal models
* Simulating networks of point-neuron models
* Providing realistic spiking inputs to the neural networks
* Simulating perturbations
* Simulating extracellular electric fields
* Using and sharing models in the SONATA format
* Visualizing network models' structure and dynamics in 3D

We are eager to host a diverse audience, and we encourage trainees, scientists, and PIs to apply. Prior experience in modeling is not required or expected. The Allen Institute strives to make training opportunities available on a fair and equitable basis. Some travel funding is available for participants with financial need. Please indicate on your application whether such support is needed. Applications will be evaluated on a need-blind basis.
There is no fee to participate in the workshop. Applications are due on March 15. All applicants will be notified of the status of their application by May 20.

*If conducting the workshop in person becomes impossible due to COVID-related safety concerns, it will be replaced by an online workshop on the same dates.

See more information and apply here: https://alleninstitute.org/what-we-do/brain-science/events-training/2022-modeling-workshop/

Anton Arkhipov
Associate Investigator
T: 206.548.8414
E: antona at alleninstitute.org
alleninstitute.org
brain-map.org

From michal.ptaszynski at gmail.com Wed Jan 19 22:54:00 2022
From: michal.ptaszynski at gmail.com (Ptaszynski Michal)
Date: Thu, 20 Jan 2022 12:54:00 +0900
Subject: Connectionists: [CfP] Call for Papers: Information Processing & Management (IP&M) (IF: 6.222) Special Issue on Science Behind Neural Language Models
Message-ID:

Dear Colleagues,

** Apologies for cross-posting **

This is Michal Ptaszynski from Kitami Institute of Technology, Japan. We are accepting papers for the Information Processing & Management (IP&M) (IF: 6.222) journal Special Issue on Science Behind Neural Language Models. This special issue is also a Thematic Track at the Information Processing & Management Conference 2022 (IP&MC2022), meaning that at least one author of an accepted manuscript will need to attend the IP&MC2022 conference. For more information about IP&MC2022, please visit: https://www.elsevier.com/events/conferences/information-processing-and-management-conference
The deadline for manuscript submission is June 15, 2022, but your paper will be reviewed immediately after submission and will be published as soon as it is accepted.
We hope you will consider submitting your paper.
https://www.elsevier.com/events/conferences/information-processing-and-management-conference/author-submission/science-behind-neural-language-models
Info regarding submission: https://www.elsevier.com/events/conferences/information-processing-and-management-conference/author-submission

Best regards,
Michal PTASZYNSKI, Ph.D., Associate Professor
Department of Computer Science
Kitami Institute of Technology, 165 Koen-cho, Kitami, 090-8507, Japan
TEL/FAX: +81-157-26-9327
michal at mail.kitami-it.ac.jp

============================================
Information Processing & Management (IP&M) (IF: 6.222) Special Issue on "Science Behind Neural Language Models"
& Information Processing & Management Conference 2022 (IP&MC2022) Thematic Track on "Science Behind Neural Language Models"

Motivation
The last several years have seen the explosive popularity of neural language models, especially large pre-trained language models based on the transformer architecture. The fields of Natural Language Processing (NLP) and Computational Linguistics (CL) have experienced a shift from simple language models such as Bag-of-Words, and word representations like word2vec or GloVe, to more contextually aware language models such as ELMo and, more recently, BERT or GPT, including their improvements and derivatives. The generally high performance obtained by BERT-based models in various tasks even convinced Google to apply BERT as a default backbone in its search engine query expansion module, thus making BERT-based models mainstream and a strong baseline in NLP/CL research. The popularity of large pretrained language models has also enabled major growth of companies providing freely available repositories of such models and, more recently, the founding of Stanford University's Center for Research on Foundation Models (CRFM).
However, despite the overwhelming popularity and undeniable performance of large pretrained language models, or "foundation models", the specific inner workings of those models have been notoriously difficult to analyze, and the causes of the (usually unexpected and unreasonable) errors they make have been difficult to untangle and mitigate. As neural language models keep gaining in popularity while expanding into the area of multimodality by incorporating visual and speech information, it has become all the more important to thoroughly analyze, fully explain, and understand the internal mechanisms of neural language models. In other words, the science behind neural language models needs to be developed.

Aims and Scope
With the above background in mind, we propose the following Information Processing & Management Conference 2022 (IP&MC2022) Thematic Track and Information Processing & Management Journal Special Issue on Science Behind Neural Language Models. The TT/SI will focus on topics deepening our knowledge of how neural language models work. Therefore, instead of taking up basic topics from the fields of CL and NLP, such as improvement of part-of-speech tagging or standard sentiment analysis, regardless of whether they apply neural language models in practice, we will focus on promoting research that specifically aims at analyzing and understanding the "bells and whistles" of neural language models, for which a generally accepted science has not yet been established.

Target Audience
The TT/SI is aimed at scientists, researchers, scholars, and students performing research on the analysis of pretrained language models, with a specific focus on explainable approaches to language models, analysis of the errors such models make, and methods for debiasing, detoxification, and other improvements of pretrained language models.
The TT/SI will not accept research on basic NLP/CL topics for which the field is well established, such as improvement of part-of-speech tagging, sentiment analysis, etc., even if they apply neural language models, unless they directly contribute to furthering the understanding and explanation of the inner workings of large-scale pretrained language models.

List of Topics
The Thematic Track / Special Issue invites papers on topics including, but not limited to, the following:
- Neural language model architectures
- Improvement of the neural language model generation process
- Methods for fine-tuning and optimization of neural language models
- Debiasing neural language models
- Detoxification of neural language models
- Error analysis and probing of neural language models
- Explainable methods for neural language models
- Neural language models and linguistic phenomena
- Lottery Ticket Hypothesis for neural language models
- Multimodality in neural language models
- Generative neural language models
- Inferential neural language models
- Cross-lingual or multilingual neural language models
- Compression of neural language models
- Domain-specific neural language models
- Expansion of information embedded in neural language models

Important Dates:
Thematic track manuscript submission due date (authors are welcome to submit early, as reviews will be rolling): June 15, 2022
Author notification: July 31, 2022
IP&MC conference presentation and feedback: October 20-23, 2022
Post-conference revision due date: January 1, 2023

Submission Guidelines:
Submit your manuscript to the Special Issue category (VSI: IPMC2022 HCICTS) through the online submission system of Information Processing & Management: https://www.editorialmanager.com/ipm/
Authors should prepare their submission following the Guide for Authors of the IP&M journal (https://www.elsevier.com/journals/information-processing-and-management/0306-4573/guide-for-authors).
All papers will be peer-reviewed following the IP&MC2022 reviewing procedures. The authors of accepted papers will be obligated to participate in IP&MC 2022 and present their paper to the community to receive feedback. The accepted papers will be invited for revision after receiving feedback at the IP&MC 2022 conference. The submissions will be given premium handling at IP&M following its peer-review procedure and, if accepted, published in IP&M as full journal articles, with an option for a short conference version at IP&MC2022. Please see this infographic for the manuscript flow: https://www.elsevier.com/__data/assets/pdf_file/0003/1211934/IPMC2022Timeline10Oct2022.pdf
For more information about IP&MC2022, please visit https://www.elsevier.com/events/conferences/information-processing-and-management-conference.

Thematic Track / Special Issue Editors:
Managing Guest Editor: Michal Ptaszynski (Kitami Institute of Technology)
Guest Editors:
Rafal Rzepka (Hokkaido University)
Anna Rogers (University of Copenhagen)
Karol Nowakowski (Tohoku University of Community Service and Science)
For further information, please feel free to contact Michal Ptaszynski directly.

From ioannakoroni at csd.auth.gr Thu Jan 20 03:32:38 2022
From: ioannakoroni at csd.auth.gr (Ioanna Koroni)
Date: Thu, 20 Jan 2022 10:32:38 +0200
Subject: Connectionists: Asynchronous Web e-Courses offered on Deep Learning, Computer Vision, Autonomous Systems, Signal/Image/Video Processing, Human-centered Computing, Social Media, Mathematical Foundations, CVML SW tools
References: <004a01d80d24$1bcb9460$5362bd20$@csd.auth.gr>
Message-ID: <02e801d80dd8$48b18480$da148d80$@csd.auth.gr>

Dear Computer Vision, Machine Learning, Autonomous Systems, DSP/DIP, and Social Media Engineers, Scientists and Enthusiasts,

You are welcome to register for and attend Web e-Courses consisting of one or more of the 21 CVML Web e-Course Modules on offer (Lecture Series comprising 208 lectures in total).
These asynchronous Web e-Course Modules (Lecture Series) provide an overview and in-depth presentation of 21 different domains:
* Deep Learning and Neural Networks: Machine Learning (12 Lectures), Neural Networks/Deep Learning (14 Lectures), Advanced Deep Learning (7 Lectures)
* Deep Learning and Computer Vision Foundations and Tools: Mathematical Foundations (9 Lectures), SW Development and Programming Tools (3 Lectures)
* Computer Vision/Image Processing and 3D Imaging: Computer Vision (12 Lectures), 2D Computer Vision/Image Analysis (8 Lectures), Image Processing (21 Lectures), Video Processing and Analysis (17 Lectures), 3D Imaging (9 Lectures), 3D Computer Graphics and Virtual Reality (5 Lectures)
* Autonomous Systems: Autonomous Systems Principles (8 Lectures), Robotics and Automatic Control (3 Lectures), Autonomous Cars (8 Lectures), Autonomous Drones (15 Lectures), Autonomous Marine Systems (4 Lectures)
* Human-Centered Computing. Social Networks. Graph Theory: Human-Centered Computing (15 Lectures), Network Theory. Social Media Analysis (10 Lectures)
* Digital Signal Processing and Applications: Signals and Systems (11 Lectures), Digital Signal Processing and Analysis (7 Lectures), Medical Image and Signal Analysis (4 Lectures), Acoustics, Speech, Natural Language Processing and Analysis (4 Lectures), Communications (2 Lectures)

You can combine CVML Web e-Course Modules to create CVML Web e-Courses (typically consisting of 16 lectures) of your own choice that cater to your personal education needs. Each CVML Web e-Course you create (16 lectures) provides material that can cover a semester course, but you can master it in approximately one month. Asynchronous tutor support will be provided in case of questions.
CVML Web e-Course Module materials typically consist of: a) a lecture pdf/ppt, and b) a lecture self-assessment understanding questionnaire and lecture video, programming exercises, tutorial exercises (for several modules/lectures), and an overall course module satisfaction questionnaire. The course materials have been used very successfully in many top conference keynote speeches/tutorials worldwide, as well as in short courses, summer schools, and semester courses delivered by the AIIA Lab physically or on-line from 2018 onwards, attracting many hundreds of registrants. Course materials are at senior undergraduate/MSc level in a CS, CSE, EE, ECE or related Engineering or Science Department. Their structure, level, and offering are completely different from what you can find in either Coursera or Udemy.

You can find sample Web e-Course Module material to make up your mind and/or perform CVML Web e-Course registration at: http://icarus.csd.auth.gr/cvml-web-lecture-series/
For questions, please contact: Ioanna Koroni

Academic/Research/Industry offer and arrangements
Special arrangements can be made to offer the material of these CVML Web e-Course Modules at University/Department/Company level:
* by granting access to the material to University/research/industry lecturers to be used as an aid in their teaching,
* by enabling class registration in CVML Web e-Courses,
* by delivering such live short courses physically or on-line by Prof. Ioannis Pitas,
* by combinations of the above.

The CVML Web e-Course is organized by Prof. I. Pitas, IEEE and EURASIP Fellow, Coordinator of the International AI Doctoral Academy (AIDA), past Chair of the IEEE SPS Autonomous Systems Initiative, Director of the Artificial Intelligence and Information Analysis Lab (AIIA Lab), Aristotle University of Thessaloniki, Greece, and Coordinator of the European Horizon2020 R&D project Multidrone. He was ranked the 249th top Computer Science and Electronics scientist internationally by Guide2Research (2018).
He has 34100+ citations to his work and an h-index of 87+. The Informatics Department at AUTH ranked 106th internationally in the field of Computer Science for 2019 in the Leiden Ranking list.

Relevant links:
1. Prof. I. Pitas: https://scholar.google.gr/citations?user=lWmGADwAAAAJ&hl=el
2. International AI Doctoral Academy (AIDA): https://www.i-aida.org/
3. Horizon2020 EU funded R&D project Aerial-Core: https://aerial-core.eu/
4. Horizon2020 EU funded R&D project Multidrone: https://multidrone.eu/
5. Horizon2020 EU funded R&D project AI4Media: https://ai4media.eu/
6. AIIA Lab: https://aiia.csd.auth.gr/

Sincerely yours,
Prof. I. Pitas
Director of the Artificial Intelligence and Information Analysis Lab (AIIA Lab)
Aristotle University of Thessaloniki, Greece

Post scriptum: To stay current on CVML matters, you may want to register to the CVML email list, following the instructions at https://lists.auth.gr/sympa/info/cvml

From hocine.cherifi at gmail.com Thu Jan 20 03:30:12 2022
From: hocine.cherifi at gmail.com (Hocine Cherifi)
Date: Thu, 20 Jan 2022 09:30:12 +0100
Subject: Connectionists: COMPLEX NETWORKS 2021 PROCEEDINGS AVAILABLE FOR FREE DOWNLOAD
Message-ID:

Dear colleagues,

You can download the two volumes of the proceedings of COMPLEX NETWORKS 2021 for free until the end of the month from the conference website: https://www.complexnetworks.org/

Best regards,

Hocine CHERIFI
University of Burgundy Franche-Comté
Deputy Director, LIB EA N° 7534
Editor in Chief, Applied Network Science
Editorial Board member: PLOS One, IEEE ACCESS, Scientific Reports, Journal of Imaging, Quality and Quantity, Computational Social Networks, Complex Systems, Complexity
From hugo.o.sousa at inesctec.pt Thu Jan 20 11:19:53 2022
From: hugo.o.sousa at inesctec.pt (Hugo Oliveira Sousa)
Date: Thu, 20 Jan 2022 16:19:53 +0000
Subject: Connectionists: Text2Story'22 Deadline Extension: Jan 31st. ECIR'22 Workshop on Narrative Extraction from Texts
Message-ID:

*** Apologies for cross-posting ***

++ CALL FOR PAPERS ++
****************************************************************************
Fifth International Workshop on Narrative Extraction from Texts (Text2Story'22)
Held in conjunction with the 44th European Conference on Information Retrieval (ECIR'22)
April 10th, 2022 - Stavanger, Norway
Website: https://text2story22.inesctec.pt
****************************************************************************

++ Important Dates ++
- Submission deadline: January 31st, 2022 (extended from January 24th, 2022)
- Acceptance Notification Date: March 1st, 2022
- Camera-ready copies: March 18th, 2022
- Workshop: April 10th, 2022

++ Overview ++
Although information extraction and natural language processing have made significant progress towards the automatic interpretation of texts, the problem of constructing consistent narrative structures is yet to be solved.

++ List of Topics ++
In the fifth edition of the Text2Story workshop, we aim to foster discussion of recent advances in the link between Information Retrieval (IR) and the formal understanding and representation of narratives in texts. Specifically, we aim to provide a common forum to consolidate multi-disciplinary efforts and foster discussions to identify the wide-ranging issues related to the narrative extraction task.
In this regard, we encourage high-quality and original submissions covering the following topics:
* Narrative Representation Language
* Story Evolution and Shift Detection
* Temporal Relation Identification
* Temporal Reasoning and Ordering of Events
* Causal Relation Extraction and Arrangement
* Narrative Summarization
* Multi-modal Summarization
* Automatic Timeline Generation
* Storyline Visualization
* Comprehension of Generated Narratives and Timelines
* Big Data Applied to Narrative Extraction
* Personalization and Recommendation of Narratives
* User Profiling and User Behavior Modeling
* Sentiment and Opinion Detection in Texts
* Argumentation Analysis
* Models for Detection and Removal of Bias in Generated Stories
* Ethical and Fair Narrative Generation
* Misinformation and Fact Checking
* Bots Influence
* Information Retrieval Models Based on Story Evolution
* Narrative-focused Search in Text Collections
* Event and Entity Importance Estimation in Narratives
* Multilinguality: Multilingual and Cross-lingual Narrative Analysis
* Evaluation Methodologies for Narrative Extraction
* Resources and Dataset Showcase
* Dataset Annotation and Annotation Schemas
* Applications in Social Media (e.g. narrative generation during a natural disaster)

++ Dataset ++
We challenge interested researchers to consider submitting a paper that makes use of the tls-covid19 dataset (published at ECIR'21) within the scope and purposes of the Text2Story workshop. tls-covid19 consists of a number of curated topics related to the COVID-19 outbreak, with associated news articles from Portuguese and English news outlets and their respective reference timelines as gold standard. While it was designed to support timeline summarization research tasks, it can also be used for other tasks, including the study of news coverage of the COVID-19 pandemic. A script to reconstruct and expand the dataset is available at https://github.com/LIAAD/tls-covid19.
The article itself is available at this link: https://link.springer.com/chapter/10.1007/978-3-030-72113-8_33

++ Submission Guidelines ++

We invite four kinds of submissions:

* Research papers (max 7 pages + references)
* Demos and position papers (max 5 pages + references)
* Work in progress and project description papers (max 4 pages + references)
* Nectar papers with a summary of own work published in other conferences or journals that is worthwhile sharing with the Text2Story community, emphasizing how it can be applied to narrative extraction, processing or storytelling, and adding further insights or discussions, novel aspects, results or case studies (max 3 pages + references)

Papers must be submitted electronically in PDF format through EasyChair (https://easychair.org/conferences/?conf=text2story2022). All submissions must be in English and formatted according to the one-column CEUR-ART style with no page numbers. Templates, either in Word or LaTeX, can be found in the following zip folder: http://ceur-ws.org/Vol-XXX/CEURART.zip. There is also an Overleaf page for LaTeX users, available at: https://www.overleaf.com/latex/templates/template-for-submissions-to-ceur-workshop-proceedings-ceur-ws-dot-org/hpvjjzhjxzjk. Submissions will be peer-reviewed by at least two members of the program committee. The accepted papers will appear in the proceedings published at CEUR workshop proceedings (usually indexed on DBLP).

++ Workshop Format ++

Participants of accepted papers will be given 15 minutes for oral presentations.

++ Organizing committee ++

Ricardo Campos (INESC TEC; Ci2 - Smart Cities Research Center, Polytechnic Institute of Tomar, Tomar, Portugal)
Alípio M.
Jorge (INESC TEC; University of Porto, Portugal)
Adam Jatowt (University of Innsbruck, Austria)
Sumit Bhatia (Media and Data Science Research Lab, Adobe)
Marina Litvak (Shamoon Academic College of Engineering, Israel)

++ Proceedings Chair ++

João Paulo Cordeiro (INESC TEC; University of Beira Interior)
Conceição Rocha (INESC TEC)

++ Web and Dissemination Chair ++

Hugo Sousa (INESC TEC)
Behrooz Mansouri (Rochester Institute of Technology)

++ Program Committee ++

Álvaro Figueira (INESC TEC & University of Porto)
Andreas Spitz (University of Konstanz)
António Horta Branco (University of Lisbon)
Arian Pasquali (CitizenLab)
Brenda Santana (Federal University of Rio Grande do Sul)
Bruno Martins (IST and INESC-ID - Instituto Superior Técnico, University of Lisbon)
Demian Gholipour (University College Dublin)
Daniel Gomes (FCT/Arquivo.pt)
Daniel Loureiro (University of Porto)
Denilson Barbosa (University of Alberta)
Deya Banisakher (Defense Threat Reduction Agency (DTRA), Ft. Belvoir, VA, USA)
Dhruv Gupta (Norwegian University of Science and Technology (NTNU), Trondheim, Norway)
Dwaipayan Roy (ISI Kolkata, India)
Dyaa Albakour (Signal)
Evelin Amorim (INESC TEC)
Florian Boudin (Université de Nantes)
Grigorios Tsoumakas (Aristotle University of Thessaloniki)
Henrique Lopes Cardoso (University of Porto)
Hugo Sousa (INESC TEC)
Ismail Sengor Altingovde (Middle East Technical University)
Jeffery Ansah (BHP)
João Paulo Cordeiro (INESC TEC & University of Beira Interior)
Kiran Kumar Bandeli (Walmart Inc.)
Ludovic Moncla (INSA Lyon)
Marc Spaniol (Université
de Caen Normandie)
Nina Tahmasebi (University of Gothenburg)
Pablo Gamallo (University of Santiago de Compostela)
Paulo Quaresma (Universidade de Évora)
Pablo Gervás (Universidad Complutense de Madrid)
Paul Rayson (Lancaster University)
Preslav Nakov (Qatar Computing Research Institute (QCRI))
Satya Almasian (Heidelberg University)
Sérgio Nunes (INESC TEC & University of Porto)
Udo Kruschwitz (University of Regensburg)
Yihong Zhang (Kyoto University)

++ Contacts ++

Website: https://text2story22.inesctec.pt
For general inquiries regarding the workshop, reach the organizers at: text2story2022 at easychair.org

From d.kollias at qmul.ac.uk Fri Jan 21 13:28:18 2022
From: d.kollias at qmul.ac.uk (Dimitrios Kollias)
Date: Fri, 21 Jan 2022 18:28:18 +0000
Subject: Connectionists: (CfP) CVPR 2022: 3rd Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW)
Message-ID:

Dear All,

Please find below the invitation to contribute to the 3rd Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW), to be held in conjunction with the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2022.

(1): The Competition is split into four Challenges, which are based on the Aff-Wild2 database (or a static version of it), the first comprehensive benchmark annotated for different affective tasks (dimensional and categorical ones). The four Challenges are:

* Valence-Arousal Estimation Challenge
* Expression Classification Challenge
* Action Unit Detection Challenge
* Multi-Task-Learning Challenge

Aff-Wild2 is an audiovisual in-the-wild database of 564 videos of around 2.8M frames. Participants are invited to participate in at least one of these Challenges.
There will be one winner per Challenge. The top-3 performing teams of each Challenge will have to contribute paper(s) describing their approach, methodology and results to our Workshop; all other teams are also encouraged to submit paper(s) describing their solutions and final results. All accepted papers will be part of the CVPR 2022 proceedings. More information about the Competition can be found here.

Important Dates:

* Call for participation announced, team registration begins, data available: 20 January, 2022
* Final submission deadline: 16 March, 2022
* Winners Announcement: 18 March, 2022
* Final paper submission deadline: 25 March, 2022
* Review decisions sent to authors; Notification of acceptance: 1 April, 2022
* Camera ready version deadline: 8 April, 2022

Chairs:

Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Viktoriia Sharmanska, University of Sussex, UK
Elnar Hajiyev, Realeyes - Emotional Intelligence

(2): The Workshop solicits contributions on the recent progress of recognition, analysis, generation and modelling of face, body, and gesture, while embracing the most advanced systems available for face and gesture analysis, particularly in-the-wild (i.e., in unconstrained environments) and across modalities like face to voice. In parallel, this Workshop will solicit contributions towards building fair models that perform well on all subgroups and improve in-the-wild generalisation.
Original high-quality contributions are solicited, including:

- databases, or
- surveys and comparative studies, or
- Artificial Intelligence / Machine Learning / Deep Learning / AutoML / (data-driven or physics-based) Generative Modelling methodologies (either uni-modal or multi-modal; uni-task or multi-task ones)

on the following topics:

i) "in-the-wild" facial expression or micro-expression analysis,
ii) "in-the-wild" facial action unit detection,
iii) "in-the-wild" valence-arousal estimation,
iv) "in-the-wild" physiological-based (e.g., EEG, EDA) affect analysis,
v) domain adaptation for affect recognition in the previous 4 cases,
vi) "in-the-wild" face recognition, detection or tracking,
vii) "in-the-wild" body recognition, detection or tracking,
viii) "in-the-wild" gesture recognition or detection,
ix) "in-the-wild" pose estimation or tracking,
x) "in-the-wild" activity recognition or tracking,
xi) "in-the-wild" lip reading and voice understanding,
xii) "in-the-wild" face and body characterization (e.g., behavioral understanding),
xiii) "in-the-wild" characteristic analysis (e.g., gait, age, gender, ethnicity recognition),
xiv) "in-the-wild" group understanding via social cues (e.g., kinship, non-blood relationships, personality),
xv) subgroup distribution shift analysis in affect recognition,
xvi) subgroup distribution shift analysis in face and body behaviour,
xvii) subgroup distribution shift analysis in characteristic analysis.

Accepted workshop papers will appear in the CVPR 2022 proceedings.
Important Dates:

Paper Submission Deadline: 25 March, 2022
Review decisions sent to authors; Notification of acceptance: 1 April, 2022
Camera ready version: 8 April, 2022

Chairs:

Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Viktoriia Sharmanska, University of Sussex, UK
Elnar Hajiyev, Realeyes - Emotional Intelligence

In case of any queries, please contact d.kollias at qmul.ac.uk

Kind Regards,
Dimitrios Kollias, on behalf of the organising committee

========================================================================
Dr Dimitrios Kollias
Lecturer (equivalent to Assistant Professor) in Artificial Intelligence
School of EECS
Queen Mary University of London
========================================================================

From tobi at ini.uzh.ch Sat Jan 22 03:05:40 2022
From: tobi at ini.uzh.ch (Tobi Delbruck (UZH-ETH))
Date: Sat, 22 Jan 2022 09:05:40 +0100
Subject: Connectionists: AICAS 2022 paper deadline Feb 4 2022
Message-ID:

The IEEE Artificial Intelligence Circuits and Systems conference deadline has been extended to a final deadline of Feb 4, 2022. This conference is an excellent venue for real-world hardware-software AI systems and components. See https://aicas2022.org.

-Tobi Delbruck, UZH-ETH Zurich

From david at irdta.eu Sat Jan 22 09:54:22 2022
From: david at irdta.eu (David Silva - IRDTA)
Date: Sat, 22 Jan 2022 15:54:22 +0100 (CET)
Subject: Connectionists: DeepLearn 2022 Spring - DeepLearn 2022 Summer
Message-ID: <100323494.271546.1642863262879@webmail.strato.com>

Dear all,

DeepLearn, the International School on Deep Learning, has been running successfully since 2017. Please note the next editions of the program in 2022:

https://irdta.eu/deeplearn/2022sp/
https://irdta.eu/deeplearn/2022su/

Best regards,
DeepLearn organizing team
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nemanja at temple.edu Sun Jan 23 08:55:44 2022
From: nemanja at temple.edu (Nemanja Djuric)
Date: Sun, 23 Jan 2022 13:55:44 +0000
Subject: Connectionists: CfP: The 4th Workshop on "Precognition: Seeing through the Future" @ CVPR 2022
Message-ID:

Call for Workshop Papers

The 4th Workshop on "Precognition: Seeing through the Future"
in conjunction with The 35th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022)
New Orleans, June 19th-24th, 2022
https://sites.google.com/view/ieeecvf-cvpr2022-precognition

=================

Despite its potential and relevance for real-world applications, visual forecasting or precognition has not received as much attention in theoretical studies and practical applications as detection and recognition problems. Through the organization of this workshop we aim to facilitate further discussion and interest within the research community regarding this nascent topic. The workshop will discuss recent approaches and research trends not only in anticipating human behavior from videos, but also precognition in multiple other visual applications, such as: medical imaging, health-care, human face aging prediction, early event prediction, autonomous driving forecasting, and so on. In addition, this workshop will give an opportunity for the community in both academia and industry to meet and discuss future work and research directions. It will bring together researchers from different fields and viewpoints to discuss existing major research problems and identify opportunities in further research directions in both research topics and industrial applications. This is the fourth Precognition workshop organized at CVPR. It follows very successful workshops organized since 2019, which featured talks from researchers across a number of industries, insightful presentations, and large attendance.
For full programs, slides, posters, and other resources, please visit the websites of earlier Precognition workshops, linked at the workshop website.

=================

Topics:

The workshop focuses on several important aspects of visual forecasting. The topics of interest for this workshop include, but are not limited to:

- Early event prediction
- Activity and trajectory forecasting
- Multi-agent forecasting
- Human behavior and pose prediction
- Human face aging prediction
- Predicting frames and features in videos and other sensors in autonomous driving
- Traffic congestion anomaly prediction
- Automated Covid-19 prediction in medical imaging
- Visual DeepFake prediction
- Short- and long-term prediction and diagnoses in medical imaging
- Prediction of agricultural parameters from satellite imagery
- Databases, evaluation and benchmarking in precognition

=================

Submission Instructions:

All submitted work will be assessed based on its novelty, technical quality, potential impact, insightfulness, depth, clarity, and reproducibility. For each accepted submission, at least one author must attend the workshop and present the paper. There are two ways to contribute submissions to the workshop:

- Extended abstract submissions are single-blind peer-reviewed, and author names and affiliations should be listed. Extended abstract submissions are limited to a total of four pages. Extended abstracts of already published works can also be submitted. Accepted abstracts will be presented at the poster session, and will not be included in the printed proceedings of the workshop.
- Full paper submissions are double-blind peer-reviewed. The submissions are limited to eight pages, including figures and tables, in the CVPR style. Additional pages containing only cited references are allowed (additional information about formatting and style files is available at the website).
Accepted papers will be presented at the poster session, with selected papers also being presented in an oral session. All accepted papers will be published by the CVPR in the workshop proceedings.

Submission website: https://cmt3.research.microsoft.com/PRECOGNITION2022

=================

Important Deadlines:

Submission : March 19th, 2022
Decisions : April 4th, 2022
Camera-ready : April 8th, 2022
Workshop : June 19th, 2022 (subject to change by the CVPR organizers)

=================

Program Committee Chairs:

- Dr. Khoa Luu (University of Arkansas)
- Dr. Kris Kitani (Carnegie Mellon University)
- Dr. Hien Van Nguyen (University of Houston)
- Dr. Nemanja Djuric (Aurora Innovation)
- Dr. Utsav Prabhu (Google)

For further questions please contact a member of the organizing committee at precognition.organizers at gmail.com.

From caspar.schwiedrzik at googlemail.com Sat Jan 22 14:54:06 2022
From: caspar.schwiedrzik at googlemail.com (Caspar M. Schwiedrzik)
Date: Sat, 22 Jan 2022 20:54:06 +0100
Subject: Connectionists: Data analyst (full time) @ European Neuroscience Institute Göttingen
Message-ID:

The European Neuroscience Institute is looking to fill the position of a data analyst (full time). We are looking for a data analyst with interest and experience in systems neuroscience. Research at the European Neuroscience Institute ranges from molecular biology to human psychophysics and involves a range of model organisms (from drosophila to non-human primates), as well as techniques and approaches (electrophysiology, two-photon imaging, fMRI, EEG, behavior). The data analyst is expected to work closely with all of the research groups at the European Neuroscience Institute, supporting research efforts, e.g.
through modelling and statistical analyses of high-dimensional data, image processing, and programming/development of experiments, including the opportunity to develop and publish, e.g., analytical tools that arise from this work.

- The applicant should possess a university degree (minimum M.Sc. or equivalent) in a relevant field, e.g., statistics, biostatistics, informatics, or similar. Prior experience in the field of systems neuroscience is highly desired.
- The applicant should have experience in a research setting utilizing quantitative methods and statistics. The applicant should also demonstrate strong analytical skills. Knowledge of novel and emerging analysis techniques is highly desirable. Forward thinking in the area of big data analytics/informatics and applying it to contribute to the research groups' scientific process is expected.
- The applicant should be skilled in the analysis of multivariate datasets to reveal patterns and build models; conduct exploratory data analysis and communicate with team lead/team members; and identify improvements for existing data management and recommend requirements for new systems.
- In addition, the applicant should contribute to replicability by making suggestions for existing data management, recommending requirements for new systems, and identifying potential data integrity issues. We are committed to open data/open science and appreciate interest and expertise in this area.
- Utilize programming languages such as Python, Matlab and/or C++.
- A good command of English is mandatory.

The University Medical Center Göttingen takes flexible account of the individual design of working hours at the workplace. It is interested in implementing the wishes of its employees as far as possible. If you are interested in this job and have specific questions about working hours, please contact us. Women are especially encouraged to apply. Applicants with disabilities and equal qualifications will be given preferential treatment.
*Employment Start Date:* 1 May 2022
*Contract Length:* Initially limited until 31.12.2024 with extension options.

Please send your application (CV, letter of motivation) via e-mail as a single PDF file to j.clemens at eni-g.de.

From george at cs.ucy.ac.cy Sun Jan 23 06:45:10 2022
From: george at cs.ucy.ac.cy (George A. Papadopoulos)
Date: Sun, 23 Jan 2022 13:45:10 +0200
Subject: Connectionists: 2022 IEEE International Conference on Evolving and Adaptive Intelligent Systems (IEEE EAIS 2022): Last Call for Papers
Message-ID:

*** Last Call for Papers ***

2022 IEEE International Conference on Evolving and Adaptive Intelligent Systems (IEEE EAIS 2022)
May 25-27, 2022, Golden Bay Hotel 5*, Larnaca, Cyprus
http://cyprusconferences.org/eais2022/

(Proceedings to be published by the IEEE Xplore Digital Library; Special Journal Issue with Evolving Systems, Springer)

(*** Submission Deadline: February 7, 2022 (firm) ***)

IEEE EAIS 2022 will provide a working and friendly atmosphere and will be a leading international forum focusing on the discussion of recent advances, the exchange of recent innovations and the outline of important open future challenges in the area of Evolving and Adaptive Intelligent Systems. Over the past decade, this area has emerged to play an important role on a broad international level in today's real-world applications, especially those with high complexity and dynamic changes. Its embedded modelling and learning methodologies are able to cope with real-time demands, changing operation conditions, varying environmental influences, human behaviours, knowledge expansion scenarios and drifts in online data streams.

Conference Topics

Basic Methodologies:
Evolving Soft Computing Techniques. Evolving Fuzzy Systems. Evolving Rule-Based Classifiers. Evolving Neuro-Fuzzy Systems. Adaptive Evolving Neural Networks. Online Genetic and Evolutionary Algorithms. Data Stream Mining.
Incremental and Evolving Clustering. Adaptive Pattern Recognition. Incremental and Evolving ML Classifiers. Adaptive Statistical Techniques. Evolving Decision Systems. Big Data.

Problems and Methodologies in Data Streams:
Stability, Robustness, Convergence in Evolving Systems. Online Feature Selection and Dimension Reduction. Online Active and Semi-supervised Learning. Online Complexity Reduction. Computational Aspects. Interpretability Issues. Incremental Adaptive Ensemble Methods. Online Bagging and Boosting. Self-monitoring Evolving Systems. Human-Machine Interaction Issues. Hybrid Modelling, Transfer Learning. Reservoir Computing.

Applications of EAIS:
Time Series Prediction. Data Stream Mining and Adaptive Knowledge Discovery. Robotics. Intelligent Transport and Advanced Manufacturing. Advanced Communications and Multimedia Applications. Bioinformatics and Medicine. Online Quality Control and Fault Diagnosis. Condition Monitoring Systems. Adaptive Evolving Controller Design. User Activities Recognition. Huge Database and Web Mining. Visual Inspection and Image Classification. Image Processing. Cloud Computing. Multiple Sensor Networks. Query Systems and Social Networks. Alternative Statistical and Machine Learning Approaches.

Submissions

Submitted papers should not exceed 8 pages plus at most 2 pages overlength. Submissions of full papers are accepted online through EasyChair (https://easychair.org/conferences/?conf=eais2022). The EAIS 2022 proceedings will be published on the IEEE Xplore Digital Library. Authors of selected papers will be invited to submit extended versions for possible inclusion in a special issue of Evolving Systems, published by Springer (https://www.springer.com/journal/12530).

Important Dates

- Paper submission: February 7, 2022 (firm)
- Notification of acceptance/rejection: March 7, 2022
- Camera ready submission: March 20, 2022
- Authors registration: March 20, 2022
- Conference Dates: May 25-27, 2022

Social Media
FB: https://www.facebook.com/IEEEEAIS
Twitter: https://twitter.com/IEEE_EAIS
Linkedin: https://www.linkedin.com/events/2022ieeeconferenceonevolvingand6815560078674972672/

Organization

Honorary Chairs
- Dimitar Filev, Ford Motor Co., USA
- Nikola Kasabov, Auckland University of Technology, New Zealand

General Chairs
- George A. Papadopoulos, University of Cyprus, Nicosia, Cyprus
- Plamen Angelov, Lancaster University, UK

Program Committee Chairs
- Giovanna Castellano, University of Bari, Italy
- José A. Iglesias, Carlos III University of Madrid, Spain

From amir.kalfat at gmail.com Sun Jan 23 12:10:21 2022
From: amir.kalfat at gmail.com (Amir Aly)
Date: Sun, 23 Jan 2022 17:10:21 +0000
Subject: Connectionists: CRNS Talk Series - Live Talk by Dr. Alessandra Sciutti - Italian Institute of Technology (IIT)
Message-ID:

** Apologies for cross-posting **

Dear All,

The Center for Robotics and Neural Systems (CRNS) at Plymouth University is pleased to announce the talk of *Dr. Alessandra Sciutti* from the *Italian Institute of Technology* (IIT) on Wednesday February 2nd at *11:00 am - 12:30 pm (GMT)* over *Zoom*. Registration is *required* and free: Registration Form.

*Title of the talk*: Cognitive robots for more humane interactions

*Abstract*: A cognitive robot is a robot capable of adapting, predicting, and pro-actively interacting with the environment and communicating with its human partners. Our research leverages the humanoid robot iCub to test how to build such a cognitive interactive agent. We model the minimal skills necessary for cognitive development, such as the visual features that enable it to recognize the presence of other agents in the scene, their internal state and their responses to robot behavior.
In a dual approach, we are trying to understand how to modulate robot movement to make it more transparent and understandable to non-expert users. As a next step, we are focusing on the development of simple cognitive architectures that could integrate the sensory and motor capabilities developed in isolation together with memory, internal motivation and learning mechanisms, to achieve personalization and adaptation skills. We believe that only a structured effort toward cognition will in the future allow for more humane machines, able to see the world and people as we do and engage with them in a meaningful manner.

---

If you have any questions, please don't hesitate to contact me.

Regards,

----------------
*Dr. Amir Aly*
Lecturer in Artificial Intelligence and Robotics
Center for Robotics and Neural Systems (CRNS)
School of Engineering, Computing, and Mathematics
Room B332, Portland Square, Drake Circus, PL4 8AA
University of Plymouth, UK

From poirazi at imbb.forth.gr Sun Jan 23 06:59:19 2022
From: poirazi at imbb.forth.gr (Yiota Poirazi)
Date: Sun, 23 Jan 2022 13:59:19 +0200
Subject: Connectionists: DENDRITES 2022: second call for abstracts due on Feb. 1st, 2022
Message-ID:

DENDRITES 2022
EMBO Workshop on Dendritic Anatomy, Molecules and Function
Heraklion, Crete, Greece, 23-26 May 2022
http://meetings.embo.org/event/20-dendrites

Dear Colleagues,

We are pleased to announce the solicitation of abstracts for short oral or poster presentations at the EMBO Workshop DENDRITES 2022, which will take place in Heraklion, Crete on 23-26 May 2022. This is the 4th of a very successful series of meetings on the island of Crete that is dedicated to dendrites. The meeting will bring together scientific leaders from around the globe to present their theoretical and experimental work on dendrites.
The meeting program is designed to facilitate discussions of new ideas and discoveries, in a relaxed atmosphere that emphasizes interaction. We have secured an exciting list of speakers, including:

Anthony Holtmaat, University of Geneva, CH
Attila Losonczy, Columbia University, US
Christine Grienberger, Brandeis University, US
David DiGregorio, Institut Pasteur, FR
Dan Johnston, University of Texas at Austin, US
Hermann Cuntz, Ernst Strüngmann Institute for Neuroscience, DE
Holly Cline, The Scripps Research Institute, US
Idan Segev, Hebrew University, IL
Jackie Schiller, Technion, IL
Judit Makara, Institute of Experimental Medicine of the Hungarian Academy of Sciences, HU
Julijana Gjorgjieva, MPI for Brain Research, DE
Karen Zito, University of California Davis, US
Linnaea Ostroff, University of Connecticut, US
Lisa Topolnik, Université Laval, CA
Mark Harnett, MIT, US
Peter Jonas, Institute of Science and Technology Austria, AT
Terry Sejnowski, Salk Institute, US
Wenbiao Gan, New York University, US

Please register (no payment required) and submit your abstract online at: http://meetings.embo.org/event/20-dendrites

Submissions of abstracts are due by *February 1st, 2022*
Notifications will be provided by February 28th, 2022
Registration payment due by April 15th, 2022

Potential attendees are strongly encouraged to submit an abstract, as presenters will have registration priority. For more information about the conference, please refer to our web site or send email to info at mitos.com.gr

We look forward to seeing you in person at DENDRITES 2022!

The organizers,
Yiota Poirazi, Kristen Harris, Matthew Larkum, Michael Häusser

--
Panayiota Poirazi, Ph.D.
Research Director
Institute of Molecular Biology and Biotechnology (IMBB)
Foundation of Research and Technology-Hellas (FORTH)
Vassilika Vouton, P.O. Box 1385, GR 70013, Heraklion, Crete, GREECE
Tel: +30 2810-391139 / -391238
Fax: +30 2810-391101
Email: poirazi at imbb.forth.gr
Lab site: www.dendrites.gr

From Francesco.Rea at iit.it Mon Jan 24 05:08:23 2022
From: Francesco.Rea at iit.it (Francesco Rea)
Date: Mon, 24 Jan 2022 10:08:23 +0000
Subject: Connectionists: [jobs] DEADLINE POSTPONED: Post-doc Functional Memory Network in collaborative AI for context awareness and action planning in robotics @ Italian Institute of Technology (IIT)
Message-ID: <5c759cd41829498b9c22f8a1acc025e5@iit.it>

Post-doc Functional Memory Network in collaborative AI for context awareness and action planning in robotics

At IIT we work enthusiastically to develop human-centered Science and Technology to tackle some of the most pressing societal challenges of our times and transfer these technologies to the production system and society. Our Genoa headquarters is strictly inter-connected with our 11 centers around Italy and two outer stations based in the US for a truly interdisciplinary experience. The CONTACT Research Line is coordinated by Alessandra Sciutti, who has extensive experience in Cognitive Architecture for Human Robot Interaction.

Within the team, your main responsibilities will be:

* Exploiting functional memory networks and related AI in a cognitive architecture for better human-robot collaboration;
* Design of control systems for dexterous mobile robots aiming at natural human-robot collaboration;
* Development of an AI solution for context awareness in collaborative unstructured manufacturing contexts;
* Development of an AI solution for action planning in collaborative unstructured manufacturing contexts.
This open position is financed by the European Commission through the HBP (Human Brain Project) CEoI for SGA3 - on the application of functional architectures supporting advanced cognitive functions to address industrial and commercial AI and automation problems - within the awarded PROMEN-AID (Proactive Memory iN AI for Development) project (GA-94553).

Please submit your application using the online form (https://iit.taleo.net/careersection/ex/jobdetail.ftl?lang=it&job=21000089), including a detailed CV, a cover letter (outlining motivation, experience and qualifications), and the names and contacts of 2 referees.

Application's deadline: February 20, 2022.

From jiachen.xu at univie.ac.at Mon Jan 24 04:30:10 2022
From: jiachen.xu at univie.ac.at (Jiachen Xu)
Date: Mon, 24 Jan 2022 10:30:10 +0100
Subject: Connectionists: [3rd BCI-UC] Final program online
Message-ID: <8dfb9f4d4297d0b5c77d333a3e8982ee@univie.ac.at>

Dear colleagues,

We are very excited to announce that the final program of the 3rd Brain-Computer Interface Un-Conference (BCI-UC) is now online: https://bciunconference.univie.ac.at/3rd-bci-uc/

All presentations are streamed live and free of charge on January 27th (Thursday), 2022, from 03:00 pm to 09:00 pm (CET) via: https://www.crowdcast.io/e/3rd-brain-computer-interface-unconference/1

The 3rd BCI-UC features keynotes by *Cynthia A. Chestek* on "Neural Interfaces for Controlling Finger Movements" and by *Thorsten O. Zander* on "The Age of Neuroadaptivity". In addition, we have seven contributed presentations and one panel discussion on a broad range of topics for BCI technologies.

We are looking forward to seeing you all at the 3rd BCI-UC!

Best regards,
The BCI-UC committee:
Moritz Grosse-Wentrup
Anja Meunier
Philipp Raggam
Jiachen Xu

---------------------------------------
M.Sc.
Jiachen Xu, Research Group Neuroinformatics
Faculty of Computer Science
University of Vienna
Address: Kolingasse 14-16, A-1090 Wien, Austria

From Pavis at iit.it Mon Jan 24 06:23:09 2022
From: Pavis at iit.it (Pavis)
Date: Mon, 24 Jan 2022 11:23:09 +0000
Subject: Connectionists: 2 Postdoc positions in 3D scene understanding from multi-modal data - (2200000J)
Message-ID:

2 Postdoc positions in 3D scene understanding from multi-modal data - (2200000J)

Commitment & contract: up to 2 years, collaboration contract
Location: Italian Institute of Technology (IIT), Genoa, Via Enrico Melen 83, Italy

WHO WE ARE

At IIT we work enthusiastically to develop human-centered Science and Technology to tackle some of the most pressing societal challenges of our times and transfer these technologies to the society. Our Genoa headquarters is strictly inter-connected with our 11 centres around Italy and two outer stations based in the US for a truly interdisciplinary experience.

YOUR TEAM

You'd be working in the multicultural and multidisciplinary PAVIS group, where a team of 30 international Researchers, Postdocs, Ph.D. students and Engineers enjoy working on cutting-edge Computer Vision and Machine Learning research. The PAVIS infrastructure comprises a multi-modal sensor network (RGB, depth, event-based, thermal, lidar cameras) used for deploying AI systems for scene understanding and behavioural analysis, and an HPC cluster with more than 62 GPU nodes to run large-scale training and deployment of ML models. The PAVIS Research Line is coordinated by Dr. Alessio Del Bue. The research focuses on developing novel Artificial Intelligence (AI) systems to understand our 3D physical space and the human behaviours within it. The aim is to create assistive AI systems that can automatically react and support humans in their daily life by analysing and forecasting activities and their interaction with the environment.
We work towards this goal by developing new Computer Vision and Machine (Deep) Learning methodologies that can be readily deployed in realistic applications. The specific research topics of interest for this call are: - 3D object localization from multi-modal sensors - Semantic Structure from Motion and SLAM - 2D/3D scene understanding using Graph Neural Networks - Deep learning models subject to multi-view geometry constraints - Retrieval and knowledge-based Computer Vision in large-scale 3D maps Within the team, your main responsibilities will be: - Develop new methods for 3D scene understanding from multi-modal data (images, audio, and text) to solve practical problems in Computer Vision - Publish in top-tier Computer Vision and Machine Learning conferences and journals WHAT WOULD MAKE YOU SHINE - A PhD in Computer Vision, Machine Learning, Robotics, or similar disciplines - Documented experience in 3D vision, scene understanding and related fields - A strong publication record - Excellent programming skills in Python and/or C/C++ - The ability to properly report, organize and publish research data - Good command of spoken and written English EXTRA AWESOME - Experience in collaborative software development. Good knowledge of tools like Git - Experience with the Linux operating system and familiarity with deep learning libraries and dependencies (CUDA, PyTorch, PyTorch3D, TensorFlow, TensorFlow 3D) - Experience with DNNs and Graph Neural Networks - Experience in deploying DNN models on HPC platforms - Good communication skills - Strong problem-solving attitude - High motivation to learn - Good time and priority management - Ability to work in a challenging and international environment - Ability to work independently and collaboratively in a highly interdisciplinary environment COMPENSATION & BENEFITS - Competitive salary package by international standards - Private health care coverage - Wide range of staff discounts WHAT'S IN IT FOR YOU?
An equal, inclusive and multicultural environment ready to welcome you with open arms. Discrimination is a big NO for us! We like cross-pollination and encourage you to mingle and discover what other people are up to in our labs! If paperwork is not your cup of tea, we've got you covered! There's a specialized team working to help you with that, especially during your relocation! If you are a start-upper or a business-minded person, you will find some exceptionally gifted professionals ready to nurture and guide your attitude and aspirations. If you want your work to have a real impact, at IIT you will find an innovative and stimulating culture that drives our mission to contribute to the improvement and well-being of society! We stick to our values! Integrity, courage, societal responsibility and inclusivity are the values we believe in! They define us and our actions in our everyday life. They guide us to accomplish IIT's mission! If you feel this tickles your appetite for change, do not hesitate to apply! Please submit your application using the online form at https://iit.taleo.net/careersection/ex/jobdetail.ftl?lang=it&job=2200000J and include a detailed CV, a cover letter (outlining motivation, experience and qualifications) and the contact details of 2 referees. Application deadline: February 17th, 2022. We inform you that the information you provide will be used solely for the purposes of evaluating and selecting professional profiles in order to meet the requirements of Istituto Italiano di Tecnologia. Your data will be processed by Istituto Italiano di Tecnologia, based in Genoa, Via Morego 30, acting as Data Controller, in compliance with the rules on protection of personal data, including those related to data security. Please also note that, pursuant to articles 15 et seq. of European Regulation no.
679/2016 (General Data Protection Regulation), you may exercise your rights at any time by contacting the Data Protection Officer (Tel: +39 010 28961 - email: dpo at iit.it - kindly note that this e-mail address is exclusively reserved for handling data protection issues. Please do not use this e-mail address to send any document and/or request for information about this opening). -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahirose at ee.t.u-tokyo.ac.jp Mon Jan 24 07:43:22 2022 From: ahirose at ee.t.u-tokyo.ac.jp (Akira Hirose) Date: Mon, 24 Jan 2022 21:43:22 +0900 Subject: Connectionists: CFP: ICETCI -- Emerging Techniques in Computational Intelligence - International Conference - Call for Papers Message-ID: <61EE9EEA.1010406@ee.t.u-tokyo.ac.jp> (Apologies for cross-posting) =========== 2022 International Conference on Emerging Techniques in Computational Intelligence (ICETCI) http://ietcint.com/ Deadline for Paper Submission: Feb 15, 2022 Final Notification of review outcomes: May 15, 2022 Submission of Final paper: May 31, 2022 Early Registration Deadline: May 31, 2022 Final Registration Deadline: Aug 24, 2022 Conference dates: Aug 25-27, 2022. ========== Dear All, After the success of the recently concluded First International Conference on Emerging Techniques in Computational Intelligence, ICETCI 2021 (proceedings live on IEEE Xplore), with multiple tutorial sessions, competitions, ten keynote lectures by internationally acclaimed specialists, and technical paper presentations, we are now preparing the second edition of the conference. The Second International Conference on Emerging Techniques in Computational Intelligence, ICETCI 2022, will be held at Mahindra University, Hyderabad, on Aug 25-27, 2022.
The conference aims to highlight the evolution of topics, frontline research and multiple applications in the domain of Computational Intelligence, from the mainstream foundations to novel investigations and applications. The conference comprises one day of tutorial sessions followed by two days of keynote lectures by invited international experts from industry and academia, and technical paper presentations. The conference also hosts several special sessions on emerging technologies and applications related to computational intelligence. In addition to tutorials by experts from academia, the conference is also expected to have industry-relevant tutorials by experts from top industries like NVIDIA and Tech Mahindra. You may like to listen to an introductory video on ICETCI 2022 by the Dean of Research, Mahindra University, Prof Arya K. Bhattacharya, by clicking the following link: https://www.youtube.com/watch?v=dc12sCwiUlk More information on the Conference may be found at http://ietcint.com/ ICETCI 2022 invites submissions of original, previously unpublished, innovative work in any area of Computational Intelligence, covering both emerging topics, which form the theme of the conference, and more foundational areas. The three main tracks of the Conference are: • Deep Learning • Sequence Modelling • General Topics in Computational Intelligence LIST OF TOPICS Models • Neural Networks • Evolutionary Algorithms • Fuzzy Logic • Rough Sets • Bayesian Methods • Reinforcement Learning • Cognitive Learning • Quantum Computing • Learning Paradigms • Memory Paradigms • Reasoning Models • Deep Learning • Explainable AI • Physics Informed Neural Networks • Adversarial Machine Learning • Game Theory • Extreme Learning Machines • Intelligent Agents • Multi-Objective Optimization Applications • Natural Language Processing • Computational Genomics • Recommendation Systems • Music Information Retrieval • Generative Adversarial Models • Blockchain • Augmented & Virtual Reality • Industry 4.0 •
Cybersecurity • Social and Crowd Computing • Big Data Analytics • Robotic Process Automation • 5G/6G Communications • Renewable Energy Systems • Structural Health Monitoring • Smart Cities • Intelligent Transportation Systems • Neuroscience • Healthcare • Graphical Models • Climate Science • Unsupervised Learning • Remote Sensing • SSE Devices Modelling • Computational Finance • Computer Vision • Sentiment Analysis Paper Submission Manuscripts for ICETCI 2022 should be submitted electronically at https://edas.info/N29171 Authors should submit manuscripts up to 8 A4-size pages in length, including figures, tables and references, prepared using the provided templates. At most two extra pages can be included at $50 each. All submitted manuscripts will receive four or more reviews. Accepted papers will be submitted for inclusion in IEEE Xplore, subject to meeting IEEE Xplore's scope and quality requirements. Paper Templates Papers must be A4 size and prepared in two-column format, following the standard templates for IEEE Conference Proceedings, available for Microsoft Word and LaTeX at https://www.ieee.org/conferences/publishing/templates.html Important Dates: Special Session Proposal Deadline: Dec 31, 2021 Tutorial Proposal Deadline: Feb 15, 2022 Last date for Paper Submission: Feb 15, 2022 Final Notification of review outcomes: May 15, 2022 Submission of Final paper: May 31, 2022 Early Registration Deadline: May 31, 2022 Final Registration Deadline: Aug 24, 2022 Conference dates: Aug 25-27, 2022. Call for Special Sessions Special Session proposals are invited for the ICETCI 2022 conference. The proposal shall include the title, aims, scope, and organizers' names with short biographies. A list of potential contributors would be helpful in evaluating the proposal.
All proposals are to be submitted to the Special Session Chair (rama.murthy at mahindrauniversity.edu.in) Call for Tutorials Tutorials provide a forum to learn about emerging techniques in computational intelligence through hands-on sessions or demonstrations. Potential organizers shall send their proposals to the Tutorials Committee Chairs (tilottama.goswami at ieee.org , neha.bharill at mahindrauniversity.edu.in ). We look forward to welcoming you to Hyderabad! Best Regards, Publicity Committee Chairs - ICETCI 2022 -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.pascanu at gmail.com Mon Jan 24 12:37:18 2022 From: r.pascanu at gmail.com (Razvan Pascanu) Date: Mon, 24 Jan 2022 17:37:18 +0000 Subject: Connectionists: CFP: 1st Conference on Lifelong Learning Agents (CoLLAs) 2022 - Deadline March 04. Message-ID: Dear All, We invite submissions to the 1st Conference on Lifelong Learning Agents (CoLLAs) that describe new theory, methodology or *new insights into existing algorithms and/or benchmarks*. Accepted papers will be published in the Proceedings of Machine Learning Research (PMLR). Topics of submission may include, but are not limited to, reinforcement learning, supervised learning or unsupervised learning approaches for: - Lifelong Learning / Continual Learning - Meta-Learning - Multi-task Learning - Transfer Learning - Domain Adaptation - Few-shot Learning - Out-of-distribution Generalization - Online Learning The conference also welcomes submissions at the intersection of machine learning and neuroscience and applications of the topics of interest to real-world problems. Submitted papers will be evaluated based on their novelty, technical quality, and potential impact. Experimental methods and results are expected to be reproducible, and authors are strongly encouraged to make code and data available.
We also encourage submissions of proof-of-concept research that puts forward novel ideas and demonstrates potential, as well as in-depth analyses of existing methods and concepts. Key dates: The planned dates are as follows: - Abstract deadline: March 01, 2022, 11:59 pm (Anywhere on Earth, AoE) - Paper submission deadline: March 04, 2022, 11:59 pm (AoE) - Reviews released: April 08, 2022 - Author rebuttals due: April 15, 2022, 11:59 pm (AoE) - Notifications: May 06, 2022 - Resubmissions*: July 06, 2022, 11:59 pm (AoE) - Decisions on resubmissions*: August 06, 2022 - Tentative conference dates: August 29-31, 2022 *For more information, see the review process section. Review Process Papers will be selected via a rigorous double-blind peer-review process. All accepted papers will be presented at the Conference as contributed talks or as posters and will be published in the Proceedings. The review process will be hosted on OpenReview, with submissions and reviews being private until a decision is made. Reviews and discussions of the accepted papers will be made available after acceptance. In addition to accept/reject, a paper can be marked for revision and resubmission. In this case, the authors have a fixed amount of time to update the work and resubmit to get a final accept/reject decision. If a paper marked for resubmission is rejected after resubmission, the work will automatically be accepted to a non-archival Workshop track of CoLLAs. The authors will still be able to present a poster on their work as part of this track. This system aims to produce fairer treatment of borderline papers and to save the time spent going through the entire reviewing process from scratch when resubmitting to a future edition of the conference or a different relevant conference. During the rebuttal period, authors are allowed to update their paper once. However, reviewers are not required to read the new version.
Any paper that receives a substantial update during this period will automatically go to the resubmission stage for a detailed re-review. Formatting and Supplementary Material Submissions should have a recommended length of 9 single-column CoLLAs-formatted pages, plus unlimited pages for references and appendices. We enforce a maximum length of 10 pages, where the 10th page can be used if it helps with the formatting of the paper. The appendices should be within the same PDF file as the main publication; however, an additional zip file can be submitted that may include multiple files of different formats (e.g. videos or code). Note that reviewers are under no obligation to examine the appendix and the supplementary material. Please format the paper using the official LaTeX style files that can be found here: https://www.overleaf.com/read/grrjqdpztnpb. We do not support submissions in formats other than LaTeX. Please do not modify the layout given by the style file. For any questions, you can reach us at . Submissions will be through OpenReview and will open approximately 4 weeks before the abstract submission deadline. Complete CFP can be found here: https://lifelong-ml.cc/call Regards, Doina Precup, Sarath Chandar, Razvan Pascanu CoLLAs 2022 General and Program Chairs https://lifelong-ml.cc/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ASIM.ROY at asu.edu Mon Jan 24 20:03:30 2022 From: ASIM.ROY at asu.edu (Asim Roy) Date: Tue, 25 Jan 2022 01:03:30 +0000 Subject: Connectionists: Call for Papers - Cognitive Computation Special Issue - "What AI and Neuroscience Can Learn from Each Other: Open Problems in Models and Theories" - Submission Deadline - Feb 15, 2022 Message-ID: Dear Colleagues, This Special Issue is about stepping back and taking a look at where we are in terms of understanding the brain. We want to publish short position papers, maximum 10 pages long. We are aiming for quick reviews, about two weeks.
Further details are provided below. It took a while to get the website set up. So the new deadline is February 15, 2022. Asim Roy Professor, Information Systems Arizona State University Asim Roy | iSearch (asu.edu) Lifeboat Foundation Bios: Professor Asim Roy ------------------------------------------------------------------------------------------------------------------------------ Special Issue Call for Papers: What AI and Neuroscience Can Learn from Each Other: Open Problems in Models and Theories Guest Editors: * (Lead) Asim Roy, Arizona State University, USA, E-mail: ASIM.ROY at asu.edu * Claudius Gros, Institute for Theoretical Physics, Goethe University Frankfurt, Germany, E-mail: gros at itp.uni-frankfurt.de * Juyang Weng, Brain Mind Institute, USA, Email: weng at msu.edu * Jean-Philippe Thivierge, University of Ottawa, Canada, E-mail: Jean-Philippe.Thivierge at uottawa.ca * Tsvi Achler, Optimizing Mind, Email: achler at optimizingmind.com * Ali A. Minai, University of Cincinnati, USA, E-mail: Ali.Minai at uc.edu Aim and Motivation: Arguments about the brain and how it works are endless. Despite some conflicting conjectures and theories that have existed for decades without resolution, we have made significant progress in creating brain-like computational systems to solve some important engineering problems. It would be a good idea to step back and examine where we are in terms of our understanding of the brain and potential problems with the brain-like AI systems that have been successful so far. For this special issue of Cognitive Computation, we invite thoughtful articles on some of the issues that we have failed to address and comprehend in our journey so far in understanding the brain. We aim for rapid peer-reviews by experts (about two weeks) for all selected submissions and plan to publish the special issue papers on a rolling basis from early 2022. 
Topics: We plan to publish a collection of short articles on a variety of topics, which could include asking new questions, proposing new theories, resolving conflicts between existing theories, and proposing new types of brain-like computational models. Deadlines: SI submissions deadline: 15 February 2022 First notification of acceptance: 11 March 2022 Submission of revised papers: 10 April 2022 Final notification to authors: 30 April 2022 Publication of SI: Rolling basis (2022) Submission Instructions: Prepare your paper in accordance with the Journal guidelines: www.springer.com/12559. Submit manuscripts at: http://www.editorialmanager.com/cogn/. Select "SI: AI and Neuroscience" for the special issue under "Additional Information." Your paper must contain significant and original work that has not been published or submitted to any other journal. All papers will be reviewed following the standard reviewing procedures of the Journal. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bogdanlapi at gmail.com Tue Jan 25 04:47:09 2022 From: bogdanlapi at gmail.com (Bogdan Ionescu) Date: Tue, 25 Jan 2022 11:47:09 +0200 Subject: Connectionists: Call-for-papers: 30th ACM International Conference on Multimedia 2022 Message-ID: [Apologies for multiple postings] ************************ ACM Multimedia 2022 Lisbon, Portugal, 10-14 October, 2022 https://2022.acmmm.org/ https://2022.acmmm.org/call-for-papers/ ************************ *** Call for papers *** * Paper submission deadline (abstract): 31st March, 2022 * Paper submission deadline (.pdf submission): 7th April, 2022 * Acceptance notification: 29th June, 2022 * Camera-ready submission: 13th July, 2022 ACM Multimedia 2022 calls for research papers presenting novel theoretical and algorithmic solutions to address problems across multimedia and related application fields.
The conference also calls for papers presenting novel, thought-provoking ideas and promising (preliminary) results in realizing these ideas. Topics of interest include but are not limited to four major themes of multimedia: Engagement, Experience, Systems and Understanding. > Theme: Engaging Users with Multimedia The engagement of multimedia with society as a whole requires research that addresses how multimedia can be used to connect people with multimedia artifacts that meet their needs in a variety of contexts. The topic areas under this theme include: - Emotional and Social Signals - Multimedia Search and Recommendation - Summarization, Analytics, and Storytelling > Theme: Experience One of the core tenets of our research community is that multimedia contributes to the user experience in a rich and meaningful manner. The topics organized under this theme are concerned with innovative uses of multimedia to enhance the user experience, how this experience is manifested in specific domains, and metrics for qualitatively and quantitatively measuring that experience in useful and meaningful ways. Specific topic areas addressed this year include: - Interactions and Quality of Experience - Art and Culture - Multimedia Applications > Theme: Multimedia Systems Research in multimedia systems is generally concerned with understanding fundamental tradeoffs between competing resource requirements, developing practical techniques and heuristics for realizing complex optimization and allocation strategies, and demonstrating innovative mechanisms and frameworks for building large-scale multimedia applications. Within this theme, we have focused on three target topic areas: - Systems and Middleware - Transport and Delivery - Data Systems Management and Indexing > Theme: Understanding Multimedia Content Multimedia data types by their very nature are complex and often involve intertwined instances of different kinds of information.
We can leverage this multi-modal perspective in order to extract meaning and understanding of the world, often with surprising results. Specific topics addressed this year include: - Multimodal Fusion and Embeddings - Vision and Language - Media Interpretation *** Submission Instructions *** All submissions will be handled electronically via the CMT/OpenReview conference submission website. The abstract submission deadline is 31st March, 2022 (23:59 AoE). The paper submission deadline is 07th April, 2022 (23:59 AoE). *** Important Dates *** Please note: The submission deadline is at 23:59 of the stated deadline date Anywhere on Earth. All submission deadlines are firm. Paper submission deadline (Abstract): 31st March, 2022 Paper submission deadline (.pdf submission): 7th April, 2022 Supplementary material firm deadline: 14th April, 2022 Regular Paper Reviews to Author: 26th May, 2022 Regular Paper Rebuttal Deadline: 7th June, 2022 Notification: 29th June, 2022 Camera-ready Submission: 13th July, 2022 *** Contacts *** For any questions, please contact the Technical Program Chairs : Xavier Alameda-Pineda, Inria, Grenoble, France Qin Jin, Renmin University of China, China Vincent Oria, New Jersey Institute of Technology, USA Laura Toni, UCL, UK On behalf of the Publicity Chairs, Bogdan Ionescu https://www.aimultimedialab.ro/ From bogdanlapi at gmail.com Tue Jan 25 04:47:22 2022 From: bogdanlapi at gmail.com (Bogdan Ionescu) Date: Tue, 25 Jan 2022 11:47:22 +0200 Subject: Connectionists: Call-for-grand-challenge: 30th ACM International Conference on Multimedia 2022 Message-ID: [Apologies for multiple postings] ************************ ACM Multimedia 2022 Lisbon, Portugal, 10-14 October, 2022 https://2022.acmmm.org/ https://2022.acmmm.org/call-for-grand-challenge/ ************************ *** Call for Grand Challenge *** * Grand Challenge submission: 12th February, 2022 * Acceptance notification: 1st March, 2022 ACM Multimedia is the premier international conference in 
the area of multimedia within the field of computer science. Multimedia research focuses on the integration of the multiple perspectives offered by different digital modalities, including images, text, video, music, sensor data and spoken audio. ACM Multimedia is calling for proposals for Grand Challenges in 2022. Proposers with an innovative idea for a Multimedia Grand Challenge should gather an organizational team with the capacity to carry out the organization of a challenge, and submit a proposal according to the instructions below. In 2022, we are emphasizing the continuity of Grand Challenges, which is important in order to support sustained and substantial progress in the state of the art. We ask organizer teams who would like to propose Grand Challenges to express a commitment to organize their Grand Challenge multiple years in a row. The Multimedia Grand Challenge was first presented as part of ACM Multimedia 2009 and has established itself as a prestigious competition in the multimedia community. The purpose of the Multimedia Grand Challenge is to engage the multimedia research community by establishing well-defined and objectively judged challenge problems intended to exercise state-of-the-art methods and inspire future research directions. The key criteria for Grand Challenges are that they should be useful and interesting, and that their solution should involve a series of research tasks over a long period of time, with pointers towards longer-term research. *** Proposal Format *** A Multimedia Grand Challenge proposal should include: - A brief description explaining why the challenge problem is important and relevant to the multimedia research community, industry, and society over the next 3-5 years or a longer horizon. - A description of a specific set of research tasks or sub-tasks to be carried out towards tackling the challenge problem in the long run.
- An outline of current state-of-the-art techniques and why this Grand Challenge would help accelerate research in this important area. - Links to sites containing relevant datasets to be used for objective training and evaluation of the grand challenge tasks. Full and appropriate documentation on the datasets should be provided or made accessible. - A description of rigorously defined objective criteria and/or procedures on how the submissions will be evaluated or judged. - A commitment to publish and maintain a website related to their specific Grand Challenge containing the information, datasets and tasks for the Grand Challenge for at least the next 3 years. - A commitment to work with ACM Multimedia Conference organizers to publicize the Grand Challenge tasks to researchers for participation. - Contact information of at least two organizers who will be responsible for organizing, publicizing, reviewing and judging the Grand Challenge submissions as described in the proposal. - Note that although we ask organizers to express a multi-year commitment to their Grand Challenge, the Challenge will still undergo a new review each year. Priority will be given to Grand Challenges which have been successful in the past and are clearly contributing to continuity. *** Submission *** Please send your MM Grand Challenge proposals to by the deadline listed below. *** Important Dates *** Please note: The submission deadline is at 11:59 p.m. of the stated deadline date Anywhere on Earth.
Submission of Grand Challenge Proposals: 12th February, 2022 Notification of Acceptance: 1st March, 2022 Web Site and Call for Participation Ready: 15th March, 2022 MM Grand Challenge Camera-Ready papers due: 17th July, 2022 *** Contacts *** For questions regarding the Grand Challenges you can email the Multimedia Grand Challenge Chairs: Miriam Redi, Wikimedia Foundation Georges Quénot, LIG-CNRS, France On behalf of the Publicity Chairs, Bogdan Ionescu https://www.aimultimedialab.ro/ From bogdanlapi at gmail.com Tue Jan 25 04:47:39 2022 From: bogdanlapi at gmail.com (Bogdan Ionescu) Date: Tue, 25 Jan 2022 11:47:39 +0200 Subject: Connectionists: Call-for-workshops: 30th ACM International Conference on Multimedia 2022 Message-ID: [Apologies for multiple postings] ************************ ACM Multimedia 2022 Lisbon, Portugal, 10-14 October, 2022 https://2022.acmmm.org/ https://2022.acmmm.org/call-for-workshops/ ************************ *** Call for workshops *** * Workshop proposal submission: 1st March, 2022 * Acceptance notification: 20th March, 2022 We are soliciting proposals for workshops to be held in conjunction with ACM Multimedia 2022. The purpose of the workshops is to provide a comprehensive forum on current and emerging topics that will not be fully explored during the main conference and to encourage in-depth discussion of technical and application issues. *** Proposal Format *** Each workshop proposal (maximum 4 pages, in PDF format) must include: 1. Title of the workshop. 2. Workshop organizers (name, affiliation and short biography). 3. Scope and topics of the workshop. 4. Rationale: - Why the workshop is related to ACM Multimedia 2022. - Why the topic is important. - Why the workshop may attract a significant number of attendees. - A brief biography for each organizer and panelist. 5. Workshop details: - A draft call for papers (including organizers, program committee and steering committee if any, as well as tentative dates).
Organizers are expected to be fully committed and physically present at the workshop. - Workshop tentative schedule (number of expected papers, number of expected attendees, duration full/half day, format talks/posters, etc.). We encourage events that demonstrate the interest of the community in the proposed topic and guarantee the commitment of the organizers. - Names of potential participants and invited speakers (if any). 6. Workshop history: If there are past workshops, the history of the workshop. *** Important Dates *** The submission deadline is at 11:59 p.m. of the stated deadline date Anywhere on Earth. Workshop proposal submission: 1st March, 2022 Decision notification: 20th March, 2022 Workshop paper notification: 29th July, 2022 Workshop paper camera-ready: 21st August, 2022 *** Proposal Submission *** Please send your Workshop proposals to by the deadline listed below. *** Contacts *** For questions regarding the submission you can email the workshop chairs: Teresa Chambel, Universidade de Lisboa, Portugal Riccardo Leonardi, University of Brescia, Italy Richang Hong, Hefei University of Technology, China Liqiang Nie, Shandong University, China On behalf of the Publicity Chairs, Bogdan Ionescu https://www.aimultimedialab.ro/ From marcella.cornia at unimore.it Tue Jan 25 03:30:48 2022 From: marcella.cornia at unimore.it (Marcella Cornia) Date: Tue, 25 Jan 2022 09:30:48 +0100 Subject: Connectionists: [CFP] Research Topic "Attentive Models in Vision" - Computer Vision Section of Frontiers in Computer Science Message-ID: ******************************** Research Topic "Attentive Models in Vision" Computer Vision Section | Frontiers in Computer Science https://www.frontiersin.org/research-topics/23980/attentive-models-in-vision ******************************** === SUBMISSIONS ARE OPEN!!!
==== Apologies for multiple postings Please distribute this call to interested parties AIMS AND SCOPE =============== The modeling and replication of visual attention mechanisms have been extensively studied for more than 80 years by neuroscientists and more recently by computer vision researchers, contributing to the formation of various subproblems in the field. Among them, saliency estimation and human-eye fixation prediction have demonstrated their importance in improving many vision-based inference mechanisms: image segmentation and annotation, image and video captioning, and autonomous driving are some examples. Nowadays, with the surge of attentive and Transformer-based models, the modeling of attention has grown significantly and is a pillar of cutting-edge research in computer vision, multimedia, and natural language processing. In this context, current research efforts are also focused on new architectures which are candidates to replace the convolutional operator, as evidenced by recent works that perform image classification using attention-based architectures or that combine vision with other modalities, such as language, audio, and speech, by leveraging fully-attentive solutions. Given the fundamental role of attention in the field of computer vision, the goal of this Research Topic is to contribute to the growth and development of attention-based solutions focusing on both traditional approaches and fully-attentive models. Moreover, the study of human attention has inspired models that leverage human gaze data to supervise machine attention. This Research Topic aims to present innovative research that relates to the study of human attention and to the usage of attention mechanisms in the development of deep learning architectures and enhancing model explainability.
Research papers employing traditional attentive operations or employing novel Transformer-based architectures are encouraged, as well as works that apply attentive models to integrate vision and other modalities (e.g., language, audio, speech, etc.). We also welcome submissions on novel algorithms, datasets, literature reviews, and other innovations related to the scope of this Research Topic. TOPICS ======= The topics of interest include but are not limited to: - Saliency prediction and salient object detection - Applications of human attention in Vision - Visualization of attentive maps for Explainability of Deep Networks - Use of Explainable-AI techniques to improve any aspect of the network (generalization, robustness, and fairness) - Applications of attentive operators in the design of Deep Networks - Transformer-based or attention-based models for Computer Vision tasks (e.g. classification, detection, segmentation) - Transformer-based or attention-based models to combine Vision with other modalities (e.g. language, audio, speech) - Transformer-based or attention-based models for Vision-and-Language tasks (e.g., image and video captioning, visual question answering, cross-modal retrieval, textual grounding / referring expression localization, vision-and-language navigation) - Computational issues in attentive models - Applications of attentive models (e.g., robotics and embodied AI, medical imaging, document analysis, cultural heritage) IMPORTANT DATES ================= - Paper Submission Deadline: January 31st, 2022 Research topic page: https://www.frontiersin.org/research-topics/23980/attentive-models-in-vision Click here to participate: https://www.frontiersin.org/research-topics/23980/attentive-models-in-vision/participate-in-open-access-research-topic By expressing your interest in contributing to this collection, you will be registered as a contributing author and will receive regular updates regarding this Research Topic. 
SUBMISSION GUIDELINES ====================== All submitted articles are peer reviewed. All published articles are subject to article processing charges (APCs). Frontiers works with leading institutions to ensure researchers are supported when publishing open access. See if your institution has a payment plan with Frontiers or apply to the Frontiers Fee Support program. If you wish to know more about the Frontiers publishing and contribution process, please head to the following sections: - Collaborative peer review - Author guidelines - Open Access, publishing fees, and waivers TOPIC EDITORS ============== - Marcella Cornia, University of Modena and Reggio Emilia (Italy) - Luowei Zhou, Microsoft (United States) - Ramprasaath R. Selvaraju, Salesforce Research (United States) - Prof. Xavier Giró-i-Nieto, Universitat Politècnica de Catalunya (Spain) - Prof. Jason Corso, Stevens Institute of Technology (United States) -- *Marcella Cornia*, PhD AImageLab, Dipartimento di Ingegneria "Enzo Ferrari" Università degli Studi di Modena e Reggio Emilia e-mail: marcella.cornia at unimore.it phone: +39 059 2058790 -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver at roesler.co.uk Tue Jan 25 04:24:39 2022 From: oliver at roesler.co.uk (Oliver Roesler) Date: Tue, 25 Jan 2022 09:24:39 +0000 Subject: Connectionists: Deadline Extension - CFP Special Issue on Socially Acceptable Robot Behavior: Approaches for Learning, Adaptation and Evaluation Message-ID: *DEADLINE EXTENSION* **Apologies for cross-posting** We are happy to announce that the deadline for submissions has been extended until _*March 31*_. *CALL FOR PAPERS* *Special Issue* on *Socially Acceptable Robot Behavior: Approaches for Learning, Adaptation and Evaluation* in Interaction Studies *I. Aim and Scope* A key factor for the acceptance of robots as regular partners in human-centered environments is the appropriateness and predictability of their behavior.
Human-human interactions are governed by customary rules that define how people should behave in different situations, thereby shaping their expectations. Socially compliant behavior is usually rewarded by group acceptance, while non-compliant behavior might have consequences, including isolation from a social group. Enabling robots to understand human social norms improves the naturalness and effectiveness of human-robot interaction and collaboration. Since social norms can differ greatly between different cultures and social groups, it is essential that robots be able to learn and adapt their behavior based on feedback and observations from the environment. This special issue in Interaction Studies aims to attract the latest research on learning, producing, and evaluating human-aware robot behavior, following the recent RO-MAN 2021 Workshop on Robot Behavior Adaptation to Human Social Norms (TSAR) in providing a venue to discuss the limitations of current approaches and future directions towards intelligent human-aware robot behaviors. *II. Submission* 1. Before submitting, please check the official journal guidelines. 2. For paper submission, please use the online submission system. 3. After logging into the submission system, please click on "Submit a manuscript" and select "Original article". 4. Please ensure that you select "Special Issue: Socially Acceptable Robot Behavior" under "General information".
The primary list of topics includes (but is not limited to): * Human-human vs human-robot social norms * Influence of cultural and social background on robot behavior perception * Learning of socially accepted behavior * Behavior adaptation based on social feedback * Transfer learning of social norms experience * The role of robot appearance on applied social norms * Perception of socially normative robot behavior * Human-aware collaboration and navigation * Social norms and trust in human-robot interaction * Representation and modeling techniques for social norms * Metrics and evaluation criteria for socially compliant robot behavior *III. Timeline* 1. Deadline for paper submission: *March 31, 2022* 2. First notification for authors: *June 15, 2022* 3. Deadline for revised paper submission: *July 31, 2022* 4. Final notification for authors: *September 15, 2022* 5. Deadline for submission of camera-ready manuscripts: *October 15, 2022* Please note that these deadlines are only indicative and that all submitted papers will be reviewed as soon as they are received. *IV. Guest Editors* 1. *Oliver Roesler* - Vrije Universiteit Brussel - Belgium 2. *Elahe Bagheri* - Vrije Universiteit Brussel - Belgium 3. *Amir Aly* - University of Plymouth - UK 4. *Silvia Rossi* - University of Naples Federico II - Italy 5. *Rachid Alami* - CNRS-LAAS - France -------------- next part -------------- An HTML attachment was scrubbed...
URL: From erik at oist.jp Tue Jan 25 05:31:11 2022 From: erik at oist.jp (Erik De Schutter) Date: Tue, 25 Jan 2022 10:31:11 +0000 Subject: Connectionists: Announcing Okinawa/OIST Computational Neuroscience Course 2022 References: <367D1A2B-0339-4E63-9148-A6F5A9D25207@oist.jp> Message-ID: OKINAWA/OIST COMPUTATIONAL NEUROSCIENCE COURSE 2022 Methods, Neurons, Networks and Behaviors June 13 to June 29, 2022 Okinawa Institute of Science and Technology Graduate University, Japan https://groups.oist.jp/ocnc After two consecutive cancelations due to COVID-19, OCNC 2022 will take place on June 13-29, preceding Neuro2022 (https://neuro2022.jnss.org) in Okinawa. Depending on the immigration situation in June, the course will be either purely on-site or hybrid: a mixture of on-site and remote. The aim of the Okinawa/OIST Computational Neuroscience Course is to provide opportunities for young researchers with theoretical backgrounds to learn the latest advances in neuroscience, and for those with experimental backgrounds to have hands-on experience in computational modeling. We invite graduate students and postgraduate researchers to participate in the course, held from June 13 through June 29, 2022 at an oceanfront seminar house of the Okinawa Institute of Science and Technology Graduate University. Applications are through the course web page (https://groups.oist.jp/ocnc) only: January 26 - February 28, 2022. Applicants will receive confirmation of acceptance in April. The 17th OCNC will be a shorter course than in the past: a two-week course covering single neurons, networks, and behaviors with time for student projects. Teaching will focus on methods, with hands-on tutorials during the afternoons and lectures by international experts. The course has a strong hands-on component based on student-proposed modeling or data analysis projects, which are further refined with the help of a dedicated tutor.
Applicants are required to propose their project at the time of application. However, in the case of a hybrid-format course, only on-site students will receive support for student projects. There is no tuition fee. The sponsor will provide lodging and meals during the course and may provide partial travel support. We hope that this course will be a good opportunity for theoretical and experimental neuroscientists to meet each other and to explore the attractive nature and culture of Okinawa, the southernmost island prefecture of Japan. Invited faculty: - Michael Berry II (Princeton University, USA) - Anne Churchland (Cold Spring Harbor Labs, USA) - Erik De Schutter (OIST) - Kenji Doya (OIST) - Gaute Einevoll (Norwegian University of Life Sciences; online) - Tomoki Fukai (OIST) - Boris Gutkin (École Normale Supérieure, Paris, France) - Yukiyasu Kamitani (Kyoto University, Japan) - Bernd Kuhn (OIST) - Sang Wan Lee (KAIST, South Korea) - Devika Narain (Erasmus Medical Center, Rotterdam, Netherlands) - Viola Priesemann (MPG Göttingen, Germany; online) - Ivan Soltesz (Stanford University, USA) - Greg Stuart (Australian National University, Australia) - Greg Stephens (OIST) - Saori Tanaka (ATR, Japan) - Marylka Yoe Uusisaari (OIST) -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3766 bytes Desc: not available URL: From m.okun at leicester.ac.uk Tue Jan 25 07:11:28 2022 From: m.okun at leicester.ac.uk (Okun, Michael (Dr.)) Date: Tue, 25 Jan 2022 12:11:28 +0000 Subject: Connectionists: Special Issue on Computational Neuroscience in the Journal of Physiology Message-ID: Dear Colleagues, We would like to announce a special issue of the Journal of Physiology focused on computational neuroscience, to be published in commemoration of the 70th anniversary of the publication of the Hodgkin-Huxley model and to celebrate the 600th volume of the journal.
The special issue will include a number of invited reviews covering synaptic and network models, with further information available in the issue's advert flyer: https://physoc.onlinelibrary.wiley.com/pb-assets/assets/14697793/Affiche_Computational_Neuroscience_v7-1642754941.jpeg To complement the above reviews, the journal seeks submissions of original research articles in all areas of theoretical and computational neuroscience. The submission deadline is 15th of April 2022. Additional details on the special issue can be found in the official announcement on the journal's webpage: https://physoc.onlinelibrary.wiley.com/hub/journal/14697793/resources/call-for-papers?#COMP Thank you, Katalin Toth and Michael Okun From beierh at gmail.com Tue Jan 25 15:38:23 2022 From: beierh at gmail.com (Ulrik Beierholm) Date: Tue, 25 Jan 2022 20:38:23 +0000 Subject: Connectionists: Assistant Professor in Cognitive Neuroscience (PSYC22-7) (ID: 21001678) In-Reply-To: References: Message-ID: Dear colleagues Please see the job advert below for a faculty position at Durham University. Note that we would be especially interested in anyone able to use multiple techniques, including computational modelling. I am happy to chat informally about this. Regards, Ulrik Beierholm, Durham Univ, UK ------ The Dept of Psychology at Durham University seeks to appoint a talented individual to the role of Assistant Professor. We welcome applications from those with research and teaching interests in the broad field of Cognitive Neuroscience and we are particularly eager to hear from applicants with a focus on perception, action, multisensory integration, learning and memory, or clinically relevant research areas, and able to use multiple techniques (e.g. psychophysics, fMRI, EEG, TMS or computational modelling). 
We welcome applicants whose work would extend our Cognitive Neuroscience research and MSc teaching in novel directions, as well as those whose research can forge new links between Cognitive Neuroscience and our other research groups, Developmental Science and Quantitative Social Psychology. This post offers an exciting opportunity to make a major contribution to the development of internationally excellent research and teaching while allowing you unrivalled opportunities to progress and embed your career in an exciting and progressive institution. For more information about Cognitive Neuroscience research in Durham, please visit our Department pages at https://www.dur.ac.uk/psychology/ Open Date *17 January 2022* Closing Date *14 February 2022* at midnight Web link to this vacancy: Job Description - Assistant Professor in Cognitive Neuroscience (PSYC22-7) (21001678) (taleo.net) If you would like to make an informal enquiry about this position, please contact Prof Daniel T. Smith (Daniel.smith2 at durham.ac.uk) ------------------------------------------ Ulrik Beierholm, PhD (he/him) Associate Professor Department of Psychology, Durham University http://beierholm.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.miasnikof at utoronto.ca Tue Jan 25 19:23:46 2022 From: p.miasnikof at utoronto.ca (P. Miasnikof) Date: Tue, 25 Jan 2022 19:23:46 -0500 Subject: Connectionists: CORS/INFORMS 2022 Data Science for Optimization Message-ID: <731d5136-792c-03ba-7cbc-dfc9c2444339@utoronto.ca> This year's CORS (Canadian Operational Research Society) and INFORMS International Conference will be held jointly June 5-8 in Vancouver, British Columbia, Canada. The conference will include a session devoted to work in optimization for data science. If you are interested in presenting, please send your abstract before Feb 14 to the session chair: p.miasnikof at utoronto.ca Thanks! Hoping to see you in Vancouver!
:-) Best, Pierre PS Here's the link to the conference website: http://meetings.informs.org/wordpress/2022international/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From juergen at idsia.ch Tue Jan 25 12:03:24 2022 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Tue, 25 Jan 2022 17:03:24 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: <3155202C-080E-4BE7-84B6-A567E306AC1D@supsi.ch> References: <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> <3155202C-080E-4BE7-84B6-A567E306AC1D@supsi.ch> Message-ID: <58AC5011-BF6A-453F-9A5E-FAE0F63E2B02@supsi.ch> PS: Terry, you also wrote: "Our precious time is better spent moving the field forward." However, it seems like in recent years much of your own precious time has gone to promulgating a revisionist history of deep learning (and writing the corresponding "amicus curiae" letters to award committees). For a recent example, your 2020 deep learning survey in PNAS [S20] claims that your 1985 Boltzmann machine [BM] was the first NN to learn internal representations. This paper [BM] neither cited the internal representations learnt by Ivakhnenko & Lapa's deep nets in 1965 [DEEP1-2] nor those learnt by Amari's stochastic gradient descent for MLPs in 1967-1968 [GD1-2]. Nor did your recent survey [S20] attempt to correct this as good science should strive to do. On the other hand, it seems you celebrated your co-author's birthday in a special session while you were head of NeurIPS, instead of correcting these inaccuracies and celebrating the true pioneers of deep learning, such as Ivakhnenko and Amari.
Even your recent interview https://blog.paperspace.com/terry-sejnowski-boltzmann-machines/ claims: "Our goal was to try to take a network with multiple layers - an input layer, an output layer and layers in between - and make it learn. It was generally thought, because of early work that was done in AI in the 60s, that no one would ever find such a learning algorithm because it was just too mathematically difficult." You wrote this although you knew exactly that such learning algorithms were first created in the 1960s, and that they worked. You are a well-known scientist, head of NeurIPS, and chief editor of a major journal. You must correct this. We must all be better than this as scientists. We owe it to both the past, present, and future scientists as well as those we ultimately serve. The last paragraph of my report https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html quotes Elvis Presley: "Truth is like the sun. You can shut it out for a time, but it ain't goin' away." I wonder how the future will reflect on the choices we make now. Jürgen > On 3 Jan 2022, at 11:38, Schmidhuber Juergen wrote: > > Terry, please don't throw smoke candles like that! > > This is not about basic math such as Calculus (actually first published by Leibniz; later Newton was also credited for his unpublished work; Archimedes already had special cases thereof over 2000 years ago; the Indian Kerala school made essential contributions around 1400). In fact, my report addresses such smoke candles in Sec. XII: "Some claim that 'backpropagation' is just the chain rule of Leibniz (1676) & L'Hopital (1696). No, it is the efficient way of applying the chain rule to big networks with differentiable nodes (there are also many inefficient ways of doing this). It was not published until 1970 [BP1]." > > You write: "All these threads will be sorted out by historians one hundred years from now."
To answer that, let me just cut and paste the last sentence of my conclusions: "However, today's scientists won't have to wait for AI historians to establish proper credit assignment. It is easy enough to do the right thing right now." > > You write: "let us be good role models and mentors" to the new generation. Then please do what's right! Your recent survey [S20] does not help. It's mentioned in my report as follows: "ACM seems to be influenced by a misleading 'history of deep learning' propagated by LBH & co-authors, e.g., Sejnowski [S20] (see Sec. XIII). It goes more or less like this: 'In 1969, Minsky & Papert [M69] showed that shallow NNs without hidden layers are very limited and the field was abandoned until a new generation of neural network researchers took a fresh look at the problem in the 1980s [S20].' However, as mentioned above, the 1969 book [M69] addressed a 'problem' of Gauss & Legendre's shallow learning (~1800)[DL1-2] that had already been solved 4 years prior by Ivakhnenko & Lapa's popular deep learning method [DEEP1-2][DL2] (and then also by Amari's SGD for MLPs [GD1-2]). Minsky was apparently unaware of this and failed to correct it later [HIN](Sec. I).... deep learning research was alive and kicking also in the 1970s, especially outside of the Anglosphere." > > Just follow ACM's Code of Ethics and Professional Conduct [ACM18] which states: "Computing professionals should therefore credit the creators of ideas, inventions, work, and artifacts, and respect copyrights, patents, trade secrets, license agreements, and other methods of protecting authors' works." No need to wait for 100 years. > > Jürgen > > > > > >> On 2 Jan 2022, at 23:29, Terry Sejnowski wrote: >> >> We would be remiss not to acknowledge that backprop would not be possible without the calculus, >> so Isaac Newton should also have been given credit, at least as much credit as Gauss. >> >> All these threads will be sorted out by historians one hundred years from now.
>> Our precious time is better spent moving the field forward. There is much more to discover. >> >> A new generation with better computational and mathematical tools than we had back >> in the last century has joined us, so let us be good role models and mentors to them. >> >> Terry >> >> ----- >> >> On 1/2/2022 5:43 AM, Schmidhuber Juergen wrote: >>> Asim wrote: "In fairness to Geoffrey Hinton, he did acknowledge the work of Amari in a debate about connectionism at the ICNN'97 .... He literally said 'Amari invented back propagation'..." when he sat next to Amari and Werbos. Later, however, he failed to cite Amari's stochastic gradient descent (SGD) for multilayer NNs (1967-68) [GD1-2a] in his 2015 survey [DL3], his 2021 ACM lecture [DL3a], and other surveys. Furthermore, SGD [STO51-52] (Robbins, Monro, Kiefer, Wolfowitz, 1951-52) is not even backprop. Backprop is just a particularly efficient way of computing gradients in differentiable networks, known as the reverse mode of automatic differentiation, due to Linnainmaa (1970) [BP1] (see also Kelley's precursor of 1960 [BPa]). Hinton did not cite these papers either, and in 2019 embarrassingly did not hesitate to accept an award for having "created ... the backpropagation algorithm" [HIN]. All references and more on this can be found in the report, especially in Sec. XII. >>> >>> The deontology of science requires: If one "re-invents" something that was already known, and only becomes aware of it later, one must at least clarify it later [DLC], and correctly give credit in all follow-up papers and presentations. Also, ACM's Code of Ethics and Professional Conduct [ACM18] states: "Computing professionals should therefore credit the creators of ideas, inventions, work, and artifacts, and respect copyrights, patents, trade secrets, license agreements, and other methods of protecting authors' works." LBH didn't. >>> >>> Steve still doesn't believe that linear regression of 200 years ago is equivalent to linear NNs.
In a mature field such as math we would not have such a discussion. The math is clear. And even today, many students are taught NNs like this: let's start with a linear single-layer NN (activation = sum of weighted inputs). Now minimize mean squared error on the training set. That's good old linear regression (method of least squares). Now let's introduce multiple layers and nonlinear but differentiable activation functions, and derive backprop for deeper nets in 1960-70 style (still used today, half a century later). >>> >>> Sure, an important new variation of the 1950s (emphasized by Steve) was to transform linear NNs into binary classifiers with threshold functions. Nevertheless, the first adaptive NNs (still widely used today) are 1.5 centuries older except for the name. >>> >>> Happy New Year! >>> >>> Jürgen >>> >>> >>>> On 2 Jan 2022, at 03:43, Asim Roy wrote: >>>> >>>> And, by the way, Paul Werbos was also there at the same debate. And so was Teuvo Kohonen. >>>> >>>> Asim >>>> >>>> -----Original Message----- >>>> From: Asim Roy >>>> Sent: Saturday, January 1, 2022 3:19 PM >>>> To: Schmidhuber Juergen ; connectionists at cs.cmu.edu >>>> Subject: RE: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. >>>> >>>> In fairness to Geoffrey Hinton, he did acknowledge the work of Amari in a debate about connectionism at the ICNN'97 (International Conference on Neural Networks) in Houston. He literally said "Amari invented back propagation" and Amari was sitting next to him. I still have a recording of that debate. >>>> >>>> Asim Roy >>>> Professor, Information Systems >>>> Arizona State University >>>> https://isearch.asu.edu/profile/9973 >>>> https://lifeboat.com/ex/bios.asim.roy >>> >>> On 2 Jan 2022, at 02:31, Stephen José Hanson wrote: >>> >>> Juergen: Happy New Year! >>> >>> "are not quite the same"..
>>> I understand that it's expedient sometimes to use linear regression to approximate the Perceptron (I've had other connectionist friends tell me the same thing), which has its own incremental update rule..that is doing <0,1> classification. So I guess if you don't like the analogy to logistic regression.. maybe Fisher's LDA? This whole thing still doesn't scan for me. >>> >>> So, again the point here is context. Do you really believe that Frank Rosenblatt didn't reference Gauss/Legendre/Laplace because it slipped his mind? He certainly understood modern statistics (of the 1940s and 1950s). >>> >>> Certainly you'd agree that FR could have referenced linear regression as a precursor, or "pretty similar" to what he was working on; it seems disingenuous to imply he was plagiarizing Gauss et al.--right? Why would he? >>> >>> Finally then, in any historical reconstruction I can think of, it just doesn't make sense. Sorry. >>> >>> Steve >>> >>> >>>> -----Original Message----- >>>> From: Connectionists On Behalf Of Schmidhuber Juergen >>>> Sent: Friday, December 31, 2021 11:00 AM >>>> To: connectionists at cs.cmu.edu >>>> Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. >>>> >>>> Sure, Steve, perceptron/Adaline/other similar methods of the 1950s/60s are not quite the same, but the obvious origin and ancestor of all those single-layer "shallow learning" architectures/methods is indeed linear regression; today's simplest NNs minimizing mean squared error are exactly what they had 2 centuries ago. And the first working deep learning methods of the 1960s did NOT really require "modern" backprop (published in 1970 by Linnainmaa [BP1-5]). For example, Ivakhnenko & Lapa (1965) [DEEP1-2] incrementally trained and pruned their deep networks layer by layer to learn internal representations, using regression and a separate validation set.
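The formal equivalence invoked in the thread (a single-layer linear NN minimizing mean squared error is exactly linear regression) is easy to check numerically. The following sketch is illustrative only; the synthetic data, learning rate, and iteration count are arbitrary choices, not taken from any of the quoted messages:

```python
# Illustrative sketch (not from the thread): a single adaptive layer with
# identity activation, trained by gradient descent on mean squared error,
# recovers the ordinary least-squares solution of Gauss/Legendre.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # inputs: 100 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)   # targets with a little noise

# Closed-form least squares: the ~1800 solution
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Linear NN": weights w, output X @ w, loss = mean((X @ w - y)**2)
w = np.zeros(3)
lr = 0.05
for _ in range(5000):
    grad = (2.0 / len(y)) * X.T @ (X @ w - y)  # gradient of the MSE
    w -= lr * grad

# Gradient descent converges to the same weights as least squares
assert np.allclose(w, w_ls, atol=1e-6)
```

The only difference between the two fits is notation: the regression literature calls the weights beta_i, the NN literature calls them w_i.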
Amari (1967-68)[GD1] used stochastic gradient descent [STO51-52] to learn internal representations WITHOUT "modern" backprop in his multilayer perceptrons. Jürgen >>>> >>>>> On 31 Dec 2021, at 18:24, Stephen José Hanson wrote: >>>>> >>>>> Well the perceptron is closer to logistic regression... but the Heaviside function of course is <0,1> so technically not related to linear regression, which is using covariance to estimate betas... >>>>> >>>>> does that matter? Yes, if you want to be hyper correct--as this appears to be-- Berkson (1944) coined the logit.. as log odds.. for probabilistic classification.. this was formally developed by Cox in the early 60s, so unlikely even in this case to be a precursor to perceptron. >>>>> >>>>> My point was that DL requires both a learning algorithm (BP) and an >>>>> architecture.. which seems to me much more responsible for the success of DL. >>>>> >>>>> S >>>>> >>>>> >>>>> >>>>> On 12/31/21 4:03 AM, Schmidhuber Juergen wrote: >>>>>> Steve, this is not about machine learning in general, just about deep >>>>>> learning vs shallow learning. However, I added the Pandemonium - >>>>>> thanks for that! You ask: how is a linear regressor of 1800 >>>>>> (Gauss/Legendre) related to a linear neural network? It's formally >>>>>> equivalent, of course! (The only difference is that the weights are >>>>>> often called beta_i rather than w_i.) Shallow learning: one adaptive >>>>>> layer. Deep learning: many adaptive layers. Cheers, Jürgen >>>>>> >>>>>> >>>>>> >>>>>> >>>>>>> On 31 Dec 2021, at 00:28, Stephen José Hanson >>>>>>> >>>>>>> wrote: >>>>>>> >>>>>>> Despite the comprehensive feel of this it still appears to me to be too focused on Back-propagation per se..
(except for that pesky Gauss/Legendre ref--which still baffles me at least how this is related to a "neural network"), and at the same time it appears to be missing other more general epoch-conceptually relevant cases, say: >>>>>>> >>>>>>> Oliver Selfridge and his Pandemonium model.. which was a hierarchical feature analysis system.. which certainly was in the air during the Neural network learning heyday... in fact, Minsky cites Selfridge as one of his mentors. >>>>>>> >>>>>>> Arthur Samuel: checker playing system.. which learned an evaluation function from a hierarchical search. >>>>>>> >>>>>>> Rosenblatt's advisor was Egon Brunswik.. who was a gestalt perceptual psychologist who introduced the concept that the world was stochastic and the organism had to adapt to this variance somehow.. he called it "probabilistic functionalism", which brought attention to learning, perception and decision theory, certainly all piece parts of what we call neural networks. >>>>>>> >>>>>>> There are many other such examples that influenced or provided context for the yeasty mix that was the 1940s and 1950s, where Neural Networks first appeared partly due to Pitts and McCulloch, which entangled the human brain with computation and early computers themselves. >>>>>>> >>>>>>> I just don't see this as didactic, in the sense of a conceptual view of the multidimensional history of the field, as opposed to a 1-dimensional exegesis of mathematical threads through various statistical algorithms. >>>>>>> >>>>>>> Steve >>>>>>> >>>>>>> On 12/30/21 1:03 PM, Schmidhuber Juergen wrote: >>>>>>> >>>>>>>> Dear connectionists, >>>>>>>> >>>>>>>> in the wake of massive open online peer review, public comments on the connectionists mailing list [CONN21] and many additional private comments (some by well-known deep learning pioneers) helped to update and improve upon version 1 of the report. The essential statements of the text remain unchanged as their accuracy remains unchallenged.
I'd like to thank everyone from the bottom of my heart for their feedback up until this point and hope everyone will be satisfied with the changes. Here is the revised version 2 with over 300 references: >>>>>>>> >>>>>>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>>>>>> >>>>>>>> In particular, Sec. II has become a brief history of deep learning up to the 1970s: >>>>>>>> >>>>>>>> Some of the most powerful NN architectures (i.e., recurrent NNs) were discussed in 1943 by McCulloch and Pitts [MC43] and formally analyzed in 1956 by Kleene [K56] - the closely related prior work in physics by Lenz, Ising, Kramers, and Wannier dates back to the 1920s [L20][I25][K41][W45]. In 1948, Turing wrote up ideas related to artificial evolution [TUR1] and learning NNs. He failed to formally publish his ideas though, which explains the obscurity of his thoughts here. Minsky's simple neural SNARC computer dates back to 1951. Rosenblatt's perceptron with a single adaptive layer learned in 1958 [R58] (Joseph [R61] mentions an earlier perceptron-like device by Farley & Clark); Widrow & Hoff's similar Adaline learned in 1962 [WID62]. Such single-layer "shallow learning" actually started around 1800 when Gauss & Legendre introduced linear regression and the method of least squares [DL1-2] - a famous early example of pattern recognition and generalization from training data through a parameterized predictor is Gauss' rediscovery of the asteroid Ceres based on previous astronomical observations.
Deeper multilayer perceptrons (MLPs) were discussed by Steinbuch [ST61-95] (1961), Joseph [R61] (1961), and Rosenblatt [R62] (1962), who wrote about "back-propagating errors" in an MLP with a hidden layer [R62], but did not yet have a general deep learning algorithm for deep MLPs (what's now called backpropagation is quite different and was first published by Linnainmaa in 1970 [BP1-BP5][BPA-C]). Successful learning in deep architectures started in 1965 when Ivakhnenko & Lapa published the first general, working learning algorithms for deep MLPs with arbitrarily many hidden layers (already containing the now popular multiplicative gates) [DEEP1-2][DL1-2]. A paper of 1971 [DEEP2] already described a deep learning net with 8 layers, trained by their highly cited method which was still popular in the new millennium [DL2], especially in Eastern Europe, where much of Machine Learning was born [MIR](Sec. 1)[R8]. LBH failed to cite this, just like they failed to cite Amari [GD1], who in 1967 proposed stochastic gradient descent [STO51-52] (SGD) for MLPs and whose implementation [GD2,GD2a] (with Saito) learned internal representations at a time when compute was billions of times more expensive than today (see also Tsypkin's work [GDa-b]). (In 1972, Amari also published what was later sometimes called the Hopfield network or Amari-Hopfield Network [AMH1-3].) Fukushima's now widely used deep convolutional NN architecture was first introduced in the 1970s [CNN1]. >>>>>> >>>>>>>> Jürgen >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> ****************************** >>>>>>>> >>>>>>>> On 27 Oct 2021, at 10:52, Schmidhuber Juergen wrote: >>>>>>>> >>>>>>>> Hi, fellow artificial neural network enthusiasts! >>>>>>>> >>>>>>>> The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it.
I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field. >>>>>>>> >>>>>>>> Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: >>>>>>>> >>>>>>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>>>>>> >>>>>>>> The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. >>>>>>>> >>>>>>>> I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. >>>>>>>> >>>>>>>> Thank you all in advance for your help!
>>>>>>>> Jürgen Schmidhuber From victorpitron at yahoo.fr Wed Jan 26 07:35:18 2022 From: victorpitron at yahoo.fr (Victor Pitron) Date: Wed, 26 Jan 2022 13:35:18 +0100 Subject: Connectionists: LAST DAYS: 3 open positions: 2 PhD and 1 post-doc in Paris to begin in September 2022, submission deadline January 31st, about the cognitive investigation of idiopathic environmental intolerance In-Reply-To: References: Message-ID: The Research Project: Symptoms that patients attribute to the environment while medical examination shows no bodily malfunction are labeled "idiopathic environmental intolerance" (IEI). People suffering from IEI single out several agents from the environment, including chemical substances and electromagnetic fields, which they blame for a wide range of chronic and unspecific symptoms such as diffuse pain, fatigue, dizziness, dyspnea, or palpitations. IEI is an emerging health issue, and specific diagnostic tools as well as evidence-based treatment programs are still lacking. In recent years, several studies have suggested that cognitive biases contribute to IEI. In this research project funded by the French Fondation pour la Recherche Médicale and the Agence Nationale de Sécurité Sanitaire, we will test the relevance of a cognitive model based on the assumption that symptoms of IEI result from impairments in interoceptive awareness. The project will combine behavioral experiments, computational modeling of behavior and beliefs, and the development and testing of a dedicated treatment program with Cognitive Behavioral Therapy (CBT). Three Positions available: Two PhD students and one post-doc fellow will be recruited in September 2022 for 3 years. Ideal candidates are highly motivated to work on this project, good team players open to an interdisciplinary approach between medicine, cognitive science, and computational approaches, and speak and write English fluently.
Three complementary profiles are proposed: - One candidate with a good knowledge of scientific methods and statistics for behavioral experiments in cognitive science, psychology, or a related field. Experience with psychometric testing of patients, data analysis, and programming in Matlab or similar software is advantageous. The candidate's main missions will be to program, run and analyze behavioral tests with patients suffering from IEI, involving interoceptive tasks and tests of cognitive biases. - One candidate with previous experience in data analysis, programming in Matlab or Python, and experience in developing computational models of psychopathological conditions (computational psychiatry) and in the model-based analysis of behavioral data, using methods such as Bayesian inference, reinforcement learning, and deep learning. The candidate's main missions will be to build, simulate, fit and test computational models of human behavior for patients with IEI. - One candidate needs to be a French-speaking CBT-trained psychologist with great clinical experience and a strong interest in innovative CBT programs about environmental issues. Experience in qualitative analysis is a plus. The candidate's main missions will be to build, run and test the CBT treatment program with patients suffering from IEI. A high level of proactive involvement will be expected from all members of the team, who will be expected to be physically present for the term of the project. The postdoc position moreover offers the opportunity to train in soft skills, crucial for becoming a PI, since the postdoctoral candidate is expected to help lead the core team composed of her/him and the two PhD candidates, together with our supervising team. The supervising team: International medical and scientific supervision is organized with complementary skills for this interdisciplinary project that targets an emerging field of medicine.
The main medical and scientific supervisor is Pr Cédric Lemogne, assisted by Dr Victor Pitron (both psychiatrists, MD, PhD, Hôtel-Dieu, Paris). Dr Liane Schmidt and Dr Leonie Koban (both PI researchers at the Control-Interoception-Attention team at the Paris Brain Institute, Pitié-Salpêtrière hospital, Paris) will provide additional scientific supervision for computational modelling. Pr Damien Léger and Dr Lynda Bensefa-Colas (both Occupational and Environmental physicians, MD, PhD, Hôtel-Dieu, Paris) will provide additional medical supervision about IEI. Three senior European researchers will offer monthly supervision: Pr Omer Van den Bergh (Leuven) and Pr Michael Witthöft (Mainz) for the work on the behavioral testing and the treatment program, Pr Giovanni Pezzulo (Rome) for the work on computational modelling. The work environment: The research team will be based at the VIFASOM lab of the Hôtel-Dieu, a beautiful hospital in the heart of ancient neighborhoods of Paris, where patients will come for testing and treatment. The lab currently houses 3 PIs and > 10 PhD students and engineers working on various fields of cognitive science. This will offer the opportunity for fruitful discussions and collaborations and a stimulating workplace. Nearby, the Paris Brain Institute (Pitié-Salpêtrière hospital, Paris) and the École Normale Supérieure also offer many opportunities for exciting scientific training and conferences in cognitive science. The PhD students will have courses and scientific supervision at the Doctorate School Bio SPC of Paris. All supervisors endorse values of equity and diversity, and are committed to ensuring a safe, welcoming, and inclusive workplace. Everyone is therefore strongly encouraged to apply. Application: CV, motivational and recommendation letters should be sent to Dr Victor Pitron: victor.pitron at aphp.fr. Applications are reviewed on a rolling basis and all candidates will receive full consideration.
Deadline for application is January 31st, 2022. From hans.ekkehard.plesser at nmbu.no Thu Jan 27 02:59:53 2022 From: hans.ekkehard.plesser at nmbu.no (Hans Ekkehard Plesser) Date: Thu, 27 Jan 2022 07:59:53 +0000 Subject: Connectionists: NMBU / Associate Professors in Machine Learning and Scientific Computing / Deadline 15 Feb 2022 Message-ID: Dear Colleagues, The Department of Data Science at NMBU is currently looking for two full-time permanent associate professors in machine learning and scientific computing, respectively. We are also looking for adjunct professors (20%, 4 years) to cover ethical and legal aspects of data science and data security, respectively. I'd appreciate it if you would pass this information on to colleagues who may be interested in any of the positions. As our faculty is currently mostly male, we would particularly appreciate applications from women. * Associate professor in Data Science (Machine Learning) - Deadline: Tuesday, February 15, 2022 * Associate professor in Scientific Computing - Deadline: Tuesday, February 15, 2022 * Professor II/Associate professor II in Ethics and Law of Data Science - Deadline: Tuesday, February 15, 2022 * Professor II/Associate professor II in Data Security - Deadline: Tuesday, February 15, 2022 Best regards, Hans Ekkehard -- Prof. Dr. Hans Ekkehard Plesser Head, Department of Data Science Faculty of Science and Technology Norwegian University of Life Sciences PO Box 5003, 1432 Aas, Norway Phone +47 6723 1560 Email hans.ekkehard.plesser at nmbu.no Home http://arken.nmbu.no/~plesser -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mgalle at gmail.com Wed Jan 26 11:36:25 2022 From: mgalle at gmail.com (Matthias Gallé) Date: Wed, 26 Jan 2022 17:36:25 +0100 Subject: Connectionists: [CfP] Challenges & Perspectives in Creating Large Language Models Message-ID: *Call for Papers: Workshop on Challenges & Perspectives in Creating Large Language Models* May 27th 2022 (w/ ACL) https://bigscience.huggingface.co/acl-2022 Two years after the appearance of GPT-3, large language models seem to have taken over NLP. Their capabilities, limitations, societal impact and the potential new applications they have unlocked have been discussed and debated at length. A handful of replication studies have been published since then, confirming some of the initial findings and discovering new limitations. This workshop aims to gather researchers and practitioners involved in the creation of these models in order to: 1. Share ideas on the next directions of research in this field, including, but not limited to, grounding, multi-modal models, continuous updates and reasoning capabilities. 2. Share best practices, brainstorm solutions to identified limitations and discuss challenges, such as: - *Infrastructure*. What are the infrastructure and software challenges involved in scaling models to billions or trillions of parameters, and deploying training and inference on distributed servers when each model replica is itself larger than a single node's capacity? - *Data*. While the self-supervised setting dispenses with human annotation, the importance of cleaning and filtering, and the biases and limitations of existing or reported corpora, have become more and more apparent over the last years. - *Ethical & Legal frameworks*. What type of data can/should be used, what type of access should be provided, what filters are or should be necessary? - *Evaluation*.
Investigating the diversity of intrinsic and extrinsic evaluation measures, how they correlate, and how the performance of a very large pretrained language model should be evaluated. - *Training efficiency.* Discussing practical scaling approaches, practical questions around large-scale training hyper-parameters and early-stopping conditions. Discussing measures to reduce the associated energy consumption. This workshop is organized by the BigScience initiative and will also serve as the closing session of this year-long initiative aimed at developing a multilingual large language model, which currently gathers 900 researchers from more than 60 countries and 250 institutions. Its goal is to investigate the creation of a large-scale dataset and model from a very wide diversity of angles. *Submissions* We call for relevant contributions, either in long (8 pages) or short (4 pages) format. Accepted papers will be presented during a poster session. Submissions can be archival or non-archival. Submission opens on February 1st, 2022 and should be made via OpenReview ( https://openreview.net/group?id=aclweb.org/ACL/2022/Workshop/BigScience). *Dates* Feb. 28, 2022: Submission Deadline March 26, 2022: Notification of Acceptance April 10, 2022: Camera-ready papers due -------------- next part -------------- An HTML attachment was scrubbed... URL: From timvogels at gmail.com Wed Jan 26 08:12:37 2022 From: timvogels at gmail.com (Tim Vogels) Date: Wed, 26 Jan 2022 14:12:37 +0100 Subject: Connectionists: DEADLINE Feb 15 for the Computational Neuroscience Imbizo in Cape Town, South Africa Message-ID: Dear all, The DEADLINE for the next IMBIZO is approaching very quickly. Don't let it swoosh by! Join a diverse neuroscience summer school at the most beautiful beach in the world!
If you know anyone who would benefit from a diverse and extraordinary computational neuroscience summer school, please forward this information: IBRO-SIMONS COMPUTATIONAL NEUROSCIENCE IMBIZO #isiCNI2022 12 August - 4 September 2022, Noordhoek Beach, Cape Town, South Africa http://imbizo.africa/ Application deadline: 15th February 2022 The #isiCNI2022 is a southern-hemisphere summer school aiming to promote computational neuroscience in Africa. It will bring together international and local students under the tutelage of the world's leading experts in the field. Like its international sister courses, this four-week summer school aims to teach central ideas, methods, and practices of modern computational neuroscience through a combination of lectures and hands-on project work. Mornings will be devoted to lectures on topics across the breadth of computational neuroscience, including experimental underpinnings and machine learning analogues. The rest of the day will be spent working on research projects under the close supervision of expert tutors and faculty. Individual research projects will focus on the modelling of neurons, neural systems, and behaviour, the analysis of state-of-the-art neural data, and the development of theories to explain experimental observations. New this year is a week focused on neuroscience-inspired machine learning! Who should apply? This course is aimed at Masters and early-PhD level students, though Honours or advanced undergraduates may also apply. Postdoctoral students who can motivate why the course would benefit them are also encouraged to apply. Students should have sufficient quantitative skills (e.g. a background in mathematics, physics, computer science, statistics, engineering, or a related field). Some knowledge of neural biology will be useful but not essential. Experimental neuroscience students are encouraged to apply, but should ensure that they have a reasonable level of quantitative proficiency (i.e.
at least second-year level mathematics or statistics and have done at least one course in computer programming). Please distribute this information as widely as you can. Essential details * Fee (which covers tuition, lodging, and meals): 1100 EUR Thanks to our generous sponsors, significant financial assistance is available to reduce and waive fees for students, particularly for African applicants. We also hope to provide some travel bursaries for international students. If you are in need of financial assistance to attend the Imbizo, please state so clearly in the relevant section of your application. The Imbizo is planned to be hosted in person, with everyone following our COVID policy. If it is unsafe to hold the summer school, we will follow up with further information at the appropriate time. * Application deadline: 15th February 2022 * Notification of results: late April 2022 Information and application: https://imbizo.africa/ Questions? isicn.imbizo at gmail.com What is an Imbizo? \?m?bi?z?\ | Xhosa - Zulu A gathering of the people to share knowledge.
FACULTY Demba Ba - Harvard University Adrienne Fairhall - University of Washington Peter Latham - University College London Daphne Bavelier - University of Geneva Timothy Lillicrap - DeepMind Jonathan Pillow - Princeton University Joseph Raimondo - University of Cape Town Mackenzie Mathis - EPFL Lausanne Athanassia Papoutsi - IMBB-FORTH Evan Schaffer - Columbia University Henning Sprekeler - Technical University of Berlin Thomas Tagoe - University of Ghana Misha Tsodyks - Weizmann Institute of Science Tim Vogels - Institute of Science and Technology Austria Blake Richards - McGill University Alex Pouget - University of Geneva TUTORS Mohamed Abdelhack - Krembil Centre for Neuroinformatics Annik Carson - McGill University Spiros Chavlis - IMBB-FORTH Christopher Currin - Institute of Science and Technology Austria Sanjukta Krishnagopal - University College London Marjorie Xie - Columbia University ORGANISERS Demba Ba (Harvard University) Christopher Currin (Institute of Science and Technology Austria) Peter Latham (Gatsby Unit for Computational Neuroscience) Joseph Raimondo (University of Cape Town) Emma Vaughan (Imbizo Logistics) Tim Vogels (Institute of Science and Technology Austria) Sponsors The isiCNI is made possible by generous support from the Simons Foundation and the International Brain Research Organisation (IBRO), as well as the Wellcome Trust, DeepMind, and Wits University. Organizational Affiliates University of Cape Town, University College London, Institute of Science and Technology Austria, TReND in Africa, Neuroscience Institute, Gatsby Foundation, IBRO African Center for Advanced Training in Neurosciences at UCT -------------- next part -------------- An HTML attachment was scrubbed...
URL: From eswc2022 at gmail.com Thu Jan 27 04:40:53 2022 From: eswc2022 at gmail.com (ESWC 2022) Date: Thu, 27 Jan 2022 10:40:53 +0100 Subject: Connectionists: Call for Papers ESWC 2022 Industry Track Message-ID: *Call for Papers ESWC 2022 Industry Track* ESWC is a key academic conference for research results and new developments in the area of the Semantic Web and Knowledge Graphs. ESWC 2022 marks the 19th edition, which will take place from 29 May to 2 June 2022, in Heraklion, Greece. The ESWC Industry Track is a forum for exchanging ideas, results, and lessons learned amongst Semantic Web researchers, technologists and product leaders across industry and academia. The goal is to learn from the process of bringing cutting-edge Semantic Web research to state-of-the-art applications and to align current research efforts with existing real-world requirements that justify the adoption of novel approaches in the face of otherwise unfeasible challenges. *The ESWC 2022 Industry Track aims to:* - identify application domains of semantic technologies that yield a significant business value; - present the state of adoption of semantic technologies in industry; and - facilitate a discussion about what current industry challenges can be addressed with semantic web technologies, the hurdles that may stand in the way of broader adoption, and any novel problems and use cases. Come and join us at the ESWC 2022 Industry Track to share your experience! For more information see: https://2022.eswc-conferences.org/call-for-papers-industry-track/ *Important Dates*: - Submission: March 7, 2022 - Notification to authors: April 4, 2022 - Camera-ready papers due: April 11, 2022 All deadlines are 23:59 anywhere on earth (UTC-12). *Track Chairs*: Rinke Hoekstra, Elsevier, The Netherlands Panos Alexopoulos, Textkernel, The Netherlands -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jose at rubic.rutgers.edu Thu Jan 27 09:37:40 2022 From: jose at rubic.rutgers.edu (Stephen José Hanson) Date: Thu, 27 Jan 2022 09:37:40 -0500 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: <58AC5011-BF6A-453F-9A5E-FAE0F63E2B02@supsi.ch> References: <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> <3155202C-080E-4BE7-84B6-A567E306AC1D@supsi.ch> <58AC5011-BF6A-453F-9A5E-FAE0F63E2B02@supsi.ch> Message-ID: Juergen, I have read through the GMDH paper and a 1971 review paper by Ivakhnenko. These are papers about function approximation. The method proposes to use a series of polynomial functions that are stacked in filtered sets. The filtered sets are chosen based on best fit, and from what I can tell are manually grown, so this must have been a tedious and slow process (I assume it could be automated). So the GMDH nets are "deep", in that they are stacked 4 deep in Figure 1 (8 deep in another). Interestingly, they are using (with obvious FA justification) polynomials of various degrees. Does this have much to do with neural networks? Yes, there were examples initiated by Rumelhart (and me: https://www.routledge.com/Backpropagation-Theory-Architectures-and-Applications/Chauvin-Rumelhart/p/book/9780805812596), based on poly-synaptic dendrite complexity, but not in the GMDH paper, which was specifically about function approximation. Ivakhnenko lists four reasons for the approach they took: mainly reducing data size and being more efficient with the data one had. No mention of "internal representations". So when Terry talks about "internal representations", does he mean function approximation? Not so much.
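[Editor's illustration of the method under discussion: a loose sketch of a GMDH-style layer in the spirit of Ivakhnenko's Group Method of Data Handling, not his exact algorithm. Each candidate unit is a quadratic polynomial of two inputs fit by least squares, and only the units that fit best on held-out data survive into the next layer; the toy target and all settings are invented for illustration.]

```python
import numpy as np

# Loose GMDH-style sketch (not Ivakhnenko's exact algorithm): candidate
# units are quadratic polynomials of input pairs, fit by least squares;
# the best-scoring units on held-out data form the next layer.
rng = np.random.default_rng(0)

def poly_features(a, b):
    # Ivakhnenko-style polynomial basis: 1, a, b, a*b, a^2, b^2
    return np.stack([np.ones_like(a), a, b, a * b, a**2, b**2], axis=1)

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=4):
    units = []
    n = X_tr.shape[1]
    for i in range(n):
        for j in range(i + 1, n):
            P = poly_features(X_tr[:, i], X_tr[:, j])
            w, *_ = np.linalg.lstsq(P, y_tr, rcond=None)
            pred_va = poly_features(X_va[:, i], X_va[:, j]) @ w
            err = float(np.mean((pred_va - y_va) ** 2))  # external criterion
            units.append((err, i, j, w))
    units.sort(key=lambda u: u[0])          # "filtered set": best fit first
    best = units[:keep]
    out = lambda X: np.stack(
        [poly_features(X[:, i], X[:, j]) @ w for _, i, j, w in best], axis=1)
    return out, best[0][0]

# Toy target: y = x0*x1 + x2^2 plus noise.
X = rng.normal(size=(200, 4))
y = X[:, 0] * X[:, 1] + X[:, 2] ** 2 + 0.01 * rng.normal(size=200)
X_tr, X_va, y_tr, y_va = X[:100], X[100:], y[:100], y[100:]

layer1, err1 = gmdh_layer(X_tr, y_tr, X_va, y_va)
H_tr, H_va = layer1(X_tr), layer1(X_va)     # surviving units feed layer 2
layer2, err2 = gmdh_layer(H_tr, y_tr, H_va, y_va)
print(f"validation MSE: layer 1 {err1:.3f}, layer 2 {err2:.3f}")
```

The layer-wise selection by fit on held-out data is what distinguishes this construction from gradient-based training of the whole stack.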
That of course is part of this, but the actual focus is on cognitive or perceptual or motor functions: representation in the brain. Hidden units (which could be polynomials) cluster, project, and model the input features with respect to the function constraints conditioned by the training data. This is more similar to model specification through function-space search. And the original Rumelhart meaning of internal representation in PDP vol. 1 was, in one case, about representing certain binary functions (XOR), but more generally about the need for "neurons" (inter-neurons) explicitly between input (sensory) and output (motor). Consider NETtalk, in which I did the first hierarchical clustering of the hidden units over the input features (letters). What appeared wasn't probably surprising, but without model specification, the network (with hidden units) learned VOWEL and CONSONANT distinctions just from training (Hanson & Burr, 1990). This would be a clear example of "internal representations" in the sense of Rumelhart. This was not in the intellectual space of Ivakhnenko's Group Method of Data Handling. (Some of this is discussed in more detail in recent conversations with Terry Sejnowski and another one to appear shortly with Geoff Hinton; AIHUB.org, look in Opinions.) Now I suppose one could be cynical and opportunistic, and even conclude that if you wanted more clicks, rather than title your article GROUP METHOD OF DATA HANDLING, you should at least consider NEURAL NETWORKS FOR DATA HANDLING, even if you didn't think neural networks had anything to do with your algorithm; after all, everyone else does! Might get it published in this time frame, or even read. This is not scholarship. These publication threads are related but not dependent. And although they diverge, they could be informative if one were to try to develop polynomial inductive growth networks (see Fahlman, 1989: Cascade Correlation, and Hanson, 1990: Meiosis nets)
and apply them to motor control in the brain. But that's not what happened. I think, like Gauss, you need to drop this specific claim as well. With best regards, Steve On 1/25/22 12:03 PM, Schmidhuber Juergen wrote: > For a recent example, your 2020 deep learning survey in PNAS [S20] claims that your 1985 Boltzmann machine [BM] was the first NN to learn internal representations. This paper [BM] neither cited the internal representations learnt by Ivakhnenko & Lapa's deep nets in 1965 [DEEP1-2] -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.png Type: image/png Size: 19957 bytes Desc: not available URL: From aeck at oberlin.edu Thu Jan 27 10:53:42 2022 From: aeck at oberlin.edu (Adam Eck) Date: Thu, 27 Jan 2022 10:53:42 -0500 Subject: Connectionists: Visiting Assistant Professor in Data Science at Oberlin College (deadline February 14, 2022) Message-ID: Link: https://jobs.oberlin.edu/postings/11490 Oberlin College invites applications for a full-time non-continuing faculty position in the College of Arts and Sciences. Appointment to this position will be for a term of two years, beginning fall semester of 2022, and will carry the rank of Visiting Assistant Professor. Founded in 1833, Oberlin is a private four-year, selective liberal arts college near Cleveland, Ohio, and is also home to an outstanding Conservatory of Music. Together, the two divisions enroll approximately 2900 students. Oberlin College was the first college in the US to make interracial education and co-education central to its mission. The College continues to view a diverse, equitable and inclusive educational environment as essential to the excellence of its academic program. Among liberal arts colleges, Oberlin is a national leader in successfully placing graduates into PhD programs.
Responsibilities: The incumbent will teach a total of five courses per year in the general area of data science. Qualifications: Among the qualifications required for appointment is the Ph.D. degree (in hand or expected by the first semester of academic year 2022-23) in a field related to data science, including (but not limited to) Computer Science, Statistics, or Mathematics. Candidates must demonstrate interest and potential excellence in undergraduate teaching. Successful teaching experience at the college level is desirable. Oberlin College is committed to student and faculty diversity, equity and inclusion. The incumbent will bring understanding of or experience working with underrepresented and diverse academic populations. Oberlin is especially interested in candidates who can contribute to the excellence and diversity of the academic community through their research, teaching, and service. Oberlin recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender, gender identity, sexual orientation, disability, age, veteran's status, and/or other protected status as required by applicable law. Compensation: Within the range established for this position, salary will be commensurate with qualifications and experience and includes an excellent benefits package. Special Instructions: To apply, candidates should visit the online application site found at https://jobs.oberlin.edu .
A complete application will comprise 1) a Cover Letter describing your teaching, scholarship, mentorship, and service, detailing any connections to supporting an inclusive learning environment; 2) a Curriculum Vitae; 3) an unofficial graduate transcript; 4) a Teaching Statement showing your commitment to diversity and inclusion, and how you incorporate current instructional research into your teaching; 5) a Research Statement that includes how you will support undergraduate research and, if applicable, how you might incorporate undergraduate students into your research program; and 6) Letters of Reference from three recommenders.* All application materials must be submitted electronically through Oberlin College and Conservatory's online application process at: https://jobs.oberlin.edu/ *By providing three letters of reference, you agree that we may contact your letter writers. Review of applications will begin on February 14, 2022, and will continue until the position is filled. Completed applications received by the February 14 deadline will be guaranteed full consideration. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sepand.haghighi at yahoo.com Thu Jan 27 14:18:46 2022 From: sepand.haghighi at yahoo.com (Sepand Haghighi) Date: Thu, 27 Jan 2022 19:18:46 +0000 (UTC) Subject: Connectionists: PyCM 3.4 Released: Machine learning library for confusion matrix statistical analysis References: <2068970350.1650202.1643311126510.ref@mail.yahoo.com> Message-ID: <2068970350.1650202.1643311126510@mail.yahoo.com> https://github.com/sepandhaghighi/pycm https://www.pycm.ir http://list.pycm.ir - Colab badge added #389 - Discord badge added #397 - brier_score method added #219 - J (Jaccard index) section in Document.ipynb updated #401 - save_obj method updated #219 - Python 3.10 added to test.yml #391 - Example-3 updated #405 - Docstrings of the functions updated #345 - CONTRIBUTING.md updated #345 Best Regards, Sepand Haghighi -------------- next part -------------- An HTML attachment was scrubbed... URL: From timofte.radu at gmail.com Thu Jan 27 13:32:12 2022 From: timofte.radu at gmail.com (Radu Timofte) Date: Thu, 27 Jan 2022 19:32:12 +0100 Subject: Connectionists: [CFP] CVPR 2022 New Trends in Image Restoration and Enhancement (NTIRE) workshop and challenges Message-ID: Apologies for multiple postings *********************************** CALL FOR PAPERS & CALL FOR PARTICIPANTS IN 11 CHALLENGES NTIRE: 7th New Trends in Image Restoration and Enhancement workshop and image, video, and multi-frame challenges. In conjunction with CVPR 2022, June 19, New Orleans, US. Website: https://data.vision.ee.ethz.ch/cvl/ntire22/ Contact: radu.timofte at vision.ee.ethz.ch TOPICS - Image/video inpainting - Image/video deblurring - Image/video denoising - Image/video upsampling and super-resolution - Image/video filtering - Image/video de-hazing, de-raining, de-snowing, etc. - Demosaicing - Image/video compression - Removal of artifacts, shadows, glare and reflections, etc. - Image/video enhancement: brightening, color adjustment, sharpening, etc. - Style transfer - Hyperspectral imaging -
Underwater imaging - Methods robust to changing weather conditions / adverse outdoor conditions - Image/video restoration, enhancement, manipulation in constrained settings - Image/video processing on mobile devices - Visual domain translation - Multimodal translation - Perceptual enhancement - Perceptual manipulation - Depth estimation - Image/video generation and hallucination - Image/video quality assessment - Image/video semantic segmentation, depth estimation - Studies and applications of the above. SUBMISSION A paper submission has to be in English, in PDF format, and at most 8 pages (excluding references) in CVPR style. https://cvpr2022.thecvf.com/author-guidelines The review process is double-blind. Accepted and presented papers will be published after the conference in the 2022 CVPR Workshops Proceedings. Author Kit: https://cvpr2022.thecvf.com/sites/default/files/2021-10/cvpr2022-author_kit-v1_1-1.zip Submission site: https://cmt3.research.microsoft.com/NTIRE2022 WORKSHOP DATES - *Regular Papers Submission Deadline: March 10, 2022* - Challenge Papers Submission Deadline: April 1, 2022 IMAGE CHALLENGES 1. *Spectral Reconstruction from RGB* 2. *Demosaicing* 3. *Inpainting* 4. *Perceptual Image Quality Assessment* 5. *Learning the Super-Resolution Space* 6. *Super-Resolution* 7. *Night Images Rendering* VIDEO / MULTI-FRAME CHALLENGES 1. *Super-Resolution and Enhancement of Compressed Videos* 2. *Stereo Super-Resolution* 3. *Burst Super-Resolution* 4. *High Dynamic Range (HDR)* To learn more about the challenges, to participate in the challenges, and to access the data, everybody is invited to check the NTIRE 2022 web page: https://data.vision.ee.ethz.ch/cvl/ntire22/ For those interested in constrained and efficient solutions validated on mobile devices, we refer to the CVPR22 *Mobile AI Workshop and Challenges*: https://ai-benchmark.com/workshops/mai/2022/ CHALLENGES DATES - *Release of train data: January 25, 2022* -
Competitions end: March 20, 2022 SPEAKERS (TBA) SPONSORS (TBA) Website: https://data.vision.ee.ethz.ch/cvl/ntire22/ Contact: radu.timofte at vision.ee.ethz.ch -------------- next part -------------- An HTML attachment was scrubbed...
URL: From papaleon at sch.gr Thu Jan 27 11:08:30 2022 From: papaleon at sch.gr (Papaleonidas Antonios) Date: Thu, 27 Jan 2022 18:08:30 +0200 Subject: Connectionists: 18th AIAI 2022 Hybrid @ Crete, Greece - Call for Papers References: <062d01d81397$60c5e100$2251a300$@sch.gr> Message-ID: <064801d81398$20624ee0$6126eca0$@sch.gr> 18th AIAI 2022, 17 - 20 June 2022 Hybrid @ Web & Aldemar Knossos Royal, Crete, Greece www.ifipaiai.org/2022 CALL FOR PAPERS for 18th AIAI 2022 Hybrid @ Web & Crete, Greece Dear Colleagues, We would like to invite you to submit your work to the 18th International Conference on Artificial Intelligence Applications and Innovations (AIAI 2022). The 18th International Conference on Artificial Intelligence Applications and Innovations, AIAI 2022, is technically sponsored by IFIP Artificial Intelligence Applications WG12.5. It is going to be co-organized as a joint event with the 23rd Conference on Engineering Applications of Neural Networks, EANN 2022, which is technically sponsored by the INNS (International Neural Network Society). SPECIAL ISSUES - PROCEEDINGS: Selected papers will be published in 4 special issues of high-quality international scientific journals: * World Scientific journal, International Journal of Neural Systems, Impact factor 5.87 * Springer journal, Neural Computing and Applications, Impact Factor 5.61 * IEEE journal, IEEE Journal of Biomedical and Health Informatics, Impact factor 5.772 * Springer journal, AI & Ethics PROCEEDINGS will be published in the Springer IFIP AICT Series and are INDEXED BY SCOPUS, DBLP, Google Scholar, ACM Digital Library, IO-Port, MathSciNet, CPCI, Zentralblatt MATH and EI Engineering Index. Paper submissions should be between 6 and 12 pages long. BIBLIOMETRIC DETAILS: We proudly announce that according to Springer's statistics, the last 15 AIAI conferences have been downloaded 1,719,000 times!
The IFIP AIAI series has reached an h-index of 29 and published papers have been cited more than 6000 times! For more Bibliometric Details please visit the AIAI BIBLIOMETRIC DETAILS page IMPORTANT DATES: * Paper Submission Deadline: 25th of February 2022 * Notification of Acceptance: 26th of March 2022 * Camera-ready Submission: 22nd of April 2022 * Early / Authors Registration Deadline: 22nd of April 2022 * Conference: 17 - 20 of June 2022 WORKSHOPS & SPECIAL SESSIONS: So far, the following 8 high-quality Workshops & Special Sessions have been accepted and scheduled: * 11th Mining Humanistic Data Workshop (MHDW 2022) * 7th Workshop on "5G - Putting Intelligence to the Network Edge" (5G-PINE 2022) * 2nd Defense Applications of AI Workshop (DAAI), an EDA - EU Workshop * 2nd Distributed AI for Resource-Constrained Platforms Workshop (DARE 2022) * 2nd Artificial Intelligence in Biomedical Engineering and Informatics (AI-BEI 2022) * 2nd Artificial Intelligence & Ethics Workshop (AIETH 2022) * AI in Energy, Buildings and Micro-Grids Workshop (AIBMG) * Machine Learning and Big Data in Health Care (ML@HC) For more info please visit the AIAI 2022 workshop info page KEYNOTE SPEAKERS: So far, four Plenary Lectures have been announced, all by distinguished Professors with an important imprint in AI and Machine Learning. * Professor Hojjat Adeli Ohio State University, Columbus, USA, Fellow of the Institute of Electrical and Electronics Engineers (IEEE), Honorary Professor, Southeast University, Nanjing, China, Member, Polish and Lithuanian Academy of Sciences, Elected corresponding member of the Spanish Royal Academy of Engineering. Visit Google Scholar profile, h-index: 114 * Professor Riitta Salmelin Department of Neuroscience and Biomedical Engineering Aalto University, Finland Visit Google Scholar profile, h-index: 65 * Professor Dr. Elisabeth André
Human-Centered Artificial Intelligence, Institute for Informatics, University of Augsburg, Germany Visit Google Scholar profile, h-index: 61 * Professor Verena Rieser School of Mathematical and Computer Sciences (MACS) at Heriot-Watt University, Edinburgh Visit Google Scholar profile, h-index: 31 For more info please visit the AIAI 2022 Keynote info page VENUE: ALDEMAR KNOSSOS ROYAL Beach Resort in the Hersonissos Peninsula, Crete, Greece. Special Half Board prices have been arranged for the conference delegates in the Aldemar Knossos Royal Beach Resort. For details please see: https://ifipaiai.org/2022/venue/ Conference topics, CFPs, Submissions & Registration details can be found at: * ifipaiai.org/2022/calls-for-papers/ * ifipaiai.org/2022/paper-submission/ * ifipaiai.org/2022/registration/ We are expecting submissions on all topics related to Artificial and Computational Intelligence and their Applications. Detailed Guidelines on the Topics and the submission details can be found at the links above. General co-Chairs: * Ilias Maglogiannis, University of Piraeus, Greece * John Macintyre, University of Sunderland, United Kingdom Program co-Chairs: * Lazaros Iliadis, School of Engineering, Democritus University of Thrace, Greece * Konstantinos Votis, Information Technologies Institute, ITI Thessaloniki, Greece * Vangelis Metsis, Texas State University, USA *** Apologies for cross-posting *** Dr Papaleonidas Antonios Organizing - Publication & Publicity co-Chair of 23rd EANN 2022 & 18th AIAI 2022 Civil Engineering Department Democritus University of Thrace papaleon at civil.duth.gr papaleon at sch.gr -------------- next part -------------- An HTML attachment was scrubbed...
URL: From U.K.Gadiraju at tudelft.nl Thu Jan 27 11:29:00 2022 From: U.K.Gadiraju at tudelft.nl (Ujwal Gadiraju) Date: Thu, 27 Jan 2022 16:29:00 +0000 Subject: Connectionists: [ACM HT 2022] 3rd Call for Papers Message-ID: ****** Apologies for cross-posting ****** Call for Papers The 33rd ACM Conference on Hypertext and Social Media (ACM HT) Barcelona, Spain, June 28 - July 1, 2022 https://ht.acm.org/ht2022/ co-located with ACM UMAP 2022 Due to the ongoing COVID-19 pandemic, we are planning for a hybrid conference and will accommodate online presentations where needed. ACM HT - the Hypertext and Social Media conference - is a premium venue for high-quality peer-reviewed research on hypertext theory, systems and applications. It is concerned with all aspects of modern hypertext research, including social media, linked open data and knowledge graphs, information exploration and visualisation, dynamic and computed hypermedia, as well as applications for digital arts, culture, and humanities. ACM HT is sponsored by ACM SIGWEB. The proceedings are published by the ACM and will be part of the ACM Digital Library. Tracks: ================== HT 2022 will explore, study and shape a broad range of dimensions faced by modern hypertext studies, covering the following tracks chaired by leading researchers: * Social web content, language and network (chair: Marcelo Armentano) * Digital humanities, culture and society (chair: Jessica Rubart) * Information exploration and visualisation (chair: Claus Atzenbeck) * Personalized recommender systems (chairs: Markus Zanker, Eva Zangerle, Osnat Mokryn) Submissions: ================== HT 2022 will include high-quality peer-reviewed papers related to the above key areas.
Maintaining the high quality and impact of the HT series, each paper will have three reviews by program committee members and a meta-review presenting the reviewers' consensus view; the review process will be coordinated by the program chairs in collaboration with the corresponding area chairs. * Peer-reviewed, original, and principled research papers addressing both the theory and practice of HT, and papers showcasing innovative use of HT and exploring the benefits and challenges of applying HT technology in real-life applications and contexts, are welcome. Papers should present original reports of substantive new research techniques, findings, and applications of HT. They should place the work within the field and clearly indicate innovative aspects. Research procedures and technical methods should be presented in sufficient detail to ensure scrutiny and reproducibility. Results should be clearly communicated and implications of the contributions/findings for HT and beyond should be explicitly discussed. * Length: Papers should be at most 14 pages and will be reviewed according to the presented contributions. In other words, we don't have a distinct category for short papers, which means that papers, no matter what page length, will be reviewed according to the same criteria. Publication: ================== Accepted papers will be published by ACM and will be available via the ACM Digital Library. Extended versions of selected papers presented at the conference may be selected to appear in different special issues in international journals according to the specific tracks (see the website for details). At least one author of each accepted paper must register for the conference and present the paper there. Submission details will appear soon. Important Dates: ================== * 10 February 2022: Abstracts (compulsory) * 17 February 2022: Full Papers The submission time is 11:59pm AoE.
Organization: ================== GENERAL CHAIRS Alejandro Bellogín, Universidad Autónoma de Madrid Ludovico Boratto, University of Cagliari PROGRAM CHAIR Federica Cena, University of Torino Best, Ujwal Publicity Chair, ACM Hypertext & Social Media 2022 ____________________________________ Dr. Ir. Ujwal Gadiraju Assistant Professor Web Information Systems Delft University of Technology The Netherlands W: https://wis.ewi.tudelft.nl/gadiraju W: https://www.ujwalgadiraju.com E: u.k.gadiraju at tudelft.nl https://www.academicfringe.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From Pavis at iit.it Thu Jan 27 11:33:21 2022 From: Pavis at iit.it (Pavis) Date: Thu, 27 Jan 2022 16:33:21 +0000 Subject: Connectionists: Registration Open: Hackathon on covid-19 prognosis from images and clinical data Message-ID: <84bb2e80fe9d426b82ead68695218a71@iit.it> Covid CXR Hackathon - Artificial Intelligence for Covid-19 prognosis: aiming at accuracy and explainability Join data scientists from all over the world in this international virtual challenge on Covid-19 data. This online Hackathon is open to students, PhDs, and teams of researchers, who will be guided towards solving a real and compelling problem in medical imaging. During the hackathon you will use your AI and data science skills to design effective solutions that help clinicians decide the most probable prognosis of patients infected by Covid-19. To this aim, you will have to process real-world data, composed of chest X-ray images (CXR) and clinical parameters, which were collected from several hospitals in emergency conditions during the first outbreak in Northern Italy in collaboration with Centro Diagnostico Italiano and Bracco Imaging. The hackathon targets finding multimodal solutions relying on both sets of data, with a heavy emphasis on image analysis. Solutions not using CXR images will not be considered.
We will have two great challenges: Challenge 1: prediction of prognosis Challenge 2: algorithm explainability You are expected to find solutions that will be evaluated on performance as well as explainability by a panel of clinicians and computer scientists. More information about the hackathon registration and schedule is available at this webpage: http://ai4covid-hackathon.it/ The Hackathon will start on the 1st of February 2022 at the Dubai Expo 2020 and will last one month. Two prizes will be awarded: one to the team with the best-performing solution and one to the team with the most convincing discussion on explainability. Important Dates Jan. 27: Registration opens Feb. 01: Hackathon opening event at Dubai EXPO Feb. 15: Results submission opens Feb. 20: Registration closes Mar. 01: Final day for results submission Mar. 03: Final day for the submission of the solution description (max 2 pages) Mar. 10: Hackathon closing event with announcement of winners and prizes Organization The event is organized by Istituto Italiano di Tecnologia (IIT), Fondazione Bruno Kessler (FBK), and Università di Modena e Reggio Emilia, with endorsement from ELLIS Genova, ELLIS Modena, and ELLIS Technion, and with support from CINI AIIS, Bracco Imaging, Centro Diagnostico Italiano, and NVIDIA AUE. Steering Committee Alessio Del Bue - Istituto Italiano di Tecnologia, Genova, Italy Rita Cucchiara - Università di Modena e Reggio Emilia, Italy Diego Sona - Fondazione Bruno Kessler, Trento, Italy Jacopo Tessadori - Università di Verona, Italy Marco Alì - CDI Centro Diagnostico Italiano, Italy Lihi Zelnik Manor - Technion, Haifa, Israel -------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed...
URL: From franrruiz87 at gmail.com Fri Jan 28 03:52:51 2022 From: franrruiz87 at gmail.com (Francisco J. Rodríguez Ruiz) Date: Fri, 28 Jan 2022 08:52:51 +0000 Subject: Connectionists: ICBINB Monthly Seminar Series Kick Off! Tamara Broderick: Feb 3rd 10am EST Message-ID: Dear all, We're very excited to host *Tamara Broderick (MIT)* for the first installment of the newly created *"I Can't Believe It's Not Better!" (ICBINB) virtual seminar series*. More details about this series are below. The *"I Can't Believe It's Not Better!" (ICBINB) monthly online seminar series* seeks to shine a light on the "stuck" phase of research. Speakers will tell us about their most beautiful ideas that didn't "work", about when theory didn't match practice, or perhaps just when the going got tough. These talks will let us peek inside the file drawer of unexpected results and peer behind the curtain to see the real story of *how real researchers did real research*. *When: *Thursday, February 3rd at 10:00AM (EST). *Where: *RSVP for the Zoom link here: https://us02web.zoom.us/meeting/register/tZ0qf-yrqzkqGNTtEu-VQ8l8ECqi2yW8hGu2 *Title:* *An Automatic Finite-Sample Robustness Metric: Can Dropping a Little Data Change Conclusions?* *Abstract:* *Imagine you've got a bold new idea for ending poverty. To check your intervention, you run a gold-standard randomized controlled trial; that is, you randomly assign individuals in the trial to either receive your intervention or to not receive it. You recruit tens of thousands of participants. You run an entirely standard and well-vetted statistical analysis; you conclude that your intervention works with a p-value < 0.01. You publish your paper in a top venue, and your research makes it into the news! Excited to make the world a better place, you apply your intervention to a new set of people and... it fails to reduce poverty. How can this possibly happen?
There seems to be some important disconnect between theory and practice, but what is it? And is there any way you could have been tipped off about the issue when running your original data analysis? In the present work, we observe that if a very small percentage of the original data was instrumental in determining the original conclusion, we might worry that the conclusion could be unstable under new conditions. So we propose a method to assess the sensitivity of data analyses to the removal of a very small fraction of the data set. Analyzing all possible data subsets of a certain size is computationally prohibitive, so we provide an approximation. We call our resulting method the Approximate Maximum Influence Perturbation. Empirics demonstrate that while some (real-life) applications are robust, in others the sign of a treatment effect can be changed by dropping less than 0.1% of the data --- even in simple models and even when p-values are small.* *Bio:* *Tamara Broderick is an Associate Professor in the Department of Electrical Engineering and Computer Science at MIT. She is a member of the MIT Laboratory for Information and Decision Systems (LIDS), the MIT Statistics and Data Science Center, and the Institute for Data, Systems, and Society (IDSS). She completed her Ph.D. in Statistics at the University of California, Berkeley in 2014. Previously, she received an AB in Mathematics from Princeton University (2007), a Master of Advanced Study for completion of Part III of the Mathematical Tripos from the University of Cambridge (2008), an MPhil by research in Physics from the University of Cambridge (2009), and an MS in Computer Science from the University of California, Berkeley (2013). Her recent research has focused on developing and analyzing models for scalable Bayesian machine learning. 
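The drop-a-small-fraction sensitivity check described in the abstract above can be illustrated with a brute-force, leave-one-out sketch. This is not the Approximate Maximum Influence Perturbation itself (which avoids exhaustive refitting); all names and data here are hypothetical, for illustration only:

```python
import numpy as np

def drop_one_sensitivity(x, y):
    """Refit a one-variable OLS regression after dropping each single
    observation and report the largest resulting change in the slope.
    Brute-force illustration of the sensitivity check from the talk;
    the paper's method approximates this instead of refitting n times."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])  # intercept + slope design
    full_slope = np.linalg.lstsq(X, y, rcond=None)[0][1]
    shifts = []
    for i in range(n):
        keep = np.arange(n) != i
        slope_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0][1]
        shifts.append(slope_i - full_slope)
    worst = max(shifts, key=abs)  # largest slope change from dropping one point
    return full_slope, worst

# Synthetic example: one extreme observation flips the sign of the
# estimated effect, so the conclusion hinges on < 2% of the data.
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 0.1 * x + rng.normal(scale=0.1, size=50)
x = np.append(x, 10.0)
y = np.append(y, -20.0)  # a single influential point
slope, worst_shift = drop_one_sensitivity(x, y)
print(slope, slope + worst_shift)  # full-data slope vs. slope with worst point dropped
```

Here the full-data slope is negative, while dropping the single influential point restores the positive true effect, which is exactly the kind of fragility the talk's metric is designed to flag.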
She has been awarded selection to the COPSS Leadership Academy (2021), an Early Career Grant (ECG) from the Office of Naval Research (2020), an AISTATS Notable Paper Award (2019), an NSF CAREER Award (2018), a Sloan Research Fellowship (2018), an Army Research Office Young Investigator Program (YIP) award (2017), Google Faculty Research Awards, an Amazon Research Award, the ISBA Lifetime Members Junior Researcher Award, the Savage Award (for an outstanding doctoral dissertation in Bayesian theory and methods), the Evelyn Fix Memorial Medal and Citation (for the Ph.D. student on the Berkeley campus showing the greatest promise in statistical research), the Berkeley Fellowship, an NSF Graduate Research Fellowship, a Marshall Scholarship, and the Phi Beta Kappa Prize (for the graduating Princeton senior with the highest academic average).* *--* *More info:* This series is organized by the community that grew out of the ICBINB workshops @ NeurIPS. Our goal as a community is to center unexpected results, push back against "leaderboard-ism", and promote "slow science" in machine learning research. This seminar series will be the first of a number of community-building and collaborative initiatives we plan to organize. For more information and for ways to get involved, please visit us at http://icbinb.cc/, Tweet to us @ICBINBWorkshop, or email us at cant.believe.it.is.not.better at gmail.com. -- Best wishes, The ICBINB Organizers -------------- next part -------------- An HTML attachment was scrubbed... URL: From hugo.o.sousa at inesctec.pt Fri Jan 28 05:32:05 2022 From: hugo.o.sousa at inesctec.pt (Hugo Oliveira Sousa) Date: Fri, 28 Jan 2022 10:32:05 +0000 Subject: Connectionists: Text2Story'22 Deadline Extension: Feb 7th.
ECIR'22 Workshop on Narrative Extraction from Texts Message-ID: *** Apologies for cross-posting *** ++ CALL FOR PAPERS ++ **************************************************************************** Fifth International Workshop on Narrative Extraction from Texts (Text2Story'22) Held in conjunction with the 44th European Conference on Information Retrieval (ECIR'22) April 10th, 2022 - Stavanger, Norway Website: https://text2story22.inesctec.pt **************************************************************************** ++ Important Dates ++ - Submission deadline: February 7th, 2022 (extended from January 31st, 2022) - Acceptance Notification Date: March 1st, 2022 - Camera-ready copies: March 18th, 2022 - Workshop: April 10th, 2022 ++ Overview ++ Although information extraction and natural language processing have made significant progress towards an automatic interpretation of texts, the problem of constructing consistent narrative structures is yet to be solved. ++ List of Topics ++ In the fifth edition of the Text2Story workshop, we aim to foster the discussion of recent advances in the link between Information Retrieval (IR) and formal narrative understanding and representation of texts. Specifically, we aim to provide a common forum to consolidate the multi-disciplinary efforts and identify the wide-ranging issues related to the narrative extraction task.
In this regard, we encourage high-quality and original submissions covering the following topics: * Narrative Representation Language * Story Evolution and Shift Detection * Temporal Relation Identification * Temporal Reasoning and ordering of events * Causal Relation Extraction and Arrangement * Narrative Summarization * Multi-modal Summarization * Automatic Timeline Generation * Storyline Visualization * Comprehension of Generated Narratives and Timelines * Big data applied to Narrative Extraction * Personalization and Recommendation of Narratives * User Profiling and User Behavior Modeling * Sentiment and Opinion Detection in Texts * Argumentation Analysis * Models for detection and removal of bias in generated stories * Ethical and fair narrative generation * Misinformation and Fact Checking * Bots Influence * Information Retrieval Models based on Story Evolution * Narrative-focused Search in Text Collections * Event and Entity importance Estimation in Narratives * Multilinguality: multilingual and cross-lingual narrative analysis * Evaluation Methodologies for Narrative Extraction * Resources and Dataset showcase * Dataset annotation and annotation schemas * Applications in social media (e.g. narrative generation during a natural disaster) ++ Dataset ++ We challenge interested researchers to consider submitting a paper that makes use of the tls-covid19 dataset (published at ECIR'21) under the scope and purposes of the Text2Story workshop. tls-covid19 consists of a number of curated topics related to the Covid-19 outbreak, with associated news articles from Portuguese and English news outlets and their respective reference timelines as gold-standard. While it was designed to support timeline summarization research tasks, it can also be used for other tasks, including the study of news coverage about the COVID-19 pandemic. A script to reconstruct and expand the dataset is available at https://github.com/LIAAD/tls-covid19.
The article itself is available at this link: https://link.springer.com/chapter/10.1007/978-3-030-72113-8_33 ++ Submission Guidelines ++ We invite four kinds of submissions: * Research papers (max 7 pages + references) * Demos and position papers (max 5 pages + references) * Work in progress and project description papers (max 4 pages + references) * Nectar papers: a summary of the authors' own work published in other conferences or journals that is worth sharing with the Text2Story community, emphasizing how it can be applied to narrative extraction, processing or storytelling, and adding further insights, discussions, novel aspects, results or case studies (max 3 pages + references) Papers must be submitted electronically in PDF format through EasyChair (https://easychair.org/conferences/?conf=text2story2022). All submissions must be in English and formatted according to the one-column CEUR-ART style with no page numbers. Templates, either in Word or LaTeX, can be found in the following zip folder: http://ceur-ws.org/Vol-XXX/CEURART.zip. There is also an Overleaf page for LaTeX users, available at: https://www.overleaf.com/latex/templates/template-for-submissions-to-ceur-workshop-proceedings-ceur-ws-dot-org/hpvjjzhjxzjk. Submissions will be peer-reviewed by at least two members of the program committee. The accepted papers will appear in proceedings published as CEUR Workshop Proceedings (usually indexed on DBLP). ++ Workshop Format ++ Authors of accepted papers will be given 15 minutes for oral presentations. ++ Organizing committee ++ Ricardo Campos (INESC TEC; Ci2 - Smart Cities Research Center, Polytechnic Institute of Tomar, Tomar, Portugal) Alípio M.
Jorge (INESC TEC; University of Porto, Portugal) Adam Jatowt (University of Innsbruck, Austria) Sumit Bhatia (Media and Data Science Research Lab, Adobe) Marina Litvak (Shamoon Academic College of Engineering, Israel) ++ Proceedings Chair ++ João Paulo Cordeiro (INESC TEC; University of Beira Interior) Conceição Rocha (INESC TEC) ++ Web and Dissemination Chair ++ Hugo Sousa (INESC TEC) Behrooz Mansouri (Rochester Institute of Technology) ++ Program Committee ++ Álvaro Figueira (INESC TEC & University of Porto) Andreas Spitz (University of Konstanz) António Horta Branco (University of Lisbon) Arian Pasquali (CitizenLab) Brenda Santana (Federal University of Rio Grande do Sul) Bruno Martins (IST and INESC-ID - Instituto Superior Técnico, University of Lisbon) Demian Gholipour (University College Dublin) Daniel Gomes (FCT/Arquivo.pt) Daniel Loureiro (University of Porto) Denilson Barbosa (University of Alberta) Deya Banisakher (Defense Threat Reduction Agency (DTRA), Ft. Belvoir, VA, USA) Dhruv Gupta (Norwegian University of Science and Technology (NTNU), Trondheim, Norway) Dwaipayan Roy (ISI Kolkata, India) Dyaa Albakour (Signal) Evelin Amorim (INESC TEC) Florian Boudin (Université de Nantes) Grigorios Tsoumakas (Aristotle University of Thessaloniki) Henrique Lopes Cardoso (University of Porto) Hugo Sousa (INESC TEC) Ismail Sengor Altingovde (Middle East Technical University) Jeffery Ansah (BHP) João Paulo Cordeiro (INESC TEC & University of Beira Interior) Kiran Kumar Bandeli (Walmart Inc.) Ludovic Moncla (INSA Lyon) Marc Spaniol (Université
de Caen Normandie) Nina Tahmasebi (University of Gothenburg) Pablo Gamallo (University of Santiago de Compostela) Paulo Quaresma (Universidade de Évora) Pablo Gervás (Universidad Complutense de Madrid) Paul Rayson (Lancaster University) Preslav Nakov (Qatar Computing Research Institute (QCRI)) Satya Almasian (Heidelberg University) Sérgio Nunes (INESC TEC & University of Porto) Udo Kruschwitz (University of Regensburg) Yihong Zhang (Kyoto University) ++ Contacts ++ Website: https://text2story22.inesctec.pt For general inquiries regarding the workshop, reach the organizers at: text2story2022 at easychair.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From stdm at zhaw.ch Fri Jan 28 07:31:47 2022 From: stdm at zhaw.ch (Stadelmann Thilo (stdm)) Date: Fri, 28 Jan 2022 12:31:47 +0000 Subject: Connectionists: Special Issue "Advances in Deep Neural Networks for Visual Pattern Recognition" -> deadline Feb 20 Message-ID: Friendly reminder and invitation to submit your work and the work of your students to the Special Issue "Advances in Deep Neural Networks for Visual Pattern Recognition". A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Computer Vision and Pattern Recognition". Deadline for manuscript submissions: 20 February 2022. Background: Deep neural networks have been the standard for pattern recognition in computer vision since the ImageNet competition in 2012. Great advances have been made since then, both methodologically and in terms of successful applications. However, with every passing year of alleged breakthroughs, we become more and more aware of the many remaining unknowns, almost to the point of admitting: "We know that we know nothing" (yet).
Methodologically, for example, evidence is growing that the long-standing image recognition paradigm of episodic classification of IID samples is stagnating, and that active vision approaches are necessary to increase recognition scores by another order of magnitude (Gori, "What's Wrong with Computer Vision?", 2018). Theoretically, it is still not well understood why deep neural networks are so efficient at learning generalizable functions (Tishby, "Deep learning and the information bottleneck principle", 2015). This leads to a current trend of empirically detected design principles for neural networks (Kaplan et al., "Scaling laws for neural language models", 2020). Practically, many real-world applications suffer from unstable performance of learned models, raising issues of robustness, interpretability, and deployability, to say nothing of issues with small training sets (related to sample complexity) (Stadelmann et al., "Deep Learning in the Wild", 2018). In this Special Issue of the Journal of Imaging, we request contributions that cover all three aspects: methodological, theoretical, and practical work addressing current issues in visual pattern recognition with novel insights and scientifically founded evaluations. Manuscript Submission Information: Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI. Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions. Keywords: supervised, semisupervised, and unsupervised deep learning; deep reinforcement learning and active vision; principles and best practices for neural network architecture design; generative models for pattern recognition; interpretability and explainability of neural networks; robustness and generalization of neural networks (e.g., confidence, sample efficiency, out-of-distribution performance); metalearning, Auto-ML; image classification and segmentation; object detection; document analysis, e.g., handwriting recognition; biometrics; industrial applications such as predictive maintenance, automatic quality control, etc.; medical image processing, digital histopathology. Special Issue Editors: Prof. Dr. Thilo Stadelmann, School of Engineering, Zurich University of Applied Sciences ZHAW, 8400 Winterthur, Switzerland Interests: artificial intelligence; deep learning; pattern recognition; reinforcement learning; speaker recognition Dr.
Frank-Peter Schilling, School of Engineering, Zurich University of Applied Sciences ZHAW, 8400 Winterthur, Switzerland Interests: artificial intelligence; deep learning; pattern recognition; reinforcement learning -------------- next part -------------- An HTML attachment was scrubbed... URL: From haim.dub at gmail.com Fri Jan 28 09:32:51 2022 From: haim.dub at gmail.com (Haim Dubossarsky) Date: Fri, 28 Jan 2022 14:32:51 +0000 Subject: Connectionists: [2nd CfP] 3rd International Workshop on Computational Approaches to Historical Language Change 2022 (LChange'22) Message-ID: Second call for Papers 3rd International Workshop on Computational Approaches to Historical Language Change 2022 (LChange'22) May 26-27, co-located with ACL https://languagechange.org/events/2022-acl-lchange/ Contact email: PC-ACLws2022 at languagechange.org Workshop description The third LChange workshop will be co-located with ACL 2022 in Dublin and held during May 26-27, 2022 as a hybrid event. - All aspects of computational approaches to historical language change, with a focus on digital text corpora, are welcome. LChange explores state-of-the-art computational methodologies, theories and digital text resources for exploring the time-varying nature of human language. - The aim of this workshop is to provide pioneering researchers who work on computational methods, evaluation, and large-scale modelling of language change an outlet for disseminating research on topics concerning language change. Besides these goals, this workshop will also support discussion on the evaluation of computational methodologies for uncovering language change. - LChange'22 will feature a shared task on semantic change detection for Spanish as one track of the workshop. This year we will offer mentoring for PhD students and young researchers in one-on-one meetings during the workshop.
If you are interested, send us a short description of your work and we will set you up with one of the organizers of this workshop. If your paper is rejected from the workshop, we can also provide advice on improving it for future submission. This offer is limited; mentees will be chosen based on topical fit and availability of appropriate mentors. The deadline for applying for mentorship is May 30th via .

Via our sponsor, Iguanodon.ai, we can offer one free registration for a PhD student! Apply by emailing us your short CV and why you need your registration paid.

Important Dates
* February 28, 2022: Paper submission
* March 14, 2022: Task description papers
* March 26, 2022: Notification of acceptance
* March 30, 2022: Deadline for mentorship application
* April 10, 2022: Camera-ready papers due
* May 26-27, 2022: Workshop date

Keynote Talks

We are pleased to announce our first confirmed keynote speaker, Prof. Dirk Geeraerts. More information to come.

Submissions

We accept three types of submissions: long papers, short papers, and shared task papers, following the ACL 2022 style and the ACL submission policy: https://www.aclweb.org/adminwiki/index.php?title=ACL_Policies_for_Submission,_Review_and_Citation

Long and short papers may consist of up to eight (8) and four (4) pages of content, respectively, plus unlimited references; final versions will be given one additional page of content so that reviewers' comments can be taken into account. Shared task papers may consist of up to four (4) pages plus unlimited references, but without an additional page upon acceptance.
Overleaf templates are available here: https://www.overleaf.com/project/5f64f1fb97c4c50001b60549

Submission is electronic, using the ACL Rolling Review (ARR), and is now open: https://openreview.net/group?id=aclweb.org/ACL/2022/Workshop/LChange

We invite original research papers from a wide range of topics, including but not limited to:
* Novel methods for detecting diachronic semantic change and lexical replacement
* Automatic discovery and quantitative evaluation of laws of language change
* Computational theories and generative models of language change
* Sense-aware (semantic) change analysis
* Diachronic word sense disambiguation
* Novel methods for diachronic analysis of low-resource languages
* Novel methods for diachronic linguistic data visualization
* Novel applications and implications of language change detection
* Quantification of sociocultural influences on language change
* Cross-linguistic, phylogenetic, and developmental approaches to language change
* Novel datasets for cross-linguistic and diachronic analyses of language

Submissions are open to all, and are to be submitted anonymously. All papers will be refereed through a double-blind peer review process by at least three reviewers, with final acceptance decisions made by the workshop organizers. The workshop is scheduled for May 26-27.

Contact us at PC-ACLws2022 at languagechange.org if you have any questions.

Workshop organizers:
Nina Tahmasebi, University of Gothenburg
Lars Borin, University of Gothenburg
Simon Hengchen, University of Gothenburg
Syrielle Montariol, University Paris-Saclay
Haim Dubossarsky, Queen Mary University of London
Andrey Kutuzov, University of Oslo
From sfuccillo at flatironinstitute.org Fri Jan 28 14:32:03 2022
From: sfuccillo at flatironinstitute.org (Serena Fuccillo)
Date: Fri, 28 Jan 2022 14:32:03 -0500
Subject: Connectionists: Job Opportunity: Summer Research Assistant/Associate, Center for Computational Neuroscience (Multiple Positions)
Message-ID:

Hi there,

Please see our latest job posting for our Summer Research Assistant/Associate position at the Center for Computational Neuroscience, Flatiron Institute.

Summer Research Assistant/Associate, Center for Computational Neuroscience (Multiple Positions)

The mission of the Center for Computational Neuroscience (CCN) at the Simons Foundation's Flatiron Institute is to develop theories, models, and computational methods that deepen our knowledge of brain function, both in health and in disease. CCN takes a "systems" neuroscience approach, building models that are motivated by fundamental principles, that are constrained by properties of neural circuits and responses, and that provide insights into perception, cognition and behavior. This cross-disciplinary approach not only leads to the design of new model-driven scientific experiments, but also encapsulates current functional descriptions of the brain that can spur the development of new engineered computational systems, especially in the realm of machine learning. CCN currently has research groups in computational vision, neural circuits and algorithms, neuroAI and geometry, and statistical analysis of neural data; interested candidates should review the CCN public website for specific information on CCN's research areas.

POSITION SUMMARY

CCN invites applications for paid summer internships by graduate or advanced undergraduate students in physics, electrical engineering, machine learning, computer science or related fields. Knowledge of neuroscience is helpful but not required.
Research topics will be based on the interests of Center investigators, and include computational vision, neural circuits and algorithms, neuroAI and geometry, statistical analysis of neural data, and related topics. CCN interns are invited to participate as full members of the CCN and Flatiron communities during their internship; CCN interns are assigned a primary mentor and research group within the center, are invited to attend and present within their research group's meetings, and are also invited to participate in meetings of CCN's other research groups as well as center- and Flatiron-wide activities such as guest lectures, training on use of the Institute's robust scientific computing resources, and intern social activities. Finally, interns will be invited to present their work at an Institute-wide symposium held at the end of the internship period. Applicable travel assistance to New York City and a supported temporary housing option may be available to interns based outside of New York City.

ESSENTIAL FUNCTIONS/RESPONSIBILITIES
- Reporting to the assigned mentor in the CCN, the projects will involve analysis of neuroscience data and modeling networks, applying the skills of analytical calculations and computer programming
- Perform any other duties or tasks as assigned or required

MINIMUM QUALIFICATIONS

Education
- Candidates should be enrolled in an undergraduate or graduate degree program majoring in electrical engineering, physics, computer science or a related field.

Experience
- Experience with research projects in electrical engineering, computer science, physics or related fields is helpful.
Related Skills & Other Requirements
- The successful applicant should have experience with programming in MATLAB, Python, or C
- The applicant should have satisfactorily completed Calculus III and Linear Algebra
- The applicant should have experience with basic optimization and machine learning techniques
- Elementary knowledge of neuroscience is helpful

Intern Semesters and Deadlines
- Summer semester: June 1 to August 12
- Deadline to apply: February 25, 2022

Please email ccnadmin at flatironinstitute.org with any program or application questions. For more information and to apply, please visit: https://simonsfoundation.wd1.myworkdayjobs.com/en-US/simonsfoundationcareers/job/160-Fifth-Avenue/Summer-Research-Assistant-Associate--Center-for-Computational-Neuroscience_R0000925

Warm Regards,

Serena Fuccillo
Coordinator, Center Administration
Center for Computational Neuroscience
FLATIRON INSTITUTE
162 Fifth Avenue, 3rd Floor
New York, NY 10010

From george at cs.ucy.ac.cy Sun Jan 30 07:03:46 2022
From: george at cs.ucy.ac.cy (George A.
Papadopoulos)
Date: Sun, 30 Jan 2022 14:03:46 +0200
Subject: Connectionists: 2022 IEEE International Conference on Evolving and Adaptive Intelligent Systems (IEEE EAIS 2022): Last Mile for Paper Submission
Message-ID: <14DTUHM-PA2J-OB4Y-1YV6-I1KOJHCKUUSR@cs.ucy.ac.cy>

*** Last Mile for Paper Submission ***

2022 IEEE International Conference on Evolving and Adaptive Intelligent Systems (IEEE EAIS 2022)
May 25-27, 2022, Golden Bay Hotel 5*, Larnaca, Cyprus
http://cyprusconferences.org/eais2022/
(Proceedings to be published by the IEEE Xplore Digital Library; Special Journal Issue with Evolving Systems, Springer)
(*** Submission Deadline: February 7, 2022 (firm) ***)

IEEE EAIS 2022 will provide a working and friendly atmosphere and will be a leading international forum focusing on the discussion of recent advances, the exchange of recent innovations and the outline of important open future challenges in the area of Evolving and Adaptive Intelligent Systems. Over the past decade, this area has emerged to play an important role on a broad international level in today's real-world applications, especially those with high complexity and dynamic change. Its embedded modelling and learning methodologies are able to cope with real-time demands, changing operation conditions, varying environmental influences, human behaviours, knowledge expansion scenarios and drifts in online data streams.

Conference Topics

Basic Methodologies:
Evolving Soft Computing Techniques. Evolving Fuzzy Systems. Evolving Rule-Based Classifiers. Evolving Neuro-Fuzzy Systems. Adaptive Evolving Neural Networks. Online Genetic and Evolutionary Algorithms. Data Stream Mining. Incremental and Evolving Clustering. Adaptive Pattern Recognition. Incremental and Evolving ML Classifiers. Adaptive Statistical Techniques. Evolving Decision Systems. Big Data.

Problems and Methodologies in Data Streams:
Stability, Robustness, Convergence in Evolving Systems. Online Feature Selection and Dimension Reduction.
Online Active and Semi-supervised Learning. Online Complexity Reduction. Computational Aspects. Interpretability Issues. Incremental Adaptive Ensemble Methods. Online Bagging and Boosting. Self-monitoring Evolving Systems. Human-Machine Interaction Issues. Hybrid Modelling, Transfer Learning. Reservoir Computing.

Applications of EAIS:
Time Series Prediction. Data Stream Mining and Adaptive Knowledge Discovery. Robotics. Intelligent Transport and Advanced Manufacturing. Advanced Communications and Multimedia Applications. Bioinformatics and Medicine. Online Quality Control and Fault Diagnosis. Condition Monitoring Systems. Adaptive Evolving Controller Design. User Activities Recognition. Huge Database and Web Mining. Visual Inspection and Image Classification. Image Processing. Cloud Computing. Multiple Sensor Networks. Query Systems and Social Networks. Alternative Statistical and Machine Learning Approaches.

Submissions

Submitted papers should not exceed 8 pages, plus at most 2 pages of overlength. Submissions of full papers are accepted online through EasyChair (https://easychair.org/conferences/?conf=eais2022). The EAIS 2022 proceedings will be published in the IEEE Xplore Digital Library. Authors of selected papers will be invited to submit extended versions for possible inclusion in a special issue of Evolving Systems, published by Springer (https://www.springer.com/journal/12530).

Important Dates
- Paper submission: February 7, 2022 (firm)
- Notification of acceptance/rejection: March 7, 2022
- Camera ready submission: March 20, 2022
- Authors registration: March 20, 2022
- Conference Dates: May 25-27, 2022

Social Media
FB: https://www.facebook.com/IEEEEAIS
Twitter: https://twitter.com/IEEE_EAIS
LinkedIn: https://www.linkedin.com/events/2022ieeeconferenceonevolvingand6815560078674972672/

Organization

Honorary Chairs
- Dimitar Filev, Ford Motor Co., USA
- Nikola Kasabov, Auckland University of Technology, New Zealand

General Chairs
- George A.
Papadopoulos, University of Cyprus, Nicosia, Cyprus
- Plamen Angelov, Lancaster University, UK

Program Committee Chairs
- Giovanna Castellano, University of Bari, Italy
- José A. Iglesias, Carlos III University of Madrid, Spain

From fabio.bellavia at unifi.it Sat Jan 29 06:21:06 2022
From: fabio.bellavia at unifi.it (Fabio Bellavia)
Date: Sat, 29 Jan 2022 12:21:06 +0100
Subject: Connectionists: [CfP] International workshop on "Fine Art Pattern Extraction and Recognition (FAPER 2022)" at ICIAP 2021 !!! DEADLINE UPDATED !!!
Message-ID:

Call for Papers -- FAPER 2022
---===== Apologies for cross-postings =====---
Please distribute this call to interested parties
________________________________________________________________________

International Workshop on Fine Art Pattern Extraction and Recognition (FAPER 2022)
in conjunction with the 21st International Conference on Image Analysis and Processing (ICIAP 2021)
Lecce, Italy, May 23-27, 2022
>>> https://sites.google.com/view/faper2022 <<<

!UPDATED! Submission deadline: March 15, 2022 !UPDATED!
-> Submission link: https://easychair.org/conferences/?conf=faper2022 <-
[[[ both virtual and in-presence event ]]]
________________________________________________________________________

=== Aim & Scope ===

Cultural heritage, especially fine arts, plays an invaluable role in the cultural, historical and economic growth of our societies. Fine arts are primarily developed for aesthetic purposes and are mainly expressed through painting, sculpture and architecture. In recent years, thanks to technological improvements and drastic cost reductions, a large-scale digitization effort has been made, which has led to an increasing availability of large digitized fine art collections.
This availability, coupled with recent advances in pattern recognition and computer vision, has disclosed new opportunities, especially for researchers in these fields, to assist the art community with automatic tools to further analyze and understand fine arts. Among other benefits, a deeper understanding of fine arts has the potential to make them more accessible to a wider population, both in terms of fruition and creation, thus supporting the spread of culture. Following the success of the first edition, organized in conjunction with ICPR 2020, the aim of the workshop is to provide an international forum for those wishing to present advancements in the state-of-the-art, innovative research, ongoing projects, and academic and industrial reports on the application of visual pattern extraction and recognition for a better understanding and fruition of fine arts. The workshop solicits contributions from diverse areas such as pattern recognition, computer vision, artificial intelligence and image processing. 
=== Topics ===

Topics of interest include, but are not limited to:
- Application of machine learning and deep learning to cultural heritage and digital humanities
- Computer vision and multimedia data processing for fine arts
- Generative adversarial networks for artistic data
- Augmented and virtual reality for cultural heritage
- 3D reconstruction of historical artifacts
- Point cloud segmentation and classification for cultural heritage
- Historical document analysis
- Content-based retrieval in the art domain
- Speech, audio and music analysis from historical archives
- Digitally enriched museum visits
- Smart interactive experiences in cultural sites
- Projects, products or prototypes for cultural heritage restoration, preservation and fruition
- Visual question answering and artwork captioning
- Art history and computer vision

=== Invited speaker ===

Eva Cetinic (Digital Visual Studies, University of Zurich, Switzerland) - "Beyond Similarity: From Stylistic Concepts to Computational Metrics"

Dr. Eva Cetinic is currently working as a postdoctoral fellow at the Center for Digital Visual Studies at the University of Zurich. She previously worked as a postdoc in Digital Humanities and Machine Learning at the Department of Computer Science, Durham University, and as a postdoctoral researcher and professional associate at the Ruđer Bošković Institute in Zagreb. She obtained her Ph.D. in Computer Science from the Faculty of Electrical Engineering and Computing, University of Zagreb in 2019 with the thesis titled "Computational detection of stylistic properties of paintings based on high-level image feature analysis". Besides being generally interested in the interdisciplinary field of digital humanities, her specific interests focus on studying new research methodologies rooted in the intersection of artificial intelligence and art history.
Particularly, she is interested in exploring deep learning techniques for computational image understanding and multi-modal reasoning in the context of visual art.

=== Workshop modality ===

The workshop will be held in hybrid form; both virtual and in-presence participation will be allowed.

=== Submission guidelines ===

Accepted manuscripts will be included in the ICIAP 2021 proceedings, which will be published by Springer in the Lecture Notes in Computer Science (LNCS) series. Authors of selected papers will be invited to extend and improve their contributions for a Special Issue of IET Image Processing. Please follow the guidelines provided by Springer when preparing your contribution. The maximum number of pages is 10 + 2 pages for references. Each contribution will be reviewed on the basis of originality, significance, clarity, soundness, relevance and technical content. Once accepted, the presence of at least one author at the event and the oral presentation of the paper are expected. Please submit your manuscript through EasyChair: https://easychair.org/conferences/?conf=faper2022

=== Important Dates ===
- Workshop submission deadline: March 15, 2022
- Author notification: April 1, 2022
- Camera-ready submission and registration: April 15, 2022
- Workshop day: May 23-24, 2022

=== Organizing committee ===
Gennaro Vessio (University of Bari, Italy)
Giovanna Castellano (University of Bari, Italy)
Fabio Bellavia (University of Palermo, Italy)
Sinem Aslan (University of Venice, Italy | Ege University, Turkey)

=== Venue ===
The workshop will be hosted at Convitto Palmieri, which is located in Piazzetta di Giosuè Carducci, Lecce, Italy
____________________________________________________

Contacts: gennaro.vessio at uniba.it
          giovanna.castellano at uniba.it
          fabio.bellavia at unipa.it
          sinem.aslan at unive.it
Workshop: https://sites.google.com/view/faper2022
ICIAP2021: https://www.iciap2021.org/

From david at irdta.eu Sat Jan 29 05:51:38 2022
From: david at irdta.eu (David Silva - IRDTA)
Date: Sat, 29 Jan 2022 11:51:38 +0100 (CET)
Subject: Connectionists: DeepLearn 2022 Spring: early registration February 14
Message-ID: <1795112031.1707811.1643453498583@webmail.strato.com>

******************************************************************
5th INTERNATIONAL SCHOOL ON DEEP LEARNING
DeepLearn 2022 Spring
Guimarães, Portugal
April 18-22, 2022
https://irdta.eu/deeplearn/2022sp/
*****************
Co-organized by:
Algoritmi Center, University of Minho, Guimarães
Institute for Research Development, Training and Advice - IRDTA, Brussels/London
******************************************************************
Early registration: February 14, 2022
******************************************************************

SCOPE:

DeepLearn 2022 Spring will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast-developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, and Bournemouth.

Deep learning is a branch of artificial intelligence covering a spectrum of current frontier research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, biomedical informatics, image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, etc. Renowned academics and industry pioneers will lecture and share their views with the audience.
Most deep learning subareas will be covered, and the main challenges identified, through 23 four-and-a-half-hour courses and 3 keynote lectures, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be main ingredients of the event. It will also be possible to fully participate live remotely. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles.

ADDRESSED TO:

Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal pre-requisites for attendance in terms of academic degrees, so people at earlier or later stages of their career will be welcome as well. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses. Overall, DeepLearn 2022 Spring is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators.

VENUE:

DeepLearn 2022 Spring will take place in Guimarães, in the north of Portugal, listed as a UNESCO World Heritage Site and often referred to as the birthplace of the country. The venue will be:

Hotel de Guimarães
Eduardo Manuel de Almeida 202
4810-440 Guimarães
http://www.hotel-guimaraes.com/

STRUCTURE:

3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another. Full live online participation will be possible. However, the organizers highlight the importance of face-to-face interaction and networking in this kind of research training event.
KEYNOTE SPEAKERS:

Kate Smith-Miles (University of Melbourne), Stress-testing Algorithms via Instance Space Analysis
Mihai Surdeanu (University of Arizona), Explainable Deep Learning for Natural Language Processing
Zhongming Zhao (University of Texas, Houston), Deep Learning Approaches for Predicting Virus-Host Interactions and Drug Response

PROFESSORS AND COURSES:

Eneko Agirre (University of the Basque Country), [introductory/intermediate] Natural Language Processing in the Pretrained Language Model Era
Mohammed Bennamoun (University of Western Australia), [intermediate/advanced] Deep Learning for 3D Vision
Altan Çakır (Istanbul Technical University), [introductory] Introduction to Deep Learning with Apache Spark
Rylan Conway (Amazon), [introductory/intermediate] Deep Learning for Digital Assistants
Jianfeng Gao (Microsoft Research), [introductory/intermediate] An Introduction to Conversational Information Retrieval
Daniel George (JPMorgan Chase), [introductory] An Introductory Course on Machine Learning and Deep Learning with Mathematica/Wolfram Language
Bohyung Han (Seoul National University), [introductory/intermediate] Robust Deep Learning
Lina J.
Karam (Lebanese American University), [introductory/intermediate] Deep Learning for Quality Robust Visual Recognition
Xiaoming Liu (Michigan State University), [intermediate] Deep Learning for Trustworthy Biometrics
Jennifer Ngadiuba (Fermi National Accelerator Laboratory), [intermediate] Ultra Low-latency and Low-area Machine Learning Inference at the Edge
Lucila Ohno-Machado (University of California, San Diego), [introductory] Use of Predictive Models in Medicine and Biomedical Research
Bhiksha Raj (Carnegie Mellon University), [introductory] Quantum Computing and Neural Networks
Bart ter Haar Romenij (Eindhoven University of Technology), [intermediate] Deep Learning and Perceptual Grouping
Kaushik Roy (Purdue University), [intermediate] Re-engineering Computing with Neuro-inspired Learning: Algorithms, Architecture, and Devices
Walid Saad (Virginia Polytechnic Institute and State University), [intermediate/advanced] Machine Learning for Wireless Communications: Challenges and Opportunities
Yvan Saeys (Ghent University), [introductory/intermediate] Interpreting Machine Learning Models
Martin Schultz (Jülich Research Centre), [intermediate] Deep Learning for Air Quality, Weather and Climate
Richa Singh (Indian Institute of Technology, Jodhpur), [introductory/intermediate] Trusted AI
Sofia Vallecorsa (European Organization for Nuclear Research), [introductory/intermediate] Deep Generative Models for Science: Example Applications in Experimental Physics
Michalis Vazirgiannis (École Polytechnique), [intermediate/advanced] Machine Learning with Graphs and Applications
Guowei Wei (Michigan State University), [introductory/advanced] Integrating AI and Advanced Mathematics with Experimental Data for Forecasting Emerging SARS-CoV-2 Variants
Xiaowei Xu (University of Arkansas, Little Rock), [intermediate/advanced] Deep Learning for NLP and Causal Inference
Guoying Zhao (University of Oulu), [introductory/intermediate] Vision-based Emotion AI

OPEN SESSION:

An open session
will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by April 10, 2022.

INDUSTRIAL SESSION:

A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by April 10, 2022.

EMPLOYER SESSION:

Firms searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles looked for, to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by April 10, 2022.

ORGANIZING COMMITTEE:

Dalila Durães (Braga, co-chair)
José Machado (Braga, co-chair)
Carlos Martín-Vide (Tarragona, program chair)
Sara Morales (Brussels)
Paulo Novais (Braga, co-chair)
David Silva (London, co-chair)

REGISTRATION:

It has to be done at https://irdta.eu/deeplearn/2022sp/registration/

The selection of 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish.

Since the capacity of the venue is limited, registration requests will be processed on a first come, first served basis. The registration period will be closed and the online registration tool disabled when the capacity of the venue is exhausted.
It is highly recommended to register prior to the event.

FEES:

Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline.

ACCOMMODATION:

Accommodation suggestions are available at https://irdta.eu/deeplearn/2022sp/accommodation/

CERTIFICATE:

A certificate of successful participation in the event will be delivered indicating the number of hours of lectures.

QUESTIONS AND FURTHER INFORMATION: david at irdta.eu

ACKNOWLEDGMENTS:

Centro Algoritmi, University of Minho, Guimarães
School of Engineering, University of Minho
Intelligent Systems Associate Laboratory, University of Minho
Rovira i Virgili University
Municipality of Guimarães
Institute for Research Development, Training and Advice - IRDTA, Brussels/London

From bogdanlapi at gmail.com Sat Jan 29 14:20:51 2022
From: bogdanlapi at gmail.com (Bogdan Ionescu)
Date: Sat, 29 Jan 2022 21:20:51 +0200
Subject: Connectionists: Call-for-Participation: Aware Task @ ImageCLEF 2022 (Unveiling Real-Life Effects of Online Photo Sharing)
Message-ID:

[Apologies for multiple postings]

ImageCLEFaware
https://www.imageclef.org/2022/aware

*** CALL FOR PARTICIPATION ***

Images constitute a large part of the content shared on social networks. Their disclosure is often related to a particular context and users are often unaware of the fact that, depending on their privacy status, images can be accessible to third parties and be used for purposes which were initially unforeseen. For instance, it is common practice for employers to search for information about their future employees online. Another example of usage is that of automatic credit scoring based on online data. Most existing approaches which propose feedback about shared data focus on inferring user characteristics, and their practical utility is rather limited.
We hypothesize that user feedback would be more efficient if conveyed through the real-life effects of data sharing. The objective of the task is to automatically score user photographic profiles in a series of situations with a strong impact on her/his life. Four such situations were modeled this year and refer to searching for: (i) a bank loan, (ii) an accommodation, (iii) a job as waitress/waiter, and (iv) a job in IT. The inclusion of several situations is interesting in order to make it clear to the end-users of the system that the same image will be interpreted differently depending on the context. The final objective of the task is to encourage the development of efficient user feedback, such as the YDSYO Android app https://ydsyo.app/.

*** TASK ***

Given an annotated training dataset, participants will propose machine learning techniques which provide a ranking of test user profiles in each situation which is as close as possible to a human ranking of the test profiles.

*** DATA SET ***

This is the second edition of the task. A data set of 1,000 user profiles with 100 photos per profile was created and annotated with an appeal score for a series of real-life situations via crowdsourcing. Participants in the experiment were asked to provide a global rating of each profile in each modeled situation using a 7-point Likert scale ranging from strongly unappealing to strongly appealing. An averaged and normalized appeal score will be used to create a ground truth composed of ranked users in each modeled situation. User profiles were created by repurposing a subset of the YFCC100M dataset.

*** METRICS ***

Participants in the task will provide an automatic ranking of user profiles for each situation, which will be compared to a ground truth ranking obtained by crowdsourcing. The correlation between the two ranked lists will be measured using Pearson's correlation coefficient.
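As a rough illustration of this metric, the sketch below compares a predicted ranking against the ground-truth ranking per situation and averages the resulting Pearson coefficients. It is a minimal sketch, not the official evaluation script: the helper names (`pearson`, `task_score`) and the profile/situation identifiers are hypothetical, and correlation is computed here over rank positions.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def task_score(predicted, ground_truth):
    """Average, over situations, of the Pearson correlation between the
    predicted and ground-truth rank position of each profile.
    Both arguments map situation name -> list of profile IDs, best first."""
    scores = []
    for situation, gt_ranking in ground_truth.items():
        pred_ranking = predicted[situation]
        # map each profile ID to its rank position in either list
        gt_pos = {p: i for i, p in enumerate(gt_ranking)}
        pred_pos = {p: i for i, p in enumerate(pred_ranking)}
        profiles = list(gt_pos)
        scores.append(pearson([pred_pos[p] for p in profiles],
                              [gt_pos[p] for p in profiles]))
    return mean(scores)

# e.g., a prediction identical to the ground truth scores 1.0,
# and a fully reversed one scores -1.0.
```

A Pearson correlation on rank positions is equivalent to Spearman's rank correlation when there are no ties; submissions against the real data would of course use the profile IDs and situation labels distributed with the task.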
The final score of each participating team will be obtained by averaging the correlations obtained for the individual situations.

*** IMPORTANT DATES ***
- Task registration opens: November 15, 2021
- Run submission: May 6, 2022
- Working notes submission: May 27, 2022
- CLEF 2022 conference: September 5-8, Bologna, Italy

*** REGISTER ***
https://www.imageclef.org/2022#registration

*** OVERALL COORDINATION ***
Adrian Popescu, CEA LIST, France
Jérôme Deshayes-Chossart, CEA LIST, France
Hugo Schindler, CEA LIST, France
Bogdan Ionescu, Politehnica University of Bucharest, Romania

On behalf of the Organizers,
Bogdan Ionescu
https://www.AIMultimediaLab.ro/

From steve at bu.edu Sat Jan 29 12:12:59 2022
From: steve at bu.edu (Grossberg, Stephen)
Date: Sat, 29 Jan 2022 17:12:59 +0000
Subject: Connectionists: Sharing my happiness upon receiving a 2022 PROSE award for my book

Dear Connectionist colleagues,

I am happy to report that my magnum opus

Conscious Mind, Resonant Brain: How Each Brain Makes a Mind
https://lnkd.in/ePfN_K85

has won the 2022 PROSE Award in Neuroscience from the Association of American Publishers.

Best,

Steve Grossberg

From george at cs.ucy.ac.cy Sat Jan 29 07:03:48 2022
From: george at cs.ucy.ac.cy (George A.
Papadopoulos) Date: Sat, 29 Jan 2022 14:03:48 +0200 Subject: Connectionists: 4th International Workshop on Intelligent Systems for the Internet of Things (ISIoT 2022) Message-ID: 4th International Workshop on Intelligent Systems for the Internet of Things (ISIoT 2022) Link: https://sites.google.com/view/isiot2022 Important Dates Paper Submission: 15 March 2022 Acceptance Notification: 10 April 2022 Camera Ready: 20 April 2022 Scope The Internet of Things is a paradigm that assumes a pervasive presence in the environment of many smart things, including sensors, actuators, embedded systems and other similar devices. The development of the IoT, however, has reached a crossroads: without intelligence, an IoT system acts as an ordinary information-transfer system. Emerging IoT applications in various fields, including smart city, smart home, smart grid, e-health, intelligent transportation systems, etc., require trustworthy networking solutions that are resilient against high mobility, high density, disasters, infrastructure failures, cyber-attacks, and other disruptions. This interdisciplinary landscape of IoT applications demands that researchers from different areas such as machine learning, computational intelligence, optimization, distributed computing, embedded systems, and big data synergize their efforts to better understand the untapped opportunities and produce highly efficient, deployable, intelligent IoT systems.
Topics (not limited to):
- Computational Intelligence for the IoT
- Performance evaluation for the IoT
- Machine Learning in IoT and Smart Systems
- Dependability and Fault Tolerance in Smart Systems
- Big Data and Data Mining for the IoT
- Intelligent Network Technologies for the IoT
- Swarm and Multi-Agent Models for IoT
- Embedded Multiagent Systems
- Collective and Collaborative Robotics
- Robotics for the IoT
- Applications of the IoT
- Smart Systems in different industry verticals
- IoT platforms
- Blockchain for the IoT
- IoT Intrusion Detection
- Trust and Privacy in IoT
- Cloud Computing for the IoT
- Data Fusion from the IoT
Chairs:
Dr. Kyriakos Vamvoudakis, Georgia Institute of Technology
Dr. Vasos Vassiliou, University of Cyprus and CYENS Research Center
Dr. Zinon Zinonos, Municipality of Pafos and CYENS Research Center
Submission instructions Authors are invited to submit original unpublished manuscripts reporting applied or technical research. Accepted and presented papers will be published in the same volume as the DCOSS 2022 conference proceedings. All papers will be reviewed by Technical Program Committee members and selected papers will be organized for presentation at the workshop. All submissions will be exclusively electronic with a maximum length of eight (8) printed pages including title, authors, abstract, figures, diagrams, references and attachments. Articles must be prepared in English following the IEEE two-column Manuscript Templates for Conference Proceedings and submitted in PDF format only.
Submission link: https://easychair.org/conference?conf=isiot2022 From ASIM.ROY at asu.edu Sun Jan 30 22:37:28 2022 From: ASIM.ROY at asu.edu (Asim Roy) Date: Mon, 31 Jan 2022 03:37:28 +0000 Subject: Connectionists: Call for Papers - Cognitive Computation Special Issue - "What AI and Neuroscience Can Learn from Each Other: Open Problems in Models and Theories" - New Submission Deadline - April 15, 2022 Message-ID: Dear Colleagues, This Special Issue is about stepping back and taking a look at where we are in terms of understanding the brain. We want to publish short position papers, maximum 10 pages long. We are aiming for quick reviews, about two weeks. Further details are provided below. The Guest Editors decided to extend the deadline to April 15, 2022. Asim Roy Professor, Information Systems Arizona State University Asim Roy | iSearch (asu.edu) Lifeboat Foundation Bios: Professor Asim Roy ------------------------------------------------------------------------------------------------------------------------------ Special Issue Call for Papers: What AI and Neuroscience Can Learn from Each Other: Open Problems in Models and Theories Guest Editors: * (Lead) Asim Roy, Arizona State University, USA, E-mail: ASIM.ROY at asu.edu * Claudius Gros, Institute for Theoretical Physics, Goethe University Frankfurt, Germany, E-mail: gros at itp.uni-frankfurt.de * Juyang Weng, Brain Mind Institute, USA, Email: weng at msu.edu * Jean-Philippe Thivierge, University of Ottawa, Canada, E-mail: Jean-Philippe.Thivierge at uottawa.ca * Tsvi Achler, Optimizing Mind, Email: achler at optimizingmind.com * Ali A. Minai, University of Cincinnati, USA, E-mail: Ali.Minai at uc.edu Aim and Motivation: Arguments about the brain and how it works are endless. Despite some conflicting conjectures and theories that have existed for decades without resolution, we have made significant progress in creating brain-like computational systems to solve some important engineering problems. 
It would be a good idea to step back and examine where we are in terms of our understanding of the brain and potential problems with the brain-like AI systems that have been successful so far. For this special issue of Cognitive Computation, we invite thoughtful articles on some of the issues that we have failed to address and comprehend in our journey so far in understanding the brain. We aim for rapid peer-reviews by experts (about two weeks) for all selected submissions and plan to publish the special issue papers on a rolling basis from early 2022. Topics: We plan to publish a collection of short articles on a variety of topics that could be asking new questions, proposing new theories, resolving conflicts between existing theories, and proposing new types of computational models that are brain-like. Deadlines: SI submissions deadline: 15 February 2022 First notification of acceptance: 11 March 2022 Submission of revised papers: 10 April 2022 Final notification to authors: 30 April 2022 Publication of SI: Rolling basis (2022) Submission Instruction: Prepare your paper in accordance with the Journal guidelines: www.springer.com/12559. Submit manuscripts at: http://www.editorialmanager.com/cogn/. Select "SI: AI and Neuroscience" for the special issue under "Additional Information." Your paper must contain significant and original work that has not been published nor submitted to any journals. All papers will be reviewed following standard reviewing procedures of the Journal. From poirazi at imbb.forth.gr Mon Jan 31 08:44:38 2022 From: poirazi at imbb.forth.gr (Yiota Poirazi) Date: Mon, 31 Jan 2022 15:44:38 +0200 Subject: Connectionists: DENDRITES 2022: abstract submission extended to Feb. 8th, 2022 In-Reply-To: References: Message-ID: Dear friends and colleagues, Due to popular demand we have decided to extend the deadline for abstract submission until *February 8th, 2022*.
Abstract submissions have already exceeded our expectations and DENDRITES 2022 promises to be a really exciting meeting! Looking forward to seeing you in May, Yiota, Kristen, Michael and Matthew On Sun, Jan 23, 2022 at 1:59 PM Yiota Poirazi wrote: > DENDRITES 2022 > EMBO Workshop on Dendritic Anatomy, Molecules and Function > Heraklion, Crete, Greece > 23-26 May 2022 > http://meetings.embo.org/event/20-dendrites > > Dear Colleagues, > > We are pleased to announce the solicitation of abstracts for short oral or > poster presentations at the EMBO Workshop on DENDRITES 2022, which will > take place in Heraklion, Crete on 23-26 May 2022. > > This is the 4th of a very successful series of meetings on the island of > Crete that is dedicated to dendrites. The meeting will bring together > scientific leaders from around the globe to present their theoretical and > experimental work on dendrites. The meeting program is designed to > facilitate discussions of new ideas and discoveries, in a relaxed > atmosphere that emphasizes interaction. We have secured an exciting list > of speakers, including: > > Anthony Holtmaat, University of Geneva, CH > Attila Losonczy, Columbia University, US > Christine Grienberger, Brandeis University, US > David DiGregorio, Institut Pasteur, FR > Dan Johnston, University of Texas at Austin, US > Hermann Cuntz, Ernst Strängmann Institute for Neuroscience, DE > Holly Cline, The Scripps Research Institute, US > Idan Segev, Hebrew University, IL > Jackie Schiller, Technion, IL > Judit Makara, Institute of Experimental Medicine of the Hungarian Academy > of Sciences, HU > Julijana Gjorgjieva, MPI for Brain Research, DE > Karen Zito, University of California Davis, US > Linnaea Ostroff, University of Connecticut, US > Lisa Topolnik, Université
Laval, Canada, CA > Mark Harnett, MIT, US > Peter Jonas, Institute of Science and Technology Austria, AT > Terry Sejnowski, Salk Institute, US > Wenbiao Gan, New York University, US > > Please register (no payment required) and submit your abstract online at: > > http://meetings.embo.org/event/20-dendrites > > Submissions of abstracts are due by *February 1st, 2022* > > Notifications will be provided by February 28th, 2022 > Registration payment due by April 15th, 2022 > > Potential attendees are strongly encouraged to submit an abstract as presenters > will have registration priority. > > For more information about the conference, please refer to our web site > or send email to info at mitos.com.gr > > We look forward to seeing you in person at DENDRITES 2022! > > The organizers, > Yiota Poirazi, Kristen Harris, Matthew Larkum, Michael Häusser > -- > Panayiota Poirazi, Ph.D. > Research Director > Institute of Molecular Biology and Biotechnology (IMBB) > Foundation of Research and Technology-Hellas (FORTH) > Vassilika Vouton, P.O.Box 1385, GR 70013, Heraklion, Crete > GREECE > Tel: +30 2810-391139 / -391238 > Fax: +30 2810-391101 > Email: poirazi at imbb.forth.gr > Lab site: www.dendrites.gr > > From U.K.Gadiraju at tudelft.nl Mon Jan 31 08:47:59 2022 From: U.K.Gadiraju at tudelft.nl (Ujwal Gadiraju) Date: Mon, 31 Jan 2022 13:47:59 +0000 Subject: Connectionists: [ACM HT2022] Doctoral Consortium Message-ID: <3b1ed768a4c74276a6c171d38e85d5b0@tudelft.nl> ACM Hypertext 2022 Doctoral Consortium The Doctoral Consortium at ACM Hypertext 2022 is open to graduate students (in both the Ph.D. and Master's programs). We welcome submissions representing a broad spectrum of research topics relevant to the Hypertext community. Participants will benefit from the advice of senior researchers in the field and the interaction with peers at a similar stage of their careers.
This track will provide these students an opportunity to: * Present and discuss their research ideas to experienced scholars in a supportive, formative, and yet critical environment; * Explore and develop their research interests under the guidance of distinguished researchers from the field who will provide constructive feedback and advice; * Explore career pathways available after completing their MS or Ph.D. degree; * Network and build collaborations with other members of the community. Students need to document their doctoral research in a brief submission, which the corresponding committee will evaluate. High-quality applications will be selected for presentation at a DC Session as part of the conference. Each student with an accepted submission will be assigned a mentor who will provide feedback on the student's work and discuss the proposed research with the student and the audience at the consortium. Important Dates ============ - Paper Submission: February 27, 2022 - Notification to authors: April 4, 2022 Submission ========= Students interested in engaging in detailed discussions on their research at the Doctoral Consortium are invited to submit a 5-page paper (maximum) + references describing their work. See all the information about this call on the website: https://ht.acm.org/ht2022/call-for-doctoral-consortium-papers/ Organization: ========== GENERAL CHAIRS Alejandro Bellogin, Universidad Autónoma de Madrid Ludovico Boratto, University of Cagliari DOCTORAL CONSORTIUM CHAIR Tommaso Di Noia, Politecnico di Bari We look forward to your submissions! Best, Ujwal Publicity Chair, ACM HT'22 ____________________________________ Dr. Ir.
Ujwal Gadiraju Assistant Professor Web Information Systems Delft University of Technology The Netherlands W: https://wis.ewi.tudelft.nl/gadiraju W: https://www.ujwalgadiraju.com E: u.k.gadiraju at tudelft.nl https://www.uncage.info https://www.academicfringe.org From vincentthunder2011 at gmail.com Mon Jan 31 12:51:04 2022 From: vincentthunder2011 at gmail.com (Jui-Yi Tsai) Date: Tue, 1 Feb 2022 01:51:04 +0800 Subject: Connectionists: Call for Papers - The 7th International Workshop on Mobile Data Management, Mining, and Computing on Social Networks (MobiSocial 2022) Message-ID: *Call for Papers - The 7th International Workshop on Mobile Data Management, Mining, and Computing on Social Networks (MobiSocial 2022)* Social network and mining research has advanced rapidly with the prevalence of online social websites and instant messaging systems. In addition, thanks to recent advances in deep learning, many novel applications with mobile devices and social networks have been proposed and deployed. These social network systems are usually characterized by complex network structures and abundant contextual information. Moreover, by incorporating the spatial dimension, mobile and location-based social networks are now immersed in people's everyday life via numerous innovative websites. In addition, mobile social networks can be exploited to foster many interesting applications and analyses, such as recommendations of locations and travel planning of friends, location-based viral marketing, community discovery, group mobility and behavior modeling.
The 7th International Workshop on Mobile Data Management, Mining, and Computing on Social Networks (MobiSocial 2022) will serve as a forum for researchers and technologists to discuss the state-of-the-art, present their contributions, and set future directions in data management, machine learning and knowledge mining for mobile social networks. The topics of interest related to this workshop include, but are not limited to: - Mobile sensing - Deep learning for mobile social networks - Blockchain for social networks - Graph mining - Contextual mobile social network analysis - Storing, indexing and querying of graph data - Distributed graph processing - Mobile social interaction and personalized search - Dynamics and evolution patterns of social networks, trend prediction - Analysis and mining of location-based social networks - Classification models and their applications in social recommender systems - Processing of social media streams - Influence models and their applications in social environments - Competitive viral marketing - Privacy and security in social networks - Privacy-preserving and precision-aware protocol/algorithm design for epidemic data collection - Mobile and distributed learning (e.g., federated learning, gossip learning, etc.) on Social Internet-of-Things (SIoTs) for epidemic source identification - Advanced AI techniques to support timely interactivity data analysis for Metaverse - Mobile healthcare - Modeling trust and reputation in mobile social networks
- Moving object tracking, indexing and retrieval for social applications - Location and trajectory mining of social data - Opinion mining for location-related information - Location privacy, data sharing and security - Mobile and ubiquitous computing for location-based social networks - Cloud computing for location-based social data - Innovative mobile social networking applications - Multidisciplinary and interdisciplinary research on mobile social networks - Model compression - Cross-layer design for SIoT and contact networks, e.g., cross-layer interaction mining, cross-layer inference, federated multimodal search from interactive SIoT and contact networks, etc. - Epidemiology-aware SIoT alarms for preventing and containing epidemic spread, and mobility- and social-aware recommendation for trip, activity, and route with social distancing Important Dates: Paper submission deadline: March 25, 2022 Notification of acceptance: April 11, 2022 Camera-ready due: April 22, 2022 Workshop: June 6 (tentative) Workshop website: https://sites.google.com/view/mobisocial-2022/home From juergen at idsia.ch Mon Jan 31 11:38:12 2022 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Mon, 31 Jan 2022 16:38:12 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> <3155202C-080E-4BE7-84B6-A567E306AC1D@supsi.ch> <58AC5011-BF6A-453F-9A5E-FAE0F63E2B02@supsi.ch> Message-ID: Steve, do you really want to erase the very origins of shallow learning (Gauss & Legendre ~1800) and deep learning (DL, Ivakhnenko & Lapa 1965) from the field's history? Why?
Because they did not use modern terminology such as "artificial neural nets (NNs)" and "learning internal representations"? Names change all the time like fashions; the only thing that counts is the math. Not only mathematicians but also psychologists like yourself will agree. Again: the linear regressor of Legendre & Gauss is formally identical to what was much later called a linear NN for function approximation (FA), minimizing mean squared error, still widely used today. No history of "shallow learning" (without adaptive hidden layers) is complete without this original shallow learner of 2 centuries ago. Many NN courses actually introduce simple NNs in this mathematically and historically correct way, then proceed to DL NNs with several adaptive hidden layers. And of course, no DL history is complete without the origins of functional DL in 1965 [DEEP1-2]. Back then, Ivakhnenko and Lapa published the first general, working DL algorithm for supervised deep feedforward multilayer perceptrons (MLPs) with arbitrarily many layers of neuron-like elements, using nonlinear activation functions (actually Kolmogorov-Gabor polynomials) that combine both additions (like in linear NNs) and multiplications (basically they had deep NNs with gates, including higher order gates). They incrementally trained and pruned their DL networks layer by layer to learn internal representations, using regression and a separate validation set (network depth > 7 by 1971). They had standard justifications of DL such as: "a multilayered structure is a computationally feasible way to implement multinomials of very high degree" [DEEP2] (that cannot be approximated by simple linear NNs). Of course, their DL was automated, and many people have used it up to the 2000s - just follow the numerous citations. I don't get your comments about Ivakhnenko's DL and function approximation (FA). FA is for all kinds of functions, including your "cognitive or perceptual or motor functions." 
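The claim made here, that the Legendre/Gauss linear regressor is formally identical to a single-layer linear net minimizing mean squared error, is easy to check numerically. A minimal sketch with synthetic data (all variable names are illustrative, not from any cited work): the closed-form least-squares solution and a bias-free linear "net" trained by plain gradient descent on MSE converge to the same weights.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # inputs: 100 samples, 3 features
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true                       # noiseless linear targets

# Closed-form least squares, as in Legendre/Gauss (~1800).
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# The same model read as a "linear NN": one weight layer, no hidden
# units, trained by gradient descent on mean squared error.
w_nn = np.zeros(3)
for _ in range(2000):
    grad = 2.0 * X.T @ (X @ w_nn - y) / len(y)   # d(MSE)/dw
    w_nn -= 0.1 * grad
```

After training, `w_nn` agrees with `w_ls` (and with `w_true`) to numerical precision: both procedures minimize the same convex quadratic loss.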
NNs are used as FAs all the time. Like other NNs, Ivakhnenko's nets can be used as FAs for your motor control problems. You boldly claim: "This was not in the intellectual space" of Ivakhnenko's method. But obviously it was. Interestingly, 2 years later, Amari (1967-68) [GD1-2] trained his deep MLPs through a different DL method, namely, stochastic gradient descent (1951-52)[STO51-52]. His paper also did not contain the "modern" expression "learning internal representations in NNs." But that's what it was about. Math and algorithms are immune to rebranding. You may not like the fact that neither the original shallow learning (Gauss & Legendre ~1800) nor the original working DL (Ivakhnenko & Lapa 1965; Amari 1967) were biologically inspired. They were motivated through math and problem solving. The NN rebranding came later. Proper scientific credit assignment does not care for changes in terminology. BTW, unfortunately, Minsky & Papert [M69] made some people think that Rosenblatt [R58-62] had only linear NNs plus threshold functions. But actually he had much more interesting MLPs with a non-learning randomized first layer and an adaptive output layer. So Rosenblatt basically had what much later was rebranded as "Extreme Learning Machines (ELMs)." The revisionist narrative of ELMs (see this web site https://elmorigin.wixsite.com/originofelm) is a bit like the revisionist narrative of DL criticized by my report. Some ELM guys apparently thought they can get away with blatant improper credit assignment. After all, the criticized DL guys seemed to get away with it on an even grander scale. They called themselves the "DL conspiracy" [DLC]; the "ELM conspiracy" is similar. What an embarrassing lack of maturity of our field. Fortunately, more and more ML researchers are helping to set things straight. "In science, by definition, the facts will always win in the end. As long as the facts have not yet won it's not yet the end." 
[T21v1] References as always under https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html Jürgen > On 27 Jan 2022, at 17:37, Stephen José Hanson wrote: > > > > Juergen, I have read through the GMDH paper and a 1971 review paper by Ivakhnenko. These are papers about function approximation. The method proposes to use series of polynomial functions that are stacked in filtered sets. The filtered sets are chosen based on best fit, and from what I can tell are manually grown, so this must have been a tedious and slow process (I assume it could be automated). So are the GMDHs "deep", in that they are stacked 4 deep in figure 1 (8 deep in another)? Interestingly, they are using (with obvious FA justification) polynomials of various degrees. Has this much to do with neural networks? Yes, there were examples initiated by Rumelhart (and me: https://www.routledge.com/Backpropagation-Theory-Architectures-and-Applications/Chauvin-Rumelhart/p/book/9780805812596), based on poly-synaptic dendrite complexity, but not in the GMDH paper, which was specifically about function approximation. Ivakhnenko lists four reasons for the approach they took: mainly reducing data size and being more efficient with the data one had. No mention of "internal representations". > > So when Terry talks about "internal representations" -- does he mean function approximation? Not so much. That of course is part of this, but the actual focus is on cognitive or perceptual or motor functions. Representation in the brain. Hidden units (which could be polynomials) cluster and project and model the input features wrt the function constraints conditioned by training data. This is more similar to model specification through function space search.
And the original Rumelhart meaning of internal representation in PDP vol. 1 was, in one case, about representing certain binary functions (XOR), but more generally about the need for "neurons" (inter-neurons) explicitly between input (sensory) and output (motor). Consider NETTALK, in which I did the first hierarchical clustering of the hidden units over the input features (letters). What appeared probably wasn't surprising, but without model specification, the network (with hidden units) learned VOWEL and CONSONANT distinctions just from training (Hanson & Burr, 1990). This would be a clear example of "internal representations" in the sense of Rumelhart. This was not in the intellectual space of Ivakhnenko's Group Method of Data Handling. (Some of this is discussed in more detail in recent conversations with Terry Sejnowski, and another one to appear shortly with Geoff Hinton; see AIHUB.org under Opinions.) > > Now I suppose one could be cynical and opportunistic, and even conclude that if you wanted to get more clicks, rather than title your article GROUP METHOD OF DATA HANDLING, you should at least consider: NEURAL NETWORKS FOR DATA HANDLING, even if you didn't think neural networks had anything to do with your algorithm -- after all, everyone else is! Might get it published in this time frame, or even read. This is not scholarship. These publication threads are related but not dependent. And although they diverge, they could be informative if one were to try to develop polynomial inductive growth networks (see Fahlman, 1989: Cascade-Correlation, and Hanson, 1990: Meiosis nets) for motor control in the brain. But that's not what happened. I think, like Gauss, you need to drop this specific claim as well. > > With best regards, > > Steve On 25 Jan 2022, at 20:03, Schmidhuber Juergen wrote: PS: Terry, you also wrote: "Our precious time is better spent moving the field forward."
However, it seems like in recent years much of your own precious time has gone to promulgating a revisionist history of deep learning (and writing the corresponding "amicus curiae" letters to award committees). For a recent example, your 2020 deep learning survey in PNAS [S20] claims that your 1985 Boltzmann machine [BM] was the first NN to learn internal representations. This paper [BM] neither cited the internal representations learnt by Ivakhnenko & Lapa's deep nets in 1965 [DEEP1-2] nor those learnt by Amari's stochastic gradient descent for MLPs in 1967-1968 [GD1-2]. Nor did your recent survey [S20] attempt to correct this as good science should strive to do. On the other hand, it seems you celebrated your co-author's birthday in a special session while you were head of NeurIPS, instead of correcting these inaccuracies and celebrating the true pioneers of deep learning, such as Ivakhnenko and Amari. Even your recent interview https://blog.paperspace.com/terry-sejnowski-boltzmann-machines/ claims: "Our goal was to try to take a network with multiple layers - an input layer, an output layer and layers in between - and make it learn. It was generally thought, because of early work that was done in AI in the 60s, that no one would ever find such a learning algorithm because it was just too mathematically difficult." You wrote this although you knew exactly that such learning algorithms were first created in the 1960s, and that they worked. You are a well-known scientist, head of NeurIPS, and chief editor of a major journal. You must correct this. We must all be better than this as scientists. We owe it to both the past, present, and future scientists as well as those we ultimately serve. The last paragraph of my report https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html quotes Elvis Presley: "Truth is like the sun. You can shut it out for a time, but it ain't goin' away."
I wonder how the future will reflect on the choices we make now. Jürgen > On 3 Jan 2022, at 11:38, Schmidhuber Juergen wrote: > > Terry, please don't throw smoke candles like that! > > This is not about basic math such as Calculus (actually first published by Leibniz; later Newton was also credited for his unpublished work; Archimedes already had special cases thereof over 2000 years ago; the Indian Kerala school made essential contributions around 1400). In fact, my report addresses such smoke candles in Sec. XII: "Some claim that 'backpropagation' is just the chain rule of Leibniz (1676) & L'Hopital (1696).' No, it is the efficient way of applying the chain rule to big networks with differentiable nodes (there are also many inefficient ways of doing this). It was not published until 1970 [BP1]." > > You write: "All these threads will be sorted out by historians one hundred years from now." To answer that, let me just cut and paste the last sentence of my conclusions: "However, today's scientists won't have to wait for AI historians to establish proper credit assignment. It is easy enough to do the right thing right now." > > You write: "let us be good role models and mentors" to the new generation. Then please do what's right! Your recent survey [S20] does not help. It's mentioned in my report as follows: "ACM seems to be influenced by a misleading 'history of deep learning' propagated by LBH & co-authors, e.g., Sejnowski [S20] (see Sec. XIII). It goes more or less like this: 'In 1969, Minsky & Papert [M69] showed that shallow NNs without hidden layers are very limited and the field was abandoned until a new generation of neural network researchers took a fresh look at the problem in the 1980s [S20].'
However, as mentioned above, the 1969 book [M69] addressed a 'problem' of Gauss & Legendre's shallow learning (~1800)[DL1-2] that had already been solved 4 years prior by Ivakhnenko & Lapa's popular deep learning method [DEEP1-2][DL2] (and then also by Amari's SGD for MLPs [GD1-2]). Minsky was apparently unaware of this and failed to correct it later [HIN](Sec. I).... deep learning research was alive and kicking also in the 1970s, especially outside of the Anglosphere." > > Just follow ACM's Code of Ethics and Professional Conduct [ACM18] which states: "Computing professionals should therefore credit the creators of ideas, inventions, work, and artifacts, and respect copyrights, patents, trade secrets, license agreements, and other methods of protecting authors' works." No need to wait for 100 years. > > Jürgen > > > > > >> On 2 Jan 2022, at 23:29, Terry Sejnowski wrote: >> >> We would be remiss not to acknowledge that backprop would not be possible without the calculus, >> so Isaac Newton should also have been given credit, at least as much credit as Gauss. >> >> All these threads will be sorted out by historians one hundred years from now. >> Our precious time is better spent moving the field forward. There is much more to discover. >> >> A new generation with better computational and mathematical tools than we had back >> in the last century has joined us, so let us be good role models and mentors to them. >> >> Terry