From cassio at idsia.ch Thu Jan 2 16:19:54 2014 From: cassio at idsia.ch (Cassio P. de Campos) Date: Thu, 2 Jan 2014 22:19:54 +0100 Subject: Connectionists: 12th Brazilian Meeting on Bayesian Statistics - LAST CALL for Papers Message-ID: LAST CALL FOR PAPERS 12th Brazilian Meeting on Bayesian Statistics Atibaia, March 10-14, 2014 - http://www.ime.usp.br/~isbra/ebeb/ ABSTRACT DEADLINE: January 10, 2014 PAPER DEADLINE: February 10, 2014 (for accepted abstracts) The Brazilian Meeting on Bayesian Statistics (EBEB 2014) invites submissions of papers on all topics related to Bayesian Statistics. Full papers accepted to the proceedings will be published in a book from the series "Springer Proceedings in Mathematics & Statistics" (http://www.springer.com/series/10533). Submissions must be formatted accordingly. Papers (including figures and text) are limited to 10 pages in length. EBEB 2014 will take place in Atibaia (Brazil) from March 10 to 14, in a beautiful countryside hotel specializing in horse riding, precisely one week after the famous Brazilian carnival. The venue is conveniently located near Sao Paulo-Guarulhos international airport (a 40-minute trip; the organizers will help coordinate transport for participants to reach the venue). These meetings aim to strengthen research on Bayesian methods and to widen their application. They also provide an environment where Brazilian and international researchers can collaborate, present their most recent developments and discuss open problems. Guest speakers from Europe, the USA and South America will give presentations during the event. This year's meeting has a particular focus on recent developments across the many viewpoints of Bayesian statistics: computational, theoretical, methodological and applied. EBEB 2014 is organized by ISBrA, the Brazilian Chapter of ISBA. Normal Submission Track: January 10, 2014: Abstract Submission.
January 25, 2014: Abstract Decision Notification. January 31, 2014: Early Registration. February 10, 2014: Full Paper Submission (for accepted abstracts). April 07, 2014: Decision Notification (for the proceedings). July 01, 2014: Final camera-ready version. More details about the submission procedure are available online. ++++++++++++++++++++++++++++++++++++++++ (We apologize if you receive multiple copies of this announcement; we hope to reach the greatest possible number of people, and finding a trade-off is not an easy task.) From cie.conference.series at gmail.com Fri Jan 3 09:26:14 2014 From: cie.conference.series at gmail.com (CiE Conference Series) Date: Fri, 3 Jan 2014 14:26:14 +0000 (GMT) Subject: Connectionists: CiE 2014: Language, Life, Limits - Budapest, 23-27 June 2014 Message-ID: 3rd CALL FOR PAPERS: CiE 2014: Language, Life, Limits Budapest, Hungary June 23 - 27, 2014 http://cie2014.inf.elte.hu IMPORTANT DATES: Submission Deadline for LNCS: 10 January 2014 Notification of authors: 3 March 2014 Deadline for final revisions: 31 March 2014 FUNDING and AWARDS: CiE 2014 has received funding for student participation from the European Association for Theoretical Computer Science (EATCS). Please contact the PC chairs if you are interested. The best student paper will receive an award sponsored by Springer. CiE 2014 is the tenth conference organized by CiE (Computability in Europe), a European association of mathematicians, logicians, computer scientists, philosophers, physicists and others interested in new developments in computability and their underlying significance for the real world. Previous meetings have taken place in Amsterdam (2005), Swansea (2006), Siena (2007), Athens (2008), Heidelberg (2009), Ponta Delgada (2010), Sofia (2011), Cambridge (2012), and Milan (2013).
The motto of CiE 2014, "Language, Life, Limits", puts a special focus on the relations between computational linguistics, natural and biological computing, and more traditional fields of computability theory. This is to be understood in its broadest sense, including computational aspects of problems in linguistics, the study of models of computation and algorithms inspired by physical and biological approaches, and the limits (and non-limits) of computability in the different models of computation arising from such approaches. As with previous CiE conferences, the overarching aim is to strengthen the mutual benefits of analyzing traditional and new computational paradigms in their corresponding frameworks, both with respect to practical applications and a deeper theoretical understanding. We particularly invite papers that build bridges between different parts of the research community. For topics covered by the conference, please visit http://cie2014.inf.elte.hu/?Topics We particularly welcome submissions in emerging areas, such as bioinformatics and natural computation, where they have a basic connection with computability.
TUTORIAL SPEAKERS: Wolfgang Thomas (RWTH Aachen) Peter Gruenwald (CWI, Amsterdam) INVITED SPEAKERS: Lev Beklemishev (Steklov Mathematical Institute, Moscow) Alessandra Carbone (Universite Pierre et Marie Curie and CNRS Paris) Maribel Fernandez (King's College London) Przemyslaw Prusinkiewicz (University of Calgary) Eva Tardos (Cornell University) Albert Visser (Utrecht University) SPECIAL SESSIONS: History and Philosophy of Computing (organizers: Liesbeth de Mol, Giuseppe Primiero) Computational Linguistics (organizers: Maria Dolores Jimenez-Lopez, Gabor Proszeky) Computability Theory (organizers: Karen Lange, Barbara Csima) Bio-inspired Computation (organizers: Marian Gheorghe, Florin Manea) Online Algorithms (organizers: Joan Boyar, Csanad Imreh) Complexity in Automata Theory (organizers: Markus Lohrey, Giovanni Pighizzini) Contributed papers will be selected from submissions received by the PROGRAM COMMITTEE consisting of: * Gerard Alberts (Amsterdam) * Sandra Alves (Porto) * Hajnal Andreka (Budapest) * Luis Antunes (Porto) * Arnold Beckmann (Swansea) * Laurent Bienvenu (Paris) * Paola Bonizzoni (Milan) * Olivier Bournez (Palaiseau) * Vasco Brattka (Munich) * Bruno Codenotti (Pisa) * Erzsebet Csuhaj-Varju (Budapest, co-chair) * Barry Cooper (Leeds) * Michael J. Dinneen (Auckland) * Erich Graedel (Aachen) * Marie Hicks (Chicago IL) * Natasha Jonoska (Tampa FL) * Jarkko Kari (Turku) * Elham Kashefi (Edinburgh) * Viv Kendon (Leeds) * Satoshi Kobayashi (Tokyo) * Andras Kornai (Budapest) * Marcus Kracht (Bielefeld) * Benedikt Loewe (Amsterdam & Hamburg) * Klaus Meer (Cottbus, co-chair) * Joseph R.
Mileti (Grinnell IA) * Georg Moser (Innsbruck) * Benedek Nagy (Debrecen) * Sara Negri (Helsinki) * Thomas Schwentick (Dortmund) * Neil Thapen (Prague) * Peter van Emde Boas (Amsterdam) * Xizhong Zheng (Glenside PA) The PROGRAMME COMMITTEE cordially invites all researchers (European and non-European) in computability related areas to submit their papers (in PDF format, max 10 pages using the LNCS style) for presentation at CiE 2014. The submission site https://www.easychair.org/conferences/?conf=cie2014 is open. For submission instructions consult http://cie2014.inf.elte.hu/?Submission_Instructions The CONFERENCE PROCEEDINGS will be published by LNCS, Springer Verlag. Contact: Erzsebet Csuhaj-Varju - csuhaj[at]inf.elte.hu Website: http://cie2014.inf.elte.hu/ __________________________________________________________________________ ASSOCIATION COMPUTABILITY IN EUROPE http://www.computability.org.uk CiE Conference Series http://www.illc.uva.nl/CiE CiE 2014: Language, Life, Limits http://cie2014.inf.elte.hu CiE Membership Application Form http://www.lix.polytechnique.fr/CIE AssociationCiE on Twitter http://twitter.com/AssociationCiE __________________________________________________________________________ From ahu at cs.stir.ac.uk Thu Jan 2 22:28:46 2014 From: ahu at cs.stir.ac.uk (Dr Amir Hussain) Date: Fri, 3 Jan 2014 03:28:46 +0000 Subject: Connectionists: Call for Papers: International Workshop on Autonomous Cognitive Robotics, Stirling, Scotland, UK, 27-28 March 2014 Message-ID: Dear friends **with advance apologies for any cross-postings** Happy New Year! The COGROB'2014 Call for Papers below may be of interest - we would very much appreciate if you could also kindly help circulate the Call to any interested colleagues and friends. 
Details of the Workshop and invited Speakers can also be found here: http://www.cs.stir.ac.uk/~ahu/COGROB2014 Prospective contributors are required to submit an abstract of no more than 300 words (by the 1 Feb 2014 deadline) to: eya at cs.stir.ac.uk PhD/research students will benefit from a 50% registration fee discount. We look forward to seeing you soon in Stirling! Kindest regards Prof Amir Hussain, University of Stirling, UK & Prof Kevin Gurney, University of Sheffield, UK (Workshop Organisers & Co-Chairs) Important Dates: Abstract submissions deadline: 1 Feb 2014; Decisions Due: 15 Feb 2014 Workshop dates: Thurs 27 - Fri 28 March 2014 ------- Call for Papers/Participation International IEEE/EPSRC Workshop on Autonomous Cognitive Robotics University of Stirling, Stirling, Scotland, UK, 27-28 March 2014 http://www.cs.stir.ac.uk/~ahu/COGROB2014 Cognitive Robotics is an emerging discipline that fuses ideas across several traditional domains and seeks to further our understanding in two problem areas. First, by instantiating brain models in an embodied form, it supplies a strong test of those models, thereby furthering our understanding of neurobiology and cognitive psychology. Second, by harnessing the insights we have about cognition, it is a potentially fruitful source of engineering solutions to a range of problems in robotics, in particular in areas such as intelligent autonomous vehicles and assistive technology. It therefore promises next-generation solutions in the design of urban autonomous vehicles, planetary rovers, and artificial social (e)companions. The aim of this 2-day workshop is to bring together leading international and UK scientists, engineers and industry representatives, alongside European research network and EU funding unit leaders, to present the state of the art in autonomous cognitive systems and robotics research, and to discuss future R&D challenges and opportunities.
We welcome contributions from people working in neurobiology, cognitive psychology, artificial intelligence, control engineering, and computer science who embrace the vision outlined above. If you wish to contribute, please email an abstract of no more than 300 words (by 1 Feb 2014) to: eya at cs.stir.ac.uk Both "works-in-progress" and fully-developed ideas are welcome. Selected abstracts will be invited for oral presentation, but there will also be poster sessions. We welcome submissions from people at all stages of their career. Authors of the best presentations will be invited to submit extended papers for publication in a special issue of Springer's Cognitive Computation journal (http://www.springer.com/12559) Invited Speakers: Juha Heikkilä, Deputy Head of Unit: Robotics & Cognitive Systems, European Commission Prof Vincent Muller, Co-ordinator, EU-Cognition-III: European Network Dr Ingmar Posner, The Oxford Mobile Robotics Group, University of Oxford, UK Prof Tony Pipe, Bristol Robotics Laboratory, UK Prof David Robertson, University of Edinburgh, UK Prof Mike Grimble, Industrial Systems & Control Ltd., & University of Strathclyde, UK Dr Tony Dodd, Dept.
of Automatic Control Systems Engineering, Sheffield University, UK Prof Derong Liu, University of Illinois, USA & Chinese Academy of Sciences, Beijing Workshop Organisers & Co-Chairs: Prof Amir Hussain, University of Stirling, UK & Prof Kevin Gurney, University of Sheffield, UK Important Dates: Abstract submissions deadline: 1 Feb 2014; Decisions Due: 15 Feb 2014 Workshop dates: Thurs 27 - Fri 28 March 2014 Registration: Registration fees will include lunches, refreshments and a copy of the Workshop Abstract Proceedings Early Registration Fee: £100 Early Deadline: 21 Feb 2014 Late Registration Fee: £150 Final deadline: 28 Feb 2014 Registration payment details will be sent on acceptance of Abstract, or can be obtained by emailing: eya at cs.stir.ac.uk They will also be available on-line: http://www.cs.stir.ac.uk/~ahu/COGROB2014 Research students are entitled to a 50% discount (proof of registration is required), and IEEE Members can benefit from a 15% discount. Venue, Travel & Accommodation: The Workshop will be held in the Cottrell Building, Division of Computing Science and Maths, School of Natural Sciences, at the University of Stirling. Travel directions and maps can be found at: http://www.stir.ac.uk/about/getting-here/ Accommodation options include the on-Campus Stirling Management Centre ( http://www.smc.stir.ac.uk/), as well as numerous local B&Bs, for example, see: http://www.stirling.co.uk/accommodation/guesthouse.htm Local Organizing Team: Dr Erfu Yang, Mr. Zeeshan Malik & Ms Grace McArthur Division of Computing Science & Maths, School of Natural Sciences, University of Stirling, UK E-mail: eya at cs.stir.ac.uk -- The University of Stirling has been ranked in the top 12 of UK universities for graduate employment*. 94% of our 2012 graduates were in work and/or further study within six months of graduation. *The Telegraph The University of Stirling is a charity registered in Scotland, number SC 011159.
From akozlov at nada.kth.se Fri Jan 3 04:19:28 2014 From: akozlov at nada.kth.se (akozlov at nada.kth.se) Date: Fri, 3 Jan 2014 10:19:28 +0100 (CET) Subject: Connectionists: PhD positions in Computational Neuroscience Message-ID: EXTENDED DEADLINE: the application deadline has been extended to January 20, 2014. The Erasmus Mundus Joint Doctoral Program "EuroSPIN" (European Study Programme in Neuroinformatics) is inviting applications from students with a solid background in mathematics, physics, computer science, biochemistry or neuroscience (at the master's level or equivalent), in all cases with computer science skills. Documented interest in research-like activities (e.g., master's thesis work or participation in research-related activities) is of great importance. Fluency in English is also required. Four partners participate: - Bernstein Center Freiburg, Germany - KTH Royal Institute of Technology, Sweden - National Centre for Biological Science, India - University of Edinburgh (UoE), UK They are all research leaders in the Neuroinformatics field, but they have complementary strengths. Each student will spend most of their time at two of the partner universities and will receive a joint (or double) PhD degree upon successful completion of the studies. The mobility periods, as well as the courses a student will follow, are tailored individually based on: a) the PhD student's background; b) which constellation of partners is involved; and c) the specific research project. During the PhD period each student has one main supervisor from each of the two universities that grant the PhD degree. There are excellent scholarship opportunities for students accepted to an Erasmus Mundus Joint Doctorate programme. An employment contract will be given to all selected PhD students for the study period, which is 4 years.
If you are interested, go to our webpage: http://www.kth.se/eurospin If you have questions, contact us at EuroSPIN Coordinators, Stockholm, SWEDEN. From carla.piazza at uniud.it Fri Jan 3 09:48:49 2014 From: carla.piazza at uniud.it (Carla Piazza) Date: Fri, 03 Jan 2014 15:48:49 +0100 Subject: Connectionists: FMMB 2014 - Call for Papers Message-ID: <20140103154849.49165lj4fpns015d@webmail.uniud.it> FMMB 2014 - 1st INTL. CONFERENCE ON FORMAL METHODS IN MACRO-BIOLOGY Call for Papers September 22-24, 2014 Noumea, New Caledonia http://fmmb2014.sciencesconf.org/ * AIMS AND SCOPE The purpose of FMMB is to bring together researchers, developers, and students in theoretical computer science, applied mathematics, mathematical and computational biology, interested in studying the application of formal methods to the construction and analysis of models describing biological processes at both micro and macro levels. The topics of interest include (but are not limited to): - representation and analysis of biological systems in formal systems such as: o ordinary and partial differential equation systems, o discrete event systems, infinite state systems, o hybrid discrete-continuous systems, hybrid automata, o cellular automata, multi-agent systems, o stochastic processes, stochastic games, o statistical physics models, o process algebras, process calculi, o rewriting systems, graph grammars, - coupling models and data, inference of models from data, - computability and complexity issues, - modelling and analysis tools, case studies. Application areas particularly solicited include: - environmental biology, ecology, marine science, - agriculture and forestry, - developmental biology, population biology, - epidemiology, medicine, - systems biology, synthetic biology. 
* KEYNOTE SPEAKERS Pieter Collins, Maastricht University, Netherlands, Saso Dzeroski, Jozef Stefan Institute, Slovenia, Radu Grosu, Vienna Technical University, Austria, Steffen Klamt, Max Planck Institute Magdeburg, Germany, Pietro Lio', Cambridge University, UK, Hélène Morlon, Ecole Polytechnique and CNRS, France. * PC CO-CHAIRS François Fages, Inria Paris-Rocquencourt, France, Carla Piazza, University of Udine, Italy. * LOCAL CHAIR Teodor Knapik, University of New Caledonia. * IMPORTANT DATES Submission deadline: April 25 Author notification: June 4 Camera-ready copy due: June 25 * SUBMISSION AND PUBLICATION Regular submissions of 12-20 pages or short submissions of 2 pages in LNCS style should be submitted as PDF files via EasyChair https://www.easychair.org/conferences/?conf=fmmb2014 . All accepted papers will be published in a book in the LNBI series of Springer-Verlag. Authors of the most significant contributions will be invited to submit extended versions for a journal special issue. ----------------------------------------------------------------- Prof. Carla Piazza Dip. di Matematica e Informatica Universita' di Udine Via le Scienze 206, I-33100 Udine, Italy Phone: +39.0432.55.8497 Fax: +39.0432.55.8499 Email: carla.piazza at uniud.it Home: http://www.dimi.uniud.it/piazza ----------------------------------------------------------------- ---------------------------------------------------------------------- SEMEL (SErvizio di Messaging ELettronico) - AINF, Universita' di Udine From frank.ritter at psu.edu Sat Jan 4 17:09:43 2014 From: frank.ritter at psu.edu (Frank Ritter) Date: Sat, 4 Jan 2014 17:09:43 -0500 Subject: Connectionists: CogModel notes: ICCM15/BRIMS14/outlets/RFPs/TV show/Jobs Message-ID: [please send this on to your members] The first announcement is driving this email -- ICCM 2015 will be in Groningen, the Netherlands, near April 2015, on its regular (15 & 21 month) schedule.
The rest indicate new publication outlets, resources, and jobs in or related to Cog Sci and in modeling. I have also included an unusual item, this time in the middle. I may send this more often with so many announcements.... If you would like to be removed, please just let me know. I maintain it by hand to keep it small. [Hypertext version available at http://acs.ist.psu.edu/iccm2015/iccm-mailing-jan2014.html] cheers, Frank Ritter frank.e.ritter at gmail.com http://www.frankritter.com **************************************************************** 1. International Conf. on Cognitive Modeling, April 2015 in Groningen, NL 2. BRiMS 2014 Call for Papers, due January 6, 2014 http://cc.ist.psu.edu/BRIMS2014/ 3. Annual Meeting of the Cognitive Science Society http://cognitivesciencesociety.org/conference_future.html submissions due 1 feb 2014 4. BRIMS 2013 Proceedings available online http://cc.ist.psu.edu/BRIMS2013/archives/2013/ 5. CMOT special issue on BRIMS 2011 published http://acs.ist.psu.edu/papers/ritterKBip.pdf 6. Conference on Advances in Cognitive Systems, Proceedings online http://www.cogsys.org/proceedings/2013 7. BICA 2013 program available online http://bicasociety.org/meetings/2013/bica2013program.pdf 8. 6th International Conference on Agents and AI, 6-8 March 2014 http://www.icaart.org/ 9. KOGWIS 2014 call for papers and symposia http://www.ccs.uni-tuebingen.de/kogwis14 Submissions due: 7 May 2014 10. AISB 50th conference, 1-4 April 2014 http://www.aisb50.org/ 11. Fourth ACT-R Spring School and Master Class 2014, April 7-12, 2014 http://www.ai.rug.nl/actr-springschool/ applications due 27 jan 2014 11b. Numerous books to review 12. Call for papers: Mental model ascription by intelligent agents, in Interaction Studies 14 jan 2014 deadline 13. Publication policy for Advances in Cognitive Systems [Journal] http://www.cogsys.org/ http://cogsys.org/pdf/paper-3-2-141.pdf 14. Editor change at J.
of Interaction Science, and call for papers http://www.journalofinteractionscience.com/ 15. IEEE SMC: Transactions on Human-Machine Systems seeking papers http://www.ieeesmc.org/publications/index.html 16. Nat. Inst. on Dis. and Rehabilitation Res., for Computer Scientists 17. New Perspectives on the Psychology of Understanding http://www.varietiesofunderstanding.com Letters of Intent due March 1, 2014 18. Cognitive scientist to co-host TV show 19. Minding Norms, and Social Emotions, two new books http://www.frankritter.com/oxford-cma/ICCMFlyer.2.CogArchSeries.pdf 20. Cambridge U. Press, Winter Sale 2013, 20% off. 21. Assistant or Associate Professor, College of IST http://recruit.ist.psu.edu 22. [Comp-neuro] Faculty positions at Imperial College London 23. Lecturer and research Position (Assistent/in) in Neuro-Robotics, TU/Chemnitz http://www.tu-chemnitz.de/verwaltung/personal/stellen/257030_AA_Rab.php 24. Director of Human MRI Facility, Penn State http://www.la.psu.edu/facultysearch/ 25. U. of Iowa- Assistant Professor Positions, CS v 1 jan 2014 applications get full consideration 26. Rowan U., associate professor level in neuroscience 27. Wright State, Assistant Professor in Human Cognitive Neuroscience 28. Post doctoral position in systems neuroscience and connectivity modeling Hershey Medical Center, Hershey, PA 29. Postdoctoral Research Fellow, Wright State https://jobs.wright.edu/postings/7173 30. Postdoc in modeling with Townsend & Wenger 31. Post-PhD Opportunities for US citizens at Fort Belvoir, VA deadline 1 feb 2014 32. PhD program, Applied Cognitive and Brain Sciences (ACBS), Drexel [closed, but will be interesting next year] **************************************************************** 1. International Conf. on Cognitive Modeling, April 2015 in Groningen, NL The International Conference on Cognitive Modeling will take place in April 2015 (approx. date) at RU/Groningen, in the Netherlands. The submission deadline will be in the fall of 2014.
Further announcements will provide more details. This was announced at the conference in Ottawa this summer. **************************************************************** 2. BRiMS 2014 Call for Papers, due January 6, 2014 http://cc.ist.psu.edu/BRIMS2014/ On behalf of the BRIMS Society, we are proud to announce the 23rd Annual Conference on Behavior Representation in Modeling & Simulation to be held at the University of California, DC campus in Washington, DC from April 1 to April 4, 2014. It's our great fortune to co-locate with the 2014 International Social Computing, Behavioral Modeling and Prediction Conference (SBP14). Arrangements still need to be made, but we hope to offer complimentary BRiMS registration for those who purchase SBP registration. Please visit the BRiMS web site at http://cc.ist.psu.edu/BRIMS2014/ where updates will be made periodically. Please visit SBP's web site at http://sbp-conference.org/ for their conference details. The submission deadlines reflect careful thinking on our part about your need for plenty of lead time to plan your submission, while providing enough time to assemble a BRiMS Proceedings available to all in time for the conference. This year, BRiMS evolved into a co-Chair arrangement. Please welcome Dr. Bill Kennedy of George Mason University, who joined me as I entered my second year of chairing duties. Bill will be our in-person administrator and host of the conference, while I will be the administrative point of contact leading up to the conference. Please direct your questions to Bill (wkennedy at gmu.edu) or me (daniel.n.cassenti.civ at mail.mil) depending on the content of your query. We look forward to receiving your submission and to seeing you at the conference!
Best Regards, Dan Cassenti & Bill Kennedy The BRIMS Executive Committee invites papers, posters, demos, symposia, panel discussions, and tutorials on topics related to the representation of individuals, groups, teams, and organizations in models and simulations. All submissions are peer-reviewed. Submissions are handled on-line at: http://cc.ist.psu.edu/BRIMS2014/ Please see the guidelines on the BRiMS website for format requirements and content suggestions. If you have any questions about the submission process or are unable to submit to the web site, please contact Daniel Cassenti by email (daniel.n.cassenti.civ at mail.mil) or phone 410-278-5859. ACCOMMODATIONS and REGISTRATION The conference will be held at the University of California campus in Washington, DC [!]. We are pleased to co-locate BRIMS with the 2014 International Conference on Social Computing, Behavioral-Cultural Modeling, and Prediction. Please see their web site at http://sbp-conference.org/ for more information on the conference. CONFERENCE CO-CHAIRS Daniel N. Cassenti, U.S. Army Research Laboratory William G. Kennedy, George Mason University PROGRAM CHAIRS Robert St. Amant, North Carolina State University David Reitter, Penn State University Webb Stacy, Aptima, Inc. **************************************************************** 3.
Annual Meeting of the Cognitive Science Society http://cognitivesciencesociety.org/conference_future.html submissions due 1 feb 2014 All Submissions Due - February 1, 2014 Authors will be notified of decisions by April 1, 2014 Camera-ready copy for inclusion in the proceedings due on May 1, 2014 CogSci 2014 - Cognitive Science Meets Artificial Intelligence: Human and Artificial Agents in Interactive Contexts Quebec City, CA July 23 - July 26, 2014 Website URL for future conference: http://cognitivesciencesociety.org/conference_future.html ---- Highlights Include: Plenary Speakers: Dedre Gentner, Steven Harnad, & Minoru Asada 13th Rumelhart Prize Recipient: Ray Jackendoff Symposia: "Foundations of Social Cognition", "Moral Cognition and Computation", "The Future of Human-Agent Interaction" Cognitive scientists from around the world are invited to attend CogSci 2014, the world's premier annual conference on cognitive science. The conference represents a broad spectrum of disciplines, topics, and methodologies from across the cognitive sciences. In addition to the invited presentations, the program will be filled with reviewed submissions from the following categories: papers, symposia, presentation-based talks, member abstracts, tutorials, and workshops. Submissions must be completed electronically through the conference submissions web site. Submissions may be in any area of the cognitive sciences, including, but not limited to, anthropology, artificial intelligence, computational cognitive systems, cognitive development, cognitive neuroscience, cognitive psychology, education, linguistics, logic, machine learning, neural networks, philosophy, robotics and social network studies. Information regarding the submission process, including opening dates for the submission website, will be posted shortly. http://cognitivesciencesociety.org/conference2014/submissions.html We look forward to seeing you in Quebec City.
Conference Co-Chairs: Paul Bello, Marcello Guarini, Marjorie McShane, and Brian Scassellati Cognitive Science Society *********************************************************** 4. BRIMS 2013 Proceedings available online http://cc.ist.psu.edu/BRIMS2013/archives/2013/ The BRIMS 2013 proceedings are available online. They include: Modelling the Security Analyst's Role: Effects of Similarity and Past Experience on Cyber Attack Detection Accounting for the integration of descriptive and experiential information in a repeated prisoner's dilemma using an instance-based learning model Decision Criteria for Model Comparison Using Cross-Fitting A Model-based Evaluation of Trust and Situation Awareness in the Diner's Dilemma Game A concise model for innovation diffusion combining curvature-based opinion dynamics and zealotry A Trust-Based Framework for Information Sharing Behavior in Command and Control Environments Advantages of ACT-R over Prolog for Natural Language Analysis The Relational Blackboard Differences in Performance with Changing Mental Workload as the Basis for an IMPRINT Plug-in Proposal An ACT-R Model of Sensemaking in a Geospatial Intelligence Task Using the Immersive Cognitive Readiness Simulator to Validate the ThreatFireTM Belt as an Operational Stressor: A Pilot Study Integrated Simulation of Attention Distribution and Driving Behavior Modeling trust in multi-agent systems Simulating aggregate player behavior with learning behavior trees Declarative to procedural tutors: A family of cognitive architecture-based tutors Architectural considerations for modeling cognitive-emotional decision making Trust definitions and metrics for social media analysis Architecture for goal-driven behavior of virtual opponents in fighter pilot combat training Examining Model Scalability through Virtual World Simulations **************************************************************** 5. 
CMOT special issue on BRIMS 2011 published http://acs.ist.psu.edu/papers/ritterKBip.pdf The special issue of Computational Mathematical and Organizational Theory (CMOT) based on the best papers of BRIMS 2011 has been published. Kennedy, G. W., Ritter, F. E., & Best, B. J. (2013). Behavioral representation in modeling and simulation introduction to CMOT special issue-BRiMS 2011. Computational Mathematical and Organizational Theory, 19(3), 283-287. Abstract: This special issue is similar to our previous special issues (Kennedy et al. in Comput. Math. Organ. Theory 16(3):217-219, 2010; 17(3):225-228, 2011) in that it includes articles based on the best conference papers of the, here, 2011 BRiMS Annual Conference. These articles were reviewed by the editors, extended to journal article length, and then peer-reviewed and revised before being accepted. The articles include: a new way to evaluate designs of interfaces for safety critical systems (Bolton); an article that extends our understanding of how to model situation awareness (SA) in a cognitive architecture (Rodgers et al.); an article that presents electroencephalography (EEG) data used to derive dynamic neurophysiologic models of engagement in teamwork (Stevens et al.); and an article that demonstrates using machine learning to generate models and an example application of that tool (Best). After presenting a brief summary of each paper we will see some recurrent themes of task analysis, team and individual models, spatial reasoning, usability issues, and particularly that they are models that interact with each other or systems.
Conference on Advances in Cognitive Systems, Proceedings online http://www.cogsys.org/proceedings/2013 Paper titles: Fractal Representations and Core Geometry A Cognitive Systems Approach to Tailoring Learner Practice Understanding Social Interactions Using Incremental Abductive Inference Changing Minds by Reasoning About Belief Revision: A Challenge for Cognitive Systems Anomaly-Driven Belief Revision by Abductive Metareasoning CRAMm - Memories for Robots Performing Everyday Manipulation Activities A Cognitive System for Human Manipulation Action Understanding Toward Learning High-Level Semantic Frames from Definitions On the Representation of Inferences and their Lexicalization Towards an Indexical Model of Situated Language Comprehension for Real-World Cognitive Agents Integrating Meta-Level and Domain-Level Knowledge for Interpretation and Generation of Task-Oriented Dialogue Three Lessons for Creating a Knowledge Base to Enable Explanation, Reasoning and Dialog X Goes First: Teaching Simple Games through Multimodal Interaction Learning Task Formulations through Situated Interactive Instruction Narrative Fragment Creation: An Approach for Learning Narrative Knowledge Conceptual Models of Structure and Function Reasoning from Radically Incomplete Information: The Case of Containers Am I Really Scared? A Multi-phase Computational Model of Emotions Three Challenges for Research on Integrated Cognitive Systems **************************************************************** 7. BICA 2013 program available online http://bicasociety.org/meetings/2013/bica2013program.pdf The program with abstracts for the 2013 meeting in Kiev is available online. The web site notes that it will be in Boston in 2014. **************************************************************** 8. 6th International Conference on Agents and AI, 6-8 March 2014 http://www.icaart.org/ This is a European conference on topics related to cognitive modeling.
Its paper deadline was in September, but it is probably a recurring event. **************************************************************** 9. KOGWIS 2014 call for papers and symposia http://www.ccs.uni-tuebingen.de/kogwis14 Submissions due: 7 May 2014 ==== FIRST CALL FOR PAPERS AND SYMPOSIA ==== KOGWIS 2014: HOW LANGUAGE AND BEHAVIOUR CONSTITUTE COGNITION 12th Biannual Conference of the German Society for Cognitive Science 29th of September - 2nd of October 2014 Submission deadline: 7 May 2014 http://www.ccs.uni-tuebingen.de/kogwis14 ---- KogWis 2014 invites submissions of extended abstracts on current work in cognitive science. Generally *all topics related to cognitive science* are welcome. Contributions that address the focus of this meeting, that is, how language and behaviour constitute cognition, are particularly encouraged. Submissions will be sorted by topic and paradigm and will be independently reviewed. Notifications of acceptance will depend on the novelty of the research, the significance of the results, and the presentation of the work. Submissions will be published in the form of online conference proceedings. Submissions of extended abstracts should not exceed 4000 characters (including spaces, references, etc.). Additional figures may be included. The document should not exceed 4 pages in total. Call for symposia: Submissions are also invited for symposium proposals on specific themes in cognitive science. Symposia should be interdisciplinary, with 4-6 speakers who can offer different perspectives on the proposed topic. KogWis cannot provide any financial support for participants, so all costs, including registration, need to be covered by the participants. Symposium submissions should present the symposium theme along with the moderator's and speakers' names and short summaries of the individual contributions. General chair: Martin V.
Butz Local organizers: Anna Belardinelli, Elisabeth Hein and Jan Kneissler Anna Belardinelli, PhD Cognitive Modeling, Department of CS University of Tuebingen http://www.wsi.uni-tuebingen.de/lehrstuehle/cognitive-modeling/staff/staff/anna-belardinelli.html **************************************************************** 10. AISB 50th Convention, 1-4 April 2014 http://www.aisb50.org/ The AISB 2014 Convention at Goldsmiths, University of London (hereafter AISB-50) will commemorate both fifty years since the founding of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (the AISB) and sixty years since the death of Alan Turing, founding father of both Computer Science and Artificial Intelligence. The convention will be held at Goldsmiths, University of London, UK from the 1st to the 4th April 2014, and will follow the same overall structure as previous conventions. This will include a set of co-located symposia hosting events that include talks, posters, panels, discussions, demonstrations, and outreach sessions. A FULL COLOUR COMMEMORATIVE POSTER: https://www.dropbox.com/s/xoxao5773i6d1t0/AISB50-Poster.pdf PLENARY SPEAKERS * Terrence Deacon * John Searle * Susan Stepney * Lucy Suchman PUBLIC LECTURES * John Barnden * Simon Colton GENERAL CHAIR: J. Mark Bishop SECRETARY AND LOCAL CHAIR: Andrew Martin (mr.a.martin at gmail.com) CONFERENCE WEB SITE: http://www.aisb50.org/ SYMPOSIA CONTACTS: * A-EYE: An exhibition of art and nature inspired computation: Mohammad Majid al-Rifaie * AI & Animal welfare: Anna Zamansky * AI & Games : Daniela M. Romano * Art, Graphics, Perception, Embodiment: modelling the creative mind : Frederic Leymarie * Artificial Ethics & medicine: Steve Torrance * Computational Creativity : Mohammad Majid al-Rifaie * Computational Intelligence : Edward Keedwell * Computational Scientific Discovery : Mark.Addis at bcu.ac.uk * Computing and Philosophy: is computation observer-relative?
: Yasemin Erden * Consciousness without inner models : jkevin.oregan at gmail.com * Culture of the Artificial : Matthew Fuller * Embodied Cognition, Acting and Performance: deirdre.mclaughlin at cssd.ac.uk * Evolutionary Computing : larry.bull at uwe.ac.uk * Future of Art & Computing : AnnaDumitriu at hotmail.com * History & Philosophy of Programming : Giuseppe Primiero * Killer robots : Noel Sharkey * Learning, gesture and interaction : Marco Gillies * Live algorithms : Tim Blackwell * New Frontiers in Human-Robot Interaction: Kerstin Dautenhahn * New perspectives on colour : Kate Devlin * Questions, discourse and dialogue: 20 years after Making it Explicit: Rodger Kibble * Reconceptualising mental illness: Joel.Parthemore at semiotik.lu.se * Representation of reality: humans, animals and machines: raffa.giovagnoli at tiscali.it * Robotics and Language : Katerina Pastra * Sex with Robots: davidlevylondon at yahoo.com * Varieties of Enactivism : Mario Eduardo Villalobos * Virtual Worlds & Ecosystems : Frederic Leymarie **************************************************************** 11. Fourth ACT-R Spring School and Master Class 2014, April 7-12, 2014 http://www.ai.rug.nl/actr-springschool/ applications due 27 Jan 2014 Organizers: Niels Taatgen, Hedderik van Rijn, Jelmer Borst and Stefan Wierda University of Groningen, Netherlands April 7-12, 2014 ACT-R is a cognitive theory and simulation system for developing cognitive models for tasks that vary from simple reaction time experiments to driving a car, learning algebra and air traffic control. Following previous ACT-R events in 2010, 2011 and 2013, the University of Groningen will host a spring school and master class. Spring School Participants will follow a compressed five-day version of the traditional summer school curriculum. The standard curriculum is structured as a set of six units, of which we will cover five in the course of the week.
Each unit lasts a day and involves a morning theory lecture, an afternoon discussion session on advanced topics, and an assignment which participants are expected to complete during the day. The last day will be devoted to discussions and presentations by participants. Computing facilities will be provided, or attendees can bring their own laptop on which the ACT-R software will be installed. To provide an optimal learning environment, admission is limited. Prospective participants should submit an application by January 27, consisting of a curriculum vitae and a statement of purpose. Demonstrated experience with a modeling formalism similar to ACT-R will strengthen the application, as will general programming experience. Applicants will be notified of admission by February 3. Master Class: Work on your own project Organized parallel to the spring school, the master class offers the opportunity for modelers to work on their own projects with guidance from experienced ACT-R modelers. Note that signing up for the Master Class assumes some prior ACT-R experience, either through self-study or through an earlier ACT-R spring or summer school. Please also register before January 27th. PRIM/Actransfer tutorial For those interested in modeling transfer, a tutorial will be offered about the new Actransfer extension to ACT-R. See http://www.ai.rug.nl/~niels/actransfer.html for more information. Registration fees and housing Registration fee: Euro 200 Housing will be offered in the university guesthouse for approximately Euro 61/day (single); double rooms are around Euro 78.50, and breakfast is Euro 10/person. Registration To apply to the 2014 Spring School or Master Class, send an email to Hedderik van Rijn (hedderik at van-rijn.org) and attach the requested documents before January 27, 2014.
**************************************************************** 11b. Numerous books to review If you would like to review a book, several recently published books are available whose publishers will give you a copy to facilitate a review in a magazine or journal. If you are interested, email me and I'll forward you to the publisher. Game Analytics: Maximizing the Value of Player Data [Hardcover] Magy Seif El-Nasr, Anders Drachen, Alessandro Canossa (Eds) http://www.amazon.com/Game-Analytics-Maximizing-Value-Player/dp/1447147685 http://www.springer.com/computer/hci/book/978-1-4471-4768-8 How to Build a Brain: A Neural Architecture for Biological Cognition Chris Eliasmith, OUP. http://nengo.ca/build-a-brain Minding Norms: Mechanisms and dynamics of social order in agent societies Rosaria Conte, Giulia Andrighetto, and Marco Campennì (eds) http://global.oup.com/academic/product/minding-norms-9780199812677 Social Emotions in Nature and Artifact Jonathan Gratch and Stacy Marsella (eds). http://global.oup.com/academic/product/social-emotions-in-nature-and-artifact-9780195387643 Running Behavioral Studies with Human Participants Ritter, Kim, Morgan, & Carlson, Sage http://www.sagepub.com/textbooks/Book237263 **************************************************************** 12. Call for papers: Mental model ascription by intelligent agents, in Interaction Studies 14 Jan 2014 deadline Call for Papers Interaction Studies: Special Issue on Mental Model Ascription by Intelligent Agents Mental model ascription, otherwise known as "mindreading", involves inferring features of another human or artificial agent that cannot be directly observed, such as that agent's beliefs, plans, goals, intentions, personality traits, mental and emotional states, and knowledge about the world. This capability is an essential functionality of intelligent agents if they are to engage in sophisticated collaborations with people.
The computational modeling of mindreading offers an excellent opportunity to explore the interactions of cognitive capabilities, such as high-level perception (including language understanding and vision), theory of mind, decision-making, inferencing, reasoning under uncertainty, plan recognition and memory management. Contributions are sought that will advance our understanding of mindreading, with priority given to algorithmic and implemented (or under implementation) approaches that address the practical necessity of computing prerequisite inputs. This volume was inspired by successful workshops at CogSci 2012 (Modeling the Perception of Intentions) and CogSci 2013 (Mental Model Ascription by Language-Enabled Intelligent Agents). The deadline for submissions is January 14, 2014. Submission requirements and instructions can be found at http://benjamins.com/#catalog/journals/is/submission. Please address questions to the special issue editor, Marge McShane, at mcsham2 at rpi.edu. **************************************************************** 13. Publication policy for Advances in Cognitive Systems [Journal] http://www.cogsys.org/ http://cogsys.org/pdf/paper-3-2-141.pdf Advances in Cognitive Systems (http://www.cogsys.org/) is a new electronic journal, now in its second year, that publishes contributions in the original spirit of AI, which aimed to explain the mind in computational terms and reproduce the entire range of human cognition in computational artifacts. Advances in Cognitive Systems is associated with an annual conference of the same name, the first instance of which took place last December. The second volume of the journal served as the electronic proceedings of that meeting. I am writing to tell you about a new policy that alters the relationship between journal and conference slightly: - Authors are still welcome to submit papers for publication either to the electronic journal or to the annual conference.
- If a journal submission (16 pages) is received at least one month before the conference deadline and is accepted for publication, its authors will be invited to present a talk at the conference. - If a long submission (16 pages) to the conference is accepted for presentation at the meeting, it may either be included in the annual proceedings or be invited to appear in the journal. - If a short submission (8 pages) to the conference is accepted for presentation at the meeting, it will be included in the annual proceedings but not in the journal. This policy should spread submissions across the year, giving more flexibility to authors and reducing the load on reviewers while letting more researchers present their results at the conference. You can find more details at http://www.cogsys.org/instructions/ and in the call for papers to the Annual Conference on Advances in Cognitive Systems, which should be available shortly. Sincerely, Pat Langley, Editor Advances in Cognitive Systems **************************************************************** 14. Editor change at J. of Interaction Science, and call for papers http://www.journalofinteractionscience.com/ Dr. Ray Adams has resigned as editor in chief. The editorial director at Springer, Beverley Ford, asked me to serve as the sole editor-in-chief. It is not an easy job to fill Ray's shoes, but with the help of colleagues and great editors, we are working as a team and we are up to the job. The good news is that after a slow start, JoIS is very close to its first issue. As soon as one more paper has been accepted after peer review, we will be online. We are always looking for submissions, especially right now: would you share the journal address with potential authors? Many thanks! Best wishes, Susanne Professor Susanne Bahr and I would like to tell you that our dynamic, new "high-impact" journal, the "Journal of Interaction Science" will be published by Springer next year.
We would also like to invite you to contribute a paper to our Journal, possibly on how to introduce more rigour into interaction design. The Journal of Interaction Science (JoIS) has a unique focus, promoting research based on: scientific approaches to interaction design; the use of interaction paradigms for fundamental research in human- and computer-centred sciences; and the use of computer science to support scientific research. On the one hand, our scope is narrowly focused specifically on promoting scientific HCI; on the other hand, it is broadly focused, being designed to attract a wide audience and research by the large number of human-centred scientists who do not consider current HCI journals to be sufficiently relevant to them. Interaction science focuses on the integration of the study of people with that of artifacts and the sciences involved. We are looking for significant, scientific papers that report empirical results, substantial new theories, methodological innovations and important meta-analyses. With best wishes, Susanne Bahr Gisela Susanne Bahr, Associate Professor Florida Institute of Technology Ph.D. Experimental Psychology - Cognition ABD Ph.D. Computer Sciences Editor in Chief Journal of Interaction Science School of Psychology, Florida Institute of Technology Melbourne, Florida 32901-6975 gbahr at fit.edu, 321.674.8104 **************************************************************** 15. IEEE SMC: Transactions on Human-Machine Systems seeking papers http://www.ieeesmc.org/publications/index.html IEEE Systems Man and Cybernetics Society has a journal, IEEE SMC: Transactions on Human-Machine Systems. This used to be IEEE SMC: Part C: Applications & Reviews, but has recently been rebadged. It is asking its editorial board to ask others to submit high quality papers. So I'm noting that, I believe, many of you will find that your research fits this journal.
Details are available at: http://www.ieeesmc.org/publications/index.html http://mc.manuscriptcentral.com/thms [If you have questions, please PM me, email me, see me in the hall, or call. -Frank] **************************************************************** 16. Nat. Inst. on Dis. and Rehabilitation Res., for Computer Scientists [There may be some proposal calls in here. This announcement is not itself a call, but a pointer to an area. I have a lot of time for Clayton, and while I don't know much about this, I know him, and there is at least an audience if not money.] From: "Lewis, Clayton (Contractor)" To: Clayton Lewis Date: Wed, 8 May 2013 16:09:49 -0500 Subject: NIDRR for Computer Scientists In 2011 I began working as a consultant to the National Institute on Disability and Rehabilitation Research (NIDRR), helping to develop an initiative on cloud computing for people with disabilities (NIDRR provided early funding for the Global Public Inclusive Infrastructure, gpii.net, of which you may have heard). In that role I've become aware that there are significant needs and opportunities for computer science research in support of NIDRR's goals, but not many computer scientists know about NIDRR and its programs. I'm writing to you to call your attention to these opportunities, including a brand new announcement on funding for research on inclusive cloud and web computing. I've attached a writeup, "NIDRR for Computer Scientists". Please pass it along to any colleagues who may be interested. NIDRR for Computer Scientists. What is NIDRR? NIDRR (the National Institute on Disability and Rehabilitation Research) is an agency within the US Department of Education that funds about $110M of research per year (of course this amount varies) aimed at improving the lives of people with disabilities. Why am I writing this? I'm a long-time computer science faculty member at the University of Colorado, Boulder.
In 2011 I began working as a consultant to NIDRR, helping to develop an initiative on cloud computing for people with disabilities (NIDRR provided early funding for the Global Public Inclusive Infrastructure, gpii.net, of which you may have heard). In that role I've become aware that there are significant needs and opportunities for computer science research in support of NIDRR's goals, but not many computer scientists know about NIDRR and its programs. Why should computer scientists be interested in NIDRR? As information and communication technologies become important in more aspects of life, and as the ability of these technologies to provide useful assistance grows, there are more and more opportunities for computer science research to contribute to NIDRR's programs. Cloud computing, data integration and data analytics for service effectiveness improvement, recognition technology, social software, accessible and autonomous transportation systems, natural language processing, and configurable user interface technology all have a role in enabling people with disabilities to participate in society more fully and independently. In a typical year NIDRR funds a wide range of projects, from multi-year research and engineering centers aimed at designated aspects of disability research, to smaller "field initiated" projects proposed by investigators. If you are not already active in disability research, your chances of success will likely be greater if you collaborate with other investigators who have knowledge of and experience in disability research, though you are free to apply on your own. Many of NIDRR's peer reviewers are disability researchers, and of course they must judge that proposals are well conceived as contributions to that field. Proposals that represent excellent computer science, but are weak in connecting to the needs of people with disabilities, are unlikely to be competitive. Collaboration can solve this problem.
You could develop such collaboration in more than one way. You could approach local colleagues who have the necessary experience, or you could reach out to investigators nationally who work on problems to which the computing technology you work on could be relevant. For NIDRR-funded projects, current and completed, there is a convenient search facility at http://www.naric.com/?q=en/ProgramDatabase, where you can find projects whose descriptions mention your institution or your state, or particular topics of interest to you. Here are some professional associations that publish papers on technology and disability that you can explore to see who is doing what on the national and international scene: ACM Special Interest Group on Accessible Computing (SIGACCESS), proceedings searchable at http://dl.acm.org/sig.cfm?id=SP1530; IEEE Engineering in Medicine and Biology Society and IEEE Systems, Man, and Cybernetics Society, publications searchable in the IEEE Xplore Digital Library (http://ieeexplore.ieee.org); the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA), proceedings at http://www.resna.org/conference/proceedings/index.dot Another good way to familiarize yourself with NIDRR and its programs is to serve as a reviewer. NIDRR is always looking for peer reviewers with a variety of specific subject-matter expertise. See http://www2.ed.gov/about/offices/list/osers/nidrr/nidrrpeerreview.html for more information. FAQ How can I find out more about NIDRR funding programs? NIDRR has a nice Web resource for potential applicants at http://www2.ed.gov/programs/nidrr/applyingforanidrrgrant.html. This includes a summary of its programs, and suggestions about how to track new funding opportunities as they arise. What NIDRR programs are likely to be of most interest to computer scientists?
Field Initiated Projects can address any of a wide range of issues relating to people with disabilities, including development of new technologies, employment, independent living, and medical rehabilitation, for any disability populations, with a wide range of research approaches. Disability and Rehabilitation Research Projects are invited to address particular topic areas, or "priorities", and an increasing number of these include topics in computer science. NIDRR also participates in the SBIR (Small Business Innovation Research) program. How do I apply? The Web resource mentioned above, http://www2.ed.gov/programs/nidrr/applyingforanidrrgrant.html, has detailed information for applicants, including tips on writing a strong proposal. How does NIDRR evaluate proposals? NIDRR uses a peer review process. Unlike NSF, but like NIH, NIDRR evaluation is based on numeric scoring guided by a rubric, so it is very important that all proposal requirements are carefully addressed. Here, too, collaboration with an experienced disability researcher can be a big help. Being at NIDRR has made me a big fan of the agency and the contributions its grantees have made and are making. I hope you'll investigate the opportunities NIDRR offers for computer scientists to participate in this important and satisfying work. Please let me know if I can help. Sincerely, Clayton Lewis, NIDRR Consultant **************************************************************** 17. New Perspectives on the Psychology of Understanding http://www.varietiesofunderstanding.com Letters of Intent due March 1, 2014 New Perspectives on the Psychology of Understanding. Fordham University, with the support of a grant from the John Templeton Foundation, invites proposals for the "New Perspectives on the Psychology of Understanding" funding initiative. Our aim is to encourage research from both new and established scholars working on projects related to understanding in its many forms.
This $1.2 million RFP is intended to support empirical work in cognitive, developmental, educational, and other areas of psychology. Proposals can request between $50,000 and $225,000 for projects not to exceed two years in duration. We intend to make 7-8 awards. Timeline Letters of Intent due March 1, 2014 Invited full proposals due April 15, 2014 Research begins July 1, 2014 For more information, visit: www.varietiesofunderstanding.com All questions should be directed to: psychology at varietiesofunderstanding.com Psychology Director - Tania Lombrozo, Assistant Professor of Psychology, University of California, Berkeley Project Leader - Stephen Grimm, Associate Professor of Philosophy, Fordham University **************************************************************** 18. Cognitive scientist to co-host TV show [!] http://bellmediapr.ca/Network/Discovery-World/Press/STEPHEN-HAWKINGS-BRAVE-NEW-WORLD-Glimpses-into-the-Future-Nov-15-on-Discovery-World- Science is about to change the world. [!] Professor Stephen Hawking examines cutting-edge breakthroughs and their implications for the future in STEPHEN HAWKING'S BRAVE NEW WORLD. The six-part, original Canadian series returns for a second season, premiering Friday, November 15 at 8 p.m. ET/10 p.m. PT on Discovery World. Hawking is joined by a crack team of scientists - including Dr. Carin Bondar and Professor Chris Eliasmith from Canada, Dr. Daniel Kraft from the U.S., and Professor Jim Al-Khalili and Dr. Aarathi Prasad from the U.K. They travel the world to investigate the latest innovations, from gecko skin-like material with adhesive qualities that mimic Spiderman's ability to scale buildings; to realistic digital avatars and the latest gaming technology that will change the face of the entertainment industry; to a 3D printer that can generate live human tissue; and a Canadian innovation using submarines for deep space training. [Haven't had something like this to announce before. These aired in the Fall in Canada.
There are several more episodes, which will appear, I believe, in the US (Science Discovery) and around the world (National Geographic Channel) and other locations later.] **************************************************************** 19. Minding Norms, and Social Emotions, two new books http://www.frankritter.com/oxford-cma/ICCMFlyer.2.CogArchSeries.pdf Two new books have been published in the Oxford Series on Cognitive Models and Architectures: Minding Norms: Mechanisms and dynamics of social order in agent societies Rosaria Conte, Giulia Andrighetto, and Marco Campennì (eds) http://global.oup.com/academic/product/minding-norms-9780199812677 Social Emotions in Nature and Artifact, Jonathan Gratch and Stacy Marsella (eds). http://global.oup.com/academic/product/social-emotions-in-nature-and-artifact-9780195387643 A flyer describing them and the series, offering a 20% discount, is at http://www.frankritter.com/oxford-cma/ICCMFlyer.2.CogArchSeries.pdf **************************************************************** 20. Cambridge U. Press, Winter Sale 2013, 20% off. http://www.cambridge.org/us/knowledge/academic_discountpromotion/?site_locale=en_US&code=OURWINTERBEST Cambridge U. Press also has a 20% discount going. **************************************************************** 21. Assistant or Associate Professor, College of IST http://recruit.ist.psu.edu The College of Information Sciences and Technology at The Pennsylvania State University is a College that emphasizes a) systems-level thinking to approach global, societal problems, b) multiple methodologies in the pursuit of interdisciplinary research and design, and c) active, collaborative learning to support transformative teaching. To learn more about our vision, mission, goals, structure, faculty and students, please go to http://ist.psu.edu.
We are searching to fill multiple positions at the Assistant or Associate Professor level in our ranks of tenure-track faculty members, who will assist our college in attaining its goals in education, research and service to the community. The College has strengths in six key areas: 1) Computational Informatics and Science; 2) Organizational Informatics; 3) Social Policy, Economics and Informatics; 4) Human-Computer Interaction; 5) Cognition and Networked Intelligent Systems; and 6) Security, Privacy and Informatics. We seek applicants who show clear evidence that they will become or are leading scholars and premier teachers in their fields and are interested in being part of a vibrant, civil and diverse academic community. Although we welcome applications from a broad variety of areas that match the research interests in the college, we are particularly interested in applicants who would like to pursue research and teaching in the following areas: 1) Enterprise Architecture; 2) Biomedical/Health Informatics; 3) Computational Informatics; 4) Security & Risk Analysis. We are interested in applicants who approach these areas from a social, cognitive, or computational perspective, or a combination of these perspectives. Qualified candidates are invited to submit their curriculum vitae, a summary of research and teaching plans, and the contact information of four persons who will write letters of recommendation at http://recruit.ist.psu.edu. For questions, please contact Dr. Prasenjit Mitra, Faculty Search Committee Chair, 313F IST Building, College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802-6823 or via email facsearch at ist.psu.edu. Review of applications will begin on October 15, 2013 [informal enquiries to David Reitter or to me] **************************************************************** 22.
[Comp-neuro] Faculty positions at Imperial College London Date: Tue, 10 Dec 2013 10:32:13 +0000 From: Simon Schultz Subject: [Comp-neuro] Faculty positions at Imperial College London There is currently an open round for Lecturers/Senior Lecturers (US equivalent: Assistant/Associate Professors) in the Department of Bioengineering at Imperial College London. I would like to encourage applicants in the area of neurotechnology, and particularly translational/clinical neurotechnology, to apply in this round. We have recently been awarded a £10M EPSRC Centre for Doctoral Training in Neurotechnology, which will award approximately fourteen 4-year PhD studentships per year. Successful faculty applicants in this area would become members of the new Imperial College Centre for Neurotechnology. ---- Lecturers/Senior Lecturers (open call), Imperial College, Bioengineering Imperial College London - Department of Bioengineering Lecturer salary range £45,040 to £50,190 pa Senior Lecturer minimum salary £55,340 pa Due to rapid growth and the strategic importance of the discipline of bioengineering at Imperial College London, the Department of Bioengineering wishes to appoint a number of individuals at Lecturer or Senior Lecturer grade. This is an open call in which we are looking for outstanding candidates in any field of Bioengineering. Additionally, as the department is making multiple appointments for strategic reasons, it particularly welcomes applications from individuals working in one of the two areas listed below: * Regenerative Medicine/Tissue Engineering * Bioengineering of cancer detection, diagnosis, pathogenesis and/or treatment Candidates should have an advanced degree (PhD or equivalent) in an appropriate science or engineering discipline and will have demonstrated an ability to generate and execute research at an internationally-leading level. Evidence of teaching at all levels is also required.
Professor Anthony Bull: telephone 020 7594 5186 Closing date: 05 January 2014. **************************************************************** 23. Lecturer and Research Position (Assistent/in) in Neuro-Robotics, TU Chemnitz http://www.tu-chemnitz.de/verwaltung/personal/stellen/257030_AA_Rab.php Lecturer and Research Position (Akademische/r Assistent/in) in Neuro-Robotics The position is available at Chemnitz University of Technology in the Department of Computer Science within the Professorship of Artificial Intelligence. It involves both teaching and research. Teaching of about 4 hours per week during the semester is required, and involves lectures and exercises in robotics and neuro-robotics as well as exercises in artificial intelligence or image processing. The candidate is expected to contribute to research in neuro-robotics, e.g. to develop brain-inspired models of motor or cognitive processes that run on robotic platforms. He or she should have a PhD in computer science or a related field, e.g. electrical engineering. Prior experience in robotics or neuro-computational modeling is advantageous. Good English language skills are necessary. Good German is not initially required, but the candidate should have an interest in learning the German language. [This is true, it is useful to know some German, or a German, when living in Chemnitz!] We offer a stimulating international and interdisciplinary environment. Available and recently ordered robotic platforms include an iCub head, two Naos, a Koala with stereo pan-tilt vision and several K-Junior V2 robots. The salary is according to German standards (E 13 TV-L or A 13). The position is initially for 4 years, but can be extended. The starting date is April 2014 or earlier. Chemnitz is the third-largest city of the state of Saxony and close to scenic mountains. Major cities nearby are Leipzig and Dresden, with a rich tradition of music and culture.
Further details (in German) can be found here: http://www.tu-chemnitz.de/verwaltung/personal/stellen/257030_AA_Rab.php Applications should be sent by email (preferably in PDF format) to fred.hamker at informatik.tu-chemnitz.de. The deadline was 30.09.2013, but applications will be considered until the position is filled. In addition to a CV, the candidate should provide an overview of his or her planned research for the next 4 years. **************************************************************** 24. Director of Human MRI Facility, Penn State http://www.la.psu.edu/facultysearch/ NEUROSCIENCE, Director of Human MRI Facility The Department of Psychology at Penn State, http://psych.la.psu.edu/ is recruiting a neuroscientist (associate professor or professor) who will also serve as Director of the Human MRI Facility at the Social, Life, and Engineering Sciences Imaging Center, or SLEIC (http://www.imaging.psu.edu). The substantive focus of research is open; expertise in advanced data analysis techniques for fMRI data is welcome, but not required. The Director will affiliate with one or more of the Department's graduate program areas (cognitive, developmental, social, clinical, and industrial/organizational psychology). Rich opportunities exist for collaboration across the department and across the campus, including a range of centers such as the Center for Language Science (http://cls.psu.edu/), the Child Study Center (http://csc.psych.psu.edu/), and the Center for Brain, Behavior, and Cognition (http://www.huck.psu.edu/center/brain-behavior-cognition). The Director position is co-funded by the Social Science Research Institute (http://www.ssri.psu.edu/). Candidates are expected to have a record of excellence in research and teaching, and a history of external funding. Review of applications for the position begins immediately and will continue until the position is filled. 
Candidates should submit a letter of application including concise statements of research and teaching interests, a CV, and selected (p)reprints. Letters of recommendation will be requested from applicants selected as finalists. Electronic submission is strongly preferred; please submit materials at http://www.la.psu.edu/facultysearch/ If unable to submit electronically, mail materials to the Neuroscience Faculty Search Committee - Box A, Department of Psychology, Penn State, University Park, PA 16802. Questions regarding the application process can be emailed to Judy Bowman, jak8 at psu.edu, and questions regarding the position can be sent to Sheri Berenbaum, sab31 at psu.edu, Chair. We especially encourage applications from individuals of diverse backgrounds. Employment will require successful completion of background check(s) in accordance with University policies. Penn State is committed to affirmative action, equal opportunity and the diversity of its workforce. **************************************************************** 25. U. of Iowa - Assistant Professor Positions, CS 1 jan 2014 applications get full consideration Pending final budget approval, the Computer Science Department seeks three tenure-track faculty at the level of assistant professor starting August of 2014 as part of a new institution-wide initiative in informatics. The new initiative is intended to strengthen expertise and infrastructure in informatics, more specifically, data analytics, systems software, machine learning, theory and algorithms, embedded systems, networks and smart sensors, computer graphics and visualization, as well as areas that bridge core informatics to other disciplines. Areas of particular interest in this first round of hiring include i) machine learning, ii) data science and visualization, and iii) device and network centric software. Applications received by January 1, 2014, are assured of full consideration. 
Education Requirement: Candidates must hold a PhD in computer science, informatics, or a closely related discipline. Appointments will be made within the Computer Science Department, which offers BA, BS, MCS, and PhD degrees in computer science, and BA and BS degrees in Informatics. Duties: Conducting externally funded research in the candidate's area of expertise, teaching undergraduate and graduate computer science and/or informatics courses, supervising graduate student research, and making service contributions to the Department, the College, the University, and the discipline. Required Qualifications: Successful candidates must demonstrate potential for research and teaching excellence in the environment of a major research university, and will be expected to participate in collaborative research as part of the interdisciplinary informatics initiative. Desirable Qualifications: Demonstrated interest in solving interdisciplinary problems, the ability to work with interdisciplinary teams, and prior teaching experience. Both the Department and the College of Liberal Arts and Sciences are strongly committed to gender and ethnic diversity; the strategic plans of the University, College, and Department reflect this commitment. Women and minorities are encouraged to apply. Applications should contain a CV, research and teaching statements, and three letters of recommendation. Apply at http://vinci.cs.uiowa.edu/recruit From: Juan Pablo Hourcade To: CHI-JOBS at LISTSERV.ACM.ORG **************************************************************** 26. Rowan U., associate professor level in neuroscience [From a former PSU grad student:] I am now a faculty member at Rowan University (in Glassboro, NJ - about 30 minutes south of Philadelphia). We are actively searching for an associate-professor-level neuroscientist. The ad will also accommodate an assistant-professor-level candidate. 
The job description is located at: http://rowanuniversity.hodesiq.com/jobs/assistant-associate-professor-neuroscience-school-of-biomedical-sciences-glassboro-new-jersey-job-3974550 Please distribute it among your former students. (In the coming years, I anticipate that we will be searching again for neuroscience people.) Tabbetha A. Dobbins dobbins at rowan.edu 856-256-4366 Department of Physics & Astronomy Rowan University **************************************************************** 27. Wright State, Assistant Professor in Human Cognitive Neuroscience The Psychology Department at Wright State University seeks applicants for a tenure-track Assistant Professor position in Human Cognitive Neuroscience. Please forward the following link to eligible candidates. http://science-math.wright.edu/psychology/about/faculty-search-in-human-cognitive-neuroscience Thank you! Ion Juvina, Ph.D., ion.juvina at wright.edu Psychology, Wright State University http://www.wright.edu/cosm/departments/psychology/faculty/juvina.html http://psych-scholar.wright.edu/ijuvina/ **************************************************************** 28. Post doctoral position in systems neuroscience and connectivity modeling Hershey Medical Center, Hershey, PA Post doctoral position in systems neuroscience and connectivity modeling. Funding sources: PA Tobacco Settlement Fund; Social Science Research Institute, University Park Duration: 2 years We are seeking a highly motivated Postdoctoral Fellow in the area of clinical/cognitive neuroscience, brain imaging, and network modeling. The successful candidate will work in collaboration with Dr. Hillary in Psychology, Dr. Reka Albert of Physics, and Dr. Peter Molenaar in Human Development and Family Studies, focusing on time series analysis, graph theory, and connectivity modeling of human brain imaging data (high density EEG, BOLD fMRI). 
The primary responsibility of this position is to facilitate ongoing research examining neural plasticity after severe traumatic brain injury in humans. There is also keen interest in having this position support the development of novel methods for understanding plasticity from a systems neuroscience perspective. This includes prospective data collection as well as analysis of existing data sets. Current lab goals aim to integrate: 1) signal processing (i.e., non-stationarity in time series data; cross-frequency coupling), 2) large scale connectivity analysis (e.g., graph theory), 3) machine learning, and 4) novel methods for isolating regions of interest. A Doctorate (M.D. and/or Ph.D.) degree is required, or anticipated within the calendar year. Excellent verbal and written communication skills and a background in computational modeling (broadly defined) are required, and programming experience is preferred. Please send a CV, cover letter, and the names and contact information for three references to fhillary at psu.edu and make sure to attach a [CV?]. Contact: fhillary at psu.edu Frank G. Hillary, PhD Associate Professor of Psychology **************************************************************** 29. Postdoctoral Research Fellow, Wright State https://jobs.wright.edu/postings/7173 Postdoctoral Research Fellow and Research Assistant in Human Factors in Surgical Simulation and Training Department of Biomedical, Industrial and Human Factors Engineering Wright State University We are seeking a highly qualified postdoctoral research associate and a research assistant to perform research in surgical simulation and training. This multidisciplinary research involves engineers and physicians from Wright State University, Miami Valley Hospital, and other medical institutions in Ohio. Successful candidates will hold one-year appointments, renewable pending availability of funds. Job Title: Postdoctoral Research Associate Qualifications: 1. Earned Ph.D. 
degree in human factors engineering, experimental psychology, biomedical engineering, computer science, or equivalent, with an interest in medical devices and systems design, virtual reality simulation, haptics, and/or human performance evaluation and training. 2. Ability to work independently as well as collaboratively on research projects. 3. Excellent communication skills, both verbal and written. 4. Experience in conducting research with human and animal subjects, using both quantitative and qualitative methodologies, and with the IRB and IACUC process. 5. Experience with virtual or augmented reality, haptic devices, HCI and UI design, programming in C, C++, OpenGL, and statistical data analysis packages such as SAS, SPSS, or R. 6. Ability to prioritise tasks, manage team members, and disseminate results in a timely manner. Responsibilities: 1. Conduct literature reviews. 2. Conduct field studies and controlled laboratory experiments in the hospital operating room, animal labs, and simulation labs. 3. Perform task analysis, cognitive task analysis, work domain analysis, etc. of the surgical environment and of emerging surgical techniques. 4. Manage project activities and team members. 5. Assist in writing grant proposals and developing research ideas. 6. Develop and design data acquisition apparatus and measurement protocols. 7. Co-author conference papers and peer-reviewed journal papers. To submit an application: https://jobs.wright.edu/postings/7173 **************************************************************** 30. Postdoc in modeling with Townsend & Wenger The Laboratory for Mathematical Psychology at Indiana University (Jim Townsend) and the Visual Neuroscience Laboratory at the University of Oklahoma (Michael Wenger) seek a postdoctoral researcher in cognitive modeling and experimental psychology. This position is funded by the National Science Foundation. This person will be in residence at Indiana University and will be responsible for assisting Drs. 
Townsend and Wenger on a three-year project titled "Building a Unified Theory-Driven Methodology for Identification of Elementary Cognitive Systems." Duties will include overseeing graduate students, undergraduate RAs, and technical assistants; planning and carrying out experiments; data analysis; model testing; and preparing manuscripts for publication. Applicants should have a Ph.D. in a relevant field; background in cognitive modeling, mathematical statistics, and experimental behavioral science; demonstrated scientific expertise, with publications in refereed journals; extensive experience in experimental design and development; ability to create quantitative models of cognitive processing; solid programming skills, preferably with R, Matlab, and Python; experience writing grants and manuscripts; ability to work and lead in a team environment; ability to work independently on designated projects; and demonstrated interpersonal skills. The position begins as soon as it is filled and is a one-year appointment. Renewals for a second and third year are possible and are contingent on availability of funds, satisfactory performance, acceptable progress in carrying out the assigned duties, and mutual agreement. This is a 100% appointment. Initial salary is $40,000/year. Please submit a cover letter including a statement of career goals (one paragraph), curriculum vitae, publications, and contact information for three references. Please send these materials to: Michael Wenger, (mjw327 at cornell.edu) / (michael.j.wenger at ou.edu) Center for Applied Social Research, The University of Oklahoma, 3100 Monitor Avenue, Suite 100, Norman OK 73019. M. J. Wenger, Ph.D. 
Visiting Professor Division of Nutritional Sciences Cornell University Ithaca, NY 14850 Department of Psychology Graduate Program in Cellular and Behavioral Neurobiology The University of Oklahoma Norman OK 73019 phone: (405) 325-3846 e-mail: michael.j.wenger at ou.edu / mjw327 at cornell.edu web: https://sites.google.com/site/ouvnlab **************************************************************** 31. Post-PhD Opportunities for US citizens at Fort Belvoir, VA deadline 1 feb 2014 We are currently looking for four US citizens with specific technical expertise to place at the Defense Threat Reduction Agency (DTRA), located at Fort Belvoir, within the next few months. Candidates must be eligible to obtain a security clearance. The positions are in Thrust Areas 2, 3, 4 and 5. Post-Doctoral Research PIPELINE for Fellows Program The objective of this fellowship program is to establish and sustain a long-term process through which Penn State University will develop and execute a Post-Doctoral Research PIPELINE for Fellows Program to address critical scientific, technology and engineering needs for reducing the threat from Weapons of Mass Destruction (WMD). This project will enable DTRA to utilize mission-critical expertise possessed by highly qualified faculty and graduate students (nearing completion of their degree) who hold doctoral or terminal professional degrees in relevant scientific, technical and engineering disciplines. Post-Doctoral / Masters Fellows will be selected based upon their ability to enhance the joint DTRA-Strategic Partnership mission requirements. 
Key science and technology skills include: nuclear and radiation physics; weapons engineering; structural, electrical and mechanical engineering; broad-based nano-technological engineering and applications; weapons effects and system response technologies; physics, chemistry and biological sciences related to detection, characterization and destruction of WMD materials; medical and pharmaceutical sciences; information technology, modeling, data visualization and advanced computational sciences; social, adversarial and behavioral modeling, science and analysis. [Email jbm18 at psu.edu for an application; some hard science background is required, but in my experience this is more about looking for intelligence in the applicant than calculus per se] Application deadline is 02/01/2014 by email. Further Details For a qualified candidate, this opportunity would provide the following to a US citizen, capable of obtaining a security clearance at the Secret level, who would spend one year working at DTRA (Fort Belvoir): $83,664 annual salary, which includes a $1,000 monthly living allowance $6,000 Domestic Travel allowance Specifically, the Research and Development Directorate, Basic and Applied Sciences Department (J9-BA) is looking to fill one position in each of the following thrust areas: TA2 = Cognitive and Information Science: Description: The basic science of cognitive and information science results from the convergence of computer, information, mathematical, network, cognitive, and social science. This research thrust expands our understanding of physical and social networks and advances knowledge of adversarial intent with respect to the acquisition, proliferation, and potential use of WMD. The methods may include analytical, computational or numerical, or experimental means to integrate knowledge across disciplines and improve rapid processing of intelligence and dissemination of information. Education: The candidate should have a Ph.D. in Electrical Engineering/Physics. 
The candidate should have a strong background in power systems and control theory. Knowledge of nuclear weapons effects is a plus. [!] [this has been filled by information science people in the past, and relevant backgrounds from BSEE to cogsci might find the other positions attainable] TA3 = Science for Protection: Description: Basic science for protection involves advancing knowledge to protect life and life-sustaining resources and networks. Protection includes threat containment, decontamination, threat filtering, and shielding of systems. The concept is generalized to include fundamental investigations that reduce consequences of WMD, assist in the restoration of life-sustaining functions and support forensic science. Education: The candidate should have a Ph.D. in Physical Sciences such as electrical engineering, materials science, nuclear physics, solid state physics or a related discipline. A background including coursework or research in nuclear science is desired. TA4 = Science to Defeat WMD: Description: Basic science to defeat WMD involves furthering the understanding of explosives, their detonation, and problems associated with accessing the target WMDs. This research thrust includes the creation of new energetic molecules/materials that enhance the defeat of WMDs; the improvement of modeling and simulation of these materials and of various phenomena that affect success and estimate the impact of defeat actions; and the investigation of novel methods that may yield order-of-magnitude improvements in energy and energy release rate. Education: The candidate should have a PhD in Material Science, Chemical Engineering, Chemistry, Physics, Chemical Physics, Computational Physics, Mechanical Engineering, or Materials Engineering. 
TA5 = Science to Secure WMD: Description: Basic science to support securing WMD includes: (a) environmentally responsible innovative processes to neutralize chemical, biological, radiological, nuclear, or explosive (CBRNE) materials and components; (b) discovery of revolutionary means to secure components and weapons; and (c) studies of scientific principles that lead to novel physical or other tags and methods to monitor compliance and disrupt proliferation pathways. The identification of basic phenomena that provide verifiable controls on materials and systems also helps arms control. Education: The candidate should have a Ph.D. in one of the fields of physical or life sciences. Jan Mahar Sturdevant, jbm18 at psu.edu Professor of Practice College of IST, Penn State **************************************************************** 32. PhD program, Applied Cognitive and Brain Sciences (ACBS), Drexel [closed, but will be interesting next year] From: Chris Sims To: "cvnet at mail.ewind.com" , comp-neuro at neuroinf.org, visionlist at visionscience.com The Applied Cognitive and Brain Sciences (ACBS) program at Drexel University invites applications for Ph.D. students to begin in the Fall of 2014. Faculty research interests in the ACBS program span the full range from basic to applied science in Cognitive Psychology, Cognitive Neuroscience, and Cognitive Engineering, with particular faculty expertise in computational modeling and electrophysiology. Accepted students will work closely with their mentor in a research-focused setting, housed in a newly-renovated, state-of-the-art facility featuring spacious graduate student offices and collaborative workspaces. 
Graduate students will also have the opportunity to collaborate with faculty in Clinical Psychology, the School of Biomedical Engineering and Health Sciences, the College of Computing and Informatics, the College of Engineering, the School of Medicine, and the University's new Expressive and Creative Interaction Technologies (ExCITe) Center. Specific faculty members seeking graduate students, and possible research topics, are listed below. * Chris Sims, Drexel Laboratory for Adaptive Cognition http://www.pages.drexel.edu/~crs346/ - Visual memory and perceptual expertise - Decision-making under uncertainty and learning from feedback - Sensorimotor control and coordination - Computational models of cognition * Dan Mirman, Language and Cognitive Dynamics Laboratory http://www.danmirman.org/ - Recognition, comprehension, and production of spoken words - Organization and processing of semantic knowledge - Computational models of brain and behavior * John Kounios, Creativity Research Laboratory https://sites.google.com/site/johnkounios/laboratory - Cognitive psychology/cognitive neuroscience, focusing on human memory, problem solving, intelligence, and creativity - Specialization in electrophysiological methods (EEG, ERP), and other behavioral and neuroimaging methods (e.g., fMRI) For a full list of faculty members in the ACBS, see http://www.drexel.edu/psychology/academics/graduate/acbs/faculty/ Drexel University is located in the University City and Center City neighborhoods of Philadelphia, a major metropolitan area with numerous cultural, medical, educational, and recreational opportunities, as well as easy access via high-speed rail to New York City, Washington, D.C., and surrounding areas of the Northeast Corridor. Drexel University is an Equal Opportunity/Affirmative Action Employer. The College of Arts and Sciences is especially interested in qualified students who can contribute to the diversity and excellence of the academic community. 
To Apply: Applications are now being accepted; the closing deadline is Dec 01, 2013. For complete application instructions, please see the following website: http://www.drexel.edu/psychology/academics/graduate/acbs/application/ Chris R. Sims, Ph.D. Chris.Sims at drexel.edu Applied Cognitive & Brain Sciences Department of Psychology Drexel University www.pages.drexel.edu/~crs346/ (215) 553-7170 **************************************************************** -30- From frank.ritter at psu.edu Sat Jan 4 17:09:43 2014 From: frank.ritter at psu.edu (Frank Ritter) Date: Sat, 4 Jan 2014 17:09:43 -0500 Subject: Connectionists: CogModel notes: ICCM15/BRIMS14/outlets/RFPs/TV show/Jobs Message-ID: [please send this on to your members] The first announcement is driving this email -- ICCM 2015 will be in Groningen, the Netherlands, around April 2015, on its regular (15 & 21 month) schedule. The rest indicate new publication outlets, resources, and jobs in or related to Cog Sci and in modeling. I have also included an unusual item, this time in the middle. I may send this more often with so many announcements.... If you would like to be removed, please just let me know. I maintain it by hand to keep it small. [Hypertext version available at http://acs.ist.psu.edu/iccm2015/iccm-mailing-jan2014.html] cheers, Frank Ritter frank.e.ritter at gmail.com http://www.frankritter.com **************************************************************** 1. International Conf. on Cognitive Modeling, April 2015 in Groningen, NL 2. BRiMS 2014 Call for Papers, due January 6, 2014 http://cc.ist.psu.edu/BRIMS2014/ 3. Annual Meeting of the Cognitive Science Society http://cognitivesciencesociety.org/conference_future.html submissions due 1 feb 2014 4. BRIMS 2013 Proceedings available online http://cc.ist.psu.edu/BRIMS2013/archives/2013/ 5. CMOT special issue on BRIMS 2011 published http://acs.ist.psu.edu/papers/ritterKBip.pdf 6. 
Conference on Advances in Cognitive Systems, Proceedings online http://www.cogsys.org/proceedings/2013 7. BICA 2013 program available online http://bicasociety.org/meetings/2013/bica2013program.pdf 8. 6th International Conference on Agents and AI, 6-8 March 2014 http://www.icaart.org/ 9. KOGWIS 2014 call for papers and symposia http://www.ccs.uni-tuebingen.de/kogwis14 Submissions due: 7 May 2014 10. AISB 50th conference, 1-4 April 2014 http://www.aisb50.org/ 11. Fourth ACT-R Spring School and Master Class 2014, April 7-12, 2014 http://www.ai.rug.nl/actr-springschool/ applications due 27 jan 2014 11b. Numerous books to review 12. Call for papers: Mental model ascription by intelligent agents, in Interaction Studies 14 jan 2014 deadline 13. Publication policy for Advances in Cognitive Systems [Journal] http://www.cogsys.org/ http://cogsys.org/pdf/paper-3-2-141.pdf 14. Editor change at J. of Interaction Science, and call for papers http://www.journalofinteractionscience.com/ 15. IEEE SMC: Transactions on Human-Machine Systems seeking papers http://www.ieeesmc.org/publications/index.html 16. Nat. Inst. on Dis. and Rehabilitation Res., for Computer Scientists 17. New Perspectives on the Psychology of Understanding http://www.varietiesofunderstanding.com Letters of Intent due March 1, 2014 18. Cognitive scientist to co-host TV show 19. Minding Norms, and Social Emotions, two new books http://www.frankritter.com/oxford-cma/ICCMFlyer.2.CogArchSeries.pdf 20. Cambridge U. Press, Winter Sale 2013, 20% off. 21. Assistant or Associate Professor, College of IST http://recruit.ist.psu.edu 22. [Comp-neuro] Faculty positions at Imperial College London 23. Lecturer and research Position (Assistent/in) in Neuro-Robotics, TU/Chemnitz http://www.tu-chemnitz.de/verwaltung/personal/stellen/257030_AA_Rab.php 24. Director of Human MRI Facility, Penn State http://www.la.psu.edu/facultysearch/ 25. U. 
of Iowa - Assistant Professor Positions, CS 1 jan 2014 applications get full consideration 26. Rowan U., associate professor level in neuroscience 27. Wright State, Assistant Professor in Human Cognitive Neuroscience 28. Post doctoral position in systems neuroscience and connectivity modeling Hershey Medical Center, Hershey, PA 29. Postdoctoral Research Fellow, Wright State https://jobs.wright.edu/postings/7173 30. Postdoc in modeling with Townsend & Wenger 31. Post-PhD Opportunities for US citizens at Fort Belvoir, VA deadline 1 feb 2014 32. PhD program, Applied Cognitive and Brain Sciences (ACBS), Drexel [closed, but will be interesting next year] **************************************************************** 1. International Conf. on Cognitive Modeling, April 2015 in Groningen, NL The International Conference on Cognitive Modeling will take place in April 2015 (approx. date) at RU/Groningen, in the Netherlands. The deadline for submissions will be in the fall of 2014. Further announcements will provide more details. This was announced at the conference in Ottawa this summer. **************************************************************** 2. BRiMS 2014 Call for Papers, due January 6, 2014 http://cc.ist.psu.edu/BRIMS2014/ On behalf of the BRIMS Society, we are proud to announce the 23rd Annual Conference on Behavior Representation in Modeling & Simulation, to be held at the University of California, DC campus in Washington, DC from April 1 to April 4, 2014. It's our great fortune to co-locate with the 2014 International Social Computing, Behavioral Modeling and Prediction Conference (SBP14). Arrangements still need to be made, but we hope to offer complimentary BRiMS registration for those who purchase SBP registration. Please visit the BRiMS web site at http://cc.ist.psu.edu/BRIMS2014/ where updates will be made periodically. Please visit SBP's web site at http://sbp-conference.org/ for their conference details. 
The submission deadlines reflect careful thinking on our part about your need for plenty of lead time to plan your submission, while leaving enough time to assemble a BRiMS Proceedings available to all in time for the conference. This year, BRiMS evolved into a co-Chair arrangement. Please welcome Dr. Bill Kennedy of George Mason University, who joined me as I entered my second year of chairing duties. Bill will be our in-person administrator and host of the conference, while I will be the administrative point of contact leading up to the conference. Please direct your questions to Bill (wkennedy at gmu.edu) or me (daniel.n.cassenti.civ at mail.mil) depending on the content of your query. We look forward to receiving your submission and to seeing you at the conference! Best Regards, Dan Cassenti & Bill Kennedy The BRIMS Executive Committee invites papers, posters, demos, symposia, panel discussions, and tutorials on topics related to the representation of individuals, groups, teams, and organizations in models and simulations. All submissions are peer-reviewed. Submissions are handled on-line at: http://cc.ist.psu.edu/BRIMS2014/ Please see the guidelines on the BRiMS website for format requirements and content suggestions. If you have any questions about the submission process or are unable to submit to the web site, please contact Daniel Cassenti by email (daniel.n.cassenti.civ at mail.mil) or phone 410-278-5859. ACCOMMODATIONS and REGISTRATION The conference will be held at the University of California campus in Washington, DC [!]. We are pleased to co-locate BRIMS with the 2014 International Conference on Social Computing, Behavioral-Cultural Modeling, and Prediction. Please see their web site at http://sbp-conference.org/ for more information on the conference. CONFERENCE CO-CHAIRS Daniel N. Cassenti, U.S. Army Research Laboratory William G. Kennedy, George Mason University PROGRAM CHAIRS Robert St. 
Amant, North Carolina State University David Reitter, Penn State University Webb Stacy, Aptima, Inc. **************************************************************** 3. Annual Meeting of the Cognitive Science Society http://cognitivesciencesociety.org/conference_future.html submissions due 1 feb 2014 All Submissions Due - February 1, 2014 Authors will be notified of decisions by April 1, 2014 Camera-ready copy for inclusion in the proceedings due on May 1, 2014 CogSci 2014 - Cognitive Science Meets Artificial Intelligence: Human and Artificial Agents in Interactive Contexts Quebec City, CA July 23 - July 26, 2014 Website URL for future conference: http://cognitivesciencesociety.org/conference_future.html ---- Highlights Include: Plenary Speakers: Dedre Gentner, Steven Harnad, & Minoru Asada 13th Rumelhart Prize Recipient: Ray Jackendoff Symposia: "Foundations of Social Cognition", "Moral Cognition and Computation", "The Future of Human-Agent Interaction" Cognitive scientists from around the world are invited to attend CogSci 2014, the world's premier annual conference on cognitive science. The conference represents a broad spectrum of disciplines, topics, and methodologies from across the cognitive sciences. In addition to the invited presentations, the program will be filled with reviewed submissions from the following categories: papers, symposia, presentation-based talks, member abstracts, tutorials, and workshops. Submissions must be completed electronically through the conference submissions web site. Submissions may be in any area of the cognitive sciences, including, but not limited to, anthropology, artificial intelligence, computational cognitive systems, cognitive development, cognitive neuroscience, cognitive psychology, education, linguistics, logic, machine learning, neural networks, philosophy, robotics, and social network studies. Information regarding the submission process, including opening dates for the submission website, will be posted shortly. 
http://cognitivesciencesociety.org/conference2014/submissions.html We look forward to seeing you in Quebec City. Conference Co-Chairs: Paul Bello, Marcello Guarini, Marjorie McShane, and Brian Scassellati Cognitive Science Society *********************************************************** 4. BRIMS 2013 Proceedings available online http://cc.ist.psu.edu/BRIMS2013/archives/2013/ The BRIMS 2013 proceedings are available online. They include:
Modelling the Security Analyst's Role: Effects of Similarity and Past Experience on Cyber Attack Detection
Accounting for the integration of descriptive and experiential information in a repeated prisoner's dilemma using an instance-based learning model
Decision Criteria for Model Comparison Using Cross-Fitting
A Model-based Evaluation of Trust and Situation Awareness in the Diner's Dilemma Game
A concise model for innovation diffusion combining curvature-based opinion dynamics and zealotry
A Trust-Based Framework for Information Sharing Behavior in Command and Control Environments
Advantages of ACT-R over Prolog for Natural Language Analysis
The Relational Blackboard
Differences in Performance with Changing Mental Workload as the Basis for an IMPRINT Plug-in Proposal
An ACT-R Model of Sensemaking in a Geospatial Intelligence Task
Using the Immersive Cognitive Readiness Simulator to Validate the ThreatFireTM Belt as an Operational Stressor: A Pilot Study
Integrated Simulation of Attention Distribution and Driving Behavior
Modeling trust in multi-agent systems
Simulating aggregate player behavior with learning behavior trees
Declarative to procedural tutors: A family of cognitive architecture-based tutors
Architectural considerations for modeling cognitive-emotional decision making
Trust definitions and metrics for social media analysis
Architecture for goal-driven behavior of virtual opponents in fighter pilot combat training
Examining Model Scalability through Virtual World Simulations 
**************************************************************** 5. CMOT special issue on BRIMS 2011 published http://acs.ist.psu.edu/papers/ritterKBip.pdf The special issue of Computational Mathematical and Organizational Theory (CMOT) based on the best papers of BRIMS 2011 has been published. Kennedy, G. W., Ritter, F. E., & Best, B. J. (2013). Behavioral representation in modeling and simulation introduction to CMOT special issue-BRiMS 2011. Computational Mathematical and Organizational Theory, 19(3), 283-287. Abstract: This special issue is similar to our previous special issues (Kennedy et al. in Comput. Math. Organ. Theory 16(3):217-219, 2010; 17(3):225-228, 2011) in that it includes articles based on the best conference papers of the, here, 2011 BRiMS Annual Conference. These articles were reviewed by the editors, extended to journal article length, and then peer-reviewed and revised before being accepted. The articles include: a new way to evaluate designs of interfaces for safety critical systems (Bolton); an article that extends our understanding of how to model situation awareness (SA) in a cognitive architecture (Rodgers et al.); an article that presents electroencephalography (EEG) data used to derive dynamic neurophysiologic models of engagement in teamwork (Stevens et al.); and an article that demonstrates using machine learning to generate models, with an example application of that tool (Best). After presenting a brief summary of each paper, we note some recurrent themes: task analysis, team and individual models, spatial reasoning, usability issues, and particularly models that interact with each other or with systems. **************************************************************** 6. 
Conference on Advances in Cognitive Systems, Proceedings online http://www.cogsys.org/proceedings/2013 Paper titles:
Fractal Representations and Core Geometry
A Cognitive Systems Approach to Tailoring Learner Practice
Understanding Social Interactions Using Incremental Abductive Inference
Changing Minds by Reasoning About Belief Revision: A Challenge for Cognitive Systems
Anomaly-Driven Belief Revision by Abductive Metareasoning
CRAMm - Memories for Robots Performing Everyday Manipulation Activities
A Cognitive System for Human Manipulation Action Understanding
Toward Learning High-Level Semantic Frames from Definitions
On the Representation of Inferences and their Lexicalization
Towards an Indexical Model of Situated Language Comprehension for Real-World Cognitive Agents
Integrating Meta-Level and Domain-Level Knowledge for Interpretation and Generation of Task-Oriented Dialogue
Three Lessons for Creating a Knowledge Base to Enable Explanation, Reasoning and Dialog
X Goes First: Teaching Simple Games through Multimodal Interaction
Learning Task Formulations through Situated Interactive Instruction
Narrative Fragment Creation: An Approach for Learning Narrative Knowledge
Conceptual Models of Structure and Function
Reasoning from Radically Incomplete Information: The Case of Containers
Am I Really Scared? A Multi-phase Computational Model of Emotions
Three Challenges for Research on Integrated Cognitive Systems
**************************************************************** 7. BICA 2013 program available online http://bicasociety.org/meetings/2013/bica2013program.pdf The program with abstracts for the 2013 meeting in Kiev is available online. The web site notes that it will be in Boston in 2014. **************************************************************** 8. 6th International Conference on Agents and AI, 6-8 March 2014 http://www.icaart.org/ This is a European conference on topics related to cognitive modeling. 
Its paper deadline was in September, but it is probably a recurrent event. **************************************************************** 9. KOGWIS 2014 call for papers and symposia http://www.ccs.uni-tuebingen.de/kogwis14 Submissions due: 7 May 2014 ==== FIRST CALL FOR PAPERS AND SYMPOSIA ==== KOGWIS 2014: HOW LANGUAGE AND BEHAVIOUR CONSTITUTE COGNITION 12th Biannual Conference of the German Society for Cognitive Science 29th of September - 2nd of October 2014 Submission deadline: 7 May 2014 http://www.ccs.uni-tuebingen.de/kogwis14 ---- KogWis 2014 invites submissions of extended abstracts on current work in cognitive science. Generally *all topics related to cognitive science* are welcome. Contributions that address the focus of this meeting, that is, how language and behaviour constitute cognition, are particularly encouraged. Submissions will be sorted by topic and paradigm and will be independently reviewed. Notifications of acceptance will depend on the novelty of the research, the significance of the results, and the presentation of the work. Submissions will be published in the form of online conference proceedings. Submissions of extended abstracts should not exceed 4000 characters (including spaces, references, etc.). Additional figures may be included. The document should not exceed 4 pages in total. Call for symposia: Submissions are also invited for symposium proposals on specific themes in cognitive science. Symposia should be interdisciplinary, with 4-6 speakers who can offer different perspectives on the proposed topic. KogWis cannot provide any financial support for symposium participants, so all costs, including registration, need to be covered by the participants themselves. Symposium proposals should present the symposium theme along with the moderator's and speakers' names and short summaries of the individual contributions. General chair: Martin V. 
Butz Local organizers: Anna Belardinelli, Elisabeth Hein and Jan Kneissler Anna Belardinelli, PhD Cognitive Modeling, Department of CS University of Tuebingen http://www.wsi.uni-tuebingen.de/lehrstuehle/cognitive-modeling/staff/staff/anna-belardinelli.html **************************************************************** 10. AISB 50th Convention, 1-4 April 2014 http://www.aisb50.org/ The AISB 2014 Convention at Goldsmiths, University of London (hereafter AISB-50) will commemorate both 50 years since the founding of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (the AISB) and sixty years since the death of Alan Turing, founding father of both Computer Science and Artificial Intelligence. The convention will be held at Goldsmiths, University of London, UK from the 1st to the 4th April 2014, and will follow the same overall structure as previous conventions. This will include a set of co-located symposia hosting events that include talks, posters, panels, discussions, demonstrations, and outreach sessions. A FULL COLOUR COMMEMORATIVE POSTER: https://www.dropbox.com/s/xoxao5773i6d1t0/AISB50-Poster.pdf PLENARY SPEAKERS * Terrence Deacon * John Searle * Susan Stepney * Lucy Suchmann PUBLIC LECTURES * John Barnden * Simon Colton GENERAL CHAIR: J. Mark Bishop SECRETARY AND LOCAL CHAIR: Andrew Martin (mr.a.martin at gmail.com) CONFERENCE WEB SITE: http://www.aisb50.org/ SYMPOSIA CONTACTS: * A-EYE: An exhibition of art and nature inspired computation: Mohammad Majid al-Rifaie * AI & Animal welfare: Anna Zamansky * AI & Games : Daniela M. Romano * Art, Graphics, Perception, Embodiment: modelling the creative mind : Frederic Leymarie * Artificial Ethics & medicine: Steve Torrance * Computational Creativity : Mohammad Majid al-Rifaie * Computational Intelligence : Edward Keedwell * Computational Scientific Discovery : Mark.Addis at bcu.ac.uk * Computing and Philosophy: is computation observer-relative? 
: Yasemin Erden * Consciousness without inner models : jkevin.oregan at gmail.com * Culture of the Artificial : Matthew Fuller * Embodied Cognition, Acting and Performance: deirdre.mclaughlin at cssd.ac.uk * Evolutionary Computing : larry.bull at uwe.ac.uk * Future of Art & Computing : AnnaDumitriu at hotmail.com * History & Philosophy of Programming : Giuseppe Primiero * Killer robots : Noel Sharkey * Learning, gesture and interaction : Marco Gillies * Live algorithms : Tim Blackwell * New Frontiers in Human-Robot Interaction: Kerstin Dautenhahn * New perspectives on colour : Kate Devlin * Questions, discourse and dialogue: 20 years after Making it Explicit: Rodger Kibble * Reconceptualising mental illness: Joel.Parthemore at semiotik.lu.se * Representation of reality: humans, animals and machines: raffa.giovagnoli at tiscali.it * Robotics and Language : Katerina Pastra * Sex with Robots: davidlevylondon at yahoo.com * Varieties of Enactivism : Mario Eduardo Villalobos * Virtual Worlds & Ecosystems : Frederic Leymarie **************************************************************** 11. Fourth ACT-R Spring School and Master Class 2014, April 7-12, 2014 http://www.ai.rug.nl/actr-springschool/ applications due 27 jan 2014 Organizers: Niels Taatgen, Hedderik van Rijn, Jelmer Borst and Stefan Wierda University of Groningen, Netherlands April 7-12, 2014 ACT-R is a cognitive theory and simulation system for developing cognitive models for tasks that vary from simple reaction time experiments to driving a car, learning algebra and air traffic control. Following previous ACT-R events in 2010, 2011 and 2013, the University of Groningen will host a spring school and master class. Spring School Participants will follow a compressed five-day version of the traditional summer school curriculum. The standard curriculum is structured as a set of six units, of which we will cover five in the course of the week. 
Each unit lasts a day and involves a morning theory lecture, an afternoon discussion session on advanced topics, and an assignment which participants are expected to complete during the day. The last day will be devoted to discussions and presentations by participants. Computing facilities will be provided, or attendees can bring their own laptop on which the ACT-R software will be installed. To provide an optimal learning environment, admission is limited. Prospective participants should submit an application by January 27, consisting of a curriculum vitae and a statement of purpose. Demonstrated experience with a modeling formalism similar to ACT-R will strengthen the application, as will general programming experience. Applicants will be notified of admission by February 3. Master Class: Work on your own project Organized in parallel with the spring school, the master class offers the opportunity for modelers to work on their own projects with guidance from experienced ACT-R modelers. Note that signing up for the Master Class assumes some prior ACT-R experience, either through self-study or from an earlier ACT-R spring or summer school. Please also register before January 27th. PRIM/Actransfer tutorial For those interested in modeling transfer, a tutorial will be offered about the new Actransfer extension to ACT-R. See http://www.ai.rug.nl/~niels/actransfer.html for more information. Registration fees and housing Registration fee: Euro 200 Housing will be offered in the university guesthouse for approximately 61 Euro/day (single; double rooms are around Euro 78.50; breakfast is 10 Euro/person). Registration To apply to the 2014 Spring School or Master Class, send an email to Hedderik van Rijn (hedderik at van-rijn.org) and attach the requested documents before January 27, 2014. 
**************************************************************** 11b. Numerous books to review Several recently published books are available for review; their publishers will provide a copy to facilitate a review in a magazine or journal. If you are interested, email me and I'll forward you to the publisher. Game Analytics: Maximizing the Value of Player Data [Hardcover] Magy Seif El-Nasr, Anders Drachen, Alessandro Canossa (Eds) http://www.amazon.com/Game-Analytics-Maximizing-Value-Player/dp/1447147685 http://www.springer.com/computer/hci/book/978-1-4471-4768-8 How to build a brain, A Neural Architecture for Biological Cognition Chris Eliasmith, OUP. http://nengo.ca/build-a-brain Minding Norms: Mechanisms and dynamics of social order in agent societies Rosaria Conte, Giulia Andrighetto, and Marco Campennì (eds) http://global.oup.com/academic/product/minding-norms-9780199812677 Social Emotions in Nature and Artifact Jonathan Gratch and Stacy Marsella (eds). http://global.oup.com/academic/product/social-emotions-in-nature-and-artifact-9780195387643 Running Behavioral studies with human participants Ritter, Kim, Morgan, & Carlson, Sage http://www.sagepub.com/textbooks/Book237263 **************************************************************** 12. Call for papers: Mental model ascription by intelligent agents, in Interaction Studies 14 jan 2014 deadline Call for Papers Interaction Studies: Special Issue on Mental Model Ascription by Intelligent Agents Mental model ascription, otherwise known as "mindreading", involves inferring features of another human or artificial agent that cannot be directly observed, such as that agent's beliefs, plans, goals, intentions, personality traits, mental and emotional states, and knowledge about the world. This capability is an essential functionality of intelligent agents if they are to engage in sophisticated collaborations with people. 
The computational modeling of mindreading offers an excellent opportunity to explore the interactions of cognitive capabilities, such as high-level perception (including language understanding and vision), theory of mind, decision-making, inferencing, reasoning under uncertainty, plan recognition and memory management. Contributions are sought that will advance our understanding of mindreading, with priority being given to algorithmic and implemented (or under implementation) approaches that address the practical necessity of computing prerequisite inputs. This volume was inspired by successful workshops at CogSci 2012 (Modeling the Perception of Intentions) and CogSci 2013 (Mental Model Ascription by Language-Enabled Intelligent Agents). The deadline for submissions is January 14, 2014. Submission requirements and instructions can be found at http://benjamins.com/#catalog/journals/is/submission. Please address questions to the special issue editor, Marge McShane, at mcsham2 at rpi.edu. **************************************************************** 13. Publication policy for Advances in Cognitive Systems [Journal] http://www.cogsys.org/ http://cogsys.org/pdf/paper-3-2-141.pdf Advances in Cognitive Systems (http://www.cogsys.org/) is a new electronic journal, now in its second year, that publishes contributions in the original spirit of AI, which aimed to explain the mind in computational terms and reproduce the entire range of human cognition in computational artifacts. Advances in Cognitive Systems is associated with an annual conference of the same name, the first instance of which took place last December. The second volume of the journal served as the electronic proceedings of that meeting. I am writing to tell you about a new policy that alters the relationship between journal and conference slightly: - Authors are still welcome to submit papers for publication either to the electronic journal or to the annual conference. 
- If a journal submission (16 pages) is received at least one month before the conference deadline and is accepted for publication, its authors will be invited to present a talk at the conference. - If a long submission (16 pages) to the conference is accepted for presentation at the meeting, it may either be included in the annual proceedings or be invited to appear in the journal. - If a short submission (8 pages) to the conference is accepted for presentation at the meeting, it will be included in the annual proceedings but not in the journal. This policy should spread submissions across the year, giving more flexibility to authors and reducing the load on reviewers while letting more researchers present their results at the conference. You can find more details at http://www.cogsys.org/instructions/ and in the call for papers to the Annual Conference on Advances in Cognitive Systems, which should be available shortly. Sincerely, Pat Langley, Editor Advances in Cognitive Systems **************************************************************** 14. Editor change at J. of Interaction Science, and call for papers http://www.journalofinteractionscience.com/ Dr. Ray Adams has resigned as editor in chief. The editorial director at Springer, Beverley Ford, asked me to serve as the sole editor-in-chief. It is not an easy job to fill Ray's shoes, but with the help of colleagues and great editors, we are working as a team and we are up to the job. The good news is that after a slow start, JoIS is very close to the first issue. As soon as one more paper has been accepted after peer review, we will be online. We are always looking for submissions, and especially right now: Would you share the journal address with potential authors? Many thanks! Best wishes, Susanne Professor Susanne Bahr and I would like to tell you that our dynamic, new "high-impact" journal, the "Journal of Interaction Science" will be published by Springer next year. 
We would also like to invite you to contribute a paper to our Journal, possibly on how to introduce more rigour into interaction design. The Journal of Interaction Science (JoIS) has a unique focus to promote research based on: scientific approaches to interaction design; the use of interaction paradigms for fundamental research in human and computer centred sciences; and the use of computer science to support scientific research. On the one hand, our scope is narrowly focused specifically on promoting scientific HCI; on the other hand, it is broadly focused, being designed to attract a wide audience and research by the large number of human-centred scientists who do not consider current HCI journals to be sufficiently relevant to them. Interaction science focuses on integrating the study of people with that of artifacts and the sciences involved. We are looking for significant, scientific papers that report empirical results, substantial new theories, methodological innovations and important meta-analyses. With best wishes, Susanne Bahr Gisela Susanne Bahr, Associate Professor Florida Institute of Technology Ph.D. Experimental Psychology - Cognition ABD Ph.D. Computer Sciences Editor in Chief Journal of Interaction Science School of Psychology, Florida Institute of Technology Melbourne, Florida 32901-6975 gbahr at fit.edu, 321.674.8104 **************************************************************** 15. IEEE SMC: Transactions on Human-Machine Systems seeking papers http://www.ieeesmc.org/publications/index.html The IEEE Systems, Man, and Cybernetics Society has a journal, IEEE SMC: Transactions on Human-Machine Systems. This used to be IEEE SMC: Part C: Applications & Reviews, but has recently been rebadged. It is asking its editorial board to encourage others to submit high-quality papers, and I believe many of you will find that your research fits this journal. 
Details are available at: http://www.ieeesmc.org/publications/index.html http://mc.manuscriptcentral.com/thms [If you have questions, please PM me, email me, see me in the hall, or call. -Frank] **************************************************************** 16. Nat. Inst. on Dis. and Rehabilitation Res., for Computer Scientists [There may be some proposal calls in here. This announcement is not itself a call, but a pointer to an area. I have a lot of time for Clayton, and while I don't know much about this, I know him and there is at least an audience if not money.] From: "Lewis, Clayton (Contractor)" To: Clayton Lewis Date: Wed, 8 May 2013 16:09:49 -0500 Subject: NIDRR for Computer Scientists In 2011 I began working as a consultant to the National Institute on Disability and Rehabilitation Research (NIDRR), helping to develop an initiative on cloud computing for people with disabilities (NIDRR provided early funding for the Global Public Inclusive Infrastructure, gpii.net, of which you may have heard). In that role I've become aware that there are significant needs and opportunities for computer science research in support of NIDRR's goals, but not many computer scientists know about NIDRR and its programs. I'm writing to you to call your attention to these opportunities, including a brand new announcement on funding for research on inclusive cloud and web computing. I've attached a writeup, "NIDRR for Computer Scientists". Please pass it along to any colleagues who may be interested. NIDRR for Computer Scientists. What is NIDRR? NIDRR (the National Institute on Disability and Rehabilitation Research) is an agency within the US Department of Education that funds about $110M of research per year (of course this amount varies) aimed at improving the lives of people with disabilities. Why am I writing this? I'm a long-time computer science faculty member at the University of Colorado, Boulder. 
In 2011 I began working as a consultant to NIDRR, helping to develop an initiative on cloud computing for people with disabilities (NIDRR provided early funding for the Global Public Inclusive Infrastructure, gpii.net, of which you may have heard). In that role I've become aware that there are significant needs and opportunities for computer science research in support of NIDRR's goals, but not many computer scientists know about NIDRR and its programs. Why should computer scientists be interested in NIDRR? As information and communication technologies become important in more aspects of life, and as the ability of these technologies to provide useful assistance grows, there are more and more opportunities for computer science research to contribute to NIDRR's programs. Cloud computing, data integration and data analytics for service effectiveness improvement, recognition technology, social software, accessible and autonomous transportation systems, natural language processing, and configurable user interface technology all have a role in enabling people with disabilities to participate in society more fully and independently. In a typical year NIDRR funds a wide range of projects, from multi-year research and engineering centers, aimed at designated aspects of disability research, to smaller "field initiated" projects proposed by investigators. If you are not already active in disability research, your chances of success will likely be greater if you collaborate with other investigators who have knowledge of and experience in disability research, though you are free to apply on your own. Many of NIDRR's peer reviewers are disability researchers, and of course must judge that proposals are well conceived as contributions to that field. Proposals that represent excellent computer science, but are weak in connecting to the needs of people with disabilities, are unlikely to be competitive. Collaboration can solve this problem. 
You could develop such collaboration in more than one way. You could approach local colleagues who have the necessary experience, or you could reach out to investigators nationally who work on problems to which the computing technology on which you work could be relevant. For NIDRR-funded projects, current and completed, there is a convenient search facility at http://www.naric.com/?q=en/ProgramDatabase, where you can find projects whose descriptions mention your institution or your state, or particular topics of interest to you. Here are some professional associations that publish papers on technology and disability that you can explore to see who is doing what on the national and international scene: ACM Special Interest Group on Accessible Computing (SIGACCESS), proceedings searchable at http://dl.acm.org/sig.cfm?id=SP1530; IEEE Engineering in Medicine and Biology Society and IEEE Systems, Man, and Cybernetics Society, publications searchable in the IEEE Xplore Digital Library (http://ieeexplore.ieee.org); the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA), proceedings at http://www.resna.org/conference/proceedings/index.dot Another good way to familiarize yourself with NIDRR and its programs is to serve as a reviewer. NIDRR is always looking for peer reviewers with a variety of specific subject-matter expertise. See http://www2.ed.gov/about/offices/list/osers/nidrr/nidrrpeerreview.html for more information. FAQ How can I find out more about NIDRR funding programs? NIDRR has a nice Web resource for potential applicants at http://www2.ed.gov/programs/nidrr/applyingforanidrrgrant.html. This includes a summary of its programs, and suggestions about how to track new funding opportunities as they arise. What NIDRR programs are likely to be of most interest to computer scientists? 
Field Initiated Projects can address any of a wide range of issues relating to people with disabilities, including development of new technologies, employment, independent living, and medical rehabilitation, for any disability populations, with a wide range of research approaches. Disability and Rehabilitation Research Projects are invited to address particular topic areas, or "priorities", and an increasing number of these include topics in computer science. NIDRR also participates in the SBIR (Small Business Innovation Research) program. How do I apply? The Web resource mentioned above, http://www2.ed.gov/programs/nidrr/applyingforanidrrgrant.html , has detailed information for applicants, including tips on writing a strong proposal. How does NIDRR evaluate proposals? NIDRR uses a peer review process. Unlike NSF, but like NIH, NIDRR evaluation is based on numeric scoring guided by a rubric, so it is very important that all proposal requirements are carefully addressed. Here, too, collaboration with an experienced disability researcher can be a big help. Being at NIDRR has made me a big fan of the agency and the contributions its grantees have made and are making. I hope you'll investigate the opportunities NIDRR offers for computer scientists to participate in this important and satisfying work. Please let me know if I can help. Sincerely, Clayton Lewis, NIDRR Consultant **************************************************************** 17. New Perspectives on the Psychology of Understanding http://www.varietiesofunderstanding.com Letters of Intent due March 1, 2014 New Perspectives on the Psychology of Understanding. Fordham University, with the support of a grant from the John Templeton Foundation, invites proposals for the "New Perspectives on the Psychology of Understanding" funding initiative. Our aim is to encourage research from both new and established scholars working on projects related to understanding in its many forms. 
This $1.2 million RFP is intended to support empirical work in cognitive, developmental, educational, and other areas of psychology. Proposals can request between $50,000 and $225,000 for projects not to exceed two years in duration. We intend to make 7-8 awards. Timeline Letters of Intent due March 1, 2014 Invited full proposals due April 15, 2014 Research begins July 1, 2014 For more information, visit: www.varietiesofunderstanding.com All questions should be directed to: psychology at varietiesofunderstanding.com Psychology Director - Tania Lombrozo, Assistant Professor of Psychology, University of California, Berkeley Project Leader - Stephen Grimm, Associate Professor of Philosophy, Fordham University **************************************************************** 18. Cognitive scientist to co-host TV show [!] http://bellmediapr.ca/Network/Discovery-World/Press/STEPHEN-HAWKINGS-BRAVE-NEW-WORLD-Glimpses-into-the-Future-Nov-15-on-Discovery-World- Science is about to change the world. [!] Professor Stephen Hawking examines cutting-edge breakthroughs and their implications for the future in STEPHEN HAWKING'S BRAVE NEW WORLD. The six-part, original Canadian series returns for a second season, premiering Friday, November 15 at 8 p.m. ET/10 p.m. PT on Discovery World. Hawking is joined by a crack team of scientists - including Dr. Carin Bondar and Professor Chris Eliasmith from Canada, Dr. Daniel Kraft from the U.S., and Professor Jim Al-Khalili and Dr. Aarathi Prasad from the U.K. They travel the world to investigate the latest innovations, from gecko skin-like material with adhesive qualities that mimic Spiderman's ability to scale buildings; to realistic digital avatars and the latest gaming technology that will change the face of the entertainment industry; to a 3D printer that can generate live human tissue; and a Canadian innovation using submarines for deep space training. [Haven't had something like this to announce before. These aired in the Fall in Canada. 
There are several more episodes, which will appear, I believe, in the US (Science Discovery) and around the world (National Geographic Channel) and other locations later.] **************************************************************** 19. Minding Norms, and Social Emotions, two new books http://www.frankritter.com/oxford-cma/ICCMFlyer.2.CogArchSeries.pdf Two new books have been published in the Oxford Series on Cognitive Models and Architectures: Minding Norms: Mechanisms and dynamics of social order in agent societies Rosaria Conte, Giulia Andrighetto, and Marco Campennì (eds) http://global.oup.com/academic/product/minding-norms-9780199812677 Social Emotions in Nature and Artifact, Jonathan Gratch and Stacy Marsella (eds). http://global.oup.com/academic/product/social-emotions-in-nature-and-artifact-9780195387643 A flyer describing them and the series, offering a 20% discount, is at http://www.frankritter.com/oxford-cma/ICCMFlyer.2.CogArchSeries.pdf **************************************************************** 20. Cambridge U. Press, Winter Sale 2013, 20% off. http://www.cambridge.org/us/knowledge/academic_discountpromotion/?site_locale=en_US&code=OURWINTERBEST Cambridge U. Press also has a 20% discount going. **************************************************************** 21. Assistant or Associate Professor, College of IST http://recruit.ist.psu.edu The College of Information Sciences and Technology at The Pennsylvania State University is a College that emphasizes a) systems-level thinking to approach global, societal problems, b) multiple methodologies in the pursuit of interdisciplinary research and design, and c) active, collaborative learning to support transformative teaching. To learn more about our vision, mission, goals, structure, faculty and students, please go to http://ist.psu.edu. 
We are seeking to fill multiple positions at the Assistant or Associate Professor level in our ranks of tenure-track faculty members, who will assist our college in attaining its goals in education, research and service to the community. The College has strengths in six key areas: 1) Computational Informatics and Science; 2) Organizational Informatics; 3) Social Policy, Economics and Informatics; 4) Human-Computer Interaction; 5) Cognition and Networked Intelligent Systems; and 6) Security, Privacy and Informatics. We seek applicants who show clear evidence that they are, or will become, leading scholars and premier teachers in their fields, and who are interested in being part of a vibrant, civil and diverse academic community. Although we welcome applications from a broad variety of areas that match the research interests in the college, we are particularly interested in applicants who would like to pursue research and teaching in the following areas: 1) Enterprise Architecture; 2) Biomedical/Health Informatics; 3) Computational Informatics; 4) Security & Risk Analysis. We are interested in applicants who approach these areas from a social, cognitive, or computational perspective, or a combination of these perspectives. Qualified candidates are invited to submit their curriculum vitae, a summary of research and teaching plans, as well as the contact information of four persons who will write letters of recommendation at http://recruit.ist.psu.edu. For questions, please contact Dr. Prasenjit Mitra, Faculty Search Committee Chair, 313F IST Building, College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802-6823 or via email facsearch at ist.psu.edu. Review of applications will begin on October 15, 2013. [informal enquiries to David Reitter or to me] **************************************************************** 22. 
[Comp-neuro] Faculty positions at Imperial College London Date: Tue, 10 Dec 2013 10:32:13 +0000 From: Simon Schultz Subject: [Comp-neuro] Faculty positions at Imperial College London There is currently an open round for Lecturers/Senior Lecturers (US equivalent Assistant/Associate Professors) in the Department of Bioengineering at Imperial College London. I would like to encourage candidates in the area of neurotechnology, and particularly translational/clinical neurotechnology, to apply in this round. We have recently been awarded a £10M EPSRC Centre for Doctoral Training in Neurotechnology, which will award approximately fourteen 4-year PhD studentships per year. Successful faculty applicants in this area would become members of the new Imperial College Centre for Neurotechnology. ---- Lecturers/Senior Lecturers (open call), Imperial College, Bioengineering Imperial College London - Department of Bioengineering Lecturer salary range £45,040 to £50,190 pa Senior Lecturer minimum salary £55,340 pa Due to rapid growth and the strategic importance of the discipline of bioengineering at Imperial College London, the Department of Bioengineering wishes to appoint a number of individuals at Lecturer or Senior Lecturer grade. This is an open call in which we are looking for outstanding candidates in any field of Bioengineering. Additionally, as the department is making multiple appointments for strategic reasons, it particularly welcomes applications from individuals working in one of the two areas listed below: * Regenerative Medicine/Tissue Engineering * Bioengineering of cancer detection, diagnosis, pathogenesis and/or treatment Candidates should have an advanced degree (PhD or equivalent) in an appropriate science or engineering discipline and will have demonstrated an ability to generate and execute research at an internationally-leading level. Evidence of teaching at all levels is also required. 
Professor Anthony Bull: telephone 020 7594 5186 Closing date: 05 January 2014. **************************************************************** 23. Lecturer and Research Position (Assistent/in) in Neuro-Robotics, TU/Chemnitz http://www.tu-chemnitz.de/verwaltung/personal/stellen/257030_AA_Rab.php Lecturer and Research Position (Akademische/r Assistent/in) in Neuro-Robotics The position is available at Chemnitz University of Technology in the Department of Computer Science within the Professorship of Artificial Intelligence. It requires teaching and research. Teaching amounts to about 4 hours per week during the semester and involves lectures and exercises in robotics and neuro-robotics as well as exercises in artificial intelligence or image processing. The candidate is expected to contribute to research in neuro-robotics, e.g. to develop brain-inspired models of motor or cognitive processes running on robotic platforms. He or she should have a PhD in computer science or a related field, e.g. electrical engineering. Prior experience in robotics or neuro-computational modeling is advantageous. Good English language skills are necessary. Good German is initially not required, but the candidate should have an interest in learning German. [This is true; it is useful to know some German (or a German) when living in Chemnitz!] We offer a stimulating international and interdisciplinary environment. Available and recently ordered robotic platforms include an iCub head, two Nao robots, a Koala with stereo pan-tilt vision and several K-Junior V2 robots. The salary is according to German standards (E 13 TV-L or A 13). The position is initially for 4 years, but can be extended. The starting date is April 2014 or earlier. Chemnitz is the third-largest city of the state of Saxony and close to scenic mountains. Major cities nearby are Leipzig and Dresden, with a rich tradition of music and culture. 
Further details (in German) can be found here: http://www.tu-chemnitz.de/verwaltung/personal/stellen/257030_AA_Rab.php Applications should be sent by email (preferably in PDF format) to fred.hamker at informatik.tu-chemnitz.de. The deadline was 30.09.2013, but applications will be considered until the position is filled. In addition to a CV, the candidate should provide an overview of his or her planned research for the next 4 years. **************************************************************** 24. Director of Human MRI Facility, Penn State http://www.la.psu.edu/facultysearch/ NEUROSCIENCE, Director of Human MRI Facility The Department of Psychology at Penn State, http://psych.la.psu.edu/ is recruiting a neuroscientist (associate professor or professor) who also will serve as Director of the Human MRI Facility at the Social, Life, and Engineering Sciences Imaging Center, or SLEIC (http://www.imaging.psu.edu). The substantive focus of research is open; expertise in advanced data analysis techniques for fMRI data is welcome, but not required. The Director will affiliate with one or more of the Department's graduate program areas (cognitive, developmental, social, clinical, and industrial/organizational psychology). Rich opportunities exist for collaboration across the department and across the campus, including a range of centers such as the Center for Language Science (http://cls.psu.edu/), the Child Study Center (http://csc.psych.psu.edu/), and the Center for Brain, Behavior, and Cognition (http://www.huck.psu.edu/center/brain-behavior-cognition). The Director position is co-funded by the Social Science Research Institute (http://www.ssri.psu.edu/). Candidates are expected to have a record of excellence in research and teaching, and a history of external funding. Review of applications for the position begins immediately and will continue until the position is filled. 
Candidates should submit a letter of application including concise statements of research and teaching interests, a CV, and selected (p)reprints. Letters of recommendation will be requested from applicants selected as finalists. Electronic submission is strongly preferred; please submit materials at http://www.la.psu.edu/facultysearch/ If unable to submit electronically, mail materials to the Neuroscience Faculty Search Committee - Box A, Department of Psychology, Penn State, University Park, PA 16802. Questions regarding the application process can be emailed to Judy Bowman, jak8 at psu.edu, and questions regarding the position can be sent to the Chair, Sheri Berenbaum, sab31 at psu.edu. We especially encourage applications from individuals of diverse backgrounds. Employment will require successful completion of background check(s) in accordance with University policies. Penn State is committed to affirmative action, equal opportunity and the diversity of its workforce. **************************************************************** 25. U. of Iowa, Assistant Professor Positions, CS [applications received by 1 Jan 2014 get full consideration] Pending final budget approval, the Computer Science Department seeks three tenure-track faculty at the level of assistant professor starting August of 2014 as part of a new institution-wide initiative in informatics. The new initiative is intended to strengthen expertise and infrastructure in informatics, more specifically, data analytics, systems software, machine learning, theory and algorithms, embedded systems, networks and smart sensors, computer graphics and visualization, as well as areas that bridge core informatics to other disciplines. Areas of particular interest in this first round of hiring include i) machine learning, ii) data science and visualization, and iii) device and network centric software. Applications received by January 1, 2014, are assured of full consideration. 
Education Requirement: Candidates must hold a PhD in computer science, informatics, or a closely related discipline. Appointments will be made within the Computer Science Department, which offers BA, BS, MCS, and PhD degrees in computer science, and BA and BS degrees in Informatics. Duties and Required Qualifications: Duties include conducting externally funded research in the candidate's area of expertise, teaching undergraduate and graduate computer science and/or informatics courses, supervising graduate student research, and making service contributions to the Department, the College, the University, and the discipline. Successful candidates must demonstrate potential for research and teaching excellence in the environment of a major research university, and will be expected to participate in collaborative research as part of the interdisciplinary informatics initiative. Desirable Qualifications: Demonstrated interest in solving interdisciplinary problems, the ability to work with interdisciplinary teams, and prior teaching experience. Both the Department and the College of Liberal Arts and Sciences are strongly committed to gender and ethnic diversity; the strategic plans of the University, College, and Department reflect this commitment. Women and minorities are encouraged to apply. Applications should contain a CV, research and teaching statements, and three letters of recommendation. Apply at http://vinci.cs.uiowa.edu/recruit [From: Juan Pablo Hourcade, to CHI-JOBS at LISTSERV.ACM.ORG] **************************************************************** 26. Rowan U., associate professor level in neuroscience [From a former PSU grad student:] I am now a faculty member at Rowan University (in Glassboro, NJ - about 30 minutes south of Philadelphia). We are actively searching for an associate professor-level neuroscience person. The ad will also accommodate an assistant professor-level person. 
The job description is located at: http://rowanuniversity.hodesiq.com/jobs/assistant-associate-professor-neuroscience-school-of-biomedical-sciences-glassboro-new-jersey-job-3974550 Please distribute it among your former students. (In the coming years, I anticipate that we will be searching again for neuroscience people). Tabbetha A. Dobbins dobbins at rowan.edu 856-256-4366 Department of Physics & Astronomy Rowan University **************************************************************** 27. Wright State, Assistant Professor in Human Cognitive Neuroscience The Psychology Department at Wright State University seeks applicants for a tenure-track Assistant Professor position in Human Cognitive Neuroscience. Please forward the following link to eligible candidates. http://science-math.wright.edu/psychology/about/faculty-search-in-human-cognitive-neuroscience Thank you! Ion Juvina, Ph.D., ion.juvina at wright.edu Psychology, Wright State University http://www.wright.edu/cosm/departments/psychology/faculty/juvina.html http://psych-scholar.wright.edu/ijuvina/ **************************************************************** 28. Postdoctoral position in systems neuroscience and connectivity modeling Hershey Medical Center, Hershey, PA Funding sources: PA Tobacco Settlement Fund; Social Science Research Institute, University Park Duration: 2 years We are seeking a highly motivated Postdoctoral Fellow in the area of clinical/cognitive neuroscience, brain imaging, and network modeling. The successful candidate will work in collaboration with Dr. Hillary in Psychology, Dr. Reka Albert of Physics and Dr. Peter Molenaar in Human Development and Family Studies, focusing on time series analysis, graph theory, and connectivity modeling of human brain imaging data (high density EEG, BOLD fMRI). 
The primary responsibility of this position is to facilitate ongoing research examining neural plasticity after severe traumatic brain injury in humans. There is also keen interest in having this position support the development of novel methods for understanding plasticity from a systems neuroscience perspective. This includes prospective data collection as well as analysis of existing data sets. Current lab goals aim to integrate: 1) signal processing (i.e., non-stationarity in time series data; cross-frequency coupling), 2) large scale connectivity analysis (e.g., graph theory), 3) machine learning, and 4) novel methods isolating regions of interest. A Doctorate (M.D. and/or Ph.D.) degree is required or anticipated in the calendar year. Excellent verbal and written communication skills and a background in computational modeling (broadly defined) are required, and programming experience is preferred. Please send a CV, cover letter, and the names and contact information for three references to fhillary at psu.edu and make sure to attach a [CV?]. Contact: fhillary at psu.edu Frank G. Hillary, PhD Associate Professor of Psychology **************************************************************** 29. Postdoctoral Research Fellow, Wright State https://jobs.wright.edu/postings/7173 Postdoctoral Research Fellow and Research Assistant in Human Factors in Surgical Simulation and Training Department of Biomedical, Industrial and Human Factors Engineering Wright State University We are seeking a highly qualified postdoctoral research associate and a research assistant to perform research in surgical simulation and training. This multidisciplinary research involves engineers and physicians from Wright State University, Miami Valley Hospital, and other medical institutions in Ohio. Successful candidates will hold one-year appointments, renewable pending availability of funds. Job Title: Postdoctoral Research Associate Qualifications: 1. Earned Ph.D. 
degree in human factors engineering, experimental psychology, biomedical engineering, computer science, or equivalent, with an interest in medical devices and systems design, virtual reality simulation, haptics, and/or human performance evaluation and training. 2. Ability to work independently as well as collaboratively on research projects. 3. Excellent communication skills, both verbal and written. 4. Experience in conducting research with human and animal subjects, using both quantitative and qualitative methodologies, and the IRB and IACUC process. 5. Experience with virtual or augmented reality, haptic devices, HCI and UI design, programming in C, C++, OpenGL, and statistical data analysis packages such as SAS, SPSS, or R. 6. Ability to prioritise tasks, manage team members, and disseminate results in a timely manner. Responsibilities: 1. Conduct literature review. 2. Conduct field studies and controlled laboratory experiments in the hospital operating room, animal labs, and simulation labs. 3. Perform task analysis, cognitive task analysis, work domain analysis, etc. of the surgical environment and of emerging surgical techniques. 4. Manage project activities and team members. 5. Assist in writing grant proposals and developing research ideas. 6. Develop and design data acquisition apparatus and measurement protocols. 7. Co-author conference papers and peer-reviewed journal papers. To submit an application: https://jobs.wright.edu/postings/7173 **************************************************************** 30. Postdoc in modeling with Townsend & Wenger The Laboratory for Mathematical Psychology at Indiana University (Jim Townsend) and the Visual Neuroscience Laboratory at the University of Oklahoma (Michael Wenger) seek a postdoctoral researcher in cognitive modeling and experimental psychology. This position is funded by the National Science Foundation. This person will be in residence at Indiana University and will be responsible for assisting Drs. 
Townsend and Wenger on a three-year project titled "Building a Unified Theory-Driven Methodology for Identification of Elementary Cognitive Systems." Duties will include overseeing graduate students, undergraduate RAs, and technical assistants; planning and carrying out experiments; data analysis; model testing; and preparing manuscripts for publication. Applicants should have a Ph.D. in a relevant field; background in cognitive modeling, mathematical statistics, and experimental behavioral science; demonstrated scientific expertise, with publications in refereed journals; extensive experience in experimental design and development; ability to create quantitative models of cognitive processing; solid programming skills, preferably with R, Matlab, and Python; experience writing grants and manuscripts; ability to work and lead in a team environment; ability to work independently on designated projects; and demonstrated interpersonal skills. The position begins as soon as it is filled and is a one-year appointment. Renewals for a second and third year are possible and are contingent on availability of funds, satisfactory performance, acceptable progress in carrying out the assigned duties, and mutual agreement. This is a 100% appointment. Initial salary is $40,000/year. Please submit a cover letter including a statement of career goals (one paragraph), curriculum vitae, publications, and contact information of three references. Please send these materials to: Michael Wenger, (mjw327 at cornell.edu) / (michael.j.wenger at ou.edu) Center for Applied Social Research, The University of Oklahoma, 3100 Monitor Avenue, Suite 100, Norman OK 73019. M. J. Wenger, Ph.D. 
Visiting Professor Division of Nutritional Sciences Cornell University Ithaca, NY 14850 Department of Psychology Graduate Program in Cellular and Behavioral Neurobiology The University of Oklahoma Norman OK 73019 phone: (405) 325-3846 e-mail: michael.j.wenger at ou.edu / mjw327 at cornell.edu web: https://sites.google.com/site/ouvnlab **************************************************************** 31. Post-PhD Opportunities for US citizens at Fort Belvoir, VA [deadline 1 Feb 2014] We are currently looking for four US citizens with specific technical expertise to place at the Defense Threat Reduction Agency (DTRA), located at Fort Belvoir, within the next few months. Candidates must be eligible to attain a security clearance. The positions are in Thrust Areas 2, 3, 4 and 5. Post-Doctoral Research PIPELINE for Fellows Program The objective of this fellowship program is to establish and sustain a long-term process through which Penn State University will develop and execute a Post-Doctoral Research PIPELINE for Fellows Program to address critical scientific, technology and engineering needs for reducing the threat from Weapons of Mass Destruction (WMD). This project will enable DTRA to utilize mission-critical expertise possessed by highly qualified faculty and graduate students (nearing completion of their degree) who hold doctoral or terminal professional degrees in relevant scientific, technical and engineering disciplines. Post-Doctoral / Masters Fellows will be selected based upon their ability to enhance the joint DTRA-Strategic Partnership mission requirements. 
Key science and technology skills include: nuclear and radiation physics; weapons engineering; structural, electrical and mechanical engineering; broad-based nano-technological engineering and applications; weapons effects and system response technologies; physics, chemistry and biological sciences related to detection, characterization and destruction of WMD materials; medical and pharmaceutical sciences; information technology, modeling, data visualization and advanced computational sciences; social, adversarial and behavioral modeling, science and analysis. [Email jbm18 at psu.edu for an application; some hard science background is required, but in my experience this is more about looking for intelligence in the applicant than calculus per se] Application deadline is 02/01/2014 by email. Further Details: For a qualified candidate (a US citizen capable of obtaining a security clearance at the Secret level), this opportunity would provide the following for one year working at DTRA (Fort Belvoir): $83,664 annual salary, which includes a $1,000 monthly living allowance; $6,000 domestic travel allowance. Specifically, the Research and Development Directorate, Basic and Applied Sciences Department (J9-BA) is looking to fill one position in each of the following thrust areas: TA2 = Cognitive and Information Science: Description: The basic science of cognitive and information science results from the convergence of computer, information, mathematical, network, cognitive, and social science. This research thrust expands our understanding of physical and social networks and advances knowledge of adversarial intent with respect to the acquisition, proliferation, and potential use of WMD. The methods may include analytical, computational or numerical, or experimental means to integrate knowledge across disciplines and improve rapid processing of intelligence and dissemination of information. Education: The candidate should have a Ph.D. in Electrical Engineering/Physics. 
The candidate should have a strong background in power systems and control theory. Knowledge of nuclear weapons effects is a plus. [!] [this has been filled from information science in the past, and applicants with relevant backgrounds from BSEE to cog sci might find the other positions attainable] TA3 = Science for Protection: Description: Basic science for protection involves advancing knowledge to protect life and life-sustaining resources and networks. Protection includes threat containment, decontamination, threat filtering, and shielding of systems. The concept is generalized to include fundamental investigations that reduce consequences of WMD, assist in the restoration of life-sustaining functions and support forensic science. Education: The candidate should have a Ph.D. in Physical Sciences such as electrical engineering, materials science, nuclear physics, solid state physics or a related discipline. A background including coursework or research in nuclear science is desired. TA4 = Science to Defeat WMD: Description: Basic science to defeat WMD involves furthering the understanding of explosives, their detonation and problems associated with accessing the target WMDs. This research thrust includes the creation of new energetic molecules/materials that enhance the defeat of WMDs; the improvement of modeling and simulation of these materials and of various phenomena that affect success and estimate the impact of defeat actions; and the investigation of novel methods that may yield order-of-magnitude improvements in energy and energy release rate. Education: The candidate should have a PhD in Material Science, Chemical Engineering, Chemistry, Physics, Chemical Physics, Computational Physics, Mechanical Engineering, or Materials Engineering. 
TA5 = Science to Secure WMD: Description: Basic science to support securing WMD includes: (a) environmentally responsible innovative processes to neutralize chemical, biological, radiological, nuclear, or explosive (CBRNE) materials and components; (b) discovery of revolutionary means to secure components and weapons; and (c) studies of scientific principles that lead to novel physical or other tags and methods to monitor compliance and disrupt proliferation pathways. The identification of basic phenomena that provide verifiable controls on materials and systems also helps arms control. Education: The candidate should have a Ph.D. in one of the fields of physical or life sciences. Jan Mahar Sturdevant, jbm18 at psu.edu Professor of Practice College of IST, Penn State **************************************************************** 32. PhD program, Applied Cognitive and Brain Sciences (ACBS), Drexel [closed, but will be interesting next year] From: Chris Sims To: "cvnet at mail.ewind.com" , comp-neuro at neuroinf.org, visionlist at visionscience.com The Applied Cognitive and Brain Sciences (ACBS) program at Drexel University invites applications for Ph.D. students to begin in the Fall of 2014. Faculty research interests in the ACBS program span the full range from basic to applied science in Cognitive Psychology, Cognitive Neuroscience, and Cognitive Engineering, with particular faculty expertise in computational modeling and electrophysiology. Accepted students will work closely with their mentor in a research-focused setting, housed in a newly-renovated, state-of-the-art facility featuring spacious graduate student offices and collaborative workspaces. 
Graduate students will also have the opportunity to collaborate with faculty in Clinical Psychology, the School of Biomedical Engineering and Health Sciences, the College of Computing and Informatics, the College of Engineering, the School of Medicine, and the University's new Expressive and Creative Interaction Technologies (ExCITe) Center. Specific faculty members seeking graduate students, and possible research topics are below. * Chris Sims, Drexel Laboratory for Adaptive Cognition http://www.pages.drexel.edu/~crs346/ - Visual memory and perceptual expertise - Decision-making under uncertainty and learning from feedback - Sensorimotor control and coordination - Computational models of cognition * Dan Mirman, Language and Cognitive Dynamics Laboratory http://www.danmirman.org/ - Recognition, comprehension, and production of spoken words - Organization and processing of semantic knowledge - Computational models of brain and behavior * John Kounios, Creativity Research Laboratory https://sites.google.com/site/johnkounios/laboratory - Cognitive psychology/cognitive neuroscience, focusing on human memory, problem solving, intelligence, and creativity - Specialization in electrophysiological methods (EEG, ERP), and other behavioral and neuroimaging methods (e.g., fMRI) For a full list of faculty members in the ACBS http://www.drexel.edu/psychology/academics/graduate/acbs/faculty/ Drexel University is located in the University City and Center City neighborhoods of Philadelphia, a major metropolitan area with numerous cultural, medical, educational, and recreational opportunities, as well as easy access via high speed rail to New York City, Washington, D.C., and surrounding areas of the Northeast Corridor. Drexel University is an Equal Opportunity/Affirmative Action Employer. The College of Arts and Sciences is especially interested in qualified students who can contribute to the diversity and excellence of the academic community. 
To Apply: Applications are now being accepted; the closing deadline is Dec 01, 2013. For complete application instructions, please see the following website: http://www.drexel.edu/psychology/academics/graduate/acbs/application/ Chris R. Sims, Ph.D. Chris.Sims at drexel.edu Applied Cognitive & Brain Sciences Department of Psychology Drexel University www.pages.drexel.edu/~crs346/ (215) 553-7170 **************************************************************** -30- From muftimahmud at gmail.com Sun Jan 5 13:53:50 2014 From: muftimahmud at gmail.com (Mufti Mahmud) Date: Sun, 5 Jan 2014 19:53:50 +0100 Subject: Connectionists: CFP: 1st International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT 2014): 10 - 12 April 2014. Message-ID: 1st International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT 2014): 10 - 12 April 2014 The 1st International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT 2014, 10 - 12 April 2014) aims to bring together leading academic scientists, researchers and scholars to exchange and share their experiences and research results about all aspects of Electrical, Electronics, Communication Engineering and IT, and to discuss the practical challenges encountered and the solutions adopted. The conference is jointly organized by the Department of Electrical, Electronic and Communication Engineering of the Military Institute of Science and Technology, Mirpur, Dhaka, and the Institute of Information Technology of Jahangirnagar University, Savar, Dhaka-1342. Prospective authors are invited to submit original technical papers for presentation at the conference and publication in the conference proceedings. All submitted papers will be peer reviewed. Accepted and registered papers will be included in the conference proceedings and will also be submitted to IEEE Xplore/Springer Digital Library. 
Scope and Topics: The 1st International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT) aims to be a primary venue for researchers and industry practitioners to discuss open problems, new research directions, and real-world case studies on all aspects of computer and information technology. iCEEiCT is soliciting original, previously unpublished and high quality papers addressing research challenges and advances spanning the multidisciplinary aspects of information technology, computing science and computer engineering. The topics of the conference include, but are not limited to: • Algorithms and Computation • Artificial Intelligence and Machine Learning • Software Engineering • Bio-informatics • Biomedical Engineering • Circuits and Systems • RF and Wireless Communications Systems • Communication Networks • Computer Animation and Games • Computer Architecture, Reconfigurable Computing • Embedded Systems • Computer Graphics, Virtual Reality • Multimedia Communication • Computer Networks and Data Communication • Computer Security • Control Theory and Applications • Database Systems • Digital Signal and Image Processing • Distributed and Cloud Computing • Energy Conversion • High Voltage Engineering and Protection • Information Technology • Internet and Web Applications • Microwave Engineering • Radar Engineering • Multimedia Systems and Applications • Nanodevices, Nanotechnology, and MEMS • Optical Communication and Networks • Photonics • Power Electronics and Drives • Renewable Energy • Robotics and Automation • Satellite Navigation • Security & Cryptography • Smart Power Grid • Mobile Computing • Wireless Sensor Network • VLSI Design and Fabrication Important Dates Submission Deadline: January 31, 2014 Acceptance Notification: March 1, 2014 Camera Ready: March 21, 2014 Registration: March 31, 2014 Contact Information: iCEEiCT Secretariat Department of Electrical, Electronic and Communication Engineering (EECE) Military Institute of Science and Technology 
Mirpur, Dhaka, Bangladesh Phone: +880-2-8000324, +880-2-8000434 Fax: +880-2-9011311 Website: http://www.iceeict.org/ Email: info at iceeict.org -- Mufti Mahmud, PhD Marie Curie Research Fellow Theoretical Neurobiology & Neuroengineering Lab University of Antwerp T6.63, Universiteitsplein 1 2610 - Wilrijk, Belgium Lab: +32 3 265 2610 http://www.muftimahmud.co.nr/ & Assistant Professor (on leave) Institute of Information Technology Jahangirnagar University Savar, 1342 - Dhaka, Bangladesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From ffang at pku.edu.cn Sat Jan 4 20:56:02 2014 From: ffang at pku.edu.cn (Fang Fang) Date: Sun, 5 Jan 2014 09:56:02 +0800 Subject: Connectionists: Psychology Faculty Positions at Peking University, Beijing, China Message-ID: <2014010509560177615411@pku.edu.cn> Apologies for any cross posting Psychology Faculty Positions at Peking University, Beijing, China The Department of Psychology at Peking University invites applications for multiple faculty positions in all areas of psychology. Positions are open at all ranks, from assistant and associate to full professor. Peking University is one of the top research universities in China. The IDG/McGovern Institute for Brain Research was recently established at Peking University and is hosted by the Psychology Department. Research facilities available include 3T MRI scanners, TMS, tDCS, fNIRS, high-density EEG, eye-trackers, and several fully equipped labs for cognitive psychology, industrial and organizational psychology, social psychology, clinical psychology and systems neuroscience. A 7T MRI scanner and an MEG system will be installed soon. Applicants should have at least a Ph.D. at the time of appointment and show strong evidence of outstanding research achievement and potential. The department will provide a supportive and stimulating environment and an internationally competitive start-up package and salary to successful applicants.
Successful applicants working in areas of brain and cognitive sciences may also obtain strong support from the IDG/McGovern Institute for Brain Research and the Center for Life Sciences. Informal enquiries and applications, in the form of a curriculum vitae, research statement, and three recommendation letters, should be sent to Dr. Sheng Li (sli at pku.edu.cn). Non-Chinese citizens are also encouraged to apply. For more information, please visit the websites of the Psychology Department (http://www.psy.pku.edu.cn/en/), the IDG/McGovern Institute for Brain Research (http://mgv.pku.edu.cn/?co=index&ac=index&catalog=enidx), and the Center for Life Sciences (http://www.cls.edu.cn/english/). -------------- Fang Fang, Ph.D. Chang Jiang Distinguished Professor Chair, Department of Psychology, Executive Associate Director, IDG/McGovern Institute for Brain Research Peking University Beijing, P.R.China Tel & Fax: +86-10-62756932 Email: ffang at pku.edu.cn Web: www.psy.pku.edu.cn/en/fangfang.html From erik at oist.jp Sun Jan 5 19:39:11 2014 From: erik at oist.jp (Erik De Schutter) Date: Mon, 6 Jan 2014 09:39:11 +0900 Subject: Connectionists: Okinawa Computational Neuroscience Course 2014: Application open Message-ID: <6A7282C4-9C1E-4A50-AE57-4855CE4AD416@oist.jp> OKINAWA COMPUTATIONAL NEUROSCIENCE COURSE 2014 Methods, Neurons, Networks and Behaviors June 16 - July 3, 2014 Okinawa Institute of Science and Technology, Japan https://groups.oist.jp/ocnc The aim of the Okinawa Computational Neuroscience Course is to provide opportunities for young researchers with theoretical backgrounds to learn the latest advances in neuroscience, and for those with experimental backgrounds to gain hands-on experience in computational modeling. We invite graduate students and postgraduate researchers to participate in the course, held from June 16th through July 3rd, 2014 at an oceanfront seminar house of the Okinawa Institute of Science and Technology Graduate University.
Applications are through the course web page (https://groups.oist.jp/ocnc) only; they will close February 9th, 2014. Applicants are required to propose a project at the time of application. Applicants will receive confirmation of acceptance in March. As in preceding years, OCNC will be a comprehensive three-week course covering single neurons, networks, and behaviors, with ample time for student projects. The first week will focus exclusively on methods, with hands-on tutorials during the afternoons, while the second and third weeks will have lectures by international experts. We invite those who are interested in integrating experimental and computational approaches at each level, as well as in bridging different levels of complexity. There is no tuition fee. The sponsor will provide lodging and meals during the course and may support travel for those without funding. We hope that this course will be a good opportunity for theoretical and experimental neuroscientists to meet each other and to explore the attractive nature and culture of Okinawa, the southernmost island prefecture of Japan.

Invited faculty:
- Upinder Bhalla (NCBS, India)
- Erik De Schutter (OIST)
- Kenji Doya (OIST)
- Tomoki Fukai (RIKEN, Japan)
- Bernd Kuhn (OIST)
- Javier Medina (University of Pennsylvania, USA)
- Abigail Morrison (Forschungszentrum Jülich, Germany)
- Yael Niv (Princeton University, USA)
- Tony Prescott (University of Sheffield, UK)
- Magnus Richardson (University of Warwick, UK)
- Bernardo Sabatini (Harvard University, USA)
- Ivan Soltesz (UC Irvine, USA)
- Greg Stephens (OIST)
- Greg Stuart (Eccles Institute of Neuroscience, Australia)
- Josh Tenenbaum (MIT, USA)
- Jeff Wickens (OIST)
- Yoko Yazaki-Sugiyama (OIST)

From y-inoue at hi.is.uec.ac.jp Sun Jan 5 21:39:52 2014 From: y-inoue at hi.is.uec.ac.jp (Yasuyuki Inoue) Date: Mon, 06 Jan 2014 11:39:52 +0900 Subject: Connectionists: [Deadline Extended] CFP Neural Networks: Special Issue on Communication and Brain Message-ID: <52CA1778.8080706@hi.is.uec.ac.jp> Dear Colleagues, The submission deadline of manuscripts for the Special Issue on "Communication and Brain" of the journal Neural Networks is extended to January 31, 2014. We cordially invite you to submit a paper to this special issue. Please visit the following site for details. http://www.journals.elsevier.com/neural-networks/call-for-papers/communication-brain-emergent-functions/ Sincerely, Yutaka Sakaguchi Guest Editor of Special Issue on "Communication and Brain", Neural Networks From sebastian.risi at gmail.com Mon Jan 6 14:49:42 2014 From: sebastian.risi at gmail.com (Sebastian Risi) Date: Mon, 6 Jan 2014 20:49:42 +0100 Subject: Connectionists: CFP: Generative and Developmental Systems (GDS) track at GECCO 2014 Message-ID: *************************************************************************** *** CALL FOR PAPERS *** 2014 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE (GECCO-2014) *** Generative and Developmental Systems (GDS) Track *** July 12-16, 2014 in Vancouver, BC, Canada *** Organized by ACM SIGEVO *** http://www.sigevo.org/gecco-2014 ************************************************************************* Deadlines coming up - required abstracts due Jan 15, papers due Jan 29 - no extensions this year! We invite you to submit your paper to the Generative and Developmental Systems (GDS) track at GECCO 2014. The focus of the GDS track is making artificially evolved systems scale to high complexity, with work ranging from biologically inspired approaches to automated engineering design. Each paper submitted to the GDS track will be reviewed by experts in the field.
The size and prestige of the GECCO conference will allow many researchers to learn about your work, both at the conference and via the proceedings (GECCO has the highest impact rating of all conferences in the field of Evolutionary Computation and Artificial Life). TRACK DESCRIPTION --------------------------- As artificial systems (of hardware, software and networks) continue to grow in size and complexity, the engineering traditions of rigid top-down planning and control are reaching the limits of their applicability. In contrast, biological evolution is responsible for the apparently unbounded complexity and diversity of living organisms. Yet, over 150 years after Darwin's and Mendel's work, and the subsequent "Modern Synthesis" of evolution and genetics, the developmental process that maps genotype to phenotype is still poorly understood. Understanding the evolution of complex systems - large sets of elements interacting locally and giving rise to collective behavior - will help us create a new generation of truly autonomous and adaptive artificial systems. The Generative and Developmental Systems (GDS) track seeks to unlock the full potential of in silico evolution as a design methodology that can "scale up" to systems of great complexity, meeting our specifications with minimal manual programming effort. Both qualitative and quantitative advances toward this long-term goal will be welcomed. Indirect and open-ended representations: The genotype is more than the information needed to produce a single individual. It is a layered repository of many generations of evolutionary innovation, shaped by two requirements: to be fit in the short term, and to be evolvable over the long term through its influence on the production of variation. 
"Indirect representations" such as morphogenesis or string-rewriting grammars, which rely on developmental or generative processes, may allow long-term improvements to the "genetic architecture" via accumulated layers of elaboration, and emergent new features. In contrast, "direct representations" are not capable of open-ended elaboration because they are restricted to predefined features.

Complex environments encourage complex phenotypes: While complex genotypes may not be required for success in simple environments, they may enable unprecedented phenotypes and behaviors that can later successfully invade new, uncrowded niches in complex environments; this can create pressure toward increasing complexity over the long term. Many factors may affect environmental (hence genotypic) complexity, such as spatial structure, temporal fluctuations, or competitive co-evolution.

More is more: Today's typical numbers of generations, sizes of populations, and components inside individuals are still too small. Just like physics needs higher-energy accelerators and farther-reaching telescopes to understand matter and space-time, evolutionary computation needs a boost in computational power to understand the generation of complex functionality. Biological evolution involved 4 billion years and untold numbers of organisms. Nature could afford to be "wasteful", but we cannot. We expect that datacenter-scale computing power will be applied in the future to produce artificially evolved artifacts of great complexity. How will we apply such resources most efficiently to "scale up" to high complexity?

How should we measure evolved complexity?: The GDS track has recently added a new focus: defining quantitative metrics of evolved complexity. (Which is more complex - a mouse, or a stegosaurus?) The evolutionary computing community is badly in need of such metrics, which may be theoretical (e.g., Kolmogorov complexity) or more practical.
Ideally, such metrics will be applicable across multiple problem domains and genetic architectures; however, any efforts will be welcomed. We encourage authors to submit papers on these quantitative metrics, which will be given special attention by the track chairs this year. The GDS track invites all papers addressing open-ended evolution, including, but not limited to, the areas of:
- artificial development, artificial embryogeny
- evo-devo robotics, morphogenetic robotics
- evolution of evolvability
- gene regulatory networks
- grammar-based systems, generative systems, rewriting systems
- indirect mappings, compact encodings, novel representations
- morphogenetic engineering
- neural development, neuroevolution, augmenting topologies
- synthetic biology, artificial chemistry
- spatial computing, amorphous computing
- competitive co-evolution (arms races)
- complex, spatially structured, and dynamically changing environments
- diversity preservation, novelty search
- efficiently "scaling up" to large numbers of generations, individuals, and internal components
- measures of evolved complexity (theoretical, or practical)

VENUE
-----------
The track and conference will be held in Vancouver, BC, Canada.

IMPORTANT DATES
---------------------------
- Abstract submission: January 15, 2014 (required, new for 2014!)
- Submission of full papers: January 29, 2014 (NO extensions this year)
- Notification of paper acceptance: March 12, 2014
- Camera ready submission: April 14, 2014
- Advance registration: May 2, 2014
- Conference: July 12-16, 2014 in Vancouver, BC, Canada

FOR MORE INFORMATION:
-----------------------------------------
Please see the GDS 2014 website, http://www.mepalmer.net/gds2014, or email the GDS co-chairs, Michael Palmer (mepalmer at charles.stanford.edu) and Sebastian Risi (sebastian.risi at gmail.com). Or you can join the GDS Google Group, https://groups.google.com/forum/#!forum/gds-gecco, to see the latest updates. -- Dr.
Sebastian Risi Assistant Professor IT University of Copenhagen, Room 5D08 Rued Langgaards Vej 7, 2300 Copenhagen, Denmark email: sebastian.risi at gmail.com, web: http://www.cs.ucf.edu/~risi/ mobile: +45-50250355, office: +45-7218-5127 From ecai2014 at guarant.cz Mon Jan 6 08:04:21 2014 From: ecai2014 at guarant.cz (ecai2014) Date: Mon, 6 Jan 2014 13:04:21 +0000 Subject: Connectionists: ECAI 2014 workshops / final call for papers Message-ID: <1aa90f17449b453ebc3eee8b456d8e2f@GUARANT-EX.guarant.ad> ECAI'14 Final Call for Papers The Twenty-first European Conference on Artificial Intelligence 18-22 August 2014, Prague, Czech Republic http://www.ecai2014.org The ECAI 2014 Organizing Committee invites proposals for workshops to be held in conjunction with the conference. The workshops will be scheduled on August 18 and 19, 2014. Proposals by all members of the international AI community are welcome. There is no restriction regarding topics, as long as there is a clear relevance to ECAI. We prefer a workshop programme that is as varied as possible. Most workshops will follow the classical format of presentations of peer-reviewed papers followed by discussion, but other formats (e.g., AI competitions) and entirely new ideas are also welcome. Whatever the format, all workshops should be interactive events and ample time should be allocated to discussion. The typical duration for a workshop is one full day, but two-day or shorter workshops can also be accommodated. If you are considering proposing a workshop or if you have any questions, please do not hesitate to get in touch with the ECAI-2014 Workshop Chairs, Marina De Vos (mdv at cs.bath.ac.uk) and Karl Tuyls (k.tuyls at liverpool.ac.uk). The deadline for submission of a workshop proposal is 12 January 2014.
Detailed submission instructions and further information can be found at http://www.ecai2014.org/ ECAI Workshop Chairs Marina De Vos and Karl Tuyls From alessandra.sciutti at gmail.com Tue Jan 7 04:36:08 2014 From: alessandra.sciutti at gmail.com (Alessandra Sciutti) Date: Tue, 7 Jan 2014 10:36:08 +0100 Subject: Connectionists: 2nd Call for Papers - WORKSHOP at HRI 2014 - HRI: a bridge between Robotics and Neuroscience Message-ID: <52cbca80.45250e0a.2ceb.ffffc12d@mx.google.com> Apologies for cross-posting ========================================================================= Workshop "HRI: a bridge between Robotics and Neuroscience", at HRI 2014, Bielefeld, DE ========================================================================= March 3, 2014 Submission deadline: January 13, 2014 Notification of acceptance: January 27, 2014 Website: http://www.macs.hw.ac.uk/~kl360/HRI2014W/ ========================================================================= INVITED SPEAKERS: ----------------- Prof. Malinda Carpenter, University of St Andrews, on research leave at the Max Planck Institute for Evolutionary Anthropology Prof. Luciano Fadiga, Italian Institute of Technology Prof. Giulio Sandini, Italian Institute of Technology Prof. Brian Scassellati, Yale University A fundamental challenge for robotics is to transfer natural human social skills to interaction with a robot. At the same time, neuroscience and psychology are still investigating the mechanisms behind the development of human-human interaction. HRI therefore becomes an ideal contact point for these different disciplines, as the robot can join these two research streams by serving different roles. From a robotics perspective, the study of interaction is used to implement cognitive architectures and develop cognitive models, which can then be tested in real-world environments.
From a neuroscientific perspective, robots could represent an ideal stimulus for establishing an interaction with human partners in a controlled manner, making it possible to study quantitatively the behavioral and neural underpinnings of both cognitive and physical interaction. Ideally, the integration of these two approaches could lead to a positive loop: the implementation of new cognitive architectures may raise new interesting questions for neuroscientists, and the behavioral and neuroscientific results of human-robot interaction studies could validate or give new inputs to robotics engineers. However, the integration of two different disciplines is always difficult, as often even similar goals are masked by differences in language or methodology across fields. The aim of this workshop is to provide a venue for researchers of different disciplines to discuss and present possible points of contact, to address the issues, and to highlight the advantages of bridging the two disciplines in the context of the study of interaction.

LIST OF TOPICS
-----------------
- Human Robot Interaction
- Cognitive Models
- Development of Social Cognition
- Neural bases of Interaction
- Cognitive and Physical Interaction
- Social Signals

FORMAT AND SUBMISSIONS
---------------------
The workshop will consist of invited keynotes and time for discussions, and will also feature a poster session. Prospective participants are invited to submit full papers (8 pages) or short papers (2 pages). Submissions will be accepted in PDF format only, using the HRI formatting guidelines and including author names. Authors should send their papers to hri2014workshop at gmail.com. All submissions will be peer-reviewed. Time permitting, selected contributions may have the opportunity to be presented in the oral session. The other selected contributions will be presented as posters during a dedicated session. Accepted publications will be published on our workshop web page.
Depending on the overall quality of the contributions, we might consider proposing a Special Issue to a journal in the near future. Authors will have the option of opting out of having their papers included on the website. Information on the opt-out option will be provided along with the acceptance notice for the papers. In addition to their submission, participants have to answer one of the following questions:
- Which outcomes should neuroscientific research provide to be useful to robotics? And vice versa? Can descriptive results be enough, or is modelling necessary for a positive communication to exist?
- How can robotics research contribute to/influence neuroscience and/or psychology? Although there are many robotics studies inspired by evidence obtained in neuroscience and/or psychology, the impact of robotics on neuroscience or psychology is less evident, especially for modelling research. What can roboticists do to cause a paradigm shift?
- Where is the bridge between robotics and neuroscience most useful, and where is it not (or less so)? E.g., very useful for social robotics, less useful for algorithm design (or not?)
- How should this bridge be built? At the level of the single individual (i.e., a person with a multidisciplinary background), at the level of a group (i.e., a group of people with different backgrounds), at the level of a department (with different labs meeting once in a while), or a mix of the above?
Time permitting, these questions and answers will be used to "drive" a final discussion.

IMPORTANT DATES
-------------
Submission deadline: January 13, 2014
Notification of acceptance: January 27, 2014
March 3, 2014: Workshop at HRI 2014

ORGANIZERS
----------
- Alessandra Sciutti, Istituto Italiano di Tecnologia
- Katrin Solveig Lohan, Heriot-Watt University
- Yukie Nagai, Osaka University

From terry at salk.edu Tue Jan 7 12:53:04 2014 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 07 Jan 2014 09:53:04 -0800 Subject: Connectionists: NEURAL COMPUTATION - February, 2014 In-Reply-To: Message-ID: Neural Computation - Contents -- Volume 26, Number 2 - February 1, 2014 Available online for download now: http://www.mitpressjournals.org/toc/neco/26/2 ----- Article Likelihood Methods for Point Processes With Refractoriness Luca Citi, Demba Ba, Emery Brown, and Riccardo Barbieri Letters Functional Identification of Spike-Processing Neural Circuits Aurel A. Lazar, Yevgeniy B. Slutskiy A New Class of Metrics for Spike Trains Catalin Vasile Rusu, Razvan Valentin Florian Robust Common Spatial Filters With a Maxmin Approach Motoaki Kawanabe, Wojciech Samek, Klaus-Robert Müller, and Carmen Vidaurre Improved Sparse Coding Under the Influence of Perceptual Attention Ashkan Amiri, Simon Haykin Spontaneous Clustering via Minimum Gamma-divergence Akifumi Notsu, Osamu Komori, and Shinto Eguchi A Novel Iterative Algorithm for Computing Generalized Inverse Youshen Xia, Tianping Chen, and Jinjun Shan ------------ ON-LINE -- http://www.mitpressjournals.org/neuralcomp

SUBSCRIPTIONS - 2014 - VOLUME 26 - 12 ISSUES
                  USA      Others   Electronic Only
Student/Retired   $70      $193     $65
Individual        $124     $187     $115
Institution       $1,035   $1,098   $926
Canada: Add 5% GST

MIT Press Journals, 238 Main Street, Suite 500, Cambridge, MA 02142-9902 Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ------------ From c.clopath at imperial.ac.uk Tue Jan 7 13:08:52 2014 From: c.clopath at imperial.ac.uk (Claudia Clopath) Date: Tue, 7 Jan 2014 18:08:52 +0000 Subject: Connectionists: PhD opportunities at Imperial College London in the Clopath lab Message-ID: *PhD opportunities in Computational Neuroscience.* Computational Neuroscience Laboratory Headed by Dr.
Claudia *Clopath* Department of Bioengineering Imperial College London -----------------Requirements:----------------- The Computational Neuroscience Laboratory, headed by Dr. Claudia Clopath, is looking for a talented PhD student interested in working in the field of computational neuroscience, specifically addressing questions of *learning and memory*. The ideal candidate has a strong mathematical, physical or engineering background (or equivalent), and a keen interest in biological and neural systems. Demonstrated programming skills are a plus. -----------------Research topic:----------------- Learning and memory are among the most fascinating topics in neuroscience, yet our understanding of them is only beginning. Learning is thought to change the connections between the neurons in the brain, a process called *synaptic plasticity*. Using mathematical and computational tools, it is possible to model synaptic plasticity across different time scales, which helps us understand how different types of memory are formed. The PhD candidate will work on building such models of synaptic plasticity and studying the functional role of synaptic plasticity in artificial neural networks. They will have the opportunity to collaborate with experimental laboratories that study connectivity changes and behavioural learning. ----------------- The lab:----------------- The Computational Neuroscience Laboratory is very young and dynamic, and publishes in prestigious journals such as Nature and Science. It is part of the Department of Bioengineering, which conducts state-of-the-art multidisciplinary research in biomechanics, neuroscience and neurotechnology. The lab is at *Imperial College London*, the 3rd ranked university in Europe, 10th in the World, and is located in the city centre of London.
More information can be found at: http://www.bg.ic.ac.uk/research/c.clopath/ ----------------- How to apply:----------------- Candidates should send a single pdf file, consisting of a 1-page motivation letter, CV, and names and contact information of two references, to clopathlab.imperial at gmail.com, with the subject containing 'PHD2014'. The application *deadline is Jan 20th*. Skype interviews will be held on Jan 22nd. ----------------- Funding:----------------- The candidate will apply for an Imperial College PhD scholarship. The deadline for the funding application is Feb 3rd. More information can be found at: http://www3.imperial.ac.uk/researchstrategy/funding/phdscholarships From v.steuber at herts.ac.uk Tue Jan 7 09:40:41 2014 From: v.steuber at herts.ac.uk (Steuber, Volker) Date: Tue, 7 Jan 2014 14:40:41 +0000 Subject: Connectionists: EvoSN: postdoctoral position Message-ID: <18EF08266D889C41A14D1099C7102CE2BDF6B6B1AF@UH-MAILSTOR.herts.ac.uk> A 2-year research fellow position is available in the international (Polish-UK) project EvoSN (Evolution of Spiking Neural Networks). This full-time research post will allow the post holder to pursue research at the intersection of several fields of biology (computational/systems neuroscience, theoretical/evolutionary/systems biology) with computer science (biologically-inspired computation, evolutionary algorithms, design of complex systems) and robotics. EvoSN will use a software platform for the evolution of regulatory networks created by Prof. Borys Wrobel and his collaborators at the Adam Mickiewicz University in Poznan, Poland, to answer fundamental questions about the computational properties of spiking neural networks, formulated thanks to the extensive experience in the field of computational neuroscience of the group led by Dr. Volker Steuber at the University of Hertfordshire.
The post holder will spend a substantial part of the fellowship in the UK and Poland, and may also make short stays at other collaborating institutions. The position is to be filled as soon as possible. The period of appointment is 2 years (with intermediary evaluations), and is potentially extensible. Applicants for the position should have a strong postgraduate degree (MSc or PhD) in a quantitative research-oriented discipline, such as computer science, mathematics or physics. They should have a strong computational background; programming skills (C++ or Java, Python, Matlab) are essential to the position. Further requirements for candidates include proficiency in English, both in writing and in speaking (English is the working language in both labs). The salary will correspond to the levels established for the University of Hertfordshire and will be based on experience. Further enquiries should be directed to Borys Wróbel (wrobel at evosys.org). From avellido at lsi.upc.edu Wed Jan 8 04:56:45 2014 From: avellido at lsi.upc.edu (Alfredo Vellido) Date: Wed, 08 Jan 2014 10:56:45 +0100 Subject: Connectionists: Last CFP: IEEE BHI'14: Special session "Towards interpretable ML applications in biomedicine and health" In-Reply-To: <52A60063.50305@lsi.upc.edu> References: <529893FB.3040304@lsi.upc.edu> <52A60063.50305@lsi.upc.edu> Message-ID: <52CD20DD.4090006@lsi.upc.edu> *ONE WEEK FROM TODAY* *** Apologies for cross-posting *** Dear colleagues, We have an UPDATED DEADLINE (*15th of January*) for the special session: "Towards interpretable Machine Learning applications in biomedicine and health" at: IEEE-EMBS International Conference on Biomedical and Health Informatics 2014 (IEEE BHI'2014) Valencia, Spain 1-4 June 2014 Web site: http://bhi.embs.org/2014 ================================ Important Dates Deadline for paper submission: 15th January 2014 Notification of acceptance: 3rd March 2014. Deadline for camera-ready papers: 21st March 2014.
Session goal
===========
The practical use of machine learning and computational intelligence algorithms in biomedicine and health is sometimes hampered by the limited interpretability of the analytical models; without interpretability, it is difficult to validate models against domain expertise and to explain the extracted knowledge to the user. Model interpretability, a problem that extends to all machine learning fields (classification, prediction, clustering, etc.), is paramount in these application domains. This special session invites contributions on interpretable Machine Learning models: both basic methodology for the interpretation of efficient non-linear models and practical applications in biomedicine and health are welcome.

Topics of Interest
Topics of interest include, but are not restricted to:
- Interpretation of non-linear models, including SVMs and other kernel methods.
- Deep learning.
- Inductive learning, including rule generation from data and interpretation of random forests and tree bagging.
- Graphical models and structure finding.
- Manifolds for nonlinear dimensionality reduction.
- Data visualization.
- Practical applications in biomedicine and health to extract knowledge from Machine Learning models.

Session format and submission
The session will take place during the IEEE BHI 2014 Conference. Only papers in English will be accepted. All papers will go through the normal conference reviewing process. Final papers are limited to 4 pages and must follow the conference instructions as described on the conference website (http://bhi.embs.org/2014/authors/). The session will consist of a limited number of paper presentations. A separate submission procedure has been established for this special session. To submit a paper to this session, a special code is required for paper upload.
To submit a paper for this special session:
- Click "Submit a contribution to BHI 2014" at https://embs.papercept.net/conferences/scripts/start.pl
- Click Submit in the "Special Session Paper" row
- Enter Code 7g259 and complete the rest of the form with the information of your contribution
Looking forward to seeing you in Valencia! José D. Martín, Universitat de València (Spain) Alfredo Vellido, Universitat Politècnica de Catalunya (Spain) Paulo J. G. Lisboa, Liverpool John Moores University (UK) From neumann at cbs.mpg.de Wed Jan 8 05:33:30 2014 From: neumann at cbs.mpg.de (Dr. Jane Neumann) Date: Wed, 08 Jan 2014 11:33:30 +0100 Subject: Connectionists: PhD position - computational modelling In-Reply-To: <52121396.1040909@cbs.mpg.de> References: <52121396.1040909@cbs.mpg.de> Message-ID: <52CD297A.7060800@cbs.mpg.de> Dear colleagues, The *Collaborative Research Center* (CRC) 1052 "Obesity mechanisms" at the Leipzig University Hospital is offering a *PhD studentship in computational modelling* under the supervision of Dr Jane Neumann and Dr Annette Horstmann. Within the project, computational modelling will be used to investigate decision-making and learning in humans by combining genetic, behavioural and magnetic resonance imaging (MRI) data from different modalities. The PhD position will be based at the *Max Planck Institute for Human Cognitive and Brain Sciences* in the beautiful city of *Leipzig*. Both Leipzig's long tradition in conducting neuroscientific research and the ultra-modern equipment at the Institute provide an environment that offers new perspectives in neuroimaging research. Further, the position will be part of the CRC's Integrated Research Training Group. This graduate program offers interdisciplinary qualification in various research methods and transferable skills, and provides support in career planning and in establishing one's own scientific network.
Applicants should hold a Master's degree (or equivalent) in one of the following disciplines: computational or cognitive neuroscience, computer science, mathematics, physics, cognitive science or a related field. Prior experience in the field of computational neuroscience and/or neuroimaging is an advantage. Sound knowledge of statistics and excellent programming skills are essential. A good command of written and spoken English is required of all applicants. Please send your application as a single PDF file to _neumann at cbs.mpg.de_ referring to 'Modelling SFB 1052'. Complete applications include a cover letter, CV, letter(s) of recommendation, and copies of university degrees and additional certificates. Informal enquiries should be made to Dr Jane Neumann (_neumann at cbs.mpg.de_, +49 (0) 341 99 40 26 21). The salary is based on the German E 13 TV-L salary scale. In order to increase the proportion of female staff members, applications from female scientists are particularly encouraged. Among equally qualified candidates, disabled applicants will be given preference. Deadline for application: open until the position is filled. From areynolds2 at vcu.edu Tue Jan 7 21:22:50 2014 From: areynolds2 at vcu.edu (Angela M Reynolds) Date: Tue, 7 Jan 2014 21:22:50 -0500 Subject: Connectionists: REGISTRATION APPLICATION DEADLINE 1/14/2014 11:59 PM (EST) Conference & Bard Ermentrout's 60th Birthday: Nonlinear Dynamics and Stochastic Methods Message-ID: Nonlinear dynamics and stochastic methods: from neuroscience to other biological applications March 10-12, 2014 Pittsburgh, PA This conference on nonlinear dynamics and stochastic methods will bring together a mix of senior and junior scientists to report on theoretical methods that have proved successful in mathematical neuroscience, and to encourage their dissemination and application to modeling in computational medicine and other biological fields.
*This conference will coincide with a celebration of G. Bard Ermentrout's sixtieth birthday.* The invited speakers will present on mathematical topics such as dynamical systems, multi-scale modeling, phase resetting curves, pattern formation and statistical methods. The mathematical tools will be demonstrated in the context of the following main topics: i) Rhythms in biological systems; ii) The geometry of systems with multiple time scales; iii) Pattern formation in biological systems; iv) Stochastic models: statistical methods and mean field approximations. The conference runs from March 10-12, 2014 at the University of Pittsburgh, Pittsburgh, PA. Travel support may become available for young investigators. Currently, this conference is partially funded by the Mathematical Biosciences Institute and the University of Pittsburgh. REGISTRATION APPLICATION, ABSTRACT SUBMISSION, AND SCHEDULE: http://homepage.math.uiowa.edu/~rcurtu/conferencePitt2014.htm Important Dates: Travel award application and abstract submission deadline: passed. Registration application deadline January 14th 11:59 pm (EST): Researchers interested in attending the conference should submit a registration application. Attendees will be selected from these applications based on research interests overlapping with the conference theme, and to ensure diversity and breadth of participation by individuals and institutions.
Decisions regarding Registration and Abstracts: January 17th SPONSORS: Department of Mathematics, University of Pittsburgh Mathematical Biosciences Institute National Science Foundation (pending) CONTACT: rodica-curtu at uiowa.edu or areynolds2 at vcu.edu *Confirmed Speakers:* Paul Bressloff (University of Utah) Carson Chow (National Institutes of Health) Sharon Crook (Arizona State University) Jack Cowan (University of Chicago) Jonathan Drover (Cornell Medical College, NYC) Leah Edelstein-Keshet (University of British Columbia, Vancouver - Canada) Roberto Fernandez Galan (Case Western Reserve University) Pranay Goel (Indian Institute of Science, Education and Research, Pune - India) Boris Gutkin (Ecole Normale Superieure/ ENS, Paris - France) Zachary Kilpatrick (University of Houston) Nancy Kopell (Boston University) Cheng Ly (Virginia Commonwealth University) Remus Osan (Georgia State University) George Oster (University of California, Berkeley) John Rinzel (New York University) Jonathan Rubin (University of Pittsburgh) Daniel Simons (University of Pittsburgh) David Terman (Ohio State University) If you have questions now, please contact one of the organizers: Angela Reynolds, areynolds2 at vcu.edu, or Rodica Curtu, rodica-curtu at uiowa.edu. From sam.devlin at york.ac.uk Wed Jan 8 09:00:36 2014 From: sam.devlin at york.ac.uk (Sam Devlin) Date: Wed, 8 Jan 2014 14:00:36 +0000 Subject: Connectionists: 2nd CFP: Adaptive and Learning Agents (ALA) Workshop @ AAMAS 2014 Message-ID: Second Call For Papers: Adaptive and Learning Agents Workshop 2014 (Paris, France) We apologize if you receive more than one copy. Please share with colleagues and students. Paper deadline: JANUARY 22, 2014 ALA 2014: Adaptive and Learning Agents Workshop held at AAMAS 2014 (Paris, France). The ALA workshop has a long and successful history and is now in its sixth edition.
The workshop is a merger of the European ALAMAS and the American ALAg series, which is usually held at AAMAS. Details may be found on the workshop web site: http://swarmlab.unimaas.nl/ala2014/ Adaptive and learning agents, particularly those in a multi-agent setting, are becoming more and more prominent as the sheer size and complexity of many real-world systems grows. How to adaptively control, coordinate and optimize such systems is an emerging multi-disciplinary research area at the intersection of computer science, control theory, economics, and biology. The ALA workshop will focus on agent and multi-agent systems which employ learning or adaptation. The goal of this workshop is to increase awareness of and interest in adaptive agent research, encourage collaboration, and give a representative overview of current research in the area of adaptive and learning agents and multi-agent systems. It aims at bringing together not only scientists from different areas of computer science but also from different fields studying similar concepts (e.g., game theory, bio-inspired control, mechanism design). This workshop will focus on all aspects of adaptive and learning agents and multi-agent systems with a particular emphasis on how to modify established learning techniques and/or create new learning paradigms to address the many challenges presented by complex real-world problems. The topics of interest include but are not limited to: * Novel combinations of reinforcement and supervised learning approaches * Integrated learning approaches that work with other agent reasoning modules like negotiation, trust models, coordination, etc.
* Supervised multi-agent learning * Reinforcement learning (single and multi-agent) * Planning (single and multi-agent) * Reasoning (single and multi-agent) * Distributed learning * Adaptation and learning in dynamic environments * Evolution of agents in complex environments * Co-evolution of agents in a multi-agent setting * Cooperative exploration * Learning to cooperate and collaborate * Learning trust and reputation * Communication restrictions and their impact on multi-agent coordination * Design of reward structure and fitness measures for coordination * Scaling learning techniques to large systems of learning and adaptive agents * Emergent behaviour in adaptive multi-agent systems * Game theoretical analysis of adaptive multi-agent systems * Neuro-control in multi-agent systems * Bio-inspired multi-agent systems * Applications of adaptive and learning agents and multi-agent systems to real world complex systems The workshop will also include a half-day tutorial on multi-agent reinforcement learning. Previous versions of this tutorial were successfully run at EASSS 2004 (the European Agent Systems Summer School), ECML 2005, ICML 2006, EWRL 2008, AAMAS 2009-2013, and ECML 2013 with different collaborators. The ALA 2014 edition will include revised and updated content with a new focus on reward shaping, covering in particular depth difference rewards and potential-based reward shaping. ******************************************************* Submission Details Papers can be submitted through Easychair: https://www.easychair.org/conferences/?conf=ala20140 Submissions may be up to 8 pages in the ACM proceedings format (i.e., the same as AAMAS papers in the main conference track). Accepted work will be allocated time for oral presentation during the one-day workshop. Papers accepted at the workshop will also be eligible for inclusion in a special issue published after the workshop.
******************************************************* * Submission Deadline: January 22, 2014 * Notification of acceptance: February 19, 2014 * Camera-ready copies: March 10, 2014 * Workshop: May 5 or 6, 2014 ******************************************************* -- Sam Devlin Research Associate York Centre for Complex Systems Analysis The University of York Deramore Lane, York, YO10 5GH w: http://www.cs.york.ac.uk/~devlin/ Disclaimer: http://www.york.ac.uk/docs/disclaimer/email.htm From jpezaris at gmail.com Wed Jan 8 10:51:55 2014 From: jpezaris at gmail.com (John Pezaris) Date: Wed, 8 Jan 2014 10:51:55 -0500 Subject: Connectionists: AREADNE 2014 Call for Abstracts Message-ID: CONFERENCE ANNOUNCEMENT -- and -- CALL FOR ABSTRACTS AREADNE 2014 Research in Encoding and Decoding of Neural Ensembles 25-29 June 2014 Nomikos Conference Center Santorini, Greece http://www.areadne.org info at areadne.org INTRODUCTION One of the fundamental problems in neuroscience today is to understand how the activation of large populations of neurons gives rise to the higher-order functions of the brain, including learning, memory, cognition, perception, action and, ultimately, conscious awareness. Electrophysiological recordings in behaving animals over the past forty years have revealed considerable information about what the firing patterns of single neurons encode in isolation, but it remains largely a mystery how collections of neurons interact to perform these functions. Recent technological advances have provided a glimpse into the global functioning of the brain. These technologies include functional magnetic resonance imaging; optical imaging methods, including intrinsic, voltage-sensitive dye, and two-photon imaging; high-density electroencephalography and magnetoencephalography; and multi-microelectrode array electrophysiology.
These technologies have expanded our knowledge of brain functioning beyond the single-neuron level. At the same time, our understanding of how neuronal ensembles carry information has allowed the development of brain-machine interfaces (BMI) to enhance the capabilities of patients with sensory and motor deficits. Knowledge of how neuronal ensembles encode sensory stimuli has made it possible to develop perceptual BMIs for the hearing and visually impaired. Likewise, research in how neuronal ensembles decode motor intentions has resulted in motor BMIs by which people with severe motor disabilities can control external devices. CONFERENCE MISSION First and foremost, this conference is intended to bring scientific leaders from around the world to present their recent findings on the functioning of neuronal ensembles. Second, the meeting will provide an informal yet spectacular setting on Santorini in which attendees can discuss and share ideas outside of the presentations at the conference center. Third, this conference continues our long-term project to form a systems neuroscience research institute within Greece to conduct state-of-the-art research, offer meetings and courses, and provide a center for visiting scientists from around the world to interact with Greek researchers and students. FORMAT AND SPEAKERS The conference will span four days, in morning and early evening sessions. Confirmed speakers include experts in the fields of multi-neuron experiments, theory, and analysis (in alphabetical order): Ken Britten EJ Chichilnisky Mark Churchland Rosa Cossart Allison Doupe Loren Frank Wulfram Gerstner Sonja Gruen Gabriel Kreiman Gilles Laurent Eve Marder Tom Mrsic-Flogel Ole Paulsen Carl Petersen John Pezaris Alexandre Pouget Jennifer Raymond Michael Roukes Philip Sabes Terry Sanger Andreas Tolias Susumu Tonegawa Brian Wandell CALL FOR ABSTRACTS We are currently soliciting abstracts for poster presentation.
Submissions will be accepted electronically, and must be received by 7 March 2014. Automated email acknowledgement of submission will be provided, and manual verification will be made a few days after submission. Notification of acceptance will be provided by 28 March 2014. Please see our on-line Call for Abstracts for additional details and submission templates: http://areadne.org/call-for-abstracts.html ORGANIZING COMMITTEE John Pezaris, Co-Chair Nicholas Hatsopoulos, Co-Chair Yiota Poirazi Andreas Tolias Dora Angelaki Thanos Siapas FOR FURTHER INFORMATION For further information please see the conference web site http://areadne.org or send email to info at areadne.org. -- Dr. J. S. Pezaris AREADNE 2014 Co-Chair Massachusetts General Hospital 55 Fruit Street Boston, MA 02114, USA john at areadne.org From christopher.honey at gmail.com Wed Jan 8 16:34:10 2014 From: christopher.honey at gmail.com (Christopher Honey) Date: Wed, 8 Jan 2014 16:34:10 -0500 Subject: Connectionists: Computational Models of Narrative: the Neuroscience of Narrative (July 31 - Aug 2) Message-ID: On behalf of the organizing committee, I want to put on your radar an upcoming conference on "*Computational Models of Narrative: the Neuroscience of Narrative*", which will take place *July 31 - Aug 2, 2014* in *Quebec City, Canada*. Paper submissions are due April 4, 2014. The goal of this 5th annual workshop is to foster approaches to the science of narrative and story that are compatible with computational and cognitive perspectives. Narratives are powerful social tools that are used across all human cultures for a range of purposes, from moral education and empathy induction to recruiting and galvanizing support for an ideological cause. Understanding how narratives are created, transmitted and interpreted is therefore of interest to a broad swath of academic fields, which provides the opportunity for a rich inter-disciplinary meeting. 
Indeed, the workshop historically attracts a broad interdisciplinary group, including computer scientists, psychologists, cognitive scientists, narratologists, philosophers, and many others. We hope that you will consider joining the discussion! Three types of papers will be accepted: (1) Long Papers (8 pages, excluding references): appropriate for concrete research results, including pilot studies, or studies in progress. (2) Short Papers (4 pages, excluding references): appropriate for a small, focused contribution, a negative result, or an interesting application nugget. (3) Position Papers (2 pages, excluding references): appropriate for discussion of an interesting new idea, identification of important neglected areas or topics, or an opinion piece. More information on the conference can be found at the conference website: http://narrative.csail.mit.edu/cmn14 Hope to see you there! -------------- next part -------------- An HTML attachment was scrubbed... URL: From dubuf at ualg.pt Thu Jan 9 07:34:45 2014 From: dubuf at ualg.pt (Hans du Buf) Date: Thu, 09 Jan 2014 12:34:45 +0000 Subject: Connectionists: Postdoc position, 12 months, deep neural networks Message-ID: <52CE9765.9060200@ualg.pt> The Vision Laboratory is a small group (2 postdocs plus 7 PhD students plus 3 MSc students) which develops models of visual perception. Our V1 models of simple, complex and end-stopped cells run on multi-core CPUs and in real time on GPUs. We are now developing NN hierarchies for object detection and recognition in complex scenes on mobile robots. We need a postdoc who is really specialized in NN architectures. Keywords: hierarchies, redundancy, sparse coding, Gripon-Bessou NNs (a special form of Hamming NNs with cliques in the output layers), learning, CPU and GPU programming, cognitive robotics. Start: preferably March 1st, 2014, or soon after. Duration: 12 months. Remuneration: 1495 euro/month - exempt from taxation! Ample money: for computers and conferences. 
Location: sunny Algarve, Portugal. This is a preliminary announcement. The project funding has been approved, but there is an administrative procedure which must be followed, with specific requirements and deadlines. Nevertheless, interested postdocs and PhD students who will defend their thesis before or in March should contact us as soon as possible. Please send an email with your CV to Prof. Joao Rodrigues (jrodrig at ualg.pt) or to me (dubuf at ualg.pt). Note: we need an NN EXPERT, not someone who has applied some 2/3-layer NN classifier. Regards, Hans -- ======================================================================= Prof.dr.ir. J.M.H. du Buf mailto:dubuf at ualg.pt Dept. of Electronics and Computer Science - FCT, University of Algarve, fax (+351) 289 818560 Campus de Gambelas, 8000 Faro, Portugal. tel (+351) 289 800900 ext 7761 ======================================================================= UALG Vision Laboratory: http://w3.ualg.pt/~dubuf/vision.html ======================================================================= From M.Gillies at gold.ac.uk Fri Jan 10 09:31:56 2014 From: M.Gillies at gold.ac.uk (Marco Gillies) Date: Fri, 10 Jan 2014 14:31:56 +0000 Subject: Connectionists: 2nd CFP Machine Learning, Expressive Movement, Interaction Design, Creative Applications References: <7A197F78-DC85-4CA2-BF4B-1AC636CA57AD@gold.ac.uk> Message-ID: <9418F041-4B81-4FBA-95D4-FC9A7F39F4D3@gold.ac.uk> Dear all, We are happy to announce the following Call for Participation for the multidisciplinary Workshop on Machine Learning, Expressive Movement and Interaction Design.
The workshop is organized at Goldsmiths, London, on April 1-2, 2014 (part of the AISB Symposium) Submission deadline: 21st January 2014 Visit the website: https://www.doc.gold.ac.uk/~mas02mg/GestureWorkshop/?page_id=15 ================================================================= Machine Learning, Expressive Movement, Interaction Design, Creative Applications AISB 2014 Symposium 1-2 April 2014 Call for Participation Submission deadline: 21st January 2014 https://www.doc.gold.ac.uk/~mas02mg/GestureWorkshop/?page_id=15 ================================================================= OVERVIEW Machine Learning (ML) is a set of techniques widely used for data analysis and understanding of complex phenomena. A subset of ML methods that are real time or that look at continuous data have been designed to carry out a wide variety of tasks such as gesture recognition, movement prediction, gesture spotting, animation, social signal processing, and style generation. These in turn can be applied in diverse areas including novel human-computer interaction methods, human-robot interaction, musical performance, digital arts and entertainment. All of these application areas involve specific constraints in the design of ML methods, regarding movement complexity (e.g. from symbols to continuous gestures), learning procedure (e.g. from few examples) and real-time inference. PRESENTATION The workshop will take the form of a symposium addressing the key challenges of the design of ML methods for expressive movement, interaction design and related fields. We wish to emphasize how the application contexts contribute to shaping the methods used, the learning strategies, and the tasks imagined. We will consider both computational challenges and interaction design issues. We will draw on advances in interactive art and music, fields which provide many relevant use cases for real-time, continuous, and "expressive" gestural interactions.
This 2-day symposium will consist of 3 sessions: presentations, demos, and an evening performance. Researchers from industry, academia, and the arts with an interest in the machine learning of gestural input are invited to submit one of the following types of position papers: 1.) Technical innovation in machine learning & gesture analysis 2.) Design case studies of gestural interaction systems 3.) Proposals for live demos of functional prototypes SUBMISSION To submit, please email your submission paper (PDF) to mlworkshop at goldsmithsdigital.com Note that supporting videos are encouraged (send as URLs). Papers should be a maximum of 4 pages in AISB format. Template files for submission can be found at: http://www.aisb.org.uk/convention/aisb08/download.html Selection will be based on submission quality and relevance to the workshop topic. Each paper will be peer-reviewed by program committee members. At least one author of each accepted submission must register for AISB. Note that a special issue in a journal will be considered and discussed during the workshop. IMPORTANT DATES - Submission deadline: 21 January 2014 - Notification of acceptance: 20 February 2014 - Workshop: 1-2 April 2014 ORGANISERS - Frédéric Bevilacqua, IRCAM, France - Baptiste Caramiaux, Goldsmiths, University of London, UK - Rebecca Fiebrink, Goldsmiths, University of London, UK - Marco Gillies, Goldsmiths, University of London, UK - Atau Tanaka, Goldsmiths, University of London, UK
From kerstin at nld.ds.mpg.de Fri Jan 10 12:19:11 2014 From: kerstin at nld.ds.mpg.de (Kerstin Mosch) Date: Fri, 10 Jan 2014 18:19:11 +0100 Subject: Connectionists: Job posting for Postdocs: Bernstein Fellows - BCCN Göttingen, Germany Message-ID: <52D02B8F.5060407@nld.ds.mpg.de> Dear Computational Neuroscience Community, Please find below our job posting for postdoctoral positions at the Bernstein Center for Computational Neuroscience (BCCN) Göttingen, Germany. Kind regards, Kerstin Mosch Bernstein Center for Computational Neuroscience Göttingen *Bernstein Fellows* *Independent Postdoctoral Research Positions in Computational Neuroscience* We invite applications for Independent Research Positions in Computational Neuroscience at the Bernstein Center for Computational Neuroscience (BCCN) in Göttingen, Germany. Göttingen is a center of neuroscience in Europe, hosting numerous internationally recognized neuroscience research institutions, including three Max Planck Institutes, the European Neuroscience Institute, the German Primate Center, the Center for Systems Neuroscience (CSN) and the Center for Nanoscale Microscopy and Molecular Physiology of the Brain (CNMPB). The BCCN integrates theoretical and experimental research groups from these institutions to foster interdisciplinary research in computational neuroscience, specifically supporting close collaboration between theorists and experimental researchers. We are looking for strong research personalities who are experienced in the field of Computational Neuroscience and/or related disciplines such as theoretical physics, biophysics, mathematics, or computer science, and with a commitment to a research career in neuroscience. Prior biological or neuroscience training is desirable. The Bernstein Fellow will have the opportunity to collaborate with other members of the BCCN or to pursue an independent research program complementing the activities of the BCCN.
The positions will be awarded for two-year periods. The search will remain open until an appointment is made, but complete applications should be received by *February 2, 2014*, to ensure full consideration. Salary is in accordance with the public service salary scale (E13/E14) and is complemented by social benefits. The BCCN is committed to employing more handicapped individuals and especially encourages them to apply. We also seek to increase the number of women in those areas where they are underrepresented and therefore explicitly encourage women to apply. Please submit your e-mail application, preferably as one single PDF document, including a cover letter, CV, list of publications, research proposal/interests, relevant certificates, and copies of three of your most important publications, and arrange for 2 letters of recommendation to be sent to: *jobs @ bccn-goettingen.de* *(Subject: Bernstein Fellow)* For more information please refer to http://www.bccn-goettingen.de . Prof. Dr. Fred Wolf Director Bernstein Center for Computational Neuroscience (BCCN) Göttingen Max Planck Institute for Dynamics and Self-Organization Am Fassberg 17 37077 Göttingen, Germany http://www.bccn-goettingen.de -- Dr. Kerstin Mosch Bernstein Center for Computational Neuroscience (BCCN) Goettingen Bernstein Focus Neurotechnology (BFNT) Goettingen Max Planck Institute for Dynamics and Self-Organization Am Fassberg 17 D-37077 Goettingen Germany T: +49 (0) 551 5176 - 405 E: kerstin at nld.ds.mpg.de I: www.bccn-goettingen.de I: www.bfnt-goettingen.de
From fjaekel at uos.de Fri Jan 10 09:34:45 2014 From: fjaekel at uos.de (Frank Jäkel) Date: Fri, 10 Jan 2014 15:34:45 +0100 Subject: Connectionists: OCCAM Workshop "Mechanisms for Probabilistic Inference" Message-ID: <1389364485.23627.7.camel@birke.ikw.uni-osnabrueck.de> Dear Colleague, we would like to invite you to register for the 4th +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Osnabrueck Computational Cognition Alliance Meeting (OCCAM 2014) on "Mechanisms for Probabilistic Inference" May 7-9, 2014. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ List of invited speakers: Wendy Adams Adrienne Fairhall Frank Jaekel Timm Lochmann Mate Lengyel Wolfgang Maass Jakob Macke Klaus Pawelzik Frederike Petzschner Brigitte Roeder Maneesh Sahani Wolf Singer Melanie Wilke Florentin Woergoetter Davide Zoccolan The workshop will take place in Osnabrueck, Germany, and will be hosted by the Institute of Cognitive Science (University of Osnabrueck). Details can be found below and on the following webpage: http://www.occam-os.de The registration deadline is March 7, 2014 (first come, first served). The registration fee is 100,- Euros. This fee covers workshop attendance incl. coffee, the buffet on the first day, and the conference lunch on the last day. The OCCAM workshop series aims at understanding the principles of information processing in the brain with a particular focus on 3 major topics: 1. Neural coding and representation in hierarchical systems 2. Self-organization in dynamical systems 3. Mechanisms for probabilistic inference There will also be a poster session where conference participants will have the opportunity to present their work. Best regards, Frank Jaekel, Peter König, Gordon Pipa (Organizing committee)
From alessandra.sciutti at gmail.com Fri Jan 10 12:31:38 2014 From: alessandra.sciutti at gmail.com (Alessandra Sciutti) Date: Fri, 10 Jan 2014 18:31:38 +0100 Subject: Connectionists: Deadline extension - WORKSHOP at HRI 2014 - HRI: a bridge between Robotics and Neuroscience Message-ID: <52d02e7c.c6310f0a.089f.7799@mx.google.com> Apologies for cross-posting ========================================================================= Workshop "HRI: a bridge between Robotics and Neuroscience", at HRI 2014, Bielefeld, DE ========================================================================= March 3, 2014 Submission deadline: January 20, 2014 (EXTENDED) Notification of acceptance: January 27, 2014 Website: http://www.macs.hw.ac.uk/~kl360/HRI2014W/ ========================================================================= INVITED SPEAKERS: ----------------- Prof. Malinda Carpenter, University of St Andrews on research leave at Max Planck Institute for Evolutionary Anthropology Prof. Luciano Fadiga, Italian Institute of Technology Prof. Giulio Sandini, Italian Institute of Technology Prof. Brian Scassellati, Yale University A fundamental challenge for robotics is to transfer natural human social skills to the interaction with a robot. At the same time, neuroscience and psychology are still investigating the mechanisms behind the development of human-human interaction. HRI therefore becomes an ideal contact point for these different disciplines, as the robot can join these two research streams by serving different roles. From a robotics perspective, the study of interaction is used to implement cognitive architectures and develop cognitive models, which can then be tested in real-world environments.
From a neuroscientific perspective, robots could represent an ideal stimulus to establish an interaction with human partners in a controlled manner, making it possible to study quantitatively the behavioral and neural underpinnings of both cognitive and physical interaction. Ideally, the integration of these two approaches could lead to a positive loop: the implementation of new cognitive architectures may raise new interesting questions for neuroscientists, and the behavioral and neuroscientific results of human-robot interaction studies could validate or give new inputs to robotics engineers. However, the integration of two different disciplines is always difficult, as often even similar goals are masked by differences in language or methodologies across fields. The aim of this workshop will be to provide a venue for researchers of different disciplines to discuss and present possible points of contact, to address the issues and highlight the advantages of bridging the two disciplines in the context of the study of interaction. LIST OF TOPICS ----------------- - Human Robot Interaction - Cognitive Models - Development of Social Cognition - Neural bases of Interaction - Cognitive and Physical Interaction - Social Signals FORMAT AND SUBMISSIONS --------------------- The workshop will consist of invited keynotes and time for discussions, and will also feature a poster session. Prospective participants are invited to submit full papers (8 pages) or short papers (2 pages). Submissions will be accepted in PDF format only, using the HRI formatting guidelines and including author names. Authors should send their papers to hri2014workshop at gmail.com. All submissions will be peer-reviewed. Depending on available time, selected contributions may have the opportunity to be presented in the oral session. The other selected contributions will be presented as posters during a dedicated session. Accepted publications will be published on our workshop web page.
Depending on the overall quality of the contributions, we might consider proposing a Special Issue to a journal in the near future. Authors will have the option of opting out of including their reports on the website. Information on the opt-out option will be provided along with the acceptance notice for the papers. In addition to the submission, participants have to answer one of the following questions: - Which outcomes should neuroscientific research provide to be useful to robotics? And vice versa? Can descriptive results be enough, or is modelling necessary for a positive communication to exist? - How can robotics research contribute to/influence neuroscience and/or psychology? Although there are many robotics studies inspired by evidence obtained in neuroscience and/or psychology, the impact of robotics on neuroscience or psychology is less evident, especially in modelling research. What can roboticists do to cause a paradigm shift? - Where is the bridge between robotics and neuroscience most useful, and where is it not (or less so)? E.g., very useful for social robotics, less useful for algorithm design (or not?) - How should this bridge be built? At the level of the single individual (i.e. a person with a multidisciplinary background), at the level of a group (i.e., a group of people with different backgrounds), at the level of a department (with different labs meeting once in a while), or a mix of the previous? Depending on available time, these questions/answers will be used to "drive" a final discussion. IMPORTANT DATES ------------- Submission deadline: January 20, 2014 (EXTENDED) Notification of acceptance: January 27, 2014 March 3, 2014, Workshop at HRI 2014 ORGANIZERS ---------- - Alessandra Sciutti Istituto Italiano di Tecnologia - Katrin Solveig Lohan Heriot-Watt University - Yukie Nagai Osaka University
URL: From jaakko.peltonen at aalto.fi Fri Jan 10 15:10:48 2014 From: jaakko.peltonen at aalto.fi (Peltonen Jaakko) Date: Fri, 10 Jan 2014 20:10:48 +0000 Subject: Connectionists: AISTATS: Second call for late-breaking posters, DL Jan 24 Message-ID: <34678FBC663BDC47BAD0B96BB3FF28580167157CF1@EXMDB01.org.aalto.fi>

==============================================================================
AISTATS 2014 Call for Late-breaking Posters
Seventeenth International Conference on Artificial Intelligence and Statistics
April 22 - 25, 2014, Reykjavik, Iceland
http://www.aistats.org
Colocated with a Machine Learning Summer School (MLSS)
==============================================================================

AISTATS is an interdisciplinary gathering of researchers at the intersection of computer science, artificial intelligence, machine learning, statistics, and related areas. Since its inception in 1985, the primary goal of AISTATS has been to broaden research in these fields by promoting the exchange of ideas among them. Some time at AISTATS will be set aside for "breaking news" posters with one-page abstracts. These posters may report on ongoing or unpublished projects, projects already published elsewhere, partially developed ideas, negative results, etc., and are meant as an informal forum to encourage discussion. Abstracts should summarize the projects and highlight why they will be of interest to the AISTATS community. The review process for the late-breaking posters will be very light-touch, and presentation at the conference will not lead to publication in the proceedings. We encourage the submission of late-breaking posters at http://www.aistats.org
In brief:
- Electronic submission of one-page abstracts is required.
- Include author names, as the reviewing is not double-blind.
- Accepted submissions will be presented as posters but will not be published.
- The submission site will be open by 20 December.
- Submissions will be considered if received by 24 January, 2014, 23:59 UTC.
- Acceptance notifications will be emailed by February 11th.
- More details below and at http://www.aistats.org/submit_latebreaking_posters.php

Keynote and Tutorial Speakers:
------------------------------
Keynote Speakers:
Peter Buhlmann, ETH Zurich
Andrew Gelman, Columbia University
Michael I. Jordan, University of California, Berkeley
Tutorial Speakers:
Roderick Murray-Smith, University of Glasgow
Christian P. Robert, Ceremade - Universite Paris-Dauphine
Havard Rue, Norwegian University of Science and Technology

Submission Requirements for Late-breaking Posters:
--------------------------------------------------
Electronic submission of one-page abstracts is required. The abstracts must fit one A4/letter page with sufficient margins and at least a 10-point font. The authors may use the proceedings track paper format, but this is not required. Please remember to include author names, as the reviewing is not double-blind. Abstracts will be lightly reviewed. Acceptance notifications will be emailed by February 11th. Accepted abstracts will be presented as posters at the conference but will not be published. Solicited topics include, but are not limited to:
* Models and estimation: graphical models, causality, Gaussian processes, approximate inference, kernel methods, nonparametric models, statistical and computational learning theory, manifolds and embedding, sparsity and compressed sensing, ...
* Classification, regression, density estimation, unsupervised and semi-supervised learning, clustering, topic models, ...
* Structured prediction, relational learning, logic and probability
* Reinforcement learning, planning, control
* Game theory, no-regret learning, multi-agent systems
* Algorithms and architectures for high-performance computation in AI and statistics
* Software for and applications of AI and statistics
For a more detailed list of keywords, see http://www.aistats.org/keywords.php.
See http://www.aistats.org/submit_latebreaking_posters.php for submission details.

Submission Deadline:
--------------------
Submissions will be considered if received by 24 January, 2014, 23:59 UTC. Acceptance notifications will be emailed by February 11th. See the conference website for additional important dates: http://www.aistats.org/dates.php.

Colocated Events:
-----------------
A Machine Learning Summer School (MLSS) will be held after the conference (April 25th-May 4th). April 25 will be an AISTATS/MLSS joint tutorial + MLSS poster session day. The summer school features an exciting program with talks from leading experts in the field, see http://mlss2014.hiit.fi for details.

Venue:
------
AISTATS 2014 will be held in Reykjavik, the capital of Iceland, at the Grand Hotel Reykjavik. Reykjavik and its environs offer a unique mix of culture and varied nature, from glaciers to waterfalls to geysers and thermal pools. It is also a rare opportunity to spend an afternoon break at the famous Blue Lagoon geothermal spa. Travel information is available at http://www.aistats.org.

Program Chairs:
---------------
Samuel Kaski, Aalto University and University of Helsinki
Jukka Corander, University of Helsinki

Local Chair:
Deon Garrett, School of Computer Science, Reykjavik University and Icelandic Institute for Intelligent Machines

Senior Program Committee:
-------------------------
Edoardo Airoldi, Harvard University; Florence d'Alche-Buc, Universite d'Evry-Val d'Essonne; Cedric Archambeau, Amazon; Peter Auer, University of Leoben; Erik Aurell, KTH; Yoshua Bengio, Universite de Montreal; Carlo Berzuini, University of Manchester; Jeff A. Bilmes, University of Washington; Wray Buntine, NICTA; Lawrence Carin, Duke University; Guido Consonni, Universita Cattolica del Sacro Cuore; Koby Crammer, The Technion; Emily B.
Fox, University of Washington; Mehmet Gonen, Sage Bionetworks; Aapo Hyvarinen, University of Helsinki; Timo Koski, KTH; John Paisley, Columbia University; Jan Peters, Technische Universitat Darmstadt; Volker Roth, Universitat Basel; Yevgeny Seldin, Queensland University of Technology and UC Berkeley; Scott Sisson, University of New South Wales; Suvrit Sra, Max-Planck Institute for Intelligent Systems; Masashi Sugiyama, Tokyo Institute of Technology; Joe Suzuki, Osaka University; Bill Triggs, Centre National de Recherche Scientifique; Aki Vehtari, Aalto University; Jean-Philippe Vert, Mines ParisTech and Curie Institute; Stephen Walker, University of Texas at Austin; Kun Zhang, Max Planck Institute for Intelligent Systems

The European meetings of AISTATS are organized by the European Society for Artificial Intelligence and Statistics.

============== for more information see http://www.aistats.org ===============

From mlsp at NEURO.KULEUVEN.BE Fri Jan 10 06:01:48 2014 From: mlsp at NEURO.KULEUVEN.BE (2014 IEEE International Workshop on Machine Learning for Signal Processing) Date: Fri, 10 Jan 2014 12:01:48 +0100 Subject: Connectionists: call for papers - 2014 IEEE International Workshop on Machine Learning for Signal Processing (MLSP) Message-ID: <52CFD31C.7030609@neuro.kuleuven.be>

2014 IEEE International Workshop on Machine Learning for Signal Processing (MLSP)
September 21-24, 2014, Reims, France
http://mlsp2014.conwiz.dk

CALL FOR PAPERS
The 24th MLSP workshop in the series organized by the IEEE Signal Processing Society MLSP Technical Committee will present the most recent and exciting advances in machine learning for signal processing through keynote talks and tutorials, as well as special and regular single-track sessions.
Prospective authors are invited to submit papers on relevant algorithms and applications including, but not limited to:
- Learning theory and techniques
- Graphical models and kernel methods
- Data-driven adaptive systems and models
- Pattern recognition and classification
- Distributed, Bayesian, subspace/manifold and sparsity-aware learning
- Multiset data analysis and multimodal data fusion
- Perceptual signal processing in audio, image and video
- Cognitive information processing
- Multichannel adaptive and nonlinear signal processing
- Applications, including: speech and audio; image and video; music; biomedical signals and images; communications; bioinformatics; biometrics; computational intelligence; genomic signals and sequences; social networks; games; and smart grids

DATA ANALYSIS AND SIGNAL PROCESSING COMPETITION
A Data Analysis and Signal Processing Competition is being organized in conjunction with the workshop. The goal of the competition is to advance the current state of the art in both theoretical and practical aspects of signal processing. The problems are selected to reflect current trends, evaluate existing approaches on common benchmarks, and identify critical new areas of research. Winners will be announced and awards given at the workshop.

BEST STUDENT PAPER AWARD
The MLSP Best Student Paper Award will be granted to the best overall paper for which a student is the principal author and presenter. This author must be a registered student at the time of paper submission to be eligible for this award. The award will be presented during the conference and consists of an honorarium (to be divided equally between all student authors of the paper) and a certificate for each such author. The award will be selected by a subcommittee of the program committee. The selection is based on the quality, originality, and clarity of the submission.
PAPER SUBMISSION
Prospective authors are invited to submit a double-column paper of up to six pages using the electronic submission procedure at http://mlsp2014.conwiz.dk/paper_submission.htm
Accepted papers will be published on memory sticks to be distributed at the workshop. The presented papers will be published in and indexed by IEEE Xplore.

SCHEDULE:
- Submission of full paper: May 5, 2014
- Notification of acceptance: June 27, 2014
- Advance registration before: July 25, 2014
- Camera-ready paper: July 25, 2014

ORGANIZING COMMITTEE:
General Chair: Mamadou Mboup (contact: mlsp2014-chair at conwiz.dk)
Program Chairs: Tülay Adali, Eric Moreau (contact: mlsp2014-programchairs at conwiz.dk)
Data Competition Chair: Vince Calhoun
Special Session Chair: Jean-Yves Tourneret
Publicity Chair: Marc Van Hulle
Web and Publication Chairs: Jan Larsen, Kevin Guelton
Local Arrangement Chairs: Valeriu Vrabie, Hassan Fenniri

From rao at cs.washington.edu Fri Jan 10 19:41:24 2014 From: rao at cs.washington.edu (Rajesh Rao) Date: Fri, 10 Jan 2014 16:41:24 -0800 Subject: Connectionists: Computational Neuroscience: "Massively open" online course (MOOC) Message-ID:

Dear Colleagues, After a successful first offering last Spring (about 50K registered students), we are pleased to announce the second offering of our Computational Neuroscience MOOC ("massively open" online course) on Coursera: https://class.coursera.org/compneuro-002 Feel free to pass this information to students and others who may be interested. Registration is free, requires only an email address, and is open to all. with best regards, Rajesh Rao & Adrienne Fairhall -- Rajesh P. N.
Rao Director, Center for Sensorimotor Neural Engineering Associate Professor, Department of CSE Adjunct Professor, Bioengineering and Electrical Engineering Faculty, Neurobiology and Behavior Program Box 352350, University of Washington, Seattle, WA 98195-2350, USA Phone: 206-685-9141 Fax: 206-543-2969 http://homes.cs.washington.edu/~rao/ From cie.conference.series at gmail.com Sat Jan 11 10:15:56 2014 From: cie.conference.series at gmail.com (CiE Conference Series) Date: Sat, 11 Jan 2014 15:15:56 +0000 (GMT) Subject: Connectionists: CiE 2014: Language, Life, Limits - extended deadline Message-ID: FINAL CALL FOR PAPERS (incl. deadline extension due to popular demand) CiE 2014: Language, Life, Limits Budapest, Hungary June 23 - 27, 2014 http://cie2014.inf.elte.hu IMPORTANT DATES: EXTENDED Submission Deadline for LNCS: 20 January 2014 Notification of authors: 3 March 2014 Deadline for final revisions: 31 March 2014 FUNDING and AWARDS: CiE 2014 has received funding for student participation from the European Association for Theoretical Computer Science EATCS. Please contact the PC chairs if you are interested. The best student paper will receive an award sponsored by Springer. CiE 2014 is the tenth conference organized by CiE (Computability in Europe), a European association of mathematicians, logicians, computer scientists, philosophers, physicists and others interested in new developments in computability and their underlying significance for the real world. Previous meetings have taken place in Amsterdam (2005), Swansea (2006), Siena (2007), Athens (2008), Heidelberg (2009), Ponta Delgada (2010), Sofia (2011), Cambridge (2012), and Milan (2013). The motto of CiE 2014 "Language, Life, Limits" intends to put a special focus on relations between computational linguistics, natural and biological computing, and more traditional fields of computability theory.
This is to be understood in its broadest sense, including computational aspects of problems in linguistics, studying models of computation and algorithms inspired by physical and biological approaches, as well as exhibiting limits (and non-limits) of computability when considering different models of computation arising from such approaches. As with previous CiE conferences, the overall guiding perspective is to strengthen the mutual benefits of analyzing traditional and new computational paradigms in their corresponding frameworks, both with respect to practical applications and to a deeper theoretical understanding. We particularly invite papers that build bridges between different parts of the research community. For topics covered by the conference, please visit http://cie2014.inf.elte.hu/?Topics We particularly welcome submissions in emergent areas, such as bioinformatics and natural computation, where they have a basic connection with computability.

TUTORIAL SPEAKERS:
Wolfgang Thomas (RWTH Aachen)
Peter Gruenwald (CWI, Amsterdam)

INVITED SPEAKERS:
Lev Beklemishev (Steklov Mathematical Institute, Moscow)
Alessandra Carbone (Universite Pierre et Marie Curie and CNRS Paris)
Maribel Fernandez (King's College London)
Przemyslaw Prusinkiewicz (University of Calgary)
Eva Tardos (Cornell University)
Albert Visser (Utrecht University)

SPECIAL SESSIONS:
History and Philosophy of Computing (organizers: Liesbeth de Mol, Giuseppe Primiero)
Computational Linguistics (organizers: Maria Dolores Jimenez-Lopez, Gabor Proszeky)
Computability Theory (organizers: Karen Lange, Barbara Csima)
Bio-inspired Computation (organizers: Marian Gheorghe, Florin Manea)
Online Algorithms (organizers: Joan Boyar, Csanad Imreh)
Complexity in Automata Theory (organizers: Markus Lohrey, Giovanni Pighizzini)

Contributed papers will be selected from submissions received by the PROGRAM COMMITTEE consisting of: * Gerard Alberts (Amsterdam) * Sandra Alves (Porto) * Hajnal Andreka (Budapest) * Luis Antunes
(Porto) * Arnold Beckmann (Swansea) * Laurent Bienvenu (Paris) * Paola Bonizzoni (Milan) * Olivier Bournez (Palaiseau) * Vasco Brattka (Munich) * Bruno Codenotti (Pisa) * Erzsebet Csuhaj-Varju (Budapest, co-chair) * Barry Cooper (Leeds) * Michael J. Dinneen (Auckland) * Erich Graedel (Aachen) * Marie Hicks (Chicago IL) * Natasha Jonoska (Tampa FL) * Jarkko Kari (Turku) * Elham Kashefi (Edinburgh) * Viv Kendon (Leeds) * Satoshi Kobayashi (Tokyo) * Andras Kornai (Budapest) * Marcus Kracht (Bielefeld) * Benedikt Loewe (Amsterdam & Hamburg) * Klaus Meer (Cottbus, co-chair) * Joseph R. Mileti (Grinnell IA) * Georg Moser (Innsbruck) * Benedek Nagy (Debrecen) * Sara Negri (Helsinki) * Thomas Schwentick (Dortmund) * Neil Thapen (Prague) * Peter van Emde Boas (Amsterdam) * Xizhong Zheng (Glenside PA) The PROGRAMME COMMITTEE cordially invites all researchers (European and non-European) in computability related areas to submit their papers (in PDF format, max 10 pages using the LNCS style) for presentation at CiE 2014. The submission site https://www.easychair.org/conferences/?conf=cie2014 is open. For submission instructions consult http://cie2014.inf.elte.hu/?Submission_Instructions The CONFERENCE PROCEEDINGS will be published by LNCS, Springer Verlag. 
Contact: Erzsebet Csuhaj-Varju - csuhaj[at]inf.elte.hu Website: http://cie2014.inf.elte.hu/ __________________________________________________________________________ ASSOCIATION COMPUTABILITY IN EUROPE http://www.computability.org.uk CiE Conference Series http://www.illc.uva.nl/CiE CiE 2014: Language, Life, Limits http://cie2014.inf.elte.hu CiE Membership Application Form http://www.lix.polytechnique.fr/CIE AssociationCiE on Twitter http://twitter.com/AssociationCiE __________________________________________________________________________ From grlmc at urv.cat Sat Jan 11 06:05:55 2014 From: grlmc at urv.cat (GRLMC) Date: Sat, 11 Jan 2014 12:05:55 +0100 Subject: Connectionists: SSTiC 2014: January 18, 2nd registration deadline Message-ID: <852362E7B20949D3AC8B42C3BF6886C2@Carlos1> *To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject line* ********************************************************************* 2014 TARRAGONA INTERNATIONAL SUMMER SCHOOL ON TRENDS IN COMPUTING SSTiC 2014 Tarragona, Spain July 7-11, 2014 Organized by Rovira i Virgili University http://grammars.grlmc.com/sstic2014/ ********************************************************************* --- January 18, 2nd registration deadline --- ********************************************************************* AIM: SSTiC 2014 is the second edition in a series started in 2013. For the previous event, see http://grammars.grlmc.com/SSTiC2013/ SSTiC 2014 will be a research training event mainly addressed to PhD students and PhD holders in the first steps of their academic career. It intends to update them about the most recent developments in the diverse branches of computer science and its neighbouring areas. To that purpose, renowned scholars will lecture and will be available for interaction with the audience. 
SSTiC 2014 will cover the whole spectrum of computer science through 5 keynote lectures and 24 six-hour courses dealing with some of the most lively topics in the field. The organizers share the conviction that outstanding speakers will attract the brightest students.

ADDRESSED TO: Graduate students from around the world. There are no formal pre-requisites in terms of the academic degree the attendee must hold. However, since there will be several levels among the courses, reference may be made to specific background knowledge in the description of some of them. SSTiC 2014 is also appropriate for more senior people who want to keep themselves updated on developments in their own field or in other branches of computer science. They will surely find it fruitful to listen and discuss with scholars who are main references in computing nowadays.

REGIME: In addition to keynotes, 3 parallel sessions will be held during the whole event. Participants will be able to freely choose the courses they wish to attend, as well as to move from one to another.

VENUE: SSTiC 2014 will take place in Tarragona, located 90 km south of Barcelona. The venue will be:
Campus Catalunya
Universitat Rovira i Virgili
Av. Catalunya, 35
43002 Tarragona

KEYNOTE SPEAKERS:
Larry S. Davis (U Maryland, College Park), A Historical Perspective of Computer Vision Models for Object Recognition and Scene Analysis
David S. Johnson (Columbia U, New York), Open and Closed Problems in NP-Completeness
George Karypis (U Minnesota, Twin Cities), Recommender Systems Past, Present, & Future
Steffen Staab (U Koblenz), Explicit and Implicit Semantics: Two Sides of One Coin
Ronald R.
Yager (Iona C, New Rochelle), Social Modeling COURSES AND PROFESSORS: Divyakant Agrawal (U California, Santa Barbara), [intermediate] Scalable Data Management in Enterprise and Cloud Computing Infrastructures Pierre Baldi (U California, Irvine), [intermediate] Big Data Informatics Challenges and Opportunities in the Life Sciences Stephen Brewster (U Glasgow), [introductory] Multimodal Human-computer Interaction Rajkumar Buyya (U Melbourne), [intermediate] Cloud Computing John M. Carroll (Pennsylvania State U, University Park), [introductory] Usability Engineering and Scenario-based Design Kwang-Ting (Tim) Cheng (U California, Santa Barbara), [introductory/intermediate] Smartphones: Hardware Platform, Software Development, and Emerging Apps Amr El Abbadi (U California, Santa Barbara), [introductory] The Distributed Foundations of Data Management in the Cloud Richard M. Fujimoto (Georgia Tech, Atlanta), [introductory] Parallel and Distributed Simulation Mark Guzdial (Georgia Tech, Atlanta), [introductory] Computing Education Research: What We Know about Learning and Teaching Computer Science David S. Johnson (Columbia U, New York), [introductory] The Traveling Salesman Problem in Theory and Practice George Karypis (U Minnesota, Twin Cities), [intermediate] Programming Models/Frameworks for Parallel & Distributed Computing Aggelos K. Katsaggelos (Northwestern U, Evanston), [intermediate] Optimization Techniques for Sparse/Low-rank Recovery Problems in Image Processing and Machine Learning Arie E. Kaufman (U Stony Brook), [advanced] Visualization Carl Lagoze (U Michigan, Ann Arbor), [introductory] Curation of Big Data Bijan Parsia (U Manchester), [introductory] The Empirical Mindset in Computer Science Charles E. Perkins (FutureWei Technologies, Santa Clara), [intermediate] Beyond LTE: the Evolution of 4G Networks and the Need for Higher Performance Handover System Designs Sudhakar M. 
Reddy (U Iowa, Iowa City), [introductory] Test and Design for Test of Digital Logic Circuits
Robert Sargent (Syracuse U), [introductory] Validation of Models
Mubarak Shah (U Central Florida, Orlando), [intermediate] Visual Crowd Analysis
Steffen Staab (U Koblenz), [intermediate] Programming the Semantic Web
Mike Thelwall (U Wolverhampton), [introductory] Sentiment Strength Detection for Twitter and the Social Web
Jeffrey D. Ullman (Stanford U), [introductory] MapReduce Algorithms
Nitin Vaidya (U Illinois, Urbana-Champaign), [introductory/intermediate] Distributed Consensus: Theory and Applications
Philip Wadler (U Edinburgh), [intermediate] Topics in Lambda Calculus and Life

ORGANIZING COMMITTEE:
Adrian Horia Dediu (Tarragona)
Carlos Martín-Vide (Tarragona, chair)
Florentina Lilica Voicu (Tarragona)

REGISTRATION: It has to be done at http://grammars.grlmc.com/sstic2014/registration.php The selection of up to 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an approximation of the respective demand for each course. Since the capacity of the venue is limited, registration requests will be processed on a first come, first served basis. The registration period will close when the capacity of the venue is reached, so registering well before the event is advisable.

FEES: As far as possible, participants are expected to attend for the whole (or most of the) week (full-time). Fees are a flat rate allowing one to participate in all courses. They vary depending on the registration deadline.

ACCOMMODATION: Information about accommodation will be available on the website of the School in due time.

CERTIFICATE: Participants will receive a certificate of attendance.

QUESTIONS AND FURTHER INFORMATION: florentinalilica.voicu at urv.cat

POSTAL ADDRESS: SSTiC 2014 Lilica Voicu Rovira i Virgili University Av.
Catalunya, 35 43002 Tarragona, Spain Phone: +34 977 559 543 Fax: +34 977 558 386 ACKNOWLEDGEMENTS: Departament d'Economia i Coneixement, Generalitat de Catalunya Universitat Rovira i Virgili

From hava at cs.umass.edu Sun Jan 12 14:21:00 2014 From: hava at cs.umass.edu (Hava Siegelmann) Date: Sun, 12 Jan 2014 14:21:00 -0500 Subject: Connectionists: grad + postdoc Message-ID: <52D2EB1C.6030205@cs.umass.edu>

The BINDS Lab at UMass Amherst is looking to hire both a grad student and a postdoc for a project combining components of:
- Super-Turing Theory
- Time Series Analysis
- Chaos Theory
- Programming
If you have a strong background in at least two of the above, please contact me directly. Positions are available now. -- Hava T. Siegelmann, Ph.D. Professor Director, BINDS Lab (Biologically Inspired Neural Dynamical Systems) Dept. of Computer Science Program of Neuroscience and Behavior University of Massachusetts Amherst Amherst, MA, 01003 Phone - Grant Administrator -- Michele Roberts: 413-545-4389 Fax: 413-545-1249 LAB WEBSITE: http://binds.cs.umass.edu/

From alessandro.torcini at cnr.it Sun Jan 12 13:29:10 2014 From: alessandro.torcini at cnr.it (Alessandro Torcini) Date: Sun, 12 Jan 2014 19:29:10 +0100 Subject: Connectionists: Workshop on dynamics of neural circuits ----- Florence - 17-19 march 2014 Message-ID:

Workshop Title: Dynamics of Neural Circuits
Venue: Institute for Complex Systems, CNR, Florence, IT
Date: March 17-19, 2014
Webpage: http://neuro.fi.isc.cnr.it/index.php?page=NETT2014

This is the 4th workshop organized by the Marie Curie Initial Training Network "Neural Engineering Transformative Technologies (NETT)". It focuses on Dynamics of Neural Circuits, including: experimental work on large-scale recording of neural activity, novel tools for analyzing high-dimensional neural datasets, and theoretical aspects related to the emergence of collective behaviors in these systems.
Invited speakers:
Ralph Andrzejak (University Pompeu Fabra, Barcelona, Spain)
Stefano Boccaletti (Institute for Complex Systems, Sesto Fiorentino, Italy)
Ingo Bojak (University of Reading, UK)
Laurent Bougrain (INRIA, Nancy, France)
Raffaella Burioni (University of Parma, Italy)
Daniel Chicharro (IIT, Genova, Italy)
Alain Destexhe (National Center for Scientific Research, Gif-sur-Yvette, France)
Peter Grassberger (Research Center Juelich, Germany)
Conor Houghton (University of Bristol, UK)
Axel Hutt (INRIA, Nancy, France)
Thomas Knöpfel (Imperial College, London, UK)
Roberto Livi (University of Florence, Italy)
Stefano Luccioli (Institute for Complex Systems, Sesto Fiorentino, Italy)
Jason Maclean (University of Chicago, USA)
Valentina Pasquale (IIT, Genova, Italy)
Francesco Saverio Pavone (University of Florence, Italy)
Eckehard Schoell (Technical University Berlin, Germany)
Simon Schultz (Imperial College, London, UK)

In addition to the invited talks and a few contributed talks, there will also be an open poster session. There is some (limited) space to accommodate external speakers and participants. If you intend to give a contributed talk or present a poster, please send us a short abstract; otherwise send us just a few lines in which you explain why this workshop is important for you. This selection procedure is necessary due to space and budget limitations. The deadline for application to attend by non-NETT members is January 31, 2014. Confirmations will be sent out by February 7, 2014. Once your participation is confirmed, the deadline for registration is February 28, 2014. Information about the registration fees and registration procedures can be found at [http://neuro.fi.isc.cnr.it/index.php?page=registration-fees]. For information about accommodation options at reduced rates please have a look at [http://neuro.fi.isc.cnr.it/index.php?page=hotels]. For more information about NETT please refer to [http://www.neural-engineering.eu/].
Scientific Coordinators: Thomas Kreuz and Alessandro Torcini, ISC, Florence, Italy; Simon Schultz, Imperial College, London, UK
Organization: Nebojsa Bozanic and David Angulo Garcia, ISC, Florence, Italy; Romain Cazé, Imperial College, London, UK
This workshop is partially supported by the Italian Society of Chaos and Complexity (SICC, http://www.sicc-it.unina.it/).
--
Neural Engineering Transformative Technologies (NETT)
EU-FP7 Initial Training Network
http://www.neural-engineering.eu

From Colin.Wise at uts.edu.au Sun Jan 12 18:03:43 2014 From: Colin.Wise at uts.edu.au (Colin Wise) Date: Mon, 13 Jan 2014 10:03:43 +1100 Subject: Connectionists: AAI Short Course - Advanced Data Analytics - an Introduction - Wednesday 29 January 2014 Message-ID: <8112393AA53A9B4A9BDDA6421F26C68A016E461F9043@MAILBOXCLUSTER.adsroot.uts.edu.au>

Dear Colleague, AAI Short Course - Advanced Data Analytics - an Introduction - Wednesday 29 January 2014 https://shortcourses-bookings.uts.edu.au/Clientview/Schedules/ScheduleDetail.aspx?ScheduleID=1540&EventID=1273 The AAI short course 'Advanced Data Analytics - an Introduction' may well be of interest to you, your organisation, and key personnel. This introductory Data Analytics short course will provide an early and rewarding understanding of the level of analytics which your organisation and your people should be seeking.
Course outcomes
Upon completion of this course students will:
* Understand why advanced data analytics is essential to your business success
* Understand the key terms and concepts used in advanced data analytics
* Understand the relations between big data, cloud computing and analytics
* Be familiar with basic statistical skills for data analytics, including descriptive analysis, regression, and multivariate data analysis
* Learn the basics of data mining, data warehousing, visualization and reporting, such as supervised vs. unsupervised methods, clustering, association rules and frequent pattern mining
* Know key techniques in machine learning, such as parametric and non-parametric models, learning and inference, maximum-likelihood estimation, and Bayesian approaches
* Be introduced to social media analytics, multimedia analytics, and real projects and case studies conducted at AAI

Future short courses on Data Analytics and Big Data may be viewed at http://analytics.uts.edu.au/shortcourses/structure.html First in a series of advanced data analytics short courses - register here. Happy to discuss at your convenience. Regards. Colin Wise Operations Manager Advanced Analytics Institute (AAI) Blackfriars Building 2, Level 1 University of Technology, Sydney (UTS) Email: Colin.Wise at uts.edu.au Tel. +61 2 9514 9267 M. 0448 916 589 AAI: www.analytics.uts.edu.au/ AAI Email Policy - should you wish to not receive these communications on Data Analytics Learning please reply to our email (sender) with UNSUBSCRIBE in the Subject. We will delete you from our database. Thank you for your patience and consideration. UTS CRICOS Provider Code: 00099F DISCLAIMER: This email message and any accompanying attachments may contain confidential information. If you are not the intended recipient, do not read, use, disseminate, distribute or copy this message or attachments.
If you have received this message in error, please notify the sender immediately and delete this message. Any views expressed in this message are those of the individual sender, except where the sender expressly, and with authority, states them to be the views of the University of Technology Sydney. Before opening any attachments, please check them for viruses and defects. Think. Green. Do. Please consider the environment before printing this email.

From pblouw at uwaterloo.ca Sun Jan 12 22:09:35 2014 From: pblouw at uwaterloo.ca (Peter Blouw) Date: Sun, 12 Jan 2014 22:09:35 -0500 Subject: Connectionists: Reminder: Summer School on Large-Scale Brain Modelling - Apply by Feb. 15th! Message-ID:

Hello! [All details about this school can be found online at http://www.nengo.ca/summerschool] The Centre for Theoretical Neuroscience at the University of Waterloo is inviting applications for an in-depth, two-week summer school that will teach participants how to use the Nengo simulation package to build state-of-the-art cognitive and neural models. Nengo has been used to build what is currently the world's largest functional brain model, Spaun [1], and provides users with a versatile and powerful environment for simulating cognitive and neural systems. We welcome applications from all interested graduate students, research associates, postdocs, professors, and industry professionals. No specific training in the use of modelling software is required, but we encourage applications from active researchers with a relevant background in psychology, neuroscience, cognitive science, or a related field. [1] Eliasmith, C., Stewart T. C., Choo X., Bekolay T., DeWolf T., Tang Y., Rasmussen, D. (2012). A large-scale model of the functioning brain. Science. Vol. 338 no. 6111 pp. 1202-1205. DOI: 10.1126/science.1225266.
[ http://nengo.ca/publications/spaunsciencepaper] ***Application Deadline: February 15, 2014*** Format Participants are encouraged to bring their own ideas for projects, which may focus on testing hypotheses, modelling neural or cognitive data, implementing specific behavioural functions with neurons, expanding past models, or providing a proof-of-concept of various neural mechanisms. Projects can be focused on software, hardware, or a combination of both. Amongst other things, participants will have the opportunity to: - build perceptual, motor, and cognitive models with spiking neurons - model anatomical, electrophysiological, cognitive, and behavioural data - use a variety of single cell models within a large-scale model - integrate machine learning methods into biologically oriented models - use Nengo with your favorite simulator, e.g. Brian, NEST, Neuron, etc. - interface Nengo with a variety of neuromorphic hardware - interface Nengo with cameras and robotic systems of various kinds - implement modern nonlinear control methods in neural models - and much more! Hands-on tutorials, work on individual or group projects, and talks from invited faculty members will make up the bulk of day-to-day activities. There will be a weekend break on June 14-15, and fun activities scheduled for evenings throughout! Date and Location: June 8th to June 21st, 2014 at the University of Waterloo, Ontario, Canada. Applications: Please visit http://www.nengo.ca/summerschool, where you can find more information regarding costs, travel, and lodging, along with an application form listing required materials. Questions about the summer school and application process can be directed to Peter Blouw (pblouw at uwaterloo.ca) We look forward to hearing from you! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From irodero at cac.rutgers.edu Mon Jan 13 06:38:07 2014 From: irodero at cac.rutgers.edu (Ivan Rodero) Date: Mon, 13 Jan 2014 06:38:07 -0500 Subject: Connectionists: CSWC 2014 - Call for Papers In-Reply-To: <99D17D7E-34C0-47B7-B641-C756E67D169A@rutgers.edu> References: <51EC7783-DCAD-4364-B1DC-576C726BAA31@rutgers.edu> <6F339279-23CD-4553-95DF-B1F906F948E3@rutgers.edu> <0957F75F-5AB9-4144-B62D-87D225B34E42@rutgers.edu> <22CF346C-98EC-4D5B-9500-D0B9FE60551A@rutgers.edu> <99D17D7E-34C0-47B7-B641-C756E67D169A@rutgers.edu> Message-ID: <87202061-93AC-4066-89E7-77976097AFAB@rutgers.edu> -------------------------------------------------------------------------------------------------- Please accept our apologies if you receive multiple copies of this CFP! -------------------------------------------------------------------------------------------------- ============================================================ 4th International Workshop on Cloud Services and Web 2.0 Technologies for Collaboration (CSWC 2014) As part of The 2014 International Conference on Collaboration Technologies and Systems (CTS 2014) http://cts2014.cisedu.info/2-conference/workshops/workshop-01-cswc May 19-23, 2014 The Commons Hotel, Minneapolis, Minnesota, USA In Cooperation with ACM, IEEE, and IFIP (Pending) Submission Deadline: January 14, 2014 Submissions could be for full papers, short papers, poster papers, or posters ============================================================ SCOPE AND OBJECTIVES Recent technology directions have highlighted the remarkable commercial developments in the cloud arena combined with a suite of Web 2.0 technologies around mashups, gadgets and social networking. These have clear relevance to collaborative systems and sensor grids that involve evolution of grid architectures to clouds or to fresh approaches such as social networking to building virtual organizations. 
This workshop explores this area and invites exploratory papers as well as mature research contributions. Possible CSWC topics include but are not limited to: - Performance and experience using Clouds and Web 2.0 to support sensors or collaboration - Relation of Grids and Clouds in their application to sensors or collaboration - Data driven applications and use of emerging technologies like Mashups, gadgets and Hadoop - Clouds for Social Networking and Virtual Organizations - Security, Privacy, and Trust issues in Clouds and Web 2.0 - MapReduce - Virtualization - Interoperability and Standardization - e-Science - Architectures - Services and Applications - Models for Managing Clouds of Clouds and including Inter-networking - Mobile clouds -- Architectures and Performance SUBMISSION INSTRUCTIONS: You are invited to submit original and unpublished research works on the above and other topics related to Cloud services and Web 2.0 technologies for Collaboration, distributed sensing and related topics. Submitted papers must not have been published or simultaneously submitted elsewhere. Submissions should include a cover page with authors' names, affiliation addresses, fax numbers, phone numbers, and email addresses. Please indicate clearly the corresponding author and include up to 6 keywords from the above list of topics and an abstract of no more than 400 words. The full manuscript should be at most 8 pages using the two-column IEEE format. Additional pages will be charged an additional fee. Short papers (up to 4 pages), poster papers and posters (please refer to http://cts2014.cisedu.info/home/posters for the poster submission details) will also be accepted. Please include page numbers on all preliminary submissions to make it easier for reviewers to provide helpful comments. Submit a PDF copy of your full manuscript to the workshop organizers via https://www.easychair.org/conferences/?conf=cswc2014. 
Only PDF files will be accepted, sent by email to the workshop organizers. Each paper will receive a minimum of three reviews. Papers will be selected based on their originality, relevance, contributions, technical clarity, and presentation. Submission implies the willingness of at least one of the authors to register and present the paper, if accepted. Authors of accepted papers must guarantee that their papers will be registered and presented at the workshop. Accepted papers will be published in the Conference proceedings. Instructions for final manuscript format and requirements will be posted on the CTS 2014 Conference web site. It is our intent to have the proceedings formally published in hard and soft copies and be available at the time of the conference. The proceedings are projected to be included in the IEEE Digital Library and indexed by all major indexing services accordingly. If you have any questions about paper submission or the workshop, please contact the workshop organizers. 
IMPORTANT DATES Paper Submissions: -------------------------------------- January 14, 2014 Acceptance Notification: -------------------------------- February 07, 2014 Camera Ready Papers and Registration Due: --------- February 21, 2014 Conference Dates: --------------------------------------- May 19 - 23, 2014 WORKSHOP ORGANIZERS: Geoffrey Charles Fox Indiana University - Bloomington, Indiana, USA gcf at indiana.edu Ivan Rodero Rutgers University, New Jersey, USA irodero at rutgers.edu Leandro Navarro Universitat Politècnica de Catalunya, Spain leandro at ac.upc.edu Kyle Chard University of Chicago/Argonne National Laboratory chard at uchicago.edu For information or questions about Conference's paper submission, tutorials, posters, workshops, special sessions, exhibits, demos, panels and forums organization, doctoral colloquium, and any other information about the conference location, registration, paper formatting, etc., please consult the Conference's web site at URL: http://cts2014.cisedu.info/ or contact one of the Conference's organizers or Co-Chairs. ============================================================= Ivan Rodero, Ph.D. Rutgers Discovery Informatics Institute (RDI2) NSF Center for Cloud and Autonomic Computing (CAC) Department of Electrical and Computer Engineering Rutgers, The State University of New Jersey Office: CoRE Bldg, Rm 625 94 Brett Road, Piscataway, NJ 08854-8058 Phone: (732) 993-8837 Fax: (732) 445-0593 Email: irodero at rutgers dot edu WWW: http://nsfcac.rutgers.edu/people/irodero ============================================================= -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From janla at dtu.dk Mon Jan 13 08:32:55 2014 From: janla at dtu.dk (Jan Larsen) Date: Mon, 13 Jan 2014 13:32:55 +0000 Subject: Connectionists: 2nd call for papers - 4th International Workshop on Cognitive Information Processing Message-ID: Dear Connectionists CALL FOR PAPERS http://cip2014.conwiz.dk Schedule Special session proposal: December 1, 2013 Submission of full paper: February 10, 2014 Notification of acceptance: March 15, 2014 Camera-ready paper: April 15, 2014 Advance registration before: April 5, 2014 Following the success of three previous editions, the 4th International Workshop on Cognitive Information Processing (CIP2014) aims at bringing together researchers from the machine learning, cognitive engineering, human-machine interaction, pattern recognition, and statistical signal processing communities in an effort to promote and encourage cross-fertilization of ideas and tools. CIP 2014 will take place May 26-28 2014 at the Bella Sky Hotel, Copenhagen, Denmark. The workshop will feature keynote addresses and technical presentations, oral and poster, all of which will be included in the workshop proceedings. The papers will further be indexed and published in IEEE Xplore. Papers are solicited for the following areas in theory and applications: Theory Areas - Machine learning theory and algorithms - Cognitive principles for machine architectures and training - Adaptive and sparsity-aware learning - Cognitive psychology modeling - Machine understanding and human-machine interaction - Cognitive dynamic systems - Cognitive fuzzy methods and techniques - Cognitive approaches to decision making and game theory - Collaborative sensing - Algorithms and tools for large-scale machine learning Application Areas - Sound and audio systems - Data mining, text mining - Sentiment analysis - Social networks, social science - Medicine, neuroimaging, bioinformatics - Energy and smart grids, advanced manufacturing - Robotics, artificial vision - Engineering systems, industrial processes - Economy and finance - Cognitive communications: modulation, networks, dynamic spectrum management and personalization - Cognitive radar and sonar: detection, estimation, tracking and target identification Special Sessions Special sessions are solicited in the same areas. Please suggest the topic and 3-5 potential speakers to Program Chair Jan Larsen (cip2014-programchair.conwiz.dk) before December 1, 2013. Student Paper Prize A prize of 500.00 EUR will recognize the best contribution whose first author is a graduate student. The award will be presented during the conference and consists of an honorarium (to be divided equally between all student authors of the paper) and a certificate for each such author. The award will be selected by a subcommittee of the program committee based on the quality, originality, and clarity of the submission. Paper submission Prospective authors are invited to submit a double-column paper of up to six pages. Accepted papers will be published in online proceedings available to workshop registrants. The papers will further be indexed and published in IEEE Xplore. For more information and paper submission visit http://cip2014.conwiz.dk/paper_submission.htm You are very welcome to share this call for papers with relevant lists and individuals. 
Best regards, General Chair Professor Lars Kai Hansen, DTU Compute, Technical University of Denmark, Denmark Vice Chair, Professor Søren Holdt Jensen, Department of Electronic Systems, Aalborg University, Denmark Program Chair, Associate Professor Jan Larsen, DTU Compute, Technical University of Denmark, Denmark Sponsored by [sponsor logos] Technical Co-Sponsors [co-sponsor logos] SP Chapter, IEEE Denmark Section -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: cip2014_cfp.pdf Type: application/pdf Size: 311405 bytes Desc: cip2014_cfp.pdf URL: From ivot at ni.tu-berlin.de Mon Jan 13 12:28:58 2014 From: ivot at ni.tu-berlin.de (Ivo Trowitzsch) Date: Mon, 13 Jan 2014 18:28:58 +0100 Subject: Connectionists: Postdoc or PhD Position at TU Berlin Message-ID: <20140113182858.824350zjgd7yw2fu@webmail.tu-berlin.de> MACHINE LEARNING & BIO-INSPIRED AUDITORY PROCESSING The successful candidate will develop and apply signal processing and machine learning techniques in order to detect and annotate acoustic events in auditory scene analysis and quality-of-experience settings. Project and position are part of an international collaborative project which is funded through the EU FET Open scheme (see brief description below). Starting date: Immediate Salary level: E-13 TV-L The position is for a maximum of three years. Candidates should hold a recent PhD degree (Postdoc position) or Diplom/Master degree (PhD position), should have excellent programming skills, and should have good knowledge of the machine learning field. Candidates with research experience in machine learning or its applications to auditory processing will be preferred. Application material (CV, list of publications, abstract of PhD thesis (if applicable), abstract of Diplom/Master thesis, copies of certificates and two letters of reference) should be sent to: Prof. Dr. Klaus Obermayer MAR 5-6, Technische Universitaet Berlin, Marchstrasse 23 10587 Berlin, Germany http://www.ni.tu-berlin.de/ email: oby at cs.tu-berlin.de preferably by email. All applications received before January 26th, 2014, will be given full consideration, but applications will be accepted until the position is filled. TUB seeks to increase the proportion of women and particularly encourages women to apply. Women will be preferred given equal qualification. Disabled persons will be preferred given equal qualification. 
--------------------------------------------------------------------------- Consortium Summary: TWO!EARS replaces current thinking about auditory modelling by a systemic approach in which human listeners are regarded as multi-modal agents that develop their concept of the world by exploratory interaction. The goal of the project is to develop an intelligent, active computational model of auditory perception and experience in a multi-modal context. Our novel approach is based on a structural link from binaural perception to judgment and action, realised by interleaved signal-driven (bottom-up) and hypothesis-driven (top-down) processing within an innovative expert system architecture. The system achieves object formation based on Gestalt principles, meaning assignment, knowledge acquisition and representation, learning, logic-based reasoning and reference-based judgment. More specifically, the system assigns meaning to acoustic events by combining signal- and symbol-based processing in a joint model structure, integrated with proprioceptive and visual percepts. It is therefore able to describe an acoustic scene in much the same way that a human listener can, in terms of the sensations that sounds evoke (e.g. loudness, timbre, spatial extent) and their semantics (e.g. whether the sound is unexpected or a familiar voice). Our system will be implemented on a robotic platform, which will actively parse its physical environment, orientate itself and move its sensors in a humanoid manner. The system has an open architecture, so that it can easily be modified or extended. This is crucial, since the cognitive functions to be modelled are domain and application specific. TWO!EARS will have significant impact on future development of ICT wherever knowledge and control of aural experience is relevant. It will also benefit research in related areas such as biology, medicine and sensory and cognitive psychology. 
From ala at csc.kth.se Mon Jan 13 12:14:42 2014 From: ala at csc.kth.se (Anders Lansner) Date: Mon, 13 Jan 2014 18:14:42 +0100 Subject: Connectionists: Workshop Progress in Brain-Like Computing, February 5-6 2014 Message-ID: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> Workshop on Brain-Like Computing, February 5-6 2014 The exciting prospect of developing brain-like information processing is one of the Deans Forum focus areas. As a means to encourage progress in this research area, a Workshop is arranged February 5th-6th 2014 on the KTH campus in Stockholm. The human brain excels over contemporary computers and robots in processing real-time unstructured information and uncertain data, as well as in controlling a complex mechanical platform with multiple degrees of freedom like the human body. Intense experimental research, complemented by computational and informatics efforts, is gradually increasing our understanding of underlying processes and mechanisms in small animal and mammalian brains and is beginning to shed light on the human brain. We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing. This workshop assembles experts, from the partners and also other leading names in the field, to provide an overview of the state-of-the-art in theoretical, software, and hardware aspects of brain-like computing. 
List of speakers (speaker, affiliation): Giacomo Indiveri, ETH Zürich; Abigail Morrison, Forschungszentrum Jülich; Mark Ritter, IBM Watson Research Center; Guillermo Cecchi, IBM Watson Research Center; Anders Lansner, KTH Royal Institute of Technology; Ahmed Hemani, KTH Royal Institute of Technology; Steve Furber, University of Manchester; Kazuyuki Aihara, University of Tokyo; Karlheinz Meier, Heidelberg University; Andreas Schierwagen, Leipzig University. For signing up to the Workshop please use the registration form found at http://bit.ly/1dkuBgR You need to sign up before January 28th. Web page: http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038 ****************************************** Anders Lansner Professor in Computer Science, Computational biology School of Computer Science and Communication Stockholm University and Royal Institute of Technology (KTH) ala at kth.se, +46-70-2166122 --- This email message contains no viruses or other malicious code because avast! antivirus is active. http://www.avast.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From myllymak at cs.helsinki.fi Tue Jan 14 04:32:31 2014 From: myllymak at cs.helsinki.fi (Petri Myllymaki) Date: Tue, 14 Jan 2014 11:32:31 +0200 Subject: Connectionists: Helsinki ICT network: 15 doctoral student positions, call for applications Message-ID: <52D5042F.8010606@cs.helsinki.fi> The Helsinki Doctoral Education Network in Information and Communications Technology (HICT) is a collaborative doctoral education network hosted jointly by Aalto University and the University of Helsinki, the two leading universities within this area in Finland. The network involves at present over 50 professors and over 200 doctoral students, and the participating units graduate altogether more than 40 new doctors each year. 
The activities of HICT are structured along six research area specific tracks: - Algorithms and machine learning - Creative technologies - Life science informatics - Networks, networked systems and services - Software and service engineering and systems - User centered information technology The quality of research and education in both HICT universities is world-class, and the education is still practically free as there are no tuition fees in the Finnish university system. Helsinki has been ranked as the World's Most Livable City (Monocle, 2011), and is the capital city of Finland, which is in the top 10 of the most highly educated nations in the world (OECD, 2013), and has been selected as the world's best country to live in (Newsweek, 2010). The participating units of HICT currently have 15 fully funded positions available for exceptionally qualified doctoral students. For more information and application instructions, see http://www.hict.fi/. ----------------------------------------------------------------------- From ecai2014 at guarant.cz Tue Jan 14 05:23:53 2014 From: ecai2014 at guarant.cz (ecai2014) Date: Tue, 14 Jan 2014 10:23:53 +0000 Subject: Connectionists: ECAI 2014 - second call for papers Message-ID: ECAI'14 Second Call for Papers The Twenty-first European Conference on Artificial Intelligence 18-22 August 2014, Prague, Czech Republic http://www.ecai2014.org The biennial European Conference on Artificial Intelligence (ECAI) is Europe's premier archival venue for presenting scientific results in AI. Organised by the European Coordinating Committee for AI (ECCAI), the ECAI conference provides an opportunity for researchers to present and hear about the very best research in contemporary AI. 
As well as a full programme of technical papers, ECAI'14 will include the Prestigious Applications of Intelligent Systems conference (PAIS), the Starting AI Researcher Symposium (STAIRS), the International Web Rule Symposium (RuleML) and an extensive programme of workshops, tutorials, and invited speakers. (Separate calls are issued for PAIS, STAIRS, tutorials, and workshops.) ECAI'14 will be held in the beautiful and historic city of Prague, the capital of the Czech Republic. With excellent opportunities for sightseeing and gastronomy, Prague promises to be a wonderful venue for a memorable conference. This call invites the submission of papers and posters for the technical programme of ECAI'14. High-quality original submissions are welcome from all areas of AI; the following list of topics is indicative only. - Agent-based and Multi-agent Systems - Constraints, Satisfiability, and Search - Knowledge Representation, Reasoning, and Logic - Machine Learning and Data Mining - Natural Language Processing - Planning and Scheduling - Robotics, Sensing, and Vision - Uncertainty in AI - Web and Knowledge-based Information Systems - Multidisciplinary Topics Both long (6-page) and short (2-page) papers can be submitted. Whereas long papers should report on substantial research results, short papers are intended for highly promising but possibly more preliminary work. Short papers will be presented in poster form. Rejected long papers will be considered for the short paper track. Submitted papers must be formatted according to ECAI'14 guidelines and submitted electronically through the ECAI'14 paper submission site. Full instructions including formatting guidelines and electronic templates are available on the ECAI'14 website. Paper submission: 1 March 2014 Author feedback: 14-18 April 2014 Notification of acceptance/rejection: 9 May 2014 Camera-ready copy due: 30 May 2014 The proceedings of ECAI'14 will be published by IOS Press. 
Best papers go to AIJ: The authors of the best papers (and runners-up) of ECAI'14 will be invited to submit an extended version of their paper to the Artificial Intelligence Journal. Conference Secretariat GUARANT International Na Pankráci 17 140 21 Prague 4 Tel: +420 284 001 444, Fax: +420 284 001 448 E-mail: ecai2014 at guarant.cz Web: www.ecai2014.org This email is not intended to be spam or to go to anyone who wishes not to receive it. If you do not wish to receive this letter and wish to remove your email address from our database please reply to this message with "Unsubscribe" in the subject line. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpezaris at gmail.com Mon Jan 13 13:16:21 2014 From: jpezaris at gmail.com (John Pezaris) Date: Mon, 13 Jan 2014 13:16:21 -0500 Subject: Connectionists: AREADNE 2014 Call for Abstracts Message-ID: CONFERENCE ANNOUNCEMENT -- and -- CALL FOR ABSTRACTS AREADNE 2014 Research in Encoding and Decoding of Neural Ensembles 25-29 June 2014 Nomikos Conference Center Santorini, Greece http://www.areadne.org info at areadne.org INTRODUCTION One of the fundamental problems in neuroscience today is to understand how the activation of large populations of neurons gives rise to higher order functions of the brain including learning, memory, cognition, perception, action and ultimately conscious awareness. Electrophysiological recordings in behaving animals over the past forty years have revealed considerable information about what the firing patterns of single neurons encode in isolation, but it remains largely a mystery how collections of neurons interact to perform these functions. Recent technological advances have provided a glimpse into the global functioning of the brain. 
These technologies include functional magnetic resonance imaging, optical imaging methods including intrinsic, voltage-sensitive dye, and two-photon imaging, high-density electroencephalography and magnetoencephalography, and multi-microelectrode array electrophysiology. These technologies have expanded our knowledge of brain functioning beyond the single neuron level. At the same time, our understanding of how neuronal ensembles carry information has allowed the development of brain-machine interfaces (BMI) to enhance the capabilities of patients with sensory and motor deficits. Knowledge of how neuronal ensembles encode sensory stimuli has made it possible to develop perceptual BMIs for the hearing and visually impaired. Likewise, research in how neuronal ensembles decode motor intentions has resulted in motor BMIs by which people with severe motor disabilities can control external devices. CONFERENCE MISSION First and foremost, this conference is intended to bring scientific leaders from around the world to present their recent findings on the functioning of neuronal ensembles. Second, the meeting will provide an informal yet spectacular setting on Santorini in which attendees can discuss and share ideas outside of the presentations at the conference center. Third, this conference continues our long term project to form a systems neuroscience research institute within Greece to conduct state-of-the-art research, offer meetings and courses, and provide a center for visiting scientists from around the world to interact with Greek researchers and students. FORMAT AND SPEAKERS The conference will span four days, in morning and early evening sessions. 
Confirmed speakers include experts in the field of multi-neuron experiment, theory, and analysis (in alphabetic order): Ken Britten EJ Chichilnisky Mark Churchland Rosa Cossart Allison Doupe Loren Frank Wulfram Gerstner Sonja Gruen Gabriel Kreiman Gilles Laurent Eve Marder Tom Mrsic-Flogel Ole Paulsen Carl Petersen John Pezaris Alexandre Pouget Jennifer Raymond Michael Roukes Philip Sabes Terry Sanger Andreas Tolias Susumu Tonegawa Brian Wandell CALL FOR ABSTRACTS We are currently soliciting abstracts for poster presentation. Submissions will be accepted electronically, and must be received by 7 March 2014. Automated email acknowledgement of submission will be provided, and manual verification will be made a few days after submission. Notification of acceptance will be provided by 28 March 2014. Please see our on-line Call for Abstracts for additional details and submission templates: http://areadne.org/call-for-abstracts.html ORGANIZING COMMITTEE John Pezaris, Co-Chair Nicholas Hatsopoulos, Co-Chair Yiota Poirazi Andreas Tolias Dora Angelaki Thanos Siapas FOR FURTHER INFORMATION For further information please see the conference web site http://www.areadne.org or send email to info at areadne.org. -- Dr. J. S. Pezaris AREADNE 2014 Co-Chair Massachusetts General Hospital 55 Fruit Street Boston, MA 02114, USA john at areadne.org From t.j.prescott at sheffield.ac.uk Tue Jan 14 05:35:39 2014 From: t.j.prescott at sheffield.ac.uk (Tony Prescott) Date: Tue, 14 Jan 2014 10:35:39 +0000 Subject: Connectionists: Connection Science: Revised Aims & Scope and New Editor-in-Chief Message-ID: Dear Connectionists, The journal *Connection Science* (http://www.tandfonline.com/loi/ccos), one of the longest running journals in field of connectionist research, has a revised aims and scope, a new editor-in-chief, and a substantially revised editorial board. 
In addition to research articles the journal is also interested in publishing special issues, and review, perspective, and roadmap articles in topics relevant to the revised themes. The new aims and scope and full list of current editorial board members is given below, please see the web-site for more details. *Revised Aims and Scope* Connection Science is an inter-disciplinary journal dedicated to exploring the convergence of the analytic and synthetic sciences of mind including psychology, neuroscience, philosophy, linguistics, cognitive science, computational modelling, artificial intelligence, analog and parallel computing, and robotics. A strong focus is on the simulation of living systems and their societies, including humans, and on the development of novel forms of synthetic life such as biomimetic robots. Articles arising from connectionist, probabilistic, dynamical, or evolutionary approaches, and that explore distributed adaptive computation or emergent order, are particularly relevant. The journal also encourages submission of research in systems-level computational neuroscience that seeks to understand sensorimotor integration, perception, cognition, or awareness in brain-like model systems. Manuscripts that employ empirical methodologies or that explore social, theoretical or philosophical issues will be considered where there is a clear relevance to the synthetic approach and/or its societal impacts. Review, road-mapping, or perspective papers are welcome; authors are advised to consult the Editor-in-Chief if considering a submission of this nature. *Editorial Board* Editor-in-Chief Tony J. Prescott, University of Sheffield, UK (http://www.abrg.group.shef.ac.uk/people/tony/) Founding Editor Noel E. Sharkey, University of Sheffield, UK Associate Editors Angelo Cangelosi, University of Plymouth, UK Andy Clark, University of Edinburgh, UK Garrison W. Cottrell, University of California, San Diego, USA Stefano Nolfi, University of Rome, Italy Amanda J. C. 
Sharkey, University of Sheffield, UK Paul Verschure, Universitat Pompeu Fabra, Spain Stefan Wermter, University of Hamburg, Germany Editorial Board Igor Aleksander, Imperial College London, UK Minoru Asada, Osaka University, Japan Christian Balkenius, Lund University, Sweden Jim Bednar, University of Edinburgh, UK Luc Berthouze, University of Sussex, UK Mark Bishop, Goldsmiths, UK Joanna Bryson, University of Bath, UK Ke Chen, University of Manchester, UK Hillel Chiel, Case Western Reserve University, USA Paul Cisek, University of Montreal, Canada Joachim Diederich, University of Melbourne, Australia Peter Dominey, Inserm, Lyon, France Georg Dorffner, University of Vienna, Austria Michael Dyer, University of California, Los Angeles, USA Michael J. Frank, Brown University, USA Philip Husbands, University of Sussex, UK J. A. Scott Kelso, Florida Atlantic University, USA Christoph von der Malsburg, University of Frankfurt, Germany Jonatas Manzolli, Universidade Estadual de Campinas, Brazil Giorgio Metta, Istituto Italiano di Tecnologia, Genoa, Italy Ben Mitchinson, University of Sheffield, UK Tim Pearce, University of Leicester, UK Giovanni Pezzulo, ICST, Padova, Italy Andrew Philippides, University of Sussex, UK Kim Plunkett, University of Oxford, UK Ronan Reilly, University College Dublin, Ireland Anil K. Seth, University of Sussex, UK Murray Shanahan, Imperial College London, UK Narayanan Srinivasan, University of Allahabad, India Luc Steels, Free University of Brussels, Belgium Ron Sun, Rensselaer Polytechnic Institute, New York, USA Jun Tani, KAIST, South Korea Carme Torras, Polytechnic University of Catalonia, Spain Jeremy Wyatt, University of Birmingham, UK Xin Yao, University of Birmingham, UK Tom Ziemke, University of Skövde, Sweden Marco Zorzi, University of Padova, Italy Connection Science is now being published in association with "The Convergent Science Network for Biomimetics and Neurotechnology" 
(CSN: http://csnetwork.eu), a European Union-sponsored international network for research in brain-inspired technologies. Tony J Prescott Connection Science, Editor-in-Chief -------------- next part -------------- An HTML attachment was scrubbed... URL: From kerstin at nld.ds.mpg.de Tue Jan 14 11:21:41 2014 From: kerstin at nld.ds.mpg.de (Kerstin Mosch) Date: Tue, 14 Jan 2014 17:21:41 +0100 Subject: Connectionists: Bernstein Conference 2014 - Call for Workshop proposals Message-ID: <52D56415.30503@nld.ds.mpg.de> *The Bernstein Network invites proposals for the Workshops directly preceding the main Bernstein Conference 2014 in Göttingen.* ************************************************************** Call for Workshop proposals: Workshops: September 2 & 3, 2014 (Main Bernstein Conference: September 3 - 5, 2014) Deadline of proposal submission: March 1, 2014 Notification of acceptance: March 15, 2014 Conference Registration starts: April 15, 2014 Early registration deadline: June 1, 2014 ************************************************************** The Bernstein Conference has become the largest European Conference in Computational Neuroscience and now regularly attracts more than 500 international participants. Since 2013, the Bernstein Conference has included a series of pre-conference workshops. They provide an informal forum to discuss timely research questions and challenges in Computational Neuroscience and related fields. Workshops addressing controversial issues, open problems, and comparisons of competing approaches are encouraged. SCHEDULE: Sept 2, 2014, 13:30 - 17:30 & Sept 3, 9:00 - 12:30. You may apply for a half-day workshop, but preference will be given to full-day workshops (i.e., workshops that bridge both days). Workshop costs: The Bernstein Conference does not provide financial support, but offers 5 free registrations for the main conference per workshop (assigned by the organizers). For further information about the conference, please visit the website.
DETAILS FOR WORKSHOP PROPOSALS: The submission form can be downloaded here. Deadline for submission of workshop proposals: March 1, 2014 We are looking forward to seeing you in Göttingen in September! WORKSHOP PROGRAM COMMITTEE Matthias Bethge (Bernstein Center Tübingen) Upinder Bhalla (NCBS, Bangalore) Carlos Brody (Princeton University) Gustavo Deco (Universitat Pompeu Fabra, Barcelona) Alain Destexhe (CNRS, Gif-sur-Yvette) Gaute Einevoll (Norwegian University of Life Sciences, Aas) Wulfram Gerstner (EPFL, Lausanne) Andreas Herz (Bernstein Center Munich) Christian Machens (Champalimaud Neuroscience Programme, Lisbon) Eero Simoncelli (NYU, New York) Sara Solla (Northwestern University, Evanston) Misha Tsodyks (Weizmann Institute and Columbia University) Mark van Rossum (University of Edinburgh) Fred Wolf (Bernstein Center Göttingen) Florentin Wörgötter (General Conference Chair, Bernstein Focus Neurotechnology, Göttingen) CONFERENCE ASSISTANTS Contact: Kerstin Mosch, Sabine Huhnold at contact at bccn-goettingen.de -- Dr. Kerstin Mosch Bernstein Center for Computational Neuroscience (BCCN) Goettingen Bernstein Focus Neurotechnology (BFNT) Goettingen Max Planck Institute for Dynamics and Self-Organization Am Fassberg 17 D-37077 Goettingen Germany T: +49 (0) 551 5176 - 405 E: kerstin at nld.ds.mpg.de I: www.bccn-goettingen.de I: www.bfnt-goettingen.de
From hugo.larochelle at usherbrooke.ca Mon Jan 13 22:01:31 2014 From: hugo.larochelle at usherbrooke.ca (Hugo Larochelle) Date: Mon, 13 Jan 2014 22:01:31 -0500 Subject: Connectionists: 2nd CfP: EACL-Workshop on Continuous Vector Space Models and their Compositionality (CVSC), 2nd edition Message-ID: <9A74D616-DFAA-4FE6-8AA4-E62A15B0E291@usherbrooke.ca> **************************************************************************************************** Workshop on Continuous Vector Space Models and their Compositionality (2nd edition) Co-located with EACL 2014, Gothenburg, Sweden April 27, 2014 Submission deadline: January 23, 2014 https://sites.google.com/site/cvscworkshop2014 **************************************************************************************************** Second Call for Papers (Apologies for multiple postings) In recent years, there has been a growing interest in algorithms that learn and use continuous representations for words, phrases, or documents in many natural language processing applications. Among many others, influential proposals that illustrate this trend include latent Dirichlet allocation, neural network-based language models, and spectral methods. These approaches are motivated by improving the generalization power of standard discrete models, by dealing with the data sparsity issue, and by efficiently handling a wide context. Despite the success of single-word vector space models, they are limited in that they do not capture compositionality. This prevents them from gaining a deeper understanding of the semantics of longer phrases or sentences.
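A toy illustration of this limitation (not from the workshop; the vectors and names below are invented): the common additive baseline composes a phrase vector by summing its word vectors, so any two orderings of the same words receive identical representations.

```python
# Toy sketch of the compositionality problem for single-word vector
# models: additive composition is order-invariant, so "dog bites man"
# and "man bites dog" get the same phrase vector. The vectors here
# are random stand-ins, not trained embeddings.
import numpy as np

rng = np.random.default_rng(0)
vec = {w: rng.normal(size=8) for w in ["dog", "bites", "man"]}

def compose_additive(words):
    """Additive baseline: phrase vector = sum of word vectors."""
    return np.sum([vec[w] for w in words], axis=0)

a = compose_additive(["dog", "bites", "man"])
b = compose_additive(["man", "bites", "dog"])
assert np.allclose(a, b)  # word order is lost entirely
```

Compositional models of the kind discussed at the workshop (tensor models, recursive neural networks, etc.) aim to produce phrase representations that are sensitive to structure rather than just the bag of words.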
With the growing popularity of these neural and probabilistic methods of language processing, the scope of this second workshop is extended to theoretical and conceptual questions regarding: * their relation to unsupervised distributional representations, * how they can encompass the compositional aspects of formal models of semantics, * the role of linguistic theory in the design and development of these methods. Pertinent questions include: Should phrase representations and word representations be of the same sort? Could different linguistic levels require different modelling approaches? Is compositionality determined by syntax, and if so, how do we learn/define it? Should word representations be fixed and obtained distributionally, or should the encoding be variable? Should word representations be task-specific, or should they be general? In this workshop, we invite submissions of papers on continuous vector space models for natural language processing. Topics of interest include, but are not limited to: * learning algorithms for continuous vector space models, * their compositionality, * their use in NLP applications, * spectral learning for NLP, * neural networks for NLP, * phrase, sentence, and document-level distributional representations, * tensor models, * distributed semantic representations, * the role of syntax in compositional models, * formal and distributional semantic models. INVITED SPEAKERS The workshop will showcase presentations from two invited speakers: * Geoffrey Zweig (Microsoft Research) * Ivan Titov (University of Amsterdam, Netherlands) SUBMISSION INFORMATION Authors should submit a full paper of up to 8 pages in electronic, PDF format, with up to 2 additional pages for references. The reported research should be substantially original. The papers will be presented orally or as posters. All submissions must be in PDF format and must follow the EACL 2014 formatting requirements (http://www.eacl2014.org/files/eacl-2014-styles.zip).
Reviewing will be double-blind, and thus no author information should be included in the papers; self-reference should be avoided as well. Submissions must be made through the Softconf website set up for this workshop: https://www.softconf.com/eacl2014/CVSC/ Accepted papers will appear in the workshop proceedings, where no distinction will be made between papers presented orally or as posters. IMPORTANT DATES 23 January 2014: Submission deadline 20 February 2014: Notification of acceptance 3 March 2014: Camera-ready deadline 27 April 2014: Workshop ORGANIZERS Alexandre Allauzen (LIMSI-CNRS/Université Paris-Sud, France) Raffaella Bernardi (University of Trento, Italy) Edward Grefenstette (University of Oxford, UK) Hugo Larochelle (Université de Sherbrooke, Canada) Christopher Manning (Stanford University, USA) Scott Wen-tau Yih (Microsoft Research, USA) PROGRAM COMMITTEE Nicholas Asher (IRIT-Toulouse) Marco Baroni (University of Trento) Yoshua Bengio (Université de Montréal) Gemma Boleda (University of Texas) Antoine Bordes (Université de Technologie de Compiègne) Johan Bos (University of Groningen) Léon Bottou (Microsoft Research) Xavier Carreras (Universitat Politècnica de Catalunya) Lucas Champollion (New York University) Stephen Clark (University of Cambridge) Shay Cohen (Columbia University) Ido Dagan (Bar Ilan University) Ronan Collobert (IDIAP Research Institute, Switzerland) Pino Di Fabbrizio (Amazon) Georgiana Dinu (University of Trento) Kevin Duh (Nara Institute of Science and Technology) Dean Foster (University of Pennsylvania) Alessandro Lenci (University of Pisa) Louise McNally (Universitat Pompeu Fabra) Fabio Massimo Zanzotto (Università degli Studi di Roma) Mirella Lapata (University of Edinburgh) Andriy Mnih (Gatsby Computational Neuroscience Unit) Larry Moss (Indiana University) Diarmuid Ó Séaghdha (University of Cambridge) Sebastian Padó (Universität Stuttgart) Martha Palmer (University of Colorado) John Platt (Microsoft Research) Maarten de Rijke (University of Amsterdam) Mehrnoosh Sadrzadeh (University of London) Mark Steedman (University of Edinburgh) Chung-chieh Shan (Indiana University) Peter Turney (NRC) Jason Weston (Google) Guillaume Wisniewski (LIMSI-CNRS/Université Paris-Sud) From mpavone at dmi.unict.it Tue Jan 14 06:28:46 2014 From: mpavone at dmi.unict.it (Mario Pavone) Date: Tue, 14 Jan 2014 12:28:46 +0100 Subject: Connectionists: International Synthetic and Systems Biology Summer School - Biology meets Engineering and Computer Science, Taormina - Sicily, Italy June 15-19, 2014 Message-ID: <20140114122846.Horde.TjmcBuph4B9S1R9umPiDeiA@mbox.dmi.unict.it> ______________________________________________________ Call for Participation (apologies for multiple copies) ______________________________________________________ Synthetic and Systems Biology Summer School: "Biology meets Engineering and Computer Science" Taormina, Italy, June 15-19, 2014 W: http://www.taosciences.it/ssbss2014/ E: ssbss2014 at dmi.unict.it *Application Deadline: February 15, 2014* Recent advances in DNA synthesis have increased our ability to build biological systems. Synthetic Biology aims at streamlining the design and synthesis of robust and predictable biological systems using engineering design principles. Designing biological systems requires a deep understanding of how genes and proteins are organized and interact in living cells: Systems Biology aims at elucidating the cellular organization at gene, protein, cell, tissue, organ and network level using computational and biochemical methods. The Synthetic and Systems Biology Summer School (SSBSS) is a full-immersion course on cutting-edge advances in systems and synthetic biology with lectures delivered by world-renowned experts.
The school provides a stimulating environment for doctoral students, early career researchers and industry leaders. Participants will also have the chance to present their results (oral presentations or posters) and to interact with their peers. *List of Speakers* - Uri Alon, Weizmann Institute of Science, Israel - Jef Boeke, Johns Hopkins University, USA - Jason Chin, MRC Laboratory of Molecular Biology, UK - Virginia Cornish, Columbia University, USA - Angela DePace, Harvard University, USA - Paul Freemont, Imperial College London, UK - Farren Isaacs, Yale University, USA - Tanja Kortemme, University of California San Francisco, USA - Giuseppe Nicosia, University of Catania, Italy - Sven Panke, ETH, Switzerland - Rahul Sarpeshkar, MIT, USA - Giovanni Stracquadanio, Johns Hopkins University, USA - Ron Weiss, MIT, USA Other speakers will be announced soon. *School Directors* - Jef Boeke, Johns Hopkins University, USA - Giuseppe Nicosia, University of Catania, Italy - Mario Pavone, University of Catania, Italy - Giovanni Stracquadanio, Johns Hopkins University, USA *Short Talk and Poster Submission* Students may submit a research abstract for presentation. The school directors will review the abstracts and recommend them for poster or short oral presentation. Abstracts should be submitted by February 15, 2014. The abstracts will be published in the electronic handout material of the summer school. W: http://www.taosciences.it/ssbss2014/ E: ssbss2014 at dmi.unict.it -- Dr. Mario Pavone (PhD) Assistant Professor Department of Mathematics and Computer Science University of Catania V.le A.
Doria 6 - 95125 Catania, Italy tel: 0039 095 7383038 fax: 0039 095 330094 Email: mpavone at dmi.unict.it http://www.dmi.unict.it/mpavone/ =========================================================================== 12th European Conference on Artificial Life - ECAL 2013 September 2-6, 2013 - Taormina, Italy http://mitpress.mit.edu/books/advances-artificial-life-ecal-2013 =========================================================================== From mlsp at NEURO.KULEUVEN.BE Tue Jan 14 04:29:48 2014 From: mlsp at NEURO.KULEUVEN.BE (2014 IEEE International Workshop on Machine Learning for Signal Processing) Date: Tue, 14 Jan 2014 10:29:48 +0100 Subject: Connectionists: 2nd call for papers - 4th International Workshop on Cognitive Information Processing Message-ID: <52D5038C.3020002@neuro.kuleuven.be> CIP2014 CALL FOR PAPERS http://cip2014.conwiz.dk Schedule Special session proposal: December 1, 2013 Submission of full paper: February 10, 2014 Notification of acceptance: March 15, 2014 Camera-ready paper: April 15, 2014 Advance registration before: April 5, 2014 Following the success of three previous editions, the 4th International Workshop on Cognitive Information Processing (CIP2014) aims at bringing together researchers from the machine learning, cognitive engineering, human-machine interaction, pattern recognition, and statistical signal processing communities in an effort to promote and encourage cross-fertilization of ideas and tools. CIP 2014 will take place May 26-28, 2014 at the Bella Sky Hotel, Copenhagen, Denmark. The workshop will feature keynote addresses and technical presentations, oral and poster, all of which will be included in the workshop proceedings. The papers will further be indexed and published in IEEE Xplore. Papers are solicited for the following areas in theory and applications: Theory Areas * Machine learning theory and algorithms * Cognitive principles for machine architectures and training * Adaptive and sparsity-aware learning * Cognitive psychology modeling * Machine understanding and human-machine interaction * Cognitive dynamic systems * Cognitive fuzzy methods and techniques * Cognitive approaches to decision making and game theory * Collaborative sensing * Algorithms and Tools for Large-Scale Machine Learning Application Areas * Sound and audio systems * Data mining, text mining * Sentiment analysis * Social networks, social science * Medicine, neuroimaging, bioinformatics * Energy and smart grids, advanced manufacturing * Robotics, artificial vision * Engineering systems, industrial processes * Economy and finance * Cognitive communications: Modulation, networks, dynamic spectrum management and personalization * Cognitive radar and sonar: Detection, estimation, tracking and target identification Special Sessions Special sessions are solicited in the same areas. Please suggest the topic and 3-5 potential speakers to Program Chair Jan Larsen (cip2014-programchair.conwiz.dk) before December 1, 2013. Student Paper Prize A prize of 500.00 EUR will recognize the best contribution whose first author is a graduate student. The award will be presented during the conference and consists of an honorarium (to be divided equally between all student authors of the paper) and a certificate for each such author. The award will be selected by a subcommittee of the program committee, based on the quality, originality, and clarity of the submission. Paper submission Prospective authors are invited to submit a double-column paper of up to six pages. Accepted papers will be published in online proceedings available to workshop registrants. The papers will further be indexed and published in IEEE Xplore. For more information and paper submission, visit http://cip2014.conwiz.dk/paper_submission.htm You are very welcome to share this call for papers with relevant lists and individuals.
Best regards, General Chair, Professor Lars Kai Hansen, DTU Compute, Technical University of Denmark, Denmark Vice Chair, Professor Søren Holdt Jensen, Department of Electronic Systems, Aalborg University, Denmark Program Chair, Associate Professor Jan Larsen, DTU Compute, Technical University of Denmark, Denmark Sponsored by CoSound, Danish Sound, and IAPR Technical Co-Sponsors: IEEE Signal Processing Society, SP Chapter of the IEEE Denmark Section From lila at csd.uwo.ca Tue Jan 14 14:23:10 2014 From: lila at csd.uwo.ca (Lila Kari) Date: Tue, 14 Jan 2014 14:23:10 -0500 (EST) Subject: Connectionists: UCNC 2014 - Neural Computation Contributions Message-ID: UCNC 2014 - CALL FOR PAPERS The 13th International Conference on Unconventional Computation & Natural Computation University of Western Ontario, London, Ontario, Canada July 14-18, 2014 http://www.csd.uwo.ca/ucnc2014 http://www.facebook.com/UCNC2014 Submission deadline: March 7, 2014 OVERVIEW The International Conference on Unconventional Computation and Natural Computation has been a meeting where scientists with different backgrounds, yet sharing a common interest in novel forms of computation, human-designed computation inspired by nature, and the computational aspects of processes taking place in nature, present their latest results. Papers and poster presentations are sought in all areas, theoretical or experimental, that relate to unconventional computation and natural computation.
Typical, but not exclusive, topics are: * Cellular automata, Neural computation, Evolutionary computation, Swarm intelligence, Ant algorithms, Artificial immune systems, Artificial life, Membrane computing, Amorphous computing; * Molecular (DNA) computing, Quantum computing, Optical computing, Hypercomputation - relativistic computation, Chaos computing, Physarum computing, Computation in hyperbolic spaces, Collision-based computing, Computations beyond the Turing model; * Computational Systems Biology, Genetic networks, Protein-protein networks, Transport networks, Synthetic biology, Cellular (in vivo) computing. INSTRUCTIONS FOR AUTHORS: See details on the above website which also contains information about important dates and committees. INVITED SPEAKERS Yaakov Benenson (ETH Zurich) - Synthetic Biology and Biomedicine Charles Bennett (IBM Research) - Quantum Information and Computation Hod Lipson (Cornell University) - Artificial Life, Evolutionary Robotics Nadrian Seeman (New York University) - Nanocomputation by DNA Self-assembly INVITED TUTORIALS Anne Condon (University of British Columbia, Canada) - Programming with Biomolecules Ming Li (University of Waterloo, Canada) - Approximating Semantics WORKSHOPS Computational Neuroscience - Organizer Mark Daley (University of Western Ontario, Canada) DNA Computing by Self-Assembly - Organizer Matthew Patitz (University of Arkansas, USA) Unconventional Computation in Europe - Organizers Martyn Amos (Manchester Metropolitan University, UK), Susan Stepney (University of York, UK) From barak at cs.nuim.ie Wed Jan 15 06:55:36 2014 From: barak at cs.nuim.ie (Barak A. 
Pearlmutter) Date: Wed, 15 Jan 2014 11:55:36 +0000 Subject: Connectionists: Postdocs / Research Programmer for Compositional Learning via Generalized Automatic Differentiation Message-ID: <877ga1bgsn.fsf@cs.nuim.ie> Postdocs / Research Programmer for Compositional Learning via Generalized Automatic Differentiation The goal of this project is to make a qualitative improvement in our ability to write sophisticated numeric code, by giving numeric programmers access to _fast_, _robust_, _general_, _accurate_ differentiation operators. To be technical: we are adding exact first-class derivative calculation operators (Automatic Differentiation or AD) to the lambda calculus, and embodying the combination in a production-quality fast system suitable for numeric computing in general, and compositional machine learning methods in particular. Our research prototype compilers generate object code competitive with the fastest current systems, which are based on FORTRAN. And the combined expressive power of first-class AD operators and functional programming allows very succinct code for many machine learning algorithms, as well as for some algorithms in computer vision and signal processing. Specific sub-projects include: compiler and numeric programming environment construction; writing, simplifying, and generalising machine learning and other numeric algorithms; and associated Type Theory/Lambda Calculus/PLT/Real Computation issues. TO THE PROGRAMMING LANGUAGE COMMUNITY, we seek to contribute a way to make numeric software faster, more robust, and easier to write. TO THE MACHINE LEARNING COMMUNITY, in addition to making it easier to write efficient numeric codes, we seek to contribute a system that embodies "compositionality", in that gradient optimisation can be automatically and efficiently performed on systems themselves consisting of many components, even when such components may internally take derivatives or perform optimisation.
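The flavour of a first-class derivative operator can be sketched with forward-mode AD over dual numbers. This is an illustration only, not the project's system: `Dual` and `derivative` are invented names, and this toy omits the tagging machinery a production implementation needs.

```python
# Minimal forward-mode AD via dual numbers: a sketch of a first-class
# derivative operator as a higher-order function (illustration only).
import math

class Dual:
    """A value paired with its derivative component."""
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.eps + o.eps)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (uv)' = u v' + u' v
        return Dual(self.val * o.val,
                    self.val * o.eps + self.eps * o.val)
    __rmul__ = __mul__

def sin(x):  # chain rule for a primitive (x assumed Dual)
    return Dual(math.sin(x.val), math.cos(x.val) * x.eps)

def derivative(f):
    """First-class derivative operator: maps f to f'."""
    return lambda x: f(Dual(x, 1.0)).eps

# d/dx [x * sin(x)] = sin(x) + x*cos(x)
df = derivative(lambda x: x * sin(x))
assert abs(df(2.0) - (math.sin(2.0) + 2.0 * math.cos(2.0))) < 1e-12

# Because `derivative` is an ordinary function, it nests:
d2f = derivative(derivative(lambda x: x * x * x))  # second derivative, 6x
assert abs(d2f(2.0) - 12.0) < 1e-12
```

Note that naively nesting untagged dual numbers like this can go wrong in more complex expressions ("perturbation confusion"); handling such nesting correctly and efficiently is one of the things a production-quality system of the kind described here must get right.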
(Examples of this include, say, optimisation of the rules of a multi-player game to cause the players' actions to satisfy some desiderata, where the players themselves optimise their own strategies using simple models of their opponents which they optimise according to their opponents' observed behaviour.) To this end, we are seeking to fill three positions (postdoctoral or research programmer, or in exceptional cases graduate students) with interest and experience in at least one of: programming language theory, automatic differentiation, machine learning, numerics, mathematical logic. Informal announcement: http://www.bcl.hamilton.ie/~barak/ad-fp-positions.html Formal job postings on http://humanresources.nuim.ie/vacancies.shtml, in particular http://humanresources.nuim.ie/documents/JobSpecPostdoc2_Final.pdf and http://humanresources.nuim.ie/documents/JobSpecProgrammer_Final.pdf Inquiries to: -- Barak A. Pearlmutter Hamilton Institute & Dept Computer Science NUI Maynooth, Co. Kildare, Ireland From sima at cs.cas.cz Wed Jan 15 09:23:17 2014 From: sima at cs.cas.cz (Jiri Sima) Date: Wed, 15 Jan 2014 15:23:17 +0100 Subject: Connectionists: AVAST Fellowships in machine learning and big data at the Czech Academy of Sciences, Prague Message-ID: <52D699D5.7010600@cs.cas.cz> Two one-year fellowships for outstanding and enthusiastic researchers, PhD holders, from all over the world, open to cooperation with some of the scientific teams of the ICS and with AVAST specialists. Please see the announcement at http://www.ustavinformatiky.cz/docs/postdoc_avast.pdf for more information. From malin.sandstrom at incf.org Wed Jan 15 04:21:24 2014 From: malin.sandstrom at incf.org (Malin Sandström) Date: Wed, 15 Jan 2014 10:21:24 +0100 Subject: Connectionists: Support for short neuroinformatics courses in 2015 - call for Letters of Intent Message-ID: [with apologies for cross-posting] Training in Neuroinformatics 2015 -
Call for Letters of Intent The International Neuroinformatics Coordinating Facility (INCF) is committed to supporting training in neuroinformatics. INCF now invites applications to run short courses lasting between two and seven days on any aspect of neuroinformatics, at a level suitable for PhD students and beyond. The courses should be held in English and open to any researcher interested in neuroinformatics. International participation of faculty and students should be encouraged. The INCF offers financial support to successful proposals, which could cover "hands-on training" experiences, venue, travel and accommodation costs for a small number of speakers and participants. Financial support from other sources is encouraged. One condition of funding is that the course material be made available publicly, through INCF. Courses to be funded in this Call are envisaged to take place in 2015. Our aim is to support the development of new courses or the re-design of existing courses to incorporate/extend neuroinformatics coverage. Funding is available for one year only. The Letter of Intent must include information about:
- Topic
- Organizers
- Target audience
- Justification for why INCF should fund this course
- Course description (up to 250 words)
- Dissemination plan
- Speakers (a preliminary list would be helpful)
- Course duration
- Proposed dates and location
- Estimated number of participants
- Draft budget, including the amount to be sought from the INCF
The closing date for Letters of Intent is **7th March 2014**. Letters should be sent in PDF format by email to training-proposals at incf.org. The authors of the most promising Letters of Intent will be invited by end of March 2014 to submit a full proposal, with a deadline of 30th June 2014. Informal enquiries should be directed to Malin Sandström (malin.sandstrom at incf.org) on behalf of the INCF Training Committee.
Printable PDF: http://incf.org/programs/training-committee/courses/call-for-letters-of-intent-2014 Best regards, Malin -- Malin Sandström, PhD Community Engagement Officer malin.sandstrom at incf.org International Neuroinformatics Coordinating Facility Karolinska Institutet Nobels väg 15 A SE-171 77 Stockholm Sweden http://www.incf.org From stephane.canu at insa-rouen.fr Wed Jan 15 02:29:03 2014 From: stephane.canu at insa-rouen.fr (Stéphane Canu) Date: Wed, 15 Jan 2014 08:29:03 +0100 Subject: Connectionists: Postdoc: Gesture Recognition in Normandy (France) Message-ID: <52D638BF.9030700@insa-rouen.fr> GESTURE RECOGNITION POSTDOCTORAL RESEARCHER The following postdoctoral position is available to work on sequence learning for multimodal gesture recognition with Stéphane Canu (INSA Rouen). Location: Normandy - Rouen (France) and ITEKUBE, Caen (France) Starting date: beginning of 2014 Duration: 2 years (1+1) Net salary: ranges between ~1,800 Euros and 2,400 Euros per month, commensurate with experience. PROJECT SUMMARY: This is a joint project between Professor Stéphane Canu of INSA Rouen and David Ulrich of ITEKUBE. ITEKUBE has considerable experience building large (46-inch) multi-touch tables and wants to improve its HMI (human-machine interface). By integrating machine learning, it will be possible to expand gesture recognition from the simplest gestures to the most complex, in a multimodal environment (for instance, by using multi-touch inputs and Kinect at the same time), leading to a more intuitive and keyboard/mouse-free use of a computer. The successful candidate will be undertaking novel research in multimodal signal learning methods, real-time gesture representation and recognition, data fusion, and intention prediction, to be included in the next generation of multi-touch table HMIs, among others. EXPERIENCE: Postdoctoral researcher (new Ph.D.
or more experienced), extensive knowledge of support vector machines and time series modeling. Prior work with gesture recognition and/or signal recognition would be most appropriate. SOFTWARE DEVELOPMENT ENVIRONMENTS: Candidates should be familiar with MATLAB; the ability to program in C++ on PCs running Windows 7/8 would also be desirable. Interested individuals should send a CV, representative publications, a statement of research interests, and three letters of reference to scanu at insa-rouen.fr and david at itekube.com -- Stephane Canu LITIS - INSA Rouen - Dep ASI asi.insa-rouen.fr/~scanu +33 2 32 95 98 44 From cookie at ucsd.edu Wed Jan 15 13:35:05 2014 From: cookie at ucsd.edu (Santamaria, Cookie) Date: Wed, 15 Jan 2014 18:35:05 +0000 Subject: Connectionists: Nengo Summer School, University of Waterloo - Application Deadline 02/15/2014 Message-ID: Hello! [All details about this school can be found online at http://www.nengo.ca/summerschool] The Centre for Theoretical Neuroscience at the University of Waterloo is inviting applications for an in-depth, two-week summer school that will teach participants how to use the Nengo simulation package to build state-of-the-art cognitive and neural models. Nengo has been used to build what is currently the world's largest functional brain model, Spaun [1], and provides users with a versatile and powerful environment for simulating cognitive and neural systems. We welcome applications from all interested graduate students, research associates, postdocs, professors, and industry professionals. No specific training in the use of modelling software is required, but we encourage applications from active researchers with a relevant background in psychology, neuroscience, cognitive science, or a related field. [1] Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y., Rasmussen, D. (2012).
A large-scale model of the functioning brain. Science, Vol. 338, no. 6111, pp. 1202-1205. DOI: 10.1126/science.1225266. [http://nengo.ca/publications/spaunsciencepaper] ***Application Deadline: February 15, 2014*** Format Participants are encouraged to bring their own ideas for projects, which may focus on testing hypotheses, modelling neural or cognitive data, implementing specific behavioural functions with neurons, expanding past models, or providing a proof-of-concept of various neural mechanisms. Projects can be focused on software, hardware, or a combination of both. Amongst other things, participants will have the opportunity to: - build perceptual, motor, and cognitive models with spiking neurons - model anatomical, electrophysiological, cognitive, and behavioural data - use a variety of single cell models within a large-scale model - integrate machine learning methods into biologically oriented models - use Nengo with your favorite simulator, e.g. Brian, NEST, Neuron, etc. - interface Nengo with a variety of neuromorphic hardware - interface Nengo with cameras and robotic systems of various kinds - implement modern nonlinear control methods in neural models - and much more! Hands-on tutorials, work on individual or group projects, and talks from invited faculty members will make up the bulk of day-to-day activities. There will be a weekend break on June 14-15, and fun activities scheduled for evenings throughout! Date and Location: June 8th to June 21st, 2014 at the University of Waterloo, Ontario, Canada. Applications: Please visit http://www.nengo.ca/summerschool, where you can find more information regarding costs, travel, and lodging, along with an application form listing required materials. Questions about the summer school and application process can be directed to Peter Blouw (pblouw at uwaterloo.ca). We look forward to hearing from you!
From axel.hutt at inria.fr Wed Jan 15 15:41:33 2014 From: axel.hutt at inria.fr (Axel Hutt) Date: Wed, 15 Jan 2014 21:41:33 +0100 (CET) Subject: Connectionists: Permanent Research position (CR1) in Computational Neuroscience In-Reply-To: <188118815.2404243.1389818195926.JavaMail.root@inria.fr> Message-ID: <224464135.2404878.1389818493558.JavaMail.root@inria.fr> The French Research Institute INRIA (http://www.inria.fr/en/) is opening a permanent "Young Experienced Scientist" (CR1) position in Nancy, France. The Inria team Neurosys (http://neurosys.loria.fr/) is looking for a young scientist working in computational neuroscience with a deep interest in multi-scale modeling and/or multivariate data analysis in medical science. Candidates are eligible if they finished their PhD more than four years ago. Inria strongly encourages female candidates to apply for the position. The application deadline is February 17, 2014. For more detailed information on the position and the application details, please see http://www.inria.fr/en/institute/recruitment/offers/young-graduate-scientist/competitive-selection-2014 or send an email to Axel Hutt (axel.hutt at inria.fr). From k.wong-lin at ulster.ac.uk Thu Jan 16 12:43:08 2014 From: k.wong-lin at ulster.ac.uk (Wong-Lin, Kongfatt) Date: Thu, 16 Jan 2014 17:43:08 +0000 Subject: Connectionists: Computational Neuroscience Ph.D. Studentships at the ISRC, University of Ulster Message-ID: The Intelligent Systems Research Centre (ISRC) at the University of Ulster, UK, invites applications for 3-year Ph.D. studentships. The application process for the Ph.D. studentships is open, with a closing date for applications of 28 February 2014. A list of studentships offered for the academic year 2014-2015 and the projects' details can be found at: http://www.compeng.ulster.ac.uk/rgs/showPhDProposals.php?ri=3.
The Computational Neuroscience Research Team (CNRT) at the ISRC focuses on both fundamental brain and behavioural sciences and their applications, including clinical neuroscience (http://isrc.ulster.ac.uk/computational-neuroscience-research-team/computational-neuroscience-research-team.html). A functional brain mapping facility has recently been established at the ISRC. The available Ph.D. projects for 2014 under the CNRT are:

1. Astrocyte: A New Player in Brain Computation. Supervisors: Liam McDaid, KongFatt Wong-Lin, Jim Harkin. Collaboration: Prof. Sergey Kasparov, University of Bristol, UK
2. BITTS: A New Generation of Brain-Like Supercomputers. Supervisors: Liam McDaid, Jim Harkin, Liam Maguire
3. BrainWave: A Radically New Learning Paradigm for Neural Networks. Supervisors: Liam McDaid, Jim Harkin. Collaboration: Alfonso Araque, University of Minnesota, USA
4. Computational Modelling of Electrical Brain Stimulation for Effective Treatment in Neuropsychiatric Disorder. Supervisors: KongFatt Wong-Lin, Girijesh Prasad. Collaboration: Prof. Declan McLoughlin, Trinity College Dublin, Ireland
5. Computational Neuromodulation: Neural Circuit Modelling. Supervisors: KongFatt Wong-Lin, John J. Wade, Ammar Belatreche. Collaboration: Prof. Kae Nakamura, Kansai Medical University, Japan
6. Computational Neuropsychiatry: Modelling the Biology of Depression. Supervisors: KongFatt Wong-Lin, T. Martin McGinnity
7. Investigating Alzheimer's Disease using Large-Scale Models of Mammalian Thalamocortical Networks. Supervisors: Liam Maguire, Damien Coyle

Other projects related to neural computation can also be found at the above website, under other ISRC research teams. All studentships, which are highly competitive, are expected to start in September 2014 and include tuition fees and an annual maintenance allowance for EU and non-EU students.
All applicants should hold a first or upper second class honours degree in an appropriate subject, such as computer science, electronics, physics, mathematics or neuroscience. Applicants must be highly motivated and willing to pursue research and develop skills across disciplines. If you wish to apply for a studentship, please follow the instructions at: http://www.compeng.ulster.ac.uk/rgs/. Unless indicated otherwise, successful students will be based primarily at the ISRC, with opportunities to interact with other related ISRC research teams (e.g. Bio-Inspired and Neuro-Engineering, Brain-Computer Interfacing and Assistive Technologies, and Cognitive Robotics) and with research groups from the (5* research rating) Biomedical Sciences Research Institute and the recent Centre for Stratified Medicine, as well as opportunities for spin-outs. The ISRC is situated in the city of Derry~Londonderry, which received the City of Culture 2013 award. Please note that some studentships (DEL Awards) have restrictions on residence eligibility - see the guidance notes for details. For further information, please contact the ISRC Research Director, Professor Martin McGinnity, at tm.mcginnity at ulster.ac.uk and/or the corresponding project supervisors.

----
Dr. KongFatt Wong-Lin
Computational Neuroscience Research Team
Intelligent Systems Research Centre
University of Ulster
From viktor.jirsa at univ-amu.fr Thu Jan 16 00:26:09 2014 From: viktor.jirsa at univ-amu.fr (Viktor Jirsa) Date: Thu, 16 Jan 2014 06:26:09 +0100 Subject: Connectionists: Postdoc in Marseille on The Virtual Brain and Ageing Message-ID: <31A1FD06-5C29-4EE9-B300-4465F49EE951@univ-amu.fr>

Dear colleagues, thank you for posting this announcement. Best, Viktor

Postdoctoral position in modeling and EEG data analyses of the aging Virtual Brain at Aix-Marseille University (Laboratory V. Jirsa), France

Title: Brain Dynamics in Older Adults
Keywords: The Virtual Brain, nonlinear dynamics, brain connectivity, EEG/MEG, resting state, modeling

Context: We seek two postdoctoral candidates who will join the Institut de Neurosciences des Systèmes (INS) on the medical campus of La Timone in the south of Marseille. Both postdocs will work at the interface of brain modeling and EEG data analyses (with complementary emphases) in the context of the Coord-Age project, a collaborative project on neuro-behavioral aging.
The project consortium comprises the Theoretical Neuroscience Group (V. Jirsa), the Behavior Group (J.J. Temprado), and other experts in biomechanics and motor control at the Institute of Movement Sciences. The postdocs will work under the supervision of Viktor Jirsa and J.J. Temprado, together with other members of the team (PhD students and postdocs working on brain and behavior, and on neuromuscular synergies). The successful candidates will study the dynamic mechanisms underlying the emergence of brain function from brain network dynamics in older adults. The objective of this research is to understand the effects of couplings (connectivity, time delays) on brain network dynamics during aging. The first aim is to build a large-scale brain network model using The Virtual Brain neuroinformatics platform (http://www.thevirtualbrain.org). The second aim is to analyze EEG data of young and older adults and validate the virtual brain model against empirical data. The approach is multidisciplinary, traverses multiple scales of organization, and exploits experimental, theoretical and clinical techniques towards the understanding of brain dynamics. The results of this work will aid in the development of a larger conceptual framework on ageing and frailty, integrating other contributions from brain imaging and modeling.

Profile:
- PhD in Physics, Mathematics, Engineering, or Neurosciences
- Expertise in nonlinear dynamics and neural modeling is helpful
- Programming skills for data analysis and modeling (Python, Matlab, LabVIEW, Brain Vision, etc.) are helpful
- Fluency in spoken and written English is obligatory

Contract:
- 12 months, renewable based on performance, starting as of January 2014
- Salary: ~1900 Euros/month depending on experience
- Location: Campus of La Timone, Institut de Neurosciences des Systèmes, Aix-Marseille University, France.
Supervisor, contact and how to apply:
Viktor Jirsa
Institut de Neurosciences des Systèmes
Aix-Marseille University
viktor.jirsa at univ-amu.fr
http://ins.medecine.univmed.fr/

Email a CV, a statement of interests, and at least two (2) letters of reference to Viktor Jirsa. Application deadline: the position is open until filled.

From lvdmaaten at gmail.com Thu Jan 16 15:18:22 2014 From: lvdmaaten at gmail.com (Laurens van der Maaten) Date: Thu, 16 Jan 2014 21:18:22 +0100 Subject: Connectionists: Fully funded PhD Studentship in Machine Learning at TU Delft Message-ID:

PhD Student in Machine Learning at Delft University of Technology, The Netherlands

Delft University of Technology has an opening for a fully funded PhD student in machine learning, to be supervised by Dr. Laurens van der Maaten. The research project is funded by the NWO Top grant "Learning from Corruption". It focuses on the development of new machine-learning techniques, and on applications of these techniques to domains such as computer vision and natural language understanding. In particular, the research will focus on new approaches to the regularisation of machine-learning models using an approach called marginalised corrupted features. Your task in the project will be to develop and investigate new machine-learning techniques and to apply them to areas such as computer vision, natural language understanding, or bioinformatics.

Requirements
You have obtained an MSc or equivalent degree, or expect to obtain one very soon. Interested applicants should have a background and interest in some or all of the following subjects, or a related discipline: machine learning, pattern recognition, and computer vision.
The successful applicant will have: good programming skills; curiosity and good analytical skills; the ability to work in a multidisciplinary team; good oral and written communication skills; and proficiency in English.

Conditions of employment
TU Delft offers an attractive benefits package, including a flexible work week, free high-speed Internet access from home (with a contract of two years or longer), and the option of assembling a customised compensation and benefits package (the 'IKA'). Salary and benefits are in accordance with the Collective Labour Agreement for Dutch Universities. As a PhD candidate you will be enrolled in the TU Delft Graduate School, which provides an inspiring research environment; an excellent team of supervisors, academic staff and a mentor; and a Doctoral Education Programme aimed at developing your transferable, discipline-related and research skills. Please visit www.phd.tudelft.nl for more information. For more information about this position, please contact Dr. Laurens van der Maaten, phone: +31 (0)15-2788434, e-mail: l.j.p.vandermaaten at tudelft.nl. To apply, please e-mail a detailed CV along with a letter of application by 17 February 2014 to Mrs. C.J.C. Kohlmann-van Noord, Hr-eemcs at tudelft.nl. When applying for this position, please refer to vacancy number EWI2014-02. Contract type: temporary, 4 years.

Organization
Delft University of Technology (TU Delft) is a multifaceted institution offering education and carrying out research in the technical sciences at an internationally recognised level. Education, research and design are strongly oriented towards applicability. TU Delft develops technologies for future generations, focusing on sustainability, safety and economic vitality. At TU Delft you will work in an environment where technical sciences and society converge. TU Delft comprises eight faculties, unique laboratories, research institutes and schools.
The Department of Intelligent Systems conducts research on the processing and interpretation of data to enable man and machine to deal with the increasing volume and complexity of data and communication. One of our research groups is the Pattern Recognition and Bioinformatics Group (prb.tudelft.nl). Within the Pattern Recognition and Bioinformatics section, we perform research in the areas of machine learning, pattern recognition, computer vision, image analysis, and bioinformatics. Our focus is on analysing and interpreting multidimensional data using machine-learning approaches to solve problems in various domains, including art history, bioinformatics, social psychology, consumer science, and medicine. We teach machine learning, computer vision, and bioinformatics courses in the Computer Science undergraduate and graduate programmes.

Additional information: Laurens van der Maaten, +31 (0)15-2788434, l.j.p.vandermaaten at tudelft.nl

From outreach at cnsorg.org Thu Jan 16 17:07:43 2014 From: outreach at cnsorg.org (Outreach) Date: Thu, 16 Jan 2014 15:07:43 -0700 Subject: Connectionists: CNS-2014: abstract submission and registration now open Message-ID:

Organization for Computational Neurosciences (OCNS) 23rd Annual Meeting, Québec City, Canada, July 26-31, 2014

Registration and abstract submission are now open. The deadline for abstract submission is February 16. Note that one of the authors has to register as sponsoring author for the main meeting before abstract submission is possible. If the abstract is not accepted for presentation, the registration fee will be refunded. The main meeting (July 27-29) will be preceded by a day of tutorials (July 26) and followed by two days of workshops (July 30-31).
Invited Keynote Speakers:
Chris Eliasmith, University of Waterloo, Canada
Christof Koch, Allen Institute for Brain Science, USA
Henry Markram, EPFL Lausanne, Switzerland
Frances Skinner, TWRI/UHN, University of Toronto, Canada

For up-to-date conference information, please visit http://www.cnsorg.org/cns-2014-quebec-city

----------------------------------------
OCNS is the international member-based society for computational neuroscientists. Become a member to be eligible for travel awards and more. Visit our website for more information: http://www.cnsorg.org

From hava at cs.umass.edu Thu Jan 16 18:43:42 2014 From: hava at cs.umass.edu (Hava Siegelmann) Date: Thu, 16 Jan 2014 18:43:42 -0500 Subject: Connectionists: Post-Doc in Computational Modeling: U of Massachusetts Message-ID: <52D86EAE.9000104@cs.umass.edu>

Postdoctoral Position: Computational Neuroscience, Biological and Beyond Turing Computation. University of Massachusetts, Amherst, in collaboration with EPFL, Switzerland. Start date: ASAP -- this is a funded position.

An immediate opening is available for a Postdoctoral Research Associate in the BINDS laboratory of Prof. Hava Siegelmann. The research focuses on adaptive memory in neural structures (blocks), complexity theory in computer science, superior brain-inspired computation, dynamical systems and complex systems, and perhaps some novel robotics as a demonstration of computational power. The position will run for one year and may be extended based on performance.

BACKGROUND:
REQUIRED: strong analytical skills (math or physics, nonlinear dynamics, theoretical computer science in the field of higher-class complexity); excellent programming skills; experience in machine learning techniques and neural networks.
DESIRED: understanding of human/animal learning; some knowledge of human experiments, brain structure and development; complex systems; dynamical systems.
Great communication skills, both oral and written, are a must.
DUTIES: The postdoc will play a prominent role in a project with Siegelmann. Applicants should send a CV or resume, a statement of research interests, and the names and contact information of two references to Prof. Hava Siegelmann at hava at cs.umass.edu

-- Hava T. Siegelmann, Ph.D., Professor, Director, BINDS Lab (Biologically Inspired Neural Dynamical Systems), Dept. of Computer Science, Program of Neuroscience and Behavior, University of Massachusetts Amherst, Amherst, MA 01003. Phone - Grant Administrator -- Michele Roberts: 413-545-4389. Fax: 413-545-1249. LAB WEBSITE: http://binds.cs.umass.edu/

From dwang at cse.ohio-state.edu Thu Jan 16 21:46:26 2014 From: dwang at cse.ohio-state.edu (DeLiang Wang) Date: Thu, 16 Jan 2014 21:46:26 -0500 Subject: Connectionists: NEURAL NETWORKS, January 2014 Message-ID: <52D89982.1090508@cse.ohio-state.edu>

Neural Networks - Volume 49, January 2014
http://www.journals.elsevier.com/neural-networks

Editorial: Faster Turnaround (Kenji Doya, DeLiang Wang)

Letters:
- Exponential passivity of memristive neural networks with time delays (Ailong Wu, Zhigang Zeng)
- Noise tolerance in a Neocognitron-like network (Angelo Cardoso, Andreas Wichert)

Articles:
- Modeling the BOLD correlates of competitive neural dynamics (James Bonaiuto, Michael A. Arbib)
- A model for the receptive field of retinal ganglion cells (Myoung Won Cho, M.Y. Choi)
- An adaptive recurrent neural-network controller using a stabilization matrix and predictive inputs to solve a tracking problem under disturbances (Michael Fairbank, Shuhui Li, Xingang Fu, Eduardo Alonso, Donald Wunsch)
- A trace ratio maximization approach to multiple kernel-based dimensionality reduction (Wenhao Jiang, Fu-lai Chung)
- Multitasking attractor networks with neuronal threshold noise (Elena Agliari, Adriano Barra, Andrea Galluzzi, Marco Isopi)
- Biologically relevant neural network architectures for support vector machines (Magnus Jandel)
- Generalization ability of fractional polynomial models (Yunwen Lei, Lixin Ding, Yiming Ding)
- Projective synchronization for fractional neural networks (Juan Yu, Cheng Hu, Haijun Jiang, Xiaolin Fan)
- Synchronization stability and firing transitions in two types of class I neuronal networks with short-term plasticity (Honghui Zhang, Qingyun Wang, Xiaoyan He, Guanrong Chen)

Book Review: Introduction to Neural Engineering for Motor Rehabilitation (Cara E. Stepp)

From olivier.faugeras at inria.fr Fri Jan 17 09:33:51 2014 From: olivier.faugeras at inria.fr (Olivier Faugeras) Date: Fri, 17 Jan 2014 15:33:51 +0100 Subject: Connectionists: Positions in computational neuroscience at Inria Message-ID: <52D93F4F.8090608@inria.fr>

The NeuroMathComp Group at the Inria Sophia-Antipolis Méditerranée Research Center strongly encourages highly qualified candidates in the field of computational/mathematical neuroscience to apply to Inria's 2014 recruiting campaign. The group is associated with the mathematics laboratory "Laboratoire Jean-Alexandre Dieudonné" (LJAD) at Nice University. It includes as staff members Bruno Cessac, Pascal Chossat, Olivier Faugeras (Group Leader), James Inglis, Pierre Kornprobst, Romain Veltz, and about 10-15 postdocs and doctoral students.
Other researchers working on computational/mathematical neuroscience include Mireille Bossy, Denis Talay and Etienne Tanré from the TOSCA Group. We are located in Sophia-Antipolis, a thriving industrial research park that resembles Silicon Valley in California, both in its intellectual and technological excitement and in its quality of life (sea, snow and sun). For more information, see: http://www-sop.inria.fr/neuromathcomp/public/index.shtml and http://math.unice.fr

There are three entry levels at Inria: CR2 for early-career candidates (about 2-5 years since the PhD), CR1 (5-10 years, with a strong publication record), and DR2 (international superstars). The selection process is highly rigorous and takes place at the regional level for CR2 and at the national level for CR1 and DR2. There are two open CR2 positions at the Inria Sophia-Antipolis Méditerranée Research Center, which has this year flagged computational neuroscience as one of its five hiring priorities. There are two open CR1 positions and eight open DR2 positions across the nine Inria Research Centers. These are permanent positions with no nationality requirements. The NeuroMathComp group is specifically looking for candidates at the CR2 and CR1 levels with a strong interest and experience in neuroscience modeling (computational and/or mathematical), training in mathematics or theoretical physics, and excellent programming skills. Candidates are strongly advised to contact the faculty members of the group and to provide a CV, significant published papers and a preliminary research project. The deadline for submitting applications is February 17, 2014. The application, about 15-20 pages long, includes a summary of past research and future projects, and can be in English. The short-listing decision is due by mid-April. Short-listed candidates will be invited for an oral presentation by mid-May. Final hiring decisions will be made in June.
Information about the call for applications at the CR1 level: http://www.inria.fr/en/institute/recruitment/offers/young-experienced-scientists/competitive-selection-2014
Information about the call for applications at the CR2 level: http://www.inria.fr/en/institute/recruitment/offers/young-graduate-scientist/competitive-selection-2014

---------------------------------------------------------------------------------------
Olivier Faugeras, Professor
Inria NeuroMathComp Team
Co-editor-in-chief, Journal of Mathematical Neuroscience
http://www-sop.inria.fr/members/Olivier.Faugeras/index.en.html
Email: olivier.faugeras at inria.fr
Tel: +334 92 38 78 31; Sec: +334 92 38 78 30
---------------------------------------------------------------------------------------

From csutton at inf.ed.ac.uk Fri Jan 17 08:31:14 2014 From: csutton at inf.ed.ac.uk (Charles Sutton) Date: Fri, 17 Jan 2014 13:31:14 +0000 Subject: Connectionists: Faculty position in Statistical Machine Translation Message-ID: <5683558A-28C2-4A4E-A082-7EC9CDDD9F21@inf.ed.ac.uk>

University of Edinburgh, School of Informatics
READERSHIP (ASSOCIATE PROFESSOR) IN STATISTICAL MACHINE TRANSLATION

Applications are invited for the position of Reader (approximately equivalent to Associate Professor) in Statistical Machine Translation in the School of Informatics at the University of Edinburgh. Informatics at Edinburgh has one of the largest research groupings in natural language processing, computational linguistics, machine learning, spoken language processing, and cognitive science. The successful candidate, who will have a well-established international research reputation, will undertake original research and lead a research team in Statistical Machine Translation, in addition to teaching undergraduate, masters and doctoral level courses within the School of Informatics.
Application deadline: 17 February 2014
Further details, and to apply: https://www.vacancies.ed.ac.uk/pls/corehrrecruit/erq_jobspec_version_4.jobspec?p_id=024457
Informal queries: Prof Johanna Moore (j.moore at ed.ac.uk) or Prof Steve Renals (s.renals at ed.ac.uk)

-- Charles Sutton * csutton at inf.ed.ac.uk * http://homepages.inf.ed.ac.uk/csutton Lecturer * School of Informatics * University of Edinburgh

From rvegaru at usal.es Fri Jan 17 06:26:15 2014 From: rvegaru at usal.es (Roberto Vega Ruiz) Date: Fri, 17 Jan 2014 12:26:15 +0100 Subject: Connectionists: HAIS 2014: 4th CFP Message-ID: <374571A0-044C-4C4A-BB57-540A726772D4@usal.es>

HAIS 2014: 4th CFP

* We apologize if you receive this CFP more than once. * PLEASE CIRCULATE this CFP among your colleagues and students.

The 9th International Conference on Hybrid Artificial Intelligence Systems (HAIS'14)
June 11th-13th, 2014, Salamanca, Spain
http://hais14.usal.es

HAIS'14 focuses on the combination of symbolic and sub-symbolic techniques to construct more robust and reliable problem-solving models. Hybrid intelligent systems are becoming popular due to their capabilities in handling many real-world complex problems involving imprecision, uncertainty, vagueness and high dimensionality. They provide us with the opportunity to use both our knowledge and raw data to solve problems in a more interesting and promising way. The HAIS series of conferences provides an interesting opportunity to present and discuss the latest theoretical advances and real-world applications in this multidisciplinary research field.

----------------------------------------------------------------
*** PLENARY SPEAKERS:
Prof. Amparo Alonso Betanzos, University of Coruña / President of AEPIA (Spain)
Prof. Sung-Bae Cho, Yonsei University (Korea)

--------------------------------------------------------------------
*** COMMITTEES ***
*Program Chairs*
Emilio Corchado - University of Salamanca, Spain
Jeng-Shyang Pan - National Kaohsiung University of Applied Sciences, Taiwan
André C.P.L.F. de Carvalho - University of São Paulo, Brazil
Marios Polycarpou - University of Cyprus, Cyprus
Ajith Abraham - Machine Intelligence Research Labs, Europe
Michal Wozniak - Wroclaw University of Technology, Poland

----------------------------------------------------------------
*** SUBMISSION SYSTEM IS OPEN ***
Please submit your contributions through the following link: https://www.easychair.org/conferences/?conf=hais14

----------------------------------------------------------------
*** JOURNAL SPECIAL ISSUES:
Authors of the best papers presented at HAIS 2014 will be invited to submit an extended version to highly reputed international journals. Up to now:
1. Neurocomputing. Impact Factor: 1.634
2. Logic Journal of the IGPL. Impact Factor: 1.136

----------------------------------------------------------------
Technical co-sponsors:
IEEE Spain Section http://www.ieeespain.org
IEEE Systems, Man, and Cybernetics Spanish Chapter http://www.ieee-smc.es/main/index.shtml
Machine Intelligence Research Labs (MIR Labs) http://www.mirlabs.org/
Spanish Association for Artificial Intelligence (AEPIA) http://www.aepia.org

--------------------------------------------------------------------
*** PAPER SUBMISSION AND PROCEEDINGS (TENTATIVE)
HAIS'14 proceedings will be published by Springer in its series of Lecture Notes in Artificial Intelligence - LNAI (part of its prestigious Lecture Notes in Computer Science - LNCS series). All submissions will be refereed by experts in the field based on originality, significance, quality and clarity.
Every paper submitted to HAIS'14 will be reviewed by at least 3 members of the Program Committee. All accepted papers must be presented by one of the authors, who must register for the conference and pay the fee. Papers must be prepared according to the LNCS-LNAI style template (http://www.springer.de/comp/lncs/authors.html) (templates are available at http://hais14.usal.es/?q=node/8) and must be no more than ten (10) pages long, including figures and bibliography. Additional pages (over 10 pages) will be charged at 150 Euro each.

--------------------------------------------------------------------
*** TOPICS
Topics are encouraged, but not limited to, the combination of at least two of the following areas in the field of Hybrid Intelligent Systems:
- Fusion of soft computing and hard computing
- Evolutionary computation
- Hybrid machine learning techniques for time series analysis
- Visualization techniques
- Ensemble techniques
- Data mining and decision support systems
- Intelligent agent-based systems (complex systems), cognitive and reactive distributed AI systems
- Internet modelling
- Human interface
- Case-based reasoning
- Chance discovery
- Applications in security, prediction, control, robotics, image and speech signal processing, time series analysis, food industry, biology and medicine, business and management, knowledge management, artificial societies, chemicals, pharmaceuticals, geographic information systems, materials and environment engineering, and so on.
--------------------------------------------------------------------
*** IMPORTANT DATES ***
Paper submission deadline: 10th February 2014
Acceptance notification: 10th March 2014
Submission of (tentatively accepted) revised papers: 15th March 2014
Final version submission: 31st March 2014
Conference dates: 11th-13th June 2014
Payment deadline: 5th April 2014

--------------------------------------------------------------------
** SOCIAL MEDIA
Twitter: https://twitter.com/hais2014 @hais2014
Facebook: https://www.facebook.com/hais2014

--------------------------------------------------------------------
*** CONTACT ***
Dr. Emilio Corchado - University of Salamanca (Spain) (Chair)
BISITE Research Group http://bisite.usal.es/
GICAP Research Group http://gicap.ubu.es/
Email: escorchado at usal.es Phone: +34 630736755

For more information about HAIS'14, please refer to the HAIS'14 website: http://hais14.usal.es

* We apologize if you receive this CFP more than once. * PLEASE CIRCULATE this CFP among your colleagues and students. HAIS ORGANIZATION

From nando at cs.ubc.ca Fri Jan 17 14:08:49 2014 From: nando at cs.ubc.ca (Nando de Freitas) Date: Fri, 17 Jan 2014 19:08:49 +0000 Subject: Connectionists: Funded PhD positions at Oxford/Warwick Message-ID:

The Statistics Departments of Oxford and Warwick, supported by the EPSRC, will run a new Centre for Doctoral Training in the theory, methods and applications of Statistical Science for 21st-century data-intensive environments and large-scale models. An exciting training programme is delivered by world-leading researchers in statistical methods and computational statistics / statistical machine learning from Oxford and Warwick. There are funded opportunities for students to work with our leading industrial partners and to travel to an international summer placement in some of the strongest statistics groups in the USA, Europe and Asia.
Applications are currently being accepted for the first intake of students, who will start in October 2014. The deadline is approaching: http://www.ox.ac.uk/admissions/postgraduate_courses/course_guide/statistical_science.html http://www2.warwick.ac.uk/fac/sci/statistics/oxwasp Enquiries can be sent to oxwasp at warwick.ac.uk and oxwasp at stats.ox.ac.uk. Students will have, or be expected to obtain, a First Class Honours degree or Master's degree in a subject that contains strong mathematics training, and should have had significant exposure to statistics or machine learning. Students must be motivated to do research in statistics for high-dimensional data analysis or high-dimensional modelling, and to become future leaders in statistical methodology and computational statistics. Thanks, Nando de Freitas

From marcel.van.gerven at gmail.com Sat Jan 18 04:57:59 2014 From: marcel.van.gerven at gmail.com (Marcel van Gerven) Date: Sat, 18 Jan 2014 10:57:59 +0100 Subject: Connectionists: PRNI second call for papers In-Reply-To: References: Message-ID:

SECOND CALL FOR PAPERS ** apologies for cross posting **

4th International Workshop on Pattern Recognition in NeuroImaging (PRNI 2014)
June 4-6, 2014, Max Planck Institute for Intelligent Systems, Tübingen, Germany
Website: http://www.prni.org

Multivariate analysis of neuroimaging data has gained ground very rapidly over the past few years, leading to impressive results in cognitive and clinical neuroscience. Pattern recognition and machine learning conferences regularly feature a neuroimaging workshop, while neuroscientific meetings dedicate sessions to new approaches to neural data analysis. Thus, a rich two-way flow has been established between disciplines.
It is the goal of the 4th International Workshop on Pattern Recognition in NeuroImaging to continue facilitating the exchange of ideas between these scientific communities, with a particular interest in new approaches to the interpretation of neural data driven by new developments in pattern recognition and machine learning.

IMPORTANT DATES
Paper submission deadline: 7th of March, 2014 **submission website is now open**
Acceptance notification: 4th of April, 2014
Workshop: June 4-6, 2014

TOPICS OF INTEREST
PRNI welcomes original papers on multivariate analysis of neuroimaging data, using invasive and non-invasive imaging modalities, including but not limited to the following topics:
* Learning from neuroimaging data
- Classification algorithms for brain-state decoding
- Optimisation and regularisation
- Bayesian analysis of neuroimaging data
- Connectivity and causal inference
- Combination of different data modalities
- Efficient algorithms for large-scale data analysis
* Interpretability of models and results
- High-dimensional data visualisation
- Multivariate and multiple hypothesis testing
- Summarisation/presentation of inference results
* Applications
- Disease diagnosis and prognosis
- Real-time decoding of brain states
- Analysis of resting-state and task-based data

KEYNOTE SPEAKERS
John-Dylan Haynes
Klaus-Robert Müller
Russ Poldrack

SUBMISSION GUIDELINES
Authors should prepare full papers with a maximum length of 4 pages (double-column, IEEE style, PDF) for review. Reviews will be double-blind, i.e. submissions have to be anonymized.

PROCEEDINGS
Proceedings will be published by Conference Publishing Services in electronic format. They will be submitted for inclusion in the IEEE Xplore and IEEE CS Digital Library online repositories, and submitted for indexing in IET INSPEC, EI Compendex (Elsevier), Thomson ISI, and others.

BEST STUDENT PAPER AWARD
A small number of papers will be selected for the Best Student Paper Award.
To be eligible the paper's first author must be a student, and the student must agree to present the paper at the workshop. Awardees will receive a travel allowance. VENUE The workshop will be held on the campus of the Max Planck Institute in Tübingen, Germany. Tübingen is a picturesque medieval university town, and can easily be reached by public transportation from Stuttgart airport (STR). Accommodation is available on or close to campus. A pre-conference barbecue will be held on June 3, 2014. ORGANIZATION General Chair: Moritz Grosse-Wentrup (MPI for Intelligent Systems, Tübingen, Germany) Program Chairs: Marcel van Gerven (Donders Institute for Brain, Cognition and Behaviour, Netherlands) & Nikolaos Koutsouleris (LMU Munich, Germany) Steering committee: Jonas Richiardi, Dimitri Van De Ville, Seong-Whan Lee, Yuki Kamitani, Janaina Mourao-Miranda, Christos Davatzikos, Gaël Varoquaux ENDORSEMENTS PRNI 2014 is an official satellite meeting of the Organization for Human Brain Mapping and an endorsed event of the Medical Image Computing and Computer Assisted Intervention Society. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From antoine.liutkus at inria.fr Mon Jan 20 07:39:11 2014 From: antoine.liutkus at inria.fr (Antoine Liutkus) Date: Mon, 20 Jan 2014 13:39:11 +0100 Subject: Connectionists: Postdoctoral position: deep neural networks for source separation and noise-robust ASR Message-ID: <52DD18EF.5040505@inria.fr> (Apologies for any cross-posting - Please forward to anyone that may be interested) POSTDOCTORAL POSITION *SUBJECT*: Deep neural networks for source separation and noise-robust ASR *LAB*: PAROLE team, Inria Nancy, France *SUPERVISORS*: Antoine Liutkus (antoine.liutkus at inria.fr) and Emmanuel Vincent (emmanuel.vincent at inria.fr) *START*: between November 2014 and January 2015 *DURATION*: 12 to 16 months *TO APPLY*: apply online before June 10 at http://www.inria.fr/en/institute/recruitment/offers/post-doctoral-research-fellowships/post-doctoral-research-fellowships/campaign-2014/%28view%29/details.html?nPostingTargetID=13790 (earlier application is preferred) Inria is the biggest European public research institute dedicated to computer science. The PAROLE team in INRIA Nancy, France, gathers 20+ speech scientists with a growing focus on speech enhancement and noise-robust speech recognition, exemplified by the organization of the CHiME Challenge [1] and ISCA's Robust Speech Processing SIG [2]. The boom of speech interfaces for handheld devices requires automatic speech recognition (ASR) systems to deal with a wide variety of acoustic conditions. Recent research has shown that Deep Neural Networks (DNNs) are very promising for this purpose. Most approaches now focus on clean, single-source conditions [3]. Despite a few attempts to employ DNNs for source separation [4,5,6], conventional source separation techniques such as [7] still outperform DNNs in real-world conditions involving multiple noise sources [8]. The proposed postdoctoral position aims to overcome this gap by incorporating the benefits of conventional source separation techniques into DNNs. 
This includes for instance the ability to exploit multichannel data and different characteristics for separation and for ASR. Performance will be assessed over readily available real-world noisy speech corpora such as CHiME [1]. Prospective candidates should have defended a PhD in 2013 or defend a PhD in 2014 in the area of speech processing, machine learning, signal processing or applied statistics. Proficiency in programming with Matlab, Python or C++ is necessary. Experience with DNN/ASR software (Theano, Kaldi) would be an asset. [1] http://spandh.dcs.shef.ac.uk/chime_challenge/ [2] https://wiki.inria.fr/rosp/ [3] G. Hinton, L. Deng, D. Yu, G. Dahl, A.-R. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, and B. Kingsbury, "Deep neural networks for acoustic modeling in speech recognition", IEEE Signal Processing Magazine, 2012. [4] S.J. Rennie, P. Fousek, and P.L. Dognin, "Factorial Hidden Restricted Boltzmann Machines for noise robust speech recognition", in Proc. ICASSP, 2012. [5] A.L. Maas, T.M. O'Neil, A.Y. Hannun, and A.Y. Ng, "Recurrent neural network feature enhancement: The 2nd CHiME Challenge", in Proc. CHiME, 2013. [6] Y. Wang and D. Wang, "Towards scaling up classification-based speech separation", IEEE Transactions on Audio, Speech and Language Processing, 2013. [7] A. Ozerov, E. Vincent, and F. Bimbot, "A general flexible framework for the handling of prior information in audio source separation", IEEE Transactions on Audio, Speech and Language Processing, 2012. [8] J. Barker, E. Vincent, N. Ma, H. Christensen, and P. Green, "The PASCAL CHiME Speech Separation and Recognition Challenge", Computer Speech and Language, 2013. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From irodero at cac.rutgers.edu Mon Jan 20 00:44:07 2014 From: irodero at cac.rutgers.edu (Ivan Rodero) Date: Mon, 20 Jan 2014 00:44:07 -0500 Subject: Connectionists: CSWC 2014 - Final Call for Papers In-Reply-To: <87202061-93AC-4066-89E7-77976097AFAB@rutgers.edu> References: <51EC7783-DCAD-4364-B1DC-576C726BAA31@rutgers.edu> <6F339279-23CD-4553-95DF-B1F906F948E3@rutgers.edu> <0957F75F-5AB9-4144-B62D-87D225B34E42@rutgers.edu> <22CF346C-98EC-4D5B-9500-D0B9FE60551A@rutgers.edu> <99D17D7E-34C0-47B7-B641-C756E67D169A@rutgers.edu> <87202061-93AC-4066-89E7-77976097AFAB@rutgers.edu> Message-ID: <79403393-1690-4DCB-855A-1EE231D5ED2B@rutgers.edu> -------------------------------------------------------------------------------------------------- Please accept our apologies if you receive multiple copies of this CFP! -------------------------------------------------------------------------------------------------- ============================================================ 4th International Workshop on Cloud Services and Web 2.0 Technologies for Collaboration (CSWC 2014) As part of The 2014 International Conference on Collaboration Technologies and Systems (CTS 2014) http://cts2014.cisedu.info/2-conference/workshops/workshop-01-cswc May 19-23, 2014 The Commons Hotel, Minneapolis, Minnesota, USA In Cooperation with ACM, IEEE, and IFIP (Pending) Extended Submission Deadline: January 30, 2014 Submissions may be full papers, short papers, poster papers, or posters ============================================================ SCOPE AND OBJECTIVES Recent technology directions have highlighted the remarkable commercial developments in the cloud arena, combined with a suite of Web 2.0 technologies around mashups, gadgets and social networking. These have clear relevance to collaborative systems and sensor grids, whether through the evolution of grid architectures to clouds or through fresh approaches such as applying social networking to building virtual organizations. 
This workshop explores this area and invites exploratory papers as well as mature research contributions. Possible CSWC topics include but are not limited to: - Performance and experience using Clouds and Web 2.0 to support sensors or collaboration - Relation of Grids and Clouds in their application to sensors or collaboration - Data driven applications and use of emerging technologies like Mashups, gadgets and Hadoop - Clouds for Social Networking and Virtual Organizations - Security, Privacy, and Trust issues in Clouds and Web 2.0 - MapReduce - Virtualization - Interoperability and Standardization - e-Science - Architectures - Services and Applications - Models for Managing Clouds of Clouds and including Inter-networking - Mobile clouds -- Architectures and Performance SUBMISSION INSTRUCTIONS: You are invited to submit original and unpublished research works on above and other topics related to Cloud services and Web 2.0 technologies for Collaboration, distributed sensing and related topics. Submitted papers must not have been published or simultaneously submitted elsewhere. Submission should include a cover page with authors' names, affiliation addresses, fax numbers, phone numbers, and email addresses. Please, indicate clearly the corresponding author and include up to 6 keywords from the above list of topics and an abstract of no more than 400 words. The full manuscript should be at most 8 pages using the two-column IEEE format. Additional pages will be charged an additional fee. Short papers (up to 4 pages), poster papers and posters (please refer to http://cts2014.cisedu.info/home/posters for poster submission details) will also be accepted. Please include page numbers on all preliminary submissions to make it easier for reviewers to provide helpful comments. Submit a PDF copy of your full manuscript via email to the workshop organizers at https://www.easychair.org/conferences/?conf=cswc2014. 
Only PDF files will be accepted, sent by email to the workshop organizers. Each paper will receive a minimum of three reviews. Papers will be selected based on their originality, relevance, contributions, technical clarity, and presentation. Submission implies the willingness of at least one of the authors to register and present the paper, if accepted. Authors of accepted papers must guarantee that their papers will be registered and presented at the workshop. Accepted papers will be published in the Conference proceedings. Instructions for final manuscript format and requirements will be posted on the CTS 2014 Conference web site. It is our intent to have the proceedings formally published in hard and soft copies and be available at the time of the conference. The proceedings are projected to be included in the IEEE Digital Library and indexed by all major indexing services accordingly. If you have any questions about paper submission or the workshop, please contact the workshop organizers. IMPORTANT DATES Extended Paper Submissions: ---------------------------- January 30, 2014 Acceptance Notification: ----------------------------------- February 17, 2014 Camera Ready Papers and Registration Due: ------ March 07, 2014 Conference Dates: ------------------------------------------ May 19 - 23, 2014 WORKSHOP ORGANIZERS: Geoffrey Charles Fox School of Informatics and Computing Indiana University - Bloomington, Indiana, USA gcf at indiana.edu Ivan Rodero Department of Electrical and Computer Engineering Rutgers University, New Jersey, USA irodero at rutgers.edu Kyle Chard Argonne National Lab & University of Chicago Illinois, USA kyle at ci.uchicago.edu Leandro Navarro Moldes Department of Computer Architecture Universitat Politècnica de Catalunya, Spain leandro at ac.upc.edu TECHNICAL PROGRAM COMMITTEE: All submitted papers will be reviewed by the Workshop's technical program committee members following similar criteria used in CTS 2014. 
Simon Caton, Karlsruhe Institute of Technology, Germany Ernesto Damiani, Università degli Studi di Milano, Italy Javier Diaz-Montes, Rutgers University, New Jersey, USA Hangwei Qian, VMware Inc., California, USA Judy Qiu, Indiana University, Indiana, USA Ioan Raicu, Illinois Institute of Technology, Illinois, USA Omer Rana, Cardiff University, U.K. Michela Taufer, University of Delaware, Delaware, USA Luis Vaquero, Hewlett-Packard Labs, U.K. Wenjun Wu, BeiHang University, China Choonhan Youn, SDSC, University of California-San Diego, California, USA Jia Zhang, Carnegie Mellon University - Silicon Valley, California, USA (To be completed...) For information or questions about the Conference's paper submission, tutorials, posters, workshops, special sessions, exhibits, demos, panels and forums organization, doctoral colloquium, and any other information about the conference location, registration, paper formatting, etc., please consult the Conference's web site at URL: http://cts2014.cisedu.info/ or contact one of the Conference's organizers or Co-Chairs. ============================================================= Ivan Rodero, Ph.D. Rutgers Discovery Informatics Institute (RDI2) NSF Center for Cloud and Autonomic Computing (CAC) Department of Electrical and Computer Engineering Rutgers, The State University of New Jersey Office: CoRE Bldg, Rm 625 94 Brett Road, Piscataway, NJ 08854-8058 Phone: (732) 993-8837 Fax: (732) 445-0593 Email: irodero at rutgers dot edu WWW: http://nsfcac.rutgers.edu/people/irodero ============================================================= -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From magg at informatik.uni-hamburg.de Mon Jan 20 06:38:30 2014 From: magg at informatik.uni-hamburg.de (Sven Magg) Date: Mon, 20 Jan 2014 12:38:30 +0100 Subject: Connectionists: 2nd CFP: ICANN 2014 - 24th Annual Conference on Artificial Neural Networks Message-ID: <52DD0AB6.6000007@informatik.uni-hamburg.de> 2nd Call for Papers =================================================================== ICANN 2014: 24th Annual Conference on Artificial Neural Networks 15 - 19 September 2014, University of Hamburg, Germany http://icann2014.org/ =================================================================== The International Conference on Artificial Neural Networks (ICANN) is the annual flagship conference of the European Neural Network Society (ENNS). In 2014 the University of Hamburg will organize the 24th ICANN Conference from 15th to 19th September 2014 in Hamburg, Germany. KEYNOTE SPEAKERS: Christopher M. Bishop (Microsoft Research, Cambridge, UK) Yann LeCun (New York University, NY, USA) Kevin Gurney (University of Sheffield, Sheffield, UK) Barbara Hammer (Bielefeld University, Bielefeld, Germany) Jun Tani (KAIST, Daejeon, Republic of Korea) Paul Verschure (Universitat Pompeu Fabra, Barcelona, Spain) ORGANIZATION: General Chair: Stefan Wermter (Hamburg, Germany) Program co-Chairs: Alessandro E.P. Villa (Lausanne, Switzerland, ENNS President) Wlodzislaw Duch (Torun, Poland & Singapore, ENNS Past-President) Petia Koprinkova-Hristova (Sofia, Bulgaria) Günther Palm (Ulm, Germany) Cornelius Weber (Hamburg, Germany) Timo Honkela (Helsinki, Finland) Local Organizing Committee Chairs: Sven Magg, Johannes Bauer, Jorge Chacon, Stefan Heinrich, Doreen Jirak, Katja Koesters, Erik Strahl VENUE: Hamburg is the second-largest city in Germany, home to over 1.8 million people. Situated at the river Elbe, the port of Hamburg is the second-largest port in Europe. The University of Hamburg is the largest institution for research and education in the north of Germany. 
The venue of the conference is the ESA building of the University of Hamburg, situated at Edmund-Siemers-Allee near the city centre and easily reachable from Dammtor Railway Station. Hamburg Airport can be reached easily via public transport. CONFERENCE TOPICS: ICANN 2014 will feature the main tracks Brain Inspired Computing and Machine Learning research, with strong cross-disciplinary interactions and applications. All research fields dealing with Neural Networks will be present at the conference. A non-exhaustive list of topics includes: Brain Inspired Computing: Cognitive models, Computational Neuroscience, Self-organization, Reinforcement Learning, Neural Control and Planning, Hybrid Neural-Symbolic Architectures, Neural Dynamics, Recurrent Networks, Deep Learning. Machine Learning: Neural Network Theory, Neural Network Models, Graphical Models, Bayesian Networks, Kernel Methods, Generative Models, Information Theoretic Learning, Reinforcement Learning, Relational Learning, Dynamical Models. Neural Applications for: Intelligent Robotics, Neurorobotics, Language Processing, Image Processing, Sensor Fusion, Pattern Recognition, Data Mining, Neural Agents, Brain-Computer Interaction, Neural Hardware, Evolutionary Neural Networks. PAPERS: Papers of maximum 8 pages length will be refereed to international standards by at least three referees. Accepted papers of contributing authors will be published in the Springer-Verlag Lecture Notes in Computer Science (LNCS) series. Submission of papers will be online. More details are available on the conference web site. DEMONSTRATIONS: ICANN 2014 will host demonstrations to showcase research and applications of neural networks. Demonstrations are self-contained, i.e. independent of any presented talk or poster. For a demonstration proposal, we request a 1-page description of your demonstration and its features. Later, you will communicate which resources (space / duration / projector / internet / etc.) you require. 
Decisions about demonstrations will be made within two weeks after submission deadline. A full conference registration is required for the demonstration. We invite you to submit proposals for Demonstrations to: ICANN2014 at informatik.uni-hamburg.de TRAVEL AWARDS: As in previous years, the European Neural Network Society (ENNS) will offer at least five student travel awards of 400 Euro each for students presenting papers. In addition, the selected students will be able to register to the conference for free and will become ENNS members for the next year (2015). The deadline for sending the Travel Grant application (that includes a Letter of Interest to the PC chairs, Studentship Proof and detailed CV of the candidate) is the 14th of April, 2014. The award will be sent to the student by 28th April and paid during the conference. More details can be found on the website. DEADLINES: Submission of full papers: * 17 February 2014 * Notification of acceptance: 7 April 2014 Submission of Demonstration proposals: 21 April 2014 Camera-ready paper and registration: 5 May 2014 Conference dates: 15-19 September 2014 CONFERENCE WEBSITE: http://www.icann2014.org *********************************************** Professor Dr. Stefan Wermter Chair of Knowledge Technology Department of Computer Science University of Hamburg Vogt Koelln Str. 30 22527 Hamburg, Germany http://www.informatik.uni-hamburg.de/~wermter/ http://www.informatik.uni-hamburg.de/WTM/ *********************************************** From e.vasilaki at gmail.com Mon Jan 20 10:53:24 2014 From: e.vasilaki at gmail.com (Eleni Vasilaki) Date: Mon, 20 Jan 2014 15:53:24 +0000 Subject: Connectionists: New MSc in Computational Intelligence & Robotics at The University of Sheffield Message-ID: This MSc offers research-intensive training in engineering, computing and neuroscience. 
The course comprises a mixture of modules in computational neuroscience, robotics, machine learning, computational vision, multi-sensor data fusion, modelling and simulation of natural systems, and rapid control prototyping. You will be taught by internationally recognised experts from the departments of Automatic Control & Systems Engineering, Computer Science, and the neuroscience group in Psychology, including: Bryn Jones http://www.shef.ac.uk/acse/staff/blj Dawn Walker http://staffwww.dcs.shef.ac.uk/people/D.Walker Eleni Vasilaki http://staffwww.dcs.shef.ac.uk/people/E.Vasilaki Kevin Gurney http://www.shef.ac.uk/psychology/staff/academic/kevin-gurney Ling Shao http://lshao.staff.shef.ac.uk Neil Lawrence http://staffwww.dcs.shef.ac.uk/people/N.Lawrence Osman Tokhi https://www.shef.ac.uk/acse/staff/mot Paul Trodden http://www.sheffield.ac.uk/acse/staff/pt Roderich Gross http://www.shef.ac.uk/acse/staff/roderich-gross Roger Moore http://www.dcs.shef.ac.uk/~roger/ Sandor Veres http://www.sheffield.ac.uk/acse/staff/smv Stuart Wilson http://www.shef.ac.uk/psychology/staff/academic/stuart-wilson Thomas Hain http://www.dcs.shef.ac.uk/~th Tony Dodd http://www.shef.ac.uk/acse/staff/tjd The MSc will prepare you for a wide range of potential careers from fundamental research, through to engineering and science roles in major organisations. The multi-disciplinary nature of the MSc, and many of the techniques you will learn, are highly sought after in industry and research establishments worldwide. If you would like to obtain more information please visit http://www.shef.ac.uk/acse/prospectivepg/compintel . The University's Women in Engineering initiative is helping to raise the profile of talented female engineers and our learning culture ensures we have an environment where both women and men thrive through academic and personal achievement. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hugo.larochelle at usherbrooke.ca Mon Jan 20 20:07:35 2014 From: hugo.larochelle at usherbrooke.ca (Hugo Larochelle) Date: Mon, 20 Jan 2014 20:07:35 -0500 Subject: Connectionists: Tier 1 Canada Research Chair at Computer Science Department of Université de Sherbrooke Message-ID: ** Tier 1 Canada Research Chair / Université de Sherbrooke ** Université de Sherbrooke invites applications for a full time tenure-track position in the Computer Science Department and is looking for strong candidates, eligible for the Tier 1 Canada Research Chair program. Candidates from all disciplines of computer science are invited to apply. The Canada Research Chair program is dedicated to fostering excellence in research and supports the scientific program of the most accomplished and promising researchers. Tier 1 Chairs have seven-year terms, renewable once, and are intended for outstanding researchers acknowledged by their peers as world leaders in their fields. Its yearly research funding is enough to support around 4 Ph.D. students. The Université de Sherbrooke has more than 40 000 students studying in all three cycles (B.Sc., M.Sc. and Ph.D.) and is among Canada's most innovative universities. It is situated 150 kilometres from Montreal, in the beautiful Eastern Townships region. Its Computer Science Department includes 19 professors organized in very active research teams, primarily in artificial intelligence (machine learning, data mining, planning), imaging and digital media (computer vision, medical imaging) and software engineering. Applicants have until February 28th 2014 to apply. For more information, please contact me directly or see: https://www.sofe.usherbrooke.ca/sofe-service/f?p=103:40:4827769410273::NO:RP,40:P40_T_PAGPREV,P40_T_OFFCLEINT:30,24702 . For more information on Tier 1 Canada Research Chairs, see: http://www.chairs-chaires.gc.ca/program-programme/nomination-mise_en_candidature-eng.aspx . 
Hugo Larochelle Assistant Professor Université de Sherbrooke hugo.larochelle at usherbrooke.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian.risi at gmail.com Mon Jan 20 19:39:29 2014 From: sebastian.risi at gmail.com (Sebastian Risi) Date: Tue, 21 Jan 2014 01:39:29 +0100 Subject: Connectionists: Postdoctoral Position in Evolutionary Robotics at the IT University of Copenhagen (available immediately) Message-ID: Postdoctoral Position in Evolutionary Robotics at the IT University of Copenhagen (available immediately) The IT University of Copenhagen (www.itu.dk), Denmark invites applications for a full-time postdoctoral research position. *Starting date: immediately* The position is fully funded until the end of 2014. Questions about the position can be directed to Assistant Professor Sebastian Risi, IT University of Copenhagen, sebr at itu.dk. We are seeking an outstanding postdoc with experience/interest in topics such as evolutionary robotics, neural networks, neuroevolution, deep learning, neuroplasticity, 3D printers, and artificial life. The successful candidate will work closely with Sebastian Risi ( www.cs.ucf.edu/~risi/) and Kasper Stoy (www.itu.dk/~ksty/) on an internally funded project on neuroevolution focusing on the advantage of having a human in the loop, and on the evolution of morphology and control in the context of robotics. The specific project depends on the qualifications and interests of the applicant. Creative thinking, excellent problem solving skills, and good communication abilities in English are must-haves. Application material (CV, list of publications, brief research statement) should be sent to sebr at itu.dk. General information: The IT University of Copenhagen (ITU) is a teaching and research-based tertiary institution concerned with information technology (IT) and the opportunities it offers. The IT University has more than 50 faculty members. 
Research and teaching in information technology span all academic activities which involve computers, including computer science, information and media sciences, humanities and social sciences, business impact and the commercialization of IT. I am looking forward to your applications. Best regards, Sebastian Risi -- Dr. Sebastian Risi Assistant Professor IT University of Copenhagen, Room 5D08 Rued Langgaards Vej 7, 2300 Copenhagen, Denmark email: sebastian.risi at gmail.com, web: http://www.cs.ucf.edu/~risi/ mobile: +45-50250355, office: +45-7218-5127 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rvegaru at usal.es Mon Jan 20 05:48:14 2014 From: rvegaru at usal.es (Roberto Vega Ruiz) Date: Mon, 20 Jan 2014 11:48:14 +0100 Subject: Connectionists: SOCO 2014: 3rd CFP & Special Sessions/Workshops. Message-ID: <6D24C3D1-4640-4F12-BF48-76A6FBC8207F@usal.es> SOCO 2014: 3rd CFP & Special Sessions/Workshops. * We apologize if you receive this CFP more than once. * PLEASE CIRCULATE this CFP among your colleagues and students. The 9th International Conference on Soft Computing Models in Industrial and Environmental Applications Bilbao, Basque Country, Spain. 25th - 27th June, 2014 http://soco14.deusto.es ---------------------------------------------------------------- Technical Co-Sponsors: IEEE Spain Section http://www.ieeespain.org IEEE Systems, Man, and Cybernetics Spanish Chapter http://www.ieee-smc.es/main/index.shtml Machine Intelligence Research Labs (MIR Labs) http://www.mirlabs.org Spanish Association for Artificial Intelligence (AEPIA) http://www.aepia.org The International Federation for Computational Logic http://www.ifcolog.net ---------------------------------------------------------------- *** PLENARY SPEAKERS: Prof. Antonio Bahamonde - University of Oviedo Prof. 
Davide Balzarotti - Eurecom Graduate School and Research Center, France ---------------------------------------------------------------- *** JOURNAL SPECIAL ISSUES: Authors of the best papers presented at SOCO 2014 will be invited to submit an extended version to highly reputed international journals. Up to now: 1. Neurocomputing. Impact Factor: 1.634 2. Journal of Applied Logic. Impact Factor: 0.419 ---------------------------------------------------------------- *** SUBMISSION SYSTEM IS OPEN *** Please submit your contributions through the following link: https://www.easychair.org/account/signin.cgi?conf=soco14 ---------------------------------------------------------------- *** PROCEEDINGS: SOCO'14 proceedings will be published by Springer in a special volume of Advances in Intelligent and Soft Computing (Springer), indexed by ISI Proceedings, DBLP and Springerlink, among others. All submissions will be refereed by experts in the field based on originality, significance, quality and clarity. Every submitted paper to SOCO'14 will be reviewed by at least two members of the Program Committee. *** CALL FOR SPECIAL SESSIONS AND WORKSHOPS: In addition to regular sessions, participants are encouraged to organize special sessions and workshops on specialized topics. Each special session should have at least 4 or 5 quality papers. Special session organizers will solicit submissions, conduct reviews jointly with the SOCO'14 PC and in the same way recommend accept/reject decisions on the submitted papers. Special Session submission deadline: 15th January, 2014 Submissions of Special Sessions are welcome! 
http://soco14.deusto.es/?q=node/8 For more information, please send an email to: escorchado at usal.es -------------------------------------------------------------------- *** Soft Computing represents a collection of computational techniques in machine learning, computer science and some engineering disciplines, which investigate, simulate and analyze very complex issues and phenomena. The 9th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO'14) is mainly focused on its industrial applications. The SOCO series of conferences provides an interesting opportunity to present and discuss the latest theoretical advances and real-world applications in this multidisciplinary research field. SOCO'14 will be celebrated in collaboration with: CISIS'14 http://cisis14.deusto.es ICEUTE'14 http://iceute14.deusto.es -------------------------------------------------------------------- *** TOPICS The topics of interest include, but are not limited to: - Green Computing - Evolutionary Computing - Neuro Computing - Probabilistic Computing - Immunological Computing - Hybrid Methods - Causal Models - Case-based Reasoning - Chaos Theory - Fuzzy Computing - Intelligent Agents and Agent Theory - Interactive Computational Models The application fields of interest cover, but are not limited to: - Decision Support - Process and System Control - System Identification and Modelling - Optimization - Signal or Image Processing - Vision or Pattern Recognition - Condition Monitoring - Fault Diagnosis - Systems Integration - Internet Tools - Human Machine Interface - Time Series Prediction - Robotics - Motion Control & Power Electronics - Biomedical Engineering - Virtual Reality - Reactive Distributed AI - Telecommunications - Consumer Electronics - Industrial Electronics - Manufacturing Systems - Power and Energy - Data Mining - Data Visualisation - Intelligent Information Retrieval - Bio-inspired Systems - Autonomous Reasoning - 
Intelligent Agents -------------------------------------------------------------------- *** PAPER SUBMISSION AND PROCEEDINGS: SOCO'14 proceedings will be published by Springer in a special volume of Advances in Intelligent and Soft Computing (Springer), indexed by ISI Proceedings, DBLP and Springerlink, among others. All submissions will be refereed by experts in the field based on originality, significance, quality and clarity. Every submitted paper to SOCO'14 will be reviewed by at least two members of the Program Committee. Papers must be prepared according to Springer's templates (MS Word or LaTeX format) for the Advances in Intelligent and Soft Computing volume series (www.springer.com/series/4240) and must be no more than ten (10) pages long, including figures and bibliography. Additional pages (over 10 pages) will be charged at 150 Euro each. SUBMISSION SYSTEM: https://www.easychair.org/account/signin.cgi?conf=soco14 ---------------------------------------------------------------- *** COMMITTEES *Honorary Chair* José María Guibert - Rector of the University of Deusto, Spain Amparo Alonso Betanzos - University of Coruña and President of AEPIA (Spain) Costas Stasopoulos - Director-Elect, IEEE Region 8 Hojjat Adeli - Professor at the Ohio State University, U.S.A. 
*General Chair* Emilio Corchado - University of Salamanca, Spain *Program Chairs* Pablo García Bringas - University of Deusto, Spain Fanny Klett - Learning Partnership Laboratory, Germany Ajith Abraham - Machine Intelligence Research Labs (Europe) Álvaro Herrero - University of Burgos, Spain Bruno Baruque - University of Burgos, Spain Emilio Corchado - University of Salamanca, Spain -------------------------------------------------------------------- *** IMPORTANT DATES *** Paper submission deadline: 20th February, 2014 Special Session submission deadline: 15th January, 2014 Acceptance notification: 20th March, 2014 Submission of (tentatively accepted) revised papers: 1st April, 2014 Final version submission: 7th April, 2014 Payment deadline: 10th June, 2014 Conference dates: 25th - 27th June, 2014 -------------------------------------------------------------------- *** CONTACT *** Dr. Pablo García Bringas (Co-Chair) University of Deusto (Spain) Email: pablo.garcia.bringas at deusto.es Phone: +34 94 413 90 73 Dr. Emilio Corchado (Co-Chair) University of Salamanca (Spain) Deusto Institute of Technology - DeustoTech http://www.deustotech.deusto.es University of Deusto http://www.deusto.es BISITE Research Group http://bisite.usal.es GICAP Research Group http://gicap.ubu.es For more information about SOCO'14, please refer to the SOCO'14 website: http://soco14.deusto.es * We apologize if you receive this CFP more than once. * PLEASE CIRCULATE this CFP among your colleagues and students. Dr. Emilio S. Corchado escorchado at usal.es Chair, IEEE Spain Section Profesor Titular de Universidad Departamento de Informática y Automática University of Salamanca phone: 0034 630 736 755 skype: emicorchado Address: Plaza de la Merced, s/n Faculty of Science CP. 37008 University of Salamanca Salamanca, Spain -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From laurajones at knowalliance.org Mon Jan 20 10:01:31 2014 From: laurajones at knowalliance.org (Laura Jones) Date: Mon, 20 Jan 2014 15:01:31 +0000 Subject: Connectionists: Intelligent Decision Tools - last call for Papers Message-ID: 6th International Conference on Intelligent Decision Technologies (IDT-2014) Chania - Crete, Greece 18 - 20 June 2014 http://idt-14.kesinternational.org/ ------------------------------------------- **Last call - Deadline a week today** We are delighted to invite contributions to the 6th International Conference on Intelligent Decision Technologies, organised by KES International. IDT-2014 is an international scientific conference for research in the area of Intelligent Decision Technologies, covering both theory and application. This conference is dedicated to improving decision making. The conference is interdisciplinary in nature and will consist of keynote talks, oral and poster presentations, invited sessions and workshops on the applications and theory of intelligent decision systems and related areas. It will provide excellent opportunities for the presentation of interesting new research results and discussion about them, leading to knowledge transfer and the generation of new ideas. The conference will be co-located under the KES Smart Digital Futures umbrella with our other Intelligent Systems conferences, IIMSS and AMSTA, along with the new STET (Smart Technology-based Education and Training) conference. Registration gives access to all of these conferences, together with a paper published in one set of proceedings. Please see the Smart Digital Futures website http://smartfutures.kesinternational.org for more information. 
The conference proceedings will be published in the IOS Frontiers in Artificial Intelligence and Applications (FAIA) series, indexed in SciVerse Scopus, ACM Digital Library, DBLP, Google Scholar and Zentralblatt MATH, and in previous years in the ISI Conference Proceedings Citation Index, CPCI (Thomson Reuters decides on a year-by-year basis which conferences to index). In addition, authors of selected high-quality papers will be invited to submit a chapter for publication in an edited book published by Springer. Papers are invited on theoretical and applied topics related to the scope of the conference. =============== Conference Scope =============== General Topics of Interest Authors should relate their topic to improving some aspect of decision making. Topics related to intelligent decision making include, but are not limited to, intelligent agents, fuzzy logic, multi-agent systems, artificial neural networks, genetic algorithms, expert systems, intelligent decision making support systems, information retrieval systems, geographic information systems, knowledge management systems. IDT Applications IDTs have the potential to support decision making in many areas including management, international business, finance, accounting, marketing, healthcare, medical and diagnostic systems, military decisions, production and operation, networks, traffic management, crisis response, human-machine interfaces, financial and stock market monitoring and prediction, robotics. Emergent Intelligent Decision Technologies Virtual decision environments, social networking, 3D human-machine interfaces, cognitive interfaces, collaborative systems, intelligent web mining, e-commerce, e-learning, e-business, bioinformatics, evolvable systems, virtual humans, designer drugs. ============= Invited Sessions ============= Here are the current Workshop and Invited Sessions: WS01: Reasoning Based Intelligent Systems Chair: Dr. Kazumi Nakamatsu University of Hyogo, Japan Co-chaired by Dr. 
Jair Minoro Abe University of Sao Paulo, Brazil IS01: Autonomous Systems Chair: Dr. Arthur Filippidis Defence Science and Technology Organisation (DSTO), Australia Co-chaired by Dr. Despina Filippidis Defence Science and Technology Organisation (DSTO), Australia IS02: Advances in Computational Intelligence, Decision Making and Pattern Recognition Chair: Prof. Margarita N. Favorskaya Siberian State Aerospace University, Russia IS03: Intelligent Analysis of Built Environment BIG Data Chair: Prof. Arturas Kaklauskas Vilnius Gediminas Technical University, Lithuania Co-chaired by Prof. Dr. Mark Dyer Trinity College Dublin IS04: Interdisciplinary Approaches in Business Intelligence Research and Practice (IABIRP 2014) Chair: Prof. Ivan Luković University of Novi Sad, Serbia Co-chaired by Prof. Mirjana Ivanović University of Novi Sad, Serbia IS05: Decision Making Theory Chair: Prof. Eizo Kinoshita Meijo University, Japan Co-chaired by Prof. Takao Ohya Kokushikan University, Tokyo IS06: Mastering Data-Intensive Collaboration through the Synergy of Human and Machine Reasoning Chair: Prof. Nikos Karacapilidis University of Patras & CTI, Greece Co-chaired by Dr. Lydia Lau University of Leeds, UK Co-chaired by Prof. Pavlos Peppas University of Patras, Greece IS07: Soft Computing in Industrial and Management Engineering: Theories and Applications Chair: Dr. Shing Chiang Tan Multimedia University, Malaysia Co-chaired by Prof. Junzo Watada Waseda University, Japan Co-chaired by Prof. Chee Peng Lim Deakin University, Australia IS08: Mixed-Integer Bilevel Programming: Application to Equilibrium, Variational Inequality and Combinatorial Problems Chair: Assoc. Prof. Vyacheslav V. Kalashnikov Tecnológico de Monterrey (ITESM), Mexico; Central Economics and Mathematics Institute (CEMI), Russia; Sumy State University, Ukraine. 
=============== Dates & Deadlines =============== General Track Papers Submission of Papers: 27 January 2014 Notification of Acceptance: 24 February 2014 Upload of Final Publication Files: 10 March 2014 Invited Session and Workshops Submission of Invited Session Proposals: Continue to submit Submission of Papers: Set by Session Chair Upload of Final Publication Files: 10 March 2014 (Publication files are the word-processor source in MS Word or LaTeX together with a final PDF.) Early Registration Deadline All delegates to main conference: 14 March 2014 Every paper for inclusion in the published proceedings must have at least one author who has registered for the conference with payment by: 14 March 2014. Conference Sessions: 18 - 20 June 2014 ======= Location ======= The conference will take place on the island of Crete in Greece, at the Minoa Palace Resort & Spa, a luxury 5* beachside hotel set within maintained gardens, with a dedicated conference centre. The Minoa Palace lies 12km west of the city of Chania, a 30-minute drive from Chania International Airport. Chania is one of the most beautiful cities in Greece and thought by many to be the most picturesque. It is a bustling city with both old and modern sections, famed for the Venetian Harbour, the old port, the narrow shopping streets and waterfront restaurants. =============== Further Information =============== Please note that the above deadlines are provisional and subject to change. For further information on all the above, please visit the conference website at the top of the page. For general enquiries about the conference, please contact: idt-14 at kesinternational.org For registration enquiries, please contact: registration at kesinternational.info (if you have already registered please ensure you quote your registration order number). 
You can follow us for updates on: Twitter: @KESIntl and @KESIntsys Facebook: https://www.facebook.com/KesInternational From viktor.jirsa at univ-amu.fr Mon Jan 20 14:45:58 2014 From: viktor.jirsa at univ-amu.fr (Viktor Jirsa) Date: Mon, 20 Jan 2014 20:45:58 +0100 Subject: Connectionists: Postdoc position in EEG data analysis - The aging brain Message-ID: <152E2630-5C83-4D44-9F7A-CCAEB05EB463@univ-amu.fr> Postdoctoral position in brain-behavior study and EEG data analysis for the Coord-Age project at Aix-Marseille University, France Title: EEG data recording and analysis for the study of brain-behavior relationships in older adults Keywords: nonlinear dynamics, EEG, spatiotemporal data analysis Context: We seek a postdoctoral candidate who will take part in the collaborative Coord-Age project (Institut des Sciences du Mouvement and Institut de Neurosciences des Systèmes). The project consortium is composed of the Behavior Group (J.J. Temprado) and the Theoretical Neuroscience Group (V. Jirsa), as well as other experts in biomechanics and motor control at the Institute of Movement Sciences. The postdoc will work under the direct supervision of Jean Jacques Temprado and Viktor Jirsa in collaboration with other members of the team (other postdocs in brain and behavior, modeling and neuromuscular synergies). The successful candidate will study the relationship between the dynamic mechanisms underlying the emergence of brain function and coordinated behavior. The objective of this research is to understand the correlated manifestations of variability at the brain and behavioral levels. He/she will participate in the experimental and theoretical program and will be responsible for the EEG data analysis in older adults. Specifically, a successful candidate will have prior experience in spatiotemporal data analysis (preferably EEG/MEG; knowledge of PCA, ICA, etc.), nonlinear dynamic measures (mutual information, MSE, etc.) and excellent Matlab skills (Python a plus). 
The candidate will be fully invested technically and conceptually in the experimental program, in close collaboration with other postdocs. The results of the work will aid in the development of a larger conceptual framework on ageing and frailty, integrating other contributions from brain imaging and modeling. Profile:
- PhD in Physics, Mathematics, Engineering, or Neurosciences
- Expertise in nonlinear methods of time series analysis, EEG data processing, kinematic analysis
- Programming skills for data recording and analysis (LabVIEW and Matlab)
- Fluency in spoken and written English is obligatory; French will also be appreciated.
Contract:
- 12 months, renewable based on performance, starting as of January 2014
- Salary: ~1900 Euros/month depending on experience (< 3 years or > 3 years)
- Location: Campus of Luminy, Institute of Movement Sciences, and Campus of La Timone, Institut de Neurosciences des Systèmes, Aix-Marseille University, France
Contact and how to apply: JJ Temprado Institute of Movement Sciences Aix-Marseille University jean-jacques.temprado at univ-amu.fr Email a CV, statement of interests, and at least two (2) letters of reference to Jean Jacques Temprado Application deadline: The position is open until filled. From sam.devlin at york.ac.uk Tue Jan 21 06:43:28 2014 From: sam.devlin at york.ac.uk (Sam Devlin) Date: Tue, 21 Jan 2014 11:43:28 +0000 Subject: Connectionists: Adaptive Learning Agents (ALA) Workshop 2014: Best Paper Award Announcement and Submission Deadline Extension Message-ID: We are pleased to announce that the Adaptive Learning Agents Workshop (ALA) @ AAMAS 2014 will be awarding a best paper prize sponsored by FOCAS (www.focas.eu). The prize will be a free place at the FOCAS Summer School, which will be held 23-27 June in Crete and will consist of lectures and case studies done in small teams. FOCAS will also feature the winning paper on their website (www.focas.eu). 
Given this development we are opting to extend our paper submission deadline to January 31st. The full call for papers is listed below. -- Final CFP: Adaptive Learning Agents Workshop (ALA) @ AAMAS 2014 Final Call For Papers: Adaptive and Learning Agents Workshop 2014 (Paris, France) We apologize if you receive more than one copy. Please share with colleagues and students. Paper deadline: JANUARY 31, 2014 ALA 2014: Adaptive and Learning Agents Workshop held at AAMAS 2014 (Paris, France). The ALA workshop has a long and successful history and is now in its sixth edition. The workshop is a merger of the European ALAMAS and the American ALAg series and is usually held at AAMAS. Details may be found on the workshop web site: http://swarmlab.unimaas.nl/ala2014/ Adaptive and learning agents, particularly those in a multi-agent setting, are becoming more and more prominent as the sheer size and complexity of many real-world systems grows. How to adaptively control, coordinate and optimize such systems is an emerging multi-disciplinary research area at the intersection of Computer Science, Control Theory, Economics, and Biology. The ALA workshop will focus on agent and multi-agent systems which employ learning or adaptation. The goal of this workshop is to increase awareness of and interest in adaptive agent research, encourage collaboration, and give a representative overview of current research in the area of adaptive and learning agents and multi-agent systems. It aims at bringing together not only scientists from different areas of computer science but also from different fields studying similar concepts (e.g., game theory, bio-inspired control, mechanism design). This workshop will focus on all aspects of adaptive and learning agents and multi-agent systems, with a particular emphasis on how to modify established learning techniques and/or create new learning paradigms to address the many challenges presented by complex real-world problems. 
The topics of interest include but are not limited to:
* Novel combinations of reinforcement and supervised learning approaches
* Integrated learning approaches that work with other agent reasoning modules like negotiation, trust models, coordination, etc.
* Supervised multi-agent learning
* Reinforcement learning (single and multi-agent)
* Planning (single and multi-agent)
* Reasoning (single and multi-agent)
* Distributed learning
* Adaptation and learning in dynamic environments
* Evolution of agents in complex environments
* Co-evolution of agents in a multi-agent setting
* Cooperative exploration
* Learning to cooperate and collaborate
* Learning trust and reputation
* Communication restrictions and their impact on multi-agent coordination
* Design of reward structure and fitness measures for coordination
* Scaling learning techniques to large systems of learning and adaptive agents
* Emergent behaviour in adaptive multi-agent systems
* Game theoretical analysis of adaptive multi-agent systems
* Neuro-control in multi-agent systems
* Bio-inspired multi-agent systems
* Applications of adaptive and learning agents and multi-agent systems to real-world complex systems
This year we will be presenting a best paper award sponsored by FOCAS (www.focas.eu). The prize will be a free place at the FOCAS Summer School, including accommodation and meals, but not including travel. The summer school will be held 23-27 June in Crete and will consist of lectures and case studies done in small teams. FOCAS will also feature the winning paper on their website (http://focas.eu). The workshop will also include a half-day tutorial on multi-agent reinforcement learning. Previous versions of this tutorial were successfully run, with different collaborators, at EASSS 2004 (the European Agent Systems Summer School), ECML 2005, ICML 2006, EWRL 2008, AAMAS 2009-2013, and ECML 2013. 
The ALA 2014 edition will include revised and updated content, with a new focus on reward shaping covering difference rewards and potential-based reward shaping in particular depth. ******************************************************* Submission Details Papers can be submitted through Easychair: https://www.easychair.org/conferences/?conf=ala20140 Submissions may be up to 8 pages in the ACM proceedings format (i.e., the same as AAMAS papers in the main conference track). Accepted work will be allocated time for oral presentation during the one-day workshop. Papers accepted at the workshop will also be eligible for inclusion in a special issue published after the workshop. ******************************************************* * Submission Deadline: January 31, 2014 * Notification of acceptance: February 19, 2014 * Camera-ready copies: March 10, 2014 * Workshop: May 5-6, 2014 ******************************************************* -- Sam Devlin Research Associate York Centre for Complex Systems Analysis The University of York Deramore Lane, York, YO10 5GH w: http://www.cs.york.ac.uk/~devlin/ Disclaimer: http://www.york.ac.uk/docs/disclaimer/email.htm 
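As background for readers unfamiliar with the tutorial topic named above: potential-based reward shaping adds F(s, s') = gamma * Phi(s') - Phi(s) to the environment reward, which (per Ng, Harada and Russell's result) leaves the optimal policy unchanged while giving the learner denser feedback. The sketch below is illustrative only and not taken from the workshop materials; the corridor task, the potential function Phi, and all hyperparameters are assumptions made for the example.

```python
# Hypothetical example: Q-learning on a 6-state corridor, with and without
# potential-based reward shaping. Everything here (environment, Phi, alpha,
# gamma, epsilon) is an assumption for illustration, not ALA 2014 material.
import random

N = 6            # states 0..N-1; state N-1 is the goal (episode ends there)
GAMMA = 0.95     # discount factor
ALPHA = 0.1      # learning rate

def potential(s):
    # Heuristic potential: states closer to the goal get higher potential.
    return float(s)

def shaped_reward(s, s_next, env_reward):
    # F(s, s') = gamma * Phi(s') - Phi(s); added to the environment reward.
    return env_reward + GAMMA * potential(s_next) - potential(s)

def q_learning(episodes, use_shaping, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}  # actions: left/right
    for _ in range(episodes):
        s = 0
        while s != N - 1:
            # Epsilon-greedy action selection (epsilon = 0.1).
            if rng.random() < 0.1:
                a = rng.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s_next = min(max(s + a, 0), N - 1)
            env_reward = 1.0 if s_next == N - 1 else 0.0  # sparse goal reward
            r = shaped_reward(s, s_next, env_reward) if use_shaping else env_reward
            best_next = 0.0 if s_next == N - 1 else max(q[(s_next, -1)], q[(s_next, 1)])
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s_next
    return q

q_values = q_learning(episodes=300, use_shaping=True)
# After training, the greedy policy should move right (toward the goal)
# in every non-terminal state.
greedy_goes_right = all(q_values[(s, 1)] > q_values[(s, -1)] for s in range(N - 1))
print(greedy_goes_right)
```

Because the shaping term telescopes along any trajectory, it changes only how quickly values are learned, not which policy is optimal; here the rising potential turns the single delayed goal reward into informative per-step feedback.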
From grlmc at urv.cat Tue Jan 21 15:49:58 2014 From: grlmc at urv.cat (GRLMC - URV) Date: Tue, 21 Jan 2014 21:49:58 +0100 Subject: Connectionists: SLSP 2014: 1st call for papers Message-ID: <000001cf16ea$70b74110$6400a8c0@GRLMC.local> *To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject line* **************************************************************************** ****** 2nd INTERNATIONAL CONFERENCE ON STATISTICAL LANGUAGE AND SPEECH PROCESSING SLSP 2014 Grenoble, France October 14-16, 2014 Organised by: Équipe GETALP Laboratoire d'Informatique de Grenoble Research Group on Mathematical Linguistics (GRLMC) Rovira i Virgili University http://grammars.grlmc.com/slsp2014/ **************************************************************************** ****** AIMS: SLSP is a yearly conference series aimed at promoting and displaying excellent research on the wide spectrum of statistical methods that are currently in use in computational language or speech processing. It aims at attracting contributions from both fields. Though there exist large, well-known conferences and workshops hosting contributions to any of these areas, SLSP is a more focused meeting where synergies between subdomains and people will hopefully happen. In SLSP 2014, significant room will be reserved for young scholars at the beginning of their careers and particular focus will be put on methodology. VENUE: SLSP 2014 will take place in Grenoble, at the foot of the French Alps. SCOPE: The conference invites submissions discussing the employment of statistical methods (including machine learning) within language and speech processing. 
The list below is indicative and not exhaustive: phonology, phonetics, prosody; morphology; syntax, semantics; discourse, dialogue, pragmatics; statistical models for natural language processing; supervised, unsupervised and semi-supervised machine learning methods applied to natural language, including speech; statistical methods, including biologically-inspired methods; similarity; alignment; language resources; part-of-speech tagging; parsing; semantic role labelling; natural language generation; anaphora and coreference resolution; speech recognition; speaker identification/verification; speech transcription; speech synthesis; machine translation; translation technology; text summarisation; information retrieval; text categorisation; information extraction; term extraction; spelling correction; text and web mining; opinion mining and sentiment analysis; spoken dialogue systems; author identification, plagiarism and spam filtering. STRUCTURE: SLSP 2014 will consist of: invited talks, invited tutorials, and peer-reviewed contributions. INVITED SPEAKERS: to be announced. PROGRAMME COMMITTEE: Sophia Ananiadou (Manchester, UK) Srinivas Bangalore (Florham Park, US) Patrick Blackburn (Roskilde, DK) Hervé Bourlard (Martigny, CH) Bill Byrne (Cambridge, UK) Nick Campbell (Dublin, IE) David Chiang (Marina del Rey, US) Kenneth W. Church (Yorktown Heights, US) Walter Daelemans (Antwerpen, BE) Thierry Dutoit (Mons, BE) Alexander Gelbukh (Mexico City, MX) Ralph Grishman (New York, US) Sanda Harabagiu (Dallas, US) Xiaodong He (Redmond, US) Hynek Hermansky (Baltimore, US) Hitoshi Isahara (Toyohashi, JP) Lori Lamel (Orsay, FR) Gary Geunbae Lee (Pohang, KR) Haizhou Li (Singapore, SG) Daniel Marcu (Los Angeles, US) Carlos Martín-Vide (Tarragona, ES, chair) Manuel Montes-y-Gómez (Puebla, MX) Satoshi Nakamura (Nara, JP) Shrikanth S. Narayanan (Los Angeles, US) Vincent Ng (Dallas, US) Joakim Nivre (Uppsala, SE) Elmar Nöth (Erlangen, DE) Maurizio Omologo (Trento, IT) Mari Ostendorf (Seattle, US) Barbara H. 
Partee (Amherst, US) Gerald Penn (Toronto, CA) Massimo Poesio (Colchester, UK) James Pustejovsky (Waltham, US) Gaël Richard (Paris, FR) German Rigau (San Sebastián, ES) Paolo Rosso (Valencia, ES) Yoshinori Sagisaka (Tokyo, JP) Björn W. Schuller (London, UK) Satoshi Sekine (New York, US) Richard Sproat (New York, US) Mark Steedman (Edinburgh, UK) Jian Su (Singapore, SG) Marc Swerts (Tilburg, NL) Jun'ichi Tsujii (Beijing, CN) Gertjan van Noord (Groningen, NL) Renata Vieira (Porto Alegre, BR) Dekai Wu (Hong Kong, HK) Feiyu Xu (Berlin, DE) Roman Yangarber (Helsinki, FI) Geoffrey Zweig (Redmond, US) ORGANISING COMMITTEE: Laurent Besacier (Grenoble, co-chair) Adrian Horia Dediu (Tarragona) Benjamin Lecouteux (Grenoble) Carlos Martín-Vide (Tarragona, co-chair) Florentina Lilica Voicu (Tarragona) SUBMISSIONS: Authors are invited to submit non-anonymized papers in English presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (including any appendices) and should be prepared according to the standard format for Springer-Verlag's LNAI/LNCS series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0). Submissions have to be uploaded to: https://www.easychair.org/conferences/?conf=slsp2014 PUBLICATIONS: A volume of proceedings published by Springer in the LNAI/LNCS series will be available by the time of the conference. A special issue of a major journal will later be published containing peer-reviewed extended versions of some of the papers contributed to the conference. Submissions to it will be by invitation. REGISTRATION: The period for registration is open from January 16, 2014 to October 14, 2014. 
The registration form can be found at: http://grammars.grlmc.com/slsp2014/Registration.php DEADLINES: Paper submission: May 7, 2014 (23:59h, CET) Notification of paper acceptance or rejection: June 18, 2014 Final version of the paper for the LNAI/LNCS proceedings: June 25, 2014 Early registration: July 2, 2014 Late registration: September 30, 2014 Submission to the post-conference journal special issue: January 16, 2015 QUESTIONS AND FURTHER INFORMATION: florentinalilica.voicu at urv.cat POSTAL ADDRESS: SLSP 2014 Research Group on Mathematical Linguistics (GRLMC) Rovira i Virgili University Av. Catalunya, 35 43002 Tarragona, Spain Phone: +34 977 559 543 Fax: +34 977 558 386 ACKNOWLEDGEMENTS: Departament d'Economia i Coneixement, Generalitat de Catalunya Laboratoire d'Informatique de Grenoble Universitat Rovira i Virgili From muftimahmud at gmail.com Tue Jan 21 17:05:26 2014 From: muftimahmud at gmail.com (Mufti Mahmud) Date: Tue, 21 Jan 2014 23:05:26 +0100 Subject: Connectionists: Call for Participation: CSNII School on Neurotechniques 2014, 10-15 March 2014, Padova, Italy. Message-ID: Dear All, Sorry for the cross-posting of this message. I am happy to announce that the NeuroChip Laboratory of the University of Padova, Padova, Italy is organizing a school on Neurotechniques with special emphasis on "Tools for Investigating the Function of Neural Circuits". The school will be held from 10-15 March 2014 and the deadline for application is 20 February 2014. The following topics will be covered in the school: 1. Multi-electrode and multi-transistor arrays 2. Implantable probes for acute and chronic recording 3. Carbon nanotubes for brain-chip interfacing 4. Calcium imaging 5. Voltage sensitive dye imaging 6. Bacterial rhodopsins and optogenetics 7. Pharmacological and functional magnetic resonance imaging 8. Mixed electrophysiological, genetic and behavioral approaches 9. Information theory & analysis of neuronal population signals 10. 
Closed-loop electrophysiology and hybrid systems Thank you for your attention. Best regards, Mufti Mahmud Call for Participation ------------------------------ CSNII School on Neurotechniques 2014: The Toolbox for Investigating the Function of Neural Circuits 10-15 March 2014 NeuroChip Lab, University of Padova, Italy OBJECTIVE Investigating information processing and identifying operational rules of brain neural circuits relies on the capability to selectively record and stimulate multiple neurons within a network. The toolbox of available techniques conceived to meet this need is rapidly expanding. The CSNII School on Neurotechniques 2014 will offer an overview of advanced electrical- and light-based recording methods of neuronal excitability, focusing on those that are most relevant for the investigation of neural networks "in vitro" and "in vivo" and for application in neuroprosthetics. Techniques derived from information theory for the analysis of neuronal population signals will be presented, and practical demonstrations of experimental recording sessions provided. The school targets doctoral students, post-doctoral researchers, and scientists working in related fields. Since the school's topics are interdisciplinary, covering neurobiology, neuronal biophysics, microscopy, and electronics, preliminary knowledge in these fields is highly recommended. REGISTRATION To apply, please register (http://www.cyberrat.eu/school.php) on or before February 20, 2014. If you have any further questions, please contact marta.maschietto at unipd.it The registration is FREE. The registration is sponsored by the European Commission through the CSNII project under FP7 (FET Proactive Initiative 'Neuro-Bio-Inspired Systems - NBIS'). Travel and accommodation costs are not funded. LOCATION The school will be hosted by the NeuroChip Laboratory, Department of Biomedical Sciences, University of Padova, Via F. Marzolo 3, 35131 - Padova, Italy. 
The location is very close to the city centre and is easily reachable from the railway or bus station of Padova. WHEN March 10-15, 2014. Target audience (N = ~30 active full-time): PhD students, postdoctoral researchers, scientists, members of associated EC CSNII consortia. -- Mufti Mahmud, PhD Marie Curie Research Fellow Theoretical Neurobiology & Neuroengineering Lab University of Antwerp T6.63, Universiteitsplein 1 2610 - Wilrijk, Belgium Lab: +32 3 265 2610 http://www.muftimahmud.co.nr/ & Assistant Professor (on leave) Institute of Information Technology Jahangirnagar University Savar, 1342 - Dhaka, Bangladesh From nando at cs.ubc.ca Wed Jan 22 06:22:29 2014 From: nando at cs.ubc.ca (Nando de Freitas) Date: Wed, 22 Jan 2014 11:22:29 +0000 Subject: Connectionists: Four-Year Scholarships on Autonomous Intelligent Machines and Systems at Oxford Message-ID: [Apologies for duplicates] The application deadlines are 14th February 2014 and 14th March 2014. Full details of the programme and information on how to apply can be found at www.eng.ox.ac.uk/aims UNIVERSITY OF OXFORD EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems (AIMS) Candidates are invited to apply to the University of Oxford's 4-year Autonomous Intelligent Machines and Systems (AIMS) PhD (DPhil) programme. This new and exciting programme provides a completely new perspective on solving the issues of intelligent, autonomous systems by adopting an inter-disciplinary approach. In the next decade our society will be revolutionised by autonomous, intelligent machines and systems, which can learn, adapt and act independently of human control. The UK has the opportunity to become a world leader in developing these technologies for sectors as diverse as energy, transport, environment, manufacturing and aerospace. 
Our CDT will deliver highly-trained individuals versed in the underpinning sciences of robotics, computer vision, wireless embedded systems, machine learning, control and verification. The CDT will advance practical models and techniques to enable computers and robots to make decisions under uncertainty, scale to large problem domains, and be verified and validated. Holding one of these studentships will allow you to study the problems and opportunities in autonomous, intelligent systems from many different perspectives, to understand the real-world challenges, and to make a contribution to solving some of the most significant problems society faces today. Applications will be considered from those with degrees at undergraduate (1st or 2:1) and master's level (distinction). Funding is provided by EPSRC according to the usual eligibility rules: full studentships are available to UK candidates, and partial studentships (fees only) are available for EU candidates. At present we are not able to offer these studentships to international candidates, but we do welcome applications from self-funded students. For more details, to ask questions, or to learn how to apply, please contact the CDT at aims-cdt at robots.ox.ac.uk. Thanks, Nando From agostino.gibaldi at unige.it Wed Jan 22 11:58:50 2014 From: agostino.gibaldi at unige.it (Agostino Gibaldi) Date: Wed, 22 Jan 2014 17:58:50 +0100 Subject: Connectionists: Special Issue on "Emerging Spatial Competences: From Machine Perception to Sensorimotor Intelligence" Message-ID: <52DFF8CA.30106@unige.it> Dear Researcher, due to multiple requests, we kindly inform you that the deadline for the Special Issue on "Emerging Spatial Competences: From Machine Perception to Sensorimotor Intelligence" for RAS - Robotics and Autonomous Systems has been extended to February 15, 2014. 
The Special Issue (see below for a more detailed description) aims to investigate how the mutual influence between the perception of the environment and the interaction with it can be extended to support co-evolution mechanisms of perceptual and motor processes. Robotics and Autonomous Systems Journal - Special Issue on "Emerging Spatial Competences: From Machine Perception to Sensorimotor Intelligence" CALL FOR PAPERS Aims and Objectives: Following the recent evolution of robotics and AI in different fields of application, the increasing complexity of the actions that an artificial agent needs to perform is directly dependent on the complexity of the sensory information that it can acquire and interpret, i.e. perceive. From this point of view, an efficient internal representation of the sensory information is the basis for a robot to develop a human-like capability of interaction with the surrounding environment. Particularly, in the space at a reachable distance, not only visual and auditory but also tactile and proprioceptive information becomes relevant to gain a comprehensive spatial cognition. This information, coming from different senses, can in principle be integrated and used to experience an awareness of the environment, both to actively interact with it and to calibrate the interaction itself. Besides, the early sensory and sensorimotor mechanisms, which at first glance may appear to be simple processes, are grounded on highly structured and complex algorithms that are far from being understood and modeled. By exploiting an early synergy between sensing modules and motor control, the loop between action and perception comes to be not just closed at the system level, but shortened at an inner one. This would allow not only the emergence of spatial competences but also their continuous adaptation to changes in the environment or in the body, which could modify its interactions with the world. 
The aim of this special issue is to survey the state of the art of methodologies, concepts, algorithms and techniques that can serve as bricks with which to build and develop artificial agents with such spatial competence; perceptual and cognitive understanding of space should emerge from sensorimotor exercise. The action-perception loop has never been so close! Paper Submission: We invite original contributions that provide novel solutions to address the relevant topics, including but not limited to:
- Theoretical or practical aspects of machine sensing (for computer vision, robot audition, artificial touch, etc.)
- Multisensory data fusion, processing, learning and integration
- Computational neural modeling
- Embodied robotics: perception, cognition, and behaviors
- Machine learning for sensorimotor control and intelligence
- Neural networks: models, theories, learning algorithms and applications
- Engineering application of sensorimotor intelligence to pattern recognition, computer vision, speech recognition, human-robot interaction.
As a follow-up of the IJCNN 2013 special session, we invite in particular the special session participants to submit substantially extended versions of their conference submissions to go through a new peer review process, together with contributions not published in the conference proceedings. Papers should be typeset according to the format instructions for the Robotics and Autonomous Systems Journal, available on the Elsevier web site (http://www.elsevier.com/journals/robotics-and-autonomous-systems/0921-8890/guide-for-authors). 
Important Dates
- February 15, 2014 (extended from January 31, 2014): Paper submission deadline
- March 31, 2014: Notification of paper acceptance
- April 30, 2014: Camera-ready paper submission
- Late Spring 2014: Expected publication date

Guest Editors

Agostino Gibaldi, agostino.gibaldi at unige.it
Department of Informatics, Bioengineering, Robotics and System Engineering, University of Genoa, Italy
Advanced Research Center on Electronic Systems (ARCES), University of Bologna, Italy

Agostino Gibaldi received his degree in Biomedical Engineering from the University of Genoa, Italy, in 2007, and his Ph.D. in 2011. Since his master's thesis he has been with the Physical Structure of Perception and Computation (PSPC) Group, where he is currently a postdoc. Recently, he joined the Computer Vision Group of the Advanced Research Center on Electronic Systems (ARCES), working on data analysis and computer-aided diagnosis for CT perfusion related to tumour lesions. His research interests concern cortical models of the V1, MT and MST areas, in relation to disparity estimation, the control of vergence eye movements, and optic flow analysis for navigation, with real-time implementations on robot platforms so as to obtain active behaviours and adaptation to the environment. He has also worked on neural networks and learning, eye-tracking algorithms, camera calibration, 3D data modelling for virtual reality, CT perfusion and image registration.

Silvio P. Sabatini, silvio.sabatini at unige.it
Department of Informatics, Bioengineering, Robotics and System Engineering, University of Genoa, Italy
http://www.pspc.unige.it/~silvio/home/

Silvio P. Sabatini received the Laurea Degree in Electronics Engineering and the Ph.D. in Computer Science from the University of Genoa in 1992 and 1996. He is currently Associate Professor of Bioengineering at the Department of Informatics, Bioengineering, Robotics and System Engineering of the University of Genoa.
In 1995 he promoted the creation of the "Physical Structure of Perception and Computation" (PSPC) Lab, which develops models that capture the "physicalist" nature of the information processing occurring in the visual cortex, to understand the signal-processing strategies adopted by the brain and to build novel algorithms and architectures for artificial perception machines. His research interests relate to visual coding and multidimensional signal representation, early-cognitive models for visually-guided behavior, and robot vision. He is the author of more than 100 papers in peer-reviewed journals, book chapters and international conference proceedings.

Sylvain Argentieri, sylvain.argentieri at upmc.fr
Institute for Intelligent Systems and Robotics (ISIR), Université Pierre et Marie Curie, Paris, France
http://www.isir.upmc.fr/index.php?op=view_profil&id=113&old=N&lang=fr

Sylvain Argentieri received his Master's degrees in Robotics from the Pierre et Marie Curie University, Paris, and in Electronics from the École Normale Supérieure, Cachan, France, in 2003. He then received his Ph.D. in Computer Science from Paul Sabatier University, Toulouse, France, in 2006. After two years as an Assistant Professor at LAAS-CNRS (Laboratory for Analysis and Architecture of Systems) at the same university, he has been an Associate Professor in the "Active Multimodal Perception" group of the Institute for Intelligent Systems and Robotics at Pierre et Marie Curie University since 2008. In 2002 he also obtained the highest teaching diploma in France (Agrégation externe) in Electronics. His research interests relate to artificial audition in a robotics context, from array-processing methods to binaural approaches, for sound source localization, speaker recognition, human-robot interaction, etc.
He is also interested in active approaches to multimodal perception and sensorimotor integration.

Zhengping Ji, jizhengp at gmail.com
Advanced Image Research Laboratory (AIRL), Samsung, Pasadena, CA, U.S.A.
http://cnls.lanl.gov/External/people/Zhengping_Ji.php

Zhengping Ji received his B.S. degree in Electrical Engineering from Sichuan University, China, in 2003 and his Ph.D. in Computer Science from Michigan State University, USA, in 2008. From 2009 to 2010, he held a postdoctoral fellow position at the Center for the Neural Basis of Cognition, Carnegie Mellon University, working on the DARPA RealNose Project. After that, he spent two years at Los Alamos National Laboratory, where he was a Research Associate conducting research on computational modelling of the brain's visual pathways. He is now a Senior Research Scientist at the Advanced Image Research Laboratory of Samsung Electronics. His current research interests lie in computer vision, computational neuroscience and machine learning. Specifically, he seeks to develop a series of deep learning models that generate cortex-like hierarchical sparse representations for a variety of vision tasks, including generic object recognition, object detection and segmentation, image denoising and compression, and vision-based autonomous navigation. He is Vice Chair of the Task Force on Bio-Inspired Self-Organizing Collective Systems of the IEEE Computational Intelligence Society, and a committee member of the Brain-Mind Institute, USA.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jms at isep.ipp.pt Wed Jan 22 07:04:54 2014 From: jms at isep.ipp.pt (Jorge M.
Santos) Date: Wed, 22 Jan 2014 12:04:54 -0000 Subject: Connectionists: Post-Doc Position: Project "Reusable Deep Neural Networks: Applications to biomedical data" In-Reply-To: <52DF9E67.5050902@ua.pt> References: <52DF9E67.5050902@ua.pt> Message-ID: <063b01cf176a$29c03bb0$7d40b310$@isep.ipp.pt>

Dear colleagues, I kindly ask you to forward this e-mail to anyone who may be interested. One post-doc position is available at INEB (www.ineb.up.pt), Portugal, under the project "Reusable Deep Neural Networks: Applications to biomedical data". For more details please visit www.ineb.up.pt - "Open positions". You can also find information at http://www.eracareers.pt/opportunities/index.aspx?task=global&jobId=41805 or http://www.eracareers.pt/opportunities/index.aspx?task=showAnuncioOportunities&jobId=41805&lang=pt&idc=1 The application deadline is February 1.

Best wishes, Luís Silva

From ksatoh at nii.ac.jp Wed Jan 22 23:25:08 2014 From: ksatoh at nii.ac.jp (Ken Satoh) Date: Thu, 23 Jan 2014 13:25:08 +0900 Subject: Connectionists: CALL for Workshop Proposals: JSAI-isAI 2014 in Yokohama, Japan Message-ID: <28DB2EB48FB849B3924A901625E9C6BF@niiks2010PC>

The Sixth JSAI International Symposia on AI (JSAI-isAI 2014)
November 23 - 25, 2014
Sponsored by: The Japanese Society for Artificial Intelligence (JSAI)
Venue: Raiousha Building, Keio University, Kanagawa
http://www.ai-gakkai.or.jp/isai/

Call for Workshop Proposals
Submission deadline: March 23, 2014
Submit a proposal to "JSAI-isAI [at] ai-gakkai.or.jp" by e-mail

The sixth JSAI International Symposia on AI (JSAI-isAI) will take place at Keio University, Kanagawa, on November 23rd - 25th, 2014. JSAI-isAI is an event that hosts several international workshops at the same location, in succession to the international workshops co-located with JSAI annual conferences since 2001. JSAI invites proposals for the sixth JSAI-isAI.
JSAI-isAI brings together a set of workshops at a common site, providing a unique and intimate forum for colleagues in a given discipline. JSAI-isAI also provides an important opportunity for AI researchers to get together and share their knowledge. In previous symposia, selected papers were published in the Springer LNAI series (http://www.springer.com/series/1244).

Prospective workshop organizers should send a proposal (maximum three pages) with the following sections to the JSAI-isAI 2014 Committee (JSAI-isAI [at] ai-gakkai.or.jp):
- Title of the workshop
- Objectives and scope
- Names and contacts of key organizers and a tentative list of the program committee members
- Expected number of papers, attendees, and preliminary workshop format

Proposals will be reviewed by the JSAI-isAI 2014 Committee. The form for workshop proposals is shown below.
------------------------------------------------------------
Workshop title:
Abstract (within 400 words): #Describe objective and scope
Expected number of papers:
Expected number of attendees:
Preliminary workshop format: #Describe whether the workshop will include panels, posters, etc.
Past experiences in JSAI-isAI (if any): #Describe whether you have organized workshops at JSAI-isAI before and how many participants you had
Information of Workshop leader (1)
Name:
Affiliation:
Postal address:
Telephone:
Fax:
e-mail:
Experiences with conferences/workshops (if any):
Information of Workshop co-leader (2) (if any)
Name:
Affiliation:
Postal address:
Telephone:
Fax:
e-mail:
Experiences with conferences/workshops (if any):
The tentative list of members in the program committee (3):
------------------------------------------------------------
Important Dates
Workshop Proposal Due: 23 March 2014
Workshop Notification: 9 April 2014
Release of Workshop Call for Papers: April 2014
Workshop Submission Deadline: August 2014
Workshop Author Notification: September 2014
Workshop Camera-ready: October 2014
Workshop Dates: 23-25 November 2014

Notes: The organizers and chairs of the workshop shall have full control over the call for papers, the forming of program committees, the review and selection of papers, and the planning of the workshop program. Furthermore, the organizers and chairs of the workshop should specify on the web page that the workshop is `with the support of The Japanese Society for Artificial Intelligence' and `in association with the Sixth JSAI International Symposia on AI (JSAI-isAI 2014).' The organizers and chairs of the workshop are also responsible for preparing a short review of the workshop, to be printed in the journal of JSAI.
--------- From rsousa at dcc.fc.up.pt Thu Jan 23 10:30:44 2014 From: rsousa at dcc.fc.up.pt (Ricardo Sousa) Date: Thu, 23 Jan 2014 15:30:44 +0000 Subject: Connectionists: [visum2014] - 2nd VISion Understanding and Machine intelligence summer school (Call for Participation) In-Reply-To: <02df01cf12d4$a02e2860$e08a7920$@fe.up.pt> References: <024001cf12bf$30f415f0$92dc41d0$@fe.up.pt> <02df01cf12d4$a02e2860$e08a7920$@fe.up.pt> Message-ID: <52E135A4.8040207@dcc.fc.up.pt>

Call for Participation
2nd VISion Understanding and Machine intelligence summer school
http://www.fe.up.pt/visum/
http://www.youtube.com/watch?v=gCt8xVvUP2o

We invite everyone interested in computer vision to attend the 2nd VISion Understanding and Machine intelligence summer school, 3-10 July.

Important Dates
Application Deadline: March 1, 2014
Decision: March 9, 2014
Early Registration: April 10, 2014
Late Registration: May 10, 2014
Summer School: July 3, 2014

visum will comprise three main tracks, each with extensive practical "hands-on" sessions:
- Fundamental subjects
- Examples of computer vision applications
- Industrial session

Topics
- Machine learning in computer vision - Professor Jaime S. Cardoso, Universidade do Porto
- Image segmentation - Professor Cristian Sminchisescu, Lund University
- Human-computer interaction - Professor Russell Beale, University of Birmingham
- RGB-D cameras - Doctor Juergen Sturm, Technical University of Munich
- Biometrics - Professor Hugo Proença, University of Beira Interior
- Feature detectors - Professor Vincent Lepetit, École Polytechnique Fédérale de Lausanne
- Applications - Professor André Marçal, Universidade do Porto

Industry Track
- Siemens S.A. - Eng. João Dias
- E-Commerce Sonae MC - Eng. Nuno Almeida
- Albatroz Engineering - Eng. Sérgio Copeto

Venue
visum will take place in Porto, Portugal's second-largest city, European Best Destination 2012 and one of Lonely Planet's top 10 destinations for 2013.
Here you can find famous baroque-style monuments and the world-renowned Port Wine cellars, always with the World Heritage Douro River as the backdrop of this youthful, active and charming city. Visum's program includes social activities. Please visit our webpage for up-to-date information: http://www.fe.up.pt/visum/. We are looking forward to your participation!

visum organising committee
http://www.fe.up.pt/visum
visum at fe.up.pt
Follow us on:
Facebook: http://www.facebook.com/pages/visum/402527539813446
GooglePlus: https://plus.google.com/104076275960053201744/posts
Twitter: http://www.twitter.com/visumschool

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 246 bytes Desc: OpenPGP digital signature URL:

From weng at cse.msu.edu Thu Jan 23 13:08:30 2014 From: weng at cse.msu.edu (Juyang Weng) Date: Thu, 23 Jan 2014 13:08:30 -0500 Subject: Connectionists: Workshop Progress in Brain-Like Computing, February 5-6 2014 In-Reply-To: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> Message-ID: <52E15A9E.5080207@cse.msu.edu>

Dear Anders,

Interesting topic about the brain! But "Brain-Like Computing" is misleading, because neural networks have been around for at least 70 years.

I quote: "We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing."

What are the underlying principles? I am concerned that projects like "Brain-Like Computing" avoid essential issues: the wide gap between neuron-like computing and well-known highly integrated brain functions. Continuing this avoidance would again create a bad name for "brain-like computing", just as such behavior did for "neural networks".
Henry Markram criticized IBM's brain project, which does miss essential brain principles, but has he published such principles? Will modeling individual neurons more and more precisely explain highly integrated brain functions? From what I know, definitely not, by far.

Has any of your 10 speakers published any brain-scale theory that bridges the wide gap? Are you aware of any such published theories?

I am sorry for giving a CC to the list, but many on the list said that they like to hear discussions instead of just event announcements.

-John

On 1/13/14 12:14 PM, Anders Lansner wrote:
>
> Workshop on Brain-Like Computing, February 5-6 2014
>
> The exciting prospects of developing brain-like information processing
> is one of the Deans Forum focus areas.
> As a means to encourage progress in this research area a Workshop is
> arranged February 5th-6th 2014 on KTH campus in Stockholm.
>
> The human brain excels over contemporary computers and robots in
> processing real-time unstructured information and uncertain data as
> well as in controlling a complex mechanical platform with multiple
> degrees of freedom like the human body. Intense experimental research
> complemented by computational and informatics efforts is gradually
> increasing our understanding of underlying processes and mechanisms in
> small animal and mammalian brains and is beginning to shed light on
> the human brain. We are now approaching the point when our knowledge
> will enable successful demonstrations of some of the underlying
> principles in software and hardware, i.e. brain-like computing.
>
> This workshop assembles experts, from the partners and also other
> leading names in the field, to provide an overview of the
> state-of-the-art in theoretical, software, and hardware aspects of
> brain-like computing.
>
> List of speakers:
>
> Giacomo Indiveri - ETH Zürich
> Abigail Morrison - Forschungszentrum Jülich
> Mark Ritter - IBM Watson Research Center
> Guillermo Cecchi - IBM Watson Research Center
> Anders Lansner - KTH Royal Institute of Technology
> Ahmed Hemani - KTH Royal Institute of Technology
> Steve Furber - University of Manchester
> Kazuyuki Aihara - University of Tokyo
> Karlheinz Meier - Heidelberg University
> Andreas Schierwagen - Leipzig University
>
> For signing up to the Workshop please use the registration form found
> at http://bit.ly/1dkuBgR
>
> You need to sign up before January 28th.
>
> Web page:
> http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038
>
> ******************************************
>
> Anders Lansner
> Professor in Computer Science, Computational biology
> School of Computer Science and Communication
> Stockholm University and Royal Institute of Technology (KTH)
> ala at kth.se, +46-70-2166122
>
> ------------------------------------------------------------------------
>
> This email message contains no virus or other malicious code, as avast! Antivirus is active.
>

--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
428 S Shaw Ln Rm 3115
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/
----------------------------------------------
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From brian.mingus at colorado.edu Thu Jan 23 13:48:18 2014 From: brian.mingus at colorado.edu (Brian J Mingus) Date: Thu, 23 Jan 2014 11:48:18 -0700 Subject: Connectionists: Workshop Progress in Brain-Like Computing, February 5-6 2014 In-Reply-To: <52E15A9E.5080207@cse.msu.edu> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> Message-ID: > I am sorry for giving a CC to the list, but many on the list said that they like to hear discussions instead of just event announcements. For clarification, active discussion on this mailing list is strongly encouraged, and used to be much more common. The 5,000 members of connectionists will never be in the same room together, so please take the opportunity this list provides to have more informal, public conversations than is possible in any other venue. Thanks, Brian Mingus Connectionists moderator http://grey.colorado.edu/mingus On Thu, Jan 23, 2014 at 11:08 AM, Juyang Weng wrote: > Dear Anders, > > Interesting topic about the brain! But Brain-Like Computing is misleading > because neural networks have been around for at least 70 years. > > I quote: "We are now approaching the point when our knowledge will enable > successful demonstrations of some of the underlying principles in software > and hardware, i.e. brain-like computing." > > What are the underlying principles? I am concerned that projects like > "Brain-Like Computing" avoid essential issues: > the wide gap between neuron-like computing and well-known highly > integrated brain functions. > Continuing this avoidance would again create bad names for "brain-like > computing", just such behaviors did for "neural networks". > > Henry Markram criticized IBM's brain project which does miss essential > brain principles, but has he published such principles? > Modeling individual neurons more and more precisely will explain highly > integrated brain functions? From what I know, definitely not, by far. 
> > Has any of your 10 speakers published any brain-scale theory that bridges > the wide gap? Are you aware of any such published theories? > > I am sorry for giving a CC to the list, but many on the list said that > they like to hear discussions instead of just event announcements. > > -John > > > > On 1/13/14 12:14 PM, Anders Lansner wrote: > > Workshop on Brain-Like Computing, February 5-6 2014 > > The exciting prospects of developing brain-like information processing is > one of the Deans Forum focus areas. > As a means to encourage progress in this research area a Workshop is > arranged February 5th-6th 2014 on KTH campus in Stockholm. > > The human brain excels over contemporary computers and robots in > processing real-time unstructured information and uncertain data as well as > in controlling a complex mechanical platform with multiple degrees of > freedom like the human body. Intense experimental research complemented by > computational and informatics efforts are gradually increasing our > understanding of underlying processes and mechanisms in small animal and > mammalian brains and are beginning to shed light on the human brain. We are > now approaching the point when our knowledge will enable successful > demonstrations of some of the underlying principles in software and > hardware, i.e. brain-like computing. > > This workshop assembles experts, from the partners and also other leading > names in the field, to provide an overview of the state-of-the-art in > theoretical, software, and hardware aspects of brain-like computing. 
> List of speakers:
>
> Giacomo Indiveri - ETH Zürich
> Abigail Morrison - Forschungszentrum Jülich
> Mark Ritter - IBM Watson Research Center
> Guillermo Cecchi - IBM Watson Research Center
> Anders Lansner - KTH Royal Institute of Technology
> Ahmed Hemani - KTH Royal Institute of Technology
> Steve Furber - University of Manchester
> Kazuyuki Aihara - University of Tokyo
> Karlheinz Meier - Heidelberg University
> Andreas Schierwagen - Leipzig University
>
> For signing up to the Workshop please use the registration form found at
> http://bit.ly/1dkuBgR
>
> You need to sign up before January 28th.
>
> Web page:
> http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038
>
> ******************************************
>
> Anders Lansner
> Professor in Computer Science, Computational biology
> School of Computer Science and Communication
> Stockholm University and Royal Institute of Technology (KTH)
> ala at kth.se, +46-70-2166122
>
> ------------------------------
>
> This email message contains no virus or other malicious code, as avast! Antivirus is active.
>
> --
> Juyang (John) Weng, Professor
> Department of Computer Science and Engineering
> MSU Cognitive Science Program and MSU Neuroscience Program
> 428 S Shaw Ln Rm 3115
> Michigan State University
> East Lansing, MI 48824 USA
> Tel: 517-353-4388
> Fax: 517-432-1061
> Email: weng at cse.msu.edu
> URL: http://www.cse.msu.edu/~weng/
> ----------------------------------------------

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From Colin.Wise at uts.edu.au Thu Jan 23 16:31:55 2014 From: Colin.Wise at uts.edu.au (Colin Wise) Date: Fri, 24 Jan 2014 08:31:55 +1100 Subject: Connectionists: REMINDER - AAI Short Course - Advanced Data Analytics - an Introduction - Wednesday 29 January 2014 Message-ID: <8112393AA53A9B4A9BDDA6421F26C68A016E461F921F@MAILBOXCLUSTER.adsroot.uts.edu.au>

Dear Colleague,

REMINDER - AAI Short Course - Advanced Data Analytics - an Introduction - Wednesday 29 January 2014
https://shortcourses-bookings.uts.edu.au/Clientview/Schedules/ScheduleDetail.aspx?ScheduleID=1540&EventID=1273

The AAI short course 'Advanced Data Analytics - an Introduction' may well be of interest to you, your organisation and your key personnel. This introductory data analytics short course will provide an early and rewarding understanding of the level of analytics which your organisation and your people should be seeking.

Course outcomes
Upon completion of this course students will:
* Understand why advanced data analytics is essential to your business success
* Understand the key terms and concepts used in advanced data analytics
* Understand the relations among big data, cloud computing and analytics
* Be familiar with basic statistical skills for data analytics, including descriptive analysis, regression, and multivariate data analysis
* Learn the basics of data mining and data warehousing, visualization and reporting, such as supervised vs. unsupervised methods, clustering, association rules and frequent pattern mining
* Know key techniques in machine learning, such as parametric and non-parametric models, learning and inference, maximum-likelihood estimation, and Bayesian approaches
* Be given an introduction to social media analytics, multimedia analytics, and the real projects and case studies conducted at AAI

Future short courses on Data Analytics and Big Data may be viewed at http://analytics.uts.edu.au/shortcourses/structure.html

First in a series of advanced data analytics short
courses - register here.

Happy to discuss at your convenience.

Regards,

Colin Wise
Operations Manager
Advanced Analytics Institute (AAI)
Blackfriars Building 2, Level 1
University of Technology, Sydney (UTS)
Email: Colin.Wise at uts.edu.au
Tel. +61 2 9514 9267 M. 0448 916 589
AAI: www.analytics.uts.edu.au/

AAI Email Policy - should you wish not to receive these communications on Data Analytics Learning, please reply to our email (sender) with UNSUBSCRIBE in the Subject. We will delete you from our database. Thank you for your patience and consideration.

UTS CRICOS Provider Code: 00099F

DISCLAIMER: This email message and any accompanying attachments may contain confidential information. If you are not the intended recipient, do not read, use, disseminate, distribute or copy this message or attachments. If you have received this message in error, please notify the sender immediately and delete this message. Any views expressed in this message are those of the individual sender, except where the sender expressly, and with authority, states them to be the views of the University of Technology Sydney. Before opening any attachments, please check them for viruses and defects. Think. Green. Do. Please consider the environment before printing this email.

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From mail at jan-peters.net Fri Jan 24 11:58:00 2014 From: mail at jan-peters.net (Jan Peters) Date: Fri, 24 Jan 2014 17:58:00 +0100 Subject: Connectionists: IROS explicitly encourages Machine Learning Papers Message-ID: <51EA764F-A600-48A4-A149-D09A0861E08E@jan-peters.net>

Hi there,

IROS 2014 explicitly encourages papers on neural networks & machine learning to be submitted, especially on the topics:
- Neural and Fuzzy Control
- Neurorobotics
- Robot Reinforcement Learning
- Imitation Learning and Inverse Reinforcement Learning
- Learning Control
- Model Learning
- Motor Skill Learning
- Robot Learning
- Learning from Demonstration
- Visual Learning

Please see the more general IROS 2014 Call for Papers (Deadline: Feb. 5, 2014) below. Hope to see you in Chicago!

Best wishes,
-Jan Peters

==================================================================================
2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014)
Chicago, Illinois, USA, September 14-18, 2014
URL: http://www.iros2014.org/
==================================================================================
The organizers of IROS 2014 invite you to submit your papers, and your workshop/tutorial proposals, to the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems.

IMPORTANT DATES:
* Paper Submissions: Feb. 5, 2014
* Workshop/Tutorial Proposals: Feb. 5, 2014
* Notification of workshop/tutorial acceptance: March 21, 2014
* Notification of paper acceptance: May 21, 2014
* Final papers due: June 21, 2014
* Conference: Sept. 14-18, 2014

SUBMISSION INSTRUCTIONS: http://www.iros2014.org/contributing/call-for-papers
Every accepted paper at IROS 2014 will be allocated both a short oral presentation and an interactive presentation, maximizing exposure and opportunities for interaction with colleagues.
================================================================================== VENUE: The venue for IROS 2014 is the historic Palmer House Hilton, close by Millennium Park, Lake Michigan, Navy Pier, the Magnificent Mile, the Museum Campus, and other Chicago attractions. The program will integrate workshops and tutorials, plenaries, mini-plenaries, interactive sessions, exhibits, robot demonstrations, an industrial innovation showcase, and social activities for attendees and guests. ================================================================================== IROS 2014 Conference Committee: * General Chair: Kevin Lynch (Northwestern University) * Program Chair: Lynne Parker (University of Tennessee) * Conference Paper Review Board Editor-in-Chief: Wolfram Burgard (University of Freiburg) * Workshops and Tutorials Chair: Tim Bretl (University of Illinois at Urbana-Champaign) * Interactive Sessions: Ani Hsieh (Drexel University) * Exhibits Chair: Martin Buehler (Covidien) ================================================================================== Inquiries? Please email info at iros2014.org. ================================================================================== From pfeiffer at ini.phys.ethz.ch Fri Jan 24 12:35:26 2014 From: pfeiffer at ini.phys.ethz.ch (Michael Pfeiffer) Date: Fri, 24 Jan 2014 18:35:26 +0100 Subject: Connectionists: MSc program in Neural Systems and Computation - University of Zurich and ETH Zurich Message-ID: <52E2A45E.2050904@ini.phys.ethz.ch> We are inviting applications for the MSc program in Neural Systems and Computation, an interdisciplinary program offered as a Joint Master Program by the University of Zurich and ETH Zurich, Switzerland. The program offers a theoretical and laboratory training in neural computation and systems neuroscience. It offers hands-on knowledge of data gathering, analysis and scientific presentation. 
Students join an international and interdisciplinary research community with expertise in neuroinformatics, advanced experimental techniques and neuromorphic engineering. Further information can be found on our homepage http://www.nsc.uzh.ch. We offer a specialized full-time Masters program open to students with a Bachelor's degree in the following disciplines: neurosciences, information technology, electrical engineering, biology, physics, computer sciences, chemistry, mathematics, and mechanical/chemical/control engineering. The core courses (all offered in English) provide a common foundation for students with different educational backgrounds, and cover the following: 1. Systems Neurosciences 2. Neural Computation and Theoretical Neuroscience 3. Neurotechnologies and Neuromorphic Engineering The application deadline for students starting in Fall Semester 2014 is *February 15th 2014*. Details about the application process and required documents can be found here: http://www.nsc.uzh.ch/?page_id=10 The program is affiliated with the Mathematics and Natural Sciences Faculty (MNF) at the University of Zurich (UZH) and the Information Technology and Electrical Engineering Department (D-ITET) of the ETH Zurich. All applications are handled by the University of Zurich. Application documents should be sent to nsc at ini.uzh.ch. Michael Pfeiffer -- ========================================= Dr. Michael Pfeiffer Group leader, Program coordinator NSC Institute of Neuroinformatics University of Zurich and ETH Zurich Winterthurerstrasse 190 CH-8057 Zurich, Switzerland Tel. +41 44 635 30 45 Fax +41 44 635 30 53 pfeiffer (at) ini.phys.ethz.ch http://www.ini.uzh.ch/~pfeiffer/ ========================================= -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From weng at cse.msu.edu Fri Jan 24 12:27:01 2014 From: weng at cse.msu.edu (Juyang Weng) Date: Fri, 24 Jan 2014 12:27:01 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> Message-ID: <52E2A265.9000306@cse.msu.edu>

Dear Matthew:

My apologies if my words are direct, so that people with short attention spans can quickly get my points. I do respect you.

You wrote: "to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks." Your statement is absolutely not true. Your term "brain-like way" is as old as "brain-like computing". Read about the 14 neurocomputers built by 1988 in Robert Hecht-Nielsen, "Neurocomputing: picking the human brain", IEEE Spectrum 25(3), March 1988, pp. 36-41. Hardware will not solve the fundamental problem of our current severe lack of understanding of the brain, no matter how many computers are linked together. Neither will the current "Big Data" fanfare from the NSF in the U.S. The IBM brain project has similar fundamental flaws, and the IBM team lacks key experts. Some of the NSF managers have been turning a blind eye to breakthrough work on brain modeling for over a decade, yet they want to pour more taxpayer money into the "Big Data" fanfare and other "try again" fanfares. It is a scientific shame for the NSF of a developed country like the U.S. to play such politics without real science, leading a large developing country like China to echo "Big Data" as well. "Big Data" was called "Large Data", well known in Pattern Recognition for many years. Stop playing shameful politics in science!

You wrote: "Nobody is claiming a `brain-scale theory that bridges the wide gap,' or even close."
To say that, you cannot have read the book Natural and Artificial Intelligence. You are falling behind the literature as badly as some of our NSF project managers. With their lack of knowledge, they did not understand that the "bridge" was already in print, on their desks and in the literature. -John On 1/23/14 6:15 PM, Matthew Cook wrote: > Dear John, > > I think all of us on this list are interested in brain-like computing, > so I don't understand your negativity on the topic. > > Many of the speakers are involved in efforts to build hardware that > works in a more brain-like way than conventional computers do. This > is not what is usually meant by research in neural networks. I > suspect the phrase "brain-like computing" is intended as an umbrella > term that can cover all of these efforts. > > I think you are reading far more into the announcement than is there. > Nobody is claiming a "brain-scale theory that bridges the wide gap," > or even close. To the contrary, the announcement is very cautious, > saying that intense research is "gradually increasing our > understanding" and "beginning to shed light on the human brain". In > other words, the research advances slowly, and we are at the > beginning. There is certainly no claim that any of the speakers has > finished the job. > > Similarly, the announcement refers to "successful demonstration of > some of the underlying principles [of the brain] in software and > hardware", which implicitly acknowledges that we do not have all the > principles. There is nothing like a claim that anyone has enough > principles to "explain highly integrated brain functions". > > You are concerned that this workshop will avoid the essential issue of > the wide gap between neuron-like computing and highly integrated brain > functions. What makes you think it will avoid this?
We are all > interested in filling this gap, and the speakers (well, the ones who I > know) all either work on this, or work on supporting people who work > on this, or both. > > This looks like it will be a very nice workshop, with talks from > leaders in the field on a variety of topics, and I wish I were able to > attend it. > > Matthew > > On Jan 23, 2014, at 7:08 PM, Juyang Weng wrote: > >> Dear Anders, >> >> An interesting topic about the brain! But "Brain-Like Computing" is >> misleading, because neural networks have been around for at least 70 years. >> >> I quote: "We are now approaching the point when our knowledge will >> enable successful demonstrations of some of the underlying principles >> in software and hardware, i.e. brain-like computing." >> >> What are the underlying principles? I am concerned that projects >> like "Brain-Like Computing" avoid an essential issue: >> the wide gap between neuron-like computing and well-known highly >> integrated brain functions. >> Continuing this avoidance would again create a bad name for >> "brain-like computing", just as such behaviors did for "neural networks". >> >> Henry Markram criticized IBM's brain project, which does miss >> essential brain principles, but has he published such principles? >> Will modeling individual neurons more and more precisely explain >> highly integrated brain functions? From what I know, definitely >> not, far from it. >> >> Has any of your 10 speakers published any brain-scale theory that >> bridges the wide gap? Are you aware of any such published theories? >> >> I am sorry for giving a CC to the list, but many on the list said >> that they like to hear discussions instead of just event announcements. >> >> -John >> >> On 1/13/14 12:14 PM, Anders Lansner wrote: >>> >>> Workshop on Brain-Like Computing, February 5-6 2014 >>> >>> The exciting prospect of developing brain-like information >>> processing is one of the Deans Forum focus areas. >>> As a means to encourage progress in this research area, a Workshop is >>> arranged February 5th-6th 2014 on the KTH campus in Stockholm. >>> >>> The human brain excels over contemporary computers and robots in >>> processing real-time unstructured information and uncertain data, as >>> well as in controlling a complex mechanical platform with multiple >>> degrees of freedom like the human body. Intense experimental >>> research, complemented by computational and informatics efforts, is >>> gradually increasing our understanding of underlying processes and >>> mechanisms in small animal and mammalian brains and is beginning to >>> shed light on the human brain. We are now approaching the point when >>> our knowledge will enable successful demonstrations of some of the >>> underlying principles in software and hardware, i.e. brain-like >>> computing. >>> >>> This workshop assembles experts from the partner institutions, as >>> well as other leading names in the field, to provide an overview of >>> the state of the art in theoretical, software, and hardware aspects >>> of brain-like computing.
>>>
>>> List of speakers:
>>>
>>> Giacomo Indiveri – ETH Zürich
>>> Abigail Morrison – Forschungszentrum Jülich
>>> Mark Ritter – IBM Watson Research Center
>>> Guillermo Cecchi – IBM Watson Research Center
>>> Anders Lansner – KTH Royal Institute of Technology
>>> Ahmed Hemani – KTH Royal Institute of Technology
>>> Steve Furber – University of Manchester
>>> Kazuyuki Aihara – University of Tokyo
>>> Karlheinz Meier – Heidelberg University
>>> Andreas Schierwagen – Leipzig University
>>>
>>> For signing up to the Workshop, please use the registration form found at http://bit.ly/1dkuBgR
>>> You need to sign up before January 28th.
>>> Web page: http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038
>>>
>>> ******************************************
>>> Anders Lansner
>>> Professor in Computer Science, Computational Biology
>>> School of Computer Science and Communication
>>> Stockholm University and Royal Institute of Technology (KTH)
>>> ala at kth.se, +46-70-2166122
>>>
>>> This email message contains no viruses or other malicious code, because avast! Antivirus is active.
>>> >> -- >> Juyang (John) Weng, Professor >> Department of Computer Science and Engineering >> MSU Cognitive Science Program and MSU Neuroscience Program >> 428 S Shaw Ln Rm 3115 >> Michigan State University >> East Lansing, MI 48824 USA >> Tel: 517-353-4388 >> Fax: 517-432-1061 >> Email: weng at cse.msu.edu >> URL: http://www.cse.msu.edu/~weng/ >> ---------------------------------------------- > -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- From weng at cse.msu.edu Fri Jan 24 15:24:02 2014 From: weng at cse.msu.edu (Juyang Weng) Date: Fri, 24 Jan 2014 15:24:02 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> Message-ID: <52E2CBE2.6090803@cse.msu.edu> Yes, Gary, you are correct politically: do not upset the "emperor", since he is always right and he never falls behind the literature. But then no clear message can ever get across. Falling behind the literature is still a fact. Moreover, the entire research community that does brain research falls badly behind the literature of the necessary disciplines. The current U.S. infrastructure of this research community does not fit the brain subject it studies at all! This is not a joking matter. We need to wake up, please. Azriel Rosenfeld criticized the entire computer vision field in his invited talk at CVPR in the early 1980s: "just doing business as usual" and "more or less the same".
However, the entire computer vision field still has not woken up after 30 years! As another example, I respect your colleague Terry Sejnowski, but I must openly say that I object to his "we need more data" as the key message for the U.S. BRAIN Project. This is another example of "just doing business as usual", so that nobody will be against you. Several major disciplines are closely related to the brain, but the scientific community is still very much fragmented, not willing to wake up. Some of our government officials only say superficial words like "Big Data" because that is what we like to hear. This cost is too high for our taxpayers. -John On 1/24/14 2:19 PM, Gary Cottrell wrote: > Hi John - > > It's great that you have an over-arching theory, but if you want people to read it, it would be better not to disrespect people in your emails. You say you respect Matthew, but then you accuse him of falling behind in the literature because he hasn't read your book. Politeness (and modesty!) will get you much farther than the tone you have taken. > > g. > > On Jan 24, 2014, at 6:27 PM, Juyang Weng wrote: > [...] -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- From gary at eng.ucsd.edu Fri Jan 24 14:19:06 2014 From: gary at eng.ucsd.edu (Gary Cottrell) Date: Fri, 24 Jan 2014 20:19:06 +0100 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <52E2A265.9000306@cse.msu.edu> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> Message-ID: Hi John - It's great that you have an over-arching theory, but if you want people to read it, it would be better not to disrespect people in your emails. You say you respect Matthew, but then you accuse him of falling behind in the literature because he hasn't read your book. Politeness (and modesty!) will get you much farther than the tone you have taken. g. On Jan 24, 2014, at 6:27 PM, Juyang Weng wrote: > [...] [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] Gary Cottrell 858-534-6640 FAX: 858-534-7029 My schedule is here: http://tinyurl.com/b7gxpwo Computer Science and Engineering 0404 IF USING FED EX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. "Physical reality is great, but it has a lousy search function." -Matt Tong "Only connect!" -E.M. Forster "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton "There is nothing objective about objective functions" - Jay McClelland "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ From brian.mingus at colorado.edu Fri Jan 24 16:38:03 2014 From: brian.mingus at colorado.edu (Brian J Mingus) Date: Fri, 24 Jan 2014 14:38:03 -0700 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <52E2CBE2.6090803@cse.msu.edu> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> Message-ID: Personally, I think Big Data is a great thing, and I think we need well-funded people working at all levels of analysis. For example, Big Data will allow us to boot up a rather high-fidelity brain on a supercomputer. This will help us determine the extent to which we can compress that representation into a simpler model that can perform the same essential computations. We can probably compress much of what the brain does into higher-level models; however, you can implement all sorts of clever machinery using neurons, and we may find that certain things (such as the intricate circuitry of the cerebellum) should not be compressed too much. Unless it's a Kalman filter, in which case we may choose to swap it out for one. Peter Norvig has co-authored a great paper called The Unreasonable Effectiveness of Data, which you can find here: http://goo.gl/klOZGA.
It is focused on the fact that when you have gobs of data the algorithm doesn't matter as much, but I think it also goes to show that Big Data will make figuring out how the brain works simpler. Going further, having lots of data is a necessary requirement for approximating the Minimum Description Length model of the brain. This MDL model will compress irrelevant detail while leaving relevant detail intact. In order to demonstrate that our MDL model generalizes to real brains, we'll need as much brain data as we can get our hands on. https://en.wikipedia.org/wiki/Minimum_description_length

We also need adequate funding for detailed single-neuron neurophysiological models, middle-ground attractor networks, more abstract normalized models, machine learning research in general, fMRI research, EEG research, etc. None of these areas should be neglected; they should all be well funded, as they all provide relevant constraints. Practically speaking, funding is limited, and those who approve grants tend to use hyperbolic discounting just like the rest of us, so it can be challenging to convince them to give you money even when your work is relevant to the overall long-term goal, which leads to frustration. I'd guess, though, that we can all agree that all areas of brain research are extremely valuable and underfunded. Luckily, people are starting to take note of how cool the brain is, and we have a technological singularity on the horizon. If we hold our breath, these problems are probably going to go away.

$.02

Brian Mingus
http://grey.colorado.edu/mingus

On Fri, Jan 24, 2014 at 1:24 PM, Juyang Weng wrote:

> Yes, Gary, you are correct politically, not to upset the "emperor" since he is always right and he never falls behind the literature.
>
> But then no clear message can ever get across. Falling behind the literature is still the fact. Moreover, the entire research community that does brain research falls badly behind the literature of the necessary disciplines.
> The current U.S. infrastructure of this research community does not fit the brain subject it studies at all! This is not a joking matter. We need to wake up, please.
>
> Azriel Rosenfeld criticized the entire computer vision field in his invited talk at CVPR during the early 1980s: "just doing business as usual" and "more or less the same". However, the entire computer vision field still has not woken up after 30 years! As another example, I respect your colleague Terry Sejnowski, but I must openly say that I object to his "we need more data" as the key message for the U.S. BRAIN Project. This is another example of "just doing business as usual", said so that nobody will be against you.
>
> Several major disciplines are closely related to the brain, but the scientific community is still very much fragmented, not willing to wake up. Some of our government officials only say superficial words like "Big Data" because that is what we like to hear. This cost is too high for our taxpayers.
>
> -John
>
> On 1/24/14 2:19 PM, Gary Cottrell wrote:
>
> Hi John -
>
> It's great that you have an over-arching theory, but if you want people to read it, it would be better not to disrespect people in your emails. You say you respect Matthew, but then you accuse him of falling behind in the literature because he hasn't read your book. Politeness (and modesty!) will get you much farther than the tone you have taken.
>
> g.
>
> On Jan 24, 2014, at 6:27 PM, Juyang Weng wrote:
>
> Dear Matthew:
>
> My apology if my words are direct, so that people with short attention spans can quickly get my points. I do respect you.
>
> You wrote: "to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks."
>
> Your statement is absolutely not true. Your term "brain-like way" is as old as "brain-like computing".
Read about the 14 neurocomputers built by 1988 in Robert Hecht-Nielsen, "Neurocomputing: picking the human brain", IEEE Spectrum 25(3), March 1988, pp. 36-41. Hardware will not solve the fundamental problems posed by our current severe lack of understanding of the brain, no matter how many computers are linked together. Neither will the current "Big Data" fanfare from NSF in the U.S. IBM's brain project has similar fundamental flaws, and the IBM team lacks key experts.
>
> Some of the NSF managers have been turning blind eyes to breakthrough work on brain modeling for over a decade, but they want to pour more taxpayers' money into their "Big Data" fanfare and other "try again" fanfares. It is a scientific shame for NSF in a developed country like the U.S. to play such politics without real science, causing another large developing country like China to also echo "Big Data". "Big Data" was called "Large Data", well known in Pattern Recognition for many years. Stop playing shameful politics in science!
>
> You wrote: "Nobody is claiming a `brain-scale theory that bridges the wide gap,' or even close."
>
> To say that, you have not read the book: Natural and Artificial Intelligence. You are falling behind the literature as badly as some of our NSF project managers. With their lack of knowledge, they did not understand that the "bridge" was in print on their desks and in the literature.
>
> -John
>
> On 1/23/14 6:15 PM, Matthew Cook wrote:
>
> Dear John,
>
> I think all of us on this list are interested in brain-like computing, so I don't understand your negativity on the topic.
>
> Many of the speakers are involved in efforts to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks. I suspect the phrase "brain-like computing" is intended as an umbrella term that can cover all of these efforts.
> > I think you are reading far more into the announcement than is there. > Nobody is claiming a "brain-scale theory that bridges the wide gap," or > even close. To the contrary, the announcement is very cautious, saying > that intense research is "gradually increasing our understanding" and > "beginning to shed light on the human brain". In other words, the research > advances slowly, and we are at the beginning. There is certainly no claim > that any of the speakers has finished the job. > > Similarly, the announcement refers to "successful demonstration of some > of the underlying principles [of the brain] in software and hardware", > which implicitly acknowledges that we do not have all the principles. > There is nothing like a claim that anyone has enough principles to > "explain highly integrated brain functions". > > You are concerned that this workshop will avoid the essential issue of > the wide gap between neuron-like computing and highly integrated brain > functions. What makes you think it will avoid this? We are all interested > in filling this gap, and the speakers (well, the ones who I know) all > either work on this, or work on supporting people who work on this, or both. > > This looks like it will be a very nice workshop, with talks from leaders > in the field on a variety of topics, and I wish I were able to attend it. > > Matthew > > > On Jan 23, 2014, at 7:08 PM, Juyang Weng wrote: > > Dear Anders, > > Interesting topic about the brain! But Brain-Like Computing is misleading > because neural networks have been around for at least 70 years. > > I quote: "We are now approaching the point when our knowledge will enable > successful demonstrations of some of the underlying principles in software > and hardware, i.e. brain-like computing." > > What are the underlying principles? 
> I am concerned that projects like "Brain-Like Computing" avoid essential issues: the wide gap between neuron-like computing and well-known highly integrated brain functions. Continuing this avoidance would again create bad names for "brain-like computing", just as such behaviors did for "neural networks".
>
> Henry Markram criticized IBM's brain project, which does miss essential brain principles, but has he published such principles? Will modeling individual neurons more and more precisely explain highly integrated brain functions? From what I know, definitely not, by far.
>
> Has any of your 10 speakers published any brain-scale theory that bridges the wide gap? Are you aware of any such published theories?
>
> I am sorry for giving a CC to the list, but many on the list said that they like to hear discussions instead of just event announcements.
>
> -John
>
> On 1/13/14 12:14 PM, Anders Lansner wrote:
>
> Workshop on Brain-Like Computing, February 5-6, 2014
>
> The exciting prospect of developing brain-like information processing is one of the Deans Forum focus areas. As a means to encourage progress in this research area, a Workshop is arranged February 5th-6th, 2014 on the KTH campus in Stockholm.
>
> The human brain excels over contemporary computers and robots in processing real-time unstructured information and uncertain data, as well as in controlling a complex mechanical platform with multiple degrees of freedom like the human body. Intense experimental research, complemented by computational and informatics efforts, is gradually increasing our understanding of underlying processes and mechanisms in small animal and mammalian brains and is beginning to shed light on the human brain. We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing.
> This workshop assembles experts, from the partners and also other leading names in the field, to provide an overview of the state-of-the-art in theoretical, software, and hardware aspects of brain-like computing.
>
> List of speakers:
>
> Giacomo Indiveri - ETH Zürich
> Abigail Morrison - Forschungszentrum Jülich
> Mark Ritter - IBM Watson Research Center
> Guillermo Cecchi - IBM Watson Research Center
> Anders Lansner - KTH Royal Institute of Technology
> Ahmed Hemani - KTH Royal Institute of Technology
> Steve Furber - University of Manchester
> Kazuyuki Aihara - University of Tokyo
> Karlheinz Meier - Heidelberg University
> Andreas Schierwagen - Leipzig University
>
> For signing up to the Workshop, please use the registration form found at http://bit.ly/1dkuBgR
> You need to sign up before January 28th.
> Web page: http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038
>
> ******************************************
>
> Anders Lansner
> Professor in Computer Science, Computational Biology
> School of Computer Science and Communication
> Stockholm University and Royal Institute of Technology (KTH)
> ala at kth.se, +46-70-2166122
> --
> Juyang (John) Weng, Professor
> Department of Computer Science and Engineering
> MSU Cognitive Science Program and MSU Neuroscience Program
> 428 S Shaw Ln Rm 3115
> Michigan State University
> East Lansing, MI 48824 USA
> Tel: 517-353-4388
> Fax: 517-432-1061
> Email: weng at cse.msu.edu
> URL: http://www.cse.msu.edu/~weng/
> ----------------------------------------------

From ralph.etiennecummings at gmail.com Fri Jan 24 17:00:44 2014
From: ralph.etiennecummings at gmail.com (Ralph Etienne-Cummings)
Date: Fri, 24 Jan 2014 17:00:44 -0500
Subject: Connectionists: Brain-like computing fanfare and big data fanfare
In-Reply-To: <52E2CBE2.6090803@cse.msu.edu>
References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu>
Message-ID: 

Hey, I am happy when our taxpayer money, of which I contribute way more than I get back, funds any science in all branches of the government.

Neuromorphic and brain-like computing is on the rise ... Let's please not shoot ourselves in the foot with in-fighting!!

Thanks,
Ralph's Android

On Jan 24, 2014 4:13 PM, "Juyang Weng" wrote:

> Yes, Gary, you are correct politically, not to upset the "emperor" since he is always right and he never falls behind the literature.
>
> But then no clear message can ever get across.
From achler at gmail.com Fri Jan 24 17:59:47 2014
From: achler at gmail.com (Tsvi Achler)
Date: Fri, 24 Jan 2014 14:59:47 -0800
Subject: Connectionists: Brain-like computing fanfare and big data fanfare
In-Reply-To: 
References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu>
Message-ID: 

First, no matter what the outcome, I must compliment John for having the courage to bring this up and for generating an open discussion.

I think that, from an algorithm perspective, a small data set computed with limited resources could emulate big data computed with the biggest computers. We must be careful that a big-data bias does not introduce an economy of scale that bars new researchers with limited resources or expertise in signal processing. This also gives big business an advantage in competing for research grants.
My worry is that an overly strong bias towards big data may hide important details and generate big marketing gimmicks that do not produce more than incremental progress and burn up taxpayer money. Here is an example of how careful accounting of free variables and computational resources can enable smaller data sets to emulate big-data limitations: http://reason.cs.uiuc.edu/tsvi/Evaluating_Flexibility_of_Recognition.pdf Keep in mind, the field is underfunded which affects the little guys much more than the big ones. Adding on a big-data requirements further alienates the little guys who may be students with great ideas, which I believe, the field desperately needs to support. Sincerely, -Tsvi Achler, T., *Artificial General Intelligence Begins with Recognition: Evaluating the Flexibility of Recognition*, in *Theoretical Foundations of Artificial General Intelligence* 2012 PDF On Fri, Jan 24, 2014 at 2:00 PM, Ralph Etienne-Cummings < ralph.etiennecummings at gmail.com> wrote: > Hey, I am happy when our taxpayer money, of which I contribute way more > than I get back, funds any science in all branches of the government. > > Neuromorphic and brain-like computing is on the rise ... Let's please not > shoot ourselves in the foot with in-fighting!! > > Thanks, > Ralph's Android > On Jan 24, 2014 4:13 PM, "Juyang Weng" wrote: > >> Yes, Gary, you are correct politically, not to upset the "emperor" since >> he is always right and he never falls behind the literature. >> >> But then no clear message can ever get across. Falling behind the >> literature is still the fact. More, the entire research community that >> does brain research falls behind badly the literature of necessary >> disciplines. The current U.S. infrastructure of this research community >> does not fit at all the brain subject it studies! This is not a joking >> matter. We need to wake up, please. 
>> >> Azriel Rosenfeld criticized the entire computer vision filed in his >> invited talk at CVPR during early 1980s: "just doing business as usual" and >> "more or less the same" . However, the entire computer vision field still >> has not woken up after 30 years! As another example, I respect your >> colleague Terry Sejnowski, but I must openly say that I object to his "we >> need more data" as the key message for the U.S. BRAIN Project. This is >> another example of "just doing business as usual" and so everybody will not >> be against you. >> >> Several major disciplines are closely related to the brain, but the >> scientific community is still very much fragmented, not willing to wake >> up. Some of our government officials only say superficial worlds like "Big >> Data" because we like to hear. This cost is too high for our taxpayers. >> >> -John >> >> On 1/24/14 2:19 PM, Gary Cottrell wrote: >> >> Hi John - >> >> It's great that you have an over-arching theory, but if you want people >> to read it, it would be better not to disrespect people in your emails. You >> say you respect Matthew, but then you accuse him of falling behind in the >> literature because he hasn't read your book. Politeness (and modesty!) will >> get you much farther than the tone you have taken. >> >> g. >> >> On Jan 24, 2014, at 6:27 PM, Juyang Weng wrote: >> >> Dear Matthew: >> >> My apology if my words are direct, so that people with short attention >> spans can quickly get my points. I do respect you. >> >> You wrote: "to build hardware that works in a more brain-like way than >> conventional computers do. This is not what is usually meant by research >> in neural networks." >> >> Your statement is absolutely not true. Your term "brain-like way" is as >> old as "brain-like computing". Read about the 14 neurocomputers built by >> 1988 in Robert Hecht-Nielsen, "Neurocomputing: picking the human brain", >> IEEE Spectrum 25(3), March 1988, pp. 36-41. 
>> Hardware will not solve the fundamental problems of the current severe human lack of understanding of the brain, no matter how many computers are linked together. Neither will the current "Big Data" fanfare from NSF in the U.S. IBM's brain project has similar fundamental flaws, and the IBM team lacks key experts.
>>
>> Some of the NSF managers have been turning a blind eye to breakthrough work on brain modeling for over a decade, but they want to waste more taxpayers' money on the "Big Data" fanfare and other "try again" fanfares. It is a scientific shame for NSF in a developed country like the U.S. to play such politics without real science, causing another large developing country like China to also echo "Big Data". "Big Data" was called "Large Data", well known in Pattern Recognition for many years. Stop playing shameful politics in science!
>>
>> You wrote: "Nobody is claiming a `brain-scale theory that bridges the wide gap,' or even close."
>>
>> To say that, you have not read the book: Natural and Artificial Intelligence. You are falling as badly behind the literature as some of our NSF project managers. With their lack of knowledge, they did not understand that the "bridge" was in print on their desks and in the literature.
>>
>> -John
>>
>> On 1/23/14 6:15 PM, Matthew Cook wrote:
>>
>> Dear John,
>>
>> I think all of us on this list are interested in brain-like computing, so I don't understand your negativity on the topic.
>>
>> Many of the speakers are involved in efforts to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks. I suspect the phrase "brain-like computing" is intended as an umbrella term that can cover all of these efforts.
>>
>> I think you are reading far more into the announcement than is there. Nobody is claiming a "brain-scale theory that bridges the wide gap," or even close.
To the contrary, the announcement is very cautious, saying >> that intense research is "gradually increasing our understanding" and >> "beginning to shed light on the human brain". In other words, the research >> advances slowly, and we are at the beginning. There is certainly no claim >> that any of the speakers has finished the job. >> >> Similarly, the announcement refers to "successful demonstration of some >> of the underlying principles [of the brain] in software and hardware", >> which implicitly acknowledges that we do not have all the principles. >> There is nothing like a claim that anyone has enough principles to >> "explain highly integrated brain functions". >> >> You are concerned that this workshop will avoid the essential issue of >> the wide gap between neuron-like computing and highly integrated brain >> functions. What makes you think it will avoid this? We are all interested >> in filling this gap, and the speakers (well, the ones who I know) all >> either work on this, or work on supporting people who work on this, or both. >> >> This looks like it will be a very nice workshop, with talks from >> leaders in the field on a variety of topics, and I wish I were able to >> attend it. >> >> Matthew >> >> >> On Jan 23, 2014, at 7:08 PM, Juyang Weng wrote: >> >> Dear Anders, >> >> Interesting topic about the brain! But Brain-Like Computing is >> misleading because neural networks have been around for at least 70 years. >> >> I quote: "We are now approaching the point when our knowledge will enable >> successful demonstrations of some of the underlying principles in software >> and hardware, i.e. brain-like computing." >> >> What are the underlying principles? I am concerned that projects like >> "Brain-Like Computing" avoid essential issues: >> the wide gap between neuron-like computing and well-known highly >> integrated brain functions. 
>> Continuing this avoidance would again create bad names for "brain-like computing", just as such behaviors did for "neural networks".
>>
>> Henry Markram criticized IBM's brain project, which does miss essential brain principles, but has he published such principles? Will modeling individual neurons more and more precisely explain highly integrated brain functions? From what I know, definitely not, by far.
>>
>> Have any of your 10 speakers published a brain-scale theory that bridges the wide gap? Are you aware of any such published theories?
>>
>> I am sorry for giving a CC to the list, but many on the list said that they would like to hear discussions instead of just event announcements.
>>
>> -John
>>
>> On 1/13/14 12:14 PM, Anders Lansner wrote:
>>
>> Workshop on Brain-Like Computing, February 5-6, 2014
>>
>> The exciting prospect of developing brain-like information processing is one of the Deans Forum focus areas. As a means to encourage progress in this research area, a workshop is arranged February 5th-6th, 2014 on the KTH campus in Stockholm.
>>
>> The human brain excels over contemporary computers and robots in processing real-time unstructured information and uncertain data, as well as in controlling a complex mechanical platform with multiple degrees of freedom like the human body. Intense experimental research, complemented by computational and informatics efforts, is gradually increasing our understanding of underlying processes and mechanisms in small animal and mammalian brains and is beginning to shed light on the human brain. We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing.
>> This workshop assembles experts, from the partners and also other leading names in the field, to provide an overview of the state of the art in theoretical, software, and hardware aspects of brain-like computing.
>>
>> List of speakers:
>>
>> Giacomo Indiveri - ETH Zürich
>> Abigail Morrison - Forschungszentrum Jülich
>> Mark Ritter - IBM Watson Research Center
>> Guillermo Cecchi - IBM Watson Research Center
>> Anders Lansner - KTH Royal Institute of Technology
>> Ahmed Hemani - KTH Royal Institute of Technology
>> Steve Furber - University of Manchester
>> Kazuyuki Aihara - University of Tokyo
>> Karlheinz Meier - Heidelberg University
>> Andreas Schierwagen - Leipzig University
>>
>> For signing up to the Workshop please use the registration form found at http://bit.ly/1dkuBgR
>>
>> You need to sign up before January 28th.
>>
>> Web page: http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038
>>
>> ******************************************
>> Anders Lansner
>> Professor in Computer Science, Computational Biology
>> School of Computer Science and Communication
>> Stockholm University and Royal Institute of Technology (KTH)
>> ala at kth.se, +46-70-2166122
>>
>> ------------------------------
>>
>> [This email message contains no viruses or other malicious code, because avast! Antivirus is active.]
>> >> >> -- >> -- >> Juyang (John) Weng, Professor >> Department of Computer Science and Engineering >> MSU Cognitive Science Program and MSU Neuroscience Program >> 428 S Shaw Ln Rm 3115 >> Michigan State University >> East Lansing, MI 48824 USA >> Tel: 517-353-4388 >> Fax: 517-432-1061 >> Email: weng at cse.msu.edu >> URL: http://www.cse.msu.edu/~weng/ >> ---------------------------------------------- >> >> >> >> >> -- >> -- >> Juyang (John) Weng, Professor >> Department of Computer Science and Engineering >> MSU Cognitive Science Program and MSU Neuroscience Program >> 428 S Shaw Ln Rm 3115 >> Michigan State University >> East Lansing, MI 48824 USA >> Tel: 517-353-4388 >> Fax: 517-432-1061 >> Email: weng at cse.msu.edu >> URL: http://www.cse.msu.edu/~weng/ >> ---------------------------------------------- >> >> >> >> [I am in Dijon, France on sabbatical this year. To call me, Skype works >> best (gwcottrell), or dial +33 788319271] >> >> Gary Cottrell 858-534-6640 FAX: 858-534-7029 >> >> My schedule is here: http://tinyurl.com/b7gxpwo >> >> Computer Science and Engineering 0404 >> IF USING FED EX INCLUDE THE FOLLOWING LINE: >> CSE Building, Room 4130 >> University of California San Diego >> 9500 Gilman Drive # 0404 >> La Jolla, Ca. 92093-0404 >> >> Things may come to those who wait, but only the things left by those >> who hustle. -- Abraham Lincoln >> >> "Of course, none of this will be easy. If it was, we would already >> know everything there was about how the brain works, and presumably my >> life would be simpler here. It could explain all kinds of things that go on >> in Washington." -Barack Obama >> >> "Probably once or twice a week we are sitting at dinner and Richard >> says, 'The cortex is hopeless,' and I say, 'That's why I work on the >> worm.'" Dr. Bargmann said. >> >> "A grapefruit is a lemon that saw an opportunity and took advantage of >> it." - note written on a door in Amsterdam on Lijnbaansgracht. 
>> "Physical reality is great, but it has a lousy search function." -Matt Tong
>>
>> "Only connect!" -E.M. Forster
>>
>> "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton
>>
>> "There is nothing objective about objective functions" - Jay McClelland
>>
>> "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin
>>
>> Email: gary at ucsd.edu
>> Home page: http://www-cse.ucsd.edu/~gary/
>>
>> --
>> Juyang (John) Weng, Professor
>> Department of Computer Science and Engineering
>> MSU Cognitive Science Program and MSU Neuroscience Program
>> 428 S Shaw Ln Rm 3115
>> Michigan State University
>> East Lansing, MI 48824 USA
>> Tel: 517-353-4388
>> Fax: 517-432-1061
>> Email: weng at cse.msu.edu
>> URL: http://www.cse.msu.edu/~weng/
>> ----------------------------------------------

From mnick at mit.edu Fri Jan 24 17:54:24 2014
From: mnick at mit.edu (Maximilian Nickel)
Date: Fri, 24 Jan 2014 17:54:24 -0500
Subject: Connectionists: CBMM Summer Course "Brains, Minds and Machines" at MBL Woods Hole
Message-ID:

The Center for Brains, Minds and Machines (CBMM) [1] is offering a course this summer at MBL, Woods Hole [2]. This is an intensive two-week course which will give advanced students a "deep end" introduction to the problem of intelligence - how the brain produces intelligent behavior and how we may be able to replicate intelligence in machines.

Course Date: May 29 - June 12, 2014
Application deadline: February 14, 2014
Link for more information and online registration: http://hermes.mbl.edu/education/courses/special_topics/bmm.html

[1] http://cbmm.mit.edu/
[2] http://hermes.mbl.edu/education/courses/special_topics/bmm.html

From rsalakhu at cs.toronto.edu Fri Jan 24 16:54:28 2014
From: rsalakhu at cs.toronto.edu (Ruslan Salakhutdinov)
Date: Fri, 24 Jan 2014 16:54:28 -0500 (EST)
Subject: Connectionists: CFP: ICML 2014: Call for Tutorial Proposals
Message-ID:

********************************************
ICML 2014: Call for Tutorial Proposals
********************************************

Important dates:
* Tutorial proposal deadline: Feb 21, 2014
* Acceptance notification: Mar 8, 2014
* Tutorials: June 21, 2014
* Contact: rsalakhu at cs.toronto.edu

The ICML 2014 Organizing Committee invites proposals for tutorials to be held at the 31st International Conference on Machine Learning, on June 21, 2014 in Beijing, China. We seek proposals for two-hour tutorials on core techniques and areas of knowledge of broad interest within the machine learning community, including established or emerging research topics within the field itself, as well as from related fields or application areas that are clearly relevant to machine learning. The ideal tutorial should attract a wide audience, and should be broad enough to provide a gentle introduction to the chosen research area, but should also cover the most important contributions in depth. Tutorial proceedings will not be provided in hardcopy, but will instead be made available by the presenters on their website prior to the conference.

How to Propose a Tutorial: Proposals should provide sufficient information to evaluate the quality and importance of the topic, the likely quality of the presentation materials, and the speakers' teaching ability.
The written proposal should be 2-3 pages long, and should use the following boldface text for section headings: * Topic overview: What will the tutorial be about? Why is this an interesting and significant subject for the machine learning community at large? * Target audience: From which areas do you expect potential participants to come? What prior knowledge, if any, do you expect from the audience? What will the participants learn? How many participants do you expect? * Content details: Provide a detailed outline of the topics to be presented, including estimates for the time that will be devoted to each subject. Aim for a total length of approximately two hours. If possible, provide samples of past tutorial slides or teaching materials. In case of multiple presenters, specify how you will distribute the work. * Format: How will you present the material? Will there be multimedia parts of the presentation? Do you plan software demonstrations? Specify any extraordinary technical equipment that you would need. * Organizers' and presenters' expertise: Please include the name, email address, and webpage of all presenters. In addition, outline the presenters' background and include a list of publications in the tutorial area. Tutorial proposals should be submitted via email in PDF format to rsalakhu at cs.toronto.edu. Soon after submission, proposers should expect to receive a verification of receipt. 
Important dates:
* Tutorial proposal deadline: Feb 21, 2014
* Acceptance notification: Mar 8, 2014
* Tutorials: June 21, 2014
* Contact: rsalakhu at cs.toronto.edu

Russ Salakhutdinov, tutorial chair, ICML 2014

From bower at uthscsa.edu Fri Jan 24 18:31:36 2014
From: bower at uthscsa.edu (james bower)
Date: Fri, 24 Jan 2014 17:31:36 -0600
Subject: Connectionists: Brain-like computing fanfare and big data fanfare
In-Reply-To:
References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu>
Message-ID: <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu>

Well, well - remarkable!!! An actual debate on connectionists - just like the old days - in fact, REMARKABLY like the old days. Same issues: how "brain-like" is "brain-like", and how much hype is "brain-like" generating by itself? How much do engineers really know about neuroscience, and how much do neurobiologists really know about the brain? (Both groups tend to claim they know a lot - now and then.)

I went to the NIPS meeting this year for the first time in more than 25 years. Some of the old timers on connectionists may remember that I was one of the founding members of NIPS - and some will also remember that a few years of trying to get some kind of real interaction between neuroscience and then "neural networks" led me to give up and start, with John Miller, the CNS meetings - focused specifically on computational neuroscience. Another story.

At NIPS this year there was a very large focus on "big data", of course, with "machine learning" having largely replaced "Neural Networks" in most talk titles. I was actually a panelist (most had no idea of my early involvement with NIPS) on a workshop about big data in on-line learning (generated by edX, Khan Academy, etc.).
I was interested because for 15 years I have also been running Numedeon Inc, whose virtual world for kids, Whyville.net, was the first game-based immersive world and is still one of the biggest and most innovative (no MOOCs there).

From the panel I made the assertion, as I had, in effect, many years ago, that if you have a big data problem, it is likely you are not taking anything resembling a "brain-like" approach to solving it. The version almost 30 years ago, when everyone was convinced that the relatively simple Hopfield Network could solve all kinds of hard problems, was my assertion that simple Neural Networks, or simple Neural Network learning rules, were unlikely to work very well, because, almost certainly, you have to build a great deal of knowledge about the nature of the problem into all levels (including the input layer) of your network to get it to work. Now, many years later, everyone seems convinced that you can figure things out by amassing an enormous amount of data and working on it.

It has been a slow revolution (it may actually not even be at the revolutionary stage yet), BUT it is very likely that the nervous system (like all model-based systems) doesn't collect tons of data to figure out with feedforward processing and filtering, but instead collects the data it thinks it needs to confirm what it already believes to be true. In other words, it specifically avoids the big data problem at all costs. It is willing to suffer the consequence that occasionally (more and more recently for me), you end up talking to someone for 15 minutes before you realize that they are not the person you thought they were.

An enormous amount of engineering and neuroscience continues to think that the feedforward pathway runs from the sensors to the inside, rather than seeing this as the actual feedback loop. That might sound to some like a semantic quibble, but I assure you it is not.
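[Editor's aside: for readers who were not around for that era, the "relatively simple Hopfield Network" referred to above can be sketched in a few lines. This is a generic illustration, not anything from the discussion; the function names and the toy binary patterns are invented. It shows the point being argued: the Hebbian storage rule is completely generic, and nothing about the structure of any real problem enters the network except through the raw patterns themselves.]

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product storage; each pattern is a +/-1 vector."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # conventionally, no self-connections
    return W / len(patterns)

def recall(W, state, steps=200, seed=0):
    """Asynchronous unit updates; settles into a stored attractor
    when the initial state is a lightly corrupted stored pattern."""
    rng = np.random.default_rng(seed)
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))  # pick a random unit
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Two orthogonal 8-unit toy patterns; corrupt one bit and recover it.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = train(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]  # flip one bit
print(recall(W, noisy))  # settles back to the first stored pattern
```

With a handful of orthogonal patterns this works; the well-known limitation alluded to above is that nothing in `train` or `recall` encodes problem structure, so recall degrades badly as patterns become numerous or correlated.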
If you believe, as I do, that the brain solves very hard problems, in very sophisticated ways, that involve, in some sense, the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain, then simplified solutions are missing the point.

What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention, experimentally and in our models, to the actual detailed anatomy and physiology of the brain's circuits and cells. I saw none of that at NIPS, and in fact I see less and less of that at the CNS meeting as well. All too easy to simplify, pontificate, and sell.

So, I sympathize with Juyang Weng's frustration. If there is any better evidence that we are still in the dark, it is that we are still having the same debate 30 years later, with the same ruffled feathers, the same bold assertions (mine included) and the same seeming lack of progress.

If anyone is interested, here is a chapter I recently wrote for the book I edited on "20 years of progress in computational neuroscience" (Springer), about the last 40 years of trying to understand the workings of a single neuron (the cerebellar Purkinje cell) using models: https://www.dropbox.com/s/5xxut90h65x4ifx/272602_1_En_5_DeltaPDF%20copy.pdf

Perhaps it gives some sense of how far we have yet to go.

Jim Bower

On Jan 24, 2014, at 4:00 PM, Ralph Etienne-Cummings wrote:

> Hey, I am happy when our taxpayer money, of which I contribute way more than I get back, funds any science in all branches of the government.
>
> Neuromorphic and brain-like computing is on the rise ... Let's please not shoot ourselves in the foot with in-fighting!!
>
> Thanks,
> Ralph's Android
>
> On Jan 24, 2014 4:13 PM, "Juyang Weng" wrote:
> Yes, Gary, you are correct politically, not to upset the "emperor" since he is always right and he never falls behind the literature.
> But then no clear message can ever get across. Falling behind the literature is still the fact. Moreover, the entire research community that does brain research falls badly behind the literature of the necessary disciplines. The current U.S. infrastructure of this research community does not fit at all the brain subject it studies! This is not a joking matter. We need to wake up, please.
>
> Azriel Rosenfeld criticized the entire computer vision field in his invited talk at CVPR in the early 1980s: "just doing business as usual" and "more or less the same". However, the entire computer vision field still has not woken up after 30 years! As another example, I respect your colleague Terry Sejnowski, but I must openly say that I object to his "we need more data" as the key message for the U.S. BRAIN Project. This is another example of "just doing business as usual", so that nobody will be against you.
>
> Several major disciplines are closely related to the brain, but the scientific community is still very much fragmented, not willing to wake up. Some of our government officials only say superficial words like "Big Data" because that is what we like to hear. This cost is too high for our taxpayers.
>
> -John
>
> On 1/24/14 2:19 PM, Gary Cottrell wrote:
>> Hi John -
>>
>> It's great that you have an over-arching theory, but if you want people to read it, it would be better not to disrespect people in your emails. You say you respect Matthew, but then you accuse him of falling behind in the literature because he hasn't read your book. Politeness (and modesty!) will get you much farther than the tone you have taken.
>>
>> g.
>>
>> On Jan 24, 2014, at 6:27 PM, Juyang Weng wrote:
>>
>>> Dear Matthew:
>>>
>>> My apology if my words are direct, so that people with short attention spans can quickly get my points. I do respect you.
>>>
>>> You wrote: "to build hardware that works in a more brain-like way than conventional computers do.
This is not what is usually meant by research in neural networks."
>>>
>>> Your statement is absolutely not true. Your term "brain-like way" is as old as "brain-like computing". Read about the 14 neurocomputers built by 1988 in Robert Hecht-Nielsen, "Neurocomputing: picking the human brain", IEEE Spectrum 25(3), March 1988, pp. 36-41. Hardware will not solve the fundamental problems of the current severe human lack of understanding of the brain, no matter how many computers are linked together. Neither will the current "Big Data" fanfare from NSF in the U.S. IBM's brain project has similar fundamental flaws, and the IBM team lacks key experts.
>>>
>>> Some of the NSF managers have been turning a blind eye to breakthrough work on brain modeling for over a decade, but they want to waste more taxpayers' money on the "Big Data" fanfare and other "try again" fanfares. It is a scientific shame for NSF in a developed country like the U.S. to play such politics without real science, causing another large developing country like China to also echo "Big Data". "Big Data" was called "Large Data", well known in Pattern Recognition for many years. Stop playing shameful politics in science!
>>>
>>> You wrote: "Nobody is claiming a `brain-scale theory that bridges the wide gap,' or even close."
>>>
>>> To say that, you have not read the book: Natural and Artificial Intelligence. You are falling as badly behind the literature as some of our NSF project managers. With their lack of knowledge, they did not understand that the "bridge" was in print on their desks and in the literature.
>>>
>>> -John
>>>
>>> On 1/23/14 6:15 PM, Matthew Cook wrote:
>>>> Dear John,
>>>>
>>>> I think all of us on this list are interested in brain-like computing, so I don't understand your negativity on the topic.
>>>>
>>>> Many of the speakers are involved in efforts to build hardware that works in a more brain-like way than conventional computers do.
This is not what is usually meant by research in neural networks. I suspect the phrase "brain-like computing" is intended as an umbrella term that can cover all of these efforts. >>>> >>>> I think you are reading far more into the announcement than is there. Nobody is claiming a "brain-scale theory that bridges the wide gap," or even close. To the contrary, the announcement is very cautious, saying that intense research is "gradually increasing our understanding" and "beginning to shed light on the human brain". In other words, the research advances slowly, and we are at the beginning. There is certainly no claim that any of the speakers has finished the job. >>>> >>>> Similarly, the announcement refers to "successful demonstration of some of the underlying principles [of the brain] in software and hardware", which implicitly acknowledges that we do not have all the principles. There is nothing like a claim that anyone has enough principles to "explain highly integrated brain functions". >>>> >>>> You are concerned that this workshop will avoid the essential issue of the wide gap between neuron-like computing and highly integrated brain functions. What makes you think it will avoid this? We are all interested in filling this gap, and the speakers (well, the ones who I know) all either work on this, or work on supporting people who work on this, or both. >>>> >>>> This looks like it will be a very nice workshop, with talks from leaders in the field on a variety of topics, and I wish I were able to attend it. >>>> >>>> Matthew >>>> >>>> >>>> On Jan 23, 2014, at 7:08 PM, Juyang Weng wrote: >>>> >>>>> Dear Anders, >>>>> >>>>> Interesting topic about the brain! But Brain-Like Computing is misleading because neural networks have been around for at least 70 years. >>>>> >>>>> I quote: "We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing." 
>>>>> What are the underlying principles? I am concerned that projects like "Brain-Like Computing" avoid essential issues: the wide gap between neuron-like computing and well-known highly integrated brain functions. Continuing this avoidance would again create bad names for "brain-like computing", just as such behaviors did for "neural networks".
>>>>>
>>>>> Henry Markram criticized IBM's brain project, which does miss essential brain principles, but has he published such principles? Will modeling individual neurons more and more precisely explain highly integrated brain functions? From what I know, definitely not, by far.
>>>>>
>>>>> Have any of your 10 speakers published a brain-scale theory that bridges the wide gap? Are you aware of any such published theories?
>>>>>
>>>>> I am sorry for giving a CC to the list, but many on the list said that they would like to hear discussions instead of just event announcements.
>>>>>
>>>>> -John
>>>>>
>>>>> On 1/13/14 12:14 PM, Anders Lansner wrote:
>>>>>> Workshop on Brain-Like Computing, February 5-6, 2014
>>>>>>
>>>>>> The exciting prospect of developing brain-like information processing is one of the Deans Forum focus areas. As a means to encourage progress in this research area, a workshop is arranged February 5th-6th, 2014 on the KTH campus in Stockholm.
>>>>>>
>>>>>> The human brain excels over contemporary computers and robots in processing real-time unstructured information and uncertain data, as well as in controlling a complex mechanical platform with multiple degrees of freedom like the human body. Intense experimental research, complemented by computational and informatics efforts, is gradually increasing our understanding of underlying processes and mechanisms in small animal and mammalian brains and is beginning to shed light on the human brain.
>>>>>> We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing.
>>>>>>
>>>>>> This workshop assembles experts, from the partners and also other leading names in the field, to provide an overview of the state of the art in theoretical, software, and hardware aspects of brain-like computing.
>>>>>>
>>>>>> List of speakers:
>>>>>>
>>>>>> Giacomo Indiveri - ETH Zürich
>>>>>> Abigail Morrison - Forschungszentrum Jülich
>>>>>> Mark Ritter - IBM Watson Research Center
>>>>>> Guillermo Cecchi - IBM Watson Research Center
>>>>>> Anders Lansner - KTH Royal Institute of Technology
>>>>>> Ahmed Hemani - KTH Royal Institute of Technology
>>>>>> Steve Furber - University of Manchester
>>>>>> Kazuyuki Aihara - University of Tokyo
>>>>>> Karlheinz Meier - Heidelberg University
>>>>>> Andreas Schierwagen - Leipzig University
>>>>>>
>>>>>> For signing up to the Workshop please use the registration form found at http://bit.ly/1dkuBgR
>>>>>>
>>>>>> You need to sign up before January 28th.
>>>>>>
>>>>>> Web page: http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038
>>>>>>
>>>>>> ******************************************
>>>>>> Anders Lansner
>>>>>> Professor in Computer Science, Computational Biology
>>>>>> School of Computer Science and Communication
>>>>>> Stockholm University and Royal Institute of Technology (KTH)
>>>>>> ala at kth.se, +46-70-2166122
>>>>>>
>>>>>> [This email message contains no viruses or other malicious code, because avast! Antivirus is active.]
>>>>>> >>>>>> >>>>> >>>>> -- >>>>> -- >>>>> Juyang (John) Weng, Professor >>>>> Department of Computer Science and Engineering >>>>> MSU Cognitive Science Program and MSU Neuroscience Program >>>>> 428 S Shaw Ln Rm 3115 >>>>> Michigan State University >>>>> East Lansing, MI 48824 USA >>>>> Tel: 517-353-4388 >>>>> Fax: 517-432-1061 >>>>> Email: weng at cse.msu.edu >>>>> URL: http://www.cse.msu.edu/~weng/ >>>>> ---------------------------------------------- >>>>> >>>> >>> >>> -- >>> -- >>> Juyang (John) Weng, Professor >>> Department of Computer Science and Engineering >>> MSU Cognitive Science Program and MSU Neuroscience Program >>> 428 S Shaw Ln Rm 3115 >>> Michigan State University >>> East Lansing, MI 48824 USA >>> Tel: 517-353-4388 >>> Fax: 517-432-1061 >>> Email: weng at cse.msu.edu >>> URL: http://www.cse.msu.edu/~weng/ >>> ---------------------------------------------- >>> >> >> [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] >> >> Gary Cottrell 858-534-6640 FAX: 858-534-7029 >> >> My schedule is here: http://tinyurl.com/b7gxpwo >> >> Computer Science and Engineering 0404 >> IF USING FED EX INCLUDE THE FOLLOWING LINE: >> CSE Building, Room 4130 >> University of California San Diego >> 9500 Gilman Drive # 0404 >> La Jolla, Ca. 92093-0404 >> >> Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln >> >> "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama >> >> "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. >> >> "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. 
>> >> "Physical reality is great, but it has a lousy search function." -Matt Tong >> >> "Only connect!" -E.M. Forster >> >> "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton >> >> "There is nothing objective about objective functions" - Jay McClelland >> >> "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." >> -David Mermin >> >> Email: gary at ucsd.edu >> Home page: http://www-cse.ucsd.edu/~gary/ >> > > -- > -- > Juyang (John) Weng, Professor > Department of Computer Science and Engineering > MSU Cognitive Science Program and MSU Neuroscience Program > 428 S Shaw Ln Rm 3115 > Michigan State University > East Lansing, MI 48824 USA > Tel: 517-353-4388 > Fax: 517-432-1061 > Email: weng at cse.msu.edu > URL: http://www.cse.msu.edu/~weng/ > ---------------------------------------------- > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. 
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. From ivan.g.raikov at gmail.com Fri Jan 24 20:02:06 2014 From: ivan.g.raikov at gmail.com (Ivan Raikov) Date: Sat, 25 Jan 2014 10:02:06 +0900 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling approaches. While it is true that complete and detailed understanding of neurophysiology and anatomy is at the heart of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical physics, and a lot can be learned about information representation and transmission in the brain using mathematical theories about distributed communicating processes.
As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they can also be used to understand at least some of the fundamental properties of brain structures and processes? -Ivan Raikov On Sat, Jan 25, 2014 at 8:31 AM, james bower wrote: > [snip] > An enormous amount of engineering and neuroscience continues to think that > the feedforward pathway is from the sensors to the inside - rather than > seeing this as the actual feedback loop. Might to some sound like a > semantic quibble, but I assure you it is not. > > If you believe as I do, that the brain solves very hard problems, in very > sophisticated ways, that involve, in some sense the construction of complex > models about the world and how it operates in the world, and that those > models are manifest in the complex architecture of the brain - then > simplified solutions are missing the point. > > What that means inevitably, in my view, is that the only way we will ever > understand what brain-like is, is to pay tremendous attention > experimentally and in our models to the actual detailed anatomy and > physiology of the brains circuits and cells. > > From tt at cs.dal.ca Fri Jan 24 21:03:42 2014 From: tt at cs.dal.ca (Thomas Trappenberg) Date: Fri, 24 Jan 2014 22:03:42 -0400 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: Thanks John for starting a discussion ... I think we need some. What I liked most about your original post was asking about "What are the underlying principles?" Let's make a list.
Of course, there are so many levels of organization and mechanisms in the brain that we might speak about different things; but getting different views would be fun and I think very useful (without the need to offer the only and ultimate answer). Cheers, Thomas Trappenberg PS: John, I thought you started a good discussion before, but I got discouraged by your polarizing views. I think a lot of us can relate to you, but how about letting others come forward now? On Fri, Jan 24, 2014 at 9:02 PM, Ivan Raikov wrote: > [snip] From bower at uthscsa.edu Fri Jan 24 22:54:07 2014 From: bower at uthscsa.edu (james bower) Date: Fri, 24 Jan 2014 21:54:07 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: Ivan thanks for the response, Actually, the talks at the recent Neuroscience Meeting about the Brain Project either excluded modeling altogether - or declared we in the US could leave it to the Europeans. I am not in the least bit nationalistic - but collecting data without having models (rather than imaginings) to indicate what to collect is simply foolish, with many examples from history to demonstrate the foolishness. In fact, one of the primary proponents (and likely beneficiaries) of this Brain Project, who gave the big talk at Neuroscience on the project (showing lots of pretty pictures), started his talk by asking: "what have we really learned since Cajal, except that there are also inhibitory neurons?" Shocking, not only because Cajal actually suggested that there might be inhibitory neurons - in fact. To quote "Stupid is as stupid does." Forbes magazine estimated that finding the Higgs Boson cost over $13BB, conservatively.
The Higgs experiment was absolutely the opposite of a Big Data experiment - In fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment is all the more remarkable because it had the nearly unified support of the high energy physics community, not that there weren't and aren't skeptics, but still, remarkable that the large majority could agree on the undertaking and effort. The reason is, of course, that there was a theory - that dealt with the particulars and the details - not generalities. In contrast, there is a GREAT DEAL of skepticism (me included) about the Brain Project - its politics and its effects (or lack thereof), within neuroscience. (of course, many people are burying their concerns in favor of tin cups - hoping). Neuroscience has had genome envy forever - the connectome is their response - who says it's all in the connections? (sorry "connectionists") Where is the theory? Hebb? You should read Hebb if you haven't - rather remarkable treatise. But very far from a theory. If you want an honest answer to your question - I have not seen any good evidence so far that the approach works, and I deeply suspect that the nervous system is very much NOT like any machine we have built or designed to date. I don't believe that Newton would have accomplished what he did, had he not, first, been a remarkable experimentalist, tinkering with real things. I feel the same way about Neuroscience. Having spent almost 30 years building realistic models of its cells and networks (and also doing experiments, as described in the article I linked to), we have made some small progress - but only by avoiding abstractions and paying attention to the details. Of course, most experimentalists and even most modelers have paid little or no attention.
We have a sociological and structural problem that, in my opinion, only the right kind of models can fix, coupled with a real commitment to the biology - in all its complexity. And, as the model I linked tries to make clear - we also have to all agree to start working on common "community models". But like big horn sheep, much safer to stand on your own peak and make a lot of noise. You can predict with great accuracy the movement of the planets in the sky using circles linked to other circles - nice and easy math, and a very adaptable model (just add more circles when you need more accuracy, and invent entities like equant points, etc). Problem is, without getting into the nasty math and reality of ellipses, you can't possibly know anything about gravity, or the origins of the solar system, or its various and eventual perturbations. As I have been saying for 30 years: Beware Ptolemy and curve fitting. The details of reality matter. Jim On Jan 24, 2014, at 7:02 PM, Ivan Raikov wrote: > [snip] Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower From balazskegl at gmail.com Sat Jan 25 04:57:39 2014 From: balazskegl at gmail.com (Balázs Kégl) Date: Sat, 25 Jan 2014 10:57:39 +0100 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: <86FC9364-F509-4037-B24C-6AAF72BD119B@gmail.com> > Forbes magazine estimated that finding the Higgs Boson cost over $13BB, conservatively. The Higgs experiment was absolutely the opposite of a Big Data experiment - In fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment is all the more remarkable because it had the nearly unified support of the high energy physics community, not that there weren't and aren't skeptics, but still, remarkable that the large majority could agree on the undertaking and effort.
> The reason is, of course, that there was a theory - that dealt with the particulars and the details - not generalities. I agree with you on your argument for needing a model to collect data. At the same time, the LHC is also probably a good example for showing that even with a model you end up with huge data sets. The LHC generates petabytes of data per year, and this is after a real-time filtering of most of the uninteresting collision events (a cut of roughly six orders of magnitude). Ironically (to this discussion), the analysis of these petabytes makes good use of ML technologies developed in the 90s (they mostly use boosted decision trees, but neural networks are also popular). Balázs -- Balazs Kegl Research Scientist (DR2) Linear Accelerator Laboratory CNRS / University of Paris Sud http://users.web.lal.in2p3.fr/kegl From bower at uthscsa.edu Sat Jan 25 12:34:46 2014 From: bower at uthscsa.edu (james bower) Date: Sat, 25 Jan 2014 11:34:46 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <402938705.1298151.1390667875132.JavaMail.root@inria.fr> References: <402938705.1298151.1390667875132.JavaMail.root@inria.fr> Message-ID: <993C8BF8-DB8E-4892-9019-CD1DCEB6791D@uthscsa.edu> so much fun thanks Axel, First a personal note (in case any of you are wondering at my tone): on Dec 31st I officially retired from my academic appointment in the University of Texas, by agreement with the State Attorney General. :-). So, in case anyone is wondering, I am now happily free - and feeling even less constrained than usual - a wonderful feeling. After a total of about $30MM in federal funding over the last 30 years, yesterday I spent the last $1,245.53. Liberation. :-) The UT met all my conditions, only requiring that I not seek nor accept a position ever again in the University of Texas System. :-) easy to agree to that.
:-) Anyway, there are all kinds of levels of description of physical systems, and of course, careful understanding of the actual behavior of a physical system is very important. Again, reference the data on planetary motion that Kepler needed to do what he did. However, one has to be able to understand the constraints provided by the data on what sorts of functional interpretations one can make. Kepler is one of my (few) scientific heroes because in spite of his natural predisposition to a kind of magical thinking about celestial harmonies, he still did the hard work that required the kind of messiness that he wasn't predisposed to, because it was required by the data. I have no problem with careful descriptions of human behavior - I have serious concerns about the use of that data to support (often through experimental manipulations, these days often using the many knobs and uncontrolled variables in imaging studies) cognitive theories of brain function. Recent work of this kind on my poor cerebellum being a particularly unfortunate example. The cerebellum is now being invoked as part of a growing number of cognitive theories, based largely on imaging (see below) and good old lesion studies, without any real fundamental consideration of the actual physical relationship between the structure and the rest of the brain, or the actual physiological organization of its networks. The "cognitive guys" don't know and don't care. Instead, misinterpretations of both are being used to prop up the cognitive theory (for example, the supposed "timing function" of the cerebellar parallel fibers). Again, a long discussion probably not of interest to most on this mailing list - anyone that wants references, happy to send them. But once again - beware Ptolemy - and let's add in Pythagoras as well. :-) (actually the latter another hero of the angst that unignored reality caused him :-) ).
Jim On Jan 25, 2014, at 10:37 AM, Axel Hutt wrote: > Hello, > > thanks to all for the important discussion. > > [..]What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention experimentally and in our models to the actual detailed anatomy and physiology of the brains circuits and cells. > here I do not agree with you. Understanding cognitive processes makes it necessary to understand the major principles > in encoding and decoding, since the physiological details are different for each subject while the basic underlying > mechanism is (with high probability) the same. Hence, considering more anatomical details does not lead us that far. > We recognize this in today's research progress, where physiological neuroscientists extract tons of detailed data > from different patients and do not understand it. One way out is the progress of theory, and not of collecting more data. > > Best > > > Axel Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower From bower at uthscsa.edu Sat Jan 25 09:59:28 2014 From: bower at uthscsa.edu (james bower) Date: Sat, 25 Jan 2014 08:59:28 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: Thanks for your comments Thomas, and good luck with your effort. I can't refrain from making the probably culturist remark that this seems a very practical approach. I have for many years suggested that those interested in advancing biology in general and neuroscience in particular to a "paradigmatic" as distinct from a descriptive / folkloric science, would benefit from understanding this transition as physics went through it in the 15th and 16th centuries. In many ways, I think that is where we are today, although with perhaps the decided disadvantage that we have a lot of physicists around who, again in my view, don't really understand the origins of their own science.
By that, I mean that they don't understand how much of their current scientific structure, for example the relatively clean separation between "theorists" and "experimentalists", is dependent on the foundation built by those (like Newton) who were both in an earlier time. Once you have a solid underlying computational foundation for a science, then you have the luxury of this kind of specialization - as there is a framework that ties it all together. The Higgs effort being a very visible recent example. Neuroscience has nothing of the sort. As I point out in the article I linked to in my first posting - while it was first proposed 40 years ago (by Rodolfo Llinas) that the cerebellar Purkinje cell had active dendrites (i.e. that there were non directly-synaptically associated voltage-dependent ion channels in the dendrite that governed its behavior), and 40 years of anatomically and physiologically realistic modeling has been necessary to start to understand what they do, many cerebellar modeling efforts today simply ignore these channels. While that again, to many on this list, may seem too far buried in the details, these voltage-dependent channels make the Purkinje cell the computational device that it is. Recently, I was asked to review a cerebellar modeling paper in which the authors actually acknowledged that their model lacked these channels because they would have been too computationally expensive to include. Sadly for those authors, I was asked to review the paper for the usual reason - that several of our papers were referenced accordingly. After of course complimenting them on the fact that they were honest (and knowledgeable) enough to have remarked that their Purkinje cells weren't really Purkinje cells, I had to reject the paper for the same reason. As I said, they likely won't make that mistake again - and will very likely get away with it.
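The qualitative difference such voltage-dependent channels make can be shown in miniature. The sketch below is a toy single-compartment model (all parameters are invented for illustration; it is nowhere near a real Purkinje cell model): the same brief current pulse simply decays away in a passive membrane, but in a membrane carrying one non-inactivating voltage-gated conductance it triggers a self-sustaining depolarized plateau.

```python
import math

def simulate(g_active, t_stop=200.0, dt=0.025):
    """Forward-Euler integration of a single compartment:
    C dV/dt = -gL*(V - EL) - g_active*m_inf(V)*(V - ECa) + I(t).
    Units: mV, ms, mS/cm^2, uA/cm^2, uF/cm^2 (hypothetical values)."""
    C, gL, EL, ECa = 1.0, 0.1, -70.0, 50.0
    # sigmoidal steady-state activation of the voltage-gated conductance
    m_inf = lambda v: 1.0 / (1.0 + math.exp(-(v + 40.0) / 5.0))
    V, t = -69.0, 0.0
    while t < t_stop:
        I = 5.0 if 50.0 <= t < 70.0 else 0.0   # brief 20 ms current pulse
        dV = (-gL * (V - EL) - g_active * m_inf(V) * (V - ECa) + I) / C
        V += dt * dV
        t += dt
    return V   # membrane potential long after the pulse ends

passive = simulate(g_active=0.0)   # relaxes back toward rest (~ -70 mV)
active = simulate(g_active=0.3)    # regenerative plateau persists
```

With `g_active = 0` the compartment is just a leaky integrator; with the channel present, the positive feedback between depolarization and channel activation makes the response regenerative, which is exactly why leaving such channels out changes what the model cell can compute.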
Imagine a comparable situation in a field (like physics) which has established a structural base for its enterprise: "We found it computationally expedient to ignore the second law of thermodynamics in our computations - sorry." BTW, I know that details are ignored all the time in physics as one deals with descriptions at different levels of scale - although even there, the field clearly would like to have a way to link across different levels of scale. I would claim, however, that that is precisely the "trick" that biology uses to "beat" the second law - linking all levels of scale together - another reason why you can't ignore the details in biological models if you really want to understand how biology works. (too cryptic a comment perhaps). Anyway, my advice would be to consider how physics made this transition many years ago, and ask the question how neuroscience (and biology) can now. Key points I think are:
- you need to produce students who are REALLY both experimental and theoretical (like Newton). (and that doesn't mean programs that "import" physicists and give them enough biology to believe they know what they are doing, or programs that link experimentalists to physicists to solve their computational problems)
- you need to base the efforts on models (and therefore mathematics) of sufficient complexity to capture the physical reality of the system being studied (as Kepler was forced to do to make the sun-centric model of the solar system even close to as accurate as the previous earth-centered system)
- you need to build a new form of collaboration and communication that can support the complexity of those models. Fundamentally, we continue to use the publication system (short papers in a journal) that was invented as part of the transformation for physics way back then. Our laboratories are also largely isolated and non-cooperative, more appropriate for studying simpler things (like those in physics).
Fortunately for us, we have a new communication tool (the Internet) although, as can be expected, we are mostly using it to reimplement old-style communication systems (e-journals) with a few twists (supplemental materials).
- funding agencies need to insist that anyone doing theory needs to be linked to the experimental side REALLY, and vice versa. I proposed a number of years ago to NIH that they would make it into the history books if they simply required, the following Monday, that any submitted experimental grant include a REAL theoretical and computational component - Sadly, they interpreted that as meaning that P.I.s should state "an hypothesis" - which itself is remarkable, because most of the "hypotheses" I see stated in Federal grants are actually statements of what the P.I. believes to be true. Don't get me started on human imaging studies. arggg
- As long as we are talking about what funding agencies can do, how about the following structure for grants: all grants need to be submitted collaboratively by two laboratories who have different theories (better, models) about how a particular part of the brain works. The grant should support a set of experiments that both parties agree distinguish between their two points of view. All results need to be published with joint authorship. In effect that is how physics works - given its underlying structure.
- You need to get rid, as quickly as possible, of the pressure to "translate" neuroscience research explicitly into clinical significance - we are not even close to being able to do that intentionally - and the pressure (which is essentially a giveaway to the pharma and bio-tech industries anyway) is forcing neurobiologists to link to what is arguably the least scientific form of research there is - clinical research.
It just has to be the case that society needs to understand that an investment in basic research will eventually result in all the wonderful outcomes for humans we would all like, but this distortion now is killing real neuroscience just at a critical time, when we may finally have the tools to make the transition to a paradigmatic science. As some of you know, I have been all about trying to do these things for many years - with the GENESIS project, with the original CNS graduate program at Caltech, with the CNS meetings (even originally with NIPS), and with the first "Methods in Computational Neuroscience" course at the Marine Biological Laboratory, whose latest incarnation in Brazil (LASCON) is actually wrapping up next week, and of course with my own research and students. Of course, I have not been alone in this, but it is remarkable how little impact all that has had on neuroscience or neuro-engineering. I have to say, honestly, that the strong tendency seems to be for these efforts to snap back to non-realistic, non-biologically based modeling and theoretical efforts. Perhaps Canada, in its usual practical and reasonable way (sorry), can figure out how to do this right. I hope so. Jim p.s. I have also been proposing recently that we scuttle the "intro neuroscience" survey courses in our graduate programs (religious instruction) and instead organize an introductory course built around the history of the discovery of the origin of the action potential, which culminated in the first (and last) Nobel prize for work in computational neuroscience, for the Hodgkin-Huxley model. The 50th anniversary of that prize was celebrated last year, and the year before I helped to organize a meeting celebrating the 60th anniversary of the publication of the original papers (which I care much more about anyway).
That meeting was, I believe, the first meeting in neuroscience ever organized around a single (mathematical) model or theory - and in organizing it, I required all the speakers to show the HH model on their first slide, indicating which term or feature of the model their work was related to. Again, a first - but possible, as this is about the only "community model" we have. Most Neuroscience textbooks today don't include that equation (second order differential) and present the HH model primarily as a description of the action potential. Most theorists regard the HH model as a prime example of how progress can be made by ignoring the biological details. Both views and interpretations are historically and practically incorrect. In my opinion, if you can't handle the math in the HH model, you shouldn't be a neurobiologist, and if you don't understand the profound impact of HH's knowledge and experimental study of the squid giant axon on the model, you shouldn't be a neuro-theorist either. just saying. :-) On Jan 25, 2014, at 6:58 AM, Thomas Trappenberg wrote: > James, enjoyed your writing. > > So, what to do? We are trying to get organized in Canada and are thinking how we fit in with your (US) and the European approaches and big money. My thought is that our advantage might be flexibility by not having a single theme but rather a general supporting structure for theory and theory-experimental interactions. I believe the ultimate place where we want to be is to take theoretical proposals more seriously and try to make specific experiments for them; like the Higgs project. (Any other suggestions? Canadians, see http://www.neuroinfocomp.ca if you are not already on there.) > > Also, with regards to big data, I believe that one very fascinating thing about the brain is that it can function with 'small data'.
> > Cheers, Thomas > > > > On 2014-01-25 12:09 AM, "james bower" wrote: > Ivan thanks for the response, > > Actually, the talks at the recent Neuroscience Meeting about the Brain Project either excluded modeling altogether - or declared we in the US could leave it to the Europeans. I am not in the least bit nationalistic - but, collecting data without having models (rather than imaginings) to indicate what to collect, is simply foolish, with many examples from history to demonstrate the foolishness. In fact, one of the primary proponents (and likely beneficiaries) of this Brain Project, who gave the big talk at Neuroscience on the project (showing lots of pretty pictures), started his talk by asking: ?what have we really learned since Cajal, except that there are also inhibitory neurons?? Shocking, not only because Cajal actually suggested that there might be inhibitory neurons - in fact. To quote ?Stupid is as stupid does?. > > Forbes magazine estimated that finding the Higgs Boson cost over $13BB, conservatively. The Higgs experiment was absolutely the opposite of a Big Data experiment - In fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment is all the more remarkable because it had the nearly unified support of the high energy physics community, not that there weren?t and aren?t skeptics, but still, remarkable that the large majority could agree on the undertaking and effort. The reason is, of course, that there was a theory - that dealt with the particulars and the details - not generalities. In contrast, there is a GREAT DEAL of skepticism (me included) about the Brain Project - its politics and its effects (or lack therefore), within neuroscience. (of course, many people are burring their concerns in favor of tin cups - hoping). 
Neuroscience has had genome envy forever - the connectome is its response - but who says it's all in the connections? (Sorry, "connectionists".) Where is the theory? Hebb? You should read Hebb if you haven't - a rather remarkable treatise, but very far from a theory. > > If you want an honest answer to your question - I have not seen any good evidence so far that the approach works, and I deeply suspect that the nervous system is very much NOT like any machine we have built or designed to date. I don't believe that Newton would have accomplished what he did had he not, first, been a remarkable experimentalist, tinkering with real things. I feel the same way about neuroscience. Having spent almost 30 years building realistic models of its cells and networks (and also doing experiments, as described in the article I linked to), we have made some small progress - but only by avoiding abstractions and paying attention to the details. Of course, most experimentalists and even most modelers have paid little or no attention. We have a sociological and structural problem that, in my opinion, only the right kind of models can fix, coupled with a real commitment to the biology - in all its complexity. And, as the model I linked tries to make clear, we also have to all agree to start working on common "community models". But, like bighorn sheep, it is much safer to stand on your own peak and make a lot of noise. > > You can predict with great accuracy the movement of the planets in the sky using circles linked to other circles - nice and easy math, and a very adaptable model (just add more circles when you need more accuracy, and invent entities like equant points, etc.). The problem is, without getting into the nasty math and reality of ellipses, you can't possibly know anything about gravity, or the origins of the solar system, or its various and eventual perturbations. > > As I have been saying for 30 years: Beware Ptolemy and curve fitting. > > The details of reality matter. 
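Jim's Ptolemy warning can be made quantitative. Sampled at uniform time, a Kepler orbit can be fit to any accuracy by circles riding on circles - a truncated Fourier series, which is exactly the epicycle construction - and yet the fit contains no gravity. A toy sketch (eccentricity and sample counts are illustrative choices):

```python
import cmath
import math

def kepler_orbit(ecc=0.3, n_samples=256):
    """Positions of a Kepler ellipse (focus at origin, a = 1) sampled at
    uniform time steps (uniform mean anomaly), as complex numbers."""
    pts = []
    for k in range(n_samples):
        m_anom = 2.0 * math.pi * k / n_samples
        # Newton's method for Kepler's equation E - e*sin(E) = M
        e_anom = m_anom
        for _ in range(20):
            e_anom -= (e_anom - ecc * math.sin(e_anom) - m_anom) / \
                      (1.0 - ecc * math.cos(e_anom))
        pts.append(complex(math.cos(e_anom) - ecc,
                           math.sqrt(1.0 - ecc ** 2) * math.sin(e_anom)))
    return pts

def epicycle_error(pts, n_circles):
    """Max error of an n_circles-term epicycle fit: each retained Fourier
    frequency is one circle turning at a fixed rate on the previous one."""
    n = len(pts)
    # keep the n_circles lowest frequencies (positive before negative on ties)
    freqs = sorted(range(-(n // 2), n // 2),
                   key=lambda f: (abs(f), -f))[:n_circles]
    coef = {f: sum(pts[j] * cmath.exp(-2j * math.pi * f * j / n)
                   for j in range(n)) / n
            for f in freqs}
    err = 0.0
    for j in range(n):
        approx = sum(c * cmath.exp(2j * math.pi * f * j / n)
                     for f, c in coef.items())
        err = max(err, abs(pts[j] - approx))
    return err
```

Adding circles drives the fit error down steadily - "just add more circles when you need more accuracy" - but no number of circles ever yields an inverse-square law.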
> > Jim > > > > > > On Jan 24, 2014, at 7:02 PM, Ivan Raikov wrote: > >> >> I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling approaches. While it is true that complete and detailed understanding of neurophysiology and anatomy is at the heart of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical physics, and a lot can be learned about information representation and transmission in the brain using mathematical theories about distributed communicating processes. As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they can also be used to understand at least some of the fundamental properties of brain structures and processes? >> >> -Ivan Raikov >> >> On Sat, Jan 25, 2014 at 8:31 AM, james bower wrote: >> [snip] >> An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. To some this might sound like a semantic quibble, but I assure you it is not. >> >> If you believe, as I do, that the brain solves very hard problems, in very sophisticated ways, that involve, in some sense, the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point. >> >> What that means, inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention, experimentally and in our models, to the actual detailed anatomy and physiology of the brain's circuits and cells. >> > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. 
> > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > CONFIDENTIAL NOTICE: > > The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Sat Jan 25 10:50:18 2014 From: bower at uthscsa.edu (james bower) Date: Sat, 25 Jan 2014 09:50:18 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: Hi Paul, Good to hear from you - as usual, measured and rational. :-) I have often felt that the better way to think about Neural Network and Connectionist type efforts is to define what is likely NOT brain-like, rather than claiming they are "brain-like". I have no problem whatsoever with these sorts of efforts providing cautions and constraints on thinking about brains - almost in contrast, I have always had a big problem with the opposite. Years ago, at one of the first Snowbird meetings, there was someone whose name I no longer remember, but he was from MIT (remember that), who gave a quite interesting talk. 
He had used a combinatorial approach to make a large set of NN models and had predicted that a large proportion of them (as I remember, 95%) would naturally produce oscillations. Given the problem in engineering with things that oscillate, he suggested that the NN community should be looking to define and work with the small percentage that don't. I believe I stood up and said something like: "nice observation, wrong conclusion". Although this is another soap box (way too many - I think one accumulates them over time), it is quite clear now to many that brains use the oscillatory properties of their networks intrinsically. The soap box is the relative lack of real progress in figuring that out, given the sociological structure of neuroscience and the love of abstraction. :-) Jim On Jan 25, 2014, at 6:38 AM, Paul Adams wrote: > As a former synaptic physiologist who now dabbles in neural net models, I have a slightly different take on this interesting debate. One can argue that the new, improved neural networks (and, more broadly, computing and machine learning progress) can achieve brain-like performance without detailed biological imitation or knowledge. One can also argue that direct and intensive study of the brain itself will be necessary, and will reveal major new principles. However, the universal core of connectionism is the idealization that local activity-dependent adjustment of synaptic weights is both crucial and achievable (on a small scale in digital simulations, and on a much larger scale in real brains). However, we, and also Terry Elliott (see NeCo 24, 455-522), have shown that even for the simplest type of unsupervised learning (classic ICA), negligible deviations from ideality can prevent learning. In other words, the entire connectionist project might be built on shifting sands. 
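The idealization Paul describes - a local, activity-dependent weight update - can be written in a dozen lines. Below is the clean textbook case only: Oja's rule extracting the principal component of its input stream. It is a sketch of the ideal setting, not a reproduction of the perturbation analysis in the Elliott paper he cites; the data distribution, learning rate, and step count are illustrative:

```python
import random

def oja_principal_component(n_steps=30000, eta=0.002, seed=1):
    """Oja's rule: a local Hebbian weight update that, in the idealized
    setting, converges to the principal component of its input stream."""
    rng = random.Random(seed)
    w = [1.0, 1.0]                     # arbitrary nonzero initial weights
    for _ in range(n_steps):
        # zero-mean 2-D input: variance 9 along axis 0, 0.25 along axis 1,
        # so the principal component is the first coordinate axis
        x = [3.0 * rng.gauss(0.0, 1.0), 0.5 * rng.gauss(0.0, 1.0)]
        y = w[0] * x[0] + w[1] * x[1]  # linear unit's output
        # Hebbian growth term y*x with Oja's self-normalizing decay -y^2*w
        w = [w[i] + eta * y * (x[i] - y * w[i]) for i in range(2)]
    return w
```

In this ideal case the weight vector settles onto the dominant axis with roughly unit norm; the point of the cited work is how fragile that convergence becomes once the update deviates even slightly from this form.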
But, perhaps more likely and more interestingly, even though it's built on sand, much of the complication one sees in real brains might be the way nature nevertheless builds massive structures on such shifting sand. This would mean that both viewpoints are correct, and the hard problem is to combine the two. > - Paul Adams > Department of Neurobiology, Stony Brook University > > > > On Fri, Jan 24, 2014 at 10:54 PM, james bower wrote: > Ivan thanks for the response, > [snip] > > -- > Paul Adams > Department of Neurobiology > Stony Brook University Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jose at psychology.rutgers.edu Sat Jan 25 10:43:31 2014 From: jose at psychology.rutgers.edu (Stephen José Hanson) Date: Sat, 25 Jan 2014 10:43:31 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: <1390664611.3420.15.camel@sam> Indeed. It's like we never stopped arguing about this for the last 30 years! Maybe this is a brain principle: integrated, fossilized views of brain principles. I actually agree with John... and disagree with you, Jim... surprise, surprise... seems like old times. The most disconcerting thing about the emergence of the new new neural network field(s) is that the NIH Connectome RFPs contain language about large-scale network functions... and yet when Program Managers are directly asked whether fMRI or any neuroimaging methods would be compliant with the RFP, the answer is "NO". So once the neuroscience cell spikers get done analyzing 1000 or 10000 or even 1M neurons at a circuit level, we still won't know why someone makes decisions about the shoes they wear, much less any other mental function! Hopefully neuroimaging will be relevant again. Just saying. Cheers, Steve PS. Hi Gary! Dijon! Stephen José Hanson Director, RUBIC (Rutgers Brain Imaging Center) Professor of Psychology Member of Cognitive Science Center (NB) Member EE Graduate Program (NB) Member CS Graduate Program (NB) Rutgers University email: jose at psychology.rutgers.edu web: psychology.rutgers.edu/~jose lab: www.rumba.rutgers.edu fax: 866-434-7959 voice: 973-353-3313 (RUBIC) On Fri, 2014-01-24 at 17:31 -0600, james bower wrote: > Well, well - remarkable!!! 
an actual debate on Connectionists - just > like the old days - in fact, REMARKABLY like the old days. > > > > Same issues - how "brain-like" is "brain-like", and how much hype is > "brain-like" generating by itself. How much do engineers really know > about neuroscience, and how much do neurobiologists really know about > the brain (both groups tend to claim they know a lot - now and then). > > > I went to the NIPS meeting this year for the first time in more than > 25 years. Some of the old-timers on Connectionists may remember > that I was one of the founding members of NIPS - and some will also > remember that a few years of trying to get some kind of real > interaction between neuroscience and then "neural networks" led me to > give up and start, with John Miller, the CNS meetings - focused > specifically on computational neuroscience. Another story - > > > At NIPS this year, there was a very large focus on "big data", of > course, with "machine learning" having largely replaced "Neural Networks" in > most talk titles. I was actually a panelist (most had no idea of my > early involvement with NIPS) at a workshop on big data in on-line learning > (generated by edX, Khan, etc.). I was interested because, > for 15 years, I have also been running Numedeon Inc., whose virtual > world for kids, Whyville.net, was the first game-based immersive > world, and is still one of the biggest and most innovative. (No > MOOCs there.) > > > From the panel I made the assertion, as I had, in effect, many years > ago, that if you have a big data problem - it is likely you are not > taking anything resembling a "brain-like" approach to solving it. 
The > version almost 30 years ago, when everyone was convinced that the > relatively simple Hopfield network could solve all kinds of hard > problems, was my assertion that, in fact, simple neural networks, or > simple neural network learning rules, were unlikely to work very well, > because, almost certainly, you have to build a great deal of knowledge > about the nature of the problem into all levels (including the input > layer) of your network to get it to work. > > > Now, many years later, everyone seems convinced that you can figure > things out by amassing an enormous amount of data and working on it. > > > It has been a slow revolution (it may actually not even be at the > revolutionary stage yet), BUT it is very likely that the nervous > system (like all model-based systems) doesn't collect tons of data to > figure out with feedforward processing and filtering, but instead > collects the data it thinks it needs to confirm what it already > believes to be true. In other words, it specifically avoids the big > data problem at all costs. It is willing to suffer the consequence > that occasionally (more and more recently for me), you end up talking > to someone for 15 minutes before you realize that they are not the > person you thought they were. > > > An enormous amount of engineering and neuroscience continues to think > that the feedforward pathway is from the sensors to the inside - > rather than seeing this as the actual feedback loop. To some this might > sound like a semantic quibble, but I assure you it is not. > > > If you believe, as I do, that the brain solves very hard problems, in > very sophisticated ways, that involve, in some sense, the construction > of complex models about the world and how it operates in the world, > and that those models are manifest in the complex architecture of the > brain - then simplified solutions are missing the point. 
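For readers who came in after the 1980s, the "relatively simple Hopfield network" mentioned above really is simple: Hebbian outer-product storage plus sign-threshold recall. A minimal sketch follows - the patterns and sizes are toy illustrations, with no claim about the hard problems the original hype promised:

```python
def train_hopfield(patterns):
    """Hebbian outer-product storage: symmetric weights, zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, n_sweeps=5):
    """Asynchronous sign-threshold updates; updates never raise the
    network's energy, so the state settles into a stored attractor."""
    state = list(state)
    for _ in range(n_sweeps):
        for i in range(len(state)):
            field = sum(w[i][j] * s for j, s in enumerate(state))
            state[i] = 1 if field >= 0 else -1
    return state

# demo: two orthogonal 8-unit patterns; recall repairs a flipped bit
p1 = [1, 1, 1, 1, -1, -1, -1, -1]
p2 = [1, -1, 1, -1, 1, -1, 1, -1]
weights = train_hopfield([p1, p2])
```

The elegance of the construction is real; the point being argued in the thread is what it leaves out - all the problem-specific knowledge that has to be built into the representation before such a network does anything useful.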
> > > What that means, inevitably, in my view, is that the only way we will > ever understand what brain-like is, is to pay tremendous attention, > experimentally and in our models, to the actual detailed anatomy and > physiology of the brain's circuits and cells. > > > I saw none of that at NIPS - and in fact, I see less and less of that > at the CNS meeting as well. > > > All too easy to simplify, pontificate, and sell. > > > So, I sympathize with Juyang Weng's frustration. > > > If there is any better evidence that we are still in the dark, it is > that we are still having the same debate 30 years later, with the same > ruffled feathers, the same bold assertions (mine included) and the > same seeming lack of progress. > > > If anyone is interested, here is a chapter I recently wrote for the > book I edited on 20 years of progress in computational neuroscience > (Springer), on the last 40 years of trying to understand the workings of a > single neuron (the cerebellar Purkinje cell) using models: > https://www.dropbox.com/s/5xxut90h65x4ifx/272602_1_En_5_DeltaPDF%20copy.pdf > > > Perhaps some sense of how far we have yet to go. > > > Jim Bower > > > > > > > > > > > On Jan 24, 2014, at 4:00 PM, Ralph Etienne-Cummings > wrote: > > > > Hey, I am happy when our taxpayer money, of which I contribute way > > more than I get back, funds any science in all branches of the > > government. > > > > Neuromorphic and brain-like computing is on the rise ... Let's > > please not shoot ourselves in the foot with in-fighting!! > > > > Thanks, > > Ralph's Android > > > > > > On Jan 24, 2014 4:13 PM, "Juyang Weng" wrote: > > > > Yes, Gary, you are correct politically: not to upset the > > "emperor", since he is always right and he never falls behind > > the literature. > > > > But then no clear message can ever get across. Falling > > behind the literature is still the fact. 
More, the entire > > research community that does brain research falls badly > > behind the literature of the necessary disciplines. The current > > U.S. infrastructure of this research community does not fit > > at all the brain subject it studies! This is not a joking > > matter. We need to wake up, please. > > > > Azriel Rosenfeld criticized the entire computer vision field > > in his invited talk at CVPR in the early 1980s: "just doing > > business as usual" and "more or less the same". However, > > the entire computer vision field still has not woken up > > after 30 years! As another example, I respect your > > colleague Terry Sejnowski, but I must openly say that I > > object to his "we need more data" as the key message for the > > U.S. BRAIN Project. This is another example of "just doing > > business as usual", so that everybody will not be against > > you. > > > > Several major disciplines are closely related to the brain, > > but the scientific community is still very much fragmented, > > not willing to wake up. Some of our government officials > > only say superficial words like "Big Data" because that is > > what we like to hear. This cost is too high for our taxpayers. > > > > -John > > > > > > On 1/24/14 2:19 PM, Gary Cottrell wrote: > > > > Hi John - > > > > It's great that you have an over-arching theory, but if > > > you want people to read it, it would be better not to > > > disrespect people in your emails. You say you respect > > > Matthew, but then you accuse him of falling behind in the > > > literature because he hasn't read your book. Politeness > > > (and modesty!) will get you much farther than the tone you > > > have taken. > > > > > > g. > > > > > > On Jan 24, 2014, at 6:27 PM, Juyang Weng > > > wrote: > > > > > > > Dear Matthew: > > > > > > > > My apology if my words are direct, so that people with > > > > short attention spans can quickly get my points. I do > > > > respect you. 
> > > > > > > > You wrote: "to build hardware that works in a more > > > > brain-like way than conventional computers do. This is > > > > not what is usually meant by research in neural > > > > networks." > > > > > > > > Your statement is absolutely not true. Your term > > > > "brain-like way" is as old as "brain-like computing". > > > > Read about the 14 neurocomputers built by 1988 in Robert > > > > Hecht-Nielsen, "Neurocomputing: picking the human > > > > brain", IEEE Spectrum 25(3), March 1988, pp. 36-41. > > > > Hardware will not solve the fundamental problems of > > > > humanity's current severe lack of understanding of the brain, no > > > > matter how many computers are linked together. Neither > > > > will the current "Big Data" fanfare from the NSF in the U.S. > > > > IBM's brain project has similar fundamental flaws, > > > > and the IBM team lacks key experts. > > > > > > > > Some of the NSF managers have been turning a blind eye to > > > > breakthrough work on brain modeling for over a decade, > > > > but they want to pour more taxpayers' money into their > > > > "Big Data" fanfare and other "try again" fanfares. It > > > > is a scientific shame for the NSF in a developed country > > > > like the U.S. to play such shameful politics without real > > > > science, causing another large developing country like > > > > China to also echo "Big Data". "Big Data" was called > > > > "Large Data", well known in Pattern Recognition for many > > > > years. Stop playing shameful politics in science! > > > > > > > > You wrote: "Nobody is claiming a `brain-scale theory > > > > that bridges the wide gap,' or even close." > > > > > > > > To say that, you have not read the book: Natural and > > > > Artificial Intelligence. You are falling as badly behind the > > > > literature as some of our NSF project managers. > > > > With their lack of knowledge, they did not understand > > > > that the "bridge" was in print on their desks and in the > > > > literature. 
> > > > > > > > -John > > > > > > > > > > > > On 1/23/14 6:15 PM, Matthew Cook wrote: > > > > > > > > > > > > > > Dear John, > > > > > > > > > > > > > > > > > > > > I think all of us on this list are interested in > > > > > brain-like computing, so I don't understand your > > > > > negativity on the topic. > > > > > > > > > > > > > > > Many of the speakers are involved in efforts to build > > > > > hardware that works in a more brain-like way than > > > > > conventional computers do. This is not what is > > > > > usually meant by research in neural networks. I > > > > > suspect the phrase "brain-like computing" is intended > > > > > as an umbrella term that can cover all of these > > > > > efforts. > > > > > > > > > > > > > > > I think you are reading far more into the announcement > > > > > than is there. Nobody is claiming a "brain-scale > > > > > theory that bridges the wide gap," or even close. To > > > > > the contrary, the announcement is very cautious, > > > > > saying that intense research is "gradually increasing > > > > > our understanding" and "beginning to shed light on the > > > > > human brain". In other words, the research advances > > > > > slowly, and we are at the beginning. There is > > > > > certainly no claim that any of the speakers has > > > > > finished the job. > > > > > > > > > > > > > > > Similarly, the announcement refers to "successful > > > > > demonstration of some of the underlying principles [of > > > > > the brain] in software and hardware", which implicitly > > > > > acknowledges that we do not have all the principles. > > > > > There is nothing like a claim that anyone has enough > > > > > principles to "explain highly integrated brain > > > > > functions". > > > > > > > > > > > > > > > You are concerned that this workshop will avoid the > > > > > essential issue of the wide gap between neuron-like > > > > > computing and highly integrated brain functions. What > > > > > makes you think it will avoid this? 
We are all > > > > > interested in filling this gap, and the speakers > > > > > (well, the ones who I know) all either work on this, > > > > > or work on supporting people who work on this, or > > > > > both. > > > > > > > > > > This looks like it will be a very nice workshop, with > > > > > talks from leaders in the field on a variety of > > > > > topics, and I wish I were able to attend it. > > > > > > > > > > Matthew > > > > > > > > > > > On Jan 23, 2014, at 7:08 PM, Juyang Weng wrote: > > > > > > > > > > > > Dear Anders, > > > > > > > > > > > > Interesting topic about the brain! But Brain-Like > > > > > > Computing is misleading because neural networks have > > > > > > been around for at least 70 years. > > > > > > > > > > > > I quote: "We are now approaching the point when our > > > > > > knowledge will enable successful demonstrations of > > > > > > some of the underlying principles in software and > > > > > > hardware, i.e. brain-like computing." > > > > > > > > > > > > What are the underlying principles? I am concerned > > > > > > that projects like "Brain-Like Computing" avoid > > > > > > essential issues: > > > > > > the wide gap between neuron-like computing and > > > > > > well-known highly integrated brain functions. > > > > > > Continuing this avoidance would again create bad > > > > > > names for "brain-like computing", just as such > > > > > > behaviors did for "neural networks". > > > > > > > > > > > > Henry Markram criticized IBM's brain project, which > > > > > > does miss essential brain principles, but has he > > > > > > published such principles? > > > > > > Modeling individual neurons more and more precisely > > > > > > will explain highly integrated brain functions? > > > > > > From what I know, definitely not, by far. > > > > > > > > > > > > Has any of your 10 speakers published any > > > > > > brain-scale theory that bridges the wide gap? 
Are > > > > > > you aware of any such published theories? > > > > > > > > > > > > I am sorry for giving a CC to the list, but many on > > > > > > the list said that they like to hear discussions > > > > > > instead of just event announcements. > > > > > > > > > > > > -John > > > > > > > > > > > > > > > > > > > > > > > > On 1/13/14 12:14 PM, Anders Lansner wrote: > > > > > > > > > > > > > Workshop on Brain-Like Computing, February 5-6 > > > > > > > 2014 > > > > > > > > > > > > > > The exciting prospects of developing brain-like > > > > > > > information processing is one of the Deans Forum > > > > > > > focus areas. > > > > > > > As a means to encourage progress in this research > > > > > > > area a Workshop is arranged February 5th-6th 2014 > > > > > > > on KTH campus in Stockholm. > > > > > > > > > > > > > > The human brain excels over contemporary computers > > > > > > > and robots in processing real-time unstructured > > > > > > > information and uncertain data as well as in > > > > > > > controlling a complex mechanical platform with > > > > > > > multiple degrees of freedom like the human body. > > > > > > > Intense experimental research complemented by > > > > > > > computational and informatics efforts are > > > > > > > gradually increasing our understanding of > > > > > > > underlying processes and mechanisms in small > > > > > > > animal and mammalian brains and are beginning to > > > > > > > shed light on the human brain. We are now > > > > > > > approaching the point when our knowledge will > > > > > > > enable successful demonstrations of some of the > > > > > > > underlying principles in software and hardware, > > > > > > > i.e. brain-like computing. > > > > > > > > > > > > > > This workshop assembles experts, from the partners > > > > > > > and also other leading names in the field, to > > > > > > > provide an overview of the state-of-the-art in > > > > > > > theoretical, software, and hardware aspects of > > > > > > > brain-like computing. 
> > > > > > > List of speakers (Speaker / Affiliation):
> > > > > > > Giacomo Indiveri - ETH Zürich
> > > > > > > Abigail Morrison - Forschungszentrum Jülich
> > > > > > > Mark Ritter - IBM Watson Research Center
> > > > > > > Guillermo Cecchi - IBM Watson Research Center
> > > > > > > Anders Lansner - KTH Royal Institute of Technology
> > > > > > > Ahmed Hemani - KTH Royal Institute of Technology
> > > > > > > Steve Furber - University of Manchester
> > > > > > > Kazuyuki Aihara - University of Tokyo
> > > > > > > Karlheinz Meier - Heidelberg University
> > > > > > > Andreas Schierwagen - Leipzig University
> > > > > > > For signing up to the Workshop please use the registration form found at http://bit.ly/1dkuBgR
> > > > > > > You need to sign up before January 28th.
> > > > > > > Web page: http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038
> > > > > > > ******************************************
> > > > > > > Anders Lansner
> > > > > > > Professor in Computer Science, Computational biology
> > > > > > > School of Computer Science and Communication
> > > > > > > Stockholm University and Royal Institute of Technology (KTH)
> > > > > > > ala at kth.se, +46-70-2166122
> > > > > > > __________________________________________________
> > > > > > > This email message contains no virus or other malicious code because avast! Antivirus is active.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > -- > > > > > > Juyang (John) Weng, Professor > > > > > > Department of Computer Science and Engineering > > > > > > MSU Cognitive Science Program and MSU Neuroscience Program > > > > > > 428 S Shaw Ln Rm 3115 > > > > > > Michigan State University > > > > > > East Lansing, MI 48824 USA > > > > > > Tel: 517-353-4388 > > > > > > Fax: 517-432-1061 > > > > > > Email: weng at cse.msu.edu > > > > > > URL: http://www.cse.msu.edu/~weng/ > > > > > > ---------------------------------------------- > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > -- > > > > Juyang (John) Weng, Professor > > > > Department of Computer Science and Engineering > > > > MSU Cognitive Science Program and MSU Neuroscience Program > > > > 428 S Shaw Ln Rm 3115 > > > > Michigan State University > > > > East Lansing, MI 48824 USA > > > > Tel: 517-353-4388 > > > > Fax: 517-432-1061 > > > > Email: weng at cse.msu.edu > > > > URL: http://www.cse.msu.edu/~weng/ > > > > ---------------------------------------------- > > > > > > > > > > > > > > > > [I am in Dijon, France on sabbatical this year. To call > > > me, Skype works best (gwcottrell), or dial +33 788319271] > > > > > > > > > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > > > > > > > > > My schedule is here: http://tinyurl.com/b7gxpwo > > > > > > Computer Science and Engineering 0404 > > > IF USING FED EX INCLUDE THE FOLLOWING LINE: > > > CSE Building, Room 4130 > > > University of California San Diego > > > 9500 Gilman Drive # 0404 > > > La Jolla, Ca. 92093-0404 > > > > > > > > > Things may come to those who wait, but only the things > > > left by those who hustle. -- Abraham Lincoln > > > > > > > > > "Of course, none of this will be easy. If it was, we would > > > already know everything there was about how the brain > > > works, and presumably my life would be simpler here. 
It > > > could explain all kinds of things that go on > > > in Washington." -Barack Obama > > > > > > > > > "Probably once or twice a week we are sitting at dinner > > > and Richard says, 'The cortex is hopeless,' and I say, > > > 'That's why I work on the worm.'" Dr. Bargmann said. > > > > > > "A grapefruit is a lemon that saw an opportunity and took > > > advantage of it." - note written on a door in Amsterdam on > > > Lijnbaansgracht. > > > > > > "Physical reality is great, but it has a lousy search > > > function." -Matt Tong > > > > > > "Only connect!" -E.M. Forster > > > > > > "You always have to believe that tomorrow you might write > > > the matlab program that solves everything - otherwise you > > > never will." -Geoff Hinton > > > > > > > > > "There is nothing objective about objective functions" - > > > Jay McClelland > > > > > > "I am awaiting the day when people remember the fact that > > > discovery does not work by deciding what you want and then > > > discovering it." > > > -David Mermin > > > > > > Email: gary at ucsd.edu > > > Home page: http://www-cse.ucsd.edu/~gary/ > > > > > > > > > > > > > > > > > -- > > -- > > Juyang (John) Weng, Professor > > Department of Computer Science and Engineering > > MSU Cognitive Science Program and MSU Neuroscience Program > > 428 S Shaw Ln Rm 3115 > > Michigan State University > > East Lansing, MI 48824 USA > > Tel: 517-353-4388 > > Fax: 517-432-1061 > > Email: weng at cse.msu.edu > > URL: http://www.cse.msu.edu/~weng/ > > ---------------------------------------------- > > > > > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. 
> > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > > CONFIDENTIAL NOTICE: > > The contents of this email and any attachments to it may be privileged > or contain privileged and confidential information. This information > is only for the viewing or use of the intended recipient. If you have > received this e-mail in error or are not the intended recipient, you > are hereby notified that any disclosure, copying, distribution or use > of, or the taking of any action in reliance upon, any of the > information contained in this e-mail, or > > any of the attachments to this e-mail, is strictly prohibited and that > this e-mail and all of the attachments to this e-mail, if any, must be > > immediately returned to the sender or destroyed and, in either case, > this e-mail and all attachments to this e-mail must be immediately > deleted from your computer without making any copies hereof and any > and all hard copies made must be destroyed. If you have received this > e-mail in error, please notify the sender by e-mail immediately. > > > > > > > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From axel.hutt at inria.fr Sat Jan 25 11:37:55 2014 From: axel.hutt at inria.fr (Axel Hutt) Date: Sat, 25 Jan 2014 17:37:55 +0100 (CET) Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: <402938705.1298151.1390667875132.JavaMail.root@inria.fr> Hallo, thanks to all for the important discussion. 
----- Original Message ----- > [..]What that means inevitably, in my view, is that the only way we > will ever understand what brain-like is, is to pay tremendous > attention experimentally and in our models to the actual detailed > anatomy and physiology of the brain's circuits and cells. Here I do not agree with you. Understanding cognitive processes makes it necessary to understand the major principles of encoding and decoding, since the physiological details are different for each subject while the basic underlying mechanism is (with high probability) the same. Hence, considering more anatomical details does not lead us that far. We recognize this in today's research progress, where physiological neuroscientists extract tons of detailed data from different patients and do not understand them. One way out is progress in theory, not the collection of more data. Best Axel -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Sat Jan 25 12:05:10 2014 From: bower at uthscsa.edu (james bower) Date: Sat, 25 Jan 2014 11:05:10 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <1390664611.3420.15.camel@sam> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1390664611.3420.15.camel@sam> Message-ID: <7C477036-B425-485D-998E-B7BB1CF34376@uthscsa.edu> Hi Jose, Ah, neuroimaging - don't get me started. Not all, but a great deal of neuroimaging has become a modern form of phrenology IMHO, distorting not only neuroscience but, it turns out, increasingly business too. To wit: At present I am actually much more concerned (and involved) with the use of brain imaging in what has come to be called "Neuro-marketing".
Many on this list are perhaps not aware, but while we worry about the effect of over-interpretation of neuroimaging data within neuroscience, the effect of this kind of data in the business world is growing and not good. My guess is that those of you in the United States have noted the rather large and absurd marketing campaign by Lumosity and the sellers of other "brain training" games. A number of neuroscientists are actually now getting into this business. As some of you know, for almost as long as I have been involved in computational neuroscience, I have also been involved in exploring the use of games for children's learning. In the game/learning world, the misuse of neuroscience and especially brain imaging has become excessive. It wouldn't be appropriate to belabor this point on this list - although the use of neuroscience by the NN community does, in my view, often cross over into a kind of neuro-marketing. For those who are interested in the more general abuses of neuro-marketing, here is a link to the first-ever session I organized in the game development world based on my work as a neurobiologist: http://www.youtube.com/watch?v=Joqmf4baaT8&list=PL1G85ERLMItAA0Bgvh0PoZ5iGc6cHEv6f&index=16 As set up for that video, you should know that in his keynote address the night before, Jesse Schell (of Schell Games and CMU) introduced his talk by saying that he was going to tell the audience what neuroscientists have figured out about human brains, going on to claim that they (we) have discovered that human brains come in two forms, goat brains and sheep brains. Of course the talk implied that the goats were in the room and the sheep were out there to be sold to. (Although, as I noted on the Twitter feed at the time, there was an awful lot of "baaing" going on in the audience :-) ).
Anyway, the second iteration of my campaign to try to bring some balance and sanity to neuro-marketing will take place at SxSW in Austin in March, in another session I have organized on the subject. http://schedule.sxsw.com/2014/events/event_IAP22511 If you happen to be in Austin for SxSW, feel free to stop by. :-) The larger point, I suppose, is that while we debate these things within our own community, our debate and our claims have unintended consequences in larger society, with companies like Lumosity, in effect, marketing to the baby boomers the (false) idea that using "the science of neuroplasticity" and doing something as simple as playing games "designed by neuroscientists" can revert their brains to teenage form. fMRI and neuropsychology are used extensively as evidence. Perhaps society has become so accustomed to outlandish claims and overselling that they won't hold us accountable. Or perhaps they will. Jim p.s. (always a ps) I have also recently proposed that we declare a moratorium on neuroimaging studies until we at least know how the signal is related to actual neural activity. Seems rather foolish to base so much speculation and interpretation on a signal we don't understand. Easy enough to pooh-pooh cell spikers - but to my knowledge, there is no evidence that neural computing is performed through the generation of areas of red, yellow, green and blue. :-) On Jan 25, 2014, at 9:43 AM, Stephen José Hanson wrote: > Indeed. It's like we never stopped arguing about this for the last 30 years! Maybe this is a brain principle > integrated fossilized views of the brain principles. > > I actually agree with John.. and disagree with you Jim... surprise surprise...seems like old times..
> > The most disconcerting thing about the emergence of the new neural network field(s) > is that the NIH Connectome RFPs contain language about large-scale network functions...and > yet when Program managers are directly asked whether fMRI or any neuroimaging methods > would be compliant with the RFP.. the answer is "NO". > > So once the neuroscience cell spikers get done analyzing 1000 or 10000 or even 1M neurons > at a circuit level.. we still won't know why someone makes decisions about the shoes they wear; much > less any other mental function! Hopefully neuroimaging will be relevant again. > > Just saying. > > Cheers. > > Steve > PS. Hi Gary! Dijon! > > Stephen José Hanson > Director RUBIC (Rutgers Brain Imaging Center) > Professor of Psychology > Member of Cognitive Science Center (NB) > Member EE Graduate Program (NB) > Member CS Graduate Program (NB) > Rutgers University > > email: jose at psychology.rutgers.edu > web: psychology.rutgers.edu/~jose > lab: www.rumba.rutgers.edu > fax: 866-434-7959 > voice: 973-353-3313 (RUBIC) > > On Fri, 2014-01-24 at 17:31 -0600, james bower wrote: >> Well, well - remarkable!!! An actual debate on connectionists - just like the old days - in fact REMARKABLY like the old days. >> >> >> Same issues - how "brain-like" is "brain-like", and how much hype is "brain-like" generating by itself? How much do engineers really know about neuroscience, and how much do neurobiologists really know about the brain (both groups tend to claim they know a lot - now and then)? >> >> >> I went to the NIPS meeting this year for the first time in more than 25 years. Some of the old-timers on connectionists may remember that I was one of the founding members of NIPS - and some will also remember that a few years of trying to get some kind of real interaction between neuroscience and then "neural networks" led me to give up and start, with John Miller, the CNS meetings - focused specifically on computational neuroscience.
Another story - >> >> >> At NIPS this year, there was a very large focus on "big data", of course, with "machine learning" having largely replaced "Neural Networks" in most talk titles. I was actually a panelist (most had no idea of my early involvement with NIPS) at a workshop on big data in on-line learning (generated by edX, Khan Academy, etc.). I was interested because, for 15 years, I have also been running Numedeon Inc, whose virtual world for kids, Whyville.net, was the first game-based immersive world, and is still one of the biggest and most innovative (no MOOCs there). >> >> >> From the panel I made the assertion, as I had, in effect, many years ago, that if you have a big data problem - it is likely you are not taking anything resembling a "brain-like" approach to solving it. The version almost 30 years ago, when everyone was convinced that the relatively simple Hopfield Network could solve all kinds of hard problems, was my assertion that, in fact, simple "Neural Networks", or simple Neural Network learning rules, were unlikely to work very well, because, almost certainly, you have to build a great deal of knowledge about the nature of the problem into all levels (including the input layer) of your network to get it to work. >> >> >> Now, many years later, everyone seems convinced that you can figure things out by amassing an enormous amount of data and working on it. >> >> >> It has been a slow revolution (it may actually not even be at the revolutionary stage yet), BUT it is very likely that the nervous system (like all model-based systems) doesn't collect tons of data to figure out with feedforward processing and filtering, but instead collects the data it thinks it needs to confirm what it already believes to be true. In other words, it specifically avoids the big data problem at all cost.
It is willing to suffer the consequence that occasionally (more and more recently, for me) you end up talking to someone for 15 minutes before you realize that they are not the person you thought they were. >> >> >> An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. This might sound to some like a semantic quibble, but I assure you it is not. >> >> >> If you believe, as I do, that the brain solves very hard problems in very sophisticated ways, that these involve, in some sense, the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point. >> >> >> What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention experimentally and in our models to the actual detailed anatomy and physiology of the brain's circuits and cells. >> >> >> I saw none of that at NIPS - and in fact, I see less and less of that at the CNS meeting as well. >> >> >> All too easy to simplify, pontificate, and sell. >> >> >> So, I sympathize with Juyang Weng's frustration. >> >> >> If there is any better evidence that we are still in the dark, it is that we are still having the same debate 30 years later, with the same ruffled feathers, the same bold assertions (mine included) and the same seeming lack of progress. >> >> >> If anyone is interested, here is a chapter I recently wrote for the book I edited on 20 years of progress in computational neuroscience (Springer), on the last 40 years of trying to understand the workings of a single neuron (the cerebellar Purkinje cell) using models: https://www.dropbox.com/s/5xxut90h65x4ifx/272602_1_En_5_DeltaPDF%20copy.pdf >> >> >> Perhaps some sense of how far we have yet to go.
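[Editor's note: for readers who never met the model Bower refers to above, here is a minimal Hopfield-network sketch. It is an editorial illustration only - the toy pattern, function names, and parameters are invented for the example, and this is not code from any project discussed in the thread.]

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product learning; rows of `patterns` are +/-1 vectors."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / len(patterns)

def recall(W, state, max_steps=10):
    """Synchronous sign updates until a fixed point (or step limit)."""
    for _ in range(max_steps):
        new = np.sign(W @ state)
        new[new == 0] = 1  # break ties toward +1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Store one 8-unit pattern, then recover it from a one-bit-corrupted cue.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]         # flip one bit
restored = recall(W, noisy)  # converges back to the stored pattern
```

Even this toy version makes Bower's point concrete: the network only "solves" the retrieval problem because the relevant knowledge (the stored pattern) was built into the weights beforehand.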
>> >> >> Jim Bower >> >> >> >> >> >> >> >> >> >> On Jan 24, 2014, at 4:00 PM, Ralph Etienne-Cummings wrote: >> >>> Hey, I am happy when our taxpayer money, of which I contribute way more than I get back, funds any science in all branches of the government. >>> >>> Neuromorphic and brain-like computing is on the rise ... Let's please not shoot ourselves in the foot with in-fighting!! >>> >>> Thanks, >>> Ralph's Android >>> >>> On Jan 24, 2014 4:13 PM, "Juyang Weng" wrote: >>> Yes, Gary, you are correct politically: do not upset the "emperor", since he is always right and he never falls behind the literature. >>> >>> But then no clear message can ever get across. Falling behind the literature is still the fact. Moreover, the entire brain-research community falls badly behind the literature of the necessary disciplines. The current U.S. infrastructure of this research community does not fit the brain subject it studies at all! This is not a joking matter. We need to wake up, please. >>> >>> Azriel Rosenfeld criticized the entire computer vision field in his invited talk at CVPR during the early 1980s: "just doing business as usual" and "more or less the same". However, the entire computer vision field still has not woken up after 30 years! As another example, I respect your colleague Terry Sejnowski, but I must openly say that I object to his "we need more data" as the key message for the U.S. BRAIN Project. This is another example of "just doing business as usual", said so that everybody will not be against you. >>> >>> Several major disciplines are closely related to the brain, but the scientific community is still very much fragmented, not willing to wake up. Some of our government officials only say superficial words like "Big Data" because we like to hear them. This cost is too high for our taxpayers.
>>> -John >>> >>> On 1/24/14 2:19 PM, Gary Cottrell wrote: >>> >>>> Hi John - >>>> >>>> >>>> It's great that you have an over-arching theory, but if you want people to read it, it would be better not to disrespect people in your emails. You say you respect Matthew, but then you accuse him of falling behind in the literature because he hasn't read your book. Politeness (and modesty!) will get you much farther than the tone you have taken. >>>> >>>> >>>> g. >>>> >>>> On Jan 24, 2014, at 6:27 PM, Juyang Weng wrote: >>>> >>>>> Dear Matthew: >>>>> >>>>> My apology if my words are direct, so that people with short attention spans can quickly get my points. I do respect you. >>>>> >>>>> You wrote: "to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks." >>>>> >>>>> Your statement is absolutely not true. Your term "brain-like way" is as old as "brain-like computing". Read about the 14 neurocomputers built by 1988 in Robert Hecht-Nielsen, "Neurocomputing: picking the human brain", IEEE Spectrum 25(3), March 1988, pp. 36-41. Hardware will not solve the fundamental problem of our current severe lack of understanding of the brain, no matter how many computers are linked together. Neither will the current "Big Data" fanfare from NSF in the U.S. IBM's brain project has similar fundamental flaws, and the IBM team lacks key experts. >>>>> >>>>> Some of the NSF managers have been turning a blind eye to breakthrough work on brain modeling for over a decade, but they want to waste more taxpayer money on the "Big Data" fanfare and other "try again" fanfares. It is a scientific shame for NSF in a developed country like the U.S. to play such politics without real science, causing another large developing country like China to also echo "Big Data". "Big Data" used to be called "Large Data", well known in Pattern Recognition for many years. Stop playing shameful politics in science!
>>>>> >>>>> You wrote: "Nobody is claiming a `brain-scale theory that bridges the wide gap,' or even close." >>>>> >>>>> To say that, you have not read the book: Natural and Artificial Intelligence. You are falling behind the literature as badly as some of our NSF project managers. With their lack of knowledge, they did not understand that the "bridge" was in print on their desks and in the literature. >>>>> >>>>> -John >>>>> >>>>> On 1/23/14 6:15 PM, Matthew Cook wrote: >>>>> >>>>>> Dear John, >>>>>> >>>>>> >>>>>> I think all of us on this list are interested in brain-like computing, so I don't understand your negativity on the topic. >>>>>> >>>>>> >>>>>> Many of the speakers are involved in efforts to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks. I suspect the phrase "brain-like computing" is intended as an umbrella term that can cover all of these efforts. >>>>>> >>>>>> >>>>>> I think you are reading far more into the announcement than is there. Nobody is claiming a "brain-scale theory that bridges the wide gap," or even close. To the contrary, the announcement is very cautious, saying that intense research is "gradually increasing our understanding" and "beginning to shed light on the human brain". In other words, the research advances slowly, and we are at the beginning. There is certainly no claim that any of the speakers has finished the job. >>>>>> >>>>>> >>>>>> Similarly, the announcement refers to "successful demonstration of some of the underlying principles [of the brain] in software and hardware", which implicitly acknowledges that we do not have all the principles. There is nothing like a claim that anyone has enough principles to "explain highly integrated brain functions". >>>>>> >>>>>> >>>>>> You are concerned that this workshop will avoid the essential issue of the wide gap between neuron-like computing and highly integrated brain functions.
What makes you think it will avoid this? We are all interested in filling this gap, and the speakers (well, the ones I know) all either work on this, or work on supporting people who work on this, or both. >>>>>> >>>>>> >>>>>> This looks like it will be a very nice workshop, with talks from leaders in the field on a variety of topics, and I wish I were able to attend it. >>>>>> >>>>>> >>>>>> Matthew >>>>>> >>>>>> >>>>>> >>>>>> On Jan 23, 2014, at 7:08 PM, Juyang Weng wrote: >>>>>>> Dear Anders, >>>>>>> >>>>>>> Interesting topic about the brain! But Brain-Like Computing is misleading because neural networks have been around for at least 70 years. >>>>>>> >>>>>>> I quote: "We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing." >>>>>>> >>>>>>> What are the underlying principles? I am concerned that projects like "Brain-Like Computing" avoid essential issues: >>>>>>> the wide gap between neuron-like computing and well-known highly integrated brain functions. >>>>>>> Continuing this avoidance would again create bad names for "brain-like computing", just as such behaviors did for "neural networks". >>>>>>> >>>>>>> Henry Markram criticized IBM's brain project, which does miss essential brain principles, but has he published such principles? >>>>>>> Will modeling individual neurons more and more precisely explain highly integrated brain functions? From what I know, definitely not, by far. >>>>>>> >>>>>>> Has any of your 10 speakers published any brain-scale theory that bridges the wide gap? Are you aware of any such published theories? >>>>>>> >>>>>>> I am sorry for giving a CC to the list, but many on the list said that they like to hear discussions instead of just event announcements.
>>>>>>> >>>>>>> -John >>>>>>> >>>>>>> >>>>>>> On 1/13/14 12:14 PM, Anders Lansner wrote: >>>>>>> >>>>>>>> Workshop on Brain-Like Computing, February 5-6 2014 >>>>>>>> >>>>>>>> The exciting prospect of developing brain-like information processing is one of the Deans Forum focus areas. >>>>>>>> As a means to encourage progress in this research area, a Workshop is arranged February 5th-6th 2014 on the KTH campus in Stockholm. >>>>>>>> >>>>>>>> The human brain excels over contemporary computers and robots in processing real-time unstructured information and uncertain data, as well as in controlling a complex mechanical platform with multiple degrees of freedom like the human body. Intense experimental research, complemented by computational and informatics efforts, is gradually increasing our understanding of underlying processes and mechanisms in small animal and mammalian brains and is beginning to shed light on the human brain. We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing. >>>>>>>> >>>>>>>> This workshop assembles experts from the partners, as well as other leading names in the field, to provide an overview of the state of the art in theoretical, software, and hardware aspects of brain-like computing.
>>>>>>>> List of speakers (Speaker / Affiliation):
>>>>>>>> Giacomo Indiveri - ETH Zürich
>>>>>>>> Abigail Morrison - Forschungszentrum Jülich
>>>>>>>> Mark Ritter - IBM Watson Research Center
>>>>>>>> Guillermo Cecchi - IBM Watson Research Center
>>>>>>>> Anders Lansner - KTH Royal Institute of Technology
>>>>>>>> Ahmed Hemani - KTH Royal Institute of Technology
>>>>>>>> Steve Furber - University of Manchester
>>>>>>>> Kazuyuki Aihara - University of Tokyo
>>>>>>>> Karlheinz Meier - Heidelberg University
>>>>>>>> Andreas Schierwagen - Leipzig University
>>>>>>>> For signing up to the Workshop please use the registration form found at http://bit.ly/1dkuBgR
>>>>>>>> You need to sign up before January 28th.
>>>>>>>> Web page: http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038
>>>>>>>> ******************************************
>>>>>>>> Anders Lansner
>>>>>>>> Professor in Computer Science, Computational biology
>>>>>>>> School of Computer Science and Communication
>>>>>>>> Stockholm University and Royal Institute of Technology (KTH)
>>>>>>>> ala at kth.se, +46-70-2166122
>>>>>>>
>>>>>>> --
>>>>>>> Juyang (John) Weng, Professor
>>>>>>> Department of Computer Science and Engineering
>>>>>>> MSU Cognitive Science Program and MSU Neuroscience Program
>>>>>>> 428 S Shaw Ln Rm 3115
>>>>>>> Michigan State University
>>>>>>> East Lansing, MI 48824 USA
>>>>>>> Tel: 517-353-4388
>>>>>>> Fax: 517-432-1061
>>>>>>> Email: weng at cse.msu.edu
>>>>>>> URL: http://www.cse.msu.edu/~weng/
>>>>>>> ----------------------------------------------
>>>>
>>>> [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271]
>>>>
>>>> Gary Cottrell 858-534-6640 FAX: 858-534-7029
>>>>
>>>> My schedule is here: http://tinyurl.com/b7gxpwo
>>>>
>>>> Computer Science and Engineering 0404
>>>> IF USING FED EX INCLUDE THE FOLLOWING LINE:
>>>> CSE Building, Room 4130
>>>> University of California San Diego
>>>> 9500 Gilman Drive # 0404
>>>> La Jolla, Ca. 92093-0404
>>>>
>>>> Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln
>>>>
>>>> "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama
>>>>
>>>> "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr.
Bargmann said.
>>>>
>>>> "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht.
>>>>
>>>> "Physical reality is great, but it has a lousy search function." -Matt Tong
>>>>
>>>> "Only connect!" -E.M. Forster
>>>>
>>>> "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton
>>>>
>>>> "There is nothing objective about objective functions" - Jay McClelland
>>>>
>>>> "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin
>>>>
>>>> Email: gary at ucsd.edu
>>>> Home page: http://www-cse.ucsd.edu/~gary/
>>
>> Dr. James M. Bower Ph.D.
>> Professor of Computational Neurobiology
>> Barshop Institute for Longevity and Aging Studies.
>> 15355 Lambda Drive
>> University of Texas Health Science Center
>> San Antonio, Texas 78245
>>
>> Phone: 210 382 0553
>> Email: bower at uthscsa.edu
>> Web: http://www.bower-lab.org
>> twitter: superid101
>> linkedin: Jim Bower
>>
>> CONFIDENTIAL NOTICE:
>> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient.
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tt at cs.dal.ca Sat Jan 25 07:58:53 2014
From: tt at cs.dal.ca (Thomas Trappenberg)
Date: Sat, 25 Jan 2014 08:58:53 -0400
Subject: Connectionists: Brain-like computing fanfare and big data fanfare
In-Reply-To:
References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu>
Message-ID:

James, enjoyed your writing.

So, what to do? We are trying to get organized in Canada and are thinking about how we fit in with your (US) and the European approaches and big money. My thought is that our advantage might be flexibility, by not having a single theme but rather a general supporting structure for theory and theory-experiment interactions. I believe the ultimate place where we want to be is to take theoretical proposals more seriously and to design specific experiments for them, like the Higgs project. (Any other suggestions? Canadians, see http://www.neuroinfocomp.ca if you are not already on there.)

Also, with regards to big data, I believe that one very fascinating thing about the brain is that it can function with 'small data'.

Cheers, Thomas

On 2014-01-25 12:09 AM, "james bower" wrote:
> Ivan thanks for the response,
>
> Actually, the talks at the recent Neuroscience Meeting about the Brain Project either excluded modeling altogether - or declared we in the US could leave it to the Europeans. I am not in the least bit nationalistic - but collecting data without having models (rather than imaginings) to indicate what to collect is simply foolish, with many examples from history to demonstrate the foolishness.
In fact, one of the primary proponents (and likely beneficiaries) of this Brain Project, who gave the big talk at Neuroscience on the project (showing lots of pretty pictures), started his talk by asking: "what have we really learned since Cajal, except that there are also inhibitory neurons?" Shocking, not only because Cajal actually suggested that there might be inhibitory neurons, in fact. To quote: "Stupid is as stupid does".
>
> Forbes magazine estimated that finding the Higgs Boson cost over $13BB, conservatively. The Higgs experiment was absolutely the opposite of a Big Data experiment - in fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment is all the more remarkable because it had the nearly unified support of the high energy physics community - not that there weren't and aren't skeptics, but still, remarkable that the large majority could agree on the undertaking and effort. The reason is, of course, that there was a theory - one that dealt with the particulars and the details, not generalities. In contrast, there is a GREAT DEAL of skepticism (me included) about the Brain Project - its politics and its effects (or lack thereof) - within neuroscience. (Of course, many people are burying their concerns in favor of tin cups - hoping.) Neuroscience has had genome envy forever - the connectome is their response - who says it's all in the connections? (sorry, 'connectionists') Where is the theory? Hebb? You should read Hebb if you haven't - rather remarkable treatise, but very far from a theory.
>
> If you want an honest answer to your question - I have not seen any good evidence so far that the approach works, and I deeply suspect that the nervous system is very much NOT like any machine we have built or designed to date.
I don't believe that Newton would have accomplished what he did had he not first been a remarkable experimentalist, tinkering with real things. I feel the same way about Neuroscience. Having spent almost 30 years building realistic models of its cells and networks (and also doing experiments, as described in the article I linked to), we have made some small progress - but only by avoiding abstractions and paying attention to the details. Of course, most experimentalists and even most modelers have paid little or no attention. We have a sociological and structural problem that, in my opinion, only the right kind of models can fix, coupled with a real commitment to the biology - in all its complexity. And, as the model I linked tries to make clear, we also have to all agree to start working on common 'community models'. But, like bighorn sheep, it is much safer to stand on your own peak and make a lot of noise.
>
> You can predict with great accuracy the movement of the planets in the sky using circles linked to other circles - nice and easy math, and a very adaptable model (just add more circles when you need more accuracy, and invent entities like equant points, etc.). Problem is, without getting into the nasty math and reality of ellipses, you can't possibly know anything about gravity, or the origins of the solar system, or its various and eventual perturbations.
>
> As I have been saying for 30 years: Beware Ptolemy and curve fitting.
>
> The details of reality matter.
>
> Jim
>
> On Jan 24, 2014, at 7:02 PM, Ivan Raikov wrote:
>
> I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling approaches.
While it is true that complete and detailed understanding of neurophysiology and anatomy is at the heart of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical physics, and a lot can be learned about information representation and transmission in the brain using mathematical theories about distributed communicating processes. As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they can also be used to understand at least some of the fundamental properties of brain structures and processes?
>
> -Ivan Raikov
>
> On Sat, Jan 25, 2014 at 8:31 AM, james bower wrote:
>> [snip]
>>
>> An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. Might to some sound like a semantic quibble, but I assure you it is not.
>>
>> If you believe, as I do, that the brain solves very hard problems, in very sophisticated ways, that involve in some sense the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point.
>>
>> What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention, experimentally and in our models, to the actual detailed anatomy and physiology of the brain's circuits and cells.
>
> Dr. James M. Bower Ph.D.
> Professor of Computational Neurobiology
> Barshop Institute for Longevity and Aging Studies.
> 15355 Lambda Drive
> University of Texas Health Science Center
> San Antonio, Texas 78245
>
> Phone: 210 382 0553
> Email: bower at uthscsa.edu
> Web: http://www.bower-lab.org
> twitter: superid101
> linkedin: Jim Bower

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sml at essex.ac.uk Sat Jan 25 09:43:15 2014
From: sml at essex.ac.uk (Lucas, Simon M)
Date: Sat, 25 Jan 2014 14:43:15 +0000
Subject: Connectionists: 4-year funded PhD places in Intelligent Games and Game Intelligence
Message-ID:

Game AI is a major application of machine learning and optimisation. We have 11 fully-funded studentships available each year (covering fees at the Home/EU rate and a stipend for four years) in the EPSRC Centre for Doctoral Training in Intelligent Games and Game Intelligence (IGGI), to conduct cutting-edge research and train the next generation of researchers, designers, developers and entrepreneurs in digital games.
IGGI is a collaboration between three UK Universities: the University of York, the University of Essex and Goldsmiths College, University of London. IGGI PhDs will be based at their principal supervisor's University site with travel to the other sites for team and training activities. IGGI brings together 60 industrial partners from the UK games industry and related organisations (including Sony Computer Entertainment Europe, The Creative Assembly, Codemasters, Rebellion, TIGA, and many more - see www.iggi.org.uk/our-industrial-partners/). IGGI PhDs will have the opportunity to engage in placements at these partner organisations, as well as international research labs, during their PhD research. In addition to conducting research with world-leading academics and industry partners, you will participate in global game jams, co-organise and participate in an annual games symposium, and engage with industry-led seminars. You will receive training from experts in Games Development, Games Design, Research Skills and a range of optional modules including AI, computer vision, human-computer interaction, storytelling, graphics, sound and robotics. Since places will be allocated on a first-come-first-served basis, we encourage applicants to contact prospective supervisors and submit applications as soon as possible. For further information and details of how to apply go to www.iggi.org.uk. Best wishes, Simon Lucas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From bower at uthscsa.edu Sat Jan 25 10:08:02 2014
From: bower at uthscsa.edu (james bower)
Date: Sat, 25 Jan 2014 09:08:02 -0600
Subject: Connectionists: Brain-like computing fanfare and big data fanfare
In-Reply-To: <86FC9364-F509-4037-B24C-6AAF72BD119B@gmail.com>
References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <86FC9364-F509-4037-B24C-6AAF72BD119B@gmail.com>
Message-ID: <7E71D62D-F4AB-40B9-9EC3-F73DE1828A4A@uthscsa.edu>

Of course, but imagine the nature of the problem if you didn't have the focus provided by the theory - you know what the needle in the haystack is supposed to look like. The brain project intends to simply construct the haystack (literally). Furthermore, there isn't any "force" (I apologize) like selection, in the biological sense, organizing the structure of your data. Another (of many) soapboxes I can get on is the lack of comparative and evolutionary thinking in neuroscience - made much, much worse now by "Translational" science. So, your big-data sorting problem might be more significant than ours, if we had the right kinds of models. I would love to find that out.

Jim

BTW, I am very aware that the NN efforts have resulted in technology that is, in fact, useful. This has been pointed out by several "off list" comments I have gotten, along the following lines:

> The big difference is that now we justify the "neural" nets not by the hope that they are brain-like but by the fact that for a growing number of tricky computations they work better than any of the alternative technologies.
to which I replied:

Absolutely and no doubt - I love the voice recognition on my iPhone. However, did getting there really require, intellectually, the faux-neuroscience argument - or was that only a way to rally the troops and get the generals to pay :-) (My brain still doesn't mess up as much as Siri does, but it is getting closer :-). I wish she also knew how to wash the dishes or balance the grocery bags :-) )

And BTW, I actually found the NIPS meeting to be much more honest than in the old days - little faux neuroscience - turns out a visit by Zuckerberg is more than enough to rally the troops. :-)

On Jan 25, 2014, at 3:57 AM, Balázs Kégl wrote:

>> Forbes magazine estimated that finding the Higgs Boson cost over $13BB, conservatively. The Higgs experiment was absolutely the opposite of a Big Data experiment - in fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment is all the more remarkable because it had the nearly unified support of the high energy physics community, not that there weren't and aren't skeptics, but still, remarkable that the large majority could agree on the undertaking and effort. The reason is, of course, that there was a theory - that dealt with the particulars and the details - not generalities.
>
> I agree with you on your argument for needing a model to collect data. At the same time, the LHC is also probably a good example of how, even with a model, you end up with huge data sets. The LHC generates petabytes of data per year, and this is after real-time filtering of most of the uninteresting collision events (a cut of roughly six orders of magnitude). Ironically (for this discussion), the analysis of these petabytes makes good use of ML technologies developed in the 90s (they mostly use boosted decision trees, but neural networks are also popular).
>
> Balázs
>
> --
> Balazs Kegl
> Research Scientist (DR2)
> Linear Accelerator Laboratory
> CNRS / University of Paris Sud
> http://users.web.lal.in2p3.fr/kegl

Dr. James M. Bower Ph.D.
Professor of Computational Neurobiology
Barshop Institute for Longevity and Aging Studies.
15355 Lambda Drive
University of Texas Health Science Center
San Antonio, Texas 78245
Phone: 210 382 0553
Email: bower at uthscsa.edu
Web: http://www.bower-lab.org
twitter: superid101
linkedin: Jim Bower

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jose at psychology.rutgers.edu Sat Jan 25 13:48:12 2014
From: jose at psychology.rutgers.edu (Stephen José Hanson)
Date: Sat, 25 Jan 2014 13:48:12 -0500
Subject: Connectionists: Brain-like computing fanfare and big data fanfare
In-Reply-To: <7C477036-B425-485D-998E-B7BB1CF34376@uthscsa.edu>
References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1390664611.3420.15.camel@sam> <7C477036-B425-485D-998E-B7BB1CF34376@uthscsa.edu>
Message-ID: <1390675692.3420.33.camel@sam>

Yes, much of fMRI/MEG can be hyped and misused... like most technologies. But to be fair, there is interesting work going on with the connectome and with "big data". And we are just at the tipping point of the deluge of this stuff... so if you didn't like brain mapping in the last 10 years... time to disconnect! But here's an example of something we thought would be a model of this type of research: an early version of this was a study that Russ Poldrack and I did a few years ago, using 130 brains, with 8 different tasks (an equal number of subjects per task, same magnet, same environment, etc.), and used a large-margin classifier (SVM) to predict the task the subjects were doing using only the whole-brain (400k voxels) distributed pattern of activity. We were able to decode task state with 80% accuracy under cross-validation on held-out brains. If one looks at the representations recovered, it begins to give one a sense of the difficulty of matching brain activity with tasks. In some ways we don't really have an epistemology of tasks (cognitive or perceptual), and they don't really match up to the constituent structure of the brain. Of course it's presumptive that somehow our arbitrary tasks have something to do with brain computation in a factorial (linear!) way.
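[For readers curious about the shape of this kind of analysis, here is a minimal, self-contained sketch of held-out task decoding on synthetic data. It is NOT the study's pipeline: it uses a nearest-centroid linear classifier in place of the SVM (to keep the sketch dependency-free), and all sizes (8 tasks, 16 "subjects" per task, 400 voxels rather than 400k) are illustrative stand-ins.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the study design: n_tasks task conditions,
# several "subjects" per task, each subject one whole-brain voxel pattern.
n_tasks, subs_per_task, n_voxels = 8, 16, 400  # 400 voxels, not 400k, for speed

# Each task gets a weak task-specific spatial signature buried in noise.
signatures = rng.normal(0.0, 1.0, (n_tasks, n_voxels))
X = np.vstack([sig + rng.normal(0.0, 2.0, (subs_per_task, n_voxels))
               for sig in signatures])
y = np.repeat(np.arange(n_tasks), subs_per_task)

def loo_accuracy(X, y):
    """Leave-one-subject-out cross-validation with a nearest-centroid
    linear classifier (a dependency-free stand-in for the SVM)."""
    correct = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i          # hold out subject i
        centroids = np.array([X[train & (y == t)].mean(axis=0)
                              for t in range(n_tasks)])
        pred = np.argmin(((X[i] - centroids) ** 2).sum(axis=1))
        correct += pred == y[i]
    return correct / len(y)

acc = loo_accuracy(X, y)
print(f"held-out decoding accuracy: {acc:.2f} (chance = {1/n_tasks:.2f})")
```

[The point of the sketch is the validation protocol, not the classifier: the held-out brain never contributes to the class centroids, so above-chance accuracy reflects a genuinely generalizing task signature rather than overfitting 400k voxels.]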
Here's the paper if you have interest in the details:
http://nwkpsych.rutgers.edu/~jose/psych_sciPHH.pdf

Cheers, Steve

On Sat, 2014-01-25 at 11:05 -0600, james bower wrote:
> Hi Jose,
>
> Ah, neuroimaging - don't get me started. Not all, but a great deal of neuroimaging has become a modern form of phrenology IMHO, distorting not only neuroscience but, it turns out, increasingly business too. To wit:
>
> At present I am actually much more concerned (and involved) with the use of brain imaging in what has come to be called "Neuro-marketing". Many on this list are perhaps not aware, but while we worry about the effect of over-interpretation of neuroimaging data within neuroscience, the effect of this kind of data in the business world is growing, and not good. My guess is that those of you in the United States will have noted the rather large and absurd marketing campaign by Lumosity and the sellers of other "brain training" games. A number of neuroscientists are actually now getting into this business.
>
> As some of you know, almost as long as I have been involved in computational neuroscience, I have also been involved in exploring the use of games for children's learning. In the game/learning world, the misuse of neuroscience, and especially brain imaging, has become excessive. It wouldn't be appropriate to belabor this point on this list - although the use of neuroscience by the NN community does, in my view, often cross over into a kind of neuro-marketing.
> For those interested in the more general abuses of neuro-marketing, here is a link to the first-ever session I organized in the game development world, based on my work as a neurobiologist:
>
> http://www.youtube.com/watch?v=Joqmf4baaT8&list=PL1G85ERLMItAA0Bgvh0PoZ5iGc6cHEv6f&index=16
>
> As setup for that video, you should know that in his keynote address the night before, Jesse Schell (of Schell Games and CMU) introduced his talk by saying that he was going to tell the audience what neuroscientists have figured out about human brains, going on to claim that they (we) have discovered that human brains come in two forms: goat brains and sheep brains. Of course the talk implied that the goats were in the room and the sheep were out there to be sold to. (Although, as I noted on the twitter feed at the time, there was an awful lot of "baaing" going on in the audience :-) )
>
> Anyway, the second iteration of my campaign to try to bring some balance and sanity to neuro-marketing will take place at SxSW in Austin in March, in another session I have organized on the subject:
>
> http://schedule.sxsw.com/2014/events/event_IAP22511
>
> If you happen to be in Austin for SxSW feel free to stop by. :-)
>
> The larger point, I suppose, is that while we debate these things within our own community, our debate and our claims have unintended consequences in larger society, with companies like Lumosity, in effect, marketing to the baby boomers the (false) idea that, using "the science of neuroplasticity" and doing something as simple as playing games "designed by neuroscientists", they can revert their brains to teenage form - fMRI and neuropsychology used extensively as evidence.
>
> Perhaps society has become so accustomed to outlandish claims and overselling that they won't hold us accountable.
>
> Or perhaps they will.
>
> Jim
>
> p.s.
(always a ps) I have also recently proposed that we declare a moratorium on neuroimaging studies until we at least know how the signal is related to actual neural activity. Seems rather foolish to base so much speculation and interpretation on a signal we don't understand. Easy enough to poo-poo cell spikers - but to my knowledge, there is no evidence that neural computing is performed through the generation of areas of red, yellow, green and blue. :-)

On Jan 25, 2014, at 9:43 AM, Stephen José Hanson wrote:

> Indeed. It's like we never stopped arguing about this for the last 30 years! Maybe this is a brain principle: integrated, fossilized views of brain principles.
>
> I actually agree with John... and disagree with you, Jim... surprise, surprise... seems like old times.
>
> The most disconcerting thing about the emergence of the new new neural network field(s) is that the NIH Connectome RFPs contain language about large-scale network functions... and yet when Program Managers are directly asked whether fMRI or any neuroimaging methods would be compliant with the RFP... the answer is "NO".
>
> So once the neuroscience cell spikers get done analyzing 1000 or 10000 or even 1M neurons at a circuit level... we still won't know why someone makes decisions about the shoes they wear, much less any other mental function! Hopefully neuroimaging will be relevant again.
>
> Just saying.
>
> Cheers.
>
> Steve
>
> PS. Hi Gary! Dijon!
>
> Stephen José
Hanson
> Director RUBIC (Rutgers Brain Imaging Center)
> Professor of Psychology
> Member of Cognitive Science Center (NB)
> Member EE Graduate Program (NB)
> Member CS Graduate Program (NB)
> Rutgers University
>
> email: jose at psychology.rutgers.edu
> web: psychology.rutgers.edu/~jose
> lab: www.rumba.rutgers.edu
> fax: 866-434-7959
> voice: 973-353-3313 (RUBIC)
>
> On Fri, 2014-01-24 at 17:31 -0600, james bower wrote:
>
> > Well, well - remarkable!!! An actual debate on connectionists - just like the old days - in fact, REMARKABLY like the old days.
> >
> > Same issues - how "brain-like" is "brain-like", and how much hype is "brain-like" generating by itself. How much do engineers really know about neuroscience, and how much do neurobiologists really know about the brain (both groups tend to claim they know a lot - now and then).
> >
> > I went to the NIPS meeting this year for the first time in more than 25 years. Some of the older timers on connectionists may remember that I was one of the founding members of NIPS - and some will also remember that a few years of trying to get some kind of real interaction between neuroscience and then "neural networks" led me to give up and start, with John Miller, the CNS meetings - focused specifically on computational neuroscience. Another story -
> >
> > At NIPS this year, there was a very large focus on "big data", of course, with "machine learning" having largely replaced "Neural Networks" in most talk titles. I was actually a panelist (most had no idea of my early involvement with NIPS) on a workshop on big data in on-line learning (generated by edX, Khan Academy, etc.).
I was interested because, for 15 years, I have also been running Numedeon Inc., whose virtual world for kids, Whyville.net, was the first game-based immersive world, and is still one of the biggest and most innovative. (No MOOCs there.)
> >
> > From the panel I made the assertion, as I had, in effect, many years ago, that if you have a big data problem, it is likely you are not taking anything resembling a "brain-like" approach to solving it. The version almost 30 years ago, when everyone was convinced that the relatively simple Hopfield Network could solve all kinds of hard problems, was my assertion that, in fact, simple "Neural Networks", or simple Neural Network learning rules, were unlikely to work very well, because, almost certainly, you have to build a great deal of knowledge about the nature of the problem into all levels (including the input layer) of your network to get it to work.
> >
> > Now, many years later, everyone seems convinced that you can figure things out by amassing an enormous amount of data and working on it.
> >
> > It has been a slow revolution (it may actually not even be at the revolutionary stage yet), BUT it is very likely that the nervous system (like all model-based systems) doesn't collect tons of data to figure things out with feedforward processing and filtering, but instead collects the data it thinks it needs to confirm what it already believes to be true. In other words, it specifically avoids the big data problem at all costs. It is willing to suffer the consequence that occasionally (more and more recently, for me), you end up talking to someone for 15 minutes before you realize that they are not the person you thought they were.
> > >
> > > An enormous amount of engineering and neuroscience continues to think that the feedforward pathway runs from the sensors to the inside - rather than seeing this as the actual feedback loop. That might sound to some like a semantic quibble, but I assure you it is not.
> > >
> > > If you believe, as I do, that the brain solves very hard problems in very sophisticated ways, that involve in some sense the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point.
> > >
> > > What that means, inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention, experimentally and in our models, to the actual detailed anatomy and physiology of the brain's circuits and cells.
> > >
> > > I saw none of that at NIPS - and in fact, I see less and less of that at the CNS meeting as well.
> > >
> > > All too easy to simplify, pontificate, and sell.
> > >
> > > So, I sympathize with Juyang Weng's frustration.
> > >
> > > If there is any better evidence that we are still in the dark, it is that we are still having the same debate 30 years later, with the same ruffled feathers, the same bold assertions (mine included), and the same seeming lack of progress.
> > >
> > > If anyone is interested, here is a chapter I recently wrote for the book I edited on 20 years of progress in computational neuroscience (Springer), on the last 40 years of trying to understand the workings of a single neuron (the cerebellar Purkinje cell) using models: https://www.dropbox.com/s/5xxut90h65x4ifx/272602_1_En_5_DeltaPDF%20copy.pdf
> > >
> > > Perhaps some sense of how far we have yet to go.
> > >
> > > Jim Bower
> > >
> > > On Jan 24, 2014, at 4:00 PM, Ralph Etienne-Cummings wrote:
> > > >
> > > > Hey, I am happy when our taxpayer money, of which I contribute way more than I get back, funds any science in all branches of the government.
> > > >
> > > > Neuromorphic and brain-like computing is on the rise ... Let's please not shoot ourselves in the foot with in-fighting!!
> > > >
> > > > Thanks,
> > > > Ralph's Android
> > > >
> > > > On Jan 24, 2014 4:13 PM, "Juyang Weng" wrote:
> > > >
> > > > Yes, Gary, you are correct politically: do not upset the "emperor", since he is always right and he never falls behind the literature.
> > > >
> > > > But then no clear message can ever get across. Falling behind the literature is still the fact. More, the entire research community that does brain research falls badly behind the literature of the necessary disciplines. The current U.S. infrastructure of this research community does not fit at all the brain subject it studies! This is not a joking matter. We need to wake up, please.
> > > >
> > > > Azriel Rosenfeld criticized the entire computer vision field in his invited talk at CVPR in the early 1980s: "just doing business as usual" and "more or less the same". However, the entire computer vision field still has not woken up after 30 years! As another example, I respect your colleague Terry Sejnowski, but I must openly say that I object to his "we need more data" as the key message for the U.S. BRAIN Project. This is another example of "just doing business as usual", so that everybody will not be against you.
> > > > Several major disciplines are closely related to the brain, but the scientific community is still very much fragmented, not willing to wake up. Some of our government officials only say superficial words like "Big Data" because that is what we like to hear. This cost is too high for our taxpayers.
> > > >
> > > > -John
> > > >
> > > > On 1/24/14 2:19 PM, Gary Cottrell wrote:
> > > > >
> > > > > Hi John -
> > > > >
> > > > > It's great that you have an over-arching theory, but if you want people to read it, it would be better not to disrespect people in your emails. You say you respect Matthew, but then you accuse him of falling behind in the literature because he hasn't read your book. Politeness (and modesty!) will get you much farther than the tone you have taken.
> > > > >
> > > > > g.
> > > > >
> > > > > On Jan 24, 2014, at 6:27 PM, Juyang Weng wrote:
> > > > > >
> > > > > > Dear Matthew:
> > > > > >
> > > > > > My apology if my words are direct, so that people with short attention spans can quickly get my points. I do respect you.
> > > > > >
> > > > > > You wrote: "to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks."
> > > > > >
> > > > > > Your statement is absolutely not true. Your term "brain-like way" is as old as "brain-like computing". Read about the 14 neurocomputers built by 1988 in Robert Hecht-Nielsen, "Neurocomputing: picking the human brain", IEEE Spectrum 25(3), March 1988, pp. 36-41. Hardware will not solve the fundamental problem of our currently severe lack of understanding of the brain, no matter how many computers are linked together. Neither will the current "Big Data" fanfare from NSF in the U.S. The IBM brain project has similar fundamental flaws, and the IBM team lacks key experts.
> > > > > >
> > > > > > Some of the NSF managers have been turning blind eyes to breakthrough work on brain modeling for over a decade, but they want to pour more taxpayers' money into the "Big Data" fanfare and other "try again" fanfares. It is a scientific shame for NSF in a developed country like the U.S. to play such politics without real science, causing another large developing country like China to also echo "Big Data". "Big Data" used to be called "Large Data", well known in Pattern Recognition for many years. Stop playing shameful politics in science!
> > > > > >
> > > > > > You wrote: "Nobody is claiming a `brain-scale theory that bridges the wide gap,' or even close."
> > > > > >
> > > > > > To say that, you have not read the book Natural and Artificial Intelligence. You are falling behind the literature as badly as some of our NSF project managers. With their lack of knowledge, they did not understand that the "bridge" was in print on their desks and in the literature.
> > > > > >
> > > > > > -John
> > > > > >
> > > > > > On 1/23/14 6:15 PM, Matthew Cook wrote:
> > > > > > >
> > > > > > > Dear John,
> > > > > > >
> > > > > > > I think all of us on this list are interested in brain-like computing, so I don't understand your negativity on the topic.
> > > > > > >
> > > > > > > Many of the speakers are involved in efforts to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks.
> > > > > > > I suspect the phrase "brain-like computing" is > > > > > > > intended as an umbrella term that can cover all of > > > > > > > these efforts. > > > > > > > > > > > > > > > > > > > > > I think you are reading far more into the > > > > > > > announcement than is there. Nobody is claiming a > > > > > > > "brain-scale theory that bridges the wide gap," or > > > > > > > even close. To the contrary, the announcement is > > > > > > > very cautious, saying that intense research is > > > > > > > "gradually increasing our understanding" and > > > > > > > "beginning to shed light on the human brain". In > > > > > > > other words, the research advances slowly, and we > > > > > > > are at the beginning. There is certainly no claim > > > > > > > that any of the speakers has finished the job. > > > > > > > > > > > > > > > > > > > > > Similarly, the announcement refers to "successful > > > > > > > demonstration of some of the underlying principles > > > > > > > [of the brain] in software and hardware", which > > > > > > > implicitly acknowledges that we do not have all > > > > > > > the principles. There is nothing like a claim > > > > > > > that anyone has enough principles to "explain > > > > > > > highly integrated brain functions". > > > > > > > > > > > > > > > > > > > > > You are concerned that this workshop will avoid > > > > > > > the essential issue of the wide gap between > > > > > > > neuron-like computing and highly integrated brain > > > > > > > functions. What makes you think it will avoid > > > > > > > this? We are all interested in filling this gap, > > > > > > > and the speakers (well, the ones who I know) all > > > > > > > either work on this, or work on supporting people > > > > > > > who work on this, or both. > > > > > > > > > > > > > > > > > > > > > This looks like it will be a very nice workshop, > > > > > > > with talks from leaders in the field on a variety > > > > > > > of topics, and I wish I were able to attend it. 
> > > > > > > Matthew
> > > > > > >
> > > > > > > On Jan 23, 2014, at 7:08 PM, Juyang Weng wrote:
> > > > > > > >
> > > > > > > > Dear Anders,
> > > > > > > >
> > > > > > > > Interesting topic about the brain! But "Brain-Like Computing" is misleading, because neural networks have been around for at least 70 years.
> > > > > > > >
> > > > > > > > I quote: "We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing."
> > > > > > > >
> > > > > > > > What are the underlying principles? I am concerned that projects like "Brain-Like Computing" avoid essential issues: the wide gap between neuron-like computing and well-known highly integrated brain functions. Continuing this avoidance would again create a bad name for "brain-like computing", just as such behaviors did for "neural networks".
> > > > > > > >
> > > > > > > > Henry Markram criticized IBM's brain project, which does miss essential brain principles, but has he published such principles? Will modeling individual neurons more and more precisely explain highly integrated brain functions? From what I know, definitely not - not by far.
> > > > > > > >
> > > > > > > > Has any of your 10 speakers published any brain-scale theory that bridges the wide gap? Are you aware of any such published theories?
> > > > > > > > > > > > > > > > I am sorry for giving a CC to the list, but many > > > > > > > > on the list said that they like to hear > > > > > > > > discussions instead of just event > > > > > > > > announcements. > > > > > > > > > > > > > > > > -John > > > > > > > > > > > > > > > > > > > > > > > > On 1/13/14 12:14 PM, Anders Lansner wrote: > > > > > > > > > > > > > > > > > > > > > > > > > Workshop on Brain-Like Computing, February 5-6 > > > > > > > > > 2014 > > > > > > > > > > > > > > > > > > The exciting prospects of developing > > > > > > > > > brain-like information processing is one of > > > > > > > > > the Deans Forum focus areas. > > > > > > > > > As a means to encourage progress in this > > > > > > > > > research area a Workshop is arranged February > > > > > > > > > 5th-6th 2014 on KTH campus in Stockholm. > > > > > > > > > > > > > > > > > > The human brain excels over contemporary > > > > > > > > > computers and robots in processing real-time > > > > > > > > > unstructured information and uncertain data as > > > > > > > > > well as in controlling a complex mechanical > > > > > > > > > platform with multiple degrees of freedom like > > > > > > > > > the human body. Intense experimental research > > > > > > > > > complemented by computational and informatics > > > > > > > > > efforts are gradually increasing our > > > > > > > > > understanding of underlying processes and > > > > > > > > > mechanisms in small animal and mammalian > > > > > > > > > brains and are beginning to shed light on the > > > > > > > > > human brain. We are now approaching the point > > > > > > > > > when our knowledge will enable successful > > > > > > > > > demonstrations of some of the underlying > > > > > > > > > principles in software and hardware, i.e. > > > > > > > > > brain-like computing. 
> > > > > > > > > This workshop assembles experts, from the partners and also other leading names in the field, to provide an overview of the state of the art in theoretical, software, and hardware aspects of brain-like computing.
> > > > > > > > >
> > > > > > > > > List of speakers (speaker - affiliation):
> > > > > > > > >
> > > > > > > > > Giacomo Indiveri - ETH Zürich
> > > > > > > > > Abigail Morrison - Forschungszentrum Jülich
> > > > > > > > > Mark Ritter - IBM Watson Research Center
> > > > > > > > > Guillermo Cecchi - IBM Watson Research Center
> > > > > > > > > Anders Lansner - KTH Royal Institute of Technology
> > > > > > > > > Ahmed Hemani - KTH Royal Institute of Technology
> > > > > > > > > Steve Furber - University of Manchester
> > > > > > > > > Kazuyuki Aihara - University of Tokyo
> > > > > > > > > Karlheinz Meier - Heidelberg University
> > > > > > > > > Andreas Schierwagen - Leipzig University
> > > > > > > > >
> > > > > > > > > For signing up to the Workshop please use the registration form found at http://bit.ly/1dkuBgR
> > > > > > > > >
> > > > > > > > > You need to sign up before January 28th.
> > > > > > > > >
> > > > > > > > > Web page: http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038
> > > > > > > > >
> > > > > > > > > ******************************************
> > > > > > > > > Anders Lansner
> > > > > > > > > Professor in Computer Science, Computational Biology
> > > > > > > > > School of Computer Science and Communication
> > > > > > > > > Stockholm University and Royal Institute of Technology (KTH)
> > > > > > > > > ala at kth.se, +46-70-2166122
> > > > > > > >
> > > > > > > > --
> > > > > > > > Juyang (John) Weng, Professor
> > > > > > > > Department of Computer Science and Engineering
> > > > > > > > MSU Cognitive Science Program and MSU Neuroscience Program
> > > > > > > > 428 S Shaw Ln Rm 3115
> > > > > > > > Michigan State University
> > > > > > > > East Lansing, MI 48824 USA
> > > > > > > > Tel: 517-353-4388
> > > > > > > > Fax: 517-432-1061
> > > > > > > > Email: weng at cse.msu.edu
> > > > > > > > URL: http://www.cse.msu.edu/~weng/
> > > > > > > > ----------------------------------------------
> > > > >
> > > > > [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271]
> > > > >
> > > > > Gary Cottrell 858-534-6640 FAX: 858-534-7029
> > > > >
> > > > > My schedule is here: http://tinyurl.com/b7gxpwo
> > > > >
> > > > > Computer Science and Engineering 0404
> > > > > IF USING FED EX INCLUDE THE FOLLOWING LINE:
> > > > > CSE Building, Room 4130
> > > > > University of California San Diego
> > > > > 9500 Gilman Drive # 0404
> > > > > La Jolla, Ca.
92093-0404 > > > > > > > > > > > > > > > Things may come to those who wait, but only the things > > > > > left by those who hustle. -- Abraham Lincoln > > > > > > > > > > > > > > > "Of course, none of this will be easy. If it was, we > > > > > would already know everything there was about how > > > > > the brain works, and presumably my life would be > > > > > simpler here. It could explain all kinds of things > > > > > that go on in Washington." -Barack Obama > > > > > > > > > > > > > > > "Probably once or twice a week we are sitting at > > > > > dinner and Richard says, 'The cortex is hopeless,' and > > > > > I say, 'That's why I work on the worm.'" Dr. Bargmann > > > > > said. > > > > > > > > > > "A grapefruit is a lemon that saw an opportunity and > > > > > took advantage of it." - note written on a door in > > > > > Amsterdam on Lijnbaansgracht. > > > > > > > > > > "Physical reality is great, but it has a lousy search > > > > > function." -Matt Tong > > > > > > > > > > "Only connect!" -E.M. Forster > > > > > > > > > > "You always have to believe that tomorrow you might > > > > > write the matlab program that solves everything - > > > > > otherwise you never will." -Geoff Hinton > > > > > > > > > > > > > > > "There is nothing objective about objective functions" > > > > > - Jay McClelland > > > > > > > > > > "I am awaiting the day when people remember the fact > > > > > that discovery does not work by deciding what you want > > > > > and then discovering it." 
> > > > > -David Mermin
> > > > >
> > > > > Email: gary at ucsd.edu
> > > > > Home page: http://www-cse.ucsd.edu/~gary/
> > >
> > > Dr. James M. Bower Ph.D.
> > > Professor of Computational Neurobiology
> > > Barshop Institute for Longevity and Aging Studies
> > > 15355 Lambda Drive
> > > University of Texas Health Science Center
> > > San Antonio, Texas 78245
> > >
> > > Phone: 210 382 0553
> > > Email: bower at uthscsa.edu
> > > Web: http://www.bower-lab.org
> > > twitter: superid101
> > > linkedin: Jim Bower

--
Stephen José Hanson
Director RUBIC (Rutgers Brain Imaging Center)
Professor of Psychology
Member of Cognitive Science Center (NB)
Member EE Graduate Program (NB)
Member CS Graduate Program (NB)
Rutgers University

email: jose at psychology.rutgers.edu
web: psychology.rutgers.edu/~jose
lab: www.rumba.rutgers.edu
fax: 866-434-7959
voice: 973-353-3313 (RUBIC)

From bwyble at gmail.com Sat Jan 25 15:00:57 2014
From: bwyble at gmail.com (Brad Wyble)
Date: Sat, 25 Jan 2014 15:00:57 -0500
Subject: Connectionists: Brain-like computing fanfare and big data fanfare
In-Reply-To:
References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu>
Message-ID:

I am extremely pleased to see such vibrant discussion here, and my thanks to Juyang for getting the ball rolling.
Jim, I appreciate your comments and I agree in large measure, but I have always disagreed with you regarding the necessity of simulating everything down to a lowest common denominator. Like you, I enjoy drawing lessons from the history of other disciplines, but unlike you, I don't think the analogy between neuroscience and physics is all that clear-cut. The two fields deal with vastly different levels of complexity, and therefore I don't think it should be expected that they will (or should) follow the same trajectory.

To take your Purkinje cell example, I imagine that there are those who view any such model that lacks an explicit simulation of the RNA as being incomplete. To such a person, your models would also be unfit for the literature. So would we then change the standards such that no model can be published unless it includes an explicit simulation of the RNA? And why stop there? Where does it end? In my opinion, we can't make effective progress in this field if everyone is bound to the molecular level.

I really think that neuroscience presents a fundamental challenge that is not present in physics, which is that progress can only occur when theory is developed at different levels of abstraction that overlap with one another. The challenge is not how to force everyone to operate at the same level of formal specificity, but how to allow effective communication between researchers operating at different levels.

In aid of meeting this challenge, I think that our field should take more inspiration from engineering, a model-based discipline that already has to work simultaneously at many different scales of complexity and abstraction.

Best,
Brad Wyble

On Sat, Jan 25, 2014 at 9:59 AM, james bower wrote:

> Thanks for your comments Thomas, and good luck with your effort.
>
> I can't refrain myself from making the probably culturist remark that this seems a very practical approach.
>
> I have for many years suggested that those interested in advancing biology in general, and neuroscience in particular, to a "paradigmatic" as distinct from a descriptive / folkloric science would benefit from understanding this transition as physics went through it in the 15th and 16th centuries. In many ways, I think that is where we are today, although with perhaps the decided disadvantage that we have a lot of physicists around who, again in my view, don't really understand the origins of their own science. By that I mean that they don't understand how much of their current scientific structure, for example the relatively clean separation between "theorists" and "experimentalists", is dependent on the foundation built by those (like Newton) who were both in an earlier time. Once you have a solid underlying computational foundation for a science, then you have the luxury of this kind of specialization, as there is a framework that ties it all together. The Higgs effort is a very visible recent example.
>
> Neuroscience has nothing of the sort. As I point out in the article I linked to in my first posting, while it was first proposed 40 years ago (by Rodolfo Llinas) that the cerebellar Purkinje cell has active dendrites (i.e. that there are voltage-dependent ion channels in the dendrite, not directly associated with synapses, that govern its behavior), and 40 years of anatomically and physiologically realistic modeling has been necessary to start to understand what they do, many cerebellar modeling efforts today simply ignore these channels. While that again, to many on this list, may seem too far buried in the details, these voltage-dependent channels make the Purkinje cell the computational device that it is.
>
> Recently, I was asked to review a cerebellar modeling paper in which the authors actually acknowledged that their model lacked these channels because they would have been too computationally expensive to include. Sadly for those authors, I was asked to review the paper for the usual reason: several of our papers were referenced accordingly. After of course complimenting them on the fact that they were honest (and knowledgeable) enough to have remarked that their Purkinje cells weren't really Purkinje cells, I had to reject the paper for that same reason.
>
> As I said, they likely won't make that mistake again - and will very likely get away with it.
>
> Imagine a comparable situation in a field (like physics) which has established a structural base for its enterprise: "We found it computationally expedient to ignore the second law of thermodynamics in our computations - sorry". BTW, I know that details are ignored all the time in physics as one deals with descriptions at different levels of scale - although even there, the field clearly would like to have a way to link across different levels of scale. I would claim, however, that that is precisely the "trick" that biology uses to "beat" the second law - linking all levels of scale together - another reason why you can't ignore the details in biological models if you really want to understand how biology works. (Too cryptic a comment, perhaps.)
>
> Anyway, my advice would be to consider how physics made this transition many years ago, and ask how neuroscience (and biology) can do so now. Key points, I think, are:
> - You need to produce students who are REALLY both experimental and theoretical (like Newton). (And that doesn't mean programs that "import" physicists and give them enough biology to believe they know what they are doing, or programs that link experimentalists to physicists to solve their computational problems.)
> - You need to base the efforts on models (and therefore mathematics) of sufficient complexity to capture the physical reality of the system being studied (as Kepler was forced to do to make the sun-centric model of the solar system even close to as accurate as the previous earth-centered system).
> - You need to build a new form of collaboration and communication that can support the complexity of those models. Fundamentally, we continue to use the publication system (short papers in a journal) that was invented as part of the transformation for physics way back then. Our laboratories are also largely isolated and non-cooperative, more appropriate for studying simpler things (like those in physics). Fortunately for us, we have a new communication tool (the Internet), although, as can be expected, we are mostly using it to reimplement old-style communication systems (e-journals) with a few twists (supplemental materials).
> - Funding agencies need to insist that anyone doing theory be linked REALLY to the experimental side, and vice versa. I proposed a number of years ago to NIH that they would make it into the history books if they simply required, the following Monday, that any submitted experimental grant include a REAL theoretical and computational component. Sadly, they interpreted that as meaning that P.I.s should state "an hypothesis" - which itself is remarkable, because most of the "hypotheses" I see stated in federal grants are actually statements of what the P.I. believes to be true. Don't get me started on human imaging studies.
arggg > - As long as we are talking about what funding agencies can do, how about > the following structure for grants - all grants need to be submitted > collaboratively by two laboratories who have different theories (better > models) about how a particular part of the brain works. The grant should > support at set of experiments, that both parties agree distinguish between > their two points of view. All results need to be published with joint > authorship. In effect that is how physics works - given its underlying > structure. > - You need to get rid, as quickly as possible, the pressure to ?translate? > neuroscience research explicitly into clinical significance - we are not > even close to being able to do that intentionally - and the pressure (which > is essentially a give away to the pharma and bio-tech industries anyway) is > forcing neurobiologists to link to what is arguably the least scientific > form of research there is - clinical research. It just has to be the case > that society needs to understand that an investment in basic research will > eventually result in all the wonderful outcomes for humans we would all > like, but this distortion now is killing real neuroscience just at a > critical time, when we may finally have the tools to make the transition to > a paradigmatic science. > As some of you know, I have been all about trying to do these things for > many years - with the GENESIS project, with the original CNS graduate > program at Caltech, with the CNS meetings, (even originally with NIPS) and > with the first ?Methods in Computational Neuroscience Course" at the > Marine Biological laboratory, whose latest incarnation in Brazil (LASCON) > is actually wrapping up next week, and of course with my own research and > students. Of course, I have not been alone in this, but it is remarkable > how little impact all that has had on neuroscience or neuro-engineering. 
> I have to say, honestly, that the strong tendency seems to be for these efforts to snap back to non-realistic, non-biologically based modeling and theoretical efforts.
>
> Perhaps Canada, in its usual practical and reasonable way (sorry), can figure out how to do this right.
>
> I hope so.
>
> Jim
>
> p.s. I have also been proposing recently that we scuttle the "intro neuroscience" survey courses in our graduate programs (religious instruction) and instead organize an introductory course built around the history of the discovery of the origin of the action potential, which culminated in the first (and last) Nobel prize for work in computational neuroscience, for the Hodgkin-Huxley model. The 50th anniversary of that prize was celebrated last year, and the year before I helped to organize a meeting celebrating the 60th anniversary of the publication of the original papers (which I care much more about anyway). That meeting was, I believe, the first meeting in neuroscience ever organized around a single (mathematical) model or theory - and in organizing it, I required all the speakers to show the HH model on their first slide, indicating which term or feature of the model their work was related to. Again, a first - but possible, as this is about the only "community model" we have.
>
> Most neuroscience textbooks today don't include that equation (a second-order differential) and present the HH model primarily as a description of the action potential. Most theorists regard the HH model as a prime example of how progress can be made by ignoring the biological details. Both views and interpretations are historically and practically incorrect. In my opinion, if you can't handle the math in the HH model, you shouldn't be a neurobiologist, and if you don't understand the profound impact of HH's knowledge and experimental study of the squid giant axon on the model, you shouldn't be a neuro-theorist either. Just saying.
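[Since the thread keeps returning to the HH model, here is a minimal numerical sketch of it: a plain forward-Euler integration of the point-membrane Hodgkin-Huxley equations with the standard textbook squid-axon parameters (modern -65 mV resting convention). This is an illustrative assumption on the editor's part - it is not code from GENESIS or from anyone in this thread.]

```python
import math

def hh_sim(i_ext=10.0, t_stop=50.0, dt=0.01):
    """Forward-Euler integration of the Hodgkin-Huxley point membrane.

    Units: mV, ms, mS/cm^2, uA/cm^2, uF/cm^2 (standard squid-axon fit).
    Returns the membrane-potential trace as a list.
    """
    c_m = 1.0
    g_na, g_k, g_l = 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.4

    # Voltage-dependent rate functions for the m, h, n gating variables.
    def a_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
    def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
    def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    def a_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

    v = -65.0
    # Start each gate at its steady-state value for the resting potential.
    m = a_m(v) / (a_m(v) + b_m(v))
    h = a_h(v) / (a_h(v) + b_h(v))
    n = a_n(v) / (a_n(v) + b_n(v))

    trace = []
    for _ in range(int(t_stop / dt)):
        i_na = g_na * m**3 * h * (v - e_na)   # fast sodium current
        i_k = g_k * n**4 * (v - e_k)          # delayed-rectifier potassium
        i_l = g_l * (v - e_l)                 # leak
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        trace.append(v)
    return trace

trace = hh_sim()
print(max(trace))  # spikes overshoot 0 mV with this drive
```

[With a sustained 10 uA/cm^2 current the membrane fires repetitively; the point made above is that every term and rate constant here came from voltage-clamp measurements on the squid giant axon, not from abstract curve fitting.]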
> :-)
>
> On Jan 25, 2014, at 6:58 AM, Thomas Trappenberg wrote:
>
> James, enjooyed your writing.
>
> So, what to do? We are trying to get organized in Canada and are thinking about how we fit in with your (US) and the European approaches and big money. My thought is that our advantage might be flexibility, by not having a single theme but rather a general supporting structure for theory and theory-experimental interactions. I believe the ultimate place where we want to be is to take theoretical proposals more seriously and to design specific experiments for them, like the Higgs project. (Any other suggestions? Canadians, see http://www.neuroinfocomp.ca if you are not already on there.)
>
> Also, with regard to big data, I believe that one very fascinating thing about the brain is that it can function with 'small data'.
>
> Cheers, Thomas
>
> On 2014-01-25 12:09 AM, "james bower" wrote:
>
>> Ivan, thanks for the response.
>>
>> Actually, the talks at the recent Neuroscience Meeting about the Brain Project either excluded modeling altogether - or declared that we in the US could leave it to the Europeans. I am not in the least bit nationalistic - but collecting data without having models (rather than imaginings) to indicate what to collect is simply foolish, with many examples from history to demonstrate the foolishness. In fact, one of the primary proponents (and likely beneficiaries) of this Brain Project, who gave the big talk at Neuroscience on the project (showing lots of pretty pictures), started his talk by asking: "what have we really learned since Cajal, except that there are also inhibitory neurons?" Shocking, not least because Cajal actually suggested that there might be inhibitory neurons. To quote: "Stupid is as stupid does."
>>
>> Forbes magazine estimated that finding the Higgs boson cost over $13BB, conservatively.
>> The Higgs experiment was absolutely the opposite of a Big Data experiment - in fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment is all the more remarkable because it had the nearly unified support of the high-energy physics community - not that there weren't and aren't skeptics, but still, remarkable that the large majority could agree on the undertaking and effort. The reason is, of course, that there was a theory - one that dealt with the particulars and the details, not generalities. In contrast, there is a GREAT DEAL of skepticism (me included) about the Brain Project - its politics and its effects (or lack thereof) - within neuroscience. (Of course, many people are burying their concerns in favor of tin cups - hoping.) Neuroscience has had genome envy forever - the connectome is its response - but who says it's all in the connections? (Sorry, "connectionists".) Where is the theory? Hebb? You should read Hebb if you haven't - a rather remarkable treatise, but very far from a theory.
>>
>> If you want an honest answer to your question - I have not seen any good evidence so far that the approach works, and I deeply suspect that the nervous system is very much NOT like any machine we have built or designed to date. I don't believe that Newton would have accomplished what he did had he not, first, been a remarkable experimentalist, tinkering with real things. I feel the same way about neuroscience. Having spent almost 30 years building realistic models of its cells and networks (and also doing experiments, as described in the article I linked to), we have made some small progress - but only by avoiding abstractions and paying attention to the details. Of course, most experimentalists and even most modelers have paid little or no attention.
>> We have a sociological and structural problem that, in my opinion, only the right kind of models can fix, coupled with a real commitment to the biology - in all its complexity. And, as the model I linked to tries to make clear, we also have to all agree to start working on common "community models". But, like bighorn sheep, it is much safer to stand on your own peak and make a lot of noise.
>>
>> You can predict with great accuracy the movement of the planets in the sky using circles linked to other circles - nice and easy math, and a very adaptable model (just add more circles when you need more accuracy, and invent entities like equant points, etc.). The problem is, without getting into the nasty math and reality of ellipses, you can't possibly know anything about gravity, or the origins of the solar system, or its various and eventual perturbations.
>>
>> As I have been saying for 30 years: Beware Ptolemy and curve fitting.
>>
>> The details of reality matter.
>>
>> Jim
>>
>> On Jan 24, 2014, at 7:02 PM, Ivan Raikov wrote:
>>
>> I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling approaches. While it is true that complete and detailed understanding of neurophysiology and anatomy is at the heart of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical physics, and a lot can be learned about information representation and transmission in the brain using mathematical theories about distributed communicating processes. As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they can also be used to understand at least some of the fundamental properties of brain structures and processes?
>> -Ivan Raikov
>>
>> On Sat, Jan 25, 2014 at 8:31 AM, james bower wrote:
>>
>>> [snip]
>>>
>>> An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. That might sound to some like a semantic quibble, but I assure you it is not.
>>>
>>> If you believe, as I do, that the brain solves very hard problems, in very sophisticated ways, that involve, in some sense, the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point.
>>>
>>> What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention experimentally and in our models to the actual detailed anatomy and physiology of the brain's circuits and cells.
>>
>> Dr. James M. Bower Ph.D.
>> Professor of Computational Neurobiology
>> Barshop Institute for Longevity and Aging Studies
>> 15355 Lambda Drive
>> University of Texas Health Science Center
>> San Antonio, Texas 78245
>>
>> Phone: 210 382 0553
>> Email: bower at uthscsa.edu
>> Web: http://www.bower-lab.org
>> twitter: superid101
>> linkedin: Jim Bower
>>
>> CONFIDENTIAL NOTICE:
>>
>> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient.
>> If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited, and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof, and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately.

--
Brad Wyble
Assistant Professor
Psychology Department
Penn State University
http://wyblelab.com

From minaiaa at gmail.com Sat Jan 25 15:42:01 2014
From: minaiaa at gmail.com (Ali Minai)
Date: Sat, 25 Jan 2014 15:42:01 -0500
Subject: Connectionists: Brain-like computing fanfare and big data fanfare
In-Reply-To:
References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu>
Message-ID: <-5739697215914102941@unknownmsgid>

Right on, Brad! Including details at multiple levels is important, but if we ever hope to understand mental function, there are two critical things that must also be done:

1. Study the entire brain-body system, not just the brain as an isolated system - though, of course, looking at parts of the nervous system is also useful.

2. Pay much more attention to the developmental and evolutionary aspects of the system.
We will understand the brain and its function much better if we look at it in its 'natural habitat' and in terms of its 'natural history' - an embodied, developmental, evolutionary neuroscience.

At the same time, I think we should welcome all different perspectives and investigative approaches in this great enterprise. The history of science has been cited a lot in this discussion, but one of its deepest lessons is that we never know ahead of time what approach will be most fertile. Things always seem inevitable, but only in retrospect.

Ali

Ali Minai
University of Cincinnati

Sent from my iPhone

On Jan 25, 2014, at 3:24 PM, Brad Wyble wrote:

I am extremely pleased to see such vibrant discussion here, and my thanks to Juyang for getting the ball rolling.

Jim, I appreciate your comments and I agree in large measure, but I have always disagreed with you as regards the necessity of simulating everything down to a lowest common denominator. Like you, I enjoy drawing lessons from the history of other disciplines, but unlike you, I don't think the analogy between neuroscience and physics is all that clear-cut. The two fields deal with vastly different levels of complexity, and therefore I don't think it should be expected that they will (or should) follow the same trajectory.

To take your Purkinje cell example, I imagine that there are those who view any such model that lacks an explicit simulation of the RNA as being incomplete. To such a person, your models would also be unfit for the literature. So would we then change the standards such that no model can be published unless it includes an explicit simulation of the RNA? And why stop there? Where does it end? In my opinion, we can't make effective progress in this field if everyone is bound to the molecular level.
I really think that neuroscience presents a fundamental challenge that is not present in physics: progress can only occur when theory is developed at different levels of abstraction that overlap with one another. The challenge is not how to force everyone to operate at the same level of formal specificity, but how to allow effective communication between researchers operating at different levels.

In aid of meeting this challenge, I think that our field should take more inspiration from engineering, a model-based discipline that already has to work simultaneously at many different scales of complexity and abstraction.

Best,
Brad Wyble

On Sat, Jan 25, 2014 at 9:59 AM, james bower wrote:

> Thanks for your comments, Thomas, and good luck with your effort.
>
> I can't refrain from making the probably culturist remark that this seems a very practical approach.
>
> I have for many years suggested that those interested in advancing biology in general, and neuroscience in particular, to a "paradigmatic" as distinct from a descriptive/folkloric science would benefit from understanding this transition as physics went through it in the 15th and 16th centuries. In many ways, I think that is where we are today, although perhaps with the decided disadvantage that we have a lot of physicists around who, again in my view, don't really understand the origins of their own science. By that I mean that they don't understand how much of their current scientific structure - for example, the relatively clean separation between "theorists" and "experimentalists" - is dependent on the foundation built by those (like Newton) who were both, in an earlier time. Once you have a solid underlying computational foundation for a science, then you have the luxury of this kind of specialization, as there is a framework that ties it all together. The Higgs effort is a very visible recent example.
>
> Neuroscience has nothing of the sort.
> As I point out in the article I linked to in my first posting, it was first proposed 40 years ago (by Rodolfo Llinas) that the cerebellar Purkinje cell had active dendrites (i.e. that there were non directly-synaptically associated voltage-dependent ion channels in the dendrite that governed its behavior), and 40 years of anatomically and physiologically realistic modeling has been necessary to start to understand what they do - yet many cerebellar modeling efforts today simply ignore these channels. While that again, to many on this list, may seem too far buried in the details, these voltage-dependent channels make the Purkinje cell the computational device that it is.
>
> Recently, I was asked to review a cerebellar modeling paper in which the authors actually acknowledged that their model lacked these channels because they would have been too computationally expensive to include. Sadly for those authors, I was asked to review the paper for the usual reason - that several of our papers were referenced accordingly. They likely won't make that mistake again, as, after of course complimenting them on the fact that they were honest (and knowledgeable) enough to have remarked that their Purkinje cells weren't really Purkinje cells, I had to reject the paper for that same reason.
>
> As I said, they likely won't make that mistake again - and will very likely get away with it.
>
> Imagine a comparable situation in a field (like physics) which has established a structural base for its enterprise: "We found it computationally expedient to ignore the second law of thermodynamics in our computations - sorry." BTW, I know that details are ignored all the time in physics as one deals with descriptions at different levels of scale - although even there, the field clearly would like to have a way to link across different levels of scale. I would claim, however, that that is precisely the "trick"
> that biology uses to "beat" the second law - linking all levels of scale together - another reason why you can't ignore the details in biological models if you really want to understand how biology works. (Too cryptic a comment, perhaps.)
>
> Anyway, my advice would be to consider how physics made this transition many years ago, and ask how neuroscience (and biology) can do so now. Key points, I think, are:
> - you need to produce students who are REALLY both experimental and theoretical (like Newton), and that doesn't mean programs that "import" physicists and give them enough biology to believe they know what they are doing, or programs that link experimentalists to physicists to solve their computational problems;
> [snip]

From rloosemore at susaro.com Sat Jan 25 15:33:43 2014
From: rloosemore at susaro.com (Richard Loosemore)
Date: Sat, 25 Jan 2014 15:33:43 -0500
Subject: Connectionists: Brain-like computing fanfare and big data fanfare
In-Reply-To: <7C477036-B425-485D-998E-B7BB1CF34376@uthscsa.edu>
References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1390664611.3420.15.camel@sam> <7C477036-B425-485D-998E-B7BB1CF34376@uthscsa.edu>
Message-ID: <52E41FA7.5080408@susaro.com>

I have already written at least one paper (with Trevor Harley) complaining about much the same cluster of issues as have been raised in this discussion. I will mention just two points, extracted from those papers:

1) Re. Ptolemy's Epicycles, and the bankruptcy of that approach to psychology/neuroscience.
It is possible to see this entire issue as stemming from the "complex-system-ness" of the system we are interested in. To the extent that cognition is a complex system (i.e. there is a disconnect between overall behavior and underlying mechanism), we would [insert here a long argument] expect it to be extraordinarily difficult to come up with high level theories of brain function that have a tight connection to the underlying neural machinery. That is especially relevant to those who think they can understand the brain by just simulating it, or collecting vast amounts of signal data: you're wasting your time, because all that effort will boot you nothing. There IS a way around that issue, but it involves a realignment of how we do both cog psych and neuroscience (and AI for that matter). We need more systematic exploration of very large numbers of different types of cognitive mechanism models. Treat the discovery of theories not as the work of bright individuals (one theory per individual per lifetime) but as a process that is quasi-automated, and which yields a thousand theories a day. 2) Re Brain Imaging. In the Loosemore and Harley paper I pointed out the massive impact that a slightly off-beat theory can have. If the functional units of the brain are actually "virtual" entities that are allowed to move around on a physical network of column-like units, some of the neuroscience data can be explained in a very elegant way (and BTW the same data is inconsistent with all other theories or frameworks). But....but....but: that same off-beat theory, if it were correct, would render into nonsense almost all of the default assumptions being made by those collecting the Big Neuroscience Data right now. It would do that because believe it or not all that BND is pretty much wedded to the idea that the functional units are not virtual; that the physical units actually do 'mean' something most of the time.
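[Editor's illustration.] Loosemore's point (1) above, treating theory discovery as a quasi-automated search over very large numbers of candidate mechanism models, can be caricatured in a few lines. Everything below (the model family, the "observed" reaction-time data, and the scoring rule) is invented purely for illustration and is not taken from the Loosemore and Harley papers:

```python
# Illustrative sketch only: a quasi-automated search over a space of candidate
# "cognitive mechanism models". The model family, data, and scoring rule are
# all hypothetical, standing in for a much richer automated theory search.
import itertools
import math

# Hypothetical "observed" behavior: reaction time (s) as a function of set size.
observed = [(1, 0.42), (2, 0.51), (4, 0.72), (8, 1.10)]

def candidate_model(kind, a, b):
    """A tiny parameterized family standing in for distinct mechanism types."""
    if kind == "serial":        # RT grows linearly with set size
        return lambda n: a + b * n
    elif kind == "parallel":    # RT grows logarithmically with set size
        return lambda n: a + b * math.log(n + 1)
    else:                       # "constant": set-size-independent lookup
        return lambda n: a

def score(model):
    """Sum of squared errors against the observed data (lower is better)."""
    return sum((model(n) - rt) ** 2 for n, rt in observed)

# Enumerate a coarse grid of candidate theories and rank them automatically.
kinds = ["serial", "parallel", "constant"]
grid = [round(0.1 * i, 1) for i in range(1, 8)]
candidates = list(itertools.product(kinds, grid, grid))

ranked = sorted(candidates, key=lambda c: score(candidate_model(*c)))
best = ranked[0]
print("best candidate:", best)
```

The point of the sketch is only the workflow: machine-generated candidate theories, machine-scored against data, with humans inspecting the survivors rather than hand-crafting one theory per lifetime.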
My conclusion is with James, and with Juyang Weng, who started the discussion. The Big Data/Brain-Simulation-Or-Bust approach is a gigantic boondoggle. Richard Loosemore Mathematical and Physical Science Wells College Aurora NY 13026 USA On 1/25/14, 12:05 PM, james bower wrote: > Hi Jose, > > Ah, neuroimaging - don't get me started. Not all, but a great deal of > neuroimaging has become a modern form of phrenology IMHO, distorting > not only neuroscience, but it turns out, increasingly business too. > To wit: > [.....] From tgd at eecs.oregonstate.edu Sat Jan 25 15:39:40 2014 From: tgd at eecs.oregonstate.edu (Thomas G. Dietterich) Date: Sat, 25 Jan 2014 12:39:40 -0800 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <7C477036-B425-485D-998E-B7BB1CF34376@uthscsa.edu> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1390664611.3420.15.camel@sam> <7C477036-B425-485D-998E-B7BB1CF34376@uthscsa.edu> Message-ID: <001501cf1a0d$930b6bd0$b9224370$@eecs.oregonstate.edu> I've enjoyed this provocative exchange. It reminds me very much of the discussion in the late 80s about whether to invest $3B in sequencing the human genome. The lab scientists who studied individual genes and metabolic pathways complained that this lacked any detailed guiding hypotheses and would just be a waste of money. The sequencing advocates promised that it would be revolutionary. It turned out that they were right. In fact, the technology was much more effective and much cheaper than they had dared to hope. The current BRAIN initiative is investing a similar amount of money in developing new sensing and measurement technologies.
If we look at the history of science, we see many cases in which new instruments led to giant leaps forward: telescopes, microscopes, x-ray crystallography, MRI, interferometry, rapid throughput DNA, RNA, and protein sequencing. Of course every new technology will have biases and artifacts, and part of the process of developing a new technology is understanding these and how to compensate for them. Often, this advances the science as well. Sometimes it turns out that the instruments are only very indirectly capturing the underlying processes. This is where the big data argument gets interesting. If we look at the kind of data collected on the internet, it is almost always of this type. For many ecommerce applications, it suffices to build a predictive model of customer behavior. And this is the main contribution of applied machine learning. The bigger the data sets, the more accurate these predictive models can become. Unfortunately, such models are not very useful in science, because the goal of neuroscience, for example, is not just to predict human behavior but to understand how that behavior is generated. If we only have "indirect" measurements, we must fit models where the "real" variables are latent. Making sound scientific inferences about latent variables is extremely difficult and relies on having very good models of how the latent variables produce the observed signal. Issues of identifiability and bias must be addressed, and there are also very challenging computational problems, because latent variable models typically exhibit many local optima. Regularization, the favorite tool of machine learning, usually worsens the biases and limits our ability to draw statistical inferences. If your model is fundamentally not identifiable, then it doesn't matter how big your data are. Every scientific experiment (or expenditure) involves risk, and every scientist must "place bets" on what will work.
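[Editor's illustration.] Dietterich's point about local optima in latent-variable models is easy to reproduce in a toy setting. The sketch below uses invented data and a deliberately simplified two-component Gaussian mixture (fixed unit variances, equal weights): EM started from a symmetric initialization sits forever at a collapsed stationary point, while an asymmetric start recovers the two clusters, which is one reason initialization and restarts matter:

```python
# Toy demonstration that EM for a latent-variable model (a two-component
# 1-D Gaussian mixture with fixed variances and equal weights) depends on
# initialization: a symmetric start is a stationary point and never escapes.
import math
import random

random.seed(1)
# Synthetic data: two well-separated clusters.
data = [random.gauss(-3, 1) for _ in range(100)] + [random.gauss(3, 1) for _ in range(100)]

def em(mu1, mu2, iters=50):
    """Run EM on the two means only (weights fixed at 0.5, variances at 1)."""
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        r = []
        for x in data:
            p1 = math.exp(-0.5 * (x - mu1) ** 2)
            p2 = math.exp(-0.5 * (x - mu2) ** 2)
            r.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted means.
        mu1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - sum(r))
    return mu1, mu2

def loglik(mu1, mu2):
    """Unnormalized log-likelihood (constant factors dropped)."""
    return sum(math.log(0.5 * math.exp(-0.5 * (x - mu1) ** 2)
                        + 0.5 * math.exp(-0.5 * (x - mu2) ** 2)) for x in data)

good = em(-1.0, 1.0)   # asymmetric init: finds the two clusters near -3 and +3
bad = em(0.0, 0.0)     # symmetric init: both means collapse to the data mean
print("good init ->", [round(m, 2) for m in good])
print("bad  init ->", [round(m, 2) for m in bad])
```

With equal means the responsibilities are exactly 0.5 for every point, so both M-step updates return the overall data mean and EM never moves; the "good" run ends with a much higher log-likelihood. Real latent-variable models have many such traps that are far less obvious.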
I'll place mine on improving our measurement technology, because I think it can get us closer to measuring the critical causal variables. But I agree with Jim that not enough effort goes into the unglamorous process of figuring out what it is that our instruments are actually measuring. And if we don't understand that, then our scientific inferences are likely to be wrong. -- Thomas G. Dietterich, Distinguished Professor Voice: 541-737-5559 School of Electrical Engineering FAX: 541-737-1300 and Computer Science URL: eecs.oregonstate.edu/~tgd US Mail: 1148 Kelley Engineering Center Office: 2067 Kelley Engineering Center Oregon State Univ., Corvallis, OR 97331-5501 From: Connectionists [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of james bower Sent: Saturday, January 25, 2014 9:05 AM To: jose at psychology.rutgers.edu Cc: Connectionists Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare Hi Jose, Ah, neuroimaging - don't get me started. Not all, but a great deal of neuroimaging has become a modern form of phrenology IMHO, distorting not only neuroscience, but it turns out, increasingly business too. To wit: At present I am actually much more concerned (and involved) in the use of brain imaging in what has come to be called "Neuro-marketing". Many on this list are perhaps not aware, but while we worry about the effect of over interpretation of neuroimaging data within neuroscience, the effect of this kind of data in the business world is growing and not good. My guess is that those of you in the United States will have noted the rather large and absurd marketing campaign by Lumosity and the sellers of other "brain training" games. A number of neuroscientists are actually now getting into this business. As some of you know, almost as long as I have been involved in computational neuroscience, I have also been involved in exploring the use of games for children's learning.
In the game/learning world, the misuse of neuroscience and especially brain imaging has become excessive. It wouldn't be appropriate to belabor this point on this list - although the use of neuroscience by the NN community does, in my view, often cross over into a kind of neuro-marketing. For those that are interested in the more general abuses of neuro-marketing, here is a link to the first ever session I organized in the game development world based on my work as a neurobiologist: http://www.youtube.com/watch?v=Joqmf4baaT8&list=PL1G85ERLMItAA0Bgvh0PoZ5iGc6cHEv6f&index=16 As set up for that video, you should know that in his keynote address the night before, Jesse Schell (of Schell Games and CMU) introduced his talk by saying that he was going to tell the audience what neuroscientists have figured out about human brains, going on to claim that they (we) have discovered that human brains come in two forms, goat brains and sheep brains. Of course the talk implied that the goats were in the room and the sheep were out there to be sold to. (Although, as I noted on the twitter feed at the time, there was an awful lot of "baaing" going on in the audience :-) ). Anyway, the second iteration of my campaign to try to bring some balance and sanity to neuro-marketing will take place at SxSW in Austin in March, in another session I have organized on the subject: http://schedule.sxsw.com/2014/events/event_IAP22511 If you happen to be in Austin for SxSW feel free to stop by. :-) The larger point, I suppose, is that while we debate these things within our own community, our debate and our claims have unintended consequences in larger society, with companies like Lumosity, in effect, marketing to the baby boomers the (false) idea that using "the science of neuroplasticity" and doing something as simple as playing games "designed by neuroscientists" can revert their brains to teenage form. fMRI and Neuropsychology used extensively as evidence.
Perhaps society has become so accustomed to outlandish claims and over selling that they won't hold us accountable. Or perhaps they will. Jim p.s. (always a ps) I have also recently proposed that we declare a moratorium on neuroimaging studies until we at least know how the signal is related to actual neural activity. Seems rather foolish to base so much speculation and interpretation on a signal we don't understand. Easy enough to poo poo cell spikers - but to my knowledge, there is no evidence that neural computing is performed through the generation of areas of red, yellow, green and blue. :-) On Jan 25, 2014, at 9:43 AM, Stephen José Hanson wrote: Indeed. It's like we never stopped arguing about this for the last 30 years! Maybe this is a brain principle: integrated, fossilized views of brain principles. I actually agree with John.. and disagree with you Jim... surprise surprise... seems like old times.. The most disconcerting thing about the emergence of the new new neural network field(s) is that the NIH Connectome RFPs contain language about large scale network functions... and yet when Program managers are directly asked whether fMRI or any neuroimaging methods would be compliant with the RFP.. the answer is "NO". So once the neuroscience cell spikers get done analyzing 1000 or 10000 or even a 1M neurons at a circuit level.. we still won't know why someone makes decisions about the shoes they wear; much less any other mental function! Hopefully neuroimaging will be relevant again. Just saying. Cheers. Steve PS. Hi Gary! Dijon! Stephen José Hanson Director RUBIC (Rutgers Brain Imaging Center) Professor of Psychology Member of Cognitive Science Center (NB) Member EE Graduate Program (NB) Member CS Graduate Program (NB) Rutgers University email: jose at psychology.rutgers.edu web: psychology.rutgers.edu/~jose lab: www.rumba.rutgers.edu fax: 866-434-7959 voice: 973-353-3313 (RUBIC) On Fri, 2014-01-24 at 17:31 -0600, james bower wrote: Well, well - remarkable!!!
an actual debate on connectionists - just like the old days - in fact REMARKABLY like the old days. Same issues - how "brain-like" is "brain-like", and how much hype is "brain-like" generating by itself. How much do engineers really know about neuroscience, and how much do neurobiologists really know about the brain (both groups tend to claim they know a lot - now and then). I went to the NIPS meeting this year for the first time in more than 25 years. Some of the older timers on connectionists may remember that I was one of the founding members of NIPS - and some will also remember that a few years of trying to get some kind of real interaction between neuroscience and then "neural networks" led me to give up and start, with John Miller, the CNS meetings - focused specifically on computational neuroscience. Another story - At NIPS this year, there was a very large focus on "big data" of course, with "machine learning" having largely replaced "Neural Networks" in most talk titles. I was actually a panelist (most had no idea of my early involvement with NIPS) on a workshop on big data in on-line learning (generated by Ed-X, Khan, etc.). I was interested, because for 15 years I have also been running Numedeon Inc, whose virtual world for kids, Whyville.net, was the first of the game-based immersive worlds, and is still one of the biggest and most innovative. (no MOOCs there). From the panel I made the assertion, as I had, in effect, many years ago, that if you have a big data problem - it is likely you are not taking anything resembling a "brain-like" approach to solving it.
The version almost 30 years ago, when everyone was convinced that the relatively simple Hopfield Network could solve all kinds of hard problems, was my assertion that, in fact, simple "Neural Networks" or simple Neural Network learning rules were unlikely to work very well, because, almost certainly, you have to build a great deal of knowledge about the nature of the problem into all levels (including the input layer) of your network to get it to work. Now, many years later, everyone seems convinced that you can figure things out by amassing an enormous amount of data and working on it. It has been a slow revolution (may actually not even be at the revolutionary stage yet), BUT it is very likely that the nervous system (like all model-based systems) doesn't collect tons of data to figure out with feedforward processing and filtering, but instead collects the data it thinks it needs to confirm what it already believes to be true. In other words, it specifically avoids the big data problem at all cost. It is willing to suffer the consequence that occasionally (more and more recently for me), you end up talking to someone for 15 minutes before you realize that they are not the person you thought they were. An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. That might to some sound like a semantic quibble, but I assure you it is not. If you believe as I do, that the brain solves very hard problems, in very sophisticated ways, that involve, in some sense, the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point.
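[Editor's illustration.] Bower's claim that a model-based system "collects the data it thinks it needs to confirm what it already believes," rather than filtering a flood of input, can be sketched with a toy active-sampling loop. This is an invented illustration, not anything from Bower's models; the drift model, threshold, and noise level are arbitrary:

```python
# Toy contrast with the "big data" approach: an observer with an internal
# forward model predicts the world and queries its sensor only when its
# predictive uncertainty grows too large -- i.e., to confirm an expectation.
import random

random.seed(2)

def world(t):
    """Hidden 'world' state: a slowly drifting quantity."""
    return 0.1 * t

belief, uncertainty = 0.0, 0.0
samples_taken = 0
THRESHOLD = 0.5

for t in range(100):
    belief += 0.1          # internal model of the drift (here, a correct one)
    uncertainty += 0.05    # confidence decays without confirmation
    if uncertainty > THRESHOLD:
        # Only now query the (slightly noisy) sensor, then reset confidence.
        belief = world(t + 1) + random.gauss(0, 0.01)
        uncertainty = 0.0
        samples_taken += 1

print(f"sensor queries: {samples_taken} of 100 time steps")
print(f"final belief {belief:.2f} vs world {world(100):.2f}")
```

The observer tracks the world accurately while sampling it on only a small fraction of time steps; with a worse internal model it would have to sample far more often. The point is only the inversion of the data flow: prediction first, data second.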
What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention experimentally and in our models to the actual detailed anatomy and physiology of the brain's circuits and cells. I saw none of that at NIPS - and in fact, I see less and less of that at the CNS meeting as well. All too easy to simplify, pontificate, and sell. So, I sympathize with Juyang Weng's frustration. If there is any better evidence that we are still in the dark, it is that we are still having the same debate 30 years later, with the same ruffled feathers, the same bold assertions (mine included) and the same seeming lack of progress. If anyone is interested, here is a chapter I recently wrote for the book I edited on "20 years of progress in computational neuroscience" (Springer), on the last 40 years of trying to understand the workings of a single neuron (the cerebellar Purkinje cell) using models: https://www.dropbox.com/s/5xxut90h65x4ifx/272602_1_En_5_DeltaPDF%20copy.pdf Perhaps some sense of how far we have yet to go. Jim Bower On Jan 24, 2014, at 4:00 PM, Ralph Etienne-Cummings wrote: Hey, I am happy when our taxpayer money, of which I contribute way more than I get back, funds any science in all branches of the government. Neuromorphic and brain-like computing is on the rise ... Let's please not shoot ourselves in the foot with in-fighting!! Thanks, Ralph's Android On Jan 24, 2014 4:13 PM, "Juyang Weng" wrote: Yes, Gary, you are correct politically, not to upset the "emperor" since he is always right and he never falls behind the literature. But then no clear message can ever get across. Falling behind the literature is still the fact. More, the entire research community that does brain research falls badly behind the literature of necessary disciplines. The current U.S. infrastructure of this research community does not fit at all the brain subject it studies! This is not a joking matter. We need to wake up, please.
Azriel Rosenfeld criticized the entire computer vision field in his invited talk at CVPR during the early 1980s: "just doing business as usual" and "more or less the same". However, the entire computer vision field still has not woken up after 30 years! As another example, I respect your colleague Terry Sejnowski, but I must openly say that I object to his "we need more data" as the key message for the U.S. BRAIN Project. This is another example of "just doing business as usual" and so everybody will not be against you. Several major disciplines are closely related to the brain, but the scientific community is still very much fragmented, not willing to wake up. Some of our government officials only say superficial words like "Big Data" because we like to hear them. This cost is too high for our taxpayers. -John On 1/24/14 2:19 PM, Gary Cottrell wrote: Hi John - It's great that you have an over-arching theory, but if you want people to read it, it would be better not to disrespect people in your emails. You say you respect Matthew, but then you accuse him of falling behind in the literature because he hasn't read your book. Politeness (and modesty!) will get you much farther than the tone you have taken. g. On Jan 24, 2014, at 6:27 PM, Juyang Weng wrote: Dear Matthew: My apology if my words are direct, so that people with short attention spans can quickly get my points. I do respect you. You wrote: "to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks." Your statement is absolutely not true. Your term "brain-like way" is as old as "brain-like computing". Read about the 14 neurocomputers built by 1988 in Robert Hecht-Nielsen, "Neurocomputing: picking the human brain", IEEE Spectrum 25(3), March 1988, pp. 36-41. Hardware will not solve the fundamental problems of the current severe human lack of understanding of the brain, no matter how many computers are linked together.
Neither will the current "Big Data" fanfare from NSF in the U.S. The IBM brain project has similar fundamental flaws, and the IBM team lacks key experts. Some of the NSF managers have been turning blind eyes to breakthrough work on brain modeling for over a decade, but they want to pour more taxpayers' money into its "Big Data" fanfare and other "try again" fanfares. It is a scientific shame for NSF in a developed country like the U.S. to play such shameful politics without real science, causing another large developing country like China to also echo "Big Data". "Big Data" was called "Large Data", well known in Pattern Recognition for many years. Stop playing shameful politics in science! You wrote: "Nobody is claiming a `brain-scale theory that bridges the wide gap,' or even close." To say that, you have not read the book: Natural and Artificial Intelligence. You are falling behind the literature as badly as some of our NSF project managers. With their lack of knowledge, they did not understand that the "bridge" was in print on their desks and in the literature. -John On 1/23/14 6:15 PM, Matthew Cook wrote: Dear John, I think all of us on this list are interested in brain-like computing, so I don't understand your negativity on the topic. Many of the speakers are involved in efforts to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks. I suspect the phrase "brain-like computing" is intended as an umbrella term that can cover all of these efforts. I think you are reading far more into the announcement than is there. Nobody is claiming a "brain-scale theory that bridges the wide gap," or even close. To the contrary, the announcement is very cautious, saying that intense research is "gradually increasing our understanding" and "beginning to shed light on the human brain". In other words, the research advances slowly, and we are at the beginning.
There is certainly no claim that any of the speakers has finished the job. Similarly, the announcement refers to "successful demonstration of some of the underlying principles [of the brain] in software and hardware", which implicitly acknowledges that we do not have all the principles. There is nothing like a claim that anyone has enough principles to "explain highly integrated brain functions". You are concerned that this workshop will avoid the essential issue of the wide gap between neuron-like computing and highly integrated brain functions. What makes you think it will avoid this? We are all interested in filling this gap, and the speakers (well, the ones who I know) all either work on this, or work on supporting people who work on this, or both. This looks like it will be a very nice workshop, with talks from leaders in the field on a variety of topics, and I wish I were able to attend it. Matthew On Jan 23, 2014, at 7:08 PM, Juyang Weng wrote: Dear Anders, Interesting topic about the brain! But Brain-Like Computing is misleading because neural networks have been around for at least 70 years. I quote: "We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing." What are the underlying principles? I am concerned that projects like "Brain-Like Computing" avoid essential issues: the wide gap between neuron-like computing and well-known highly integrated brain functions. Continuing this avoidance would again create bad names for "brain-like computing", just as such behaviors did for "neural networks". Henry Markram criticized IBM's brain project, which does miss essential brain principles, but has he published such principles? Will modeling individual neurons more and more precisely explain highly integrated brain functions? From what I know, definitely not, by far.
Has any of your 10 speakers published any brain-scale theory that bridges the wide gap? Are you aware of any such published theories? I am sorry for giving a CC to the list, but many on the list said that they like to hear discussions instead of just event announcements. -John On 1/13/14 12:14 PM, Anders Lansner wrote: Workshop on Brain-Like Computing, February 5-6 2014 The exciting prospect of developing brain-like information processing is one of the Deans Forum focus areas. As a means to encourage progress in this research area, a Workshop is arranged February 5th-6th 2014 on the KTH campus in Stockholm. The human brain excels over contemporary computers and robots in processing real-time unstructured information and uncertain data as well as in controlling a complex mechanical platform with multiple degrees of freedom like the human body. Intense experimental research complemented by computational and informatics efforts is gradually increasing our understanding of underlying processes and mechanisms in small animal and mammalian brains and is beginning to shed light on the human brain. We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing. This workshop assembles experts, from the partners and also other leading names in the field, to provide an overview of the state-of-the-art in theoretical, software, and hardware aspects of brain-like computing.
List of speakers Speaker Affiliation Giacomo Indiveri ETH Zürich Abigail Morrison Forschungszentrum Jülich Mark Ritter IBM Watson Research Center Guillermo Cecchi IBM Watson Research Center Anders Lansner KTH Royal Institute of Technology Ahmed Hemani KTH Royal Institute of Technology Steve Furber University of Manchester Kazuyuki Aihara University of Tokyo Karlheinz Meier Heidelberg University Andreas Schierwagen Leipzig University For signing up to the Workshop please use the registration form found at http://bit.ly/1dkuBgR You need to sign up before January 28th. Web page: http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038 ****************************************** Anders Lansner Professor in Computer Science, Computational biology School of Computer Science and Communication Stockholm University and Royal Institute of Technology (KTH) ala at kth.se, +46-70-2166122 _____ This email message contains no virus or other malicious code because avast! Antivirus is active. -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- [I am in Dijon, France on sabbatical this year.
To call me, Skype works best (gwcottrell), or dial +33 788319271 ] Gary Cottrell 858-534-6640 FAX: 858-534-7029 My schedule is here: http://tinyurl.com/b7gxpwo Computer Science and Engineering 0404 IF USING FED EX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. "Physical reality is great, but it has a lousy search function." -Matt Tong "Only connect!" -E.M. Forster "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton "There is nothing objective about objective functions" - Jay McClelland "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- Dr. James M. Bower Ph.D. 
Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -- Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. 
From axel.hutt at inria.fr Sat Jan 25 16:20:44 2014 From: axel.hutt at inria.fr (Axel Hutt) Date: Sat, 25 Jan 2014 22:20:44 +0100 (CET) Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <1390664611.3420.15.camel@sam> Message-ID: <470613370.1314179.1390684844273.JavaMail.root@inria.fr> ----- Original Message ----- > [...] > So once the neuroscience cell spikers get done analyzing 1000 or > 10000 or even a 1M neurons > at a circuit level.. we still won't know why someone makes decisions > about the shoes they wear; much > less any other mental function! Hopefully neuroimaging will be > relevant again. This brings me to a point that IMHO is very important: the right level of description of a phenomenon is hard to find. Sometimes neuroscience research appears to me completely ill-posed, e.g. studying emotion evoked by music by analyzing intracranial Local Field Potentials, or studying memory retrieval by single-cell measurements; in my opinion this is not the way to go. Why? Imagine the task of understanding the underlying mechanism of a water wave, the ups and downs of the water surface. OK. What do physicists do?
They take (at best) the Navier-Stokes equation, which captures the macroscopic properties of the fluid (compressibility etc.) and whose evolution variables are mesoscopic (and measurable) quantities. Nobody would even think of modeling single H2O molecules, their properties and their interactions, simulating 1, 2, 3, 10 or even hundreds of them, and essentially finding (probably) chaos for more than 3 molecules --> impressive work, but it does not answer the original question. Yet many neuroscientists think that this is the way to go to answer questions about macroscopic phenomena, replacing the H2O molecules by neurons, e.g. studying single cells to learn more about the visual perception of faces or their storage. IMHO this is the wrong concept. What we need is the Navier-Stokes-like equation that explains mesoscopic properties at the neural population level. Yes, of course, you may say that neural structures are too complex, but hey, fluids are complex as well. It just depends what you are looking for. If you want to describe the EEG evoked by high-frequency visual flicker where no cognition is involved, or resting-state activity, sleep, anaesthesia and the like, this Navier-Stokes-like model may give you good answers. If you are more interested in cognitive effects, then things become more complicated and the Navier-Stokes-like model is not sufficient, but it may give you a hint. In physics, this concept of starting from a well-established model for simple systems and extending it to attack more difficult problems has been very successful. A good candidate for such an equation in neuroscience may be a more realistic variant of a neural population model, like the neural mass/neural field models which already today can describe several features observed in EEG. This brings me to another point: would it not be good to invest more effort in neuroscience (as several of you have already said) to understand rudimentary mechanisms, and less in cognition?
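[Editor's note: to make the population-level idea concrete, here is a minimal sketch of a two-population rate model in the Wilson-Cowan family, one of the neural mass models Axel mentions. It is not his model; the sigmoid rate function and every parameter value are illustrative assumptions, not fitted to data.]

```python
import math

def sigmoid(x, gain=1.0, theta=4.0):
    # Illustrative population rate function; output lies in (0, 1).
    return 1.0 / (1.0 + math.exp(-gain * (x - theta)))

def wilson_cowan(P=1.25, Q=0.0, dt=0.05, t_max=200.0):
    """Forward-Euler integration of a two-population rate model:

        dE/dt = (-E + S(wEE*E - wEI*I + P)) / tau_e
        dI/dt = (-I + S(wIE*E - wII*I + Q)) / tau_i

    The state variables E and I are mesoscopic firing rates of an
    excitatory and an inhibitory population, not single cells.
    All coupling weights and time constants are illustrative.
    """
    wEE, wEI, wIE, wII = 16.0, 12.0, 15.0, 3.0
    tau_e, tau_i = 8.0, 8.0           # ms
    E, I = 0.1, 0.05                  # initial rates
    trace = []
    for _ in range(int(t_max / dt)):
        dE = (-E + sigmoid(wEE * E - wEI * I + P)) / tau_e
        dI = (-I + sigmoid(wIE * E - wII * I + Q)) / tau_i
        E += dt * dE
        I += dt * dI
        trace.append((E, I))
    return trace

trace = wilson_cowan()
```

Replacing the hand-picked weights with physiologically constrained values is exactly where the hard work lies; the point of the sketch is only that the evolution variables are population rates, playing the role of the mesoscopic fluid variables in the analogy.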
I would like to see more research on fundamental phenomena in neuroscience. For some years now, I have worked actively in anaesthesia research, and I realize that people love to investigate the effects of new mixtures of drugs, and generate much experimental data with drugs whose receptor action is very poorly understood and whose effects on the brain are far from understood. Only a few people try to describe theoretically what is going on in a neural population when anaesthetic drugs change the receptor properties in that population. This is a simple question, but still not answered. Sadly, the pressure of the pharma industry and the public interest in health (of funding agencies) is so high that these fundamental and essential questions are asked too seldom. Axel -- Dr. rer. nat. Axel Hutt, HDR INRIA CR Nancy - Grand Est Equipe NEUROSYS (Head) 615, rue du Jardin Botanique 54603 Villers-les-Nancy Cedex France http://www.loria.fr/~huttaxel From bower at uthscsa.edu Sat Jan 25 17:29:44 2014 From: bower at uthscsa.edu (james bower) Date: Sat, 25 Jan 2014 16:29:44 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <52E41FA7.5080408@susaro.com> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1390664611.3420.15.camel@sam> <7C477036-B425-485D-998E-B7BB1CF34376@uthscsa.edu> <52E41FA7.5080408@susaro.com> Message-ID: <9A015446-A75F-4D3C-B40B-A7EEDBAAFB96@uthscsa.edu> On Jan 25, 2014, at 2:33 PM, Richard Loosemore wrote: > > 1) Re. Ptolemy's Epicycles, and the bankruptcy of that approach to psychology/neuroscience.
> > Treat the discovery of theories not as the work of bright individuals (one theory per individual per lifetime) but as a process that is quasi-automated, and which yields a thousand theories a day. seems so painful when the brain is right there - begging to be considered on its own terms (I think). > > 2) Re Brain Imaging. > > In the Loosemore and Harley paper I pointed out the massive impact that a slightly off-beat theory can have. If the functional units of the brain are actually "virtual" entities that are allowed to move around on a physical network of column-like units, pause here, with a genuflect to a very bad idea (cortical columns) which has disrupted thinking about cerebral cortical mechanisms for years - and no, I don't want to start that debate again - just look at the literature for any evidence that cortex is built of a series of independent functional columnar units. It isn't there. > > My conclusion is with James, and with Juyang Weng, who started the discussion. The Big Data/Brain-Simulation-Or-Bust approach is a gigantic boondoggle. Little doubt about that - I don't know if they taped and put on the web the big talk about it at the neuroscience meeting - but take a look in case you wonder. (actually, I only lasted about 20 minutes myself). :-) Jim > > Richard Loosemore > Mathematical and Physical Science > Wells College > Aurora NY 13026 > USA > > > > > On 1/25/14, 12:05 PM, james bower wrote: >> >> Hi Jose, >> >> Ah, neuroimaging - don't get me started. Not all, but a great deal of neuroimaging has become a modern form of phrenology IMHO, distorting not only neuroscience but, it turns out, increasingly business too. To wit: >> > [.....] > > > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies.
15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower From bower at uthscsa.edu Sat Jan 25 17:41:47 2014 From: bower at uthscsa.edu (james bower) Date: Sat, 25 Jan 2014 16:41:47 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <-5739697215914102941@unknownmsgid> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <-5739697215914102941@unknownmsgid> Message-ID: <4A512DC1-F16C-4FF5-9768-E8623AA26BF5@uthscsa.edu> > > 2. Pay much more attention to the developmental and evolutionary aspects of the system.
> > We will understand the brain and its function much better if we look at it in its 'natural habitat' and in terms of its 'natural history' - an embodied, developmental, evolutionary neuroscience. agreed > > At the same time, I think we should welcome all different perspectives and investigative approaches in this great enterprise. not at all. this is a big task, and we have to be smart about it. This isn't open poetry night - :-) The issue of scale is very important - scale in biology, unlike physics, doesn't mean you can build boundaries around your levels. The enormous thermodynamic efficiency of the brain almost certainly relies on the fact that the different levels reflect each other. While I, as an electrode jockey, am more than willing to use, and actually have used, PET and fMRI and also human psychophysics to explore ideas generated by models, MOST of the cognitive people I know deny that they need to pay the least bit of attention to the basic neurobiology. (see several previous comments to that effect) - that is, after their intro neuroscience courses (sorry, a bit snide). The fact is that if you want to assume that the cerebellum, for example, is providing some kind of a timing signal, some form of memory, or some way to redirect attention, and the assumed functions can't be supported by the actual circuitry, you are out of luck. Do MRI on my BMW - turns out that the radiator lights up (bright yellow); take it out, the car stops - in fact, eventually, the local radio stations stop broadcasting as well, because I can't get the signal through my radio. Who cares about looking at the internal structure of the radiator, or its connectivity to the rest of the mechanics of the car? I can explain all the data, which fits perfectly with my prior hypothesis that the car's function is dependent on the flow of fluids (humors). I don't want to hear about the details - perfectly happy designing my next experiment on that assumption.
Going to trace the temperature and pathways of all the molecules in the antifreeze - that will make the case for sure. :-) Jim > The history of science has been cited a lot in this discussion, but one of its deepest lessons is that we never know ahead of time what approach will be most fertile. Things always seem inevitable, but only in retrospect. > > Ali > > Ali Minai > University of Cincinnati > > Sent from my iPhone > > On Jan 25, 2014, at 3:24 PM, Brad Wyble wrote: > >> I am extremely pleased to see such vibrant discussion here and my thanks to Juyang for getting the ball rolling. >> >> Jim, I appreciate your comments and I agree in large measure, but I have always disagreed with you as regards the necessity of simulating everything down to a lowest common denominator. Like you, I enjoy drawing lessons from the history of other disciplines, but unlike you, I don't think the analogy between neuroscience and physics is all that clear cut. The two fields deal with vastly different levels of complexity and therefore I don't think it should be expected that they will (or should) follow the same trajectory. >> >> To take your Purkinje cell example, I imagine that there are those who view any such model that lacks an explicit simulation of the RNA as being incomplete. To such a person, your models would also be unfit for the literature. So would we then change the standards such that no model can be published unless it includes an explicit simulation of the RNA? And why stop there? Where does it end? In my opinion, we can't make effective progress in this field if everyone is bound to the molecular level. >> >> I really think that neuroscience presents a fundamental challenge that is not present in physics, which is that progress can only occur when theory is developed at different levels of abstraction that overlap with one another.
The challenge is not how to force everyone to operate at the same level of formal specificity, but how to allow effective communication between researchers operating at different levels. >> >> In aid of meeting this challenge, I think that our field should take more inspiration from engineering, a model-based discipline that already has to work simultaneously at many different scales of complexity and abstraction. >> >> >> Best, >> Brad Wyble >> >> >> >> >> On Sat, Jan 25, 2014 at 9:59 AM, james bower wrote: >> Thanks for your comments Thomas, and good luck with your effort. >> >> I can't refrain from making the probably culturist remark that this seems a very practical approach. >> >> I have for many years suggested that those interested in advancing biology in general, and neuroscience in particular, to a 'paradigmatic' as distinct from a descriptive / folkloric science, would benefit from understanding this transition as physics went through it in the 15th and 16th centuries. In many ways, I think that is where we are today, although with perhaps the decided disadvantage that we have a lot of physicists around who, again in my view, don't really understand the origins of their own science. By that I mean that they don't understand how much of their current scientific structure, for example the relatively clean separation between 'theorists' and 'experimentalists', is dependent on the foundation built by those (like Newton) who were both in an earlier time. Once you have a solid underlying computational foundation for a science, then you have the luxury of this kind of specialization - as there is a framework that ties it all together. The Higgs effort being a very visible recent example. >> >> Neuroscience has nothing of the sort. As I point out in the article I linked to in my first posting - while it was first proposed 40 years ago (by Rodolfo Llinas) that the cerebellar Purkinje cell had active dendrites (i.e.
that there were voltage-dependent ion channels in the dendrite, not directly associated with synapses, that governed its behavior), and 40 years of anatomically and physiologically realistic modeling has been necessary to start to understand what they do - many cerebellar modeling efforts today simply ignore these channels. While that again, to many on this list, may seem too far buried in the details, these voltage-dependent channels make the Purkinje cell the computational device that it is. >> >> Recently, I was asked to review a cerebellar modeling paper in which the authors actually acknowledged that their model lacked these channels because they would have been too computationally expensive to include. Sadly for those authors, I was asked to review the paper for the usual reason - that several of our papers were referenced accordingly. They likely won't make that mistake again - as after of course complimenting them on the fact that they were honest (and knowledgeable) enough to have remarked that their Purkinje cells weren't really Purkinje cells - I had to reject the paper for the same reason. >> >> As I said, they likely won't make that mistake again - and will very likely get away with it. >> >> Imagine a comparable situation in a field (like physics) which has established a structural base for its enterprise. 'We found it computationally expedient to ignore the second law of thermodynamics in our computations - sorry'. BTW, I know that details are ignored all the time in physics as one deals with descriptions at different levels of scale - although even there, the field clearly would like to have a way to link across different levels of scale. I would claim, however, that that is precisely the 'trick' that biology uses to 'beat' the second law - linking all levels of scale together - another reason why you can't ignore the details in biological models if you really want to understand how biology works. (too cryptic a comment perhaps).
>> >> Anyway, my advice would be to consider how physics made this transition many years ago, and ask how neuroscience (and biology) can now. Key points, I think, are: >> - you need to produce students who are REALLY both experimental and theoretical (like Newton). (and that doesn't mean programs that 'import' physicists and give them enough biology to believe they know what they are doing, or programs that link experimentalists to physicists to solve their computational problems) >> - you need to base the efforts on models (and therefore mathematics) of sufficient complexity to capture the physical reality of the system being studied (as Kepler was forced to do to make the sun-centric model of the solar system even close to as accurate as the previous earth-centered system) >> - you need to build a new form of collaboration and communication that can support the complexity of those models. Fundamentally, we continue to use the publication system (short papers in a journal) that was invented as part of the transformation for physics way back then. Our laboratories are also largely isolated and non-cooperative, more appropriate for studying simpler things (like those in physics). Fortunately for us, we have a new communication tool (the Internet), although, as can be expected, we are mostly using it to reimplement old-style communication systems (e-journals) with a few twists (supplemental materials). >> - funding agencies need to insist that anyone doing theory needs to be linked to the experimental side REALLY, and vice versa. I proposed a number of years ago to NIH that they would make it into the history books if they simply required, the following Monday, that any submitted experimental grant include a REAL theoretical and computational component - Sadly, they interpreted that as meaning that P.I.s should state "an hypothesis" - which itself is remarkable, because most of the 'hypotheses'
I see stated in Federal grants are actually statements of what the P.I. believes to be true. Don't get me started on human imaging studies. arggg >> - As long as we are talking about what funding agencies can do, how about the following structure for grants - all grants need to be submitted collaboratively by two laboratories who have different theories (better, models) about how a particular part of the brain works. The grant should support a set of experiments that both parties agree distinguish between their two points of view. All results need to be published with joint authorship. In effect, that is how physics works - given its underlying structure. >> - You need to get rid, as quickly as possible, of the pressure to 'translate' neuroscience research explicitly into clinical significance - we are not even close to being able to do that intentionally - and the pressure (which is essentially a giveaway to the pharma and bio-tech industries anyway) is forcing neurobiologists to link to what is arguably the least scientific form of research there is - clinical research. It just has to be the case that society needs to understand that an investment in basic research will eventually result in all the wonderful outcomes for humans we would all like, but this distortion now is killing real neuroscience just at a critical time, when we may finally have the tools to make the transition to a paradigmatic science. >> >> As some of you know, I have been all about trying to do these things for many years - with the GENESIS project, with the original CNS graduate program at Caltech, with the CNS meetings, (even originally with NIPS) and with the first 'Methods in Computational Neuroscience' course at the Marine Biological Laboratory, whose latest incarnation in Brazil (LASCON) is actually wrapping up next week, and of course with my own research and students.
Of course, I have not been alone in this, but it is remarkable how little impact all that has had on neuroscience or neuro-engineering. I have to say, honestly, that the strong tendency seems to be for these efforts to snap back to the non-realistic, non-biologically based modeling and theoretical efforts. >> >> Perhaps Canada, in its usual practical and reasonable way (sorry), can figure out how to do this right. >> >> I hope so. >> >> Jim >> >> p.s. I have also been proposing recently that we scuttle the 'intro neuroscience' survey courses in our graduate programs (religious instruction) and instead organize an introductory course built around the history of the discovery of the origin of the action potential, which culminated in the first (and last) Nobel prize work in computational neuroscience for the Hodgkin-Huxley model. The 50th anniversary of that prize was celebrated last year, and the year before I helped to organize a meeting celebrating the 60th anniversary of the publication of the original papers (which I care much more about anyway). That meeting was, I believe, the first meeting in neuroscience ever organized around a single (mathematical) model or theory - and in organizing it, I required all the speakers to show the HH model on their first slide, indicating which term or feature of the model their work was related to. Again, a first - but possible, as this is about the only 'community model' we have. >> >> Most neuroscience textbooks today don't include that equation (second order differential) and present the HH model primarily as a description of the action potential. Most theorists regard the HH model as a prime example of how progress can be made by ignoring the biological details. Both views and interpretations are historically and practically incorrect.
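[Editor's note: for reference, the Hodgkin-Huxley model discussed here can be sketched in a few dozen lines. This is a forward-Euler illustration with the standard textbook squid-axon parameters, not code from any project mentioned in the thread; the integration scheme and step size are illustrative choices.]

```python
import math

# Standard squid-axon parameters (units: mV, ms, uA/cm^2, mS/cm^2, uF/cm^2).
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

def rates(v):
    # Voltage-dependent opening/closing rates for the m, h, n gates.
    am = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    bm = 4.0 * math.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    an = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    bn = 0.125 * math.exp(-(v + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def simulate(i_ext=10.0, dt=0.01, t_max=50.0):
    """Forward-Euler integration of C dV/dt = I_ext - I_Na - I_K - I_L."""
    v = -65.0
    am, bm, ah, bh, an, bn = rates(v)
    # Start the gates at their steady-state values for the resting potential.
    m, h, n = am / (am + bm), ah / (ah + bh), an / (an + bn)
    vs = []
    for _ in range(int(t_max / dt)):
        am, bm, ah, bh, an, bn = rates(v)
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        i_na = G_NA * m ** 3 * h * (v - E_NA)   # sodium current
        i_k = G_K * n ** 4 * (v - E_K)          # potassium current
        i_l = G_L * (v - E_L)                   # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        vs.append(v)
    return vs

vs = simulate()
```

With a sustained 10 uA/cm^2 stimulus the membrane potential crosses 0 mV and spikes repetitively; the point of the sketch is that every term the speakers argue about (the m^3 h and n^4 gating, the squid-axon conductances) appears explicitly.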
In my opinion, if you can't handle the math in the HH model, you shouldn't be a neurobiologist, and if you don't understand the profound impact of HH's knowledge and experimental study of the squid giant axon on the model, you shouldn't be a neuro-theorist either. just saying. :-) >> >> >> On Jan 25, 2014, at 6:58 AM, Thomas Trappenberg wrote: >> >>> James, enjoyed your writing. >>> >>> So, what to do? We are trying to get organized in Canada and are thinking how we fit in with your (US) and the European approaches and big money. My thought is that our advantage might be flexibility, by not having a single theme but rather a general supporting structure for theory and theory-experiment interactions. I believe the ultimate place where we want to be is to take theoretical proposals more seriously and try to make specific experiments for them; like the Higgs project. (Any other suggestions? Canadians, see http://www.neuroinfocomp.ca if you are not already on there.) >>> >>> Also, with regards to big data, I believe that one very fascinating thing about the brain is that it can function with 'small data'. >>> >>> Cheers, Thomas >>> >>> >>> >>> On 2014-01-25 12:09 AM, "james bower" wrote: >>> Ivan thanks for the response, >>> >>> Actually, the talks at the recent Neuroscience Meeting about the Brain Project either excluded modeling altogether - or declared we in the US could leave it to the Europeans. I am not in the least bit nationalistic - but collecting data without having models (rather than imaginings) to indicate what to collect is simply foolish, with many examples from history to demonstrate the foolishness. In fact, one of the primary proponents (and likely beneficiaries) of this Brain Project, who gave the big talk at Neuroscience on the project (showing lots of pretty pictures), started his talk by asking: 'what have we really learned since Cajal, except that there are also inhibitory neurons?'
Shocking, not only because Cajal actually suggested that there might be inhibitory neurons - in fact. To quote: 'Stupid is as stupid does'. >>> >>> Forbes magazine estimated that finding the Higgs Boson cost over $13BB, conservatively. The Higgs experiment was absolutely the opposite of a Big Data experiment - In fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment is all the more remarkable because it had the nearly unified support of the high-energy physics community; not that there weren't and aren't skeptics, but still, remarkable that the large majority could agree on the undertaking and effort. The reason is, of course, that there was a theory - one that dealt with the particulars and the details - not generalities. In contrast, there is a GREAT DEAL of skepticism (me included) about the Brain Project - its politics and its effects (or lack thereof) - within neuroscience. (of course, many people are burying their concerns in favor of tin cups - hoping). Neuroscience has had genome envy forever - the connectome is their response - who says it's all in the connections? (sorry, 'connectionists') Where is the theory? Hebb? You should read Hebb if you haven't - a rather remarkable treatise. But very far from a theory. >>> >>> If you want an honest answer to your question - I have not seen any good evidence so far that the approach works, and I deeply suspect that the nervous system is very much NOT like any machine we have built or designed to date. I don't believe that Newton would have accomplished what he did had he not, first, been a remarkable experimentalist, tinkering with real things. I feel the same way about Neuroscience.
Having spent almost 30 years building realistic models of its cells and networks (and also doing experiments, as described in the article I linked to), we have made some small progress - but only by avoiding abstractions and paying attention to the details. Of course, most experimentalists and even most modelers have paid little or no attention. We have a sociological and structural problem that, in my opinion, only the right kind of models can fix, coupled with a real commitment to the biology - in all its complexity. And, as the model I linked to tries to make clear - we also have to all agree to start working on common 'community models'. But like bighorn sheep, much safer to stand on your own peak and make a lot of noise. >>> >>> You can predict with great accuracy the movement of the planets in the sky using circles linked to other circles - nice and easy math, and a very adaptable model (just add more circles when you need more accuracy, and invent entities like equant points, etc). Problem is, without getting into the nasty math and reality of ellipses, you can't possibly know anything about gravity, or the origins of the solar system, or its various and eventual perturbations. >>> >>> As I have been saying for 30 years: Beware Ptolemy and curve fitting. >>> >>> The details of reality matter. >>> >>> Jim >>> >>> >>> >>> >>> >>> On Jan 24, 2014, at 7:02 PM, Ivan Raikov wrote: >>> >>>> >>>> I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling approaches. While it is true that complete and detailed understanding of neurophysiology and anatomy is at the heart of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical physics, and a lot can be learned about information representation and transmission in the brain using mathematical theories about distributed communicating processes.
As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they can also be used to understand at least some of the fundamental properties of brain structures and processes? >>>> >>>> -Ivan Raikov >>>> >>>> On Sat, Jan 25, 2014 at 8:31 AM, james bower wrote: >>>> [snip] >>>> An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. Might to some sound like a semantic quibble, but I assure you it is not. >>>> >>>> If you believe as I do, that the brain solves very hard problems, in very sophisticated ways, that involve, in some sense the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point. >>>> >>>> What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention experimentally and in our models to the actual detailed anatomy and physiology of the brains circuits and cells. >>>> >>> >>> >>> >>> >>> >>> Dr. James M. Bower Ph.D. >>> >>> Professor of Computational Neurobiology >>> >>> Barshop Institute for Longevity and Aging Studies. >>> >>> 15355 Lambda Drive >>> >>> University of Texas Health Science Center >>> >>> San Antonio, Texas 78245 >>> >>> >>> Phone: 210 382 0553 >>> >>> Email: bower at uthscsa.edu >>> >>> Web: http://www.bower-lab.org >>> >>> twitter: superid101 >>> >>> linkedin: Jim Bower >>> >>> >>> CONFIDENTIAL NOTICE: >>> >>> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. 
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or >>> >>> any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be >>> >>> immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. >>> >>> >>> >> >> >> >> >> >> Dr. James M. Bower Ph.D. >> >> Professor of Computational Neurobiology >> >> Barshop Institute for Longevity and Aging Studies. >> >> 15355 Lambda Drive >> >> University of Texas Health Science Center >> >> San Antonio, Texas 78245 >> >> >> Phone: 210 382 0553 >> >> Email: bower at uthscsa.edu >> >> Web: http://www.bower-lab.org >> >> twitter: superid101 >> >> linkedin: Jim Bower >> >> >> CONFIDENTIAL NOTICE: >> >> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. 
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or >> >> any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be >> >> immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. >> >> >> >> >> >> >> -- >> Brad Wyble >> Assistant Professor >> Psychology Department >> Penn State University >> >> http://wyblelab.com Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. 
From dg.connectionists at thesamovar.net Sat Jan 25 17:44:58 2014 From: dg.connectionists at thesamovar.net (Dan Goodman) Date: Sat, 25 Jan 2014 17:44:58 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <470613370.1314179.1390684844273.JavaMail.root@inria.fr> References: <470613370.1314179.1390684844273.JavaMail.root@inria.fr> Message-ID: <52E43E6A.2010401@thesamovar.net>

The comparison with physics is an interesting one, but we have to remember that neuroscience isn't physics. For a start, neuroscience is clearly much harder than physics in many ways. Linear and separable phenomena are much harder to find in neuroscience, so both analysing and modelling data are much more difficult. Experimentally, it is much more difficult to control for independent variables, in addition to the difficulty of working with living animals.
So although we might be able to learn things from the history of physics - and I tend to agree with Axel Hutt that one of those lessons is to use the simplest possible model rather than trying to include all the biophysical details we know to exist - I would say that while neuroscience is in its pre-paradigmatic phase (agreed with Jim Bower on this), we need to try a diverse set of methodological approaches and see what wins. In terms of funding agencies, I think the best thing they could do would be to not insist on any one methodological approach to the exclusion of others.

I also share doubts about the idea that if we collect enough data then interesting results will just pop out. On the other hand, there are some valid hypotheses about brain function that require the collection of large amounts of data. Personally, I think that we need to understand the coordinated behaviour of many neurons to understand how information is encoded and processed in the brain. At present, it's hard to look at enough neurons simultaneously to be very sure of finding this sort of coordinated activity, and this is one of the things that the HBP and BRAIN initiative are aiming at.
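Goodman's point about simultaneous recording can be put in rough numbers with a back-of-envelope sketch (the population size, assembly size, and electrode counts below are hypothetical, chosen only for illustration): if a coordinated assembly involves k specific neurons out of a population of M, the chance that a random sample of n simultaneously recorded neurons contains at least two assembly members - the minimum needed to see any coordination - follows a simple hypergeometric calculation.

```python
from math import comb

def p_at_least_two(M, k, n):
    """Probability that a random sample of n neurons, drawn from a population
    of M, contains at least two members of a specific k-neuron assembly
    (hypergeometric: 1 - P(0 members) - P(1 member))."""
    total = comb(M, n)
    p0 = comb(M - k, n) / total
    p1 = comb(k, 1) * comb(M - k, n - 1) / total
    return 1.0 - p0 - p1

# Hypothetical numbers: a 50-neuron assembly in a population of one million.
# Recording 100 neurons at once almost never captures two assembly members;
# even 10,000 simultaneous neurons capture a pair in under ~10% of samples.
for n in (100, 10_000):
    print(n, p_at_least_two(1_000_000, 50, n))
```

Under these (illustrative) numbers, the probability of even seeing a pair of assembly neurons grows steeply with the number of simultaneously recorded cells, which is the statistical case for large-scale recording that the HBP and BRAIN initiative are making.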
Dan

From bower at uthscsa.edu Sat Jan 25 17:21:46 2014 From: bower at uthscsa.edu (james bower) Date: Sat, 25 Jan 2014 16:21:46 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <001501cf1a0d$930b6bd0$b9224370$@eecs.oregonstate.edu> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1390664611.3420.15.camel@sam> <7C477036-B425-485D-998E-B7BB1CF34376@uthscsa.edu> <001501cf1a0d$930b6bd0$b9224370$@eecs.oregonstate.edu> Message-ID: <9EEE3CF0-0EB2-4AD8-91A5-C4B7EE180479@uthscsa.edu>

Interesting addition to the discussion, which has already involved a lot of analogies (something deep about how cortex works there, for which I am sure there are several cognitive theories. :-) ). Several points to make with respect to the genome sequencing analogy, however: yes, there were debates (I was part of them, actually) in which it was argued that brute force sequencing was likely to be more expensive and less productive than directed research of the type being done. The problem at the time, however, was that there was no efficient way to do pointed, directed sequencing of the genome, so the only way to do it was in bulk. I did not myself, in that case, argue against the bulk sequencing, because 1) it was the only way to get the data, and 2) very importantly, there was no question - indeed there was a clear scientific consensus - that the structure of proteins, gene regulation, etc., was related to the actual nucleotide sequences that were going to be obtained.
What I did argue, however, was that a commensurate amount of effort and money should be spent on building the underlying biologically realistic modeling efforts (in this case, of gene regulatory mechanisms) necessary to have any hope of linking the nucleotide sequences to the actual function of cells. In fact, I even edited one of the first books on the subject. So, there was never any question that the data would be useful (unlike the connectome project, where there is actually a strong lack of consensus on that point). And the only way then to get the data was with bulk sequencing. The predictions that the modelers, including me, made were that if you didn't build up the (realistic) gene regulatory modeling effort at the same time, you would be left with masses of data that would be misinterpreted and not really understood, and therefore highly subject to speculation and confusion. I believe that was the more accurate prediction. Again, the current situation with the brain initiative is very different from the situation with the genome then: 1) it is highly debatable (and there is certainly no consensus) that the function we are seeking to understand is exclusively, or maybe even mostly, in the connections; 2) relatively speaking, the technology underlying genome sequencing was pretty well understood, although, of course, the massive investment of money improved and streamlined the process (BTW, many argue that that would have happened anyway, and at less overall cost, with directed research - but the big labs wanted the money); 3) there is a mapping (although it turns out not as direct as most thought at the time) between nucleotide sequences and the structure of proteins, and it was clear what an important role proteins played (again, in general) in cellular biology. No such connection is established for neuronal connectivity.
4) There are lots of more pointed and directed alternatives that will likely advance our understanding of brain function - in other words, the field is not hung up because of a lack of a connectivity matrix. Will $3B spent on this effort result in technological advancements? Sure. Is the data to be collected essential to progress in neuroscience the way the genome data was? Very far from clear. One other point: contrary to what many feared at the time, the genome project did not bankrupt the rest of molecular biology. Now, however, despite what has been said, we are at best in a zero-sum game, more likely a negative-sum one. Is this where the money should be spent? I seriously doubt it. One thing, however, I am absolutely sure of: as in the case of the genome project, underfunding or not funding the modeling side of science will have the same effect as it did in molecular biology (and still does). A couple of other quick (I promise) points: MRI as a revolutionary tool: it is not that hard to argue that the massive amount of money spent on MRI imaging has not advanced neuroscience in proportion. In fact, a case can be made, in my view, that it has significantly set us back, as most of the cognitive neuroscientists I know believe that imaging has basically confirmed what they already knew about 'executive function', attention mechanisms, etc. to begin with. When was the last time in science that a supposedly revolutionary new technology basically confirmed 95% of what we already believed to be the case? Two possibilities: either we are very close to understanding how brains work, or we have invented the ideal technology to delude ourselves. No need, probably, to state how I would vote. Internet analytics as a way to understand human behavior: human choice is so constrained on the internet (mainly by commercial objectives) that variability in human behavior is severely reduced, making the models more predictive.
If you get a human to sit in front of a slot machine, you can predict a lot about their behavior based on accumulated data, on average. Brains on slot machines are not particularly interesting, because of the slot machine. Just a note: my company is more and more deeply in the business of figuring out how to measure complex human interactions based on behavior in our virtual worlds. Not a big data problem. Finally, one might think that I, of all people, would be happy to see a large amount of money spent on the detailed connections between neurons and on procedures like multi-single-neuron recording. In fact, I was one of the first people to record from multiple single neurons at once (in 1982) and have patents on some of the technology. However, it was precisely the experience of poring over data recorded from 32 Purkinje cells simultaneously that made it clear that we needed models to figure anything out. I hired a guy named Matt Wilson (then an engineering master's student at Wisconsin) to build what was then the first realistic network model of cerebral cortex. I have only become more convinced that data collected in the absence of models, especially in systems as complex as the nervous system, is more a hindrance than a help.

Jim

On Jan 25, 2014, at 2:39 PM, Thomas G. Dietterich wrote: > I've enjoyed this provocative exchange. It reminds me very much of the discussion in the late 80s about whether to invest $3B in sequencing the human genome. The lab scientists who studied individual genes and metabolic pathways complained that this lacked any detailed guiding hypotheses and would just be a waste of money. The sequencing advocates promised that it would be revolutionary. It turned out that they were right. In fact, the technology was much more effective and much cheaper than they had dared to hope. > > The current BRAIN initiative is investing a similar amount of money in developing new sensing and measurement technologies.
If we look at the history of science, we see many cases in which new instruments led to giant leaps forward: telescopes, microscopes, x-ray crystallography, MRI, interferometry, rapid-throughput DNA, RNA, and protein sequencing. Of course every new technology will have biases and artifacts, and part of the process of developing a new technology is understanding these and how to compensate for them. Often, this advances the science as well. > > Sometimes it turns out that the instruments are only very indirectly capturing the underlying processes. This is where the big data argument gets interesting. If we look at the kind of data collected on the internet, it is almost always of this type. For many ecommerce applications, it suffices to build a predictive model of customer behavior. And this is the main contribution of applied machine learning. The bigger the data sets, the more accurate these predictive models can become. > > Unfortunately, such models are not very useful in science, because the goal of neuroscience, for example, is not just to predict human behavior but to understand how that behavior is generated. If we only have 'indirect' measurements, we must fit models where the 'real' variables are latent. Making sound scientific inferences about latent variables is extremely difficult and relies on having very good models of how the latent variables produce the observed signal. Issues of identifiability and bias must be addressed, and there are also very challenging computational problems, because latent variable models typically exhibit many local optima. Regularization, the favorite tool of machine learning, usually worsens the biases and limits our ability to draw statistical inferences. If your model is fundamentally not identifiable, then it doesn't matter how big your data are. > > Every scientific experiment (or expenditure) involves risk, and every scientist must 'place bets' on what will work.
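Dietterich's identifiability point can be made concrete with a toy model (a sketch only; the model and numbers are illustrative, not from the thread): if the observable depends only on the product of two latent parameters, then no amount of data can separate them.

```python
import random

# Hypothetical over-parameterized model: y = a * b * x + noise.
# Only the product a*b is identifiable; a and b separately are not.
random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # a "big data" sample
ys = [6.0 * x + random.gauss(0.0, 0.1) for x in xs]    # true product a*b = 6

def sse(a, b):
    """Sum of squared residuals for the model y = a * b * x."""
    return sum((y - a * b * x) ** 2 for x, y in zip(xs, ys))

# Two very different parameter settings with the same product a*b = 6
# fit the data identically, no matter how big the sample gets.
print(sse(2.0, 3.0) == sse(6.0, 1.0))  # prints True
```

More data narrows the estimate of the product, but never separates a from b; that separation has to come from a better model or a better instrument, which is the bet being described here.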
I'll place mine on improving our measurement technology, because I think it can get us closer to measuring the critical causal variables. But I agree with Jim that not enough effort goes into the unglamorous process of figuring out what it is that our instruments are actually measuring. And if we don't understand that, then our scientific inferences are likely to be wrong. > > -- > Thomas G. Dietterich, Distinguished Professor Voice: 541-737-5559 > School of Electrical Engineering FAX: 541-737-1300 > and Computer Science URL: eecs.oregonstate.edu/~tgd > US Mail: 1148 Kelley Engineering Center > Office: 2067 Kelley Engineering Center > Oregon State Univ., Corvallis, OR 97331-5501 > > > From: Connectionists [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of james bower > Sent: Saturday, January 25, 2014 9:05 AM > To: jose at psychology.rutgers.edu > Cc: Connectionists > Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare > > Hi Jose, > > Ah, neuroimaging - don't get me started. Not all, but a great deal of neuroimaging has become a modern form of phrenology, IMHO, distorting not only neuroscience but, it turns out, increasingly business too. To wit: > > At present I am actually much more concerned (and involved) in the use of brain imaging in what has come to be called "neuro-marketing". Many on this list are perhaps not aware, but while we worry about the effect of over-interpretation of neuroimaging data within neuroscience, the effect of this kind of data in the business world is growing and not good. My guess is that those of you in the United States have noted the rather large and absurd marketing campaign by Lumosity and the sellers of other "brain training" games. A number of neuroscientists are actually now getting into this business.
> > As some of you know, almost as long as I have been involved in computational neuroscience, I have also been involved in exploring the use of games for children's learning. In the game/learning world, the misuse of neuroscience, and especially brain imaging, has become excessive. It wouldn't be appropriate to belabor this point on this list - although the use of neuroscience by the NN community does, in my view, often cross over into a kind of neuro-marketing. > > For those that are interested in the more general abuses of neuro-marketing, here is a link to the first-ever session I organized in the game development world based on my work as a neurobiologist: > > http://www.youtube.com/watch?v=Joqmf4baaT8&list=PL1G85ERLMItAA0Bgvh0PoZ5iGc6cHEv6f&index=16 > > As setup for that video, you should know that in his keynote address the night before, Jesse Schell (of Schell Games and CMU) introduced his talk by saying that he was going to tell the audience what neuroscientists have figured out about human brains, going on to claim that they (we) have discovered that human brains come in two forms: goat brains and sheep brains. Of course the talk implied that the goats were in the room and the sheep were out there to be sold to. (Although, as I noted on the Twitter feed at the time, there was an awful lot of 'baaing' going on in the audience. :-) ) > > Anyway, the second iteration of my campaign to try to bring some balance and sanity to neuro-marketing will take place at SxSW in Austin in March, in another session I have organized on the subject: > > http://schedule.sxsw.com/2014/events/event_IAP22511 > > If you happen to be in Austin for SxSW, feel free to stop by. :-) > > The larger point, I suppose, is that while we debate these things within our own community, our debate and our claims have unintended consequences in larger society, with companies like Lumosity, in effect, marketing to the baby boomers the (false) idea that using 'the science of neuroplasticity'
and doing something as simple as playing games 'designed by neuroscientists' can revert their brains to teenage form, with fMRI and neuropsychology used extensively as evidence. > > Perhaps society has become so accustomed to outlandish claims and overselling that they won't hold us accountable. > > Or perhaps they will. > > Jim > > > p.s. (always a p.s.) I have also recently proposed that we declare a moratorium on neuroimaging studies until we at least know how the signal is related to actual neural activity. It seems rather foolish to base so much speculation and interpretation on a signal we don't understand. Easy enough to pooh-pooh cell spikers - but to my knowledge, there is no evidence that neural computing is performed through the generation of areas of red, yellow, green and blue. :-) > > > > > > > > On Jan 25, 2014, at 9:43 AM, Stephen José Hanson wrote: > > > Indeed. It's like we never stopped arguing about this for the last 30 years! Maybe this is a brain principle: integrated, fossilized views of brain principles. > > I actually agree with John... and disagree with you, Jim... surprise, surprise... seems like old times. > > The most disconcerting thing about the emergence of the new new neural network field(s) > is that the NIH Connectome RFPs contain language about large-scale network functions... and > yet when program managers are directly asked whether fMRI or any neuroimaging methods > would be compliant with the RFP, the answer is "NO". > > So once the neuroscience cell spikers get done analyzing 1,000 or 10,000 or even 1M neurons > at a circuit level, we still won't know why someone makes decisions about the shoes they wear, much > less any other mental function! Hopefully neuroimaging will be relevant again. > > Just saying. > > Cheers. > > Steve > PS. Hi Gary! Dijon! > > Stephen José
Hanson > Director RUBIC (Rutgers Brain Imaging Center) > Professor of Psychology > Member of Cognitive Science Center (NB) > Member EE Graduate Program (NB) > Member CS Graduate Program (NB) > Rutgers University > > email: jose at psychology.rutgers.edu > web: psychology.rutgers.edu/~jose > lab: www.rumba.rutgers.edu > fax: 866-434-7959 > voice: 973-353-3313 (RUBIC) > > On Fri, 2014-01-24 at 17:31 -0600, james bower wrote: > > Well, well - remarkable!!! An actual debate on connectionists - just like the old days - in fact, REMARKABLY like the old days. > > > Same issues: how 'brain-like' is 'brain-like', and how much hype is 'brain-like' generating by itself? How much do engineers really know about neuroscience, and how much do neurobiologists really know about the brain (both groups tend to claim they know a lot - now and then)? > > > I went to the NIPS meeting this year for the first time in more than 25 years. Some of the older timers on connectionists may remember that I was one of the founding members of NIPS - and some will also remember that a few years of trying to get some kind of real interaction between neuroscience and then 'neural networks' led me to give up and start, with John Miller, the CNS meetings - focused specifically on computational neuroscience. Another story - > > > At NIPS this year, there was a very large focus on 'big data', of course, with "machine learning" having largely replaced "Neural Networks" in most talk titles. I was actually a panelist (most had no idea of my early involvement with NIPS) on a workshop on big data in online learning (generated by Ed-X, Khan, etc.). I was interested because, for 15 years, I have also been running Numedeon Inc., whose virtual world for kids, Whyville.net, was the first game-based immersive world, and is still one of the biggest and most innovative. (No MOOCs there.)
> > > From the panel I made the assertion, as I had, in effect, many years ago, that if you have a big data problem, it is likely you are not taking anything resembling a 'brain-like' approach to solving it. The version almost 30 years ago, when everyone was convinced that the relatively simple Hopfield network could solve all kinds of hard problems, was my assertion that, in fact, simple neural networks, or simple neural network learning rules, were unlikely to work very well, because, almost certainly, you have to build a great deal of knowledge about the nature of the problem into all levels (including the input layer) of your network to get it to work. > > > Now, many years later, everyone seems convinced that you can figure things out by amassing an enormous amount of data and working on it. > > > It has been a slow revolution (it may actually not even be at the revolutionary stage yet), BUT it is very likely that the nervous system (like all model-based systems) doesn't collect tons of data to figure things out with feedforward processing and filtering, but instead collects the data it thinks it needs to confirm what it already believes to be true. In other words, it specifically avoids the big data problem at all cost. It is willing to suffer the consequence that occasionally (more and more recently for me), you end up talking to someone for 15 minutes before you realize that they are not the person you thought they were. > > > An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. Might to some sound like a semantic quibble, but I assure you it is not.
> > > If you believe, as I do, that the brain solves very hard problems, in very sophisticated ways, that involve, in some sense, the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point. > > > What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention, experimentally and in our models, to the actual detailed anatomy and physiology of the brain's circuits and cells. > > > I saw none of that at NIPS - and in fact, I see less and less of that at the CNS meeting as well. > > > All too easy to simplify, pontificate, and sell. > > > So, I sympathize with Juyang Weng's frustration. > > > If there is any better evidence that we are still in the dark, it is that we are still having the same debate 30 years later, with the same ruffled feathers, the same bold assertions (mine included) and the same seeming lack of progress. > > > If anyone is interested, here is a chapter I recently wrote for the book I edited on 20 years of progress in computational neuroscience (Springer), on the last 40 years of trying to understand the workings of a single neuron (the cerebellar Purkinje cell) using models: https://www.dropbox.com/s/5xxut90h65x4ifx/272602_1_En_5_DeltaPDF%20copy.pdf > > > Perhaps some sense of how far we have yet to go. > > > Jim Bower > > > > > > > > > > On Jan 24, 2014, at 4:00 PM, Ralph Etienne-Cummings wrote: > > > Hey, I am happy when our taxpayer money, of which I contribute way more than I get back, funds any science in all branches of the government. > > Neuromorphic and brain-like computing is on the rise ... Let's please not shoot ourselves in the foot with in-fighting!!
> > Thanks, > Ralph's Android > > On Jan 24, 2014 4:13 PM, "Juyang Weng" wrote: > Yes, Gary, you are correct politically: do not upset the "emperor", since he is always right and he never falls behind the literature. > > But then no clear message can ever get across. Falling behind the literature is still the fact. More, the entire research community that does brain research falls badly behind the literature of the necessary disciplines. The current U.S. infrastructure of this research community does not fit the brain subject it studies at all! This is not a joking matter. We need to wake up, please. > > Azriel Rosenfeld criticized the entire computer vision field in his invited talk at CVPR during the early 1980s: "just doing business as usual" and "more or less the same". However, the entire computer vision field still has not woken up after 30 years! As another example, I respect your colleague Terry Sejnowski, but I must openly say that I object to his "we need more data" as the key message for the U.S. BRAIN Project. This is another example of "just doing business as usual", and so everybody will not be against you. > > Several major disciplines are closely related to the brain, but the scientific community is still very much fragmented, not willing to wake up. Some of our government officials only say superficial words like "Big Data" because we like to hear them. This cost is too high for our taxpayers. > > -John > > On 1/24/14 2:19 PM, Gary Cottrell wrote: > > Hi John - > > > It's great that you have an over-arching theory, but if you want people to read it, it would be better not to disrespect people in your emails. You say you respect Matthew, but then you accuse him of falling behind in the literature because he hasn't read your book. Politeness (and modesty!) will get you much farther than the tone you have taken. > > > g.
> > On Jan 24, 2014, at 6:27 PM, Juyang Weng wrote: > > Dear Matthew: > > My apology if my words are direct, so that people with short attention spans can quickly get my points. I do respect you. > > You wrote: "to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks." > > Your statement is absolutely not true. Your term "brain-like way" is as old as "brain-like computing". Read about the 14 neurocomputers built by 1988 in Robert Hecht-Nielsen, "Neurocomputing: picking the human brain", IEEE Spectrum 25(3), March 1988, pp. 36-41. Hardware will not solve the fundamental problem of our current severe lack of understanding of the brain, no matter how many computers are linked together. Neither will the current "Big Data" fanfare from the NSF in the U.S. IBM's brain project has similar fundamental flaws, and the IBM team lacks key experts. > > Some of the NSF managers have been turning a blind eye to breakthrough work on brain modeling for over a decade, but they want to waste more taxpayers' money on the "Big Data" fanfare and other "try again" fanfares. It is a scientific shame for the NSF in a developed country like the U.S. to engage in such politics without real science, causing another large developing country like China to also echo "Big Data". "Big Data" was called "Large Data", well known in Pattern Recognition for many years. Stop playing shameful politics in science! > > You wrote: "Nobody is claiming a `brain-scale theory that bridges the wide gap,' or even close." > > To say that, you have not read the book: Natural and Artificial Intelligence. You are falling behind the literature as badly as some of our NSF project managers. With their lack of knowledge, they did not understand that the "bridge" was in print, on their desks and in the literature.
> > -John > > On 1/23/14 6:15 PM, Matthew Cook wrote: > > Dear John, > > > I think all of us on this list are interested in brain-like computing, so I don't understand your negativity on the topic. > > > Many of the speakers are involved in efforts to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks. I suspect the phrase "brain-like computing" is intended as an umbrella term that can cover all of these efforts. > > > I think you are reading far more into the announcement than is there. Nobody is claiming a "brain-scale theory that bridges the wide gap," or even close. To the contrary, the announcement is very cautious, saying that intense research is "gradually increasing our understanding" and "beginning to shed light on the human brain". In other words, the research advances slowly, and we are at the beginning. There is certainly no claim that any of the speakers has finished the job. > > > Similarly, the announcement refers to "successful demonstration of some of the underlying principles [of the brain] in software and hardware", which implicitly acknowledges that we do not have all the principles. There is nothing like a claim that anyone has enough principles to "explain highly integrated brain functions". > > > You are concerned that this workshop will avoid the essential issue of the wide gap between neuron-like computing and highly integrated brain functions. What makes you think it will avoid this? We are all interested in filling this gap, and the speakers (well, the ones who I know) all either work on this, or work on supporting people who work on this, or both. > > > This looks like it will be a very nice workshop, with talks from leaders in the field on a variety of topics, and I wish I were able to attend it. > > > Matthew > > > > On Jan 23, 2014, at 7:08 PM, Juyang Weng wrote: > > Dear Anders, > > Interesting topic about the brain! 
But "Brain-Like Computing" is misleading, because neural networks have been around for at least 70 years. > > I quote: "We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing." > > What are the underlying principles? I am concerned that projects like "Brain-Like Computing" avoid essential issues: > the wide gap between neuron-like computing and well-known, highly integrated brain functions. > Continuing this avoidance would again create bad names for "brain-like computing", just as such behaviors did for "neural networks". > > Henry Markram criticized IBM's brain project, which does miss essential brain principles, but has he published such principles? > Will modeling individual neurons more and more precisely explain highly integrated brain functions? From what I know, definitely not - far from it. > > Has any of your 10 speakers published any brain-scale theory that bridges the wide gap? Are you aware of any such published theories? > > I am sorry for giving a CC to the list, but many on the list said that they like to hear discussions instead of just event announcements. > > -John > > > On 1/13/14 12:14 PM, Anders Lansner wrote: > > Workshop on Brain-Like Computing, February 5-6 2014 > > The exciting prospect of developing brain-like information processing is one of the Deans Forum focus areas. > As a means to encourage progress in this research area, a workshop is arranged February 5th-6th, 2014, on the KTH campus in Stockholm. > > The human brain excels over contemporary computers and robots in processing real-time unstructured information and uncertain data, as well as in controlling a complex mechanical platform with multiple degrees of freedom like the human body.
Intense experimental research, complemented by computational and informatics efforts, is gradually increasing our understanding of underlying processes and mechanisms in small animal and mammalian brains and is beginning to shed light on the human brain. We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing. > > This workshop assembles experts, from the partners and also other leading names in the field, to provide an overview of the state of the art in theoretical, software, and hardware aspects of brain-like computing. > > List of speakers (speaker - affiliation):
> Giacomo Indiveri - ETH Zürich
> Abigail Morrison - Forschungszentrum Jülich
> Mark Ritter - IBM Watson Research Center
> Guillermo Cecchi - IBM Watson Research Center
> Anders Lansner - KTH Royal Institute of Technology
> Ahmed Hemani - KTH Royal Institute of Technology
> Steve Furber - University of Manchester
> Kazuyuki Aihara - University of Tokyo
> Karlheinz Meier - Heidelberg University
> Andreas Schierwagen - Leipzig University
> > For signing up to the Workshop please use the registration form found at http://bit.ly/1dkuBgR > > You need to sign up before January 28th. > > Web page: http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038 > > ****************************************** > > Anders Lansner > > Professor in Computer Science, Computational Biology > > School of Computer Science and Communication > > Stockholm University and Royal Institute of Technology (KTH) > > ala at kth.se, +46-70-2166122
> > > > > -- > -- > Juyang (John) Weng, Professor > Department of Computer Science and Engineering > MSU Cognitive Science Program and MSU Neuroscience Program > 428 S Shaw Ln Rm 3115 > Michigan State University > East Lansing, MI 48824 USA > Tel: 517-353-4388 > Fax: 517-432-1061 > Email: weng at cse.msu.edu > URL: http://www.cse.msu.edu/~weng/ > ---------------------------------------------- > > > [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] > > > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > > > My schedule is here: http://tinyurl.com/b7gxpwo > > Computer Science and Engineering 0404 > IF USING FED EX INCLUDE THE FOLLOWING LINE: > CSE Building, Room 4130 > University of California San Diego > 9500 Gilman Drive # 0404 > La Jolla, Ca. 92093-0404 > > > Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln > > > "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama > > > "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. > > "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. > > "Physical reality is great, but it has a lousy search function." -Matt Tong > > "Only connect!" -E.M.
Forster > > "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton > > > "There is nothing objective about objective functions" - Jay McClelland > > "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." > -David Mermin > > Email: gary at ucsd.edu > Home page: http://www-cse.ucsd.edu/~gary/ > > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > > CONFIDENTIAL NOTICE: > > The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient.
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or > > any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be > > immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately.
From andrew.coward at anu.edu.au Sat Jan 25 16:08:46 2014 From: andrew.coward at anu.edu.au (Andrew Coward) Date: Sat, 25 Jan 2014 21:08:46 +0000 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> , Message-ID: <0faa1ff768e8432eb19da68bd916fe1b@HKNPR06MB387.apcprd06.prod.outlook.com> I strongly agree with the position that it is critical to find models that work on many different levels of complexity and abstraction. In my view, how we understand complex electronic systems is a much better model for understanding the brain than physics. This is not to say that there is any direct resemblance between brains and electronic systems; rather, there are analogies between the problem of understanding how billions of transistors implement the user features of an electronic system and the ways in which trillions of neurons implement cognitive functions. In electronic systems, the use of common information models (behavioural instruction and data read/write) makes it possible to describe user features in a way that can be mapped by the same information models through many levels of detail, for example down to the combinations and sequences of machine code instructions that implement the high-level user instructions. It is the use of these common information paradigms that makes it possible to design and modify electronic systems. I have argued elsewhere that natural selection pressures have resulted in some analogous common information models in the brain, although in this case they are the qualitatively different behavioural recommendation and condition definition/detection.
As described in my recent book, these information models make it possible to create descriptions of high-level cognitive phenomena that can be mapped through anatomy, neuron physiology and neurochemistry to make integrated understanding possible. Andrew Coward ________________________________ From: Connectionists on behalf of Brad Wyble Sent: 25 January 2014 12:00 To: james bower Cc: Connectionists Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare I am extremely pleased to see such vibrant discussion here, and my thanks to Juyang for getting the ball rolling. Jim, I appreciate your comments and I agree in large measure, but I have always disagreed with you as regards the necessity of simulating everything down to a lowest common denominator. Like you, I enjoy drawing lessons from the history of other disciplines, but unlike you, I don't think the analogy between neuroscience and physics is all that clear cut. The two fields deal with vastly different levels of complexity, and therefore I don't think it should be expected that they will (or should) follow the same trajectory. To take your Purkinje cell example, I imagine that there are those who view any such model that lacks an explicit simulation of the RNA as being incomplete. To such a person, your models would also be unfit for the literature. So would we then change the standards such that no model can be published unless it includes an explicit simulation of the RNA? And why stop there? Where does it end? In my opinion, we can't make effective progress in this field if everyone is bound to the molecular level. I really think that neuroscience presents a fundamental challenge that is not present in physics, which is that progress can only occur when theory is developed at different levels of abstraction that overlap with one another.
The challenge is not how to force everyone to operate at the same level of formal specificity, but how to allow effective communication between researchers operating at different levels. In aid of meeting this challenge, I think that our field should take more inspiration from engineering, a model-based discipline that already has to work simultaneously at many different scales of complexity and abstraction. Best, Brad Wyble On Sat, Jan 25, 2014 at 9:59 AM, james bower > wrote: Thanks for your comments Thomas, and good luck with your effort. I can't refrain myself from making the probably culturist remark that this seems a very practical approach. I have for many years suggested that those interested in advancing biology in general and neuroscience in particular to a 'paradigmatic' as distinct from a descriptive / folkloric science would benefit from understanding this transition as physics went through it in the 16th and 17th centuries. In many ways, I think that is where we are today, although with perhaps the decided disadvantage that we have a lot of physicists around who, again in my view, don't really understand the origins of their own science. By that, I mean that they don't understand how much of their current scientific structure, for example the relatively clean separation between 'theorists' and 'experimentalists', is dependent on the foundation built by those (like Newton) who were both in an earlier time. Once you have a solid underlying computational foundation for a science, then you have the luxury of this kind of specialization - as there is a framework that ties it all together. The Higgs effort being a very visible recent example. Neuroscience has nothing of the sort. As I point out in the article I linked to in my first posting - while it was first proposed 40 years ago (by Rodolfo Llinas) that the cerebellar Purkinje cell had active dendrites (i.e.
that there were voltage-dependent ion channels in the dendrite, not directly associated with synapses, that governed its behavior), and 40 years of anatomically and physiologically realistic modeling has been necessary to start to understand what they do - many cerebellar modeling efforts today simply ignore these channels. While that again, to many on this list, may seem too far buried in the details, these voltage-dependent channels make the Purkinje cell the computational device that it is. Recently, I was asked to review a cerebellar modeling paper in which the authors actually acknowledged that their model lacked these channels because they would have been too computationally expensive to include. Sadly for those authors, I was asked to review the paper for the usual reason - that several of our papers were referenced accordingly. They likely won't make that mistake again - as after of course complimenting them on the fact that they were honest (and knowledgeable) enough to have remarked on the fact that their Purkinje cells weren't really Purkinje cells - I had to reject the paper for that same reason. As I said, they likely won't make that mistake again - next time they won't mention the omission, and will very likely get away with it. Imagine a comparable situation in a field (like physics) which has established a structural base for its enterprise: 'We found it computationally expedient to ignore the second law of thermodynamics in our computations - sorry'. BTW, I know that details are ignored all the time in physics as one deals with descriptions at different levels of scale - although even there, the field clearly would like to have a way to link across different levels of scale. I would claim, however, that that is precisely the 'trick' that biology uses to 'beat' the second law - linking all levels of scale together - another reason why you can't ignore the details in biological models if you really want to understand how biology works. (too cryptic a comment perhaps).
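[Editor's note: to make the point about omitted voltage-gated dendritic channels concrete, here is a minimal single-compartment sketch. It is an illustration, not part of the thread, and the parameters are invented round numbers, not fitted Purkinje-cell values. With a simple voltage-gated inward conductance included, a brief current pulse latches the compartment into a sustained plateau; delete that one conductance ("too computationally expensive") and the same stimulus just decays away - a qualitatively different computational device.]

```python
import math

def simulate(g_ca, t_stop=60.0, dt=0.1):
    """Single compartment: leak plus an optional voltage-gated Ca-like current.
    Units are nominal (mV, ms); all parameters are illustrative only."""
    C, g_l, e_l, e_ca = 1.0, 1.0, -70.0, 50.0
    v = e_l
    for step in range(int(t_stop / dt)):
        t = step * dt
        i_inj = 30.0 if 10.0 <= t < 15.0 else 0.0          # brief current pulse
        # instantaneous sigmoidal activation of the "dendritic" conductance
        m_inf = 1.0 / (1.0 + math.exp(-(v + 40.0) / 5.0))
        dv = (-g_l * (v - e_l) + g_ca * m_inf * (e_ca - v) + i_inj) / C
        v += dt * dv                                       # forward Euler step
    return v

v_passive = simulate(g_ca=0.0)   # channel omitted: pulse decays back to rest
v_active  = simulate(g_ca=1.0)   # channel included: pulse triggers a plateau
```

The active model is bistable: the same 5 ms pulse that leaves the passive model back near -70 mV leaves the active one sitting on a depolarized plateau long after the input ends.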
Anyway, my advice would be to consider how physics made this transition many years ago, and ask the question how neuroscience (and biology) can now. Key points I think are: - you need to produce students who are REALLY both experimental and theoretical (like Newton). (and that doesn't mean programs that 'import' physicists and give them enough biology to believe they know what they are doing, or programs that link experimentalists to physicists to solve their computational problems) - you need to base the efforts on models (and therefore mathematics) of sufficient complexity to capture the physical reality of the system being studied (as Kepler was forced to do to make the sun-centric model of the solar system even close to as accurate as the previous earth-centered system) - you need to build a new form of collaboration and communication that can support the complexity of those models. Fundamentally, we continue to use the publication system (short papers in a journal) that was invented as part of the transformation for physics way back then. Our laboratories are also largely isolated and non-cooperative, more appropriate for studying simpler things (like those in physics). Fortunately for us, we have a new communication tool (the Internet) although, as can be expected, we are mostly using it to reimplement old-style communication systems (e-journals) with a few twists (supplemental materials). - funding agencies need to insist that anyone doing theory needs to be linked to the experimental side REALLY, and vice versa. I proposed a number of years ago to NIH that they would make it into the history books if they simply required, the following Monday, that any submitted experimental grant include a REAL theoretical and computational component - Sadly, they interpreted that as meaning that P.I.s should state "an hypothesis" - which itself is remarkable, because most of the 'hypotheses' I see stated in Federal grants are actually statements of what the P.I.
believes to be true. Don't get me started on human imaging studies. arggg - As long as we are talking about what funding agencies can do, how about the following structure for grants: all grants need to be submitted collaboratively by two laboratories who have different theories (or better, models) about how a particular part of the brain works. The grant should support a set of experiments that both parties agree distinguish between their two points of view. All results need to be published with joint authorship. In effect that is how physics works - given its underlying structure. - You need to get rid, as quickly as possible, of the pressure to 'translate' neuroscience research explicitly into clinical significance - we are not even close to being able to do that intentionally - and the pressure (which is essentially a giveaway to the pharma and bio-tech industries anyway) is forcing neurobiologists to link to what is arguably the least scientific form of research there is - clinical research. It just has to be the case that society needs to understand that an investment in basic research will eventually result in all the wonderful outcomes for humans we would all like, but this distortion now is killing real neuroscience just at a critical time, when we may finally have the tools to make the transition to a paradigmatic science. As some of you know, I have been all about trying to do these things for many years - with the GENESIS project, with the original CNS graduate program at Caltech, with the CNS meetings, (even originally with NIPS) and with the first "Methods in Computational Neuroscience Course" at the Marine Biological Laboratory, whose latest incarnation in Brazil (LASCON) is actually wrapping up next week, and of course with my own research and students. Of course, I have not been alone in this, but it is remarkable how little impact all that has had on neuroscience or neuro-engineering.
I have to say, honestly, that the strong tendency seems to be for these efforts to snap back to the non-realistic, non-biologically based modeling and theoretical efforts. Perhaps Canada, in its usual practical and reasonable way (sorry), can figure out how to do this right. I hope so. Jim p.s. I have also been proposing recently that we scuttle the 'intro neuroscience' survey courses in our graduate programs (religious instruction) and instead organize an introductory course built around the history of the discovery of the origin of the action potential, which culminated in the first (and last) Nobel prize work in computational neuroscience for the Hodgkin-Huxley model. The 50th anniversary of that prize was celebrated last year, and the year before I helped to organize a meeting celebrating the 60th anniversary of the publication of the original papers (which I care much more about anyway). That meeting was, I believe, the first meeting in neuroscience ever organized around a single (mathematical) model or theory - and in organizing it, I required all the speakers to show the HH model on their first slide, indicating which term or feature of the model their work was related to. Again, a first - but possible, as this is about the only 'community model' we have. Most neuroscience textbooks today don't include that equation (a second-order differential equation, in its full cable form) and present the HH model primarily as a description of the action potential. Most theorists regard the HH model as a prime example of how progress can be made by ignoring the biological details. Both views and interpretations are historically and practically incorrect. In my opinion, if you can't handle the math in the HH model, you shouldn't be a neurobiologist, and if you don't understand the profound impact of HH's knowledge and experimental study of the squid giant axon on the model, you shouldn't be a neuro-theorist either. just saying.
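[Editor's note: for readers who have not seen "that equation" written out, here is the space-clamped Hodgkin-Huxley (1952) squid-axon model in a compact forward-Euler sketch - an illustration added here, not part of the thread. The conductances and rate functions are the standard textbook values, shifted so that rest sits near -65 mV.]

```python
import math

def hh_step(v, m, h, n, i_inj, dt):
    """One forward-Euler step of the Hodgkin-Huxley membrane equations.
    v in mV, t in ms, currents in uA/cm^2, C = 1 uF/cm^2."""
    # voltage-dependent rate functions (ms^-1), classic HH conventions
    an = 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
    bn = 0.125 * math.exp(-(v + 65) / 80)
    am = 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
    bm = 4.0 * math.exp(-(v + 65) / 18)
    ah = 0.07 * math.exp(-(v + 65) / 20)
    bh = 1.0 / (1 + math.exp(-(v + 35) / 10))
    g_na, g_k, g_l = 120.0, 36.0, 0.3        # mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.4      # mV
    i_na = g_na * m**3 * h * (v - e_na)      # transient sodium current
    i_k = g_k * n**4 * (v - e_k)             # delayed-rectifier potassium
    i_l = g_l * (v - e_l)                    # leak
    dv = i_inj - i_na - i_k - i_l
    return (v + dt * dv,
            m + dt * (am * (1 - m) - bm * m),
            h + dt * (ah * (1 - h) - bh * h),
            n + dt * (an * (1 - n) - bn * n))

# 50 ms of sustained 10 uA/cm^2 drive produces repetitive firing
v, m, h, n = -65.0, 0.05, 0.6, 0.32          # approximate resting state
dt, spikes, above = 0.01, 0, False
for step in range(int(50 / dt)):
    v, m, h, n = hh_step(v, m, h, n, i_inj=10.0, dt=dt)
    if v > 0 and not above:
        spikes += 1                          # count upward 0 mV crossings
    above = v > 0
```

Every term in the model is tied to a measured property of the squid giant axon, which is exactly Bower's point about the relationship between the biology and the mathematics.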
:-) On Jan 25, 2014, at 6:58 AM, Thomas Trappenberg > wrote: James, enjoyed your writing. So, what to do? We are trying to get organized in Canada and are thinking how we fit in with your (US) and the European approaches and big money. My thought is that our advantage might be flexibility, by not having a single theme but rather a general supporting structure for theory and theory-experiment interactions. I believe the ultimate place where we want to be is to take theoretical proposals more seriously and try to make specific experiments for them, like the Higgs project. (Any other suggestions? Canadians, see http://www.neuroinfocomp.ca if you are not already on there.) Also, with regards to big data, I believe that one very fascinating thing about the brain is that it can function with 'small data'. Cheers, Thomas On 2014-01-25 12:09 AM, "james bower" > wrote: Ivan thanks for the response, Actually, the talks at the recent Neuroscience Meeting about the Brain Project either excluded modeling altogether - or declared we in the US could leave it to the Europeans. I am not in the least bit nationalistic - but collecting data without having models (rather than imaginings) to indicate what to collect is simply foolish, with many examples from history to demonstrate the foolishness. In fact, one of the primary proponents (and likely beneficiaries) of this Brain Project, who gave the big talk at Neuroscience on the project (showing lots of pretty pictures), started his talk by asking: 'what have we really learned since Cajal, except that there are also inhibitory neurons?' Shocking, not only because Cajal actually suggested that there might be inhibitory neurons - in fact, he did. To quote: 'Stupid is as stupid does'. Forbes magazine estimated that finding the Higgs boson cost over $13BB, conservatively.
The Higgs experiment was absolutely the opposite of a Big Data experiment - in fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment is all the more remarkable because it had the nearly unified support of the high energy physics community; not that there weren't and aren't skeptics, but still, remarkable that the large majority could agree on the undertaking and effort. The reason is, of course, that there was a theory - one that dealt with the particulars and the details - not generalities. In contrast, there is a GREAT DEAL of skepticism (me included) about the Brain Project - its politics and its effects (or lack thereof) - within neuroscience. (of course, many people are burying their concerns in favor of tin cups - hoping). Neuroscience has had genome envy forever - the connectome is its response - who says it's all in the connections? (sorry 'connectionists') Where is the theory? Hebb? You should read Hebb if you haven't - a rather remarkable treatise. But very far from a theory. If you want an honest answer to your question - I have not seen any good evidence so far that the approach works, and I deeply suspect that the nervous system is very much NOT like any machine we have built or designed to date. I don't believe that Newton would have accomplished what he did had he not first been a remarkable experimentalist, tinkering with real things. I feel the same way about neuroscience. Having spent almost 30 years building realistic models of its cells and networks (and also doing experiments, as described in the article I linked to), we have made some small progress - but only by avoiding abstractions and paying attention to the details. Of course, most experimentalists and even most modelers have paid little or no attention.
We have a sociological and structural problem that, in my opinion, only the right kind of models can fix, coupled with a real commitment to the biology - in all its complexity. And, as the model I linked to tries to make clear, we also have to all agree to start working on common 'community models'. But like bighorn sheep, it is much safer to stand on your own peak and make a lot of noise. You can predict with great accuracy the movement of the planets in the sky using circles linked to other circles - nice and easy math, and a very adaptable model (just add more circles when you need more accuracy, and invent entities like equant points, etc). The problem is, without getting into the nasty math and reality of ellipses, you can't possibly know anything about gravity, or the origins of the solar system, or its various and eventual perturbations. As I have been saying for 30 years: Beware Ptolemy and curve fitting. The details of reality matter. Jim On Jan 24, 2014, at 7:02 PM, Ivan Raikov > wrote: I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling approaches. While it is true that complete and detailed understanding of neurophysiology and anatomy is at the heart of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical physics, and a lot can be learned about information representation and transmission in the brain using mathematical theories about distributed communicating processes. As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they can also be used to understand at least some of the fundamental properties of brain structures and processes?
-Ivan Raikov On Sat, Jan 25, 2014 at 8:31 AM, james bower > wrote: [snip] An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. Might to some sound like a semantic quibble, but I assure you it is not. If you believe as I do, that the brain solves very hard problems, in very sophisticated ways, that involve, in some sense, the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point. What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention experimentally and in our models to the actual detailed anatomy and physiology of the brain's circuits and cells. -- Brad Wyble Assistant Professor Psychology Department Penn State University http://wyblelab.com From bower at uthscsa.edu Sat Jan 25 19:09:04 2014 From: bower at uthscsa.edu (james bower) Date: Sat, 25 Jan 2014 18:09:04 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <52E43E6A.2010401@thesamovar.net> References: <470613370.1314179.1390684844273.JavaMail.root@inria.fr> <52E43E6A.2010401@thesamovar.net> Message-ID: <3861BC25-6728-46A1-9A1B-68ABF1B78AD9@uthscsa.edu> About to sign off here - as I have probably already taken too much bandwidth (although it has been a long time). But just for final clarity on the point about physics - I am not claiming that the actual tools, etc., developed by physics mostly to study non-biological and mostly 'simpler' systems (for example, systems where the elements (unlike neurons) aren't 'individualized' - and therefore can be subjected to a certain amount of averaging (i.e. thermodynamics)) will apply. But I am suggesting (albeit in an oversimplified way) that the transition from a largely folkloric, philosophically (religiously) driven style of physics to the physics of today was accomplished in the 16th and 17th centuries by the rejection of the curve-fitting, 'simplified' and self-reflective Ptolemaic model of the solar system (not actually, it turns out, for that reason, but because the Ptolemaic model had become too complex and impure - the famous equant point). Instead, Newton, Kepler, etc., further developed a model that actually valued the physical structure of that system, independent of the philosophical, self-reflecting previous set of assumptions.
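[Editor's note: the Ptolemy point above can be made quantitative. A chain of circles riding on circles is exactly a truncated Fourier series, so it can fit a Kepler orbit to any desired accuracy while containing no gravity at all - curve fitting without mechanism. A small sketch, added as an illustration (not part of the thread; the eccentricity and sample counts are arbitrary):]

```python
import cmath
import math

def kepler_orbit(n=256, e=0.2):
    """Positions on a Keplerian ellipse (semi-major axis 1, focus at origin),
    sampled at uniform time steps by solving Kepler's equation E - e sin E = M."""
    pts = []
    for k in range(n):
        M = 2 * math.pi * k / n          # mean anomaly, uniform in time
        E = M
        for _ in range(20):              # fixed-point iteration converges for e < 1
            E = M + e * math.sin(E)
        pts.append(complex(math.cos(E) - e, math.sqrt(1 - e * e) * math.sin(E)))
    return pts

def epicycle_fit(pts, n_circles):
    """Fit the orbit with a deferent plus epicycles: keep only the n_circles
    lowest-frequency DFT terms (each term = one uniformly rotating circle),
    and report the worst-case position error of the reconstruction."""
    n = len(pts)
    freqs = sorted(range(-n // 2, n // 2), key=abs)[:n_circles]
    coef = {f: sum(p * cmath.exp(-2j * math.pi * f * k / n)
                   for k, p in enumerate(pts)) / n for f in freqs}
    recon = [sum(c * cmath.exp(2j * math.pi * f * k / n)
                 for f, c in coef.items()) for k in range(n)]
    return max(abs(p - q) for p, q in zip(pts, recon))

orbit = kepler_orbit()
err3 = epicycle_fit(orbit, 3)   # deferent + two epicycles
err8 = epicycle_fit(orbit, 8)   # add more circles, fit gets better
```

Adding circles always improves the fit, which is precisely why the Ptolemaic scheme survived so long - and why a good fit, by itself, tells you nothing about ellipses, gravity, or the system's real structure.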
I know, I know that this is an oversimplified description of what happened, but it is very likely that Newton's early (age 19) discovery of what approximated the inverse-square law in the 'realistic model' he had constructed of the earth-moon system (where it was no problem and pretty clearly evident that the moon orbited the earth in a regular way) led in later years to his development of mechanics - which clearly provided an important 'community model' of the sort we completely lack in neuroscience and, it seems to me, continue to try to avoid. I have offered for years to buy the beer at the CNS meeting if all the laboratories describing yet another model of the hippocampus or the visual cortex would get together to agree on a single model they would all work on. No takers yet. The paper I linked to in my first post describes how that has happened for the cerebellar Purkinje cell, because of GENESIS and because we didn't block others from using the model, even to criticize us. However, when I sent that paper recently to a computational neuroscientist I heard was getting into Purkinje cell modeling, he wrote back to say he was developing his own model, thank you very much. The proposal that we all be free to build our own models - and everyone is welcome - is EXACTLY the wrong direction. We need more than calculus - and although I understand their attractiveness, believe me, models that can be solved in closed form are not likely to be particularly useful in biology, where the averaging won't work in the same way. The relationship between scales is different, lots of things are different - which means that a lot of the tools will have to be different too. And I even agree that some of the tools developed by engineering, where one is actually trying to make things that work, might end up being useful, or even perhaps more useful.
However, the transition to paradigmatic science, I believe, will critically depend on the acceptance of community models (they are the "paradigm"), and the models with the most persuasive force, as well as the greatest likelihood of revealing unexpected functional relationships, are ones that FIRST account for the structure of the brain, and SECOND are used to explore function (rather than, as is usual, the other way around). As described in the paper I posted, that is exactly what has happened through long hard work (since 1989) using the Purkinje cell model. In the end, unless you are a dualist (which I suspect many actually are, in effect), brain computation involves nothing beyond the nervous system and its physical and physiological structure. Therefore, that structure will be the ultimate reference for how things really work, no matter what level of scale you seek to describe. From 30 years of effort, I believe even more firmly now than I did back then that, like Newton and his friends, this is where we should start - figuring out the principles and behavior from the physics of the elements themselves. You can claim it is impossible - you can claim that models at other levels of abstraction can help - however, in the end "the truth" lies in the circuitry in all its complexity. But you can't just jump into the complexity without a synergistic link to models that actually provide insights at the detailed level of the data you seek to collect. IMHO. Jim (no ps) On Jan 25, 2014, at 4:44 PM, Dan Goodman wrote: > The comparison with physics is an interesting one, but we have to remember that neuroscience isn't physics. For a start, neuroscience is clearly much harder than physics in many ways. Linear and separable phenomena are much harder to find in neuroscience, and so both analysing and modelling data is much more difficult.
Experimentally, it is much more difficult to control for independent variables in addition to the difficulty of working with living animals. > > So although we might be able to learn things from the history of physics - and I tend to agree with Axel Hutt that one of those lessons is to use the simplest possible model rather than trying to include all the biophysical details we know to exist - while neuroscience is in its pre-paradigmatic phase (agreed with Jim Bower on this) I would say we need to try a diverse set of methodological approaches and see what wins. In terms of funding agencies, I think the best thing they could do would be to not insist on any one methodological approach to the exclusion of others. > > I also share doubts about the idea that if we collect enough data then interesting results will just pop out. On the other hand, there are some valid hypotheses about brain function that require the collection of large amounts of data. Personally, I think that we need to understand the coordinated behaviour of many neurons to understand how information is encoded and processed in the brain. At present, it's hard to look at enough neurons simultaneously to be very sure of finding this sort of coordinated activity, and this is one of the things that the HBP and BRAIN initiative are aiming at. > > Dan Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. 
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tgd at eecs.oregonstate.edu Sat Jan 25 19:31:13 2014 From: tgd at eecs.oregonstate.edu (Thomas G. Dietterich) Date: Sat, 25 Jan 2014 16:31:13 -0800 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <336744F1-4A8A-4195-891D-83F246B31ABC@gmail.com> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1390664611.3420.15.camel@sam> <7C477036-B425-485D-998E-B7BB1CF34376@uthscsa.edu> <001501cf1a0d$930b6bd0$b9224370$@eecs.oregonstate.edu> <336744F1-4A8A-4195-891D-83F246B31ABC@gmail.com> Message-ID: <001a01cf1a2d$ebb19cd0$c314d670$@eecs.oregonstate.edu> Hi Bard, This sounds like trouble. Yes, in the Human Genome Project there was a clear end-point for the development of a new measurement method. Without a clear end-point, how would we even know whether we are measuring the right thing? 
I *hope* that each team working on new measurement methods knows what they are trying to measure and has an independent way of assessing their success. If they don't, then this will be a lot more like astronomy (e.g., measuring electromagnetic radiation and trying to make inferences) than like genetics. Astronomers do this because they have no alternative, but neuroscientists can and should do better. If the gene sequence measurement problem had been much harder, perhaps the various teams would not have been able to reach the end point. In that case, the technology development would resemble the "war on XXXX" in the sense of never reaching an end point. But even in that case, we would have been able to measure the error properties of the sequencing methods, so we could have measured progress and even done science using the imperfect measurements. From your description, it sounds like the measurement goals of the HBP are pretty murky. --Tom -- Thomas G. Dietterich, Distinguished Professor Voice: 541-737-5559 School of Electrical Engineering FAX: 541-737-1300 and Computer Science URL: eecs.oregonstate.edu/~tgd US Mail: 1148 Kelley Engineering Center Office: 2067 Kelley Engineering Center Oregon State Univ., Corvallis, OR 97331-5501 From: Bard Ermentrout [mailto:ermentrout at gmail.com] Sent: Saturday, January 25, 2014 2:35 PM To: Dietterich, Tom Cc: Connectionists Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare With the human genome project, there was a clear endpoint. One knows when one has sequenced the human genome. With the BRAIN initiative, I would be optimistic about it if someone could tell me when we would know that we were done. But of course we won't know when we are done. It is like the war on XXXX, where you can put in whatever you like for XXXX. Until we know the right questions to ask, we will get nowhere.
We'll amass a ton of data, but unlike the genome, where there is an underlying theory (we know genes code for proteins and that other parts of the genome are regulatory), we have no such theory of the nervous system. Everyone agrees as to what the genetic code is, but pick any random assortment of N neuroscientists, and there will be O(N) theories as to the code. --Bard Ermentrout On Jan 25, 2014, at 3:39 PM, "Thomas G. Dietterich" wrote: I've enjoyed this provocative exchange. It reminds me very much of the discussion in the late 80s about whether to invest $3B in sequencing the human genome. The lab scientists who studied individual genes and metabolic pathways complained that this lacked any detailed guiding hypotheses and would just be a waste of money. The sequencing advocates promised that it would be revolutionary. It turned out that they were right. In fact, the technology was much more effective and much cheaper than they had dared to hope. The current BRAIN initiative is investing a similar amount of money in developing new sensing and measurement technologies. If we look at the history of science, we see many cases in which new instruments led to giant leaps forward: telescopes, microscopes, x-ray crystallography, MRI, interferometry, rapid-throughput DNA, RNA, and protein sequencing. Of course every new technology will have biases and artifacts, and part of the process of developing a new technology is understanding these and how to compensate for them. Often, this advances the science as well. Sometimes it turns out that the instruments are only very indirectly capturing the underlying processes. This is where the big data argument gets interesting. If we look at the kind of data collected on the internet, it is almost always of this type. For many ecommerce applications, it suffices to build a predictive model of customer behavior. And this is the main contribution of applied machine learning.
The bigger the data sets, the more accurate these predictive models can become. Unfortunately, such models are not very useful in science, because the goal of neuroscience, for example, is not just to predict human behavior but to understand how that behavior is generated. If we only have "indirect" measurements, we must fit models where the "real" variables are latent. Making sound scientific inferences about latent variables is extremely difficult and relies on having very good models of how the latent variables produce the observed signal. Issues of identifiability and bias must be addressed, and there are also very challenging computational problems, because latent variable models typically exhibit many local optima. Regularization, the favorite tool of machine learning, usually worsens the biases and limits our ability to draw statistical inferences. If your model is fundamentally not identifiable, then it doesn't matter how big your data are. Every scientific experiment (or expenditure) involves risk, and every scientist must "place bets" on what will work. I'll place mine on improving our measurement technology, because I think it can get us closer to measuring the critical causal variables. But I agree with Jim that not enough effort goes into the unglamorous process of figuring out what it is that our instruments are actually measuring. And if we don't understand that, then our scientific inferences are likely to be wrong. -- Thomas G.
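[Editor's note: the local-optima point above can be made concrete with a toy sketch. The code below is purely illustrative (not from the thread, and not anyone's research code): it fits the simplest latent-variable model, a two-component 1D Gaussian mixture, by EM from several random initializations. Restarts need not agree on the final log-likelihood, which is exactly why inferences about the latent structure depend on more than data volume.]

```python
import math
import random

def em_gmm_1d(data, n_iter=100, seed=0):
    """EM for a two-component 1D Gaussian mixture; returns the final log-likelihood."""
    rng = random.Random(seed)
    mu = [rng.uniform(min(data), max(data)) for _ in range(2)]  # random mean init
    var, w = [1.0, 1.0], [0.5, 0.5]

    def dens(x, k):
        # weighted Gaussian density of component k at x
        return w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) / math.sqrt(2 * math.pi * var[k])

    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [dens(x, 0), dens(x, 1)]
            s = sum(p) or 1e-300  # guard against underflow
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, and (floored) variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk, 0.25)
    return sum(math.log(dens(x, 0) + dens(x, 1)) for x in data)

# Two well-separated clusters; the "true" latent variable is cluster membership
rng = random.Random(1)
data = [rng.gauss(-5, 1) for _ in range(100)] + [rng.gauss(5, 1) for _ in range(100)]
lls = [round(em_gmm_1d(data, seed=s), 1) for s in range(5)]
print(lls)  # restarts from different seeds need not agree on the log-likelihood
```

With real neural data the situation is far worse than this toy: the mixture's likelihood surface is multimodal even when the generative model is known exactly, which is the identifiability worry in miniature.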
Dietterich, Distinguished Professor Voice: 541-737-5559 School of Electrical Engineering FAX: 541-737-1300 and Computer Science URL: eecs.oregonstate.edu/~tgd US Mail: 1148 Kelley Engineering Center Office: 2067 Kelley Engineering Center Oregon State Univ., Corvallis, OR 97331-5501 From: Connectionists [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of james bower Sent: Saturday, January 25, 2014 9:05 AM To: jose at psychology.rutgers.edu Cc: Connectionists Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare Hi Jose, Ah, neuroimaging - don't get me started. Not all, but a great deal of neuroimaging has become a modern form of phrenology IMHO, distorting not only neuroscience but, it turns out, increasingly business too. To wit: at present I am actually much more concerned (and involved) with the use of brain imaging in what has come to be called "neuro-marketing". Many on this list are perhaps not aware, but while we worry about the effect of over-interpretation of neuroimaging data within neuroscience, the effect of this kind of data in the business world is growing, and not good. My guess is that those of you in the United States might have noted the rather large and absurd marketing campaign by Lumosity and the sellers of other "brain training" games. A number of neuroscientists are actually now getting into this business. As some of you know, almost as long as I have been involved in computational neuroscience, I have also been involved in exploring the use of games for children's learning. In the game/learning world, the misuse of neuroscience, and especially brain imaging, has become excessive. It wouldn't be appropriate to belabor this point on this list - although the use of neuroscience by the NN community does, in my view, often cross over into a kind of neuro-marketing.
For those who are interested in the more general abuses of neuro-marketing, here is a link to the first-ever session I organized in the game development world based on my work as a neurobiologist: http://www.youtube.com/watch?v=Joqmf4baaT8 &list=PL1G85ERLMItAA0Bgvh0PoZ5iGc6cHEv6f&index=16 As setup for that video, you should know that in his keynote address the night before, Jesse Schell (of Schell Games and CMU) introduced his talk by saying that he was going to tell the audience what neuroscientists have figured out about human brains, going on to claim that they (we) have discovered that human brains come in two forms, goat brains and sheep brains. Of course the talk implied that the goats were in the room and the sheep were out there to be sold to (although, as I noted on the twitter feed at the time, there was an awful lot of "baaing" going on in the audience :-) ). Anyway, the second iteration of my campaign to try to bring some balance and sanity to neuro-marketing will take place at SxSW in Austin in March, in another session I have organized on the subject: http://schedule.sxsw.com/2014/events/event_IAP22511 If you happen to be in Austin for SxSW, feel free to stop by. :-) The larger point, I suppose, is that while we debate these things within our own community, our debate and our claims have unintended consequences in larger society, with companies like Lumosity in effect marketing to the baby boomers the (false) idea that using "the science of neuroplasticity" and doing something as simple as playing games "designed by neuroscientists" can revert their brains to teenage form - with fMRI and neuropsychology used extensively as evidence. Perhaps society has become so accustomed to outlandish claims and overselling that they won't hold us accountable. Or perhaps they will. Jim p.s. (always a ps) I have also recently proposed that we declare a moratorium on neuroimaging studies until we at least know how the signal is related to actual neural activity.
Seems rather foolish to base so much speculation and interpretation on a signal we don't understand. Easy enough to pooh-pooh cell spikers - but to my knowledge, there is no evidence that neural computing is performed through the generation of areas of red, yellow, green and blue. :-) On Jan 25, 2014, at 9:43 AM, Stephen José Hanson wrote: Indeed. It's like we never stopped arguing about this for the last 30 years! Maybe this is a brain principle: integrated, fossilized views of brain principles. I actually agree with John.. and disagree with you Jim... surprise surprise... seems like old times.. The most disconcerting thing about the emergence of the new new neural network field(s) is that the NIH Connectome RFPs contain language about large-scale network functions... and yet when program managers are directly asked whether fMRI or any neuroimaging methods would be compliant with the RFP.. the answer is "NO". So once the neuroscience cell spikers get done analyzing 1000 or 10000 or even 1M neurons at a circuit level.. we still won't know why someone makes decisions about the shoes they wear, much less any other mental function! Hopefully neuroimaging will be relevant again. Just saying. Cheers. Steve PS. Hi Gary! Dijon! Stephen José Hanson Director RUBIC (Rutgers Brain Imaging Center) Professor of Psychology Member of Cognitive Science Center (NB) Member EE Graduate Program (NB) Member CS Graduate Program (NB) Rutgers University email: jose at psychology.rutgers.edu web: psychology.rutgers.edu/~jose lab: www.rumba.rutgers.edu fax: 866-434-7959 voice: 973-353-3313 (RUBIC) On Fri, 2014-01-24 at 17:31 -0600, james bower wrote: Well, well - remarkable!!! An actual debate on connectionists - just like the old days - in fact REMARKABLY like the old days. Same issues - how "brain-like" is "brain-like", and how much hype is "brain-like" generating by itself.
How much do engineers really know about neuroscience, and how much do neurobiologists really know about the brain (both groups tend to claim they know a lot - now and then). I went to the NIPS meeting this year for the first time in more than 25 years. Some of the older timers on connectionists may remember that I was one of the founding members of NIPS - and some will also remember that a few years of trying to get some kind of real interaction between neuroscience and the then "neural networks" led me to give up and start, with John Miller, the CNS meetings - focused specifically on computational neuroscience. Another story - at NIPS this year, there was a very large focus on "big data" of course, with "machine learning" having largely replaced "neural networks" in most talk titles. I was actually a panelist (most had no idea of my early involvement with NIPS) at a workshop on big data in on-line learning (generated by edX, Khan, etc.). I was interested because for 15 years I have also been running Numedeon Inc, whose virtual world for kids, Whyville.net, was the first of the game-based immersive worlds, and is still one of the biggest and most innovative. (no MOOCs there). From the panel I made the assertion, as I had, in effect, many years ago, that if you have a big data problem, it is likely you are not taking anything resembling a "brain-like" approach to solving it. The version almost 30 years ago, when everyone was convinced that the relatively simple Hopfield network could solve all kinds of hard problems, was my assertion that, in fact, simple neural networks, or simple neural network learning rules, were unlikely to work very well, because, almost certainly, you have to build a great deal of knowledge about the nature of the problem into all levels (including the input layer) of your network to get it to work. Now, many years later, everyone seems convinced that you can figure things out by amassing an enormous amount of data and working on it.
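[Editor's note: for readers who haven't met it, the Hopfield network referred to above can be sketched in a few lines. This is a minimal illustrative implementation, not anyone's research code: Hebbian outer-product weights store a pattern, and asynchronous sign updates let a corrupted cue settle back onto it.]

```python
import random

def train_hopfield(patterns):
    """Hebbian outer-product learning: W[i][j] accumulates x_i * x_j over patterns."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, n_sweeps=5, seed=0):
    """Asynchronous updates: each unit flips toward the sign of its input field."""
    rng = random.Random(seed)
    state = list(state)
    n = len(state)
    for _ in range(n_sweeps):
        for i in rng.sample(range(n), n):  # random update order each sweep
            field = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if field >= 0 else -1
    return state

# Store one 16-unit +/-1 pattern and recall it from a cue with 3 bits flipped
pattern = [1, -1] * 8
W = train_hopfield([pattern])
cue = list(pattern)
for i in (0, 5, 11):
    cue[i] = -cue[i]
print(recall(W, cue) == pattern)  # → True: the corrupted cue settles back to the stored pattern
```

The classic result is that such a net reliably stores only on the order of 0.14n random patterns, and nothing in the learning rule knows anything about the structure of a given problem - which is one way of reading the point above that knowledge of the problem has to be built into the network for it to do real work.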
It has been a slow revolution (it may actually not even be at the revolutionary stage yet), BUT it is very likely that the nervous system (like all model-based systems) doesn't collect tons of data to figure out with feedforward processing and filtering, but instead collects the data it thinks it needs to confirm what it already believes to be true. In other words, it specifically avoids the big data problem at all cost. It is willing to suffer the consequence that occasionally (more and more recently for me), you end up talking to someone for 15 minutes before you realize that they are not the person you thought they were. An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. That might to some sound like a semantic quibble, but I assure you it is not. If you believe, as I do, that the brain solves very hard problems, in very sophisticated ways, that involve in some sense the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point. What that means, inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention, experimentally and in our models, to the actual detailed anatomy and physiology of the brain's circuits and cells. I saw none of that at NIPS - and in fact, I see less and less of that at the CNS meeting as well. All too easy to simplify, pontificate, and sell. So, I sympathize with Juyang Weng's frustration. If there is any better evidence that we are still in the dark, it is that we are still having the same debate 30 years later, with the same ruffled feathers, the same bold assertions (mine included) and the same seeming lack of progress.
If anyone is interested, here is a chapter I recently wrote for the book I edited on 20 years of progress in computational neuroscience (Springer), about the last 40 years spent trying to understand the workings of a single neuron (the cerebellar Purkinje cell) using models: https://www.dropbox.com/s/5xxut90h65x4ifx/272602_1_En_5_DeltaPDF%20copy.pdf Perhaps it gives some sense of how far we have yet to go. Jim Bower On Jan 24, 2014, at 4:00 PM, Ralph Etienne-Cummings wrote: Hey, I am happy when our taxpayer money, of which I contribute way more than I get back, funds any science in all branches of the government. Neuromorphic and brain-like computing is on the rise... Let's please not shoot ourselves in the foot with in-fighting!! Thanks, Ralph's Android On Jan 24, 2014 4:13 PM, "Juyang Weng" wrote: Yes, Gary, you are correct politically: do not upset the "emperor", since he is always right and he never falls behind the literature. But then no clear message can ever get across. Falling behind the literature is still the fact. Moreover, the entire research community that does brain research falls badly behind the literature of the necessary disciplines. The current U.S. infrastructure of this research community does not fit at all the brain subject it studies! This is not a joking matter. We need to wake up, please. Azriel Rosenfeld criticized the entire computer vision field in his invited talk at CVPR during the early 1980s: "just doing business as usual" and "more or less the same". However, the entire computer vision field still has not woken up after 30 years! As another example, I respect your colleague Terry Sejnowski, but I must openly say that I object to his "we need more data" as the key message for the U.S. BRAIN Project. This is another example of "just doing business as usual", so that everybody will not be against you. Several major disciplines are closely related to the brain, but the scientific community is still very much fragmented, not willing to wake up.
Some of our government officials only say superficial words like "Big Data" because that is what we like to hear. This cost is too high for our taxpayers. -John On 1/24/14 2:19 PM, Gary Cottrell wrote: Hi John - It's great that you have an over-arching theory, but if you want people to read it, it would be better not to disrespect people in your emails. You say you respect Matthew, but then you accuse him of falling behind in the literature because he hasn't read your book. Politeness (and modesty!) will get you much farther than the tone you have taken. g. On Jan 24, 2014, at 6:27 PM, Juyang Weng wrote: Dear Matthew: My apology if my words are direct, so that people with short attention spans can quickly get my points. I do respect you. You wrote: "to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks." Your statement is absolutely not true. Your term "brain-like way" is as old as "brain-like computing". Read about the 14 neurocomputers built by 1988 in Robert Hecht-Nielsen, "Neurocomputing: picking the human brain", IEEE Spectrum 25(3), March 1988, pp. 36-41. Hardware will not solve the fundamental problem of the current severe human lack of understanding of the brain, no matter how many computers are linked together. Neither will the current "Big Data" fanfare from NSF in the U.S. IBM's brain project has similar fundamental flaws, and the IBM team lacks key experts. Some of the NSF managers have been turning a blind eye to breakthrough work on brain modeling for over a decade, but they want to waste more taxpayer money on the "Big Data" fanfare and other "try again" fanfares. It is a scientific shame for NSF in a developed country like the U.S. to engage in such politics without real science, causing another large developing country like China to also echo "Big Data". "Big Data" was called "Large Data", well known in Pattern Recognition for many years.
Stop playing shameful politics in science! You wrote: "Nobody is claiming a `brain-scale theory that bridges the wide gap,' or even close." To say that, you have not read the book Natural and Artificial Intelligence. You are falling as badly behind the literature as some of our NSF project managers. With their lack of knowledge, they did not understand that the "bridge" was in print, on their desks and in the literature. -John On 1/23/14 6:15 PM, Matthew Cook wrote: Dear John, I think all of us on this list are interested in brain-like computing, so I don't understand your negativity on the topic. Many of the speakers are involved in efforts to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks. I suspect the phrase "brain-like computing" is intended as an umbrella term that can cover all of these efforts. I think you are reading far more into the announcement than is there. Nobody is claiming a "brain-scale theory that bridges the wide gap," or even close. To the contrary, the announcement is very cautious, saying that intense research is "gradually increasing our understanding" and "beginning to shed light on the human brain". In other words, the research advances slowly, and we are at the beginning. There is certainly no claim that any of the speakers has finished the job. Similarly, the announcement refers to "successful demonstration of some of the underlying principles [of the brain] in software and hardware", which implicitly acknowledges that we do not have all the principles. There is nothing like a claim that anyone has enough principles to "explain highly integrated brain functions". You are concerned that this workshop will avoid the essential issue of the wide gap between neuron-like computing and highly integrated brain functions. What makes you think it will avoid this?
We are all interested in filling this gap, and the speakers (well, the ones I know) all either work on this, or work on supporting people who work on this, or both. This looks like it will be a very nice workshop, with talks from leaders in the field on a variety of topics, and I wish I were able to attend it. Matthew On Jan 23, 2014, at 7:08 PM, Juyang Weng wrote: Dear Anders, Interesting topic about the brain! But "Brain-Like Computing" is misleading, because neural networks have been around for at least 70 years. I quote: "We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing." What are the underlying principles? I am concerned that projects like "Brain-Like Computing" avoid essential issues: the wide gap between neuron-like computing and well-known, highly integrated brain functions. Continuing this avoidance would again create a bad name for "brain-like computing", just as such behaviors did for "neural networks". Henry Markram criticized IBM's brain project, which does miss essential brain principles, but has he published such principles? Will modeling individual neurons more and more precisely explain highly integrated brain functions? From what I know, definitely not, by far. Has any of your 10 speakers published a brain-scale theory that bridges the wide gap? Are you aware of any such published theories? I am sorry for CCing the list, but many on the list have said that they would like to hear discussions instead of just event announcements. -John On 1/13/14 12:14 PM, Anders Lansner wrote: Workshop on Brain-Like Computing, February 5-6 2014 The exciting prospects of developing brain-like information processing is one of the Deans Forum focus areas. As a means to encourage progress in this research area, a workshop is arranged February 5th-6th 2014 on the KTH campus in Stockholm.
The human brain excels over contemporary computers and robots in processing real-time unstructured information and uncertain data, as well as in controlling a complex mechanical platform with multiple degrees of freedom like the human body. Intense experimental research, complemented by computational and informatics efforts, is gradually increasing our understanding of underlying processes and mechanisms in small animal and mammalian brains, and is beginning to shed light on the human brain. We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing. This workshop assembles experts, from the partners and also other leading names in the field, to provide an overview of the state of the art in theoretical, software, and hardware aspects of brain-like computing. List of speakers Speaker Affiliation Giacomo Indiveri ETH Zürich Abigail Morrison Forschungszentrum Jülich Mark Ritter IBM Watson Research Center Guillermo Cecchi IBM Watson Research Center Anders Lansner KTH Royal Institute of Technology Ahmed Hemani KTH Royal Institute of Technology Steve Furber University of Manchester Kazuyuki Aihara University of Tokyo Karlheinz Meier Heidelberg University Andreas Schierwagen Leipzig University To sign up for the workshop, please use the registration form found at http://bit.ly/1dkuBgR You need to sign up before January 28th. Web page: http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038 ****************************************** Anders Lansner Professor in Computer Science, Computational Biology School of Computer Science and Communication Stockholm University and Royal Institute of Technology (KTH) ala at kth.se, +46-70-2166122
-- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271 ] Gary Cottrell 858-534-6640 FAX: 858-534-7029 My schedule is here: http://tinyurl.com/b7gxpwo Computer Science and Engineering 0404 IF USING FED EX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. "Physical reality is great, but it has a lousy search function." -Matt Tong "Only connect!" -E.M. Forster "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will."
-Geoff Hinton "There is nothing objective about objective functions" - Jay McClelland "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. 
From bower at uthscsa.edu Sat Jan 25 21:07:20 2014 From: bower at uthscsa.edu (james bower) Date: Sat, 25 Jan 2014 20:07:20 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <001a01cf1a2d$ebb19cd0$c314d670$@eecs.oregonstate.edu> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1390664611.3420.15.camel@sam> <7C477036-B425-485D-998E-B7BB1CF34376@uthscsa.edu> <001501cf1a0d$930b6bd0$b9224370$@eecs.oregonstate.edu> <336744F1-4A8A-4195-891D-83F246B31ABC@gmail.com> <001a01cf1a2d$ebb19cd0$c314d670$@eecs.oregonstate.edu> Message-ID: > > > I *hope* that each team working on new measurement methods knows what they are trying to measure This is precisely the original point - you CAN'T know the right thing to measure without theory and models. Who knew the apparent positions of stars shift when they are close to the sun? Who knew that the 3.2K (or whatever it was) radiation that those two AT&T engineers were trying to remove with aluminum foil from their microwave dishes was SUPPOSED to be there? And who knew that the overshoot of the action potential wasn't an artifact of the amplifier? And what about the student who, in the early days of GENESIS, wrote to us claiming that GENESIS was poorly programmed because if you hyperpolarized the cell, the membrane rebounded into an action potential? The history of science is full of "who knews"; unfortunately, neuroscience is currently full of "I already knows" for no good reason. :-) It's the neuroscience theory behind the measurements that is so murky as to be nonexistent. Jim > and has an independent way of assessing their success. If they don't, then this will be a lot more like astronomy (e.g., measuring electromagnetic radiation and trying to make inferences) than like genetics. 
Astronomers do this because they have no alternative, but neuroscientists can and should do better. > > If the gene sequence measurement problem had been much harder, perhaps the various teams would not have been able to reach the end point. In that case, the technology development would resemble the "war on XXXX" in the sense of never reaching an end point. But even in that case, we would have been able to measure the error properties of the sequencing methods, so we could have measured progress and even done science using the imperfect measurements. > > From your description, it sounds like the measurement goals of the HBP are pretty murky. > > --Tom > > > -- > Thomas G. Dietterich, Distinguished Professor Voice: 541-737-5559 > School of Electrical Engineering FAX: 541-737-1300 > and Computer Science URL: eecs.oregonstate.edu/~tgd > US Mail: 1148 Kelley Engineering Center > Office: 2067 Kelley Engineering Center > Oregon State Univ., Corvallis, OR 97331-5501 > > > From: Bard Ermentrout [mailto:ermentrout at gmail.com] > Sent: Saturday, January 25, 2014 2:35 PM > To: Dietterich, Tom > Cc: Connectionists > Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare > > With the human genome project, there was a clear endpoint. One knows when one has sequenced the human genome. With the BRAIN initiative, I would be optimistic about it if someone could tell me when we would know that we were done. But, of course, we won't know when we are done. It is like the war on XXXX, where you can put in whatever you like for XXXX. Until we know the right questions to ask we will get nowhere. We'll amass a ton of data, but unlike the genome, where there is an underlying theory (we know genes code for proteins and that other parts of the genome are regulatory), we have no such theory of the nervous system. Everyone agrees as to what the genetic code is, but pick any random assortment of N neuroscientists, and there will be O(N) theories as to the code. 
> > --Bard Ermentrout > > On Jan 25, 2014, at 3:39 PM, "Thomas G. Dietterich" wrote: > > I've enjoyed this provocative exchange. It reminds me very much of the discussion in the late 80s about whether to invest $3B in sequencing the human genome. The lab scientists who studied individual genes and metabolic pathways complained that this lacked any detailed guiding hypotheses and would just be a waste of money. The sequencing advocates promised that it would be revolutionary. It turned out that they were right. In fact, the technology was much more effective and much cheaper than they had dared to hope. > > The current BRAIN initiative is investing a similar amount of money in developing new sensing and measurement technologies. If we look at the history of science, we see many cases in which new instruments led to giant leaps forward: telescopes, microscopes, x-ray crystallography, MRI, interferometry, rapid throughput DNA, RNA, and protein sequencing. Of course every new technology will have biases and artifacts, and part of the process of developing a new technology is understanding these and how to compensate for them. Often, this advances the science as well. > > Sometimes it turns out that the instruments are only very indirectly capturing the underlying processes. This is where the big data argument gets interesting. If we look at the kind of data collected on the internet, it is almost always of this type. For many ecommerce applications, it suffices to build a predictive model of customer behavior. And this is the main contribution of applied machine learning. The bigger the data sets, the more accurate these predictive models can become. > > Unfortunately, such models are not very useful in science, because the goal of neuroscience, for example, is not just to predict human behavior but to understand how that behavior is generated. If we only have "indirect" measurements, we must fit models where the "real" variables are latent. 
Making sound scientific inferences about latent variables is extremely difficult and relies on having very good models of how the latent variables produce the observed signal. Issues of identifiability and bias must be addressed, and there are also very challenging computational problems, because latent variable models typically exhibit many local optima. Regularization, the favorite tool of machine learning, usually worsens the biases and limits our ability to draw statistical inferences. If your model is fundamentally not identifiable, then it doesn't matter how big your data are. > > Every scientific experiment (or expenditure) involves risk, and every scientist must "place bets" on what will work. I'll place mine on improving our measurement technology, because I think it can get us closer to measuring the critical causal variables. But I agree with Jim that not enough effort goes into the unglamorous process of figuring out what it is that our instruments are actually measuring. And if we don't understand that, then our scientific inferences are likely to be wrong. > > -- > Thomas G. Dietterich, Distinguished Professor Voice: 541-737-5559 > School of Electrical Engineering FAX: 541-737-1300 > and Computer Science URL: eecs.oregonstate.edu/~tgd > US Mail: 1148 Kelley Engineering Center > Office: 2067 Kelley Engineering Center > Oregon State Univ., Corvallis, OR 97331-5501 > > > From: Connectionists [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of james bower > Sent: Saturday, January 25, 2014 9:05 AM > To: jose at psychology.rutgers.edu > Cc: Connectionists > Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare > > Hi Jose, > > Ah, neuroimaging - don't get me started. Not all, but a great deal of neuroimaging has become a modern form of phrenology IMHO, distorting not only neuroscience, but it turns out, increasingly business too. 
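[Editorial illustration, not part of the original thread: Dietterich's identifiability point above can be made concrete with a toy latent-variable model. In a two-component Gaussian mixture, swapping the latent component labels produces a different parameterization with exactly the same likelihood for every data set, so no amount of data can distinguish the two; the helper function and parameter values below are purely hypothetical.]

```python
import numpy as np

def mixture_loglik(x, means, sds, weights):
    """Log-likelihood of 1-D data x under a Gaussian mixture model."""
    comp = np.stack([
        w * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
        for m, s, w in zip(means, sds, weights)
    ])
    # Marginalize over the latent component label, then sum log-densities.
    return float(np.sum(np.log(comp.sum(axis=0))))

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])

# Two parameterizations that differ only by permuting the latent labels.
theta = dict(means=[-2.0, 3.0], sds=[1.0, 1.0], weights=[0.5, 0.5])
swapped = dict(means=[3.0, -2.0], sds=[1.0, 1.0], weights=[0.5, 0.5])

ll1 = mixture_loglik(x, **theta)
ll2 = mixture_loglik(x, **swapped)
# ll1 == ll2: the data cannot tell the two apart, however large the sample.
```

The same ambiguity is why fitting such models (e.g., by EM) lands in different but equally good optima depending on initialization.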
To wit: > > At present I am actually much more concerned (and involved) in the use of brain imaging in what has come to be called "Neuro-marketing". Many on this list are perhaps not aware, but while we worry about the effect of over-interpretation of neuroimaging data within neuroscience, the effect of this kind of data in the business world is growing and not good. My guess is that those of you in the United States will have noted the rather large and absurd marketing campaign by Lumosity and the sellers of other "brain training" games. A number of neuroscientists are actually now getting into this business. > > As some of you know, almost as long as I have been involved in computational neuroscience, I have also been involved in exploring the use of games for children's learning. In the game/learning world, the misuse of neuroscience and especially brain imaging has become excessive. It wouldn't be appropriate to belabor this point on this list - although the use of neuroscience by the NN community does, in my view, often cross over into a kind of neuro-marketing. > > For those who are interested in the more general abuses of neuro-marketing, here is a link to the first-ever session I organized in the game development world based on my work as a neurobiologist: > > http://www.youtube.com/watch?v=Joqmf4baaT8&list=PL1G85ERLMItAA0Bgvh0PoZ5iGc6cHEv6f&index=16 > > As set up for that video, you should know that in his keynote address the night before, Jesse Schell (of Schell Games and CMU) introduced his talk by saying that he was going to tell the audience what neuroscientists have figured out about human brains, going on to claim that they (we) have discovered that human brains come in two forms, goat brains and sheep brains. Of course the talk implied that the goats were in the room and the sheep were out there to be sold to (although, as I noted on the Twitter feed at the time, there was an awful lot of "baaing" going on in the audience :-) ). 
> > Anyway, the second iteration of my campaign to try to bring some balance and sanity to neuro-marketing will take place at SxSW in Austin in March, in another session I have organized on the subject. > > http://schedule.sxsw.com/2014/events/event_IAP22511 > > If you happen to be in Austin for SxSW feel free to stop by. :-) > > The larger point, I suppose, is that while we debate these things within our own community, our debate and our claims have unintended consequences in larger society, with companies like Lumosity, in effect, marketing to the baby boomers the idea (false) that using "the science of neuroplasticity" and doing something as simple as playing games "designed by neuroscientists" can revert their brains to teenage form. fMRI and neuropsychology are used extensively as evidence. > > Perhaps society has become so accustomed to outlandish claims and overselling that they won't hold us accountable. > > Or perhaps they will. > > Jim > > > p.s. (always a ps) I have also recently proposed that we declare a moratorium on neuroimaging studies until we at least know how the signal is related to actual neural activity. Seems rather foolish to base so much speculation and interpretation on a signal we don't understand. Easy enough to poo poo cell spikers - but to my knowledge, there is no evidence that neural computing is performed through the generation of areas of red, yellow, green and blue. :-) > > > > > > > > On Jan 25, 2014, at 9:43 AM, Stephen José Hanson wrote: > > > > Indeed. It's like we never stopped arguing about this for the last 30 years! Maybe this is a brain principle > integrated fossilized views of the brain principles. > > I actually agree with John.. and disagree with you Jim... surprise surprise...seems like old times.. 
> > The most disconcerting thing about the emergence of the new new neural network field(s) > is that the NIH Connectome RFPs contain language about large-scale network functions...and > yet when Program managers are directly asked whether fMRI or any neuroimaging methods > would be compliant with the RFP.. the answer is "NO". > > So once the neuroscience cell spikers get done analyzing 1000 or 10000 or even a 1M neurons > at a circuit level.. we still won't know why someone makes decisions about the shoes they wear; much > less any other mental function! Hopefully neuroimaging will be relevant again. > > Just saying. > > Cheers. > > Steve > PS. Hi Gary! Dijon! > > Stephen José Hanson > Director RUBIC (Rutgers Brain Imaging Center) > Professor of Psychology > Member of Cognitive Science Center (NB) > Member EE Graduate Program (NB) > Member CS Graduate Program (NB) > Rutgers University > > email: jose at psychology.rutgers.edu > web: psychology.rutgers.edu/~jose > lab: www.rumba.rutgers.edu > fax: 866-434-7959 > voice: 973-353-3313 (RUBIC) > > On Fri, 2014-01-24 at 17:31 -0600, james bower wrote: > > > Well, well - remarkable!!! an actual debate on connectionists - just like the old days - in fact REMARKABLY like the old days. > > > Same issues - how "brain-like" is "brain-like", and how much hype is "brain-like" generating by itself. How much do engineers really know about neuroscience, and how much do neurobiologists really know about the brain (both groups tend to claim they know a lot - now and then). > > > I went to the NIPS meeting this year for the first time in more than 25 years. Some of the old-timers on connectionists may remember that I was one of the founding members of NIPS - and some will also remember that a few years of trying to get some kind of real interaction between neuroscience and then "neural networks" led me to give up and start, with John Miller, the CNS meetings - focused specifically on computational neuroscience. 
Another story - > > > At NIPS this year, there was a very large focus on "big data" of course, with "machine learning" having largely replaced "Neural Networks" in most talk titles. I was actually a panelist (most had no idea of my early involvement with NIPS) in a workshop on big data in on-line learning (generated by edX, Khan Academy, etc.). I was interested, because for 15 years I have also been running Numedeon Inc., whose virtual world for kids, Whyville.net, was the first game-based immersive world, and is still one of the biggest and most innovative. (no MOOCs there). > > > From the panel I made the assertion, as I had, in effect, many years ago, that if you have a big data problem - it is likely you are not taking anything resembling a "brain-like" approach to solving it. The version almost 30 years ago, when everyone was convinced that the relatively simple Hopfield Network could solve all kinds of hard problems, was my assertion that, in fact, simple "Neural Networks" or simple Neural Network learning rules were unlikely to work very well, because, almost certainly, you have to build a great deal of knowledge about the nature of the problem into all levels (including the input layer) of your network to get it to work. > > > Now, many years later, everyone seems convinced that you can figure things out by amassing an enormous amount of data and working on it. > > > It has been a slow revolution (may actually not even be at the revolutionary stage yet), BUT it is very likely that the nervous system (like all model-based systems) doesn't collect tons of data to figure out with feedforward processing and filtering, but instead collects the data it thinks it needs to confirm what it already believes to be true. In other words, it specifically avoids the big data problem at all cost. 
It is willing to suffer the consequence that occasionally (more and more recently for me), you end up talking to someone for 15 minutes before you realize that they are not the person you thought they were. > > > An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. This might to some sound like a semantic quibble, but I assure you it is not. > > > If you believe as I do, that the brain solves very hard problems, in very sophisticated ways, that involve, in some sense, the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point. > > > What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention experimentally and in our models to the actual detailed anatomy and physiology of the brain's circuits and cells. > > > I saw none of that at NIPS - and in fact, I see less and less of that at the CNS meeting as well. > > > All too easy to simplify, pontificate, and sell. > > > So, I sympathize with Juyang Weng's frustration. > > > If there is any better evidence that we are still in the dark, it is that we are still having the same debate 30 years later, with the same ruffled feathers, the same bold assertions (mine included) and the same seeming lack of progress. > > > If anyone is interested, here is a chapter I recently wrote for the book I edited on 20 years of progress in computational neuroscience (Springer), on the last 40 years trying to understand the workings of a single neuron (the cerebellar Purkinje cell) using models. https://www.dropbox.com/s/5xxut90h65x4ifx/272602_1_En_5_DeltaPDF%20copy.pdf > > > Perhaps some sense of how far we have yet to go. 
> > > Jim Bower > > > > > > > > On Jan 24, 2014, at 4:00 PM, Ralph Etienne-Cummings wrote: > > Hey, I am happy when our taxpayer money, of which I contribute way more than I get back, funds any science in all branches of the government. > > Neuromorphic and brain-like computing is on the rise ... Let's please not shoot ourselves in the foot with in-fighting!! > > Thanks, > Ralph's Android > > On Jan 24, 2014 4:13 PM, "Juyang Weng" wrote: > Yes, Gary, you are correct politically, not to upset the "emperor" since he is always right and he never falls behind the literature. > > But then no clear message can ever get across. Falling behind the literature is still the fact. More, the entire research community that does brain research falls badly behind the literature of the necessary disciplines. The current U.S. infrastructure of this research community does not fit the brain subject it studies at all! This is not a joking matter. We need to wake up, please. > > Azriel Rosenfeld criticized the entire computer vision field in his invited talk at CVPR during the early 1980s: "just doing business as usual" and "more or less the same". However, the entire computer vision field still has not woken up after 30 years! As another example, I respect your colleague Terry Sejnowski, but I must openly say that I object to his "we need more data" as the key message for the U.S. BRAIN Project. This is another example of "just doing business as usual", and so everybody will not be against you. > > Several major disciplines are closely related to the brain, but the scientific community is still very much fragmented, not willing to wake up. Some of our government officials only say superficial words like "Big Data" because we like to hear them. This cost is too high for our taxpayers. 
> > -John > > On 1/24/14 2:19 PM, Gary Cottrell wrote: > > Hi John - > > > It's great that you have an over-arching theory, but if you want people to read it, it would be better not to disrespect people in your emails. You say you respect Matthew, but then you accuse him of falling behind in the literature because he hasn't read your book. Politeness (and modesty!) will get you much farther than the tone you have taken. > > > g. > > On Jan 24, 2014, at 6:27 PM, Juyang Weng wrote: > > Dear Matthew: > > My apologies if my words are direct, so that people with short attention spans can quickly get my points. I do respect you. > > You wrote: "to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks." > > Your statement is absolutely not true. Your term "brain-like way" is as old as "brain-like computing". Read about the 14 neurocomputers built by 1988 in Robert Hecht-Nielsen, "Neurocomputing: picking the human brain", IEEE Spectrum 25(3), March 1988, pp. 36-41. Hardware will not solve the fundamental problem of the current severe human lack of understanding of the brain, no matter how many computers are linked together. Neither will the current "Big Data" fanfare from NSF in the U.S. IBM's brain project has similar fundamental flaws, and the IBM team lacks key experts. > > Some of the NSF managers have been turning a blind eye to breakthrough work on brain modeling for over a decade, but they want to waste more taxpayers' money on the "Big Data" fanfare and other "try again" fanfares. It is a scientific shame for NSF in a developed country like the U.S. to do that shameful politics without real science, causing another large developing country like China to also echo "Big Data". "Big Data" was called "Large Data", well known in Pattern Recognition for many years. Stop playing shameful politics in science! 
> > You wrote: "Nobody is claiming a `brain-scale theory that bridges the wide gap,' or even close." > > To say that, you have not read the book: Natural and Artificial Intelligence. You are falling as far behind the literature as some of our NSF project managers. With their lack of knowledge, they did not understand that the "bridge" was in print on their desks and in the literature. > > -John > > On 1/23/14 6:15 PM, Matthew Cook wrote: > > Dear John, > > > I think all of us on this list are interested in brain-like computing, so I don't understand your negativity on the topic. > > > Many of the speakers are involved in efforts to build hardware that works in a more brain-like way than conventional computers do. This is not what is usually meant by research in neural networks. I suspect the phrase "brain-like computing" is intended as an umbrella term that can cover all of these efforts. > > > I think you are reading far more into the announcement than is there. Nobody is claiming a "brain-scale theory that bridges the wide gap," or even close. To the contrary, the announcement is very cautious, saying that intense research is "gradually increasing our understanding" and "beginning to shed light on the human brain". In other words, the research advances slowly, and we are at the beginning. There is certainly no claim that any of the speakers has finished the job. > > > Similarly, the announcement refers to "successful demonstration of some of the underlying principles [of the brain] in software and hardware", which implicitly acknowledges that we do not have all the principles. There is nothing like a claim that anyone has enough principles to "explain highly integrated brain functions". > > > You are concerned that this workshop will avoid the essential issue of the wide gap between neuron-like computing and highly integrated brain functions. What makes you think it will avoid this? 
We are all interested in filling this gap, and the speakers (well, the ones whom I know) all either work on this, or work on supporting people who work on this, or both. > > > This looks like it will be a very nice workshop, with talks from leaders in the field on a variety of topics, and I wish I were able to attend it. > > > Matthew > > > > On Jan 23, 2014, at 7:08 PM, Juyang Weng wrote: > > Dear Anders, > > Interesting topic about the brain! But Brain-Like Computing is misleading because neural networks have been around for at least 70 years. > > I quote: "We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing." > > What are the underlying principles? I am concerned that projects like "Brain-Like Computing" avoid essential issues: > the wide gap between neuron-like computing and well-known highly integrated brain functions. > Continuing this avoidance would again create bad names for "brain-like computing", just as such behaviors did for "neural networks". > > Henry Markram criticized IBM's brain project, which does miss essential brain principles, but has he published such principles? > Will modeling individual neurons more and more precisely explain highly integrated brain functions? From what I know, definitely not, by far. > > Has any of your 10 speakers published any brain-scale theory that bridges the wide gap? Are you aware of any such published theories? > > I am sorry for giving a CC to the list, but many on the list said that they like to hear discussions instead of just event announcements. > > -John > > > > On 1/13/14 12:14 PM, Anders Lansner wrote: > > Workshop on Brain-Like Computing, February 5-6, 2014 > > The exciting prospect of developing brain-like information processing is one of the Deans Forum focus areas. 
> As a means to encourage progress in this research area, a workshop is arranged February 5th-6th, 2014, on the KTH campus in Stockholm. > > The human brain excels over contemporary computers and robots in processing real-time unstructured information and uncertain data, as well as in controlling a complex mechanical platform with multiple degrees of freedom like the human body. Intense experimental research, complemented by computational and informatics efforts, is gradually increasing our understanding of underlying processes and mechanisms in small animal and mammalian brains and is beginning to shed light on the human brain. We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing. > > This workshop assembles experts, from the partners and also other leading names in the field, to provide an overview of the state of the art in theoretical, software, and hardware aspects of brain-like computing. > > List of speakers (speaker - affiliation): > > Giacomo Indiveri - ETH Zürich > > Abigail Morrison - Forschungszentrum Jülich > > Mark Ritter - IBM Watson Research Center > > Guillermo Cecchi - IBM Watson Research Center > > Anders Lansner - KTH Royal Institute of Technology > > Ahmed Hemani - KTH Royal Institute of Technology > > Steve Furber - University of Manchester > > Kazuyuki Aihara - University of Tokyo > > Karlheinz Meier - Heidelberg University > > Andreas Schierwagen - Leipzig University > > For signing up to the Workshop, please use the registration form found at http://bit.ly/1dkuBgR > > You need to sign up before January 28th. 
> > Web page: http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038 > > ****************************************** > > Anders Lansner > > Professor in Computer Science, Computational biology > > School of Computer Science and Communication > > Stockholm University and Royal Institute of Technology (KTH) > > ala at kth.se, +46-70-2166122 > > -- > -- > Juyang (John) Weng, Professor > Department of Computer Science and Engineering > MSU Cognitive Science Program and MSU Neuroscience Program > 428 S Shaw Ln Rm 3115 > Michigan State University > East Lansing, MI 48824 USA > Tel: 517-353-4388 > Fax: 517-432-1061 > Email: weng at cse.msu.edu > URL: http://www.cse.msu.edu/~weng/ > ---------------------------------------------- > > > [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] > > > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > > > My schedule is here: http://tinyurl.com/b7gxpwo > > Computer Science and Engineering 0404 > IF USING FED EX INCLUDE THE FOLLOWING LINE: > CSE Building, Room 4130 > University of California San Diego > 9500 Gilman Drive # 0404 > La Jolla, Ca. 92093-0404 > > > > Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln > > > "Of course, none of this will be easy. 
If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama > > > "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. > > "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. > > "Physical reality is great, but it has a lousy search function." -Matt Tong > > "Only connect!" -E.M. Forster > > "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton > > > "There is nothing objective about objective functions" - Jay McClelland > > "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." > -David Mermin > > Email: gary at ucsd.edu > Home page: http://www-cse.ucsd.edu/~gary/ > > > > -- > -- > Juyang (John) Weng, Professor > Department of Computer Science and Engineering > MSU Cognitive Science Program and MSU Neuroscience Program > 428 S Shaw Ln Rm 3115 > Michigan State University > East Lansing, MI 48824 USA > Tel: 517-353-4388 > Fax: 517-432-1061 > Email: weng at cse.msu.edu > URL: http://www.cse.msu.edu/~weng/ > ---------------------------------------------- > > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. 
> > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > > CONFIDENTIAL NOTICE: > > The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower
From bwyble at gmail.com Sat Jan 25 21:56:51 2014 From: bwyble at gmail.com (Brad Wyble) Date: Sat, 25 Jan 2014 21:56:51 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <-5739697215914102941@unknownmsgid> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <-5739697215914102941@unknownmsgid> Message-ID: > > > At the same time, I think we should welcome all different perspectives and > investigative approaches in this great enterprise. The history of science > has been cited a lot in this discussion, but one of its deepest lessons is > that we never know ahead of time what approach will be most fertile. Things > always seem inevitable, but only in retrospect. > > > Hear hear! I do sympathize with Jim in his aim to do a little pruning and thereby increase the efficiency of the scientific process given the same amount of total effort. But where to prune? From bwyble at gmail.com Sat Jan 25 21:52:33 2014 From: bwyble at gmail.com (Brad Wyble) Date: Sat, 25 Jan 2014 21:52:33 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <3861BC25-6728-46A1-9A1B-68ABF1B78AD9@uthscsa.edu> References: <470613370.1314179.1390684844273.JavaMail.root@inria.fr> <52E43E6A.2010401@thesamovar.net> <3861BC25-6728-46A1-9A1B-68ABF1B78AD9@uthscsa.edu> Message-ID: Jim, Great debate! There are several good points here.
First, I agree with you that models with tidy, analytical solutions are probably not the ultimate answer, as biology is unlikely to exhibit behavior that coincides with mathematical formalisms that are easy to represent in equations. In fact, I think that seeking such solutions can get in the way of progress in some cases. I also agree with you that community models are a good idea, and I am not advocating that everyone should build their own model. But I think that we need a hierarchy of such community models at multiple levels of abstraction, with clear ways of translating ideas and constraints from each level to the next. The goal of computational neuroscience is not to build the ultimate model, but to build a shared understanding in the minds of the entire body of neuroscientists with a minimum of communication failures. Next, I think that you're espousing a purely bottom-up approach to modelling the brain (i.e., that if we just build it, understanding will follow from the emergent dynamics). I very much admire your strong position, but I really can't agree with it. I return to the question of how we will even know what the bottom floor is in such an approach. You seem to imply in previous emails that it's a channel/cable model, but someone else might argue that we'd have to represent interactions at the atomic level to truly capture the dynamics of the circuit. So if that's the only place to start, how will we ever make serious progress? The computational capacity to simulate even a single neuron at the atomic level on a super cluster is probably a decade away. And once we'd accomplished that, someone might point out a case in which subatomic interactions play a functional role in the neuron, and then we've got to wait another 10 years to be able to model a single neuron again?
To me, it really looks like turtles all the way down, which means that we have to choose our levels of abstraction with an understanding that there are important dynamics at lower levels that will be missed. However, if we build in constraints from the behavior of the system, such abstract models can nevertheless provide a foothold for climbing a bit higher in our understanding. Is there some reason that you think channels are a sufficient level of detail? (or maybe I've mischaracterized your position) -Brad On Sat, Jan 25, 2014 at 7:09 PM, james bower wrote: > About to sign off here - as I have probably already taken too much > bandwidth. (although it has been a long time) > > But just for final clarity on the point about physics - I am not claiming > that the actual tools etc. developed by physics, mostly to study > non-biological and mostly 'simpler' systems (for example, systems where the > elements (unlike neurons) aren't 'individualized' and therefore can be > subjected to a certain amount of averaging (i.e. thermodynamics)), will apply. > > But I am suggesting (albeit in an oversimplified way) that the > transition from a largely folkloric, philosophically (religiously) driven > style of physics to the physics of today was accomplished in the 15th > century by the rejection of the curve-fitting, 'simplified' and > self-reflective Ptolemaic model of the solar system (not, it turns out, > actually for that reason, but because the Ptolemaic model had become too complex and > impure - the famous equant point). Instead, Newton, Kepler, etc. further > developed a model that actually valued the physical structure of that > system, independent of the philosophical, self-reflecting previous set of > assumptions. I know, I know that this is an oversimplified description of > what happened, but it is very likely that Newton's early (age 19) discovery > of what approximated the inverse-square law in the 'realistic model'
he had > constructed of the earth-moon system (where it was no problem and pretty > clearly evident that the moon orbited the earth in a regular way), led in > later years to his development of mechanics - which clearly provided an > important "community model" of the sort we completely lack in neuroscience > and, it seems to me, continue to try to avoid. > > I have offered for years to buy the beer at the CNS meeting if all the > laboratories describing yet another model of the hippocampus or the visual > cortex would get together to agree on a single model they would all work > on. No takers yet. The paper I linked to in my first post describes how > that has happened for the cerebellar Purkinje cell, because of GENESIS and > because we didn't block others from using the model, even to criticize us. > However, when I sent that paper recently to a computational neuroscientist > I heard was getting into Purkinje cell modeling, he wrote back to say he > was developing his own model, thank you very much. > > The proposal that we all be free to build our own models - and everyone is > welcome - is EXACTLY the wrong direction. > > We need more than calculus - and although I understand their > attractiveness, believe me, models that have closed-form > solutions are not likely to be particularly useful in biology, where the > averaging won't work in the same way. The relationship between scales is > different, lots of things are different - which means that a lot of the > tools will have to be different too. And I even agree that some of the > tools developed by engineering, where one is actually trying to make things > that work, might end up being useful, or even perhaps more useful.
> However, the transition to paradigmatic science I believe will critically > depend on the acceptance of community models (they are the "paradigm"), and > the models with the most persuasive force, as well as the ones with the > most likelihood of revealing unexpected functional relationships, are > ones that FIRST account for the structure of the brain, and SECOND are used > to explore function (rather than what is usually the other way around). > > As described in the paper I posted, that is exactly what has happened > through long hard work (since 1989) using the Purkinje cell model. > > In the end, unless you are a dualist (which I suspect many actually are, > in effect), brain computation involves nothing beyond the nervous system > and its physical and physiological structure. Therefore, that structure > will be the ultimate reference for how things really work, no matter what > level of scale you seek to describe. > > From 30 years of effort, I believe even more firmly now than I did back > then, that, like Newton and his friends, this is where we should start - > figuring out the principles and behavior from the physics of the elements > themselves. > > You can claim it is impossible - you can claim that models at other levels > of abstraction can help - however, in the end "the truth" lies in the > circuitry in all its complexity. But you can't just jump into the > complexity without a synergistic link to models that actually provide > insights at the detailed level of the data you seek to collect. > > IMHO. > > Jim > > (no ps) > > On Jan 25, 2014, at 4:44 PM, Dan Goodman > wrote: > The comparison with physics is an interesting one, but we have to remember > that neuroscience isn't physics. For a start, neuroscience is clearly much > harder than physics in many ways. Linear and separable phenomena are much > harder to find in neuroscience, and so both analysing and modelling data is > much more difficult.
Experimentally, it is much more difficult to control > for independent variables, in addition to the difficulty of working with > living animals. > > So although we might be able to learn things from the history of physics - > and I tend to agree with Axel Hutt that one of those lessons is to use the > simplest possible model rather than trying to include all the biophysical > details we know to exist - while neuroscience is in its pre-paradigmatic > phase (agreed with Jim Bower on this) I would say we need to try a diverse > set of methodological approaches and see what wins. In terms of funding > agencies, I think the best thing they could do would be to not insist on > any one methodological approach to the exclusion of others. > > I also share doubts about the idea that if we collect enough data then > interesting results will just pop out. On the other hand, there are some > valid hypotheses about brain function that require the collection of large > amounts of data. Personally, I think that we need to understand the > coordinated behaviour of many neurons to understand how information is > encoded and processed in the brain. At present, it's hard to look at enough > neurons simultaneously to be very sure of finding this sort of coordinated > activity, and this is one of the things that the HBP and BRAIN initiative > are aiming at. > > Dan > > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower
-- Brad Wyble Assistant Professor Psychology Department Penn State University http://wyblelab.com From brian.mingus at colorado.edu Sat Jan 25 23:23:28 2014 From: brian.mingus at colorado.edu (Brian J Mingus) Date: Sat, 25 Jan 2014 21:23:28 -0700 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <470613370.1314179.1390684844273.JavaMail.root@inria.fr> <52E43E6A.2010401@thesamovar.net> <3861BC25-6728-46A1-9A1B-68ABF1B78AD9@uthscsa.edu> Message-ID: Hi Brad et al. - thanks very much for this fun and entertaining philosophical discussion :) With regard to turtles all the way down, and also to choosing the appropriate level of analysis for modeling, I'd like to reiterate a position I stated earlier but didn't really flesh out in enough detail. There exists a formalization of Ockham's razor in a field called Algorithmic Information Theory, and this formalization is the Minimum Description Length (MDL). This perspective essentially says that we are searching for the optimal compression of all of the data relating to the brain.
This means that we don't want to overcompress relevant distinctions, but we don't want to undercompress redundancies. This optimal compression, when represented as a computer program that outputs all of the brain data (aka a model), has a description length known as the Kolmogorov complexity. Now there is something weird about what I have just described, which is that the resulting model will produce not just the data for a single brain, but the data for *every* brain - a kind of meta-brain. And this is not quite what we are looking for. And due to the turtles problem it is probably ill-posed, in that the length of the description may be infinite as we zoom in to finer levels of detail. So we need to provide some relevant constraints on the problem to make it tractable. Based on what I just described, the MDL for your brain *is* your brain. This is essentially because we haven't defined a utility function, and we haven't done that because we aren't quite sure what exactly it is we are doing, or what we are looking for, when modeling the brain. To begin fixing this problem, we can rotate this perspective into a tool that we are all probably familiar with - factor analysis, i.e., PCA. What we are essentially looking for, first and foremost, is a model that explains the first principal component of just one person's comprehensive brain dataset (which includes behavioral data). Then we want to study this component (which is tantamount to a model of the brain) and see what it can do. What will this first principal component look like? Now we need to define what exactly it is that we are after. I would argue that our model should be composed of neuron-like elements connected in networks, and that when we look at the statistical properties of these networks, they should be quite similar to what we see in humans. Most importantly, however, I would argue that this model, when raised as a human, should exhibit some distinctly human traits.
It should not pass a trivial Turing test, but rather a deep Turing test. After having been raised as and with human beings, but not exposed to any substantial philosophy, this model should independently invent consciousness philosophy. As you might imagine, our abstract high-level model brain which captures the first principal component of the brain data might not be able to do this. Thus, we will start adding in more components that explain more of the variance, iteratively increasing our description length. This is a distinctly top-down approach, in which we only add relevant detail as it becomes obvious that the current model just isn't quite human. This approach follows a scientific gradient advocated for by Ockham's razor, in that we start with the simplest description (brain model) that explains the most variance, and gradually increase the size of the description until it finally reinvents consciousness philosophy and can live among humans. In my admittedly biased experience, the first appropriate level of analysis is approximately point-neuron deep neural network architectures. However, this might actually be too low-level - we might want to start with even more abstract, modern-day NIPS-level models, and confirm that, although they can behave like humans, they can't reinvent consciousness philosophy and are thus more akin to zombie-like automata. Of course, with sufficient computing power our modeling approach can be somewhat more sloppy - we can begin experimenting with the synthesis of different levels of analysis right away. However, before we do any of this "for real" we probably want to comprehensively discuss the ethics of raising beings that are ultimately similar to humans, but are not quite human, and further, the ethics of raising digital humans. Lastly, to touch back to the original topic - Big Data - I think it's clear that the more data we have, the merrier. However, it also makes sense to follow the Ockham gradient.
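[Editor's aside: the two-part-code idea behind MDL, as described above, can be sketched in a few lines. This toy example is not from the thread; the function names and the BIC-style 0.5*log2(n) per-parameter cost are illustrative assumptions. Each candidate model is charged the bits needed to state its parameters plus the bits needed to encode the data given the model, and the smaller total wins.]

```python
import math

def bernoulli_dl(bits):
    # iid Bernoulli model: ~0.5*log2(n) bits to state its one parameter,
    # plus n * H(p) bits to encode the data given that parameter.
    n = len(bits)
    p = bits.count("1") / n
    h = 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return 0.5 * math.log2(n) + n * h

def markov_dl(bits):
    # First-order Markov model: two parameters, P(1|0) and P(1|1), so
    # 2 * 0.5*log2(n) model bits; 1 bit for the first symbol; then the
    # empirical conditional entropy of each transition for the rest.
    n = len(bits)
    total = 1.0  # first symbol
    for prev in "01":
        follows = [bits[i + 1] for i in range(n - 1) if bits[i] == prev]
        m = len(follows)
        if m == 0:
            continue
        q = follows.count("1") / m
        h = 0.0 if q in (0.0, 1.0) else -(q * math.log2(q) + (1 - q) * math.log2(1 - q))
        total += m * h
    return 2 * 0.5 * math.log2(n) + total

structured = "01" * 50  # perfectly alternating bits
print(round(bernoulli_dl(structured), 1))  # 103.3 -- alternation looks like noise to an iid model
print(round(markov_dl(structured), 1))     # 7.6 -- the extra parameter pays for itself
```

On the alternating string the Markov model's extra parameter buys a ~96-bit saving, which is the MDL sense in which added model detail is "relevant"; on genuinely random bits the parameter cost would typically outweigh the gain, and the simpler model would be kept.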
Ultimately, we are really just not as close to creating a human being as it may seem, and so it is probably safe, for the time being, to collect data from all levels of analysis willy nilly. However, when it comes time to actually build the human, we should be more careful, for the sake of the being we create. Indeed, perhaps we should be *sure* that it will reinvent consciousness philosophy before we ever turn it on in the first place. If anyone has an idea of how to do that, I would be extremely interested to hear about it. Brian Mingus Graduate student Department of Psychology and Neuroscience University of Colorado at Boulder http://grey.colorado.edu/mingus
>> >> >> >> > > > -- > Brad Wyble > Assistant Professor > Psychology Department > Penn State University > > http://wyblelab.com From weng at cse.msu.edu Sun Jan 26 01:27:10 2014 From: weng at cse.msu.edu (Juyang Weng) Date: Sun, 26 Jan 2014 01:27:10 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: <52E4AABE.4070505@cse.msu.edu> I enjoyed many of your views, including those of Jim Bower, Richard Loosemore, Ali Minai, and Thomas Trappenberg. Let me give a humble suggestion: when you hear a detailed view that looks "polarizing", do not get discouraged; your emotions may be misleading you. Find out whether the detail fits many brain functions. Big data and brain-like computing will fail if such a project is without the guidance of a brain-scale model. Note: vague statements are not very useful here, as everybody can give many. A brain-scale model must be very detailed computationally. The more detailed, the more brain-scale functions it covers, and the fewer mechanisms it uses, the better. For example, the Spaun model claimed to be "the world's largest functional brain model". With great respect, I congratulate its appearance in Science 2012. But, unfortunately, Science editors are not in a position to judge how close a brain model is to the real brain. I understand that no model of the brain is perfect and every model is an approximation of nature.
From my brain-scale model, I think that a minimal requirement for a reviewer of a brain model is formal training (e.g., a 3-credit course, with all its exams passed) in all of the following: (1) computer vision, (2) artificial intelligence, (3) automata theory and computational complexity, (4) electrical engineering, such as signals and systems, control, conventional neural networks, (5) biology, (6) neuroscience, (7) cognitive science, such as learning and memory, human vision systems, and developmental psychology, (8) mathematics, such as linear algebra, probability, statistics, and optimization theory. If you do not have some of the above, take such courses as soon as possible. BMI summer courses 2014 will offer some. If you have taken all the above courses, you will know that the Spaun model is grossly wrong (and, with respect, the deep learning net of Geoffrey Hinton for the same reason). Why? I will just give the first mechanism that every brain must have and thus every brain model must have: learning and recognizing unknown objects FROM unknown cluttered backgrounds and producing desired behaviors. Note: not just recognizing but learning; not a single object in a clean background, as Spaun demonstrated, but simultaneous multiple objects in cluttered backgrounds. No objects can be pre-segmented from the cluttered background during learning. That is how a baby learns. None of the tasks that Spaun did includes a cluttered background, let alone learning directly from cluttered scenes. Attention is the first basic mechanism of the brain, learned from infancy, not recognizing a pattern in a clean background. Autonomously learning attention is the single most important mechanism for Big Data and Brain-Like Computing! How? Read How the Brain-Mind Works: A Two-Page Introduction to a Theory. -John On 1/24/14 9:03 PM, Thomas Trappenberg wrote: > Thanks John for starting a discussion ... I think we need some.
What I > liked most about your original post was asking about "What are the > underlying principles?" Let's make a list. > Of course, there are so many levels of organization and mechanisms in > the brain that we might speak about different things; but getting > different views would be fun and I think very useful (without the need > to offer the only and ultimate one). > > Cheers, Thomas Trappenberg > > > PS: John, I thought you started a good discussion before, but I got > discouraged by your polarizing views. I think a lot of us can relate > to you, but how about letting others come forward now? > > > > On Fri, Jan 24, 2014 at 9:02 PM, Ivan Raikov > wrote: > > > I think perhaps the objection to the Big Data approach is that it > is applied to the exclusion of all other modelling approaches. > While it is true that complete and detailed understanding of > neurophysiology and anatomy is at the heart of neuroscience, a lot > can be learned about signal propagation in excitable branching > structures using statistical physics, and a lot can be learned > about information representation and transmission in the brain > using mathematical theories about distributed communicating > processes. As these modelling approaches have been successfully > used in various areas of science, wouldn't you agree that they can > also be used to understand at least some of the fundamental > properties of brain structures and processes? > > -Ivan Raikov > > On Sat, Jan 25, 2014 at 8:31 AM, james bower > wrote: > > [snip] > > An enormous amount of engineering and neuroscience continues > to think that the feedforward pathway is from the sensors to > the inside - rather than seeing this as the actual feedback > loop. Might to some sound like a semantic quibble, but I > assure you it is not.
> > If you believe as I do, that the brain solves very hard > problems, in very sophisticated ways, that involve, in some > sense the construction of complex models about the world and > how it operates in the world, and that those models are > manifest in the complex architecture of the brain - then > simplified solutions are missing the point. > > What that means inevitably, in my view, is that the only way > we will ever understand what brain-like is, is to pay > tremendous attention experimentally and in our models to the > actual detailed anatomy and physiology of the brains circuits > and cells. > > -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: BMM-V2-N2-a1-HowBrainMind-i.jpg Type: image/jpeg Size: 696158 bytes Desc: not available URL: From mo2259 at columbia.edu Sun Jan 26 10:25:53 2014 From: mo2259 at columbia.edu (Mark Orr) Date: Sun, 26 Jan 2014 10:25:53 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <470613370.1314179.1390684844273.JavaMail.root@inria.fr> <52E43E6A.2010401@thesamovar.net> <3861BC25-6728-46A1-9A1B-68ABF1B78AD9@uthscsa.edu> Message-ID: Brad, I see your point. In 1962, Herb Simon wrote about levels of analysis and dynamics at each level, and importantly how to think about when it is/is not safe to "ignore" dynamics at some levels: He called it nearly decomposable systems. This is not a neuro/cognitive issue, it is an issue relevant to understanding complex systems in general. Mark Simon, H. A. (1962). 
The architecture of complexity. Proceedings of the American Philosophical Society, 106(6), 467-482. On Jan 25, 2014, at 9:52 PM, Brad Wyble wrote: > Jim, > > Great debate! There are several good points here. > > First, I agree with you that models with tidy, analytical solutions are probably not the ultimate answer, as biology is unlikely to exhibit behavior that coincides with mathematical formalisms that are easy to represent in equations. In fact, I think that seeking such solutions can get in the way of progress in some cases. > > I also agree with you that community models are a good idea, and I am not advocating that everyone should build their own model. But I think that we need a hierarchy of such community models at multiple levels of abstraction, with clear ways of translating ideas and constraints from each level to the next. The goal of computational neuroscience is not to build the ultimate model, but to build a shared understanding in the minds of the entire body of neuroscientists with a minimum of communication failures. > > Next, I think that you're espousing a purely bottom-up approach to modelling the brain (i.e., that if we just build it, understanding will follow from the emergent dynamics). I very much admire your strong position, but I really can't agree with it. I return to the question of how we will even know what the bottom floor is in such an approach. You seem to imply in previous emails that it's a channel/cable model, but someone else might argue that we'd have to represent interactions at the atomic level to truly capture the dynamics of the circuit. So if that's the only place to start, how will we ever make serious progress? The computational requirements to simulate even a single neuron at the atomic level on a super cluster are probably a decade away.
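Simon's "nearly decomposable systems" can be given a small numerical sketch (my illustration, not from the thread, with made-up numbers): a coupling matrix with strong within-module interactions and epsilon-weak cross-module links behaves, to first order, like its fully decomposed block-diagonal approximation, which is what licenses ignoring lower-level dynamics at some scales.

```python
import numpy as np

# Two tightly coupled 3-unit modules with weak cross-links: a toy
# "nearly decomposable" interaction matrix in Simon's sense.
eps = 0.01
block = np.array([[0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0]])
coupling = np.block([[block, eps * np.ones((3, 3))],
                     [eps * np.ones((3, 3)), block]])

# Fully decomposed approximation: cross-module links zeroed out.
decomposed = np.block([[block, np.zeros((3, 3))],
                       [np.zeros((3, 3)), block]])

# By Weyl's inequality, eigenvalues shift by at most the norm of the
# perturbation (~3 * eps here), so the decomposed system is a good
# stand-in for the coupled one.
gap = np.max(np.abs(np.sort(np.linalg.eigvalsh(coupling))
                    - np.sort(np.linalg.eigvalsh(decomposed))))
print(round(float(gap), 4))
```

As eps grows, the gap grows and the decomposition stops being safe, which is exactly the judgment call about levels of abstraction being debated here.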
And once we'd accomplished that, someone might point out a case in which subatomic interactions play a functional role in the neuron and then we've got to wait another 10 years to be able to model a single neuron again? > > To me, it really looks like turtles all the way down, which means that we have to choose our levels of abstraction with an understanding that there are important dynamics at lower levels that will be missed. However, if we build in constraints from the behavior of the system, such abstract models can nevertheless provide a foothold for climbing a bit higher in our understanding. > > Is there some reason that you think channels are a sufficient level of detail? (or maybe I've mischaracterized your position) > > -Brad > > > > > > On Sat, Jan 25, 2014 at 7:09 PM, james bower wrote: > About to sign off here - as have probably already taken too much bandwidth. (although it has been a long time) > > But just for final clarity on the point about physics - I am not claiming that the actual tools etc, developed by physics mostly to study non-biological and mostly "simpler" systems (for example, systems where the elements (unlike neurons) aren't "individualized" and can therefore be subjected to a certain amount of averaging, i.e. thermodynamics), will apply. > > But I am suggesting (albeit in an oversimplified way) that the transition from a largely folkloric, philosophically (religiously) driven style of physics, to the physics of today was accomplished in the 16th and 17th centuries by the rejection of the curve fitting, "simplified" and self reflective Ptolemaic model of the solar system. (not actually, it turns out, for that reason, but because the Ptolemaic model had become too complex and impure - the famous equant point). Instead, Newton, Kepler, etc, further developed a model that actually valued the physical structure of that system, independent of the philosophical, self reflecting previous set of assumptions.
I know, I know that this is an oversimplified description of what happened, but it is very likely that Newton's early (age 19) discovery of what approximated the inverse square law in the "realistic model" he had constructed of the earth-moon system (where it was no problem and pretty clearly evident that the moon orbited the earth in a regular way), led in later years to his development of mechanics - which clearly provided an important "community model" of the sort we completely lack in neuroscience and, it seems to me, continue to try to avoid. > > I have offered for years to buy the beer at the CNS meeting if all the laboratories describing yet another model of the hippocampus or the visual cortex would get together to agree on a single model they would all work on. No takers yet. The paper I linked to in my first post describes how that has happened for the cerebellar Purkinje cell, because of GENESIS and because we didn't block others from using the model, even to criticize us. However, when I sent that paper recently to a computational neuroscientist I heard was getting into Purkinje cell modeling, he wrote back to say he was developing his own model, thank you very much. > > The proposal that we all be free to build our own models - and everyone is welcome, is EXACTLY the wrong direction. > > We need more than calculus - and although I understand their attractiveness, believe me, models that can be solved in closed-form solutions are not likely to be particularly useful in biology, where the averaging won't work in the same way. The relationship between scales is different, lots of things are different - which means a lot of the tools will have to be different too. And I even agree that some of the tools developed by engineering, where one is actually trying to make things that work, might end up being useful, or even perhaps more useful.
However, the transition to paradigmatic science I believe will critically depend on the acceptance of community models (they are the "paradigm"), and the models most likely to carry persuasive force, as well as the ones most likely to reveal unexpected functional relationships, are ones that FIRST account for the structure of the brain, and SECOND are used to explore function (rather than what is usually the other way around). > > As described in the paper I posted, that is exactly what has happened through long hard work (since 1989) using the Purkinje cell model. > > In the end, unless you are a dualist (which I suspect many actually are, in effect), brain computation involves nothing beyond the nervous system and its physical and physiological structure. Therefore, that structure will be the ultimate reference for how things really work, no matter what level of scale you seek to describe. > > From 30 years of effort, I believe even more firmly now than I did back then, that, like Newton and his friends, this is where we should start - figuring out the principles and behavior from the physics of the elements themselves. > > You can claim it is impossible - you can claim that models at other levels of abstraction can help, however, in the end "the truth" lies in the circuitry in all its complexity. But you can't just jump into the complexity, without a synergistic link to models that actually provide insights at the detailed level of the data you seek to collect. > > IMHO. > > Jim > > (no ps) > > On Jan 25, 2014, at 4:44 PM, Dan Goodman wrote: > >> [snip] >> >> Dan > > [snip] > > > -- > Brad Wyble > Assistant Professor > Psychology Department > Penn State University > > http://wyblelab.com From gary at eng.ucsd.edu Sun Jan 26 13:41:20 2014 From: gary at eng.ucsd.edu (Gary Cottrell) Date: Sun, 26 Jan 2014 19:41:20 +0100 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <9A015446-A75F-4D3C-B40B-A7EEDBAAFB96@uthscsa.edu> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1390664611.3420.15.camel@sam> <7C477036-B425-485D-998E-B7BB1CF34376@uthscsa.edu> <52E41FA7.5080408@susaro.com> <9A015446-A75F-4D3C-B40B-A7EEDBAAFB96@uthscsa.edu> Message-ID: > On Jan 25, 2014, at 11:29 PM, james bower wrote: > > I don't know if they taped and put on the web the big talk about it at the neuroscience meeting - but take a look in case you wonder. (actually, I only lasted about 20 minutes myself).
it's here: http://www.sfn.org/annual-meeting/neuroscience-2013/abstracts-and-sessions/public-events/specialpresentationvideo [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] Gary Cottrell 858-534-6640 FAX: 858-534-7029 My schedule is here: http://tinyurl.com/b7gxpwo Computer Science and Engineering 0404 IF USING FED EX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. "Physical reality is great, but it has a lousy search function." -Matt Tong "Only connect!" -E.M. Forster "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton "There is nothing objective about objective functions" - Jay McClelland "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ala at csc.kth.se Sun Jan 26 13:25:00 2014 From: ala at csc.kth.se (Anders Lansner) Date: Sun, 26 Jan 2014 19:25:00 +0100 Subject: Connectionists: Workshop Progress in Brain-Like Computing, February 5-6 2014 In-Reply-To: <52E15A9E.5080207@cse.msu.edu> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> Message-ID: <013a01cf1ac3$f058f3a0$d10adae0$@csc.kth.se> Dear John and all, I was not aware until this morning that my simple announcement of the workshop on "Progress in Brain-Like Computing" at KTH Royal Institute of Technology in Stockholm next week had stirred up such a vivid discussion on the list. I was on a conference trip to Singapore with only occasional web access and got back only yesterday. Allow me some comments and reflections. I agree with some of the original points made by John Weng that we need brain-scale theories in order to make real progress in brain-like computing, and with what the focus should be. Indeed, I think we see some, maybe vague, contours of partial theories that wait to be integrated into a more complete understanding. Since the terminology of "brain-like" was criticized from different perspectives, allow me to motivate why we use this term. We could have stated "neuromorphic", but in my opinion this term leads the thoughts a bit too much towards microscopic and microcircuit levels. After all, real brains not only have very many neurons and synapses but also a very complex structure in terms of specialized neural populations and projections connecting them. We have today chips and clusters that are able to simulate such multi-neural-network structures with reasonable throughput (if not with too complex components), so we can at least computationally handle this level, rather than staying with small simple networks. Personally, I think that to understand principles of brain function we need to avoid a lot, but not all, of the complexity we see at the single neuron and synapse levels.
I also prefer the term "brain-like" rather than "brain-inspired", since the former defines the goal of building computational structures that are like brains, and not just to start there and then perhaps quickly, in all our inspiration, diverge away from mimicking the essential aspects of real brains. It is interesting to note that the subject of the discussion quickly deviated from the main content of our workshop, which has to do with designing and eventually building brain-like computational architectures in silicon - or some more exotic substrate. Such research has been going on for a long time and is now seeing increasing efforts. It can obviously be argued whether this is still premature or if it is now finally the right time to boost such efforts, despite the fact that our knowledge about the brain is still not complete. What also strikes me when I read this discussion is that we are still quite a divided and diverse community with minor consensus. There are many who think we are many decades away from doing the above, many who study abstract computational "deep learning" network models for classification and prediction without bothering much about the biology, many who study experimentally or model brain circuits without focusing much on what functions they perform, and many who design hardware without knowing exactly what features to include, etc. But I am optimistic! Perhaps, in the near future, these efforts will combine synergistically and the pieces of the puzzle will start falling in place, triggering a series of real breakthroughs in our understanding of how our brain works. To identify at what point in time and what stage in brain science this will happen is indeed critical. Then, those who have the best understanding of how to design the hardware appropriate for executing this integrated set of brain-like algorithms in real time or faster, in a low-power way, will be in an excellent position for exploiting such progress in many important applications -
hopefully beneficial for mankind! This is some of the background for organizing the event I announced, which will hopefully contribute something to the further discussion on these very important topics. /Anders La From: Juyang Weng [mailto:weng at cse.msu.edu] Sent: 23 January 2014 19:09 To: Anders Lansner Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Workshop Progress in Brain-Like Computing, February 5-6 2014 Dear Anders, Interesting topic about the brain! But Brain-Like Computing is misleading because neural networks have been around for at least 70 years. I quote: "We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing." What are the underlying principles? I am concerned that projects like "Brain-Like Computing" avoid essential issues: the wide gap between neuron-like computing and well-known highly integrated brain functions. Continuing this avoidance would again create bad names for "brain-like computing", just as such behaviors did for "neural networks". Henry Markram criticized IBM's brain project, which does miss essential brain principles, but has he published such principles? Will modeling individual neurons more and more precisely explain highly integrated brain functions? From what I know, definitely not, by far. Has any of your 10 speakers published any brain-scale theory that bridges the wide gap? Are you aware of any such published theories? I am sorry for giving a CC to the list, but many on the list said that they like to hear discussions instead of just event announcements. -John On 1/13/14 12:14 PM, Anders Lansner wrote: Workshop on Brain-Like Computing, February 5-6 2014 The exciting prospect of developing brain-like information processing is one of the Deans Forum focus areas. As a means to encourage progress in this research area, a Workshop is arranged February 5th-6th 2014 on KTH campus in Stockholm.
The human brain excels over contemporary computers and robots in processing real-time unstructured information and uncertain data, as well as in controlling a complex mechanical platform with multiple degrees of freedom like the human body. Intense experimental research, complemented by computational and informatics efforts, is gradually increasing our understanding of underlying processes and mechanisms in small animal and mammalian brains and is beginning to shed light on the human brain. We are now approaching the point when our knowledge will enable successful demonstrations of some of the underlying principles in software and hardware, i.e. brain-like computing. This workshop assembles experts, from the partners and also other leading names in the field, to provide an overview of the state-of-the-art in theoretical, software, and hardware aspects of brain-like computing. List of speakers Speaker Affiliation Giacomo Indiveri ETH Zürich Abigail Morrison Forschungszentrum Jülich Mark Ritter IBM Watson Research Center Guillermo Cecchi IBM Watson Research Center Anders Lansner KTH Royal Institute of Technology Ahmed Hemani KTH Royal Institute of Technology Steve Furber University of Manchester Kazuyuki Aihara University of Tokyo Karlheinz Meier Heidelberg University Andreas Schierwagen Leipzig University To sign up for the Workshop, please use the registration form found at http://bit.ly/1dkuBgR You need to sign up before January 28th. Web page: http://www.kth.se/en/om/internationellt/university-networks/deans-forum/workshop-on-brain-like-computing-1.442038 ****************************************** Anders Lansner Professor in Computer Science, Computational biology School of Computer Science and Communication Stockholm University and Royal Institute of Technology (KTH) ala at kth.se, +46-70-2166122
-- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- From tgd at eecs.oregonstate.edu Sun Jan 26 15:11:07 2014 From: tgd at eecs.oregonstate.edu (Thomas G. Dietterich) Date: Sun, 26 Jan 2014 12:11:07 -0800 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <470613370.1314179.1390684844273.JavaMail.root@inria.fr> <52E43E6A.2010401@thesamovar.net> <3861BC25-6728-46A1-9A1B-68ABF1B78AD9@uthscsa.edu> Message-ID: <001f01cf1ad2$c0313b60$4093b220$@eecs.oregonstate.edu> Dear Brian, Please keep in mind that MDL, Ockham's razor, PCA, and similar regularization approaches focus on the problem of *prediction* (or, equivalently, compression). Given a fixed amount of data and a flexible class of models, these principles tell us how to modulate the expressiveness of the model to maximize predictive accuracy. I would characterize it as follows: "Which deliberately incorrect model should we adopt in order to optimize predictive accuracy?" One stance toward creating an AI system is to pursue this purely functional approach and model a person as an input-output mapping (with latent state variables, as appropriate). Such an approach might be very useful both for engineering and for science. From a scientific perspective, it would tell us that if we build a system with certain properties, it can exhibit this input-output behavior. But it would not be a satisfactory theory of neuroscience for two reasons.
First, it only provides sufficient conditions but does not show they are necessary. There might be other ways of producing the behavior, and the brain might implement one of those instead. Second, even if it could be made into a necessary and sufficient condition (e.g., by proving that all systems lacking certain properties would NOT exhibit the desired behavior), it would still not explain how the chemistry and biology of the brain produces the required properties. To fall back on the old bird vs. airplane analogy, the accomplishments of the Wright brothers (and the field of aerodynamics) provided a theory of how flight could be achieved. But we are still learning at the biological level how birds actually do it. -- Thomas G. Dietterich, Distinguished Professor Voice: 541-737-5559 School of Electrical Engineering FAX: 541-737-1300 and Computer Science URL: eecs.oregonstate.edu/~tgd US Mail: 1148 Kelley Engineering Center Office: 2067 Kelley Engineering Center Oregon State Univ., Corvallis, OR 97331-5501 From: Connectionists [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Brian J Mingus Sent: Saturday, January 25, 2014 8:23 PM To: Brad Wyble Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare Hi Brad et al., - thanks very much for this fun and entertaining philosophical discussion:) With regards to turtles all the way down, and also with regards to choosing the appropriate level of analysis for modeling, I'd like to reiterate a position I made earlier but in which I didn't really provide enough detail. There exists a formalization of Ockham's razor in a field called Algorithmic Information Theory, and this formalization is the Minimum Description Length (MDL). This perspective essentially says that we are searching for the optimal compression of all of the data relating to the brain. 
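As an aside, the "prediction as compression" idea discussed here can be made concrete with a toy example (my sketch, not part of either email): choose a polynomial degree by minimizing a two-part description length - bits to encode the parameters plus bits to encode the residuals - a crude BIC-style stand-in for MDL. The data-generating model and all numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(-1.0, 1.0, n)
# Hidden "truth" is quadratic; the selection rule should recover degree 2.
y = 1.0 + 2.0 * x + 3.0 * x**2 + 0.5 * rng.standard_normal(n)

def description_length(deg):
    # Two-part code: cost of the k = deg+1 parameters plus cost of the
    # residuals, in the BIC approximation (k/2 * log n + n/2 * log(RSS/n)).
    coeffs = np.polyfit(x, y, deg)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = deg + 1
    return 0.5 * k * np.log(n) + 0.5 * n * np.log(rss / n)

best_deg = min(range(9), key=description_length)
print(best_deg)
```

Higher degrees always shrink the residuals a little, but the parameter cost grows; the minimum of the total code length is the "deliberately incorrect" model Dietterich describes.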
This means that we don't want to overcompress relevant distinctions, but we don't want to undercompress redundancies. This optimal compression, when represented as a computer program that outputs all of the brain data (aka a model), has a description length known as the Kolmogorov complexity. Now there is something weird about what I have just described, which is that the resulting model will produce not just the data for a single brain, but the data for every brain - a kind of meta-brain. And this is not quite what we are looking for. And due to the turtles problem it is probably ill-posed, in that the length of the description may be infinite as we zoom in to finer levels of detail. So we need to provide some relevant constraints on the problem to make it tractable. Based on what I just described, the MDL for your brain is your brain. This is essentially because we haven't defined a utility function, and we haven't done that because we aren't quite sure what exactly it is we are doing, or what we are looking for, when modeling the brain. To begin fixing this problem, we can rotate this perspective into a tool that we are all probably familiar with - factor analysis, i.e., PCA. What we are essentially looking for, first and foremost, is a model that explains the first principal component of just one person's comprehensive brain dataset (which includes behavioral data). Then we want to study this component (which is tantamount to a model of the brain) and see what it can do. What will this first principal component look like? Now we need to define what exactly it is that we are after. I would argue that our model should be composed of neuron-like elements connected in networks, and that when we look at the statistical properties of these networks, they should be quite similar to what we see in humans. Most importantly, however, I would argue that this model, when raised as a human, should exhibit some distinctly human traits.
It should not pass a trivial Turing test, but rather a deep Turing test. After having been raised as and with human beings, but not exposed to any substantial philosophy, this model should independently invent consciousness philosophy. As you might imagine, our abstract high level model brain which captures the first principal component of the brain data might not be able to do this. Thus, we will start adding in more components that explain more of the variance, iteratively increasing our description length. This is a distinctly top-down approach, in which we only add relevant detail as it becomes obvious that the current model just isn't quite human. This approach follows a scientific gradient advocated for by Ockham's razor, in that we start with the simplest description (brain model) that explains the most variance, and gradually increase the size of the description until it finally reinvents consciousness philosophy and can live among humans. In my admittedly biased experience, the first appropriate level of analysis is approximately point-neuron deep neural network architectures. However, this might actually be too low level - we might want to start with even more abstract, modern day NIPS-level models, and confirm that, although they can behave like humans, they can't reinvent consciousness philosophy and are thus more akin to zombie-like automata. Of course, with sufficient computing power our modeling approach can be somewhat more sloppy - we can begin experimenting with the synthesis of different levels of analysis right away. However, before we do any of this "for real" we probably want to comprehensively discuss the ethics of raising beings that are ultimately similar to humans, but are not quite human, and further, the ethics of raising digital humans. Lastly, to touch back to the original topic - Big Data - I think it's clear that the more data we have, the merrier. However, it also makes sense to follow the Ockham gradient.
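The "Ockham gradient" described above - keep the components that explain the most variance, and add detail only as the simpler model proves inadequate - can be sketched with PCA on synthetic data. (The dataset, the three latent factors, and the 95% variance threshold below are illustrative assumptions, not part of the discussion.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "brain dataset": 200 observations of 50 variables,
# driven by 3 latent factors plus a little measurement noise.
latents = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
data = latents @ mixing + 0.1 * rng.normal(size=(200, 50))

# PCA via SVD of the centered data matrix.
centered = data - data.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)  # fraction of variance per component

# Ockham gradient: take components in order of explained variance,
# stopping once 95% of the variance is accounted for.
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
print(k)  # with 3 dominant latent factors, a handful of components suffices
```

The same greedy logic - simplest description first, complexity added only when the residual demands it - is what the proposal applies to brain models, with "reinvents consciousness philosophy" in place of a variance threshold.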
Ultimately, we are really just not as close to creating a human being as it may seem, and so it is probably safe, for the time being, to collect data from all levels of analysis willy-nilly. However, when it comes time to actually build the human, we should be more careful, for the sake of the being we create. Indeed, perhaps we should be sure that it will reinvent consciousness philosophy before we ever turn it on in the first place. If anyone has an idea of how to do that, I would be extremely interested to hear about it. Brian Mingus Graduate student Department of Psychology and Neuroscience University of Colorado at Boulder http://grey.colorado.edu/mingus On Sat, Jan 25, 2014 at 7:52 PM, Brad Wyble wrote: Jim, Great debate! There are several good points here. First, I agree with you that models with tidy, analytical solutions are probably not the ultimate answer, as biology is unlikely to exhibit behavior that coincides with mathematical formalisms that are easy to represent in equations. In fact, I think that seeking such solutions can get in the way of progress in some cases. I also agree with you that community models are a good idea, and I am not advocating that everyone should build their own model. But I think that we need a hierarchy of such community models at multiple levels of abstraction, with clear ways of translating ideas and constraints from each level to the next. The goal of computational neuroscience is not to build the ultimate model, but to build a shared understanding in the minds of the entire body of neuroscientists with a minimum of communication failures. Next, I think that you're espousing a purely bottom-up approach to modelling the brain (i.e., that if we just build it, understanding will follow from the emergent dynamics). I very much admire your strong position, but I really can't agree with it.
I return to the question of how we will even know what the bottom floor is in such an approach. You seem to imply in previous emails that it's a channel/cable model, but someone else might argue that we'd have to represent interactions at the atomic level to truly capture the dynamics of the circuit. So if that's the only place to start, how will we ever make serious progress? The computational requirements to simulate even a single neuron at the atomic level on a super cluster are probably a decade away. And once we'd accomplished that, someone might point out a case in which subatomic interactions play a functional role in the neuron and then we've got to wait another 10 years to be able to model a single neuron again? To me, it really looks like turtles all the way down, which means that we have to choose our levels of abstraction with an understanding that there are important dynamics at lower levels that will be missed. However, if we build in constraints from the behavior of the system, such abstract models can nevertheless provide a foothold for climbing a bit higher in our understanding. Is there some reason that you think channels are a sufficient level of detail? (or maybe I've mischaracterized your position) -Brad On Sat, Jan 25, 2014 at 7:09 PM, james bower wrote: About to sign off here - as have probably already taken too much bandwidth. (although it has been a long time) But just for final clarity on the point about physics - I am not claiming that the actual tools, etc., developed by physics mostly to study non-biological and mostly 'simpler' systems (for example, systems where the elements, unlike neurons, aren't 'individualized' and can therefore be subjected to a certain amount of averaging, i.e., thermodynamics), will apply.
But I am suggesting (albeit in an oversimplified way) that the transition from a largely folkloric, philosophically (religiously) driven style of physics, to the physics of today was accomplished in the 16th and 17th centuries by the rejection of the curve fitting, 'simplified' and self-reflective Ptolemaic model of the solar system. (Not, it turns out, actually for that reason, but because the Ptolemaic model had become too complex and impure - the famous equant point.) Instead, Newton, Kepler, etc., further developed a model that actually valued the physical structure of that system, independent of the philosophical, self-reflecting previous set of assumptions. I know, I know that this is an oversimplified description of what happened, but it is very likely that Newton's early (age 19) discovery of what approximated the inverse square law in the 'realistic model' he had constructed of the earth-moon system (where it was no problem and pretty clearly evident that the moon orbited the earth in a regular way), led in later years to his development of mechanics - which clearly provided an important "community model" of the sort we completely lack in neuroscience and seem to me continue to try to avoid. I have offered for years to buy the beer at the CNS meeting if all the laboratories describing yet another model of the hippocampus or the visual cortex would get together to agree on a single model they would all work on. No takers yet. The paper I linked to in my first post describes how that has happened for the Cerebellar Purkinje cell, because of GENESIS and because we didn't block others from using the model, even to criticize us. However, when I sent that paper recently to a computational neuroscientist I heard was getting into Purkinje cell modeling, he wrote back to say he was developing his own model, thank you very much. The proposal that we all be free to build our own models - and everyone is welcome, is EXACTLY the wrong direction.
We need more than calculus - and although I understand their attractiveness, believe me, models that have closed-form solutions are not likely to be particularly useful in biology, where the averaging won't work in the same way. The relationship between scales is different, lots of things are different - which means that a lot of the tools will have to be different too. And I even agree that some of the tools developed by engineering, where one is actually trying to make things that work, might end up being useful, or even perhaps more useful. However, the transition to paradigmatic science I believe will critically depend on the acceptance of community models (they are the 'paradigm'), and the models with the most persuasive force, as well as the greatest likelihood of revealing unexpected functional relationships, are ones that FIRST account for the structure of the brain, and SECOND are used to explore function (rather than what is usually the other way around). As described in the paper I posted, that is exactly what has happened through long hard work (since 1989) using the Purkinje cell model. In the end, unless you are a dualist (which I suspect many actually are, in effect), brain computation involves nothing beyond the nervous system and its physical and physiological structure. Therefore, that structure will be the ultimate reference for how things really work, no matter what level of scale you seek to describe. From 30 years of effort, I believe even more firmly now than I did back then, that, like Newton and his friends, this is where we should start - figuring out the principles and behavior from the physics of the elements themselves. You can claim it is impossible - you can claim that models at other levels of abstraction can help, however, in the end 'the truth' lies in the circuitry in all its complexity.
But you can't just jump into the complexity, without a synergistic link to models that actually provide insights at the detailed level of the data you seek to collect. IMHO. Jim (no ps) On Jan 25, 2014, at 4:44 PM, Dan Goodman wrote: The comparison with physics is an interesting one, but we have to remember that neuroscience isn't physics. For a start, neuroscience is clearly much harder than physics in many ways. Linear and separable phenomena are much harder to find in neuroscience, and so both analysing and modelling data is much more difficult. Experimentally, it is much more difficult to control for independent variables in addition to the difficulty of working with living animals. So although we might be able to learn things from the history of physics - and I tend to agree with Axel Hutt that one of those lessons is to use the simplest possible model rather than trying to include all the biophysical details we know to exist - while neuroscience is in its pre-paradigmatic phase (agreed with Jim Bower on this) I would say we need to try a diverse set of methodological approaches and see what wins. In terms of funding agencies, I think the best thing they could do would be to not insist on any one methodological approach to the exclusion of others. I also share doubts about the idea that if we collect enough data then interesting results will just pop out. On the other hand, there are some valid hypotheses about brain function that require the collection of large amounts of data. Personally, I think that we need to understand the coordinated behaviour of many neurons to understand how information is encoded and processed in the brain. At present, it's hard to look at enough neurons simultaneously to be very sure of finding this sort of coordinated activity, and this is one of the things that the HBP and BRAIN initiative are aiming at. Dan Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 
15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -- Brad Wyble Assistant Professor Psychology Department Penn State University http://wyblelab.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.mingus at colorado.edu Sun Jan 26 15:43:22 2014 From: brian.mingus at colorado.edu (Brian J Mingus) Date: Sun, 26 Jan 2014 13:43:22 -0700 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <001f01cf1ad2$c0313b60$4093b220$@eecs.oregonstate.edu> References: <470613370.1314179.1390684844273.JavaMail.root@inria.fr> <52E43E6A.2010401@thesamovar.net> <3861BC25-6728-46A1-9A1B-68ABF1B78AD9@uthscsa.edu> <001f01cf1ad2$c0313b60$4093b220$@eecs.oregonstate.edu> Message-ID: Hi Thomas, thanks for your feedback. I agree with you that we will have to choose a deliberately incorrect model. 
I see that as emerging from the idea that you can't actually compute the Kolmogorov complexity, you can merely approximate it. This means you will have to use heuristics and make sacrifices in your model. You will overcompress and undercompress and overfit and underfit in various parts of the space. It seems though that this problem can be ameliorated by Big Data. The more data we collect, the more useful constraints we apply to the problem. In the limit of Big Data, our model is not underspecified at all, but rather falls perfectly within the normal distribution of human beings. On the way to this utopia, the choice of deliberately incorrect model is going to be a very hard problem. For example, given the simplest possible model, it may be impossible to choose which of the possible bits of complexity we should add on any given step when following the Ockham gradient during our hill climb. This means that, right from the start, we are already stuck on some local maximum. Still, there are so many useful constraints, and so much relevant data, that I don't see why, without modeling a human on the Planck scale, we can't create a digital human that falls within the normal range. And given that we have a normal range, it also seems as though we have some leeway with regards to the model - i.e., we don't have to get it exactly right - we can substantially compress the model and it will still be a normal human, despite the fact that there are lots of different ways to compress it. SO, ultimately, my position is Big Data all the way :) The more constraints the merrier - we don't actually have to satisfy them all, but the more we have available the easier it will be to hit the target. Brian Mingus Graduate student Department of Psychology and Neuroscience University of Colorado at Boulder http://grey.colorado.edu/mingus On Sun, Jan 26, 2014 at 1:11 PM, Thomas G. 
Dietterich < tgd at eecs.oregonstate.edu> wrote: > Dear Brian, > > > > Please keep in mind that MDL, Ockham's razor, PCA, and similar > regularization approaches focus on the problem of **prediction** (or, > equivalently, compression). Given a fixed amount of data and a flexible > class of models, these principles tell us how to modulate the > expressiveness of the model to maximize predictive accuracy. I would > characterize it as follows: "Which deliberately incorrect model should we > adopt in order to optimize predictive accuracy?" > > > > One stance toward creating an AI system is to pursue this purely > functional approach and model a person as an input-output mapping (with > latent state variables, as appropriate). Such an approach might be very > useful both for engineering and for science. From a scientific > perspective, it would tell us that if we build a system with certain > properties, it can exhibit this input-output behavior. > > > > But it would not be a satisfactory theory of neuroscience for two reasons. > First, it only provides sufficient conditions but does not show they are > necessary. There might be other ways of producing the behavior, and the > brain might implement one of those instead. Second, even if it could be > made into a necessary and sufficient condition (e.g., by proving that all > systems lacking certain properties would NOT exhibit the desired behavior), > it would still not explain how the chemistry and biology of the brain > produces the required properties. To fall back on the old bird vs. > airplane analogy, the accomplishments of the Wright brothers (and the field > of aerodynamics) provided a theory of how flight could be achieved. But we > are still learning at the biological level how birds actually do it. > > > > > > > > -- > > Thomas G. 
Dietterich, Distinguished Professor Voice: 541-737-5559 > > School of Electrical Engineering FAX: 541-737-1300 > > and Computer Science URL: > eecs.oregonstate.edu/~tgd > > US Mail: 1148 Kelley Engineering Center > > Office: 2067 Kelley Engineering Center > > Oregon State Univ., Corvallis, OR 97331-5501 > > > > > > *From:* Connectionists [mailto: > connectionists-bounces at mailman.srv.cs.cmu.edu] *On Behalf Of *Brian J > Mingus > *Sent:* Saturday, January 25, 2014 8:23 PM > *To:* Brad Wyble > *Cc:* connectionists at mailman.srv.cs.cmu.edu > > *Subject:* Re: Connectionists: Brain-like computing fanfare and big data > fanfare > > > > Hi Brad et al., - thanks very much for this fun and entertaining > philosophical discussion:) > > > > With regards to turtles all the way down, and also with regards to > choosing the appropriate level of analysis for modeling, I'd like to > reiterate a position I made earlier but in which I didn't really provide > enough detail. > > > > There exists a formalization of Ockham's razor in a field called > Algorithmic Information Theory, and this formalization is the Minimum > Description Length (MDL). > > > > This perspective essentially says that we are searching for the optimal > compression of all of the data relating to the brain. This means that we > don't want to overcompress relevant distinctions, but we don't want to > undercompress redundancies. This optimal compression, when represented as a > computer program that outputs all of the brain data (aka a model), has a > description length known as the Kolmogorov complexity. > > > > Now there is something weird about what I have just described, which is > that the resulting model will produce not just the data for a single brain, > but the data for *every* brain - a kind of meta-brain. And this is not > quite what we are looking for. 
And due to the turtles problem it is > probably ill-posed, in that the length of the description may be infinite > as we zoom in to finer levels of detail. > > > > So we need to provide some relevant constraints on the problem to make it > tractable. Based on what I just described, the MDL for your brain *is* your > brain. This is essentially because we haven't defined a utility function, > and we haven't done that because we aren't quite sure what exactly it is we > are doing, or what we are looking for, when modeling the brain. > > > > To begin fixing this problem, we can rotate this perspective into a tool > that we are all probably familiar with - factor analysis, i.e., PCA. What > we are essentially looking for, first and foremost, is a model that > explains the first principle component of just one person's comprehensive > brain dataset (which includes behavioral data). Then we want to study this > component (which is tantamount to a model of the brain) and see what it can > do. > > > > What will this first principle component look like? Now we need to define > what exactly it is that we are after. I would argue that our model should > be composed of neuron-like elements connected in networks, and that when we > look at the statistical properties of these networks, they should be quite > similar to what we see in humans. > > > > Most importantly, however, I would argue that this model, when raised as a > human, should exhibit some distinctly human traits. It should not pass a > trivial turing test, but rather a deep turing test. After having been > raised as and with human beings, but not exposed to any substantial > philosophy, this model should independently invent consciousness philosophy. > > > > As you might imagine, our abstract high level model brain which captures > the first principle component of the brain data might not be able to do > this. 
Thus, we will start adding in more components that explain more of > the variance, iteratively increasing our description length. This is a > distinctly top-down approach, in which we only add relevant detail as it > becomes obvious that the current model just isn't quite human. > > > > This approach follows a scientific gradient advocated for by Ockham's > razor, in that we start with the simplest description (brain model) that > explains the most amount of variance, and gradually increase the size of > the description until it finally reinvents consciousness philosophy and can > live among humans. > > > > In my admittedly biased experience, the first appropriate level of > analysis is approximately point-neuron deep neural network architectures. > However, this might actually be too low level - we might want to start with > even more abstract, modern day NIPS-level models, and confirm that, > although they can behave like humans, they can't reinvent consciousness > philosophy and are thus more akin to zombie-like automata. > > > > Of course, with sufficient computing power our modeling approach can be > somewhat more sloppy - we can begin experimenting with the synthesis of > different levels of analysis right away. > > > > However, before we do any of this "for real" we probably want to > comprehensively discuss the ethics of raising beings that are ultimately > similar to humans, but are not quite human, and further, the ethics of > raising digital humans. > > > > Lastly, to touch back to the original topic - Big Data - I think it's > clear that the more data we have, the merrier. However, it also makes sense > to follow the Ockham gradient. Ultimately, we are really just not as close > to creating a human being as it may seem, and so it is probably safe, for > the time being, to collect data from all levels of analysis willy nilly. > However, when it comes time to actually build the human, we should be more > careful, for the sake of the being we create. 
Indeed, perhaps we should be > *sure* that it will reinvent consciousness philosophy before we ever turn > it on in the first place. > > > > If anyone has an idea of how to do that, I would be extremely interested > to hear about it. > > > > Brian Mingus > > > > Graduate student > > Department of Psychology and Neuroscience > > University of Colorado at Boulder > > http://grey.colorado.edu/mingus > > > > On Sat, Jan 25, 2014 at 7:52 PM, Brad Wyble wrote: > > Jim, > > > > Great debate! There are several good points here.. > > > > First, I agree with you that models with tidy, analytical solutions are > probably not the ultimate answer, as biology is unlikely to exhibit > behavior that coincides with mathematical formalisms that are easy to > represent in equations. In fact, I think that seeking such solutions can > get in the way of progress in some cases. > > > > I also agree with you that community models are a good idea, and I am not > advocating that everyone should build their own model. But I think that we > need a hierarchy of such community models at multiple levels of > abstraction, with clear ways of translating ideas and constraints from each > level to the next. The goal of computational neuroscience is not to build > the ultimate model, but to build a shared understanding in the minds of the > entire body of neuroscientists with a minimum of communication failures. > > > > Next, I think that you're espousing a purely bottom-up approach to > modelling the brain. ( i.e. that if we just build it, understanding will > follow from the emergent dynamics). I very much admire your strong > position, but I really can't agree with it. I return to the question of > how we will even know what the bottom floor is in such an approach You > seem to imply in previous emails that it's a channel/cable model, but > someone else might argue that we'd have to represent interactions at the > atomic level to truly capture the dynamics of the circuit. 
So if that's > the only place to start, how will we ever make serious progress? The > computational requirements to simulate even a single neuron at the atomic > level on a super cluster is probably a decade away. And once we'd > accomplished that, someone might point out a case in which subatomic > interactions play a functional role in the neuron and then we've got to > wait another 10 years to be able to model a single neuron again? > > > > To me, it really looks like turtles all the way down which means that we > have to choose our levels of abstraction with an understanding that there > are important dynamics at lower levels that will be missed. However if we > build in constraints from the behavior of the system, such abstract models > can nevertheless provide a foothold for climbing a bit higher in our > understanding. > > > > Is there some reason that you think channels are a sufficient level of > detail? (or maybe I've mischaracterized your position) > > > > -Brad > > > > > > > > > > On Sat, Jan 25, 2014 at 7:09 PM, james bower wrote: > > About to sign off here - as have probably already taken too much > bandwidth. (although it has been a long time) > > > > But just for final clarity on the point about physics - I am not claiming > that the actual tools etc, developed by physics mostly to study > non-biological and mostly 'simpler' systems (for example, systems were the > elements (unlike neurons) aren't 'individualized' - and therefore can be > subjected to a certain amount of averaging (ie. thermodynamics), will apply. > > > > But I am suggesting (all be it in an oversimplified way) that the > transition from a largely folkloric, philosophically (religiously) driven > style of physics, to the physics of today was accomplished in the 15th > century by the rejection of the curve fitting, 'simplified' and self > reflective Ptolemic model of the solar system. 
(not actually, it turns out > for that reason, but because the Ptolemaic model has become too complex and > impure - the famous equint point). Instead, Newton, Kepler, etc, further > developed a model that actually valued the physical structure of that > system, independent of the philosophical, self reflecting previous set of > assumptions. I know, I know that this is an oversimplified description of > what happened, but, it is very likely that Newtons early (age 19) discovery > of what approximated the least squares law in the 'realistic model' he had > constructed of the earth moon system (where it was no problem and pretty > clearly evident that the moon orbited the earth in a regular way), lead in > later years to his development of mechanics - which, clearly provided an > important "community model" of the sort we completely lack in neuroscience > and seem to me continue to try to avoid. > > > > I have offered for years to buy the beer at the CNS meeting if all the > laboratories describing yet another model of the hippocampus or the visual > cortex would get together to agree on a single model they would all work > on. No takers yet. The paper I linked to in my first post describes how > that has happened for the Cerebellar Purkinje cell, because of GENESIS and > because we didn't block others from using the model, even to criticize us. > However, when I sent that paper recently to a computational neuroscience > I heard was getting into Purkinje cell modeling, he wrote back to say he > was developing his own model thank you very much. > > > > The proposal that we all be free to build our own models - and everyone is > welcome, is EXACTLY the wrong direction. > > > > We need more than calculous - and although I understand their > attractiveness believe me, models that can be solved in close formed > solutions are not likely to be particularly useful in biology, where the > averaging won't work in the same way. 
The relationship between scales is > different, lots of things are different - which means the a lot of the > tools will have to be different too. And I even agree that some of the > tools developed by engineering, where one is actually trying to make things > that work, might end up being useful, or even perhaps more useful. > However, the transition to paradigmatic science I believe will critically > depend on the acceptance of community models (they are the 'paradigm'), and > the models most likely with the most persuasive force as well as the ones > mostly likelihood of revealing unexpected functional relationships, are > ones that FIRST account for the structure of the brain, and SECOND are used > to explore function (rather than what is usually the other way around). > > > > As described in the paper I posted, that is exactly what has happened > through long hard work (since 1989) using the Purkinje cell model. > > > > In the end, unless you are a duelist (which I suspect many actually are, > in effect), brain computation involves nothing beyond the nervous system > and its physical and physiological structure. Therefore, that structure > will be the ultimate reference for how things really work, no matter what > level of scale you seek to describe. > > > > From 30 years of effort, I believe even more firmly now than I did back > then, that, like Newton and his friends, this is where we should start - > figuring out the principles and behavior from the physics of the elements > themselves. > > > > You can claim it is impossible - you can claim that models at other levels > of abstraction can help, however, in the end 'the truth' lies in the > circuitry in all its complexity. But you can't just jump into the > complexity, without a synergistic link to models that actually provide > insights at the detailed level of the data you seek to collect. > > > > IMHO. 
> > > > Jim > > > > (no ps) > > > > > > > > > > > > > > > > On Jan 25, 2014, at 4:44 PM, Dan Goodman > wrote: > > > > The comparison with physics is an interesting one, but we have to remember > that neuroscience isn't physics. For a start, neuroscience is clearly much > harder than physics in many ways. Linear and separable phenomena are much > harder to find in neuroscience, and so both analysing and modelling data is > much more difficult. Experimentally, it is much more difficult to control > for independent variables in addition to the difficulty of working with > living animals. > > So although we might be able to learn things from the history of physics - > and I tend to agree with Axel Hutt that one of those lessons is to use the > simplest possible model rather than trying to include all the biophysical > details we know to exist - while neuroscience is in its pre-paradigmatic > phase (agreed with Jim Bower on this) I would say we need to try a diverse > set of methodological approaches and see what wins. In terms of funding > agencies, I think the best thing they could do would be to not insist on > any one methodological approach to the exclusion of others. > > I also share doubts about the idea that if we collect enough data then > interesting results will just pop out. On the other hand, there are some > valid hypotheses about brain function that require the collection of large > amounts of data. Personally, I think that we need to understand the > coordinated behaviour of many neurons to understand how information is > encoded and processed in the brain. At present, it's hard to look at enough > neurons simultaneously to be very sure of finding this sort of coordinated > activity, and this is one of the things that the HBP and BRAIN initiative > are aiming at. > > Dan > > > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. 
> > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > > CONFIDENTIAL NOTICE: > > The contents of this email and any attachments to it may be privileged > or contain privileged and confidential information. This information is > only for the viewing or use of the intended recipient. If you have received > this e-mail in error or are not the intended recipient, you are hereby > notified that any disclosure, copying, distribution or use of, or the > taking of any action in reliance upon, any of the information contained in > this e-mail, or > > any of the attachments to this e-mail, is strictly prohibited and that > this e-mail and all of the attachments to this e-mail, if any, must be > > immediately returned to the sender or destroyed and, in either case, > this e-mail and all attachments to this e-mail must be immediately deleted > from your computer without making any copies hereof and any and all hard > copies made must be destroyed. If you have received this e-mail in error, > please notify the sender by e-mail immediately. > > > > -- > > Brad Wyble > Assistant Professor > Psychology Department > Penn State University > > > > http://wyblelab.com > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From geoffrey.hinton at gmail.com Sun Jan 26 14:43:25 2014 From: geoffrey.hinton at gmail.com (Geoffrey Hinton) Date: Sun, 26 Jan 2014 14:43:25 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: I can no longer resist making one point. A lot of the discussion is about telling other people what they should NOT be doing. I think people should just get on and do whatever they think might work. Obviously they will focus on approaches that make use of their particular skills. We won't know until afterwards which approaches led to major progress and which were dead ends. Maybe a fruitful approach is to model every connection in a piece of retina in order to distinguish between detailed theories of how cells get to be direction selective. Maybe it's building huge and very artificial neural nets that are much better than other approaches at some difficult task. Probably it's both of these and many others too. The way to really slow down the expected rate of progress in understanding how the brain works is to insist that there is one right approach and nearly all the money should go to that approach. Geoff On Sat, Jan 25, 2014 at 3:00 PM, Brad Wyble wrote: > I am extremely pleased to see such vibrant discussion here and my thanks > to Juyang for getting the ball rolling. > > Jim, I appreciate your comments and I agree in large measure, but I have > always disagreed with you as regards the necessity of simulating everything > down to a lowest common denominator. Like you, I enjoy drawing lessons > from the history of other disciplines, but unlike you, I don't think the > analogy between neuroscience and physics is all that clear cut. 
The two > fields deal with vastly different levels of complexity and therefore I > don't think it should be expected that they will (or should) follow the > same trajectory. > > To take your Purkinje cell example, I imagine that there are those who > view any such model that lacks an explicit simulation of the RNA as being > incomplete. To such a person, your models would also be unfit for the > literature. So would we then change the standards such that no model can be > published unless it includes an explicit simulation of the RNA? And why > stop there? Where does it end? In my opinion, we can't make effective > progress in this field if everyone is bound to the molecular level. > > I really think that neuroscience presents a fundamental challenge that is > not present in physics, which is that progress can only occur when theory > is developed at different levels of abstraction that overlap with one > another. The challenge is not how to force everyone to operate at the same > level of formal specificity, but how to allow effective communication > between researchers operating at different levels. > > In aid of meeting this challenge, I think that our field should take more > inspiration from engineering, a model-based discipline that already has to > work simultaneously at many different scales of complexity and abstraction. > > > Best, > Brad Wyble > > > > > On Sat, Jan 25, 2014 at 9:59 AM, james bower wrote: > >> Thanks for your comments Thomas, and good luck with your effort. >> >> I can't refrain from making the probably culturist remark that >> this seems a very practical approach. >> >> I have for many years suggested that those interested in advancing >> biology in general and neuroscience in particular toward a 'paradigmatic' as >> distinct from a descriptive / folkloric science, would benefit from >> understanding this transition as physics went through it in the 15th and >> 16th centuries. 
In many ways, I think that is where we are today, although >> with perhaps the decided disadvantage that we have a lot of physicists >> around who, again in my view, don't really understand the origins of their >> own science. By that, I mean that they don't understand how much of their >> current scientific structure, for example the relatively clean separation >> between 'theorists' and 'experimentalists', is dependent on the foundation >> built by those (like Newton) who were both in an earlier time. Once you >> have a solid underlying computational foundation for a science, then you >> have the luxury of this kind of specialization - as there is a framework >> that ties it all together. The Higgs effort is a very visible recent >> example. >> >> Neuroscience has nothing of the sort. As I point out in the article I >> linked to in my first posting - while it was first proposed 40 years ago >> (by Rodolfo Llinas) that the cerebellar Purkinje cell had active dendrites >> (i.e., that there were voltage-dependent ion channels in the dendrite, not >> directly associated with synapses, that governed its behavior), and 40 >> years of anatomically and physiologically realistic modeling has been >> necessary to start to understand what they do - many cerebellar modeling >> efforts today simply ignore these channels. While that again, to many on >> this list, may seem too far buried in the details, these voltage-dependent >> channels make the Purkinje cell the computational device that it is. >> >> Recently, I was asked to review a cerebellar modeling paper in which the >> authors actually acknowledged that their model lacked these channels >> because they would have been too computationally expensive to include. >> Sadly for those authors, I was asked to review the paper for the usual >> reason - that several of our papers were referenced accordingly. 
They >> likely won't make that mistake again - as after of course complimenting >> them on the fact that they were honest (and knowledgeable) enough to have >> remarked on the fact that their Purkinje cells weren't really Purkinje >> cells - I had to reject the paper for the same reason. >> >> As I said, they likely won't make that mistake again - and will very >> likely get away with it. >> >> Imagine a comparable situation in a field (like physics) which has >> established a structural base for its enterprise. "We found it >> computationally expedient to ignore the second law of thermodynamics in our >> computations - sorry." BTW, I know that details are ignored all the time >> in physics as one deals with descriptions at different levels of scale - >> although even there, the field clearly would like to have a way to link >> across different levels of scale. I would claim, however, that that is >> precisely the 'trick' that biology uses to 'beat' the second law - linking >> all levels of scale together - another reason why you can't ignore the >> details in biological models if you really want to understand how biology >> works. (too cryptic a comment perhaps). >> >> Anyway, my advice would be to consider how physics made this transition >> many years ago, and ask how neuroscience (and biology) can do the same >> now. Key points I think are: >> - you need to produce students who are REALLY both experimental and >> theoretical (like Newton). (and that doesn't mean programs that 'import'
>> physicists and give them enough biology to believe they know what they are >> doing, or programs that link experimentalists to physicists to solve their >> computational problems) >> - you need to base the efforts on models (and therefore mathematics) of >> sufficient complexity to capture the physical reality of the system being >> studied (as Kepler was forced to do to make the sun-centric model of the >> solar system even close to as accurate as the previous earth-centered >> system) >> - you need to build a new form of collaboration and communication that >> can support the complexity of those models. Fundamentally, we continue to >> use the publication system (short papers in a journal) that was invented as >> part of the transformation for physics way back then. Our laboratories are >> also largely isolated and non-cooperative, more appropriate for studying >> simpler things (like those in physics). Fortunately for us, we have a new >> communication tool (the Internet) although, as can be expected, we are >> mostly using it to reimplement old-style communication systems (e-journals) >> with a few twists (supplemental materials). >> - funding agencies need to insist that anyone doing theory needs to be >> linked to the experimental side REALLY, and vice versa. I proposed a >> number of years ago to NIH that they would make it into the history books >> if they simply required, the following Monday, that any submitted >> experimental grant include a REAL theoretical and computational component - >> Sadly, they interpreted that as meaning that P.I.s should state "an >> hypothesis" - which itself is remarkable, because most of the 'hypotheses' >> I see stated in Federal grants are actually statements of what the P.I. >> believes to be true. Don't get me started on human imaging studies. 
arggg >> - As long as we are talking about what funding agencies can do, how >> about the following structure for grants - all grants need to be submitted >> collaboratively by two laboratories who have different theories (better, >> models) about how a particular part of the brain works. The grant should >> support a set of experiments that both parties agree distinguish between >> their two points of view. All results need to be published with joint >> authorship. In effect that is how physics works - given its underlying >> structure. >> - You need to get rid, as quickly as possible, of the pressure to >> 'translate' neuroscience research explicitly into clinical significance - >> we are not even close to being able to do that intentionally - and the >> pressure (which is essentially a giveaway to the pharma and bio-tech >> industries anyway) is forcing neurobiologists to link to what is arguably >> the least scientific form of research there is - clinical research. It >> just has to be the case that society needs to understand that an investment >> in basic research will eventually result in all the wonderful outcomes for >> humans we would all like, but this distortion now is killing real >> neuroscience just at a critical time, when we may finally have the tools to >> make the transition to a paradigmatic science. >> As some of you know, I have been all about trying to do these things >> for many years - with the GENESIS project, with the original CNS graduate >> program at Caltech, with the CNS meetings, (even originally with NIPS) and >> with the first "Methods in Computational Neuroscience Course" at the >> Marine Biological Laboratory, whose latest incarnation in Brazil (LASCON) >> is actually wrapping up next week, and of course with my own research and >> students. Of course, I have not been alone in this, but it is remarkable >> how little impact all that has had on neuroscience or neuro-engineering. 
I >> have to say, honestly, that the strong tendency seems to be for these >> efforts to snap back to the non-realistic, non-biologically based modeling >> and theoretical efforts. >> >> Perhaps Canada, in its usual practical and reasonable way (sorry), can >> figure out how to do this right. >> >> I hope so. >> >> Jim >> >> p.s. I have also been proposing recently that we scuttle the 'intro >> neuroscience' survey courses in our graduate programs (religious >> instruction) and instead organize an introductory course built around the >> history of the discovery of the origin of the action potential that >> culminated in the first (and last) Nobel prize work in computational >> neuroscience for the Hodgkin-Huxley model. The 50th anniversary of that >> prize was celebrated last year, and the year before I helped to organize a >> meeting celebrating the 60th anniversary of the publication of the original >> papers (which I care much more about anyway). That meeting was, I believe, >> the first meeting in neuroscience ever organized around a single >> (mathematical) model or theory - and in organizing it, I required all the >> speakers to show the HH model on their first slide, indicating which term >> or feature of the model their work was related to. Again, a first - but >> possible, as this is about the only 'community model' we have. >> >> Most Neuroscience textbooks today don't include that equation (second >> order differential) and present the HH model primarily as a description of >> the action potential. Most theorists regard the HH model as a prime >> example of how progress can be made by ignoring the biological details. >> Both views and interpretations are historically and practically incorrect. 
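For readers who have not seen it written out, the Hodgkin-Huxley model being discussed is, in its space-clamped form, a system of coupled differential equations for the membrane potential and three gating variables; the "second order differential" mentioned above refers to the propagating-axon (cable) version, which adds a second spatial derivative of the voltage:

```latex
% Membrane equation: capacitive current balances ionic and injected currents
C_m \frac{dV}{dt} = I_{ext}
  - \bar{g}_{Na}\, m^3 h \,(V - E_{Na})   % transient sodium current
  - \bar{g}_{K}\, n^4 \,(V - E_{K})       % delayed-rectifier potassium current
  - \bar{g}_{L}\, (V - E_{L})             % leak current

% Each gating variable x \in \{m, h, n\} relaxes with voltage-dependent
% opening and closing rates fit to the squid giant axon data:
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x
```

The empirically fit rate functions alpha and beta are what tie the mathematics to the experimental study of the squid giant axon referred to below.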
>> In my opinion, if you can't handle the math in the HH model, you shouldn't >> be a neurobiologist, and if you don't understand the profound impact of >> HH's knowledge and experimental study of the squid giant axon on the model, >> you shouldn't be a neuro-theorist either. just saying. :-) >> >> >> On Jan 25, 2014, at 6:58 AM, Thomas Trappenberg wrote: >> >> James, enjoyed your writing. >> >> So, what to do? We are trying to get organized in Canada and are thinking >> how we fit in with your (US) and the European approaches and big money. My >> thought is that our advantage might be flexibility by not having a single >> theme but rather a general supporting structure for theory and >> theory-experimental interactions. I believe the ultimate place where we >> want to be is to take theoretical proposals more seriously and try to make >> specific experiments for them; like the Higgs project. (Any other >> suggestions? Canadians, see http://www.neuroinfocomp.ca if you are not >> already on there.) >> >> Also, with regards to big data, I believe that one very fascinating thing >> about the brain is that it can function with 'small data'. >> >> Cheers, Thomas >> >> >> On 2014-01-25 12:09 AM, "james bower" wrote: >> >>> Ivan thanks for the response, >>> >>> Actually, the talks at the recent Neuroscience Meeting about the Brain >>> Project either excluded modeling altogether - or declared we in the US >>> could leave it to the Europeans. I am not in the least bit nationalistic - >>> but, collecting data without having models (rather than imaginings) to >>> indicate what to collect, is simply foolish, with many examples from >>> history to demonstrate the foolishness. 
In fact, one of the primary >>> proponents (and likely beneficiaries) of this Brain Project, who gave the >>> big talk at Neuroscience on the project (showing lots of pretty pictures), >>> started his talk by asking: "what have we really learned since Cajal, >>> except that there are also inhibitory neurons?" Shocking, not only because >>> Cajal actually suggested that there might be inhibitory neurons, in fact. >>> To quote, "Stupid is as stupid does." >>> >>> Forbes magazine estimated that finding the Higgs boson cost over $13BB, >>> conservatively. The Higgs experiment was absolutely the opposite of a Big >>> Data experiment - in fact, can you imagine the amount of money and time >>> that would have been required if one had simply decided to collect all data >>> at all possible energy levels? The Higgs experiment is all the more >>> remarkable because it had the nearly unified support of the high energy >>> physics community - not that there weren't and aren't skeptics, but still, >>> remarkable that the large majority could agree on the undertaking and >>> effort. The reason is, of course, that there was a theory - one that dealt >>> with the particulars and the details - not generalities. In contrast, >>> there is a GREAT DEAL of skepticism (me included) about the Brain Project - >>> its politics and its effects (or lack thereof) - within neuroscience. (Of >>> course, many people are burying their concerns in favor of tin cups - >>> hoping). Neuroscience has had genome envy forever - the connectome is >>> their response - who says it's all in the connections? (sorry, >>> 'connectionists') Where is the theory? Hebb? You should read Hebb if you >>> haven't - rather remarkable treatise. But very far from a theory. >>> >>> If you want an honest answer to your question - I have not seen any good >>> evidence so far that the approach works, and I deeply suspect that the >>> nervous system is very much NOT like any machine we have built or designed >>> to date. 
I don't believe that Newton would have accomplished what he did, >>> had he not, first, been a remarkable experimentalist, tinkering with real >>> things. I feel the same way about Neuroscience. Having spent almost 30 >>> years building realistic models of its cells and networks (and also doing >>> experiments, as described in the article I linked to), we have made some >>> small progress - but only by avoiding abstractions and paying attention to >>> the details. Of course, most experimentalists and even most modelers have >>> paid little or no attention. We have a sociological and structural problem >>> that, in my opinion, only the right kind of models can fix, coupled with a >>> real commitment to the biology - in all its complexity. And, as the paper >>> I linked tries to make clear - we also have to all agree to start working >>> on common 'community models'. But like bighorn sheep, it is much safer to stand >>> on your own peak and make a lot of noise. >>> >>> You can predict with great accuracy the movement of the planets in the >>> sky using circles linked to other circles - nice and easy math, and a very >>> adaptable model (just add more circles when you need more accuracy, and >>> invent entities like equant points, etc). Problem is, without getting into >>> the nasty math and reality of ellipses - you can't possibly know anything >>> about gravity, or the origins of the solar system, or its various and >>> eventual perturbations. >>> >>> As I have been saying for 30 years: Beware Ptolemy and curve fitting. >>> >>> The details of reality matter. >>> >>> Jim >>> >>> >>> >>> >>> On Jan 24, 2014, at 7:02 PM, Ivan Raikov >>> wrote: >>> >>> I think perhaps the objection to the Big Data approach is that it is >>> applied to the exclusion of all other modelling approaches. 
While it is >>> true that complete and detailed understanding of neurophysiology and >>> anatomy is at the heart of neuroscience, a lot can be learned about signal >>> propagation in excitable branching structures using statistical physics, >>> and a lot can be learned about information representation and transmission >>> in the brain using mathematical theories about distributed communicating >>> processes. As these modelling approaches have been successfully used in >>> various areas of science, wouldn't you agree that they can also be used to >>> understand at least some of the fundamental properties of brain structures >>> and processes? >>> >>> -Ivan Raikov >>> >>> On Sat, Jan 25, 2014 at 8:31 AM, james bower wrote: >>> >>>> [snip] >>>> >>> An enormous amount of engineering and neuroscience continues to think >>>> that the feedforward pathway is from the sensors to the inside - rather >>>> than seeing this as the actual feedback loop. That might to some sound like a >>>> semantic quibble, but I assure you it is not. >>>> >>>> If you believe, as I do, that the brain solves very hard problems, in >>>> very sophisticated ways, that involve, in some sense, the construction of >>>> complex models about the world and how it operates in the world, and that >>>> those models are manifest in the complex architecture of the brain - then >>>> simplified solutions are missing the point. >>>> >>>> What that means inevitably, in my view, is that the only way we will >>>> ever understand what brain-like is, is to pay tremendous attention >>>> experimentally and in our models to the actual detailed anatomy and >>>> physiology of the brain's circuits and cells. >>>> >>> >>> >>> >>> >>> Dr. James M. Bower Ph.D. >>> >>> Professor of Computational Neurobiology >>> >>> Barshop Institute for Longevity and Aging Studies. 
>>> >>> 15355 Lambda Drive >>> University of Texas Health Science Center >>> San Antonio, Texas 78245 >>> Email: bower at uthscsa.edu >>> Web: http://www.bower-lab.org >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary at eng.ucsd.edu Sun Jan 26 15:56:20 2014 From: gary at eng.ucsd.edu (Gary Cottrell) Date: Sun, 26 Jan 2014 21:56:20 +0100 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: well, I think that's a bit what people are objecting to with the Brain Initiative - that all (but really, just a lot) of the money will go to these measurement devices without any theory to understand what we are measuring or why. On the other hand, Jim's approach is going way too far in the other direction, which is your point. 
Meanwhile, I can no longer resist quoting you (which I do, every time I send an email, since it is in my signoff): "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton On Jan 26, 2014, at 8:43 PM, Geoffrey Hinton wrote: > I can no longer resist making one point. > > A lot of the discussion is about telling other people what they should NOT be doing. I think people should just get on and do whatever they think might work. Obviously they will focus on approaches that make use of their particular skills. We won't know until afterwards which approaches led to major progress and which were dead ends. Maybe a fruitful approach is to model every connection in a piece of retina in order to distinguish between detailed theories of how cells get to be direction selective. Maybe its building huge and very artificial neural nets that are much better than other approaches at some difficult task. Probably its both of these and many others too. The way to really slow down the expected rate of progress in understanding how the brain works is to insist that there is one right approach and nearly all the money should go to that approach. > > Geoff > > > > On Sat, Jan 25, 2014 at 3:00 PM, Brad Wyble wrote: > I am extremely pleased to see such vibrant discussion here and my thanks to Juyang for getting the ball rolling. > > Jim, I appreciate your comments and I agree in large measure, but I have always disagreed with you as regards the necessity of simulating everything down to a lowest common denominator . Like you, I enjoy drawing lessons from the history of other disciplines, but unlike you, I don't think the analogy between neuroscience and physics is all that clear cut. The two fields deal with vastly different levels of complexity and therefore I don't think it should be expected that they will (or should) follow the same trajectory. 
> > To take your Purkinje cell example, I imagine that there are those who view any such model that lacks an explicit simulation of the RNA as being incomplete. To such a person, your models would also be unfit for the literature. So would we then change the standards such that no model can be published unless it includes an explicit simulation of the RNA? And why stop there? Where does it end? In my opinion, we can't make effective progress in this field if everyone is bound to the molecular level. > > I really think that neuroscience presents a fundamental challenge that is not present in physics, which is that progress can only occur when theory is developed at different levels of abstraction that overlap with one another. The challenge is not how to force everyone to operate at the same level of formal specificity, but how to allow effective communication between researchers operating at different levels. > > In aid of meeting this challenge, I think that our field should take more inspiration from engineering, a model-based discipline that already has to work simultaneously at many different scales of complexity and abstraction. > > > Best, > Brad Wyble > > > > > On Sat, Jan 25, 2014 at 9:59 AM, james bower wrote: > Thanks for your comments Thomas, and good luck with your effort. > > I can?t refrain myself from making the probably culturist remark that this seems a very practical approach. > > I have for many years suggested that those interested in advancing biology in general and neuroscience in particular to a ?paradigmatic? as distinct from a descriptive / folkloric science, would benefit from understanding this transition as physics went through it in the 15th and 16th centuries. In many ways, I think that is where we are today, although with perhaps the decided disadvantage that we have a lot of physicists around who, again in my view, don?t really understand the origins of their own science. 
By that, I mean, that they don?t understand how much of their current scientific structure, for example the relatively clean separation between ?theorists? and ?experimentalists?, is dependent on the foundation build by those (like Newton) who were both in an earlier time. Once you have a sold underlying computational foundation for a science, then you have the luxury of this kind of specialization - as there is a framework that ties it all together. The Higgs effort being a very visible recent example. > > Neuroscience has nothing of the sort. As I point out in the article I linked to in my first posting - while it was first proposed 40 years ago (by Rodolfo Llinas) that the cerebellar Purkinje cell had active dendrites (i.e. that there were non directly-synaptically associated voltage dependent ion channels in the dendrite that governed its behavior), and 40 years of anatomically and physiologically realistic modeling has been necessary to start to understand what they do - many cerebellar modeling efforts today simply ignore these channels. While that again, to many on this list, may seem too far buried in the details, these voltage dependent channels make the Purkinje cell the computational device that it is. > > Recently, I was asked to review a cerebellar modeling paper in which the authors actually acknowledged that their model lacked these channels because they would have been too computationally expensive to include. Sadly for those authors, I was asked to review the paper for the usual reason - that several of our papers were referenced accordingly. They likely won?t make that mistake again - as after of course complementing them on the fact that they were honest (and knowledgable) enough to have remarked on the fact that their Purkinje cells weren?t really Purkinje cells - I had to reject the paper for the same reason. > > As I said, they likely won?t make that mistake again - and will very likely get away with it. 
> > Imagine a comparable situation in a field (like physics) which has established a structural base for its enterprise: "We found it computationally expedient to ignore the second law of thermodynamics in our computations - sorry." BTW, I know that details are ignored all the time in physics as one deals with descriptions at different levels of scale - although even there, the field clearly would like to have a way to link across different levels of scale. I would claim, however, that that is precisely the "trick" that biology uses to "beat" the second law - linking all levels of scale together - another reason why you can't ignore the details in biological models if you really want to understand how biology works. (too cryptic a comment perhaps). > > Anyway, my advice would be to consider how physics made this transition many years ago, and ask how neuroscience (and biology) can do the same now. Key points I think are: > - you need to produce students who are REALLY both experimental and theoretical (like Newton). (and that doesn't mean programs that "import" physicists and give them enough biology to believe they know what they are doing, or programs that link experimentalists to physicists to solve their computational problems) > - you need to base the efforts on models (and therefore mathematics) of sufficient complexity to capture the physical reality of the system being studied (as Kepler was forced to do to make the sun-centered model of the solar system even close to as accurate as the previous earth-centered system) > - you need to build a new form of collaboration and communication that can support the complexity of those models. Fundamentally, we continue to use the publication system (short papers in a journal) that was invented as part of the transformation for physics way back then. Our laboratories are also largely isolated and non-cooperative, more appropriate for studying simpler things (like those in physics).
Fortunately for us, we have a new communication tool (the Internet) although, as can be expected, we are mostly using it to reimplement old-style communication systems (e-journals) with a few twists (supplemental materials). > - funding agencies need to insist that anyone doing theory needs to be linked to the experimental side REALLY, and vice versa. I proposed a number of years ago to NIH that they would make it into the history books if they simply required, the following Monday, that any submitted experimental grant include a REAL theoretical and computational component - Sadly, they interpreted that as meaning that P.I.s should state "an hypothesis" - which itself is remarkable, because most of the "hypotheses" I see stated in Federal grants are actually statements of what the P.I. believes to be true. Don't get me started on human imaging studies. arggg > - As long as we are talking about what funding agencies can do, how about the following structure for grants - all grants need to be submitted collaboratively by two laboratories who have different theories (better, models) about how a particular part of the brain works. The grant should support a set of experiments that both parties agree distinguish between their two points of view. All results need to be published with joint authorship. In effect, that is how physics works - given its underlying structure. > - You need to get rid, as quickly as possible, of the pressure to "translate" neuroscience research explicitly into clinical significance - we are not even close to being able to do that intentionally - and the pressure (which is essentially a giveaway to the pharma and bio-tech industries anyway) is forcing neurobiologists to link to what is arguably the least scientific form of research there is - clinical research.
It just has to be the case that society needs to understand that an investment in basic research will eventually result in all the wonderful outcomes for humans we would all like, but this distortion now is killing real neuroscience just at a critical time, when we may finally have the tools to make the transition to a paradigmatic science. > > As some of you know, I have been all about trying to do these things for many years - with the GENESIS project, with the original CNS graduate program at Caltech, with the CNS meetings, (even originally with NIPS) and with the first "Methods in Computational Neuroscience" course at the Marine Biological Laboratory, whose latest incarnation in Brazil (LASCON) is actually wrapping up next week, and of course with my own research and students. Of course, I have not been alone in this, but it is remarkable how little impact all that has had on neuroscience or neuro-engineering. I have to say, honestly, that the strong tendency seems to be for these efforts to snap back to the non-realistic, non-biologically based modeling and theoretical efforts. > > Perhaps Canada, in its usual practical and reasonable way (sorry) can figure out how to do this right. > > I hope so. > > Jim > > p.s. I have also been proposing recently that we scuttle the "intro neuroscience" survey courses in our graduate programs (religious instruction) and instead organize an introductory course built around the history of the discovery of the origin of the action potential, which culminated in the first (and last) Nobel prize for work in computational neuroscience, for the Hodgkin-Huxley model. The 50th anniversary of that prize was celebrated last year, and the year before I helped to organize a meeting celebrating the 60th anniversary of the publication of the original papers (which I care much more about anyway).
That meeting was, I believe, the first meeting in neuroscience ever organized around a single (mathematical) model or theory - and in organizing it, I required all the speakers to show the HH model on their first slide, indicating which term or feature of the model their work was related to. Again, a first - but possible, as this is about the only "community model" we have. > > Most Neuroscience textbooks today don't include the actual equations (a set of coupled nonlinear differential equations) and present the HH model primarily as a description of the action potential. Most theorists regard the HH model as a prime example of how progress can be made by ignoring the biological details. Both views and interpretations are historically and practically incorrect. In my opinion, if you can't handle the math in the HH model, you shouldn't be a neurobiologist, and if you don't understand the profound impact of HH's knowledge and experimental study of the squid giant axon on the model, you shouldn't be a neuro-theorist either. just saying. :-) > > > On Jan 25, 2014, at 6:58 AM, Thomas Trappenberg wrote: > >> James, enjoyed your writing. >> >> So, what to do? We are trying to get organized in Canada and are thinking how we fit in with your (US) and the European approaches and big money. My thought is that our advantage might be flexibility by not having a single theme but rather a general supporting structure for theory and theory-experimental interactions. I believe the ultimate place where we want to be is to take theoretical proposals more seriously and try to make specific experiments for them; like the Higgs project. (Any other suggestions? Canadians, see http://www.neuroinfocomp.ca if you are not already on there.) >> >> Also, with regards to big data, I believe that one very fascinating thing about the brain is that it can function with 'small data'.
>> >> Cheers, Thomas >> >> >> >> On 2014-01-25 12:09 AM, "james bower" wrote: >> Ivan thanks for the response, >> >> Actually, the talks at the recent Neuroscience Meeting about the Brain Project either excluded modeling altogether - or declared we in the US could leave it to the Europeans. I am not in the least bit nationalistic - but collecting data without having models (rather than imaginings) to indicate what to collect is simply foolish, with many examples from history to demonstrate the foolishness. In fact, one of the primary proponents (and likely beneficiaries) of this Brain Project, who gave the big talk at Neuroscience on the project (showing lots of pretty pictures), started his talk by asking: "what have we really learned since Cajal, except that there are also inhibitory neurons?" Shocking, not only because Cajal actually suggested that there might be inhibitory neurons, in fact. To quote: "Stupid is as stupid does." >> >> Forbes magazine estimated that finding the Higgs Boson cost over $13BB, conservatively. The Higgs experiment was absolutely the opposite of a Big Data experiment - in fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment is all the more remarkable because it had the nearly unified support of the high energy physics community; not that there weren't and aren't skeptics, but still, it is remarkable that the large majority could agree on the undertaking and effort. The reason is, of course, that there was a theory - one that dealt with the particulars and the details - not generalities. In contrast, there is a GREAT DEAL of skepticism (me included) about the Brain Project - its politics and its effects (or lack thereof) - within neuroscience. (of course, many people are burying their concerns in favor of tin cups - hoping).
Neuroscience has had genome envy forever - the connectome is its response - but who says it's all in the connections? (sorry, "connectionists") Where is the theory? Hebb? You should read Hebb if you haven't - a rather remarkable treatise. But very far from a theory. >> >> If you want an honest answer to your question - I have not seen any good evidence so far that the approach works, and I deeply suspect that the nervous system is very much NOT like any machine we have built or designed to date. I don't believe that Newton would have accomplished what he did, had he not, first, been a remarkable experimentalist, tinkering with real things. I feel the same way about Neuroscience. Having spent almost 30 years building realistic models of its cells and networks (and also doing experiments, as described in the article I linked to), we have made some small progress - but only by avoiding abstractions and paying attention to the details. Of course, most experimentalists and even most modelers have paid little or no attention. We have a sociological and structural problem that, in my opinion, only the right kind of models can fix, coupled with a real commitment to the biology - in all its complexity. And, as the article I linked to tries to make clear - we also have to all agree to start working on common "community models". But like bighorn sheep, it is much safer to stand on your own peak and make a lot of noise. >> >> You can predict with great accuracy the movement of the planets in the sky using circles linked to other circles - nice and easy math, and a very adaptable model (just add more circles when you need more accuracy, and invent entities like equant points, etc). The problem is, without getting into the nasty math and reality of ellipses, you can't possibly know anything about gravity, or the origins of the solar system, or its various and eventual perturbations. >> >> As I have been saying for 30 years: Beware Ptolemy and curve fitting. >> >> The details of reality matter.
>> >> Jim >> >> >> >> >> >> On Jan 24, 2014, at 7:02 PM, Ivan Raikov wrote: >> >>> >>> I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling approaches. While it is true that complete and detailed understanding of neurophysiology and anatomy is at the heart of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical physics, and a lot can be learned about information representation and transmission in the brain using mathematical theories about distributed communicating processes. As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they can also be used to understand at least some of the fundamental properties of brain structures and processes? >>> >>> -Ivan Raikov >>> >>> On Sat, Jan 25, 2014 at 8:31 AM, james bower wrote: >>> [snip] >>> An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. That might to some sound like a semantic quibble, but I assure you it is not. >>> >>> If you believe as I do, that the brain solves very hard problems, in very sophisticated ways, that involve, in some sense, the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point. >>> >>> What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention experimentally and in our models to the actual detailed anatomy and physiology of the brain's circuits and cells. >>> >> >> >> >> >> >> Dr. James M. Bower Ph.D. >> >> Professor of Computational Neurobiology >> >> Barshop Institute for Longevity and Aging Studies.
>> >> 15355 Lambda Drive >> >> University of Texas Health Science Center >> >> San Antonio, Texas 78245 >> >> >> Phone: 210 382 0553 >> >> Email: bower at uthscsa.edu >> >> Web: http://www.bower-lab.org >> >> twitter: superid101 >> >> linkedin: Jim Bower >> >> >> CONFIDENTIAL NOTICE: >> >> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. > -- > Brad Wyble > Assistant Professor > Psychology Department > Penn State University > > http://wyblelab.com > [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] Gary Cottrell 858-534-6640 FAX: 858-534-7029 My schedule is here: http://tinyurl.com/b7gxpwo Computer Science and Engineering 0404 IF USING FED EX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. "Physical reality is great, but it has a lousy search function." -Matt Tong "Only connect!"
-E.M. Forster "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton "There is nothing objective about objective functions" - Jay McClelland "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ From gary at eng.ucsd.edu Sun Jan 26 16:03:21 2014 From: gary at eng.ucsd.edu (Gary Cottrell) Date: Sun, 26 Jan 2014 22:03:21 +0100 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: <0575DCF9-2ADB-4718-9729-EDB2184471FA@eng.ucsd.edu> and as someone once said, we folks in machine learning/neural networks were into data *before* it was big. g. On Jan 26, 2014, at 8:43 PM, Geoffrey Hinton wrote: > I can no longer resist making one point. > > A lot of the discussion is about telling other people what they should NOT be doing. I think people should just get on and do whatever they think might work. Obviously they will focus on approaches that make use of their particular skills. We won't know until afterwards which approaches led to major progress and which were dead ends. Maybe a fruitful approach is to model every connection in a piece of retina in order to distinguish between detailed theories of how cells get to be direction selective. Maybe it's building huge and very artificial neural nets that are much better than other approaches at some difficult task. Probably it's both of these and many others too.
The way to really slow down the expected rate of progress in understanding how the brain works is to insist that there is one right approach and nearly all the money should go to that approach. > > Geoff > > > > On Sat, Jan 25, 2014 at 3:00 PM, Brad Wyble wrote: > I am extremely pleased to see such vibrant discussion here and my thanks to Juyang for getting the ball rolling. > > Jim, I appreciate your comments and I agree in large measure, but I have always disagreed with you as regards the necessity of simulating everything down to a lowest common denominator . Like you, I enjoy drawing lessons from the history of other disciplines, but unlike you, I don't think the analogy between neuroscience and physics is all that clear cut. The two fields deal with vastly different levels of complexity and therefore I don't think it should be expected that they will (or should) follow the same trajectory. > > To take your Purkinje cell example, I imagine that there are those who view any such model that lacks an explicit simulation of the RNA as being incomplete. To such a person, your models would also be unfit for the literature. So would we then change the standards such that no model can be published unless it includes an explicit simulation of the RNA? And why stop there? Where does it end? In my opinion, we can't make effective progress in this field if everyone is bound to the molecular level. > > I really think that neuroscience presents a fundamental challenge that is not present in physics, which is that progress can only occur when theory is developed at different levels of abstraction that overlap with one another. The challenge is not how to force everyone to operate at the same level of formal specificity, but how to allow effective communication between researchers operating at different levels. 
> > In aid of meeting this challenge, I think that our field should take more inspiration from engineering, a model-based discipline that already has to work simultaneously at many different scales of complexity and abstraction. > > > Best, > Brad Wyble > > > > > On Sat, Jan 25, 2014 at 9:59 AM, james bower wrote: > Thanks for your comments Thomas, and good luck with your effort. > > I can?t refrain myself from making the probably culturist remark that this seems a very practical approach. > > I have for many years suggested that those interested in advancing biology in general and neuroscience in particular to a ?paradigmatic? as distinct from a descriptive / folkloric science, would benefit from understanding this transition as physics went through it in the 15th and 16th centuries. In many ways, I think that is where we are today, although with perhaps the decided disadvantage that we have a lot of physicists around who, again in my view, don?t really understand the origins of their own science. By that, I mean, that they don?t understand how much of their current scientific structure, for example the relatively clean separation between ?theorists? and ?experimentalists?, is dependent on the foundation build by those (like Newton) who were both in an earlier time. Once you have a sold underlying computational foundation for a science, then you have the luxury of this kind of specialization - as there is a framework that ties it all together. The Higgs effort being a very visible recent example. > > Neuroscience has nothing of the sort. As I point out in the article I linked to in my first posting - while it was first proposed 40 years ago (by Rodolfo Llinas) that the cerebellar Purkinje cell had active dendrites (i.e. 
that there were non directly-synaptically associated voltage dependent ion channels in the dendrite that governed its behavior), and 40 years of anatomically and physiologically realistic modeling has been necessary to start to understand what they do - many cerebellar modeling efforts today simply ignore these channels. While that again, to many on this list, may seem too far buried in the details, these voltage dependent channels make the Purkinje cell the computational device that it is. > > Recently, I was asked to review a cerebellar modeling paper in which the authors actually acknowledged that their model lacked these channels because they would have been too computationally expensive to include. Sadly for those authors, I was asked to review the paper for the usual reason - that several of our papers were referenced accordingly. They likely won?t make that mistake again - as after of course complementing them on the fact that they were honest (and knowledgable) enough to have remarked on the fact that their Purkinje cells weren?t really Purkinje cells - I had to reject the paper for the same reason. > > As I said, they likely won?t make that mistake again - and will very likely get away with it. > > Imagine a comparable situation in a field (like physics) which has established a structural base for its enterprise. ?We found it computational expedient to ignore the second law of thermodynamics in our computations - sorry?. BTW, I know that details are ignored all the time in physics as one deals with descriptions at different levels of scale - although even there, the field clearly would like to have a way to link across different levels of scale. I would claim, however, that that is precisely the ?trick? that biology uses to ?beat? the second law - linking all levels of scale together - another reason why you can?t ignore the details in biological models if you really want to understand how biology works. (too cryptic a comment perhaps). 
> > Anyway, my advice would be to consider how physics made this transition many years ago, and ask the question how neuroscience (and biology) can now. Key points I think are: > - you need to produce students who are REALLY both experimental and theoretical (like Newton). (and that doesn?t mean programs that ?import? physicists and give them enough biology to believe they know what they are doing, or programs that link experimentalists to physicists to solve their computational problems) > - you need to base the efforts on models (and therefore mathematics) of sufficient complexity to capture the physical reality of the system being studied (as Kepler was forced to do to make the sun centric model of the solar system even as close to as accurate as the previous earth centered system) > - you need to build a new form of collaboration and communication that can support the complexity of those models. Fundamentally, we continue to use the publication system (short papers in a journal) that was invented as part of the transformation for physics way back then. Our laboratories are also largely isolated and non-cooperative, more appropriate for studying simpler things (like those in physics). Fortunate for us, we have a new communication tool (the Internet) although, as can be expected, we are mostly using it to reimplement old style communication systems (e-journals) with a few twists (supplemental materials). > - funding agencies need to insist that anyone doing theory needs to be linked to the experimental side REALLY, and vice versa. I proposed a number of years ago to NIH that they would make it into the history books if they simply required the following monday, that any submitted experimental grant include a REAL theoretical and computational component - Sadly, they interpreted that as meaning that P.I.s should state "an hypothesis" - which itself is remarkable, because most of the ?hypotheses? 
I see stated in Federal grants are actually statements of what the P.I. believes to be true. Don?t get me started on human imaging studies. arggg > - As long as we are talking about what funding agencies can do, how about the following structure for grants - all grants need to be submitted collaboratively by two laboratories who have different theories (better models) about how a particular part of the brain works. The grant should support at set of experiments, that both parties agree distinguish between their two points of view. All results need to be published with joint authorship. In effect that is how physics works - given its underlying structure. > - You need to get rid, as quickly as possible, the pressure to ?translate? neuroscience research explicitly into clinical significance - we are not even close to being able to do that intentionally - and the pressure (which is essentially a give away to the pharma and bio-tech industries anyway) is forcing neurobiologists to link to what is arguably the least scientific form of research there is - clinical research. It just has to be the case that society needs to understand that an investment in basic research will eventually result in all the wonderful outcomes for humans we would all like, but this distortion now is killing real neuroscience just at a critical time, when we may finally have the tools to make the transition to a paradigmatic science. > > As some of you know, I have been all about trying to do these things for many years - with the GENESIS project, with the original CNS graduate program at Caltech, with the CNS meetings, (even originally with NIPS) and with the first ?Methods in Computational Neuroscience Course" at the Marine Biological laboratory, whose latest incarnation in Brazil (LASCON) is actually wrapping up next week, and of course with my own research and students. 
Of course, I have not been alone in this, but it is remarkable how little impact all that has had on neuroscience or neuro-engineering. I have to say, honestly, that the strong tendency seems to be for these efforts to snap back to the non-realistic, non-biologically based modeling and theoretical efforts. > > Perhaps Canada, in its usual practical and reasonable way (sorry) can figure out how to do this right. > > I hope so. > > Jim > > p.s. I have also been proposing recently that we scuttle the ?intro neuroscience? survey courses in our graduate programs (religious instruction) and instead organize an introductory course built around the history of the discovery of the origin of the axon potential that culminated in the first (and last) Nobel prize work in computational neuroscience for the Hodkin Huxley model. The 50th anniversary of that prize was celebrated last year, and the year before I helped to organize a meeting celebrating the 60th anniversary of the publication of the original papers (which I care much more about anyway). That meeting was, I believe, the first meeting in neuroscience ever organized around a single (mathematical) model or theory - and in organizing it, I required all the speakers to show the HH model on their first slide, indicating which term or feature of the model their work was related to. Again, a first - but possible, as this is about the only ?community model? we have. > > Most Neuroscience textbooks today don?t include that equation (second order differential) and present the HH model primarily as a description of the action potential. Most theorists regard the HH model as a prime example of how progress can be made by ignoring the biological details. Both views and interpretations are historically and practically incorrect. 
In my opinion, if you can?t handle the math in the HH model, you shouldn?t be a neurobiologist, and if you don?t understand the profound impact of HH?s knowledge and experimental study of the squid giant axon on the model, you shouldn?t be a neuro-theorist either. just saying. :-) > > > On Jan 25, 2014, at 6:58 AM, Thomas Trappenberg wrote: > >> James, enjoyed your writing. >> >> So, what to do? We are trying to get organized in Canada and are thinking how we fit in with your (US) and the European approaches and big money. My thought is that our advantage might be flexibility by not having a single theme but rather a general supporting structure for theory and theory-experimental interactions. I believe the ultimate place where we want to be is to take theoretical proposals more seriously and try to make specific experiments for them; like the Higgs project. (Any other suggestions? Canadians, see http://www.neuroinfocomp.ca if you are not already on there.) >> >> Also, with regards to big data, I believe that one very fascinating thing about the brain is that it can function with 'small data'. >> >> Cheers, Thomas >> >> >> >> On 2014-01-25 12:09 AM, "james bower" wrote: >> Ivan thanks for the response, >> >> Actually, the talks at the recent Neuroscience Meeting about the Brain Project either excluded modeling altogether - or declared we in the US could leave it to the Europeans. I am not in the least bit nationalistic - but, collecting data without having models (rather than imaginings) to indicate what to collect, is simply foolish, with many examples from history to demonstrate the foolishness. In fact, one of the primary proponents (and likely beneficiaries) of this Brain Project, who gave the big talk at Neuroscience on the project (showing lots of pretty pictures), started his talk by asking: ?what have we really learned since Cajal, except that there are also inhibitory neurons?? 
Shocking, not only because Cajal actually suggested that there might be inhibitory neurons - he did, in fact. To quote: "Stupid is as stupid does". >> >> Forbes magazine estimated that finding the Higgs Boson cost over $13BB, conservatively. The Higgs experiment was absolutely the opposite of a Big Data experiment - in fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment is all the more remarkable because it had the nearly unified support of the high energy physics community - not that there weren't and aren't skeptics, but still, remarkable that the large majority could agree on the undertaking and effort. The reason is, of course, that there was a theory - that dealt with the particulars and the details - not generalities. In contrast, there is a GREAT DEAL of skepticism (me included) about the Brain Project - its politics and its effects (or lack thereof) - within neuroscience. (Of course, many people are burying their concerns in favor of tin cups - hoping.) Neuroscience has had genome envy forever - the connectome is their response - who says it's all in the connections? (sorry, "connectionists") Where is the theory? Hebb? You should read Hebb if you haven't - a rather remarkable treatise. But very far from a theory. >> >> If you want an honest answer to your question - I have not seen any good evidence so far that the approach works, and I deeply suspect that the nervous system is very much NOT like any machine we have built or designed to date. I don't believe that Newton would have accomplished what he did, had he not, first, been a remarkable experimentalist, tinkering with real things. I feel the same way about Neuroscience. 
Having spent almost 30 years building realistic models of its cells and networks (and also doing experiments, as described in the article I linked to) we have made some small progress - but only by avoiding abstractions and paying attention to the details. Of course, most experimentalists and even most modelers have paid little or no attention. We have a sociological and structural problem that, in my opinion, only the right kind of models can fix, coupled with a real commitment to the biology - in all its complexity. And, as the model I linked tries to make clear - we also have to all agree to start working on common "community models". But like bighorn sheep, much safer to stand on your own peak and make a lot of noise. >> >> You can predict with great accuracy the movement of the planets in the sky using circles linked to other circles - nice and easy math, and a very adaptable model (just add more circles when you need more accuracy, and invent entities like equant points, etc.). Problem is, without getting into the nasty math and reality of ellipses - you can't possibly know anything about gravity, or the origins of the solar system, or its various and eventual perturbations. >> >> As I have been saying for 30 years: Beware Ptolemy and curve fitting. >> >> The details of reality matter. >> >> Jim >> >> >> >> >> >> On Jan 24, 2014, at 7:02 PM, Ivan Raikov wrote: >> >>> >>> I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling approaches. While it is true that complete and detailed understanding of neurophysiology and anatomy is at the heart of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical physics, and a lot can be learned about information representation and transmission in the brain using mathematical theories about distributed communicating processes. 
As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they can also be used to understand at least some of the fundamental properties of brain structures and processes? >>> >>> -Ivan Raikov >>> >>> On Sat, Jan 25, 2014 at 8:31 AM, james bower wrote: >>> [snip] >>> An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. That might to some sound like a semantic quibble, but I assure you it is not. >>> >>> If you believe as I do, that the brain solves very hard problems, in very sophisticated ways, that involve, in some sense, the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point. >>> >>> What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention experimentally and in our models to the actual detailed anatomy and physiology of the brain's circuits and cells. >>> >> >> >> >> >> >> Dr. James M. Bower Ph.D. >> >> Professor of Computational Neurobiology >> >> Barshop Institute for Longevity and Aging Studies. >> >> 15355 Lambda Drive >> >> University of Texas Health Science Center >> >> San Antonio, Texas 78245 >> >> >> Phone: 210 382 0553 >> >> Email: bower at uthscsa.edu >> >> Web: http://www.bower-lab.org >> >> twitter: superid101 >> >> linkedin: Jim Bower >> >> >> CONFIDENTIAL NOTICE: >> >> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. 
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or >> >> any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be >> >> immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. >> >> >> > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > > > > > -- > Brad Wyble > Assistant Professor > Psychology Department > Penn State University > > http://wyblelab.com > [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] Gary Cottrell 858-534-6640 FAX: 858-534-7029 My schedule is here: http://tinyurl.com/b7gxpwo Computer Science and Engineering 0404 IF USING FED EX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. "Physical reality is great, but it has a lousy search function." -Matt Tong "Only connect!" 
-E.M. Forster "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton "There is nothing objective about objective functions" - Jay McClelland "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hava at cs.umass.edu Sun Jan 26 16:52:11 2014 From: hava at cs.umass.edu (Hava Siegelmann) Date: Sun, 26 Jan 2014 16:52:11 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: <52E5838B.4030804@cs.umass.edu> Nicely said. On 1/26/14 2:43 PM, Geoffrey Hinton wrote: > I can no longer resist making one point. > > A lot of the discussion is about telling other people what they should > NOT be doing. I think people should just get on and do whatever they > think might work. Obviously they will focus on approaches that make > use of their particular skills. We won't know until afterwards which > approaches led to major progress and which were dead ends. Maybe a > fruitful approach is to model every connection in a piece of retina > in order to distinguish between detailed theories of how cells get to > be direction selective. Maybe it's building huge and very artificial > neural nets that are much better than other approaches at some > difficult task. Probably it's both of these and many others too. 
The > way to really slow down the expected rate of progress in understanding > how the brain works is to insist that there is one right approach and > nearly all the money should go to that approach. > > Geoff > > > > On Sat, Jan 25, 2014 at 3:00 PM, Brad Wyble > wrote: > > I am extremely pleased to see such vibrant discussion here and my > thanks to Juyang for getting the ball rolling. > > Jim, I appreciate your comments and I agree in large measure, but > I have always disagreed with you as regards the necessity of > simulating everything down to a lowest common denominator. Like > you, I enjoy drawing lessons from the history of other > disciplines, but unlike you, I don't think the analogy between > neuroscience and physics is all that clear cut. The two fields > deal with vastly different levels of complexity and therefore I > don't think it should be expected that they will (or should) > follow the same trajectory. > > To take your Purkinje cell example, I imagine that there are those > who view any such model that lacks an explicit simulation of the > RNA as being incomplete. To such a person, your models would also > be unfit for the literature. So would we then change the standards > such that no model can be published unless it includes an explicit > simulation of the RNA? And why stop there? Where does it end? > In my opinion, we can't make effective progress in this field if > everyone is bound to the molecular level. > > I really think that neuroscience presents a fundamental challenge > that is not present in physics, which is that progress can only > occur when theory is developed at different levels of abstraction > that overlap with one another. The challenge is not how to force > everyone to operate at the same level of formal specificity, but > how to allow effective communication between researchers operating > at different levels. 
> > In aid of meeting this challenge, I think that our field should > take more inspiration from engineering, a model-based discipline > that already has to work simultaneously at many different scales > of complexity and abstraction. > > > Best, > Brad Wyble > > > > > On Sat, Jan 25, 2014 at 9:59 AM, james bower > wrote: > > Thanks for your comments Thomas, and good luck with your effort. > > I can't restrain myself from making the probably culturist > remark that this seems a very practical approach. > > I have for many years suggested that those interested in > advancing biology in general and neuroscience in particular to > a "paradigmatic" as distinct from a descriptive / folkloric > science, would benefit from understanding this transition as > physics went through it in the 15th and 16th centuries. In > many ways, I think that is where we are today, although with > perhaps the decided disadvantage that we have a lot of > physicists around who, again in my view, don't really > understand the origins of their own science. By that, I mean > that they don't understand how much of their current > scientific structure, for example the relatively clean > separation between "theorists" and "experimentalists", is > dependent on the foundation built by those (like Newton) who > were both in an earlier time. Once you have a solid underlying > computational foundation for a science, then you have the > luxury of this kind of specialization - as there is a > framework that ties it all together. The Higgs effort being a > very visible recent example. > > Neuroscience has nothing of the sort. As I point out in the > article I linked to in my first posting - while it was first > proposed 40 years ago (by Rodolfo Llinas) that the cerebellar > Purkinje cell had active dendrites (i.e. 
that there were non > directly-synaptically associated voltage dependent ion > channels in the dendrite that governed its behavior), and 40 > years of anatomically and physiologically realistic modeling > has been necessary to start to understand what they do - many > cerebellar modeling efforts today simply ignore these > channels. While that again, to many on this list, may seem > too far buried in the details, these voltage dependent > channels make the Purkinje cell the computational device that > it is. > > Recently, I was asked to review a cerebellar modeling paper in > which the authors actually acknowledged that their model > lacked these channels because they would have been too > computationally expensive to include. Sadly for those > authors, I was asked to review the paper for the usual reason > - that several of our papers were referenced accordingly. > They likely won't make that mistake again - as after of > course complimenting them on the fact that they were honest > (and knowledgeable) enough to have remarked on the fact that > their Purkinje cells weren't really Purkinje cells - I had to > reject the paper for the same reason. > > As I said, they likely won't make that mistake again - and > will very likely get away with it. > > Imagine a comparable situation in a field (like physics) which > has established a structural base for its enterprise. "We > found it computationally expedient to ignore the second law of > thermodynamics in our computations - sorry". BTW, I know that > details are ignored all the time in physics as one deals with > descriptions at different levels of scale - although even > there, the field clearly would like to have a way to link > across different levels of scale. I would claim, however, > that that is precisely the "trick" that biology uses to "beat" 
> the second law - linking all levels of scale together - > another reason why you can't ignore the details in biological > models if you really want to understand how biology works. > (Too cryptic a comment perhaps.) > > Anyway, my advice would be to consider how physics made this > transition many years ago, and ask the question how > neuroscience (and biology) can now. Key points I think are: > - you need to produce students who are REALLY both > experimental and theoretical (like Newton). (And that doesn't > mean programs that "import" physicists and give them enough > biology to believe they know what they are doing, or programs > that link experimentalists to physicists to solve their > computational problems.) > - you need to base the efforts on models (and therefore > mathematics) of sufficient complexity to capture the physical > reality of the system being studied (as Kepler was forced to > do to make the sun-centric model of the solar system even > close to as accurate as the previous earth-centered system) > - you need to build a new form of collaboration and > communication that can support the complexity of those models. > Fundamentally, we continue to use the publication system > (short papers in a journal) that was invented as part of the > transformation for physics way back then. Our laboratories > are also largely isolated and non-cooperative, more > appropriate for studying simpler things (like those in > physics). Fortunately for us, we have a new communication tool > (the Internet) although, as can be expected, we are mostly > using it to reimplement old style communication systems > (e-journals) with a few twists (supplemental materials). > - funding agencies need to insist that anyone doing theory > needs to be linked to the experimental side REALLY, and vice > versa. 
I proposed a number of years ago to NIH that they > would make it into the history books if they simply required, > the following Monday, that any submitted experimental grant > include a REAL theoretical and computational component - > Sadly, they interpreted that as meaning that P.I.s should > state "an hypothesis" - which itself is remarkable, because > most of the "hypotheses" I see stated in Federal grants are > actually statements of what the P.I. believes to be true. > Don't get me started on human imaging studies. arggg > - As long as we are talking about what funding agencies can > do, how about the following structure for grants - all grants > need to be submitted collaboratively by two laboratories who > have different theories (better models) about how a particular > part of the brain works. The grant should support a set of > experiments that both parties agree distinguish between their > two points of view. All results need to be published with > joint authorship. In effect that is how physics works - given > its underlying structure. > - You need to get rid, as quickly as possible, of the pressure to > "translate" neuroscience research explicitly into clinical > significance - we are not even close to being able to do that > intentionally - and the pressure (which is essentially a give > away to the pharma and bio-tech industries anyway) is forcing > neurobiologists to link to what is arguably the least > scientific form of research there is - clinical research. It > just has to be the case that society needs to understand that > an investment in basic research will eventually result in all > the wonderful outcomes for humans we would all like, but this > distortion now is killing real neuroscience just at a critical > time, when we may finally have the tools to make the > transition to a paradigmatic science. 
> As some of you know, I have been all about trying to do these > things for many years - with the GENESIS project, with the > original CNS graduate program at Caltech, with the CNS > meetings, (even originally with NIPS) and with the first > "Methods in Computational Neuroscience Course" at the Marine > Biological Laboratory, whose latest incarnation in Brazil > (LASCON) is actually wrapping up next week, and of course with > my own research and students. Of course, I have not been > alone in this, but it is remarkable how little impact all that > has had on neuroscience or neuro-engineering. I have to say, > honestly, that the strong tendency seems to be for these > efforts to snap back to the non-realistic, non-biologically > based modeling and theoretical efforts. > > Perhaps Canada, in its usual practical and reasonable way > (sorry) can figure out how to do this right. > > I hope so. > > Jim > > p.s. I have also been proposing recently that we scuttle the > "intro neuroscience" survey courses in our graduate programs > (religious instruction) and instead organize an introductory > course built around the history of the discovery of the origin > of the action potential that culminated in the first (and last) > Nobel prize work in computational neuroscience for the Hodgkin- > Huxley model. The 50th anniversary of that prize was > celebrated last year, and the year before I helped to organize > a meeting celebrating the 60th anniversary of the publication > of the original papers (which I care much more about anyway). > That meeting was, I believe, the first meeting in > neuroscience ever organized around a single (mathematical) > model or theory - and in organizing it, I required all the > speakers to show the HH model on their first slide, indicating > which term or feature of the model their work was related to. > Again, a first - but possible, as this is about the only > "community model" we have. 
> > Most Neuroscience textbooks today don't include that equation > (second order differential) and present the HH model primarily > as a description of the action potential. Most theorists > regard the HH model as a prime example of how progress can be > made by ignoring the biological details. Both views and > interpretations are historically and practically incorrect. > In my opinion, if you can't handle the math in the HH model, > you shouldn't be a neurobiologist, and if you don't understand > the profound impact of HH's knowledge and experimental study > of the squid giant axon on the model, you shouldn't be a > neuro-theorist either. Just saying. :-) > > > On Jan 25, 2014, at 6:58 AM, Thomas Trappenberg > wrote: > >> James, enjoyed your writing. >> >> So, what to do? We are trying to get organized in Canada and >> are thinking how we fit in with your (US) and the European >> approaches and big money. My thought is that our advantage >> might be flexibility by not having a single theme but rather >> a general supporting structure for theory and >> theory-experimental interactions. I believe the ultimate >> place where we want to be is to take theoretical proposals >> more seriously and try to make specific experiments for them; >> like the Higgs project. (Any other suggestions? Canadians, >> see http://www.neuroinfocomp.ca >> if you are not already on there.) >> >> Also, with regards to big data, I believe that one very >> fascinating thing about the brain is that it can function >> with 'small data'. >> >> Cheers, Thomas >> >> >> On 2014-01-25 12:09 AM, "james bower" > > wrote: >> >> Ivan, thanks for the response, >> >> Actually, the talks at the recent Neuroscience Meeting >> about the Brain Project either excluded modeling >> altogether - or declared we in the US could leave it to >> the Europeans. 
I am not in the least bit nationalistic - >> but, collecting data without having models (rather than >> imaginings) to indicate what to collect, is simply >> foolish, with many examples from history to demonstrate >> the foolishness. In fact, one of the primary proponents >> (and likely beneficiaries) of this Brain Project, who >> gave the big talk at Neuroscience on the project (showing >> lots of pretty pictures), started his talk by asking: >> "what have we really learned since Cajal, except that >> there are also inhibitory neurons?" Shocking, not only >> because Cajal actually suggested that there might be >> inhibitory neurons - he did, in fact. To quote: "Stupid is as >> stupid does". >> >> Forbes magazine estimated that finding the Higgs Boson >> cost over $13BB, conservatively. The Higgs experiment >> was absolutely the opposite of a Big Data experiment - in >> fact, can you imagine the amount of money and time that >> would have been required if one had simply decided to >> collect all data at all possible energy levels? The Higgs >> experiment is all the more remarkable because it had the >> nearly unified support of the high energy physics >> community, not that there weren't and aren't skeptics, >> but still, remarkable that the large majority could agree >> on the undertaking and effort. The reason is, of course, >> that there was a theory - that dealt with the particulars >> and the details - not generalities. In contrast, there >> is a GREAT DEAL of skepticism (me included) about the >> Brain Project - its politics and its effects (or lack >> thereof), within neuroscience. (Of course, many people >> are burying their concerns in favor of tin cups - >> hoping.) Neuroscience has had genome envy forever - the >> connectome is their response - who says it's all in the >> connections? (sorry "connectionists") Where is the >> theory? Hebb? You should read Hebb if you haven't - >> rather remarkable treatise. But very far from a theory. 
>> >> If you want an honest answer to your question - I have >> not seen any good evidence so far that the approach >> works, and I deeply suspect that the nervous system is >> very much NOT like any machine we have built or designed >> to date. I don't believe that Newton would have >> accomplished what he did, had he not, first, been a >> remarkable experimentalist, tinkering with real things. >> I feel the same way about Neuroscience. Having spent >> almost 30 years building realistic models of its cells >> and networks (and also doing experiments, as described in >> the article I linked to) we have made some small progress >> - but only by avoiding abstractions and paying attention >> to the details. Of course, most experimentalists and >> even most modelers have paid little or no attention. We >> have a sociological and structural problem that, in my >> opinion, only the right kind of models can fix, coupled >> with a real commitment to the biology - in all its >> complexity. And, as the model I linked tries to make >> clear - we also have to all agree to start working on >> common "community models". But like bighorn sheep, much >> safer to stand on your own peak and make a lot of noise. >> >> You can predict with great accuracy the movement of the >> planets in the sky using circles linked to other circles >> - nice and easy math, and a very adaptable model (just add >> more circles when you need more accuracy, and invent >> entities like equant points, etc.). Problem is, without >> getting into the nasty math and reality of ellipses - you >> can't possibly know anything about gravity, or the >> origins of the solar system, or its various and eventual >> perturbations. >> >> As I have been saying for 30 years: Beware Ptolemy and >> curve fitting. >> >> The details of reality matter. 
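[Archive note: Bower's Ptolemy warning can be made concrete. Circles linked to circles are exactly the terms of a complex Fourier series, so any closed orbit can be fit to arbitrary accuracy by adding epicycles, without the fit ever containing an inverse-square law. A small illustrative sketch in plain Python, using standard Kepler-orbit formulas; the eccentricity 0.2 and sample count are arbitrary choices for the demonstration.]

```python
import cmath
import math

def kepler_orbit(e=0.2, a=1.0, N=128):
    """Positions on a Kepler ellipse at N uniform time steps (focus at origin)."""
    pts = []
    for j in range(N):
        M = 2.0 * math.pi * j / N       # mean anomaly: uniform in time
        E = M
        for _ in range(50):             # fixed-point solve of E - e*sin(E) = M
            E = M + e * math.sin(E)
        pts.append(complex(a * (math.cos(E) - e),
                           a * math.sqrt(1.0 - e * e) * math.sin(E)))
    return pts

def dft(z):
    """Discrete Fourier coefficients: each one is a rotating circle (epicycle)."""
    N = len(z)
    return [sum(z[j] * cmath.exp(-2j * math.pi * k * j / N) for j in range(N)) / N
            for k in range(N)]

def epicycle_rms_error(z, k):
    """RMS fit error after keeping only the k largest epicycles."""
    N = len(z)
    coefs = dft(z)
    keep = sorted(range(N), key=lambda i: -abs(coefs[i]))[:k]
    err2 = 0.0
    for j in range(N):
        z_k = sum(coefs[i] * cmath.exp(2j * math.pi * i * j / N) for i in keep)
        err2 += abs(z[j] - z_k) ** 2
    return math.sqrt(err2 / N)

z = kepler_orbit()
errors = [epicycle_rms_error(z, k) for k in (1, 2, 4, 8)]
# "Just add more circles when you need more accuracy": the error shrinks
# with every epicycle added, yet none of this says anything about gravity.
```

Because the orbit is sampled at uniform time steps, Kepler's second law (nonuniform speed along the ellipse) is what forces extra epicycles beyond the basic two, and each added circle strictly reduces the RMS error.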
>> >> Jim >> >> >> >> >> >> On Jan 24, 2014, at 7:02 PM, Ivan Raikov >> > > wrote: >> >>> >>> I think perhaps the objection to the Big Data approach >>> is that it is applied to the exclusion of all other >>> modelling approaches. While it is true that complete and >>> detailed understanding of neurophysiology and anatomy is >>> at the heart of neuroscience, a lot can be learned about >>> signal propagation in excitable branching structures >>> using statistical physics, and a lot can be learned >>> about information representation and transmission in the >>> brain using mathematical theories about distributed >>> communicating processes. As these modelling approaches >>> have been successfully used in various areas of science, >>> wouldn't you agree that they can also be used to >>> understand at least some of the fundamental properties >>> of brain structures and processes? >>> >>> -Ivan Raikov >>> >>> On Sat, Jan 25, 2014 at 8:31 AM, james bower >>> > wrote: >>> >>> [snip] >>> >>> An enormous amount of engineering and neuroscience >>> continues to think that the feedforward pathway is >>> from the sensors to the inside - rather than seeing >>> this as the actual feedback loop. That might to some >>> sound like a semantic quibble, but I assure you it >>> is not. >>> >>> If you believe as I do, that the brain solves very >>> hard problems, in very sophisticated ways, that >>> involve, in some sense, the construction of complex >>> models about the world and how it operates in the >>> world, and that those models are manifest in the >>> complex architecture of the brain - then simplified >>> solutions are missing the point. >>> >>> What that means inevitably, in my view, is that the >>> only way we will ever understand what brain-like is, >>> is to pay tremendous attention experimentally and in >>> our models to the actual detailed anatomy and >>> physiology of the brain's circuits and cells. >>> >> >> Dr. James M. Bower Ph.D. 
>> >> Professor of Computational Neurobiology >> >> Barshop Institute for Longevity and Aging Studies. >> >> 15355 Lambda Drive >> >> University of Texas Health Science Center >> >> San Antonio, Texas 78245 >> >> >> *Phone: 210 382 0553 * >> >> Email: bower at uthscsa.edu >> >> Web: http://www.bower-lab.org >> >> twitter: superid101 >> >> linkedin: Jim Bower >> >> >> > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > *Phone: 210 382 0553 * > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > > > -- > Brad Wyble > Assistant Professor > Psychology Department > Penn State University > > http://wyblelab.com > > -- Hava T. Siegelmann, Ph.D. Professor Director, BINDS Lab (Biologically Inspired Neural Dynamical Systems) Dept. of Computer Science Program of Neuroscience and Behavior University of Massachusetts Amherst Amherst, MA, 01003 Phone - Grant Administrator - Michele Roberts: 413-545-4389 Fax: 413-545-1249 LAB WEBSITE: http://binds.cs.umass.edu/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ala at csc.kth.se Sun Jan 26 17:38:50 2014 From: ala at csc.kth.se (Anders Lansner) Date: Sun, 26 Jan 2014 23:38:50 +0100 Subject: Connectionists: Brain-like computing fanfare and big data fanfare Message-ID: <017301cf1ae7$692a6f70$3b7f4e50$@csc.kth.se> [Resending this to the intended thread] Dear John and all, I was not aware until this morning that my simple announcement of the workshop on "Progress in Brain-Like Computing" at KTH Royal Institute of Technology in Stockholm next week had stirred up such a vivid discussion on the list. I was on a conference trip to Singapore with only occasional web access and got back only yesterday. 
Allow me some comments and reflections. I agree with some of John Weng's original points: that we need brain-scale theories in order to make real progress in brain-like computing, and about where the focus should be. Indeed, I think we can see the, perhaps still vague, contours of partial theories waiting to be integrated into a more complete understanding. Since the terminology of "brain-like" was criticized from different perspectives, allow me to motivate why we use this term. We could have said "neuromorphic", but in my opinion that term steers one's thinking a bit too much towards the microscopic and microcircuit levels. After all, real brains not only have very many neurons and synapses but also a very complex structure in terms of specialized neural populations and the projections connecting them. Today we have chips and clusters able to simulate such multi-neural-network structures with reasonable throughput (provided the components are not too complex), so we can at least handle this level computationally, rather than staying with small, simple networks. Personally, I think that to understand the principles of brain function we need to avoid much, but not all, of the complexity we see at the single-neuron and synapse levels. I also prefer "brain-like" to "brain-inspired", since the former defines the goal of building computational structures that are like brains, rather than merely starting there and then perhaps quickly, in all our inspiration, diverging away from mimicking the essential aspects of real brains. It is interesting to note that the discussion quickly deviated from the main content of our workshop, which concerns designing and eventually building brain-like computational architectures in silicon - or in some more exotic substrate. Such research has been going on for a long time and is now seeing increasing effort. It can obviously be argued whether this is still premature or whether it is now finally the right time to boost such efforts.
This is despite the fact that our knowledge of the brain is still far from complete. What also strikes me when I read this discussion is that we are still quite a divided and diverse community, with little consensus. There are many who think we are decades away from achieving the above, many who study abstract computational "deep learning" network models for classification and prediction without bothering much about the biology, many who experimentally study or model brain circuits without focusing much on what functions they perform, many who design hardware without knowing exactly what features to include, and so on. But I am optimistic! Perhaps, in the near future, these efforts will combine synergistically and the pieces of the puzzle will start falling into place, triggering a series of real breakthroughs in our understanding of how the brain works. Identifying when, and at what stage of brain science, this will happen is indeed critical. Then, those who best understand how to design hardware that can execute this integrated set of brain-like algorithms at real time or faster, and at low power, will be in an excellent position to exploit such progress in many important applications - hopefully to the benefit of mankind! This is some of the background for organizing the event I announced, which will hopefully contribute something to the further discussion of these very important topics. /Anders La From: Connectionists [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Juyang Weng Sent: 26 January 2014 07:27 To: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare I enjoyed many of your views, including those of Jim Bower, Richard Loosemore, Ali Minai, and Thomas Trappenberg. Let me offer a humble suggestion: when you hear a detailed view that looks "polarizing", do not get discouraged - your emotions are misleading you.
Find out whether the detail fits many brain functions. Big data and brain-like computing will fail if such a project lacks the guidance of a brain-scale model. Note: vague statements are not very useful here, as everybody can offer many. A brain-scale model must be computationally very detailed. The more detailed it is, the more brain-scale functions it covers, and the fewer mechanisms it uses, the better. For example, the Spaun model claimed to be "the world's largest functional brain model". With great respect, I congratulate its appearance in Science in 2012. But, unfortunately, Science editors are not in a position to judge how close to the brain a model is. I understand that no model of the brain is perfect and that every model is an approximation of nature. From my brain-scale model, I think a minimal requirement for a reviewer of a brain model is formal training (e.g., a 3-credit course, with all its exams passed) in all of the following: (1) computer vision, (2) artificial intelligence, (3) automata theory and computational complexity, (4) electrical engineering, such as signals and systems, control, and conventional neural networks, (5) biology, (6) neuroscience, (7) cognitive science, such as learning and memory, human vision systems, and developmental psychology, and (8) mathematics, such as linear algebra, probability, statistics, and optimization theory. If you lack some of the above, take such courses as soon as possible; the BMI summer courses 2014 will offer some. If you have taken all of the above courses, you will know that the Spaun model is grossly wrong (and, with respect, the deep learning net of Geoffrey Hinton, for the same reason). Why?
Let me give just the first mechanism that every brain, and thus every brain model, must have: learning and recognizing unknown objects FROM unknown cluttered backgrounds, and producing desired behaviors. Note: not just recognizing but learning; and not a single object against a clean background, which is what Spaun demonstrated, but multiple simultaneous objects in cluttered backgrounds. No objects can be pre-segmented from the cluttered background during learning. That is how a baby learns. None of the tasks Spaun performed includes cluttered backgrounds, let alone learning directly from cluttered scenes. Attention is the first basic mechanism of the brain, learned from infancy; it is not the recognition of a pattern against a clean background. Autonomously learning attention is the single most important mechanism for Big Data and Brain-Like Computing! How? Read "How the Brain-Mind Works: A Two-Page Introduction to a Theory". -John On 1/24/14 9:03 PM, Thomas Trappenberg wrote: Thanks John for starting a discussion ... I think we need some. What I liked most about your original post was asking "What are the underlying principles?" Let's make a list. Of course, there are so many levels of organization and so many mechanisms in the brain that we might be speaking about different things; but getting different views would be fun and, I think, very useful (without any need to offer the one ultimate answer). Cheers, Thomas Trappenberg PS: John, I thought you started a good discussion before, but I got discouraged by your polarizing views. I think a lot of us can relate to you, but how about letting others come forward now? On Fri, Jan 24, 2014 at 9:02 PM, Ivan Raikov wrote: I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling approaches.
While it is true that complete and detailed understanding of neurophysiology and anatomy is at the heart of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical physics, and a lot can be learned about information representation and transmission in the brain using mathematical theories about distributed communicating processes. As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they can also be used to understand at least some of the fundamental properties of brain structures and processes? -Ivan Raikov On Sat, Jan 25, 2014 at 8:31 AM, james bower wrote: [snip] An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. That might to some sound like a semantic quibble, but I assure you it is not. If you believe, as I do, that the brain solves very hard problems in very sophisticated ways that involve, in some sense, the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point. What that means, inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention, experimentally and in our models, to the actual detailed anatomy and physiology of the brain's circuits and cells. -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ----------------------------------------------
From smart at neuralcorrelate.com Sun Jan 26 18:40:59 2014 From: smart at neuralcorrelate.com (Susana Martinez-Conde) Date: Sun, 26 Jan 2014 16:40:59 -0700 Subject: Connectionists: 2nd call for illusion submissions: Best Illusion of the Year Contest 10th Anniversary Edition Message-ID: <003f01cf1af0$12862c50$379284f0$@neuralcorrelate.com> **** 2ND CALL FOR ILLUSION SUBMISSIONS: THE WORLD'S 10TH ANNUAL BEST ILLUSION OF THE YEAR CONTEST **** http://illusioncontest.neuralcorrelate.com *** We are happy to announce the 10th anniversary edition of the world's Best Illusion of the Year Contest! *** Submissions are now welcome! The 2014 contest will be held in St. Petersburg, Florida, at the TradeWinds Island Resorts (headquarters of the Vision Sciences Society conference), on May 18th. Past contests have been highly successful in drawing public attention to perceptual research, with over ***FIVE MILLION*** website hits from viewers all over the world, as well as hundreds of international media stories. The First, Second and Third Prize winners from the 2013 contest were Jun Ono, Akiyasu Tomoeda and Kokichi Sugihara (Meiji University and CREST, Japan), Arthur Shapiro and Alex Rose-Henig (American University, USA), and Arash Afraz and Ken Nakayama (Massachusetts Institute of Technology and Harvard University, USA). To see the illusions, photo galleries and other highlights from the 2013 and previous contests, go to http://illusionoftheyear.com. Eligible submissions to the 2014 contest are novel perceptual or cognitive illusions (unpublished, or published no earlier than 2013) of all sensory modalities (visual, auditory, etc.) in standard image, movie or html formats.
Exciting new variants of classic or known illusions are admissible. An international panel of impartial judges will rate the submissions and narrow them to the TOP TEN. Then, at the Contest Gala in St. Petersburg, the TOP TEN illusionists will present their contributions and the attendees of the event (that means you!) will vote to pick the TOP THREE WINNERS! The 2014 Contest Gala will be hosted by world-renowned magician Mac King. Mac King is the premier comedy magician in the world today, with his own family-friendly show, "The Mac King Comedy Magic Show," at Harrah's Las Vegas. He was named "Magician of the Year" by the Magic Castle in Hollywood in 2003, and is a frequent guest and host of television specials. Illusions submitted to previous editions of the contest can be re-submitted to the 2014 contest, so long as they meet the above requirements and were not among the TOP THREE winners in previous years. Submissions will be held in strict confidence by the panel of judges, and the authors/creators will retain full copyright. The TOP TEN illusions will be posted on the illusion contest's website *after* the Contest Gala. Illusions not chosen among the TOP TEN will not be disclosed. Participating in the Best Illusion of the Year Contest does not preclude the illusion authors/creators from also submitting their work for publication elsewhere. Submissions can be made to Dr. Susana Martinez-Conde (Illusion Contest Executive Producer, Neural Correlate Society) via email (smart at neuralcorrelate.com) until February 14, 2014. Illusion submissions should come with a (no more than) one-page description of the illusion and its theoretical underpinnings (if known). Women and underrepresented groups are especially encouraged to participate. The Neural Correlate Society reserves the right to disqualify illusion entries that are potentially offensive to some or all members of the public, or inappropriate for viewing by audiences of all ages.
Illusions will be rated according to:
- Significance to our understanding of the mind and brain
- Simplicity of the description
- Sheer beauty
- Counterintuitive quality
- Spectacularity
Visit the illusion contest website for further information and to see last year's illusions: http://illusionoftheyear.com. Submit your ideas now and take home this prestigious award! On behalf of the Executive Board of the Neural Correlate Society: Jose-Manuel Alonso, Stephen Macknik, Susana Martinez-Conde, Luis Martinez, Xoana Troncoso, Peter Tse ---------------------------------------------------------------- Susana Martinez-Conde, PhD Executive Producer, Best Illusion of the Year Contest President, Neural Correlate Society Columnist, Scientific American Mind Author, Sleights of Mind Director, Laboratory of Visual Neuroscience Division of Neurobiology Barrow Neurological Institute 350 W. Thomas Rd Phoenix AZ 85013, USA Phone: +1 (602) 406-3484 Fax: +1 (602) 406-4172 Email: smart at neuralcorrelate.com http://smc.neuralcorrelate.com From danny.silver at acadiau.ca Sun Jan 26 20:35:03 2014 From: danny.silver at acadiau.ca (Danny Silver) Date: Mon, 27 Jan 2014 01:35:03 +0000 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: I would echo Geoff Hinton's comment. This is a large and exciting area of research. Many points of view are necessary and many avenues should be explored. However, I would like to suggest that we sharpen our efforts in the area of problem definition as we proceed on our individual or collective efforts.
Unlike the physical sciences, we have had few well-defined problems in the area of "how does the brain work" to which the current knowledge of machine learning can be applied. If you look up open problems in neuroscience (try Wikipedia, or 23 Problems in Systems Neuroscience, edited by L. Van Hemmen and T. Sejnowski) you will find large, sweeping problems such as "what is consciousness" and "how do we represent time in the brain". These are important and significant problems; however, at this time it is challenging to wrap them in specific requirements that would make them well defined (i.e., provide a shared common understanding of the problem) and amenable to the formulation of competing hypotheses from the machine learning community. Consequently, machine learning researchers have tended to explore more specific areas, beginning with a problem statement they "feel" is important to brain function - such as "how can we learn invariants in a visual or auditory system" or "how does one retain and transfer knowledge from one task to another". Once there is agreement on a problem, many researchers can work on it and argue the merits of their solutions. So, if we wish to sharpen our focus, let us do so in the area of shared, well-defined problems upon which we can make meaningful headway. Asking good questions that come with well-developed requirements is the starting point of good science. At least that is what we tell our graduate students. .. Danny ======================= Daniel L. Silver, Ph.D. danny.silver at acadiau.ca Professor, Jodrey School of Computer Science, Acadia University Office 314, Carnegie Hall, Wolfville, NS Canada B4P 2R6 p:902-585-1413 f:902-585-1067 From: Geoffrey Hinton > Date: Sunday, 26 January, 2014 3:43 PM To: Brad Wyble > Cc: Connectionists list > Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare I can no longer resist making one point.
A lot of the discussion is about telling other people what they should NOT be doing. I think people should just get on and do whatever they think might work. Obviously they will focus on approaches that make use of their particular skills. We won't know until afterwards which approaches led to major progress and which were dead ends. Maybe a fruitful approach is to model every connection in a piece of retina in order to distinguish between detailed theories of how cells get to be direction selective. Maybe it's building huge and very artificial neural nets that are much better than other approaches at some difficult task. Probably it's both of these and many others too. The way to really slow down the expected rate of progress in understanding how the brain works is to insist that there is one right approach and that nearly all the money should go to that approach. Geoff On Sat, Jan 25, 2014 at 3:00 PM, Brad Wyble > wrote: I am extremely pleased to see such vibrant discussion here, and my thanks to Juyang for getting the ball rolling. Jim, I appreciate your comments and I agree in large measure, but I have always disagreed with you as regards the necessity of simulating everything down to a lowest common denominator. Like you, I enjoy drawing lessons from the history of other disciplines, but unlike you, I don't think the analogy between neuroscience and physics is all that clear-cut. The two fields deal with vastly different levels of complexity, and therefore I don't think it should be expected that they will (or should) follow the same trajectory. To take your Purkinje cell example, I imagine that there are those who view any such model that lacks an explicit simulation of the RNA as being incomplete. To such a person, your models would also be unfit for the literature. So would we then change the standards such that no model can be published unless it includes an explicit simulation of the RNA? And why stop there? Where does it end?
In my opinion, we can't make effective progress in this field if everyone is bound to the molecular level. I really think that neuroscience presents a fundamental challenge that is not present in physics, which is that progress can only occur when theory is developed at different levels of abstraction that overlap with one another. The challenge is not how to force everyone to operate at the same level of formal specificity, but how to allow effective communication between researchers operating at different levels. In aid of meeting this challenge, I think that our field should take more inspiration from engineering, a model-based discipline that already has to work simultaneously at many different scales of complexity and abstraction. Best, Brad Wyble On Sat, Jan 25, 2014 at 9:59 AM, james bower > wrote: Thanks for your comments Thomas, and good luck with your effort. I can't refrain from making the probably culturist remark that this seems a very practical approach. I have for many years suggested that those interested in advancing biology in general, and neuroscience in particular, to a "paradigmatic" as distinct from a descriptive / folkloric science would benefit from understanding this transition as physics went through it in the 15th and 16th centuries. In many ways, I think that is where we are today, although perhaps with the decided disadvantage that we have a lot of physicists around who, again in my view, don't really understand the origins of their own science. By that I mean that they don't understand how much of their current scientific structure - for example, the relatively clean separation between "theorists" and "experimentalists" - is dependent on the foundation built by those (like Newton) who were, in an earlier time, both. Once you have a solid underlying computational foundation for a science, you have the luxury of this kind of specialization, as there is a framework that ties it all together.
The Higgs effort is a very visible recent example. Neuroscience has nothing of the sort. As I point out in the article I linked to in my first posting, while it was first proposed 40 years ago (by Rodolfo Llinas) that the cerebellar Purkinje cell has active dendrites (i.e., that there are voltage-dependent ion channels in the dendrite, not directly associated with synapses, that govern its behavior), and 40 years of anatomically and physiologically realistic modeling have been necessary to start to understand what they do, many cerebellar modeling efforts today simply ignore these channels. While that again, to many on this list, may seem too far buried in the details, these voltage-dependent channels make the Purkinje cell the computational device that it is. Recently, I was asked to review a cerebellar modeling paper in which the authors actually acknowledged that their model lacked these channels because they would have been too computationally expensive to include. Sadly for those authors, I was asked to review the paper for the usual reason - that several of our papers were referenced accordingly. They likely won't make that mistake again: after of course complimenting them on being honest (and knowledgeable) enough to have remarked that their Purkinje cells weren't really Purkinje cells, I had to reject the paper for the same reason. As I said, they likely won't make that mistake again - and will very likely get away with it. Imagine a comparable situation in a field (like physics) which has established a structural base for its enterprise: "We found it computationally expedient to ignore the second law of thermodynamics in our computations - sorry." BTW, I know that details are ignored all the time in physics as one deals with descriptions at different levels of scale - although even there, the field clearly would like to have a way to link across different levels of scale.
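[Editorial aside: the computational weight such voltage-dependent channels carry can be illustrated with a deliberately toy sketch - hypothetical parameters, emphatically not a Purkinje cell model - comparing the same single compartment with and without one instantaneously activating voltage-dependent conductance, driven by the same input current.]

```python
import math

# Toy illustration of why voltage-dependent ("active") channels matter:
# a passive leaky compartment vs. the same compartment plus one
# instantaneously activating inward conductance. All parameters are
# hypothetical, chosen only to make the qualitative contrast visible.

def simulate(active, i_ext=1.5, t_stop=100.0, dt=0.05):
    C = 1.0                      # membrane capacitance
    g_leak, e_leak = 0.1, -65.0  # passive leak conductance and reversal
    g_act, e_act = 0.5, 50.0     # the single "active" conductance
    v_half, k = -50.0, 5.0       # its sigmoidal activation curve
    V = e_leak
    trace = []
    for _ in range(int(t_stop / dt)):
        i_m = g_leak * (e_leak - V) + i_ext
        if active:
            # instantaneous (steady-state) activation of the channel
            m_inf = 1.0 / (1.0 + math.exp(-(V - v_half) / k))
            i_m += g_act * m_inf * (e_act - V)
        V += dt * i_m / C        # forward Euler
        trace.append(V)
    return trace

passive_peak = max(simulate(active=False))  # settles near the leak/input equilibrium
active_peak = max(simulate(active=True))    # regeneratively depolarizes much further
```

The passive version relaxes to the equilibrium set by leak and input alone; the active one crosses its activation threshold and regeneratively depolarizes to a very different operating point - which is the sense in which such channels make the cell the computational device that it is.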
I would claim, however, that that is precisely the "trick" that biology uses to "beat" the second law - linking all levels of scale together - another reason why you can't ignore the details in biological models if you really want to understand how biology works (too cryptic a comment, perhaps). Anyway, my advice would be to consider how physics made this transition many years ago, and to ask how neuroscience (and biology) can do so now. The key points, I think, are:
- you need to produce students who are REALLY both experimental and theoretical (like Newton); and that doesn't mean programs that "import" physicists and give them enough biology to believe they know what they are doing, or programs that link experimentalists to physicists to solve their computational problems
- you need to base the efforts on models (and therefore mathematics) of sufficient complexity to capture the physical reality of the system being studied (as Kepler was forced to do to make the sun-centric model of the solar system even close to as accurate as the previous earth-centered system)
- you need to build a new form of collaboration and communication that can support the complexity of those models. Fundamentally, we continue to use the publication system (short papers in a journal) that was invented as part of that transformation for physics way back then. Our laboratories are also largely isolated and non-cooperative, more appropriate for studying simpler things (like those in physics). Fortunately for us, we have a new communication tool (the Internet), although, as can be expected, we are mostly using it to reimplement old-style communication systems (e-journals) with a few twists (supplemental materials).
- funding agencies need to insist that anyone doing theory needs to be linked to the experimental side REALLY, and vice versa.
I proposed a number of years ago to NIH that they would make it into the history books if they simply required, the following Monday, that any submitted experimental grant include a REAL theoretical and computational component. Sadly, they interpreted that as meaning that P.I.s should state "an hypothesis" - which itself is remarkable, because most of the "hypotheses" I see stated in Federal grants are actually statements of what the P.I. believes to be true. Don't get me started on human imaging studies. arggg.
- As long as we are talking about what funding agencies can do, how about the following structure for grants: all grants need to be submitted collaboratively by two laboratories that have different theories (better, models) about how a particular part of the brain works. The grant should support a set of experiments that both parties agree distinguish between their two points of view. All results need to be published with joint authorship. In effect, that is how physics works - given its underlying structure.
- You need to get rid, as quickly as possible, of the pressure to "translate" neuroscience research explicitly into clinical significance. We are not even close to being able to do that intentionally, and the pressure (which is essentially a giveaway to the pharma and bio-tech industries anyway) is forcing neurobiologists to link to what is arguably the least scientific form of research there is - clinical research. Society needs to understand that an investment in basic research will eventually result in all the wonderful outcomes for humans we would all like, but this distortion is now killing real neuroscience just at a critical time, when we may finally have the tools to make the transition to a paradigmatic science.
As some of you know, I have been all about trying to do these things for many years - with the GENESIS project, with the original CNS graduate program at Caltech, with the CNS meetings (even, originally, with NIPS), and with the first "Methods in Computational Neuroscience" course at the Marine Biological Laboratory, whose latest incarnation in Brazil (LASCON) is actually wrapping up next week - and of course with my own research and students. Of course, I have not been alone in this, but it is remarkable how little impact all that has had on neuroscience or neuro-engineering. I have to say, honestly, that the strong tendency seems to be for these efforts to snap back to non-realistic, non-biologically based modeling and theoretical efforts. Perhaps Canada, in its usual practical and reasonable way (sorry), can figure out how to do this right. I hope so. Jim p.s. I have also been proposing recently that we scuttle the "intro neuroscience" survey courses in our graduate programs (religious instruction) and instead organize an introductory course built around the history of the discovery of the origin of the action potential, which culminated in the first (and last) Nobel Prize for work in computational neuroscience, awarded for the Hodgkin-Huxley model. The 50th anniversary of that prize was celebrated last year, and the year before I helped to organize a meeting celebrating the 60th anniversary of the publication of the original papers (which I care much more about anyway). That meeting was, I believe, the first meeting in neuroscience ever organized around a single (mathematical) model or theory - and in organizing it, I required all the speakers to show the HH model on their first slide, indicating which term or feature of the model their work was related to. Again, a first - but possible, as this is about the only "community model" we have.
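[Editorial aside: for readers who have never seen the model written out, here is a minimal, illustrative forward-Euler sketch of the space-clamped (current-clamp) Hodgkin-Huxley equations with the standard textbook squid-axon parameters. The propagating-axon form adds a second-order spatial term; this point-model sketch is for orientation only, not anyone's research code.]

```python
import math

# Space-clamped Hodgkin-Huxley membrane model (standard 1952 squid-axon
# parameters; units: mV, ms, uA/cm^2, mS/cm^2, uF/cm^2).
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.4

def alpha_m(V):
    if abs(V + 40.0) < 1e-7:   # removable singularity at V = -40 mV
        return 1.0
    return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))

def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))

def alpha_n(V):
    if abs(V + 55.0) < 1e-7:   # removable singularity at V = -55 mV
        return 0.1
    return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))

def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(i_ext=10.0, t_stop=50.0, dt=0.01):
    """Forward-Euler integration; returns the voltage trace in mV."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting state
    trace = []
    for _ in range(int(t_stop / dt)):
        i_na = G_NA * m**3 * h * (V - E_NA)   # sodium current
        i_k = G_K * n**4 * (V - E_K)          # potassium current
        i_l = G_L * (V - E_L)                 # leak current
        V += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        trace.append(V)
    return trace

trace = simulate()
# With sustained suprathreshold current the model fires repetitive
# action potentials whose peaks overshoot 0 mV.
```

The gating variables m, h, and n, each with its own voltage-dependent rate equation, are exactly the "terms of the model" that speakers at such a meeting could point to on a first slide.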
Most neuroscience textbooks today don't include that equation (in its propagating form, a second-order differential equation) and present the HH model primarily as a description of the action potential. Most theorists regard the HH model as a prime example of how progress can be made by ignoring the biological details. Both views and interpretations are historically and practically incorrect. In my opinion, if you can't handle the math in the HH model, you shouldn't be a neurobiologist, and if you don't understand the profound impact of HH's knowledge and experimental study of the squid giant axon on the model, you shouldn't be a neuro-theorist either. Just saying. :-) On Jan 25, 2014, at 6:58 AM, Thomas Trappenberg > wrote: James, enjoyed your writing. So, what to do? We are trying to get organized in Canada and are thinking about how we fit in with your (US) and the European approaches and big money. My thought is that our advantage might be flexibility: not having a single theme but rather a general supporting structure for theory and theory-experiment interactions. I believe the ultimate place we want to be is taking theoretical proposals seriously enough to design specific experiments for them, as in the Higgs project. (Any other suggestions? Canadians, see http://www.neuroinfocomp.ca if you are not already on there.) Also, with regard to big data, I believe that one very fascinating thing about the brain is that it can function with 'small data'. Cheers, Thomas On 2014-01-25 12:09 AM, "james bower" > wrote: Ivan, thanks for the response. Actually, the talks at the recent Neuroscience meeting about the Brain Project either excluded modeling altogether - or declared that we in the US could leave it to the Europeans. I am not in the least bit nationalistic - but collecting data without having models (rather than imaginings) to indicate what to collect is simply foolish, with many examples from history to demonstrate the foolishness.
In fact, one of the primary proponents (and likely beneficiaries) of this Brain Project, who gave the big talk at the Neuroscience meeting on the project (showing lots of pretty pictures), started his talk by asking: "what have we really learned since Cajal, except that there are also inhibitory neurons?" Shocking, not least because Cajal did in fact suggest that there might be inhibitory neurons. To quote: "Stupid is as stupid does." Forbes magazine estimated that finding the Higgs boson cost over $13BB, conservatively. The Higgs experiment was absolutely the opposite of a Big Data experiment. In fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment is all the more remarkable because it had the nearly unified support of the high-energy physics community; not that there weren't and aren't skeptics, but still, it is remarkable that the large majority could agree on the undertaking and effort. The reason is, of course, that there was a theory - one that dealt with the particulars and the details, not generalities. In contrast, there is a GREAT DEAL of skepticism (me included) about the Brain Project - its politics and its effects (or lack thereof) - within neuroscience. (Of course, many people are burying their concerns in favor of tin cups - hoping.) Neuroscience has had genome envy forever - the connectome is its response - but who says it's all in the connections? (sorry, "connectionists") Where is the theory? Hebb? You should read Hebb if you haven't - a rather remarkable treatise, but very far from a theory. If you want an honest answer to your question - I have not seen any good evidence so far that the approach works, and I deeply suspect that the nervous system is very much NOT like any machine we have built or designed to date.
I don't believe that Newton would have accomplished what he did had he not first been a remarkable experimentalist, tinkering with real things. I feel the same way about neuroscience. Having spent almost 30 years building realistic models of its cells and networks (and also doing experiments, as described in the article I linked to), we have made some small progress - but only by avoiding abstractions and paying attention to the details. Of course, most experimentalists and even most modelers have paid little or no attention. We have a sociological and structural problem that, in my opinion, only the right kind of models can fix, coupled with a real commitment to the biology - in all its complexity. And, as the model I linked to tries to make clear, we also have to agree to start working on common 'community models'. But, like bighorn sheep, it is much safer to stand on your own peak and make a lot of noise. You can predict with great accuracy the movement of the planets in the sky using circles linked to other circles - nice and easy math, and a very adaptable model (just add more circles when you need more accuracy, and invent entities like equant points, etc.). The problem is that without getting into the nasty math and reality of ellipses, you can't possibly know anything about gravity, or the origins of the solar system, or its various and eventual perturbations. As I have been saying for 30 years: beware Ptolemy and curve fitting. The details of reality matter. Jim

On Jan 24, 2014, at 7:02 PM, Ivan Raikov wrote: I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling approaches.
While it is true that complete and detailed understanding of neurophysiology and anatomy is at the heart of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical physics, and a lot can be learned about information representation and transmission in the brain using mathematical theories of distributed communicating processes. As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they can also be used to understand at least some of the fundamental properties of brain structures and processes? -Ivan Raikov

On Sat, Jan 25, 2014 at 8:31 AM, james bower wrote: [snip] An enormous amount of engineering and neuroscience continues to think that the feedforward pathway runs from the sensors to the inside - rather than seeing this as the actual feedback loop. To some this might sound like a semantic quibble, but I assure you it is not. If you believe, as I do, that the brain solves very hard problems in very sophisticated ways; that those solutions involve, in some sense, the construction of complex models of the world and of how the organism operates in that world; and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point. What that means, inevitably, in my view, is that the only way we will ever understand what 'brain-like' is, is to pay tremendous attention, experimentally and in our models, to the actual detailed anatomy and physiology of the brain's circuits and cells. Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information.
This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -- Brad Wyble Assistant Professor Psychology Department Penn State University http://wyblelab.com

From bower at uthscsa.edu Sun Jan 26 21:38:46 2014 From: bower at uthscsa.edu (james bower) Date: Sun, 26 Jan 2014 20:38:46 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare Message-ID: Sorry - history tells us that it is simply not the case, in the vast number of cases, that anything goes and every approach has an equal probability of success. We are certainly in an age of cultural 'inclusion' (at least supposedly), many of us were raised with the idea that everyone's opinion and effort is as worthwhile as everyone else's, and aggressive mammals (primates) adopt all kinds of strategies to avoid direct conflict (less risky) - still, it remains the case that history tells us some approaches lead to more progress than others. 30 years ago, it was only my hunch that the abstract form of computing structures designated 'Neural Networks' - despite the fact that they bore no real resemblance to anything neural - would not, in the end, tell us very much about the nervous system. For sure, lots of smart and dedicated people, in part with funding provided (largely) by military agencies who were sold on the brain-like nature of these devices (I know, I was there when the sales pitches were being made - grumbling in the corner), have done remarkable things - in engineering. But illuminate the function of the real nervous system, they have not. To my great relief, very few were even making that claim this year at NIPS - they didn't need to; Google is now the justification for the work. Now, some version of the idea that the connections are THE key (thus: 'connectionism')
has sold the White House on the idea that we should spend a large amount of money on further developing the 'look-see', stamp-collecting approach to trying to understand brain function. Who cares that the connections are made on DENDRITES, which are still ignored by almost all abstract brain modelers, despite the fact that they clearly do the computation!!! Perhaps not surprisingly, the principal proponent of this approach in the OSTP is the son of a neuroanatomist (and, I should say, a good one). Furthermore, we can be assured that we will have lots of pretty colors to show, which sadly now stands for progress in neuroscience, whether we know what they mean or not. I restate for the last time on this list, in this half century: 1) Ultimately, how the nervous system really works will depend on understanding its real circuitry and physiology - just as is also the case for cars, TV sets, computers, AND neural networks. (How many times at NN meetings - where a lot of bizarre claims can be made - have I heard serious scientists say, including some contributing to this discussion: 'I don't care what you claim it does, I want to know how it really does it'? Why doesn't that apply to the brain as well?) 2) Over and over again in history, paying attention to the actual physical structure of a system has led to unexpected breakthroughs - while theories and models rife with ad hoc assumptions and convenient abstractions have been a distraction. THEY ARE NOT TESTABLE either. So, yes, I stand by my conviction - with 30 years of Neural Network history (and abstract brain models as well) now behind us - that unless your theory or model is, or explicitly can be, linked to the actual physical structure of the nervous system, you are unlikely to make much useful progress in figuring out how the brain works. In the end, it is the structure of the brain itself that determines how it works - only the religious disagree with that.
What that means, sorry to say, is that you actually DO have to REALLY learn about that structure - better yet, study it yourself. And I absolutely stand by the conviction that collecting data free of any real (read: rendered in mathematics) theory or model is NOT the way to go. For Tycho Brahe, the precise motion of the planets was clearly the right data to collect. Nucleotide sequences were clearly the structural basis for DNA - but who says that the most important information we need now to advance neuroscience is a complete map of neural connectivity? As Bard pointed out, in effect, this enterprise doesn't even have a defined end point - nor could it. I predict now that many millions, perhaps even billions, of dollars later, we will have lots of pretty pictures, and that most of what everyone now thinks is true, they will still think is true - just as has happened in the vast majority of cases with the other major recent neuroscience operation based on the 'analysis' of pictures: human brain imaging. That is not progress. However, the BRAIN project as currently conceived certainly has a lower barrier to entry (i.e., coefficient of 'inclusiveness') than the requirement that you actually do the hard work of establishing a computational framework based on the real structure of brains first. Remarkably enough, some have even stated its lack of computational structure as an asset! One for all and all for one, I guess. Jim

From bower at uthscsa.edu Sun Jan 26 22:09:34 2014 From: bower at uthscsa.edu (james bower) Date: Sun, 26 Jan 2014 21:09:34 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: <516FA9F0-3570-45FC-A566-4D4778860B89@uthscsa.edu> PS. With respect to questions like 'how does the brain represent time?': I recommend you find the documentary titled 'Berkeley' that showed recently in the 'Independent Lens' series on PBS. Scan forward about an hour (it's 4 1/2 hours long) until you hear the 'cognitive neuroscientist' (probably how he lists his occupation, although I don't know for sure) lecturing students in his class on how time IS represented in the brain. Even this mailing list should recognize that the assertion of the 'amazing neuroscience discovery' that 40 Hz oscillations are driven by a special location in the brain stem is Neuro-BS.
This is NOT OK, but it is way, way, way too common in those who want to talk about the brain, and want to market what they are doing using neuroscience, but have NO IDEA what they are talking about. Shocking. I watched this documentary last night AFTER engaging in the connectionist tete-a-tete all day yesterday. You can imagine... Jim Bower

On Jan 26, 2014, at 7:35 PM, Danny Silver wrote: > 23 Problems in Systems Neuroscience, edited by L. Van Hemmen and T. Sejnowski
From tt at cs.dal.ca Sun Jan 26 23:39:26 2014 From: tt at cs.dal.ca (Thomas Trappenberg) Date: Mon, 27 Jan 2014 00:39:26 -0400 Subject: Connectionists: How the brain works In-Reply-To: References: Message-ID: Some of our discussion seems to be about 'how the brain works'. I am of course not smart enough to answer this question. So let me try another system. How does a radio work? I guess it uses an antenna to sense an electromagnetic wave, which is then amplified so that an electromagnet can drive a membrane to produce an airwave that can be sensed by our ear. Hope this captures some essential aspects. Now that you know, can you repair it when it doesn't work? I believe that there can be explanations on different levels, and I think they can be useful in different circumstances. Maybe my explanation above is good for generally curious people, but if you want to build a really good-sounding radio, you need to know much more about electronics, even quantitatively. And of course, if you want to explain how the electromagnetic force comes about, you might need to dig down into quantum theory. To take my point in the other direction, even knowing all the electronic components in a computer does not tell you how a word processor works. A multilayer perceptron is not the brain, but it captures some interesting insight into how mappings between different representations can be learned from examples. Is this how the brain works? It clearly does not explain everything, and I am not even sure it really captures much, if anything, about the brain. But if we want to create smarter drugs, then we have to know how ion channels and cell metabolism work. And if we want to help stroke patients, we have to understand how the brain can be reorganized. We need to work on several levels. Terry Sejnowski told us that the new Obama initiative is like the moon project: when that program was initiated we had no idea how to accomplish it, but dreams (and money) can be very motivating.
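The multilayer perceptron point above - that mappings between representations can be learned from examples - can be made concrete with a minimal sketch. This is illustrative only: the network size, learning rate, and function name here are my own choices, and NumPy is assumed. It trains a tiny tanh-hidden-layer network on the XOR mapping by plain backpropagation:

```python
import numpy as np

def train_xor(epochs=4000, lr=0.3, hidden=8, seed=0):
    """Train a tiny two-layer perceptron on XOR; return (losses, outputs)."""
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    # Small random weights; tanh hidden layer, sigmoid output unit.
    W1 = rng.normal(0.0, 1.0, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, (hidden, 1)); b2 = np.zeros(1)
    losses = []
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                   # learned hidden representation
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2))) # sigmoid output
        err = out - y
        losses.append(float(np.mean(err ** 2)))    # mean squared error
        # Backpropagation: chain rule through sigmoid and tanh.
        d_out = err * out * (1.0 - out)
        dW2 = h.T @ d_out
        db2 = d_out.sum(axis=0)
        d_h = (d_out @ W2.T) * (1.0 - h ** 2)
        dW1 = X.T @ d_h
        db1 = d_h.sum(axis=0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return losses, out
```

The interest is not XOR itself, but that the hidden layer develops an internal representation nobody specified - the modest sense in which such models 'capture some insight' about learned mappings while remaining far from the brain.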
This is a nice point, but I don't understand what a connection plan would give us. I think that without knowing precisely where and how strong connections are made, and how each connection would influence postsynaptic (or glial, etc.) cells, such information is useless. So why not have the goal of finding a cure for epilepsy instead? I do strongly believe we need theory in neuroscience; being only descriptive is not enough. BTW, theoretical physics is physics. Physics would not be at the level where it is without theory, and of course theory is meaningless without experiments. I think our point on this list is that theory must find its way into mainstream neuroscience, much more than it currently has. I have the feeling that we are digging our own grave with infighting and a certain narrow 'I know it all' mentality. Just try to publish something that is not mainstream, even when it has solid experimental backing. Cheers, Thomas

From ivan.g.raikov at gmail.com Mon Jan 27 01:38:48 2014 From: ivan.g.raikov at gmail.com (Ivan Raikov) Date: Mon, 27 Jan 2014 15:38:48 +0900 Subject: Connectionists: How the brain works In-Reply-To: References: Message-ID: Speaking of radio and electromagnetic waves, it is perhaps the case that neuroscience has not yet reached the maturity of 19th-century physics: while the discovery of electromagnetism is attributed to great experimentalists such as Ampere and Faraday, and its mathematical model to one of the greatest modelers in physics, Maxwell, none of it happened in isolation. There was a lot of duplicated experimental work and simultaneous independent discovery in that period, and Maxwell's equations were readily accepted and quickly refined by a number of physicists after he first postulated them. So in a sense, physics had a consensus community model of electromagnetism already in the first half of the 19th century.
Neuroscience is perhaps more akin to physics in the 17th century, when Newton's infinitesimal calculus was rejected and even mocked by the scientific establishment on the continent, and many years would pass until calculus was understood and widely accepted. So a unifying theory of neuroscience may not come until a lot of independent and reproducible experimentation brings it about. -Ivan

On Mon, Jan 27, 2014 at 1:39 PM, Thomas Trappenberg wrote: > Some of our discussion seems to be about 'how the brain works'. [...]

From axel.hutt at inria.fr Mon Jan 27 04:09:40 2014 From: axel.hutt at inria.fr (Axel Hutt) Date: Mon, 27 Jan 2014 10:09:40 +0100 (CET) Subject: Connectionists: How the brain works In-Reply-To: Message-ID: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> I fully agree with Thomas and appreciate, once more, the comparison with physics. Neuroscientists want to understand how the brain works, but maybe this is the wrong question. In physics, researchers do not ask this question since, honestly, who knows how electrons REALLY move around the atomic core? Even if we knew, this knowledge is not necessary, since we have quantum theory, which tells us something about the probabilities of certain states, and this more abstract (almost mesoscopic) description is sufficient to explain large-scale phenomena. Even on a smaller scale quantum theory works fine, but only because physicists DO NOT ASK what electrons do in detail, but accept the concept.
By contrast, today's neuroscience asks questions about all the details - what and why? Probably it is better to come up with a 'concept', or more abstract model, of neural coding, ideally on multiple levels of description. But I guess looking into too much (neurophysiological) detail slows us down, and we need to ask other questions, maybe directed more towards experimental phenomena. A good example is the work of Hubel and Wiesel and the concept of columns (I know Jim and others do not like the concept): a concept of this kind derived from experimental data. Of course these columns are not there 'physically' (there are no borders), but they represent more abstract functional units which allow one to explain certain dynamical features at the level of neural populations (e.g. in the work on visual hallucinations). Today this concept is largely attacked because biologists 'do not see it'. But, again going back to physics: the trajectories of single electrons in atoms have never been measured, and so the probability density of their location has never been computed from single trajectories, yet the resulting concept of probability orbitals of electrons is well established today, because it works well. Another analogy from physics (sorry to bore you, but I find the comparison important): do you believe that an object changes when you look at it (quantum theory says so)? No, surely not, since you do not experience/measure it. But, hey, the underlying quantum theory is a good description of things. What I want to say: in neuroscience we need more theory, based on physiological (multi-scale) experiments, that describes the found data; we need to accept more (apparently) abstract models and get rid of our dogmatic view of how to do research. If an abstract description explains several different phenomena well, then per se it is a good concept (e.g. the neural columns concept).
Well, I have to go back to theoretical work, but it was very nice and stimulating attending this discussion. Axel -- Dr. rer. nat. Axel Hutt, HDR INRIA CR Nancy - Grand Est Equipe NEUROSYS (Head) 615, rue du Jardin Botanique 54603 Villers-les-Nancy Cedex France http://www.loria.fr/~huttaxel
From juergen at idsia.ch Mon Jan 27 03:36:30 2014 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Mon, 27 Jan 2014 09:36:30 +0100 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <516FA9F0-3570-45FC-A566-4D4778860B89@uthscsa.edu> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <516FA9F0-3570-45FC-A566-4D4778860B89@uthscsa.edu> Message-ID: <0F4BE977-1131-4507-9830-F84091FC6715@idsia.ch> Related news that illustrates some of the forces driving this: Shane Legg, one of our former PhD students, is a co-founder of DeepMind (with Demis Hassabis and Mustafa Suleyman), reportedly just acquired by Google for ~$500m. Several additional ex-members of the Swiss AI Lab IDSIA are working for DeepMind. https://www.theinformation.com/Google-beat-Facebook-For-DeepMind-Creates-Ethics-Board Juergen

From bower at uthscsa.edu Mon Jan 27 08:56:37 2014 From: bower at uthscsa.edu (james bower) Date: Mon, 27 Jan 2014 07:56:37 -0600 Subject: Connectionists: How the brain works In-Reply-To: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> Message-ID: <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> A couple of points I have actually been asked (under the wire) to comment on: Do you need to model quantum mechanics? No (sorry, Penrose). One of the straw men raised when talking about realistic models is always: 'at what level do you stop - quantum mechanics?'. The answer is really quite simple: you need to model biological systems at the level at which evolution (selection) operates, and not lower.
In some sense, what all biological investigation is about is how evolution has patterned matter. Therefore, if evolution doesn't manipulate a particular level, understanding that level is not necessary for understanding how the machine works. Thomas's original post regarding radios is actually illuminating on this point. It starts by saying: "I guess it uses an antenna to sense an electromagnetic wave that is then amplified so that an electromagnet can drive a membrane to produce an airwave that can be sensed by our ear. Hope this captures some essential aspects." This level of description actually captures the essential structure of a radio - i.e., the level of function at which the radio's designers chose the components and their properties. So, how proteins interact is important; the fact that that interaction depends on the behavior of electrons is not. The behavior of electrons of course constrains how a particular molecule might interact - but evolution does not change the structure of electrons (at least as far as I know). Another question that has been raised, somewhat defensively, has to do with the other level of modeling - whether abstract models with no relationship to the actual structure of the nervous system can, by definition, not capture how the brain works. There are several answers to this question. The first and most obvious has to do with scientific process - how would you ever know? If a model is not testable at the level of the machine and its parts, then there is no way to know, and the model is not useful for what I am trying to do, which is to understand the machinery. Put another way, if a model does not help in understanding 'the engineering' (if you allow me the shorthand), then it is also not useful in figuring out how the brain really works. There is another issue here as well, and that has to do with the likelihood that the model is correct.
This gets into issues of complexity theory, a subject many members of this list know better than I do. However, I believe that one of the insights attributed to Kolmogorov (who has been mentioned previously) is that there is a relationship between the complexity of a problem and the complexity of its solution. If there is a solution to a problem the brain solves that is simpler than the brain itself, then there is some constraint that has forced the brain to be more complex than it needs to be: examples usually given include the component parts it has been 'forced' to work with, or constraints imposed by the supposedly sequential nature of evolutionary processes (making an arm from a fin). While it is at least worth considering whether the arm-from-fin argument applies to the nervous system, because we don't understand how the brain works, we can't really answer the question of whether there is some simpler version that would have worked just as well. Accordingly, as with the radio analogy, asking whether a simpler version would work as well depends, in principle, on first figuring out how the actual system works. As I have said, abstract models are less likely to be helpful there, because they don't directly address the components. However - and there is now finally some actual scientific work on this - most neurobiologists, and even a lot of neural network types, still don't seem to take into account how expensive brains are to run, and the extreme pressure that has likely put on the brain to reach a ridiculous level of efficiency. Accordingly, I am betting on the likelihood that the brain is not just some hacked solution but may in fact be an optimal solution to the problems it solves (remember, species also 'pick' - again, forgive the shorthand - the problems they solve, based on the structures they already have). So I myself assume that if there were a simpler physical solution, evolution would have found it.
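The Kolmogorov point above - a relationship between the complexity of a problem and the complexity of its solution - can be illustrated crudely with compression as a stand-in for description length. Kolmogorov complexity itself is uncomputable, so zlib here is only a rough, illustrative proxy, and the data are made up for the example:

```python
import random
import zlib

# A highly structured signal admits a very short description...
structured = b"ab" * 1000

# ...while an incompressible (pseudo-random) signal of the same length
# does not. Seeded for reproducibility; purely an illustration.
rng = random.Random(0)
noisy = bytes(rng.getrandbits(8) for _ in range(2000))

# Compressed size approximates description length (very loosely).
short = len(zlib.compress(structured))
long_ = len(zlib.compress(noisy))
# The structured signal compresses to a tiny fraction of its size;
# the noise does not compress at all.
```

By the same intuition, if the brain's structure admitted a radically shorter description that solved the same problems, the detail would be dispensable; the bet described above is that it does not.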
At least, I can say with certainty that wherever it has been possible to measure the physical "sophistication" of the nervous system, it is operating very close to the constraints posed by physics (single-photon detection by the eye, the ear operating just above the level of Brownian noise, etc.). No reason, therefore, not to assume until shown otherwise that the insides are also "optimized". And, as I have said, we won't really be able to even address the question until we know how the thing works. That said, even if a simpler solution is as effective as the brain, I am interested in the brain. Engineers are interested in building other devices, and they would likely always prefer (for obvious reasons) that those devices be as simple as possible. As I have said, I believe evolution is under the same constraint, actually, but in any event the question I have chosen to try to understand (perhaps foolishly) is how brains really work - not whether something else could do the same thing in a simpler form. Finally, the above argument also relates to the issue of simple redundancy - brains almost certainly can't afford redundancy in the way that engineers have traditionally built in redundancy. To give credit where credit is due, one of the lessons that I did take from neural networks is that there are many more sophisticated ways to achieve fault tolerance than simple redundancy. Fault tolerance (this is how I actually think about learning, in fact) is a key requirement of brains - but it is almost certainly not accomplished by simple redundancy. Jim On Jan 27, 2014, at 3:09 AM, Axel Hutt wrote: > I fully agree with Thomas and appreciate, once more, the comparison with physics. > > Neuroscientists want to understand how the brain works, but maybe this is the wrong question. In physics, > researchers do not ask this question, since, honestly, who knows how electrons REALLY move around > the atom core?
Even if we knew it, this knowledge is not necessary, since we have quantum theory that tells us something about > probabilities in certain states, and this more abstract (almost mesoscopic) description is sufficient to > explain large-scale phenomena. Even on a smaller scale, quantum theory works fine, but only because physicists > DO NOT ASK what electrons do in detail, but accept the concept. By contrast, today's neuroscience asks > questions about all the details - but why? Probably it is better to come up with a "concept" or more abstract > model of neural coding, for sure on multiple description levels. But I guess looking into too much > (neurophysiological) detail slows us down, and we need to ask other questions, maybe directed more > towards experimental phenomena. > > A good example is the work of Hubel and Wiesel and the concept of columns (I know Jim and others do not like > the concept), based on experimental data and deriving such a kind of concept. Of course these columns are not there > 'physically' (there are no borders), but they represent more abstract functional units which allow one to explain certain > dynamical features at the level of neural populations (e.g. in the work on visual hallucinations). Today this concept > is largely attacked since biologists 'do not see it'. But, again going back to physics, the trajectories of single electrons in > atoms have not been measured yet, and so the probability density of their location has not been computed from > the single trajectories; yet the resulting concept of probability orbits of electrons is well established today since it > works well. > > Another analogy from physics (sorry to bore you, but I find the comparison important): do you believe that an > object changes when you look at it (quantum theory says so)? No, surely not, since you do not experience/measure it. > But, hey, the underlying quantum theory is a good description of things.
What I want to say: in neuroscience we > need more theory based on physiological (multi-scale) experiments that describes the data found, permits us to accept > more (apparently) abstract models, and lets us get rid of our dogmatic view on how to do research. If an abstract description > explains several different phenomena well, then per se it is a good concept (e.g. like the neural columns concept). > > Well, I have to go back to theoretical work, but it was very nice and stimulating attending this discussion. > > Axel > > Some of our discussion seems to be about 'how the brain works'. I am of course not smart enough to answer this question. So let me try another system. > How does a radio work? I guess it uses an antenna to sense an electromagnetic wave that is then amplified so that an electromagnet can drive a membrane to produce an airwave that can be sensed by our ear. Hope this captures some essential aspects. > Now that you know, can you repair it when it doesn't work? > I believe that there can be explanations on different levels, and I think they can be useful in different circumstances. Maybe my above explanation is good for generally curious people, but if you want to build a super-good-sounding radio, you need to know much more about electronics, even quantitatively. And of course, if you want to explain how the electromagnetic force comes about, you might need to dig down into quantum theory. And to take my point in the other direction, even knowing all the electronic components in a computer does not tell you how a word processor works. > A multilayer perceptron is not the brain, but it captures some interesting insight into how mappings between different representations can be learned from examples. Is this how the brain works? It clearly does not explain everything, and I am not even sure if it really captures much, if anything, of the brain. But if we want to create smarter drugs, then we have to know how ion channels and cell metabolism work.
And if we want to help stroke patients, we have to understand how the brain can be reorganized. We need to work on several levels. > Terry Sejnowski told us that the new Obama initiative is like the moon project. When that program was initiated we had no idea how to accomplish it, but dreams (and money) can be very motivating. > This is a nice point, but I don't understand what a connection plan would give us. I think that without knowing precisely where and how strong connections are made, and how each connection would influence postsynaptic or glial etc. cells, such information is useless. So why not have the goal of finding a cure for epilepsy? > I do strongly believe we need theory in neuroscience. Being only descriptive is not enough. BTW, theoretical physics is physics. Physics would not be at the level where it is without theory. And of course, theory is meaningless without experiments. I think our point on this list is that theory must find its way into mainstream neuroscience, much more than it currently does. I have the feeling that we are digging our own grave by infighting and some narrow 'I know it all' mentality. Just try to publish something which is not mainstream, even though it has solid experimental backing. > Cheers, Thomas > > > > -- > > Dr. rer. nat. Axel Hutt, HDR > INRIA CR Nancy - Grand Est > Equipe NEUROSYS (Head) > 615, rue du Jardin Botanique > 54603 Villers-les-Nancy Cedex > France > http://www.loria.fr/~huttaxel Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient.
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited, and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof, and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazskegl at gmail.com Mon Jan 27 09:28:39 2014 From: balazskegl at gmail.com (=?windows-1252?Q?Bal=E1zs_K=E9gl?=) Date: Mon, 27 Jan 2014 15:28:39 +0100 Subject: Connectionists: How the brain works In-Reply-To: <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> Message-ID: > While it is at least worth considering whether the arm-from-fin argument applies to the nervous system, because we don't understand how the brain works, we can't really answer the question of whether there is some simpler version that would have worked just as well. Accordingly, as with the radio analogy, in principle, asking whether a simpler version would work as well depends on first figuring out how the actual system works. As I have said, abstract models are less likely to be helpful there, because they don't directly address the components. Wouldn't the airplane/bird analogy work here? Does being able to design an airplane help in understanding how birds fly? I think it does.
Evolution didn't invent the wheel, so it had to go a complex (and not necessarily very efficient) way to "design" locomotion, which means that airplane engines don't really explain how birds propel themselves. On the other hand, both have wings, and controlling the flying devices looks pretty similar in the two cases. In the same way, if some artificial network can reproduce intelligent traits, we might be able to guide what we're looking for in the brain (a model, whose necessity we agree on). Of course, the scientific process rarely works in this way, but that's because you need computers for this kind of "experimentation", and computers are quite new. Balázs -- Balazs Kegl Research Scientist (DR2) Linear Accelerator Laboratory CNRS / University of Paris Sud http://users.web.lal.in2p3.fr/kegl From gary at eng.ucsd.edu Mon Jan 27 09:43:08 2014 From: gary at eng.ucsd.edu (Gary Cottrell) Date: Mon, 27 Jan 2014 15:43:08 +0100 Subject: Connectionists: How the brain works In-Reply-To: References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> Message-ID: Somewhat irrelevant to this discussion, but the bird-plane analogy actually runs the other way around. As many of you probably know, studying birds was key to the Wright Brothers' figuring out how to control the airplane, which they saw as the main problem (this is cut-and-pasted from Wikipedia): On the basis of observation, Wilbur concluded that birds changed the angle of the ends of their wings to make their bodies roll right or left.[30] The brothers decided this would also be a good way for a flying machine to turn - to "bank" or "lean" into the turn just like a bird, and just like a person riding a bicycle, an experience with which they were thoroughly familiar. Equally important, they hoped this method would enable recovery when the wind tilted the machine to one side (lateral balance).
They puzzled over how to achieve the same effect with man-made wings and eventually discovered wing-warping when Wilbur idly twisted a long inner-tube box at the bicycle shop. g. On Jan 27, 2014, at 3:28 PM, "Balázs Kégl" wrote: >> While it is at least worth considering whether the arm-from-fin argument applies to the nervous system, because we don't understand how the brain works, we can't really answer the question of whether there is some simpler version that would have worked just as well. Accordingly, as with the radio analogy, in principle, asking whether a simpler version would work as well depends on first figuring out how the actual system works. As I have said, abstract models are less likely to be helpful there, because they don't directly address the components. > > Wouldn't the airplane/bird analogy work here? Does being able to design an airplane help in understanding how birds fly? I think it does. Evolution didn't invent the wheel, so it had to go a complex (and not necessarily very efficient) way to "design" locomotion, which means that airplane engines don't really explain how birds propel themselves. On the other hand, both have wings, and controlling the flying devices looks pretty similar in the two cases. In the same way, if some artificial network can reproduce intelligent traits, we might be able to guide what we're looking for in the brain (a model, whose necessity we agree on). Of course, the scientific process rarely works in this way, but that's because you need computers for this kind of "experimentation", and computers are quite new. > > Balázs > > > -- > Balazs Kegl > Research Scientist (DR2) > Linear Accelerator Laboratory > CNRS / University of Paris Sud > http://users.web.lal.in2p3.fr/kegl > > > > > [I am in Dijon, France on sabbatical this year.
To call me, Skype works best (gwcottrell), or dial +33 788319271] Gary Cottrell 858-534-6640 FAX: 858-534-7029 My schedule is here: http://tinyurl.com/b7gxpwo Computer Science and Engineering 0404 IF USING FED EX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. "Physical reality is great, but it has a lousy search function." -Matt Tong "Only connect!" -E.M. Forster "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton "There is nothing objective about objective functions" - Jay McClelland "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From viktor.jirsa at univ-amu.fr Mon Jan 27 04:51:56 2014 From: viktor.jirsa at univ-amu.fr (Viktor Jirsa) Date: Mon, 27 Jan 2014 10:51:56 +0100 Subject: Connectionists: Postdoc position in information processing for neuroscience References: <7321E85C-4FEF-4CD2-A575-FDF2B92FC264@univ-amu.fr> Message-ID: <220A2ABC-93E7-4604-A62B-C59BC0981C70@univ-amu.fr> Postdoc position in information processing for Neuroscience We are looking for someone willing to participate in ongoing projects conducted in vivo, based on electrophysiological and molecular recordings in several brain regions in rodents (anaesthetised and freely moving), to analyse multidimensional datasets. These projects gather neurobiologists, theoretical neuroscientists, specialists in organic electronics and clinicians (some projects are epilepsy-oriented). The candidate must have good knowledge of both signal processing and basic neurophysiology (how neurones and neuronal networks communicate through spikes, LFP and oscillations). Signal processing skills required: LFP analysis (spectral analysis, correlation/coherence/causality), spike sorting, statistics, and Matlab-based programming. These tools will ideally be developed on different OSs (mainly Ubuntu, but also Mac and Windows). Recent publications of the team: Quilichini et al., Neuron, 2012; Khodagholy et al., Nature Comm, 2013; Silva et al., Science TM 2013. Please send a motivation letter, CV and two reference letters to: Christophe Bernard INS - Institut de Neurosciences des Systèmes UMR INSERM 1106, Aix-Marseille Université Equipe Physionet 27 Bd Jean Moulin 13385 Marseille Cedex 05 France christophe.bernard at univ-amu.fr +33 4 91 29 98 06 http://ins.medecine.univmed.fr -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bower at uthscsa.edu Sun Jan 26 22:05:17 2014 From: bower at uthscsa.edu (james bower) Date: Sun, 26 Jan 2014 21:05:17 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: Thanks Danny, Funny about coincidences. I almost posted earlier to the list a review I was asked to write of exactly the book you reference: 23 Problems in Systems Neuroscience, edited by L. Van Hemmen and T. Sejnowski. It is appended to this email. Needless to say, while I couldn't agree with you more on the importance of asking the right questions, many of the chapters in this book make clear, I believe, the fundamental underlying problem posed by having no common theoretical basis for neuroscience research. Jim Bower Published in The American Scientist Are We Ready for Hilbert? James M. Bower 23 Problems in Systems Neuroscience. Edited by J. Leo van Hemmen and Terrence J. Sejnowski. xvi + 514 pp. Oxford University Press, 2006. $79.95. 23 Problems in Systems Neuroscience grew out of a symposium held in Dresden in 2000, inspired by an address given by the great mathematician David Hilbert 100 years earlier. In his speech, Hilbert commemorated the start of the 20th century by delivering what is now regarded as one of the most influential mathematical expositions ever made. He outlined 23 essential problems that not only organized subsequent research in the field, but also clearly reflected Hilbert's axiomatic approach to the further development of mathematics.
Anticipating his own success, he began, "Who of us would not be glad to lift the veil behind which the future lies hidden; to cast a glance at the next advances of our science and at the secrets of its development during future centuries?" I take seriously the premise represented in this new volume's title and preface that it is intended to "serve as a source of inspirations for future explorers of the brain." Unfortunately, if the contributors sought to exert a "Hilbertian" influence on the field by highlighting 23 of the most important problems in systems neuroscience, they have, in my opinion, failed. In failing, however, this book clearly illustrates fundamental differences between neuroscience (and biology in general) today and mathematics (and physics) in 1900. Implicit in Hilbert's approach is the necessity of some type of formal structure underlying the problems at hand, allowing other investigators to understand their natures and then collaboratively explore a general path to their solutions. Yet there is little consistency in the form of the problems presented in this book. Instead, many (perhaps most) of the chapters are organized, at best, around vague questions such as, "How does the cerebral cortex work?" At worst, the authors simply recount what is, in effect, a story promoting their own point of view. The very first chapter, by Gilles Laurent, is a good example of the latter. After starting with a well-worn plea for considering the results of the nonmammalian, nonvisual systems he works on, Laurent summarizes a series of experiments (many of them his own) supporting his now-well-known position regarding the importance of synchrony in neuronal coding. This chapter could have presented a balanced discussion of the important questions surrounding the nature of the neural code (as attempted in one chapter by David McAlpine and Alan R. Palmer and another by C.
van Vreeswijk), or even referenced and discussed some of the recently published papers questioning his interpretations. Instead, the author chose to attempt to convince us of his own particular solution. I don't mean to pick on Laurent, as his chapter takes a standard form in symposia volumes; rather, his approach illustrates the general point that much of "systems neuroscience" (and neuroscience in general) revolves around this kind of storytelling. The chapter by Bruno A. Olshausen and David J. Field makes this point explicitly, suggesting that our current "story-based" view of the function of the well-studied visual cortex depends on (1) a biased sampling of neurons, (2) a bias in the kinds of stimuli we present and (3) a bias in the kinds of theories we like to construct. In fairness, several chapters do attempt to address real problems in a concise and unbiased way. The chapter by L. F. Abbott, for example, positing, I think correctly, that the control of the flow of information in neural systems is a central (and unsolved) problem, is characteristically clear, circumscribed and open-minded. Refreshingly, Abbott's introduction states, "In the spirit of this volume, the point of this contribution is to raise a question, not to answer it. . . . I have my prejudices, which will become obvious, but I do not want to rule out any of these as candidates, nor do I want to leave the impression that the list is complete or that the problem is in any sense solved." Given his physics background, Abbott may actually understand enough about Hilbert's contribution to have sought its spirit. Most chapters, however, require considerable detective work, and probably also a near-professional understanding of the field, to find anything approaching Hilbert's enumeration of fundamental research problems. In some sense I don't think the authors are completely to blame.
Although many are prominent in the field, this lack of focus on more general and well-defined problems is, I believe, endemic in biology as a whole. While this may slowly be changing, the question of how, and even whether, biology can move from a fundamentally descriptive, story-based science to one from which Hilbertian-style problems can be extracted may be THE problem in systems neuroscience. A few chapters do briefly raise this issue. For example, in their enjoyable article on synesthesia, V. S. Ramachandran and Edward M. Hubbard identify their approach as not fashionable in psychology partly because of "the lingering pernicious effect of behaviorism" and partly because "psychologists like to ape mature quantitative physics, even if the time isn't ripe." Laurenz Wiskott, in his chapter on possible mechanisms for size and shift invariance in visual (and perhaps other) cortices, raises what may be the more fundamental question as to whether biology is even amenable to the form of quantification and explanation that has been so successful in physics: "Either the brain solves all invariance problems in a similar way based on a few basic principles or it solves each invariance problem in a specific way that is different from all others. In the former case [asking] the more general question would be appropriate. . . . In the latter case, that is, if all invariance problems have their specific solution, the more general question would indeed be a set of questions and as such not appropriate to be raised and discussed here." He then moderates the dichotomy by stating diplomatically, "There is, of course, a third and most likely alternative, and that is that the truth lies somewhere between these two extremes." Thus, Wiskott leaves unanswered the fundamental question about the generality of brain mechanisms or computational algorithms. As in mathematics 100 years ago, answering basic questions in systems neuroscience is tied up in assumptions regarding appropriate methodology.
For Hilbert's colleagues, this was obvious and constituted much of the debate following his address; this fundamental issue, however, is only rarely discussed in biology. Indeed, I want to be careful not to give the impression that these kinds of big-picture issues are given prominence in this volume; they are not. Rather, as is typical for books generated by these kinds of symposia, many of the chapters are simply filled with the particular details of a particular subject, although several authors should be commended for at least discussing their favorite systems in several species. However, given the lack of overall coordination, one wonders what impact this volume will have. One way to gauge the answer is to look for evidence that the meeting presentations influenced the other participants. As an exercise, I summarized the major points and concerns each author raised in their chapters and then checked that list against the assumptions and assertions made by the other authors writing on similar subjects. The resulting tally, I would assert, provides very little evidence that these authors attended the same meeting, or perhaps even that they are part of the same field! For example, the article titled "What Is Fed Back?" by Jean Bullier identifies, I think correctly, what will become a major shift in thinking about how brains are organized. As Bullier notes, there is growing evidence that the internal state of the brain has a much more profound effect on the way the brain processes sensory information than previously suspected. Yet this fundamental issue is scarcely mentioned in the other chapters, quite a few of which are firmly based on the old feed-forward "behaviorist" model of brain function. Similarly, the chapter by Olshausen and Field is followed immediately by a paper by Steven W. Zucker on visual processing that depends on many of the assumptions that Olshausen and Field call into question. One hundred years ago, Hilbert's 23 questions organized a field.
The chapters in this book make pretty clear that we are still very far away from having a modern-day Hilbert, or even a committee of "experts", come up with a list of 23 fundamental questions that are accepted, or perhaps even understood, by the field of neuroscience as a whole. > > Asking good questions that come with well-developed requirements is the starting point of good science. At least that is what we tell our graduate students. > > .. Danny > > ======================= > Daniel L. Silver, Ph.D. danny.silver at acadiau.ca > Professor, Jodrey School of Computer Science, Acadia University > Office 314, Carnegie Hall, Wolfville, NS Canada B4P 2R6 > p:902-585-1413 f:902-585-1067 > > > From: Geoffrey Hinton > Date: Sunday, 26 January, 2014 3:43 PM > To: Brad Wyble > Cc: Connectionists list > Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare > > I can no longer resist making one point. > > A lot of the discussion is about telling other people what they should NOT be doing. I think people should just get on and do whatever they think might work. Obviously they will focus on approaches that make use of their particular skills. We won't know until afterwards which approaches led to major progress and which were dead ends. Maybe a fruitful approach is to model every connection in a piece of retina in order to distinguish between detailed theories of how cells get to be direction selective. Maybe it's building huge and very artificial neural nets that are much better than other approaches at some difficult task. Probably it's both of these and many others too. The way to really slow down the expected rate of progress in understanding how the brain works is to insist that there is one right approach and that nearly all the money should go to that approach. > > Geoff > > > > On Sat, Jan 25, 2014 at 3:00 PM, Brad Wyble wrote: >> I am extremely pleased to see such vibrant discussion here, and my thanks to Juyang for getting the ball rolling.
>> >> Jim, I appreciate your comments and I agree in large measure, but I have always disagreed with you as regards the necessity of simulating everything down to a lowest common denominator. Like you, I enjoy drawing lessons from the history of other disciplines, but unlike you, I don't think the analogy between neuroscience and physics is all that clear-cut. The two fields deal with vastly different levels of complexity, and therefore I don't think it should be expected that they will (or should) follow the same trajectory. >> >> To take your Purkinje cell example, I imagine that there are those who view any such model that lacks an explicit simulation of the RNA as being incomplete. To such a person, your models would also be unfit for the literature. So would we then change the standards such that no model can be published unless it includes an explicit simulation of the RNA? And why stop there? Where does it end? In my opinion, we can't make effective progress in this field if everyone is bound to the molecular level. >> >> I really think that neuroscience presents a fundamental challenge that is not present in physics, which is that progress can only occur when theory is developed at different levels of abstraction that overlap with one another. The challenge is not how to force everyone to operate at the same level of formal specificity, but how to allow effective communication between researchers operating at different levels. >> >> In aid of meeting this challenge, I think that our field should take more inspiration from engineering, a model-based discipline that already has to work simultaneously at many different scales of complexity and abstraction. >> >> >> Best, >> Brad Wyble >> >> >> >> >> On Sat, Jan 25, 2014 at 9:59 AM, james bower wrote: >>> Thanks for your comments Thomas, and good luck with your effort. >>> >>> I can't refrain from making the probably culturist remark that this seems a very practical approach.
>>> >>> I have for many years suggested that those interested in advancing biology in general, and neuroscience in particular, to a "paradigmatic" as distinct from a descriptive/folkloric science would benefit from understanding this transition as physics went through it in the 15th and 16th centuries. In many ways, I think that is where we are today, although with perhaps the decided disadvantage that we have a lot of physicists around who, again in my view, don't really understand the origins of their own science. By that I mean that they don't understand how much of their current scientific structure, for example the relatively clean separation between "theorists" and "experimentalists", is dependent on the foundation built by those (like Newton) who were both in an earlier time. Once you have a solid underlying computational foundation for a science, then you have the luxury of this kind of specialization, as there is a framework that ties it all together. The Higgs effort is a very visible recent example. >>> >>> Neuroscience has nothing of the sort. As I point out in the article I linked to in my first posting, while it was first proposed 40 years ago (by Rodolfo Llinas) that the cerebellar Purkinje cell had active dendrites (i.e. that there were non-directly-synaptically-associated voltage-dependent ion channels in the dendrite that governed its behavior), and 40 years of anatomically and physiologically realistic modeling has been necessary to start to understand what they do, many cerebellar modeling efforts today simply ignore these channels. While that, again, may seem to many on this list too far buried in the details, these voltage-dependent channels make the Purkinje cell the computational device that it is. >>> >>> Recently, I was asked to review a cerebellar modeling paper in which the authors actually acknowledged that their model lacked these channels because they would have been too computationally expensive to include.
Sadly for those authors, I was asked to review the paper for the usual reason - that several of our papers were referenced accordingly. They likely won?t make that mistake again - as after of course complementing them on the fact that they were honest (and knowledgable) enough to have remarked on the fact that their Purkinje cells weren?t really Purkinje cells - I had to reject the paper for the same reason. >>> >>> As I said, they likely won?t make that mistake again - and will very likely get away with it. >>> >>> Imagine a comparable situation in a field (like physics) which has established a structural base for its enterprise. ?We found it computational expedient to ignore the second law of thermodynamics in our computations - sorry?. BTW, I know that details are ignored all the time in physics as one deals with descriptions at different levels of scale - although even there, the field clearly would like to have a way to link across different levels of scale. I would claim, however, that that is precisely the ?trick? that biology uses to ?beat? the second law - linking all levels of scale together - another reason why you can?t ignore the details in biological models if you really want to understand how biology works. (too cryptic a comment perhaps). >>> >>> Anyway, my advice would be to consider how physics made this transition many years ago, and ask the question how neuroscience (and biology) can now. Key points I think are: >>> - you need to produce students who are REALLY both experimental and theoretical (like Newton). (and that doesn?t mean programs that ?import? 
physicists and give them enough biology to believe they know what they are doing, or programs that link experimentalists to physicists to solve their computational problems) >>> - you need to base the efforts on models (and therefore mathematics) of sufficient complexity to capture the physical reality of the system being studied (as Kepler was forced to do to make the sun centric model of the solar system even as close to as accurate as the previous earth centered system) >>> - you need to build a new form of collaboration and communication that can support the complexity of those models. Fundamentally, we continue to use the publication system (short papers in a journal) that was invented as part of the transformation for physics way back then. Our laboratories are also largely isolated and non-cooperative, more appropriate for studying simpler things (like those in physics). Fortunate for us, we have a new communication tool (the Internet) although, as can be expected, we are mostly using it to reimplement old style communication systems (e-journals) with a few twists (supplemental materials). >>> - funding agencies need to insist that anyone doing theory needs to be linked to the experimental side REALLY, and vice versa. I proposed a number of years ago to NIH that they would make it into the history books if they simply required the following monday, that any submitted experimental grant include a REAL theoretical and computational component - Sadly, they interpreted that as meaning that P.I.s should state "an hypothesis" - which itself is remarkable, because most of the ?hypotheses? I see stated in Federal grants are actually statements of what the P.I. believes to be true. Don?t get me started on human imaging studies. 
arggg >>> - As long as we are talking about what funding agencies can do, how about the following structure for grants - all grants need to be submitted collaboratively by two laboratories who have different theories (better models) about how a particular part of the brain works. The grant should support at set of experiments, that both parties agree distinguish between their two points of view. All results need to be published with joint authorship. In effect that is how physics works - given its underlying structure. >>> - You need to get rid, as quickly as possible, the pressure to ?translate? neuroscience research explicitly into clinical significance - we are not even close to being able to do that intentionally - and the pressure (which is essentially a give away to the pharma and bio-tech industries anyway) is forcing neurobiologists to link to what is arguably the least scientific form of research there is - clinical research. It just has to be the case that society needs to understand that an investment in basic research will eventually result in all the wonderful outcomes for humans we would all like, but this distortion now is killing real neuroscience just at a critical time, when we may finally have the tools to make the transition to a paradigmatic science. >>> As some of you know, I have been all about trying to do these things for many years - with the GENESIS project, with the original CNS graduate program at Caltech, with the CNS meetings, (even originally with NIPS) and with the first ?Methods in Computational Neuroscience Course" at the Marine Biological laboratory, whose latest incarnation in Brazil (LASCON) is actually wrapping up next week, and of course with my own research and students. Of course, I have not been alone in this, but it is remarkable how little impact all that has had on neuroscience or neuro-engineering. 
I have to say, honestly, that the strong tendency seems to be for these efforts to snap back to the non-realistic, non-biologically based modeling and theoretical efforts. >>> >>> Perhaps Canada, in its usual practical and reasonable way (sorry) can figure out how to do this right. >>> >>> I hope so. >>> >>> Jim >>> >>> p.s. I have also been proposing recently that we scuttle the ?intro neuroscience? survey courses in our graduate programs (religious instruction) and instead organize an introductory course built around the history of the discovery of the origin of the axon potential that culminated in the first (and last) Nobel prize work in computational neuroscience for the Hodkin Huxley model. The 50th anniversary of that prize was celebrated last year, and the year before I helped to organize a meeting celebrating the 60th anniversary of the publication of the original papers (which I care much more about anyway). That meeting was, I believe, the first meeting in neuroscience ever organized around a single (mathematical) model or theory - and in organizing it, I required all the speakers to show the HH model on their first slide, indicating which term or feature of the model their work was related to. Again, a first - but possible, as this is about the only ?community model? we have. >>> >>> Most Neuroscience textbooks today don?t include that equation (second order differential) and present the HH model primarily as a description of the action potential. Most theorists regard the HH model as a prime example of how progress can be made by ignoring the biological details. Both views and interpretations are historically and practically incorrect. In my opinion, if you can?t handle the math in the HH model, you shouldn?t be a neurobiologist, and if you don?t understand the profound impact of HH?s knowledge and experimental study of the squid giant axon on the model, you shouldn?t be a neuro-theorist either. just saying. 
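[Editor's note: for readers who have never seen the "community model" written out, here is a minimal sketch of the space-clamped Hodgkin-Huxley equations with the standard squid-axon parameters, integrated by simple forward Euler. The step size and stimulus current are illustrative choices, not anything specified in this thread.]

```python
import math

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """One forward-Euler step of the Hodgkin-Huxley equations.

    Units: mV, ms, uA/cm^2; standard squid-axon parameters with the
    modern convention of a resting potential near -65 mV.
    """
    # Voltage-dependent rate functions for the three gating variables.
    a_m = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(V + 65.0) / 80.0)
    # Ionic currents: sodium, potassium, and leak.
    I_Na = 120.0 * m ** 3 * h * (V - 50.0)
    I_K = 36.0 * n ** 4 * (V + 77.0)
    I_L = 0.3 * (V + 54.387)
    # Membrane equation (C_m = 1 uF/cm^2) plus first-order gating kinetics.
    V = V + dt * (I_ext - I_Na - I_K - I_L)
    m = m + dt * (a_m * (1.0 - m) - b_m * m)
    h = h + dt * (a_h * (1.0 - h) - b_h * h)
    n = n + dt * (a_n * (1.0 - n) - b_n * n)
    return V, m, h, n

# A sustained 10 uA/cm^2 stimulus drives repetitive firing.
V, m, h, n = -65.0, 0.05, 0.6, 0.32
trace = []
for _ in range(5000):  # 50 ms at dt = 0.01 ms
    V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
    trace.append(V)
print(max(trace) > 0.0)  # action potentials overshoot 0 mV
```

Note how every term is tied to a measured property of the squid giant axon - which is the point being made above about the experimental grounding of the model.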
:-) >>> >>> >>> On Jan 25, 2014, at 6:58 AM, Thomas Trappenberg wrote: >>> >>>> James, enjoyed your writing. >>>> So, what to do? We are trying to get organized in Canada and are thinking about how we fit in with your (US) and the European approaches and big money. My thought is that our advantage might be flexibility, by not having a single theme but rather a general supporting structure for theory and theory-experiment interactions. I believe the ultimate place where we want to be is to take theoretical proposals more seriously and to design specific experiments for them, like the Higgs project. (Any other suggestions? Canadians, see http://www.neuroinfocomp.ca if you are not already on there.) >>>> Also, with regards to big data, I believe that one very fascinating thing about the brain is that it can function with 'small data'. >>>> Cheers, Thomas >>>> >>>> On 2014-01-25 12:09 AM, "james bower" wrote: >>>>> Ivan, thanks for the response. >>>>> >>>>> Actually, the talks at the recent Neuroscience Meeting about the Brain Project either excluded modeling altogether - or declared that we in the US could leave it to the Europeans. I am not in the least bit nationalistic - but collecting data without having models (rather than imaginings) to indicate what to collect is simply foolish, with many examples from history to demonstrate the foolishness. In fact, one of the primary proponents (and likely beneficiaries) of this Brain Project, who gave the big talk at Neuroscience on the project (showing lots of pretty pictures), started his talk by asking: "What have we really learned since Cajal, except that there are also inhibitory neurons?" Shocking, not least because Cajal in fact suggested that there might be inhibitory neurons. To quote: "Stupid is as stupid does." >>>>> >>>>> Forbes magazine estimated that finding the Higgs boson cost over $13BB, conservatively. 
The Higgs experiment was absolutely the opposite of a Big Data experiment - in fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment is all the more remarkable because it had the nearly unified support of the high-energy physics community; not that there weren't and aren't skeptics, but still, it is remarkable that the large majority could agree on the undertaking and effort. The reason is, of course, that there was a theory - one that dealt with the particulars and the details, not generalities. In contrast, there is a GREAT DEAL of skepticism (me included) about the Brain Project - its politics and its effects (or lack thereof) - within neuroscience. (Of course, many people are burying their concerns in favor of tin cups - hoping.) Neuroscience has had genome envy forever - the connectome is its response - but who says it's all in the connections? (Sorry, "connectionists".) Where is the theory? Hebb? You should read Hebb if you haven't - a rather remarkable treatise. But very far from a theory. >>>>> >>>>> If you want an honest answer to your question - I have not seen any good evidence so far that the approach works, and I deeply suspect that the nervous system is very much NOT like any machine we have built or designed to date. I don't believe that Newton would have accomplished what he did had he not, first, been a remarkable experimentalist, tinkering with real things. I feel the same way about neuroscience. Having spent almost 30 years building realistic models of its cells and networks (and also doing experiments, as described in the article I linked to), we have made some small progress - but only by avoiding abstractions and paying attention to the details. Of course, most experimentalists and even most modelers have paid little or no attention. 
We have a sociological and structural problem that, in my opinion, only the right kind of models can fix, coupled with a real commitment to the biology - in all its complexity. And, as the article I linked to tries to make clear, we also have to agree to start working on common "community models". But, like bighorn sheep, it is much safer to stand on your own peak and make a lot of noise. >>>>> >>>>> You can predict with great accuracy the movement of the planets in the sky using circles linked to other circles - nice and easy math, and a very adaptable model (just add more circles when you need more accuracy, and invent entities like equant points, etc.). The problem is, without getting into the nasty math and reality of ellipses, you can't possibly know anything about gravity, or the origins of the solar system, or its various and eventual perturbations. >>>>> >>>>> As I have been saying for 30 years: beware Ptolemy and curve fitting. >>>>> >>>>> The details of reality matter. >>>>> >>>>> Jim >>>>> >>>>> >>>>> >>>>> >>>>> On Jan 24, 2014, at 7:02 PM, Ivan Raikov wrote: >>>>>> >>>>>> I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling approaches. While it is true that complete and detailed understanding of neurophysiology and anatomy is at the heart of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical physics, and a lot can be learned about information representation and transmission in the brain using mathematical theories about distributed communicating processes. As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they can also be used to understand at least some of the fundamental properties of brain structures and processes?
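[Editor's note: the "circles linked to other circles" point above is, in modern terms, a truncated Fourier series: stacking epicycles can fit any periodic path arbitrarily well without explaining the underlying dynamics. A minimal illustrative sketch - the radii and angular rates below are arbitrary values, not historical ones:]

```python
import math

def epicycle_position(t, circles=((1.0, 1.0), (0.3, 5.0), (0.1, 13.0))):
    """Position traced by a chain of rotating circles (a deferent plus
    epicycles). Each (radius, angular_rate) pair adds one circle; adding
    more circles fits any periodic orbit ever more closely - which is
    exactly why epicycles could 'predict' without explaining anything."""
    x = sum(r * math.cos(w * t) for r, w in circles)
    y = sum(r * math.sin(w * t) for r, w in circles)
    return x, y

x0, y0 = epicycle_position(0.0)
# At t = 0 all circles align along the x-axis: x = 1.0 + 0.3 + 0.1, y = 0.
```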
>>>>>> >>>>>> -Ivan Raikov >>>>>> >>>>>> On Sat, Jan 25, 2014 at 8:31 AM, james bower wrote: >>>>>>> [snip] >>>>>>> An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. That might sound to some like a semantic quibble, but I assure you it is not. >>>>>>> >>>>>>> If you believe as I do, that the brain solves very hard problems, in very sophisticated ways, that involve, in some sense, the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point. >>>>>>> >>>>>>> What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention experimentally and in our models to the actual detailed anatomy and physiology of the brain's circuits and cells. >>>>>>> >>>>> >>>>> >>>>> Dr. James M. Bower Ph.D. >>>>> Professor of Computational Neurobiology >>>>> Barshop Institute for Longevity and Aging Studies. >>>>> 15355 Lambda Drive >>>>> University of Texas Health Science Center >>>>> San Antonio, Texas 78245 >>>>> >>>>> Phone: 210 382 0553 >>>>> Email: bower at uthscsa.edu >>>>> Web: http://www.bower-lab.org >>>>> twitter: superid101 >>>>> linkedin: Jim Bower >>>>> >>>>> CONFIDENTIAL NOTICE: >>>>> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. 
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or >>>>> any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be >>>>> immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. >>>>> >>>>> >>> >>> >> >> >> >> -- >> Brad Wyble >> Assistant Professor >> Psychology Department >> Penn State University >> >> http://wyblelab.com > From markovic at th.physik.uni-frankfurt.de Mon Jan 27 10:13:32 2014 From: markovic at th.physik.uni-frankfurt.de (Dimitrije Markovic) Date: Mon, 27 Jan 2014 16:13:32 +0100 Subject: Connectionists: a new review article on power law phenomena In-Reply-To: References: Message-ID: <52E6779C.6040907@th.physik.uni-frankfurt.de> Dear Colleagues, we would like to bring to your attention the following review article, which will appear shortly in Physics Reports: "Power Laws and Self-Organized Criticality in Theory and Nature" by Dimitrije Marković and Claudius Gros. http://dx.doi.org/10.1016/j.physrep.2013.11.002 In short, we present a comparative overview of distinct modeling approaches, together with a discussion of their potential relevance as generative models for real-world phenomena. The complexity of physical and biological scaling phenomena has been found to transcend the explanatory power of individual paradigmatic concepts. The interaction between theoretical development and experimental observations has been very fruitful, leading to a series of novel concepts and insights. A special section is devoted to an assessment of power-law scaling in neural activity. 
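[Editor's note: as background for readers unfamiliar with how power-law claims are checked empirically, the standard approach fits the exponent by maximum likelihood rather than by regression on a log-log histogram. The sketch below is not code from the review; it uses the well-known continuous-data estimator alpha_hat = 1 + n / sum(ln(x_i / x_min)), with synthetic data and an arbitrary seed.]

```python
import math
import random

def powerlaw_sample(alpha, x_min, n, seed=0):
    """Draw n samples from a continuous power law p(x) ~ x^-alpha, x >= x_min,
    via inverse-CDF sampling."""
    rng = random.Random(seed)
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

def mle_alpha(xs, x_min):
    """Continuous maximum-likelihood (Hill) estimator of the exponent:
    alpha_hat = 1 + n / sum(ln(x_i / x_min))."""
    return 1.0 + len(xs) / sum(math.log(x / x_min) for x in xs)

xs = powerlaw_sample(alpha=2.5, x_min=1.0, n=20000)
alpha_hat = mle_alpha(xs, 1.0)
# alpha_hat recovers the true exponent 2.5 to within sampling error
```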
Best wishes, D. Marković and C. Gros From janla at dtu.dk Mon Jan 27 11:09:55 2014 From: janla at dtu.dk (Jan Larsen) Date: Mon, 27 Jan 2014 16:09:55 +0000 Subject: Connectionists: Postdoc in Machine Learning Message-ID: Dear Connectionists The Technical University of Denmark invites applications for a position as postdoc in Machine Learning and Signal Processing Innovation. The position is available from March 1, 2014, and affiliated with the Cognitive Systems Section of the Department of Applied Mathematics and Computer Science. The position is sponsored by The Danish National Advanced Technology Foundation and the European Union Framework Programme 7 project CRIM-TRACK. The purpose of the present postdoc position is the design, analysis, and interpretation of sensor signals (nano-colorimetry sensors and infrared spectroscopy) in order to estimate the presence and/or quantity of relevant chemical substances. Implementation and integration of the developed methods in collaboration with project partners is an integral part of the position, with the objective of demonstrating and evaluating the potential in end-user scenarios. Candidates must hold a PhD degree within the field of mathematical modeling or similar, with specialization in machine learning and signal processing, and should demonstrate qualifications and experience in the following areas: signal processing; machine learning; and monitoring and sensor systems. See more and apply on-line via http://www.dtu.dk/english/career//job?id=bd4254e8-501d-4602-b628-4692484e25ba Looking forward to receiving your application Jan Larsen Associate Professor, Ph.D. 
Section for Cognitive Systems Department of Applied Mathematics and Computer Science Matematiktorvet, Building 303B Technical University of Denmark DK-2800 Kongens Lyngby, Denmark Office location: Room 015, Building 321 Direct: (+45) 45 25 39 23 Mobile: (+45) 22 43 00 25 Secretary: (+45) 45 25 39 08 Fax: (+45) 45 88 26 73 Email: janla at dtu.dk Skype: janflynut Web: http://www.imm.dtu.dk/~jl From geoffrey.hinton at gmail.com Mon Jan 27 11:40:37 2014 From: geoffrey.hinton at gmail.com (Geoffrey Hinton) Date: Mon, 27 Jan 2014 11:40:37 -0500 Subject: Connectionists: How the brain works In-Reply-To: References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> Message-ID: Actually, evolution did invent the time-shared wheel. To go over rough ground it needs to be 6 feet in diameter with very soft suspension. The way to do this without being too heavy or large is to time-share two small sections of the rim each connected to the axle by "spokes" (in compression rather than tension) that can easily change their length. The swapping in and out of the spokes is not as energy efficient as a wheel but it solves the problem of supplying the rim with nutrients. 
Geoff On Mon, Jan 27, 2014 at 9:28 AM, Balázs Kégl wrote: > > While it is at least worth considering whether the arm-from-fin argument > applies to the nervous system, because we don't understand how the brain > works, we can't really answer the question whether there is some simpler > version that would have worked just as well. Accordingly, as with the > radio analogy, in principle, asking whether a simpler version would work as > well depends on first figuring out how the actual system works. As I > have said, abstract models are less likely to be helpful there, because > they don't directly address the components. > > Wouldn't the airplane/bird analogy work here? Does being able to design an > airplane help understanding how birds fly? I think it does. Evolution > didn't invent the wheel, so it had to go in a complex (and not necessarily > very efficient) way to "design" locomotion, which means that airplane > engines don't really explain how birds propel themselves. On the other > hand, both have wings, and controlling the flying devices looks pretty > similar in the two cases. In the same way, if some artificial network can > reproduce intelligent traits, we might be able to guide what we're looking > for in the brain (a model, whose necessity we agree on). Of course, > scientific process rarely works in this way, but that's because you need > computers for this kind of "experimentation", and computers are quite new. > > Balázs > > > -- > Balazs Kegl > Research Scientist (DR2) > Linear Accelerator Laboratory > CNRS / University of Paris Sud > http://users.web.lal.in2p3.fr/kegl > > > > > 
From Aaron.Clauset at colorado.edu Mon Jan 27 12:08:21 2014 From: Aaron.Clauset at colorado.edu (Aaron Clauset) Date: Mon, 27 Jan 2014 10:08:21 -0700 Subject: Connectionists: Network Science Math Research Camp, at Snowbird, June 2014 Message-ID: <4159CC72-1698-4EFA-8F80-3E2799DC3A95@colorado.edu> [This event is aimed at graduate students and postdocs interested in network science, with a particular emphasis on learning from data and the mathematics of network models.] Network Science Dates: June 24-30, 2014 Organizers: Aaron Clauset, University of Colorado, Boulder David Kempe, University of Southern California Mason A. Porter, University of Oxford sponsored by The American Mathematical Society (AMS) Over the last decade, the quantitative study of networks has emerged as a fundamental tool for understanding and modeling complex systems of all kinds. Although it has built on prior foundational work in areas such as sociology, mathematics, and computer science, the increasing availability of detailed data sets has led to insights into --- and ambitions for attaining a much deeper understanding of --- the structure, dynamics, and function of social, biological, physical, and technological systems. A fundamental challenge in network science is to extract a solid foundation and a set of key principles for networked systems from the widely dispersed efforts in analyzing real-world networks and developing mathematical models of networks. 
Progress requires thorough investigations in many key areas, including (i) characterizing the complex and often multi-scale structural patterns of real networks, (ii) understanding the way network structures constrain or drive dynamical processes that operate on top of this structure (e.g., communication or epidemic processes), (iii) developing rigorous methods for fitting static and dynamic network models to data and for testing network hypotheses, (iv) identifying and detecting fundamental modes of organization in networks as well as the underlying processes that produce these structures, and many others. This MRC aims to introduce young mathematical and computational scientists to modern research in network science. It will explore some of the key techniques, develop in-depth knowledge of several overlapping topic areas, and engage in research to attack open problems. Application deadline: March 1, 2014 http://www.ams.org/programs/research-communities/mrc-14 From m.a.wiering at rug.nl Mon Jan 27 12:02:34 2014 From: m.a.wiering at rug.nl (M.A.Wiering) Date: Mon, 27 Jan 2014 18:02:34 +0100 Subject: Connectionists: How the brain works In-Reply-To: <76f084913553ed.52e6910c@rug.nl> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <7620ed00356603.52e69019@rug.nl> <7620d2e13555a8.52e69056@rug.nl> <7620fa11354358.52e69092@rug.nl> <7670dc7635390e.52e690d0@rug.nl> <76f084913553ed.52e6910c@rug.nl> Message-ID: <74f0a77c354efc.52e69f3a@rug.nl> I very much like the part in the deep belief network of having real and imaginary data. I think there are also many opposing forces working in the brain. So maybe we can indeed make a complete complex system that is able to compute behavior at many levels, from behavior, through cognitive neuroscience and neuroscience, down to physical devices. I think the Human Brain Project is one small step in this direction, but a big step for humankind. 
Best wishes, Marco Wiering University of Groningen, the Netherlands =========== On 27-01-14, Geoffrey Hinton wrote: > Actually, evolution did invent the time-shared wheel. > > > To go over rough ground it needs to be 6 feet in diameter with very soft suspension. > > The way to do this without being too heavy or large is to time-share two small sections of the rim each connected to the axle by "spokes" (in compression rather than tension) that can easily change their length. The swapping in and out of the spokes is not as energy efficient as a wheel but it solves the problem of supplying the rim with nutrients. > > > Geoff > > > > > On Mon, Jan 27, 2014 at 9:28 AM, Balázs Kégl wrote: > > > > While it is at least worth considering whether the arm-from-fin argument applies to the nervous system, because we don't understand how the brain works, we can't really answer the question whether there is some simpler version that would have worked just as well. Accordingly, as with the radio analogy, in principle, asking whether a simpler version would work as well depends on first figuring out how the actual system works. As I have said, abstract models are less likely to be helpful there, because they don't directly address the components. > > > > > > Wouldn't the airplane/bird analogy work here? Does being able to design an airplane help understanding how birds fly? I think it does. Evolution didn't invent the wheel, so it had to go in a complex (and not necessarily very efficient) way to "design" locomotion, which means that airplane engines don't really explain how birds propel themselves. On the other hand, both have wings, and controlling the flying devices looks pretty similar in the two cases. In the same way, if some artificial network can reproduce intelligent traits, we might be able to guide what we're looking for in the brain (a model, whose necessity we agree on). 
Of course, scientific process rarely works in this way, but that's because you need computers for this kind of "experimentation", and computers are quite new. > > > > > Balázs > > > > > > > > > > -- > > > > Balazs Kegl > > > > Research Scientist (DR2) > > > > Linear Accelerator Laboratory > > > > CNRS / University of Paris Sud > > > > http://users.web.lal.in2p3.fr/kegl > > > > > > > > > > > > > > > > > > > > > > > > > > > From bower at uthscsa.edu Mon Jan 27 13:04:15 2014 From: bower at uthscsa.edu (james bower) Date: Mon, 27 Jan 2014 12:04:15 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: Message-ID: On Jan 27, 2014, at 11:16 AM, Dario Ringach wrote: > > There is no doubt we are undergoing a revolution in terms of techniques in neuroscience. The mere fact that these new tools allow for far improved dissection and control of neural circuits makes the enterprise worth supporting in my view. I suspect many of these methods will find their way to the clinic even before we fully understand the underlying circuitry. of course they will, that is how they are being sold - but think about what that means??? > > It is equally undeniable that the BRAIN initiative lacks a theoretical skeleton. If tomorrow we find ourselves able to record the activity of every single neuron in a human brain, combined with structural EM data down to a single synapse, it is not remotely clear what we would do with such a dataset. > precisely - so why do it? > Aside from the scientific merits and weaknesses of the BRAIN initiative, we must also consider how such projects are presented to the public and the potential for disappointment we create. fortunately (and unfortunately), the public is already completely skeptical about our claims - and increasingly, I am sorry to say, views scientists as no more or less committed to good than businessmen. 
(at the same time that our university administrators are telling us to think like businessmen - another subject). I spend a lot of time interacting with children regarding their perception of science - it is remarkable what they say. But ?we? are absolute saints compared to what others are doing with ?neuro-marketing?. Here is an article just sent to me this morning from the gaming and learning side of my life for those interested. http://bigthink.com/neurobonkers/its-time-for-teachers-to-wake-up-to-neuromyths Neuromythology is everywhere and getting worse - > We must be extremely careful not to set up unrealistic short-term expectations. Instead, we ought to grab this opportunity to educate, convey our fascination with the problem, how tremendously difficult it is, and what we stand to gain if we truly were to understand how the brain works. Unfortunately, if the core is rotten hard to make the outside golden. As we have been discussing - the BRAIN initiative is seriously flawed (and NOT because it left out the money hungry MRI community). It is scientifically flawed - but, as we have been discussing, IMHO reflects the actual state of neuroscience research. Unfortunately Jim > > ? > Dario > > > > > > > Dario Ringach, PhD > > Professor of Neurobiology and Psychology > Jules Stein Eye Institute > David Geffen School of Medicine > University of California, Los Angeles > > dario at ucla.edu | http://ringachlab.net > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. 
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Mon Jan 27 12:55:25 2014 From: bower at uthscsa.edu (james bower) Date: Mon, 27 Jan 2014 11:55:25 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <1ADA4840-8CE9-4BD5-B17F-ECA5DFB7171E@mail.nih.gov> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1ADA4840-8CE9-4BD5-B17F-ECA5DFB7171E@mail.nih.gov> Message-ID: <7748E2B7-6E44-4CB6-BD97-8D7CB9507CA6@uthscsa.edu> Good points, Carson. However, I think one problem with this conversation is that it is (ironically, given the number of engineers here) focused on outcomes rather than technology. 25 years ago, we introduced a modeling system (GENESIS) specifically with the intent to start to build an infrastructure (as a community, i.e. one of the first open source modeling platforms out there), so that people could share models, leading to community models, and start to use the models (rather than story telling) as a way to communicate and collaborate.
(Remember the Rochester connectionist simulator - similar idea). The paper I linked to in my first posting describes the process that led to the establishment of what is, I would claim, at this point in time the only community single-cell model in computational neuroscience. Several years ago we wrote several proposals to NIH and NSF about making explicit what was always implicit and building a new form of journal around GENESIS - and in particular GENESIS version 3 - that would base publications on models rather than stories with pictures. Looking back to Newton's time, the invention of the scientific journal, and before that the invention of the printing press as a technology, drove huge change not only in science, but also in society as a whole. However, as you say, physics is the study of simple things, where, in principle (and certainly in those days), you could publish a sufficient level of equations in that form of journal so that others knew what you were talking about and could contribute. As I said earlier in this sequence, closed form solutions, no matter how attractive, aren't likely to be able to represent the complexities of the nervous system, which instead will (and have) depended on numerical simulations. It is obviously absurd to take a digital entity (such as a model) and convert it into a journal article built around pictures to publish in an e-journal. Anyway, much more can be said about that - but the point is that those GENESIS grants were rejected, at least in part, because the study sections were dominated by those who believe that we shouldn't be modeling things at this level of complexity.
This point of view is now fully represented in the version of the summer courses derived from the one I started with Christof Koch many years ago, in the CNS meeting I started, and in the Journal of Computational Neuroscience I started as well (as a vehicle to actually start publishing models - had the publishers and the editorial board understood why this was important). Until one week ago (and for the last 3 years), I was the co-coordinator of the Neuroscience Group of NIH's interdisciplinary multi-scale modeling initiative (IMAG). Sitting on the review committee for that program, I once again found myself defending biologically based modeling - to no avail. So, in fact, I believe the solution to this problem has to do with developing the right technology for collaboration and communication, which we now have with the Internet - combined with a commitment by funding agencies to support its development and use. Complex compartmental models have no chance competitively in a world where a theory can be boiled down and published in some form of an energy functional that can be understood by everyone on this list. We have a communication problem related to the complexity of the right technology, combined with the lack of use of the right technology to have larger numbers of people understand the value of the approach. Instead of all this philosophy, I would MUCH RATHER have someone on this list actually read the paper I linked on 40 years of realistically modeling the cerebellar Purkinje cell - and then not have to use analogies from well understood physics (and engineering) to discuss a question rooted in Biology. Let's use the Purkinje cell as a for-instance and talk about science, rather than philosophy. The model is even there for you to pick apart and understand (one reason it became the first cellular community model is that we actually provided full access to it on the Internet). So how about it, guys?
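To make the scale argument concrete, here is a deliberately toy sketch of a compartmental model - just two passive compartments, forward-Euler integration, and made-up parameter values (a caricature only; a realistic GENESIS-style Purkinje cell model has thousands of compartments plus the voltage-dependent dendritic channels discussed in this thread, so nothing below should be read as the actual model):

```python
# Toy two-compartment passive neuron: soma + dendrite, leak conductances,
# axial coupling, and a constant current injected into the soma.
# All parameter values are invented for illustration only.

def simulate(n_steps=10000, dt=1e-5):
    C = 1e-10        # membrane capacitance per compartment (F), hypothetical
    g_leak = 1e-8    # leak conductance per compartment (S), hypothetical
    E_leak = -0.070  # leak reversal potential (V)
    g_axial = 5e-8   # axial coupling conductance between compartments (S)
    I_inj = 2e-10    # constant current injected into the soma (A)

    v_soma = E_leak  # start both compartments at rest
    v_dend = E_leak
    for _ in range(n_steps):
        # Each compartment obeys C dV/dt = leak current + axial current (+ injection)
        dv_s = (g_leak * (E_leak - v_soma) + g_axial * (v_dend - v_soma) + I_inj) / C
        dv_d = (g_leak * (E_leak - v_dend) + g_axial * (v_soma - v_dend)) / C
        v_soma += dt * dv_s
        v_dend += dt * dv_d
    return v_soma, v_dend
```

Even this caricature already takes more lines to state than a typical energy functional, and every added compartment or channel multiplies the bookkeeping - which is the communication problem described above.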
Anyway, returning to neuroscience, the whole field is dominated by story tellers spinning yarns and myths, who have no interest in having anyone else have the tools to really check their stories (i.e. by putting them in the common language of mathematics). Sadly, they continue to be able to convince the government to fund story telling and the construction of the kinds of tools that facilitate story telling (the BRAIN project and neuroimaging). I also spent 10 years as part of the original Human Brain Project at NIH - which was supposed to foster collaboration technology, and didn't, for the same reason. Foolishly, someone last year asked me to speak to a bunch of graduate students and postdocs about how to have a successful career in science. Foolish, of course, because I am not sure that mine applies. Doubly foolish because, as I told them, they probably don't want graduate and postdoctoral students taking advice from me anyway. They persisted, so I did: I told them it was easy - pick a story, any story, and stick to it - and best if the story can be well understood in 1 1/2 pages or less. One of the students asked what you should do if your data or someone else's data doesn't support your story. I told them: bury or 'smooth' your own data, and do your best to make sure that the other guy's data isn't published and their grant isn't renewed. Also, for sure exclude that person as a reviewer on your next papers - and NEVER reference their work in your paper. One of the students then said, 'That doesn't sound much like science.' I said: I wasn't asked to tell you how to do science, I was asked to tell you how to have a successful scientific career. :-) Jim On Jan 27, 2014, at 11:14 AM, Carson Chow wrote: > I am greatly enjoying this discussion. > > The original complaint in this thread seems to be that the main problem of (computational) neuroscience is that people do not build upon the work of others enough.
In physics, no one reads the original works of Newton or Einstein, etc., anymore. There is a set canon of knowledge that everyone learns from classes and textbooks. Often, if you actually read the original works, you'll find that what the early greats actually did and believed differs from the current understanding. I think it's safe to say that computational neuroscience has not reached that level of maturity. Unfortunately, it greatly impedes progress if everyone tries to redo and reinvent what has come before. > > The big question is why this is the case. This is a search problem. It could be true that one of the proposed approaches in this thread or some other existing idea is optimal, but the opportunity cost to follow it is great. How do we know it is the right one? It is safer to just follow the path we already know. We simply all don't believe enough in any one idea for all of us to pursue it right now. It takes a massive commitment to learn any one thing, much less everything on John Weng's list. I don't know too many people who could be fully conversant in math, AI, cognitive science, neurobiology, and molecular biology. There are only so many John von Neumanns, Norbert Wieners or Terry Taos out there. The problem actually gets worse with more interest and funding, because there will be even more people and ideas to choose from. This is a classic market failure where too many choices destroy liquidity and accurate pricing. My prediction is that we will continue to argue over these points until one or a small set of ideas finally wins out. But who is to say that thirty years is a long time? There were almost two millennia between Ptolemy and Kepler. However, once the correct idea took hold, it was practically a blink of an eye to get from Kepler to Maxwell. However, physics is so much simpler than neuroscience. In fact, my definition of physics is the field of easily model-able things.
Whether or not a similar revolution will ever take place in neuroscience remains to be seen. > > ---------------------- > Carson C Chow > LBM, NIDDK, NIH > > On Jan 26, 2014, at 22:05, james bower wrote: > >> Thanks Danny, funny about coincidences. >> >> I almost posted earlier to the list a review I was asked to write of exactly the book you reference: >> >> 23 Problems in Systems Neuroscience, edited by L. Van Hemmen and T. Sejnowski. >> >> It is appended to this email - >> >> Needless to say, while I couldn't agree with you more on the importance of asking the right questions - many of the chapters in this book make clear, I believe, the fundamental underlying problem posed by having no common theoretical basis for neuroscience research. >> >> Jim Bower >> >> Published in The American Scientist >> >> Are We Ready for Hilbert? >> >> James M. Bower >> >> 23 Problems in Systems Neuroscience. Edited by J. Leo van Hemmen and Terrence J. Sejnowski. xvi + 514 pp. Oxford University Press, 2006. $79.95. >> >> 23 Problems in Systems Neuroscience grew out of a symposium held in Dresden in 2000, inspired by an address given by the great geometrist David Hilbert 100 years earlier. In his speech, Hilbert commemorated the start of the 20th century by delivering what is now regarded as one of the most influential mathematical expositions ever made. He outlined 23 essential problems that not only organized subsequent research in the field, but also clearly reflected Hilbert's axiomatic approach to the further development of mathematics. Anticipating his own success, he began, "Who of us would not be glad to lift the veil behind which the future lies hidden; to cast a glance at the next advances of our science and at the secrets of its development during future centuries?" >> >> I take seriously the premise represented in this new volume's title and preface that it is intended to "serve as a source of inspirations for future explorers of the brain."
Unfortunately, if the contributors sought to exert a "Hilbertian" influence on the field by highlighting 23 of the most important problems in systems neuroscience, they have, in my opinion, failed. In failing, however, this book clearly illustrates fundamental differences between neuroscience (and biology in general) today and mathematics (and physics) in 1900. >> >> Implicit in Hilbert's approach is the necessity for some type of formal structure underlying the problems at hand, allowing other investigators to understand their natures and then collaboratively explore a general path to their solutions. Yet there is little consistency in the form of problems presented in this book. Instead, many (perhaps most) of the chapters are organized, at best, around vague questions such as, "How does the cerebral cortex work?" At worst, the authors simply recount what is, in effect, a story promoting their own point of view. >> >> The very first chapter, by Gilles Laurent, is a good example of the latter. After starting with a well-worn plea for considering the results of the nonmammalian, nonvisual systems he works on, Laurent summarizes a series of experiments (many of them his own) supporting his now-well-known position regarding the importance of synchrony in neuronal coding. This chapter could have presented a balanced discussion of the important questions surrounding the nature of the neural code (as attempted in one chapter by David McAlpine and Alan R. Palmer and another by C. van Vreeswijk), or even referenced and discussed some of the recently published papers questioning his interpretations. Instead, the author chose to attempt to convince us of his own particular solution. >> >> I don't mean to pick on Laurent, as his chapter takes a standard form in symposia volumes; rather, his approach illustrates the general point that much of "systems neuroscience" (and neuroscience in general) revolves around this kind of storytelling. The chapter by Bruno A.
Olshausen and David J. Field makes this point explicitly, suggesting that our current "story based" view of the function of the well-studied visual cortex depends on (1) a biased sampling of neurons, (2) a bias in the kind of stimuli we present and (3) a bias in the kinds of theories we like to construct. >> >> In fairness, several chapters do attempt to address real problems in a concise and unbiased way. The chapter by L. F. Abbott, for example, positing, I think correctly, that the control of the flow of information in neural systems is a central (and unsolved) problem, is characteristically clear, circumscribed and open-minded. Refreshingly, Abbott's introduction states, "In the spirit of this volume, the point of this contribution is to raise a question, not to answer it. . . . I have my prejudices, which will become obvious, but I do not want to rule out any of these as candidates, nor do I want to leave the impression that the list is complete or that the problem is in any sense solved." Given his physics background, Abbott may actually understand enough about Hilbert's contribution to have sought its spirit. Most chapters, however, require considerable detective work, and probably also a near-professional understanding of the field, to find anything approaching Hilbert's enumeration of fundamental research problems. >> >> In some sense I don't think the authors are completely to blame. Although many are prominent in the field, this lack of focus on more general and well-defined problems is, I believe, endemic in biology as a whole. While this may slowly be changing, the question of how and even if biology can move from a fundamentally descriptive, story-based science to one from which Hilbertian-style problems can be extracted may be THE problem in systems neuroscience. A few chapters do briefly raise this issue. For example, in their enjoyable article on synesthesia, V. S. Ramachandran and Edward M.
Hubbard identify their approach as not fashionable in psychology partly because of "the lingering pernicious effect of behaviorism" and partly because "psychologists like to ape mature quantitative physics, even if the time isn't ripe." >> >> Laurenz Wiskott, in his chapter on possible mechanisms for size and shift invariance in visual (and perhaps other) cortices, raises what may be the more fundamental question as to whether biology is even amenable to the form of quantification and explanation that has been so successful in physics: >> >> "Either the brain solves all invariance problems in a similar way based on a few basic principles or it solves each invariance problem in a specific way that is different from all others. In the former case [asking] the more general question would be appropriate. . . . In the latter case, that is, if all invariance problems have their specific solution, the more general question would indeed be a set of questions and as such not appropriate to be raised and discussed here." >> >> He then moderates the dichotomy by stating diplomatically, "There is, of course, a third and most likely alternative, and that is that the truth lies somewhere between these two extremes." Thus, Wiskott leaves unanswered the fundamental question about the generality of brain mechanisms or computational algorithms. As in mathematics 100 years ago, answering basic questions in systems neuroscience is tied up in assumptions regarding appropriate methodology. For Hilbert's colleagues, this was obvious and constituted much of the debate following his address; this fundamental issue, however, is only rarely discussed in biology. >> >> Indeed, I want to be careful not to give the impression that these kinds of big-picture issues are given prominence in this volume; they are not.
Rather, as is typical for books generated by these kinds of symposia, many of the chapters are simply filled with the particular details of a particular subject, although several authors should be commended for at least discussing their favorite systems in several species. However, given the lack of overall coordination, one wonders what impact this volume will have. >> >> One way to gauge the answer is to look for evidence that the meeting presentations influenced the other participants. As an exercise, I summarized the major points and concerns each author raised in their chapters and then checked that list against the assumptions and assertions made by the other authors writing on similar subjects. The resulting tally, I would assert, provides very little evidence that these authors attended the same meeting, or perhaps even that they are part of the same field! >> >> For example, the article titled "What Is Fed Back?" by Jean Bullier identifies, I think correctly, what will become a major shift in thinking about how brains are organized. As Bullier notes, there is growing evidence that the internal state of the brain has a much more profound effect on the way the brain processes sensory information than previously suspected. Yet this fundamental issue is scarcely mentioned in the other chapters, quite a few of which are firmly based on the old feed-forward "behaviorist" model of brain function. Similarly, the chapter by Olshausen and Field is followed immediately by a paper by Steven W. Zucker on visual processing that depends on many of the assumptions that Olshausen and Field call into question. >> >> One hundred years ago, Hilbert's 23 questions organized a field. The chapters in this book make pretty clear that we are still very far away from having a modern-day Hilbert or even a committee of "experts" come up with a list of 23 fundamental questions that are accepted, or perhaps even understood, by the field of neuroscience as a whole.
>> >> >>> >>> Asking good questions that come with well-developed requirements is the starting point to good science. At least that is what we tell our graduate students. >>> >>> .. Danny >>> >>> ======================= >>> Daniel L. Silver, Ph.D. danny.silver at acadiau.ca >>> Professor, Jodrey School of Computer Science, Acadia University >>> Office 314, Carnegie Hall, Wolfville, NS Canada B4P 2R6 >>> p:902-585-1413 f:902-585-1067 >>> >>> >>> From: Geoffrey Hinton >>> Date: Sunday, 26 January, 2014 3:43 PM >>> To: Brad Wyble >>> Cc: Connectionists list >>> Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare >>> >>> I can no longer resist making one point. >>> >>> A lot of the discussion is about telling other people what they should NOT be doing. I think people should just get on and do whatever they think might work. Obviously they will focus on approaches that make use of their particular skills. We won't know until afterwards which approaches led to major progress and which were dead ends. Maybe a fruitful approach is to model every connection in a piece of retina in order to distinguish between detailed theories of how cells get to be direction selective. Maybe it's building huge and very artificial neural nets that are much better than other approaches at some difficult task. Probably it's both of these and many others too. The way to really slow down the expected rate of progress in understanding how the brain works is to insist that there is one right approach and nearly all the money should go to that approach. >>> >>> Geoff >>> >>> >>> >>> On Sat, Jan 25, 2014 at 3:00 PM, Brad Wyble wrote: >>>> I am extremely pleased to see such vibrant discussion here and my thanks to Juyang for getting the ball rolling. >>>> >>>> Jim, I appreciate your comments and I agree in large measure, but I have always disagreed with you as regards the necessity of simulating everything down to a lowest common denominator.
Like you, I enjoy drawing lessons from the history of other disciplines, but unlike you, I don't think the analogy between neuroscience and physics is all that clear-cut. The two fields deal with vastly different levels of complexity and therefore I don't think it should be expected that they will (or should) follow the same trajectory. >>>> >>>> To take your Purkinje cell example, I imagine that there are those who view any such model that lacks an explicit simulation of the RNA as being incomplete. To such a person, your models would also be unfit for the literature. So would we then change the standards such that no model can be published unless it includes an explicit simulation of the RNA? And why stop there? Where does it end? In my opinion, we can't make effective progress in this field if everyone is bound to the molecular level. >>>> >>>> I really think that neuroscience presents a fundamental challenge that is not present in physics, which is that progress can only occur when theory is developed at different levels of abstraction that overlap with one another. The challenge is not how to force everyone to operate at the same level of formal specificity, but how to allow effective communication between researchers operating at different levels. >>>> >>>> In aid of meeting this challenge, I think that our field should take more inspiration from engineering, a model-based discipline that already has to work simultaneously at many different scales of complexity and abstraction. >>>> >>>> >>>> Best, >>>> Brad Wyble >>>> >>>> >>>> >>>> >>>> On Sat, Jan 25, 2014 at 9:59 AM, james bower wrote: >>>>> Thanks for your comments Thomas, and good luck with your effort. >>>>> >>>>> I can't refrain from making the probably culturist remark that this seems a very practical approach. >>>>> >>>>> I have for many years suggested that those interested in advancing biology in general and neuroscience in particular to a 'paradigmatic'
as distinct from a descriptive / folkloric science, would benefit from understanding this transition as physics went through it in the 15th and 16th centuries. In many ways, I think that is where we are today, although with perhaps the decided disadvantage that we have a lot of physicists around who, again in my view, don't really understand the origins of their own science. By that, I mean that they don't understand how much of their current scientific structure, for example the relatively clean separation between 'theorists' and 'experimentalists', is dependent on the foundation built by those (like Newton) who were both in an earlier time. Once you have a solid underlying computational foundation for a science, then you have the luxury of this kind of specialization - as there is a framework that ties it all together. The Higgs effort being a very visible recent example. >>>>> >>>>> Neuroscience has nothing of the sort. As I point out in the article I linked to in my first posting - while it was first proposed 40 years ago (by Rodolfo Llinas) that the cerebellar Purkinje cell had active dendrites (i.e. that there were voltage-dependent ion channels in the dendrite, not directly associated with synapses, that governed its behavior), and 40 years of anatomically and physiologically realistic modeling has been necessary to start to understand what they do - many cerebellar modeling efforts today simply ignore these channels. While that again, to many on this list, may seem too far buried in the details, these voltage-dependent channels make the Purkinje cell the computational device that it is. >>>>> >>>>> Recently, I was asked to review a cerebellar modeling paper in which the authors actually acknowledged that their model lacked these channels because they would have been too computationally expensive to include. Sadly for those authors, I was asked to review the paper for the usual reason - that several of our papers were referenced accordingly.
They likely won't make that mistake again - as after, of course, complimenting them on the fact that they were honest (and knowledgeable) enough to have remarked on the fact that their Purkinje cells weren't really Purkinje cells - I had to reject the paper for the same reason. >>>>> >>>>> As I said, they likely won't make that mistake again - and will very likely get away with it. >>>>> >>>>> Imagine a comparable situation in a field (like physics) which has established a structural base for its enterprise. "We found it computationally expedient to ignore the second law of thermodynamics in our computations - sorry." BTW, I know that details are ignored all the time in physics as one deals with descriptions at different levels of scale - although even there, the field clearly would like to have a way to link across different levels of scale. I would claim, however, that that is precisely the 'trick' that biology uses to 'beat' the second law - linking all levels of scale together - another reason why you can't ignore the details in biological models if you really want to understand how biology works (too cryptic a comment perhaps). >>>>> >>>>> Anyway, my advice would be to consider how physics made this transition many years ago, and ask how neuroscience (and biology) can do so now. Key points I think are: >>>>> - you need to produce students who are REALLY both experimental and theoretical (like Newton). (and that doesn't mean programs that 'import'
physicists and give them enough biology to believe they know what they are doing, or programs that link experimentalists to physicists to solve their computational problems) >>>>> - you need to base the efforts on models (and therefore mathematics) of sufficient complexity to capture the physical reality of the system being studied (as Kepler was forced to do to make the sun-centric model of the solar system even close to as accurate as the previous earth-centered system) >>>>> - you need to build a new form of collaboration and communication that can support the complexity of those models. Fundamentally, we continue to use the publication system (short papers in a journal) that was invented as part of the transformation for physics way back then. Our laboratories are also largely isolated and non-cooperative, more appropriate for studying simpler things (like those in physics). Fortunately for us, we have a new communication tool (the Internet), although, as can be expected, we are mostly using it to reimplement old-style communication systems (e-journals) with a few twists (supplemental materials). >>>>> - funding agencies need to insist that anyone doing theory needs to be linked to the experimental side REALLY, and vice versa. I proposed a number of years ago to NIH that they would make it into the history books if they simply required, the following Monday, that any submitted experimental grant include a REAL theoretical and computational component - sadly, they interpreted that as meaning that P.I.s should state "an hypothesis" - which itself is remarkable, because most of the "hypotheses" I see stated in Federal grants are actually statements of what the P.I. believes to be true. Don't get me started on human imaging studies.
arggg >>>>> - As long as we are talking about what funding agencies can do, how about the following structure for grants - all grants need to be submitted collaboratively by two laboratories who have different theories (or better, models) about how a particular part of the brain works. The grant should support a set of experiments that both parties agree distinguish between their two points of view. All results need to be published with joint authorship. In effect, that is how physics works - given its underlying structure. >>>>> - You need to get rid, as quickly as possible, of the pressure to 'translate' neuroscience research explicitly into clinical significance - we are not even close to being able to do that intentionally - and the pressure (which is essentially a giveaway to the pharma and bio-tech industries anyway) is forcing neurobiologists to link to what is arguably the least scientific form of research there is - clinical research. It just has to be the case that society needs to understand that an investment in basic research will eventually result in all the wonderful outcomes for humans we would all like, but this distortion now is killing real neuroscience just at a critical time, when we may finally have the tools to make the transition to a paradigmatic science. >>>>> As some of you know, I have been all about trying to do these things for many years - with the GENESIS project, with the original CNS graduate program at Caltech, with the CNS meetings (even originally with NIPS), and with the first 'Methods in Computational Neuroscience' course at the Marine Biological Laboratory, whose latest incarnation in Brazil (LASCON) is actually wrapping up next week, and of course with my own research and students. Of course, I have not been alone in this, but it is remarkable how little impact all that has had on neuroscience or neuro-engineering.
I have to say, honestly, that the strong tendency seems to be for these efforts to snap back to the non-realistic, non-biologically based modeling and theoretical efforts. >>>>> >>>>> Perhaps Canada, in its usual practical and reasonable way (sorry), can figure out how to do this right. >>>>> >>>>> I hope so. >>>>> >>>>> Jim >>>>> >>>>> p.s. I have also been proposing recently that we scuttle the 'intro neuroscience' survey courses in our graduate programs (religious instruction) and instead organize an introductory course built around the history of the discovery of the origin of the action potential, which culminated in the first (and last) Nobel Prize for work in computational neuroscience, for the Hodgkin-Huxley model. The 50th anniversary of that prize was celebrated last year, and the year before I helped to organize a meeting celebrating the 60th anniversary of the publication of the original papers (which I care much more about anyway). That meeting was, I believe, the first meeting in neuroscience ever organized around a single (mathematical) model or theory - and in organizing it, I required all the speakers to show the HH model on their first slide, indicating which term or feature of the model their work was related to. Again, a first - but possible, as this is about the only 'community model' we have. >>>>> >>>>> Most Neuroscience textbooks today don't include that equation (a second-order differential) and present the HH model primarily as a description of the action potential. Most theorists regard the HH model as a prime example of how progress can be made by ignoring the biological details. Both views and interpretations are historically and practically incorrect. In my opinion, if you can't handle the math in the HH model, you shouldn't be a neurobiologist, and if you don't understand the profound impact of HH's knowledge and experimental study of the squid giant axon on the model, you shouldn't be a neuro-theorist either. Just saying.
:-) >>>>> >>>>> >>>>> On Jan 25, 2014, at 6:58 AM, Thomas Trappenberg wrote: >>>>> >>>>>> James, enjoyed your writing. >>>>>> So, what to do? We are trying to get organized in Canada and are thinking about how we fit in with your (US) and the European approaches and big money. My thought is that our advantage might be flexibility: not having a single theme but rather a general supporting structure for theory and theory-experiment interactions. I believe the ultimate place where we want to be is to take theoretical proposals more seriously and try to design specific experiments for them, like the Higgs project. (Any other suggestions? Canadians, see http://www.neuroinfocomp.ca if you are not already on there.) >>>>>> Also, with regard to big data, I believe that one very fascinating thing about the brain is that it can function with 'small data'. >>>>>> Cheers, Thomas >>>>>> >>>>>> On 2014-01-25 12:09 AM, "james bower" wrote: >>>>>>> Ivan, thanks for the response. >>>>>>> >>>>>>> Actually, the talks at the recent Neuroscience meeting about the Brain Project either excluded modeling altogether - or declared that we in the US could leave it to the Europeans. I am not in the least bit nationalistic - but collecting data without having models (rather than imaginings) to indicate what to collect is simply foolish, with many examples from history to demonstrate the foolishness. In fact, one of the primary proponents (and likely beneficiaries) of this Brain Project, who gave the big talk at Neuroscience on the project (showing lots of pretty pictures), started his talk by asking: "What have we really learned since Cajal, except that there are also inhibitory neurons?" Shocking, not least because Cajal himself actually suggested that there might be inhibitory neurons. To quote: "Stupid is as stupid does." >>>>>>> >>>>>>> Forbes magazine estimated that finding the Higgs boson cost over $13 billion, conservatively.
The Higgs experiment was absolutely the opposite of a Big Data experiment - in fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment is all the more remarkable because it had the nearly unified support of the high energy physics community; not that there weren't and aren't skeptics, but still, it is remarkable that the large majority could agree on the undertaking and effort. The reason is, of course, that there was a theory - one that dealt with the particulars and the details, not generalities. In contrast, there is a GREAT DEAL of skepticism (me included) about the Brain Project - its politics and its effects (or lack thereof) - within neuroscience. (Of course, many people are burying their concerns in favor of tin cups - hoping.) Neuroscience has had genome envy forever - the connectome is its response - but who says it's all in the connections? (sorry, "connectionists") Where is the theory? Hebb? You should read Hebb if you haven't - a rather remarkable treatise, but very far from a theory. >>>>>>> >>>>>>> If you want an honest answer to your question - I have not seen any good evidence so far that the approach works, and I deeply suspect that the nervous system is very much NOT like any machine we have built or designed to date. I don't believe that Newton would have accomplished what he did had he not, first, been a remarkable experimentalist, tinkering with real things. I feel the same way about neuroscience. Having spent almost 30 years building realistic models of its cells and networks (and also doing experiments, as described in the article I linked to), we have made some small progress - but only by avoiding abstractions and paying attention to the details. Of course, most experimentalists and even most modelers have paid little or no attention.
We have a sociological and structural problem that, in my opinion, only the right kind of models can fix, coupled with a real commitment to the biology - in all its complexity. And, as the model I linked to tries to make clear, we also have to all agree to start working on common "community models". But like bighorn sheep, it is much safer to stand on your own peak and make a lot of noise. >>>>>>> >>>>>>> You can predict with great accuracy the movement of the planets in the sky using circles linked to other circles - nice and easy math, and a very adaptable model (just add more circles when you need more accuracy, and invent entities like equant points, etc.). The problem is, without getting into the nasty math and reality of ellipses, you can't possibly know anything about gravity, or the origins of the solar system, or its various and eventual perturbations. >>>>>>> >>>>>>> As I have been saying for 30 years: Beware Ptolemy and curve fitting. >>>>>>> >>>>>>> The details of reality matter. >>>>>>> >>>>>>> Jim >>>>>>> >>>>>>> >>>>>>> On Jan 24, 2014, at 7:02 PM, Ivan Raikov wrote: >>>>>>>> >>>>>>>> I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling approaches. While it is true that complete and detailed understanding of neurophysiology and anatomy is at the heart of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical physics, and a lot can be learned about information representation and transmission in the brain using mathematical theories about distributed communicating processes. As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they can also be used to understand at least some of the fundamental properties of brain structures and processes?
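[Jim's Ptolemy point above can be made quantitative: any closed orbit can be approximated to arbitrary accuracy by stacking uniformly rotating circles, because epicycles are exactly the terms of a discrete Fourier series - accuracy improves with every added circle while saying nothing about gravity. A toy sketch, my own construction and purely illustrative:]

```python
import cmath
import math

def epicycle_fit(points, n_circles):
    """RMS error of approximating a closed 2-D curve (complex samples)
    with n_circles uniformly rotating circles, i.e. the n_circles largest
    terms of its discrete Fourier series. A toy illustration."""
    n = len(points)
    # Each DFT coefficient is one "epicycle": radius |c_k|, angular speed k.
    coeffs = [sum(points[j] * cmath.exp(-2j * math.pi * k * j / n)
                  for j in range(n)) / n
              for k in range(n)]
    # Ptolemy's move: keep only the biggest circles, add more as needed.
    keep = set(sorted(range(n), key=lambda k: -abs(coeffs[k]))[:n_circles])
    recon = [sum(coeffs[k] * cmath.exp(2j * math.pi * k * j / n)
                 for k in keep)
             for j in range(n)]
    return math.sqrt(sum(abs(p - r) ** 2
                         for p, r in zip(points, recon)) / n)
```

[For a curve built from three harmonics, one circle leaves a residual error and three circles fit it essentially exactly - more circles always help, which is precisely why epicycles were predictive without being explanatory.]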
>>>>>>>> >>>>>>>> -Ivan Raikov >>>>>>>> >>>>>>>> On Sat, Jan 25, 2014 at 8:31 AM, james bower wrote: >>>>>>>>> [snip] >>>>>>>>> An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop. This might to some sound like a semantic quibble, but I assure you it is not. >>>>>>>>> >>>>>>>>> If you believe, as I do, that the brain solves very hard problems, in very sophisticated ways, that involve, in some sense, the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point. >>>>>>>>> >>>>>>>>> What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention experimentally and in our models to the actual detailed anatomy and physiology of the brain's circuits and cells. >>>>>>>>> >>>>>>> >>>>>>> >>>>>>> Dr. James M. Bower Ph.D. >>>>>>> Professor of Computational Neurobiology >>>>>>> Barshop Institute for Longevity and Aging Studies >>>>>>> 15355 Lambda Drive >>>>>>> University of Texas Health Science Center >>>>>>> San Antonio, Texas 78245 >>>>>>> >>>>>>> Phone: 210 382 0553 >>>>>>> Email: bower at uthscsa.edu >>>>>>> Web: http://www.bower-lab.org >>>>>>> twitter: superid101 >>>>>>> linkedin: Jim Bower >>>>>>> >>>>>>> CONFIDENTIAL NOTICE: >>>>>>> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient.
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or >>>>>>> any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be >>>>>>> immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. >>>> >>>> -- >>>> Brad Wyble >>>> Assistant Professor >>>> Psychology Department >>>> Penn State University >>>> >>>> http://wyblelab.com 
From achler at gmail.com Mon Jan 27 13:31:45 2014 From: achler at gmail.com (Tsvi Achler) Date: Mon, 27 Jan 2014 10:31:45 -0800 Subject: Connectionists: Brain-like computing fanfare and big data fanfare Message-ID: Jim has referred twice now to a list of problems and brain-like phenomena that models should strive to emulate. In my mind this gets to the heart of the matter. However, there was a discussion of one or two points and then it fizzled. The brain shows many electrophysiological but also behavioral phenomena. I would like to revive that discussion (and include not just neuroscience phenomena) in a list, to show how significant these issues are and the size of the gap in our knowledge, and to focus more specifically on what is brain-like. Let me motivate this even further. The biggest bottleneck to understanding the brain is understanding how the brain/neurons perform recognition. Recognition is an essential foundation upon which cognition and intelligence are based. Without recognition the brain cannot interact with the world. Thus a better knowledge of recognition will open up the brain for better understanding. Here is my humble list, and I would like to open it to discussions, opinions, suggestions, and additions. 1) Dynamics. Let's be very specific. Oscillations are observed during recognition (as Jim and others mentioned) and they are not satisfactorily accounted for. Since single oscillation generators have not been found, I interpret this to mean that the oscillations are likely due to some type of feedforward-feedback connections functioning during recognition. 2) Difficulty with similarity. Discriminating between similar patterns takes longer and is more prone to error.
This is not primarily a spatial search phenomenon, because it occurs in all modalities, including olfaction, which has very poor spatial resolution. Thus it appears to be a fundamental part of the neural mechanisms of recognition. 3) Asymmetry. This is related to the signal-to-noise-like phenomena to which difficulty with similarity belongs. Asymmetry is a special case of difficulty with similarity, where a similar pattern with more information will predominate over one with less. 4) Biased competition (priming). Prior expectation affects recognition time and accuracy. 5) Recall-ability. The same neural recognition network that can perform recognition likely performs recall. This is suggested by studies in which sensory region activation can be observed when recognized patterns are imagined, and by the existence of mirror neurons. 6) Update-ability. The brain can learn new information (online, outside the IID assumption) and immediately use it. It does not have to retrain on all old information (an IID requirement for feed-forward neural networks). If we do not seriously consider networks that inherently display these properties, I believe the neural network community will continue rehashing ideas and see limited progress. My strong yet humble opinions, -Tsvi
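[On point 1 above: a dedicated oscillation generator is indeed unnecessary - coupling a fast excitatory variable to a slower negative-feedback (recovery) variable is enough to produce a sustained rhythm. A minimal sketch using the FitzHugh-Nagumo caricature with standard textbook parameters; this is my own illustration of the feedback principle, not a model of recognition or a claim about Tsvi's mechanism:]

```python
def fitzhugh_nagumo(i_ext=0.5, t_max=200.0, dt=0.01):
    """Forward-Euler sketch of the FitzHugh-Nagumo model (textbook
    parameters). v is a fast excitatory variable; w is a slower
    negative-feedback (recovery) variable. Neither variable is a
    pacemaker: the rhythm comes from the loop between them."""
    a, b, eps = 0.7, 0.8, 0.08
    v, w = -1.0, -0.5
    vs = []
    for _ in range(int(t_max / dt)):
        dv = v - v ** 3 / 3.0 - w + i_ext   # fast activation
        dw = eps * (v + a - b * w)          # slow negative feedback
        v += dt * dv
        w += dt * dw
        vs.append(v)
    return vs
```

[With a constant drive of 0.5 the resting state is unstable and the loop settles into a limit cycle - sustained oscillation without any single oscillating element.]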
From randy.oreilly at colorado.edu Mon Jan 27 14:57:06 2014 From: randy.oreilly at colorado.edu (Randall O'Reilly) Date: Mon, 27 Jan 2014 12:57:06 -0700 Subject: Connectionists: Building a healthy theoretical neuroscience community In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> Message-ID: <8106CF7E-DA55-4966-A9E1-26C31B6C75B7@colorado.edu> To pick up on this thread of the discussion: * If we want to build a healthy and vibrant community, we need to be supportive of a plurality of research approaches - there are sub-fields of science where everyone is supportive and constructive, and these fields get their papers published in top journals, and cited widely, and grants funded, etc. Then there are fields where everyone is cutting down everything that is "NIH" (not invented here). Those fields are suicidal. Perhaps because of the creative nature of the modeling process, people tend to get strongly attached to their creations, and mistakenly view other ideas as threatening. If there are multiple clear, sensible alternative models, with distinct testable predictions, then we are contributing productively to the larger field, and almost every experimentalist I've ever talked with is excited by that kind of thing. Along these lines, I personally have made a strong conscious effort to be as constructive and positive as I can about papers that I also have strong concerns about, including one that was discussed earlier in this thread. The more theoretical modeling papers that are published in high-profile journals, the better - we really do all win when any one of us is making an impact!
[Obviously, you don't want to suspend all criticism, and you don't want obviously bad stuff to be published, but you really do have to work hard to distinguish opinion from quality.] * The perception expressed in several comments that theoretical work is not impactful rings false to me, and sends an overly pessimistic message and "negative self-image", which is also not constructive to building a growing and vibrant community. I can point to a large number of domains where computational models have played central roles in shaping the broader theoretical discourse and experiments, including in the hippocampus, basal ganglia & dopamine reinforcement learning, prefrontal cortex, and in visual object recognition (and probably a lot of other areas I don't know enough about). The recent work on grid cells in the entorhinal cortex is a spectacular example of the interplay between models and experiments, for example. * Also, whoever thinks the BRAIN initiative is draining resources is crazy. It is a TINY amount of $ relative to overall budgets, and furthermore all the recent DARPA initiative teams (which represent roughly $100 million, I think) that I know about involved a major contribution from theoretical / computational modeling. More generally, various branches of the DOD and intelligence research communities, and obviously industry such as Google etc., are increasingly optimistic about brain-inspired approaches to intelligence, so this is an incredible opportunity for growing our field. And from what I've seen, there is a strong recognition among those in charge of giving out the $ that this is *the* hard problem, and it will take a sustained investment to make progress, but there is enough promise already that these investments are clearly going to pay off. So we should be breaking out the champagne, not spewing the sour grapes! I would be doing so if I wasn't so busy writing so many damn grant proposals!
:) - Randy On Jan 26, 2014, at 12:43 PM, Geoffrey Hinton wrote: > I can no longer resist making one point. > > A lot of the discussion is about telling other people what they should NOT be doing. I think people should just get on and do whatever they think might work. Obviously they will focus on approaches that make use of their particular skills. We won't know until afterwards which approaches led to major progress and which were dead ends. Maybe a fruitful approach is to model every connection in a piece of retina in order to distinguish between detailed theories of how cells get to be direction selective. Maybe it's building huge and very artificial neural nets that are much better than other approaches at some difficult task. Probably it's both of these and many others too. The way to really slow down the expected rate of progress in understanding how the brain works is to insist that there is one right approach and nearly all the money should go to that approach. > > Geoff > > > > On Sat, Jan 25, 2014 at 3:00 PM, Brad Wyble wrote: > I am extremely pleased to see such vibrant discussion here, and my thanks to Juyang for getting the ball rolling. > > Jim, I appreciate your comments and I agree in large measure, but I have always disagreed with you as regards the necessity of simulating everything down to a lowest common denominator. Like you, I enjoy drawing lessons from the history of other disciplines, but unlike you, I don't think the analogy between neuroscience and physics is all that clear cut. The two fields deal with vastly different levels of complexity, and therefore I don't think it should be expected that they will (or should) follow the same trajectory. > > To take your Purkinje cell example, I imagine that there are those who view any such model that lacks an explicit simulation of the RNA as being incomplete. To such a person, your models would also be unfit for the literature.
So would we then change the standards such that no model can be published unless it includes an explicit simulation of the RNA? And why stop there? Where does it end? In my opinion, we can't make effective progress in this field if everyone is bound to the molecular level. > > I really think that neuroscience presents a fundamental challenge that is not present in physics, which is that progress can only occur when theory is developed at different levels of abstraction that overlap with one another. The challenge is not how to force everyone to operate at the same level of formal specificity, but how to allow effective communication between researchers operating at different levels. > > In aid of meeting this challenge, I think that our field should take more inspiration from engineering, a model-based discipline that already has to work simultaneously at many different scales of complexity and abstraction. > > > Best, > Brad Wyble From ccchow at pitt.edu Mon Jan 27 14:05:23 2014 From: ccchow at pitt.edu (Carson Chow) Date: Mon, 27 Jan 2014 14:05:23 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <7748E2B7-6E44-4CB6-BD97-8D7CB9507CA6@uthscsa.edu> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E15A9E.5080207@cse.msu.edu> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1ADA4840-8CE9-4BD5-B17F-ECA5DFB7171E@mail.nih.gov> <7748E2B7-6E44-4CB6-BD97-8D7CB9507CA6@uthscsa.edu> Message-ID: <7F74AC09-4D7C-4A50-B253-0407D8CD3F63@pitt.edu> Although Jim already responded, here is the original comment that didn't appear because I posted with the wrong email. I am greatly enjoying this discussion. The original complaint in this thread seems to be that the main problem of (computational) neuroscience is that people do not build upon the work of others enough. 
In physics, no one reads the original works of Newton or Einstein, etc., anymore. There is a set canon of knowledge that everyone learns from classes and textbooks. Often if you actually read the original works you'll find that what the early greats actually did and believed differs from the current understanding. I think it's safe to say that computational neuroscience has not reached that level of maturity. Unfortunately, it greatly impedes progress if everyone tries to redo and reinvent what has come before. The big question is why this is the case. This is a search problem. It could be true that one of the proposed approaches in this thread or some other existing idea is optimal, but the opportunity cost to follow it is great. How do we know it is the right one? It is safer to just follow the path we already know. We simply all don't believe enough in any one idea for all of us to pursue it right now. It takes a massive commitment to learn any one thing, much less everything on John Weng's list. I don't know too many people who could be fully conversant in math, AI, cognitive science, neurobiology, and molecular biology. There are only so many John von Neumanns, Norbert Wieners or Terry Taos out there. The problem actually gets worse with more interest and funding, because there will be even more people and ideas to choose from. This is a classic market failure, where too many choices destroy liquidity and accurate pricing. My prediction is that we will continue to argue over these points until one or a small set of ideas finally wins out. But who is to say that thirty years is a long time? There were almost two millennia between Ptolemy and Kepler. However, once the correct idea took hold it was practically a blink of an eye to get from Kepler to Maxwell. But physics is so much simpler than neuroscience. In fact, my definition of physics is the field of easily model-able things.
Whether or not a similar revolution will ever take place in neuroscience remains to be seen. ---------------------- Carson C Chow LBM, NIDDK, NIH > On Jan 27, 2014, at 12:55, james bower wrote: > > Good points, Carson. > > However, I think one problem with this conversation is that it is (ironically, given the number of engineers here) focused on outcomes rather than technology. > > 25 years ago, we introduced a modeling system (GENESIS) specifically with the intent to start to build an infrastructure (as a community, i.e. one of the first open source modeling platforms out there), so that people could share models leading to community models and start to use the models (rather than story telling) as a way to communicate and collaborate. (Remember the Rochester connectionist simulator - similar idea.) > > The paper I linked to in my first posting describes the process that led to the establishment of what is, at this point in time I would claim, the only community single cell model in computational neuroscience. > > Several years ago we wrote several proposals to NIH and NSF about making explicit what was always implicit and building a new form of journal around GENESIS - and in particular GENESIS version 3 - that would base publications on models rather than stories with pictures. > > Looking back to Newton's time - the invention of the scientific journal, and before that the invention of the printing press as a technology, drove huge change not only in science, but also in society as a whole. > > However, as you say, physics is the study of simple things, where, in principle (and certainly in those days) you could publish a sufficient level of equations in that form of journal so that others knew what you were talking about and could contribute.
> > As I said in this sequence earlier, closed-form solutions, no matter how attractive, aren't likely to be able to represent the complexities of the nervous system, which instead will depend (and have depended) on numerical simulations. > > It is obviously absurd to take a digital entity (such as a model) and convert it into a journal article built around pictures to publish in an e-journal. > > Anyway, much more can be said about that - but the point is that those GENESIS grants were rejected, at least in part, because the study sections were dominated by those who believe that we shouldn't be modeling things at this level of complexity. > > This point of view is now fully represented in the version of the summer courses derived from the one I started with Christof Koch many years ago, in the CNS meeting I started, and in the Journal of Computational Neuroscience I started as well (as a vehicle to actually start publishing models - had the publishers and the editorial board understood why this was important). > > Until one week ago (and for the last 3 years), I was the co-coordinator of the Neuroscience Group of NIH's interdisciplinary multi-scale modeling initiative (IMAG). Sitting on the review committee for that program, I once again found myself defending biologically based modeling - to no avail. > > So, in fact, I believe the solution to this problem has to do with developing the right technology for collaboration and communication, which we now have with the Internet - combined with a commitment by funding agencies to support its development and use. Complex compartmental models have no chance competitively in a world where a theory can be boiled down and published as some form of energy functional that can be understood by everyone on this list.
> > We have a communication problem related to the complexity of the right technology, combined with a lack of use of the right technology to help a larger number of people understand the value of the approach. > > Instead of all this philosophy, I would MUCH RATHER have someone on this list actually read the paper I linked on 40 years of realistically modeling the cerebellar Purkinje cell - and then not have to use analogies from well-understood physics (and engineering) to discuss a question rooted in biology. Let's use the Purkinje cell as a for-instance and talk about science, rather than philosophy. The model is even there for you to pick apart and understand (one reason it became the first cellular community model is that we actually provided full access to it on the internet). So how about it, guys???? > > Anyway, returning to neuroscience, the whole field is dominated by storytellers spinning yarns and myths, who have no interest in having anyone else have the tools to really check their stories (i.e., by putting them in the common language of mathematics). Sadly, they continue to be able to convince the government to fund storytelling and the construction of the kinds of tools that facilitate storytelling (BRAIN project and neuroimaging). > > I also spent 10 years as part of the original Human Brain Project at NIH - which was supposed to foster collaboration technology, and didn't, for the same reason. > > Foolishly, someone last year asked me to speak to a bunch of graduate students and postdocs about how to have a successful career in science. Foolish, of course, because I am not sure that mine applies. Doubly foolish because, as I told them, they probably don't want graduate and postdoctoral students taking advice from me anyway. > > They persisted, so I did: > > I told them it was easy: pick a story, any story, and stick to it - best if the story can be well understood in 1 1/2 pages or less.
> > One of the students asked what you should do if your data or someone else's data doesn't support your story. I told them: bury or "smooth" your own data, and do your best to make sure that the other guy's data isn't published and their grant isn't renewed. Also, for sure exclude that person as a reviewer on your next papers - and NEVER reference their work in your paper. > > One of the students then said, "That doesn't sound much like science." > > I said: I wasn't asked to tell you how to do science, I was asked to tell you how to have a successful scientific career. > > :-) > > Jim > > >> On Jan 27, 2014, at 11:14 AM, Carson Chow wrote: >> >> I am greatly enjoying this discussion. >> >> The original complaint in this thread seems to be that the main problem of (computational) neuroscience is that people do not build upon the work of others enough. In physics, no one reads the original works of Newton or Einstein, etc., anymore. There is a set canon of knowledge that everyone learns from classes and textbooks. Often if you actually read the original works you'll find that what the early greats actually did and believed differs from the current understanding. I think it's safe to say that computational neuroscience has not reached that level of maturity. Unfortunately, it greatly impedes progress if everyone tries to redo and reinvent what has come before. >> >> The big question is why this is the case. This is a search problem. It could be true that one of the proposed approaches in this thread or some other existing idea is optimal, but the opportunity cost to follow it is great. How do we know it is the right one? It is safer to just follow the path we already know. We simply all don't believe enough in any one idea for all of us to pursue it right now. It takes a massive commitment to learn any one thing, much less everything on John Weng's list.
I don't know too many people who could be fully conversant in math, AI, cognitive science, neurobiology, and molecular biology. There are only so many John von Neumanns, Norbert Wieners, or Terry Taos out there. The problem actually gets worse with more interest and funding, because there will be even more people and ideas to choose from. This is a classic market failure, where too many choices destroy liquidity and accurate pricing. My prediction is that we will continue to argue over these points until one or a small set of ideas finally wins out. But who is to say that thirty years is a long time? There were almost two millennia between Ptolemy and Kepler. However, once the correct idea took hold, it was practically a blink of an eye to get from Kepler to Maxwell. Then again, physics is so much simpler than neuroscience. In fact, my definition of physics is the field of easily model-able things. Whether or not a similar revolution will ever take place in neuroscience remains to be seen. >> >> ---------------------- >> Carson C Chow >> LBM, NIDDK, NIH >> >>> On Jan 26, 2014, at 22:05, james bower wrote: >>> >>> Thanks, Danny. Funny about coincidences. >>> >>> I almost posted earlier to the list a review I was asked to write of exactly the book you reference: >>> >>> 23 Problems in Systems Neuroscience, edited by L. van Hemmen and T. Sejnowski. >>> >>> It is appended to this email - >>> >>> Needless to say, while I couldn't agree with you more on the importance of asking the right questions, many of the chapters in this book make clear, I believe, the fundamental underlying problem posed by having no common theoretical basis for neuroscience research. >>> >>> >>> Jim Bower >>> >>> >>> >>> Published in The American Scientist >>> >>> Are We Ready for Hilbert? >>> >>> James M. Bower >>> >>> >>> >>> 23 Problems in Systems Neuroscience. Edited by J. Leo van Hemmen and Terrence J. Sejnowski. xvi + 514 pp. Oxford University Press, 2006. $79.95.
>>> >>> >>> >>> 23 Problems in Systems Neuroscience grew out of a symposium held in Dresden in 2000, inspired by an address given by the great mathematician David Hilbert 100 years earlier. In his speech, Hilbert commemorated the start of the 20th century by delivering what is now regarded as one of the most influential mathematical expositions ever made. He outlined 23 essential problems that not only organized subsequent research in the field, but also clearly reflected Hilbert's axiomatic approach to the further development of mathematics. Anticipating his own success, he began, "Who of us would not be glad to lift the veil behind which the future lies hidden; to cast a glance at the next advances of our science and at the secrets of its development during future centuries?" >>> >>> I take seriously the premise, represented in this new volume's title and preface, that it is intended to "serve as a source of inspirations for future explorers of the brain." Unfortunately, if the contributors sought to exert a "Hilbertian" influence on the field by highlighting 23 of the most important problems in systems neuroscience, they have, in my opinion, failed. In failing, however, this book clearly illustrates fundamental differences between neuroscience (and biology in general) today and mathematics (and physics) in 1900. >>> >>> Implicit in Hilbert's approach is the necessity for some type of formal structure underlying the problems at hand, allowing other investigators to understand their natures and then collaboratively explore a general path to their solutions. Yet there is little consistency in the form of the problems presented in this book. Instead, many (perhaps most) of the chapters are organized, at best, around vague questions such as, "How does the cerebral cortex work?" At worst, the authors simply recount what is, in effect, a story promoting their own point of view. >>> >>> The very first chapter, by Gilles Laurent, is a good example of the latter.
After starting with a well-worn plea for considering the results of the nonmammalian, nonvisual systems he works on, Laurent summarizes a series of experiments (many of them his own) supporting his now-well-known position regarding the importance of synchrony in neuronal coding. This chapter could have presented a balanced discussion of the important questions surrounding the nature of the neural code (as attempted in one chapter by David McAlpine and Alan R. Palmer and another by C. van Vreeswijk), or even referenced and discussed some of the recently published papers questioning his interpretations. Instead, the author chose to attempt to convince us of his own particular solution. >>> >>> I don't mean to pick on Laurent, as his chapter takes a standard form in symposia volumes; rather, his approach illustrates the general point that much of "systems neuroscience" (and neuroscience in general) revolves around this kind of storytelling. The chapter by Bruno A. Olshausen and David J. Field makes this point explicitly, suggesting that our current "story-based" view of the function of the well-studied visual cortex depends on (1) a biased sampling of neurons, (2) a bias in the kind of stimuli we present and (3) a bias in the kinds of theories we like to construct. >>> >>> In fairness, several chapters do attempt to address real problems in a concise and unbiased way. The chapter by L. F. Abbott, for example, positing, I think correctly, that the control of the flow of information in neural systems is a central (and unsolved) problem, is characteristically clear, circumscribed and open-minded. Refreshingly, Abbott's introduction states, "In the spirit of this volume, the point of this contribution is to raise a question, not to answer it. . . . I have my prejudices, which will become obvious, but I do not want to rule out any of these as candidates, nor do I want to leave the impression that the list is complete or that the problem is in any sense solved."
Given his physics background, Abbott may actually understand enough about Hilbert's contribution to have sought its spirit. Most chapters, however, require considerable detective work, and probably also a near-professional understanding of the field, to find anything approaching Hilbert's enumeration of fundamental research problems. >>> >>> In some sense I don't think the authors are completely to blame. Although many are prominent in the field, this lack of focus on more general and well-defined problems is, I believe, endemic in biology as a whole. While this may slowly be changing, the question of how, and even if, biology can move from a fundamentally descriptive, story-based science to one from which Hilbertian-style problems can be extracted may be THE problem in systems neuroscience. A few chapters do briefly raise this issue. For example, in their enjoyable article on synesthesia, V. S. Ramachandran and Edward M. Hubbard identify their approach as not fashionable in psychology partly because of "the lingering pernicious effect of behaviorism" and partly because "psychologists like to ape mature quantitative physics - even if the time isn't ripe." >>> >>> Laurenz Wiskott, in his chapter on possible mechanisms for size and shift invariance in visual (and perhaps other) cortices, raises what may be the more fundamental question of whether biology is even amenable to the form of quantification and explanation that has been so successful in physics: >>> >>> >>> >>> "Either the brain solves all invariance problems in a similar way based on a few basic principles or it solves each invariance problem in a specific way that is different from all others. In the former case [asking] the more general question would be appropriate. . . . In the latter case, that is, if all invariance problems have their specific solution, the more general question would indeed be a set of questions and as such not appropriate to be raised and discussed here."
>>> >>> >>> >>> He then moderates the dichotomy by stating diplomatically, "There is, of course, a third and most likely alternative, and that is that the truth lies somewhere between these two extremes." Thus, Wiskott leaves unanswered the fundamental question about the generality of brain mechanisms or computational algorithms. As in mathematics 100 years ago, answering basic questions in systems neuroscience is tied up in assumptions regarding appropriate methodology. For Hilbert's colleagues, this was obvious and constituted much of the debate following his address; this fundamental issue, however, is only rarely discussed in biology. >>> >>> Indeed, I want to be careful not to give the impression that these kinds of big-picture issues are given prominence in this volume - they are not. Rather, as is typical for books generated by these kinds of symposia, many of the chapters are simply filled with the particular details of a particular subject, although several authors should be commended for at least discussing their favorite systems in several species. However, given the lack of overall coordination, one wonders what impact this volume will have. >>> >>> One way to gauge the answer is to look for evidence that the meeting presentations influenced the other participants. As an exercise, I summarized the major points and concerns each author raised in their chapters and then checked that list against the assumptions and assertions made by the other authors writing on similar subjects. The resulting tally, I would assert, provides very little evidence that these authors attended the same meeting - or perhaps even that they are part of the same field! >>> >>> For example, the article titled "What Is Fed Back?" by Jean Bullier identifies, I think correctly, what will become a major shift in thinking about how brains are organized.
As Bullier notes, there is growing evidence that the internal state of the brain has a much more profound effect on the way the brain processes sensory information than previously suspected. Yet this fundamental issue is scarcely mentioned in the other chapters, quite a few of which are firmly based on the old feed-forward "behaviorist" model of brain function. Similarly, the chapter by Olshausen and Field is followed immediately by a paper by Steven W. Zucker on visual processing that depends on many of the assumptions that Olshausen and Field call into question. >>> >>> One hundred years ago, Hilbert's 23 questions organized a field. The chapters in this book make pretty clear that we are still very far away from having a modern-day Hilbert, or even a committee of "experts," come up with a list of 23 fundamental questions that are accepted, or perhaps even understood, by the field of neuroscience as a whole. >>> >>> >>> >>>> >>>> Asking good questions that come with well-developed requirements is the starting point of good science. At least that is what we tell our graduate students. >>>> >>>> .. Danny >>>> >>>> ======================= >>>> Daniel L. Silver, Ph.D. danny.silver at acadiau.ca >>>> Professor, Jodrey School of Computer Science, Acadia University >>>> Office 314, Carnegie Hall, Wolfville, NS Canada B4P 2R6 >>>> p:902-585-1413 f:902-585-1067 >>>> >>>> >>>> From: Geoffrey Hinton >>>> Date: Sunday, 26 January, 2014 3:43 PM >>>> To: Brad Wyble >>>> Cc: Connectionists list >>>> Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare >>>> >>>> I can no longer resist making one point. >>>> >>>> A lot of the discussion is about telling other people what they should NOT be doing. I think people should just get on and do whatever they think might work. Obviously they will focus on approaches that make use of their particular skills. We won't know until afterwards which approaches led to major progress and which were dead ends.
Maybe a fruitful approach is to model every connection in a piece of retina in order to distinguish between detailed theories of how cells get to be direction-selective. Maybe it's building huge and very artificial neural nets that are much better than other approaches at some difficult task. Probably it's both of these and many others too. The way to really slow down the expected rate of progress in understanding how the brain works is to insist that there is one right approach and that nearly all the money should go to that approach. >>>> >>>> Geoff >>>> >>>> >>>> >>>>> On Sat, Jan 25, 2014 at 3:00 PM, Brad Wyble wrote: >>>>> I am extremely pleased to see such vibrant discussion here, and my thanks to Juyang for getting the ball rolling. >>>>> >>>>> Jim, I appreciate your comments and I agree in large measure, but I have always disagreed with you as regards the necessity of simulating everything down to a lowest common denominator. Like you, I enjoy drawing lessons from the history of other disciplines, but unlike you, I don't think the analogy between neuroscience and physics is all that clear-cut. The two fields deal with vastly different levels of complexity, and therefore I don't think it should be expected that they will (or should) follow the same trajectory. >>>>> >>>>> To take your Purkinje cell example, I imagine that there are those who view any such model that lacks an explicit simulation of the RNA as being incomplete. To such a person, your models would also be unfit for the literature. So would we then change the standards such that no model can be published unless it includes an explicit simulation of the RNA? And why stop there? Where does it end? In my opinion, we can't make effective progress in this field if everyone is bound to the molecular level.
>>>>> >>>>> I really think that neuroscience presents a fundamental challenge that is not present in physics, which is that progress can only occur when theory is developed at different levels of abstraction that overlap with one another. The challenge is not how to force everyone to operate at the same level of formal specificity, but how to allow effective communication between researchers operating at different levels. >>>>> >>>>> In aid of meeting this challenge, I think that our field should take more inspiration from engineering, a model-based discipline that already has to work simultaneously at many different scales of complexity and abstraction. >>>>> >>>>> >>>>> Best, >>>>> Brad Wyble >>>>> >>>>> >>>>> >>>>> >>>>>> On Sat, Jan 25, 2014 at 9:59 AM, james bower wrote: >>>>>> Thanks for your comments Thomas, and good luck with your effort. >>>>>> >>>>>> I can't refrain from making the probably culturist remark that this seems a very practical approach. >>>>>> >>>>>> I have for many years suggested that those interested in advancing biology in general, and neuroscience in particular, to a "paradigmatic" as distinct from a descriptive / folkloric science, would benefit from understanding this transition as physics went through it in the 16th and 17th centuries. In many ways, I think that is where we are today, although with perhaps the decided disadvantage that we have a lot of physicists around who, again in my view, don't really understand the origins of their own science. By that I mean that they don't understand how much of their current scientific structure, for example the relatively clean separation between "theorists" and "experimentalists," depends on the foundation built by those (like Newton) who were both, in an earlier time. Once you have a solid underlying computational foundation for a science, then you have the luxury of this kind of specialization - as there is a framework that ties it all together.
The Higgs effort being a very visible recent example. >>>>>> >>>>>> Neuroscience has nothing of the sort. As I point out in the article I linked to in my first posting, while it was first proposed 40 years ago (by Rodolfo Llinas) that the cerebellar Purkinje cell has active dendrites (i.e., that there are voltage-dependent ion channels in the dendrite, not directly associated with synapses, that govern its behavior), and 40 years of anatomically and physiologically realistic modeling have been necessary to start to understand what they do, many cerebellar modeling efforts today simply ignore these channels. While that, again, may seem to many on this list too far buried in the details, these voltage-dependent channels make the Purkinje cell the computational device that it is. >>>>>> >>>>>> Recently, I was asked to review a cerebellar modeling paper in which the authors actually acknowledged that their model lacked these channels because they would have been too computationally expensive to include. Sadly for those authors, I was asked to review the paper for the usual reason - that several of our papers were referenced accordingly. They likely won't make that mistake again - as, after of course complimenting them on the fact that they were honest (and knowledgeable) enough to have remarked that their Purkinje cells weren't really Purkinje cells, I had to reject the paper for the same reason. >>>>>> >>>>>> As I said, they likely won't make that mistake again - and will very likely get away with it. >>>>>> >>>>>> Imagine a comparable situation in a field (like physics) which has established a structural base for its enterprise: "We found it computationally expedient to ignore the second law of thermodynamics in our computations - sorry."
BTW, I know that details are ignored all the time in physics as one deals with descriptions at different levels of scale - although even there, the field clearly would like to have a way to link across different levels of scale. I would claim, however, that that is precisely the "trick" that biology uses to "beat" the second law - linking all levels of scale together - another reason why you can't ignore the details in biological models if you really want to understand how biology works. (Too cryptic a comment, perhaps.) >>>>>> >>>>>> Anyway, my advice would be to consider how physics made this transition many years ago, and ask how neuroscience (and biology) can do so now. Key points, I think, are: >>>>>> - you need to produce students who are REALLY both experimental and theoretical (like Newton) (and that doesn't mean programs that "import" physicists and give them enough biology to believe they know what they are doing, or programs that link experimentalists to physicists to solve their computational problems); >>>>>> - you need to base the efforts on models (and therefore mathematics) of sufficient complexity to capture the physical reality of the system being studied (as Kepler was forced to do to make the sun-centric model of the solar system even close to as accurate as the previous earth-centered system); >>>>>> - you need to build a new form of collaboration and communication that can support the complexity of those models. Fundamentally, we continue to use the publication system (short papers in a journal) that was invented as part of the transformation of physics way back then. Our laboratories are also largely isolated and non-cooperative, more appropriate for studying simpler things (like those in physics). Fortunately for us, we have a new communication tool (the Internet), although, as can be expected, we are mostly using it to reimplement old-style communication systems (e-journals) with a few twists (supplemental materials).
>>>>>> - funding agencies need to insist that anyone doing theory is REALLY linked to the experimental side, and vice versa. I proposed a number of years ago to NIH that they would make it into the history books if they simply required, the following Monday, that any submitted experimental grant include a REAL theoretical and computational component. Sadly, they interpreted that as meaning that P.I.s should state "an hypothesis" - which itself is remarkable, because most of the "hypotheses" I see stated in federal grants are actually statements of what the P.I. believes to be true. Don't get me started on human imaging studies. arggg >>>>>> - as long as we are talking about what funding agencies can do, how about the following structure for grants: all grants must be submitted collaboratively by two laboratories that have different theories (better, models) about how a particular part of the brain works. The grant should support a set of experiments that both parties agree distinguish between their two points of view. All results must be published with joint authorship. In effect, that is how physics works - given its underlying structure. >>>>>> - you need to get rid, as quickly as possible, of the pressure to "translate" neuroscience research explicitly into clinical significance. We are not even close to being able to do that intentionally, and the pressure (which is essentially a giveaway to the pharma and biotech industries anyway) is forcing neurobiologists to link to what is arguably the least scientific form of research there is - clinical research. Society simply needs to understand that an investment in basic research will eventually result in all the wonderful outcomes for humans we would all like; but this distortion is killing real neuroscience just at a critical time, when we may finally have the tools to make the transition to a paradigmatic science.
>>>>>> As some of you know, I have been trying to do these things for many years - with the GENESIS project, with the original CNS graduate program at Caltech, with the CNS meetings (even, originally, with NIPS), and with the first "Methods in Computational Neuroscience" course at the Marine Biological Laboratory, whose latest incarnation in Brazil (LASCON) is actually wrapping up next week - and of course with my own research and students. Of course, I have not been alone in this, but it is remarkable how little impact all that has had on neuroscience or neuro-engineering. I have to say, honestly, that the strong tendency seems to be for these efforts to snap back to non-realistic, non-biologically based modeling and theoretical efforts. >>>>>> >>>>>> Perhaps Canada, in its usual practical and reasonable way (sorry), can figure out how to do this right. >>>>>> >>>>>> I hope so. >>>>>> >>>>>> Jim >>>>>> >>>>>> p.s. I have also been proposing recently that we scuttle the "intro neuroscience" survey courses in our graduate programs (religious instruction) and instead organize an introductory course built around the history of the discovery of the origin of the action potential, which culminated in the first (and last) Nobel Prize for work in computational neuroscience, the Hodgkin-Huxley model. The 50th anniversary of that prize was celebrated last year, and the year before I helped to organize a meeting celebrating the 60th anniversary of the publication of the original papers (which I care much more about anyway). That meeting was, I believe, the first meeting in neuroscience ever organized around a single (mathematical) model or theory - and in organizing it, I required all the speakers to show the HH model on their first slide, indicating which term or feature of the model their work related to. Again, a first - but possible, as this is about the only "community model" we have.
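[For readers who have not seen the "community model" in question: the following is a minimal illustrative sketch (added in editing, not part of the original thread) of the standard single-compartment Hodgkin-Huxley equations with the textbook squid-axon parameters, integrated by simple forward Euler.]

```python
# Illustrative sketch only: the classic single-compartment Hodgkin-Huxley
# model with the standard squid-giant-axon parameter fits, integrated by
# forward Euler. Units: mV, ms, uF/cm^2, mS/cm^2, uA/cm^2.
import math

C_M = 1.0                              # membrane capacitance
G_NA, G_K, G_L = 120.0, 36.0, 0.3      # maximal conductances
E_NA, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials

# Voltage-dependent rate functions; the two x/(1-exp(-x/10)) forms have a
# removable singularity, guarded with their analytic limits.
def alpha_m(v):
    x = v + 40.0
    return 1.0 if abs(x) < 1e-7 else 0.1 * x / (1.0 - math.exp(-x / 10.0))

def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))

def alpha_n(v):
    x = v + 55.0
    return 0.1 if abs(x) < 1e-7 else 0.01 * x / (1.0 - math.exp(-x / 10.0))

def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_inj, t_ms=50.0, dt=0.01):
    """Return the membrane-potential trace for a constant injected current."""
    v, m, h, n = -65.0, 0.0529, 0.5961, 0.3177  # resting steady state
    trace = []
    for _ in range(int(t_ms / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)     # fast sodium current
        i_k  = G_K * n**4 * (v - E_K)           # delayed-rectifier potassium
        i_l  = G_L * (v - E_L)                  # leak
        v += dt * (i_inj - i_na - i_k - i_l) / C_M
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        trace.append(v)
    return trace

print(max(simulate(10.0)))  # suprathreshold current: spikes rise well above 0 mV
print(max(simulate(0.0)))   # no input: stays near the -65 mV resting potential
```

Note that every term here - the m³h and n⁴ gating structure, the rate functions, the reversal potentials - was fit directly to voltage-clamp measurements on the squid giant axon, which is precisely the point about the inseparability of theory and experiment.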
>>>>>> >>>>>> Most neuroscience textbooks today don't include that equation (a second-order differential equation) and present the HH model primarily as a description of the action potential. Most theorists regard the HH model as a prime example of how progress can be made by ignoring the biological details. Both views and interpretations are historically and practically incorrect. In my opinion, if you can't handle the math in the HH model, you shouldn't be a neurobiologist, and if you don't understand the profound impact of HH's knowledge and experimental study of the squid giant axon on the model, you shouldn't be a neuro-theorist either. Just saying. :-) >>>>>> >>>>>> >>>>>>> On Jan 25, 2014, at 6:58 AM, Thomas Trappenberg wrote: >>>>>>> >>>>>>> James, enjoyed your writing. >>>>>>> So, what to do? We are trying to get organized in Canada and are thinking about how we fit in with your (US) and the European approaches and big money. My thought is that our advantage might be flexibility, by not having a single theme but rather a general supporting structure for theory and theory-experiment interactions. I believe the ultimate place where we want to be is to take theoretical proposals more seriously and try to design specific experiments for them, like the Higgs project. (Any other suggestions? Canadians, see http://www.neuroinfocomp.ca if you are not already on there.) >>>>>>> Also, with regard to big data, I believe that one very fascinating thing about the brain is that it can function with 'small data'. >>>>>>> Cheers, Thomas >>>>>>> >>>>>>>> On 2014-01-25 12:09 AM, "james bower" wrote: >>>>>>>> Ivan thanks for the response, >>>>>>>> >>>>>>>> Actually, the talks at the recent Neuroscience meeting about the Brain Project either excluded modeling altogether - or declared that we in the US could leave it to the Europeans.
I am not in the least bit nationalistic - but collecting data without having models (rather than imaginings) to indicate what to collect is simply foolish, with many examples from history to demonstrate the foolishness. In fact, one of the primary proponents (and likely beneficiaries) of this Brain Project, who gave the big talk at Neuroscience on the project (showing lots of pretty pictures), started his talk by asking: "What have we really learned since Cajal, except that there are also inhibitory neurons?" Shocking, not least because Cajal did, in fact, suggest that there might be inhibitory neurons. To quote: "Stupid is as stupid does." >>>>>>>> >>>>>>>> Forbes magazine estimated that finding the Higgs boson cost over $13B, conservatively. The Higgs experiment was absolutely the opposite of a Big Data experiment - in fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment is all the more remarkable because it had the nearly unified support of the high-energy physics community - not that there weren't and aren't skeptics, but still, it is remarkable that the large majority could agree on the undertaking and effort. The reason is, of course, that there was a theory - one that dealt with particulars and details, not generalities. In contrast, there is a GREAT DEAL of skepticism (me included) about the Brain Project - its politics and its effects (or lack thereof) - within neuroscience. (Of course, many people are burying their concerns in favor of tin cups - hoping.) Neuroscience has had genome envy forever - the connectome is its response. Who says it's all in the connections? (sorry, "connectionists") Where is the theory? Hebb? You should read Hebb if you haven't - a rather remarkable treatise, but very far from a theory.
>>>>>>>> >>>>>>>> If you want an honest answer to your question - I have not seen any good evidence so far that the approach works, and I deeply suspect that the nervous system is very much NOT like any machine we have built or designed to date. I don't believe that Newton would have accomplished what he did had he not first been a remarkable experimentalist, tinkering with real things. I feel the same way about neuroscience. Having spent almost 30 years building realistic models of its cells and networks (and also doing experiments, as described in the article I linked to), we have made some small progress - but only by avoiding abstractions and paying attention to the details. Of course, most experimentalists and even most modelers have paid little or no attention. We have a sociological and structural problem that, in my opinion, only the right kind of models can fix, coupled with a real commitment to the biology - in all its complexity. And, as the model I linked to tries to make clear, we also have to all agree to start working on common "community models". But, like bighorn sheep, it is much safer to stand on your own peak and make a lot of noise. >>>>>>>> >>>>>>>> You can predict with great accuracy the movement of the planets in the sky using circles linked to other circles - nice, easy math and a very adaptable model (just add more circles when you need more accuracy, and invent entities like equant points, etc.). The problem is that, without getting into the nasty math and reality of ellipses, you can't possibly know anything about gravity, or the origins of the solar system, or its various and eventual perturbations. >>>>>>>> >>>>>>>> As I have been saying for 30 years: Beware Ptolemy and curve fitting. >>>>>>>> >>>>>>>> The details of reality matter.
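[The epicycle point can be made concrete. The sketch below - an editorial illustration, not part of the original thread - approximates an elliptical path as a sum of rotating circles, i.e. a Fourier series in the complex plane: one circle fits poorly, two fit this particular path essentially exactly, and in general adding circles always improves the fit while saying nothing about gravity.]

```python
# Illustrative sketch: Ptolemy-style curve fitting. A closed path in the
# plane, viewed as a complex function of time, can be approximated by a
# sum of uniformly rotating circles (epicycles) - a Fourier series.
import cmath
import math

def ellipse(t, a=2.0, b=1.0):
    """Position on a centered 2:1 ellipse, as a point in the complex plane."""
    return complex(a * math.cos(t), b * math.sin(t))

def fourier_coeff(f, k, n=1024):
    """Numerical Fourier coefficient: the radius (a complex number encoding
    size and phase) of the circle that rotates k times per period."""
    return sum(f(2 * math.pi * j / n) * cmath.exp(-1j * k * 2 * math.pi * j / n)
               for j in range(n)) / n

def epicycle_sum(t, coeffs):
    """Evaluate the sum of rotating circles at time t."""
    return sum(c * cmath.exp(1j * k * t) for k, c in coeffs.items())

def max_error(coeffs, n=360):
    """Worst-case distance between the true path and the epicycle fit."""
    return max(abs(ellipse(2 * math.pi * j / n) - epicycle_sum(2 * math.pi * j / n, coeffs))
               for j in range(n))

one_circle = {1: fourier_coeff(ellipse, 1)}
two_circles = {k: fourier_coeff(ellipse, k) for k in (1, -1)}
print(max_error(one_circle))   # one circle: visibly wrong everywhere
print(max_error(two_circles))  # add a second, counter-rotating circle: (near-)exact
```

The fit is purely descriptive: the "model" predicts positions as accurately as you like, yet contains nothing from which Kepler's laws or Newtonian gravity could ever be recovered - which is the warning.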
>>>>>>>> >>>>>>>> Jim >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> On Jan 24, 2014, at 7:02 PM, Ivan Raikov wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling approaches. While it is true that complete and detailed understanding of neurophysiology and anatomy is at the heart of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical physics, and a lot can be learned about information representation and transmission in the brain using mathematical theories about distributed communicating processes. As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they can also be used to understand at least some of the fundamental properties of brain structures and processes? >>>>>>>>> >>>>>>>>> -Ivan Raikov >>>>>>>>> >>>>>>>>>> On Sat, Jan 25, 2014 at 8:31 AM, james bower wrote: >>>>>>>>>> [snip] >>>>>>>>>> An enormous amount of engineering and neuroscience continues to assume that the feedforward pathway runs from the sensors to the inside - rather than seeing this as the actual feedback loop. To some this might sound like a semantic quibble, but I assure you it is not. >>>>>>>>>> >>>>>>>>>> If you believe, as I do, that the brain solves very hard problems in very sophisticated ways, that this involves, in some sense, the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point. >>>>>>>>>> >>>>>>>>>> What that means, inevitably, in my view, is that the only way we will ever understand what "brain-like" is, is to pay tremendous attention, experimentally and in our models, to the actual detailed anatomy and physiology of the brain's circuits and cells. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Dr. James M. Bower Ph.D.
>>>>>>>> Professor of Computational Neurobiology
>>>>>>>> Barshop Institute for Longevity and Aging Studies
>>>>>>>> 15355 Lambda Drive
>>>>>>>> University of Texas Health Science Center
>>>>>>>> San Antonio, Texas 78245
>>>>>>>>
>>>>>>>> Phone: 210 382 0553
>>>>>>>> Email: bower at uthscsa.edu
>>>>>>>> Web: http://www.bower-lab.org
>>>>>>>> twitter: superid101
>>>>>>>> linkedin: Jim Bower
>>>>>>>>
>>>>>>>> CONFIDENTIAL NOTICE:
>>>>>>>> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately.
>>>>>
>>>>> --
>>>>> Brad Wyble
>>>>> Assistant Professor
>>>>> Psychology Department
>>>>> Penn State University
>>>>>
>>>>> http://wyblelab.com

From pelillo at dsi.unive.it Mon Jan 27 14:51:32 2014
From: pelillo at dsi.unive.it (Marcello Pelillo)
Date: Mon, 27 Jan 2014 20:51:32 +0100 (CET)
Subject: Connectionists: Brain-like computing fanfare and big data fanfare

> One of the students asked what you should do if your data or someone else's data doesn't support your story. I told them, bury or smooth your own data, and do your best to make sure that the other guy's data isn't published and their grant isn't renewed. Also, for sure exclude that person as a reviewer on your next papers - and NEVER reference their work in your paper.
> One of the students then said, that doesn't sound much like science
> I said: I wasn't asked to tell you how to do science, I was asked to tell you how to have a successful scientific career.

Well, this does indeed sound like "science" ...

Let me use two of my favorite quotes:

"It is remarkable how far people may be carried in the study of a science, even when an hypothesis turns everything upside-down"
H. Butterfield, The Origins of Modern Science, 1957.

(and then he mentions what happened to phlogiston theory, which manifestly contradicted the fact of the augmented weight of bodies after combustion or calcination, but similar examples can be found in more modern theories...).

The "philosophical" grounds for this apparently irrational behavior (sorry, back to philosophy...) can be found in the well-known Duhem-Quine thesis:

"In sum, the physicist can never subject an isolated hypothesis to experimental test, but only a whole group of hypotheses; when the experiment is in disagreement with his predictions, what he learns is that at least one of the hypotheses constituting this group is unacceptable and ought to be modified; but the experiment does not designate which one should be changed."
P. Duhem, The Aim and Structure of Physical Theory, 1914.

Which basically says that if your model doesn't fit the data you can always blame something else...

Marcello

---
Marcello Pelillo, FIEEE, FIAPR
Professor of Computer Science
Computer Vision and Pattern Recognition Lab, Director
Center for Knowledge, Interaction and Intelligent Systems (KIIS), Director
DAIS Ca' Foscari University, Venice
Via Torino 155, 30172 Venezia Mestre, Italy
Tel: (39) 041 2348.440
Fax: (39) 041 2348.419
E-mail: marcello.pelillo at gmail.com
URL: http://www.dsi.unive.it/~pelillo

On Mon, 27 Jan 2014, james bower wrote:

> Good points Carson
> However, I think one problem with this conversation is that it is (ironically, given the number of engineers here) focused on outcomes rather than technology.
> 25 years ago, we introduced a modeling system (GENESIS) specifically with the intent to start to build an infrastructure (as a community - i.e. one of the first open source modeling platforms out there), so that people could share models, leading to community models, and start to use the models (rather than story telling) as a way to communicate and collaborate. (Remember the Rochester connectionist simulator - similar idea.)
>
> The paper I linked to in my first posting describes the process that led to establishment of what is, at this point in time, I would claim, the only community single cell model in computational neuroscience.
>
> Several years ago we wrote several proposals to NIH and NSF about making explicit what was always implicit and building a new form of Journal around GENESIS - and in particular GENESIS version 3 - that would base publications on models rather than stories with pictures.
>
> Looking back to Newton's time - the invention of the scientific journal, and before that the invention of the printing press as a technology, drove huge change not only in science, but also in society as a whole.
>
> However, as you say, physics is the study of simple things, where, in principle (and certainly in those days) you could publish a sufficient level of equations in that form of journal so that others knew what you were talking about and could contribute.
>
> As I said in this sequence earlier, closed form solutions, no matter how attractive, aren't likely to be able to represent the complexities of the nervous system, which instead will (and have) depended on numerical simulations.
>
> It is obviously absurd to take a digital entity (such as a model), and convert it into a journal article built around pictures to publish in an e-journal.
> Anyway, much more can be said about that - but the point is that those GENESIS grants were rejected, at least in part, because the study sections were dominated by those who believe that we shouldn't be modeling things at this level of complexity.
>
> This point of view is now fully represented in the version of the summer courses derived from the one I started with Christof Koch many years ago, in the CNS meeting I started, and in the Journal of Computational Neuroscience I started as well (as a vehicle to actually start publishing models - had the publishers and the editorial board understood why this was important).
>
> Until one week ago (and for the last 3 years), I was the co-coordinator of the Neuroscience Group of NIH's interdisciplinary multi-scale modeling initiative (IMAG). Sitting on the review committee for that program, I once again found myself defending biologically based modeling - to no avail.
>
> So, in fact, I believe the solution to this problem has to do with developing the right technology for collaboration and communication, which we now have with the Internet - combined with a commitment by funding agencies to support its development and use. Complex compartmental models have no chance competitively in a world where a theory can be boiled down and published in some form of an energy functional - that can be understood by everyone on this list.
>
> We have a communication problem related to the complexity of the right technology, combined with the lack of the use of the right technology to have larger numbers of people understand the value of the approach.
>
> Instead of all this philosophy, I would MUCH RATHER have someone on this list actually read the paper I linked on 40 years of realistically modeling the cerebellar Purkinje cell - and then not have to use analogies from well understood physics (and engineering) to discuss a question rooted in Biology.
> Let's use the Purkinje cell as a for-instance and talk about science, rather than philosophy. The model is even there for you to pick apart and understand (one reason it became the first cellular community model: we actually provided full access to it on the internet). So how about it, guys?
>
> Anyway, returning to neuroscience, the whole field is dominated by story tellers spinning yarns and myths, who have no interest in having anyone else have the tools to really check their stories (i.e. by putting them in the common language of mathematics). Sadly, they continue to be able to convince the government to fund story telling and the construction of the kinds of tools that facilitate story telling (BRAIN project and neuroimaging).
>
> I also spent 10 years as part of the original Human Brain Project at NIH - which was supposed to foster collaborating technology and didn't, for the same reason.
>
> Foolishly, someone last year asked me to speak to a bunch of graduate students and postdocs about how to have a successful career in science. Foolish, of course, because I am not sure that mine applies. Doubly foolish because, as I told them, they probably don't want graduate and postdoctoral students taking advice from me anyway.
>
> They persisted, so I did:
>
> I told them it was easy: pick a story, any story, and stick to it - and best if the story can be well understood in 1 1/2 pages or less.
>
> One of the students asked what you should do if your data or someone else's data doesn't support your story. I told them, bury or "smooth" your own data, and do your best to make sure that the other guy's data isn't published and their grant isn't renewed. Also, for sure exclude that person as a reviewer on your next papers - and NEVER reference their work in your paper.
>
> One of the students then said, "that doesn't sound much like science"
> I said: I wasn't asked to tell you how to do science, I was asked to tell you how to have a successful scientific career.
>
> :-)
>
> Jim
>
> On Jan 27, 2014, at 11:14 AM, Carson Chow wrote:
>
> I am greatly enjoying this discussion.
>
> The original complaint in this thread seems to be that the main problem of (computational) neuroscience is that people do not build upon the work of others enough. In physics, no one reads the original works of Newton or Einstein, etc., anymore. There is a set canon of knowledge that everyone learns from classes and textbooks. Often if you actually read the original works you'll find that what the early greats actually did and believed differs from the current understanding. I think it's safe to say that computational neuroscience has not reached that level of maturity. Unfortunately, it greatly impedes progress if everyone tries to redo and reinvent what has come before.
>
> The big question is why this is the case. This is a search problem. It could be true that one of the proposed approaches in this thread or some other existing idea is optimal, but the opportunity cost to follow it is great. How do we know it is the right one? It is safer to just follow the path we already know. We simply all don't believe enough in any one idea for all of us to pursue it right now. It takes a massive commitment to learn any one thing, much less everything on John Weng's list. I don't know too many people who could be fully conversant in math, AI, cognitive science, neurobiology, and molecular biology. There are only so many John von Neumanns, Norbert Wieners or Terry Taos out there. The problem actually gets worse with more interest and funding because there will be even more people and ideas to choose from. This is a classic market failure where too many choices destroy liquidity and accurate pricing.
> My prediction is that we will continue to argue over these points until one or a small set of ideas finally wins out. But who is to say that thirty years is a long time? There were almost two millennia between Ptolemy and Kepler. However, once the correct idea took hold it was practically a blink of an eye to get from Kepler to Maxwell. However, physics is so much simpler than neuroscience. In fact, my definition of physics is the field of easily model-able things. Whether or not a similar revolution will ever take place in neuroscience remains to be seen.
>
> ----------------------
> Carson C Chow
> LBM, NIDDK, NIH
>
> On Jan 26, 2014, at 22:05, james bower wrote:
>
> Thanks Danny. Funny about coincidences.
>
> I almost posted earlier to the list a review I was asked to write of exactly the book you reference:
>
> "23 Problems in Systems Neuroscience", edited by L. Van Hemmen and T. Sejnowski.
>
> It is appended to this email -
>
> Needless to say, while I couldn't agree with you more on the importance of asking the right questions - many of the chapters in this book make clear, I believe, the fundamental underlying problem posed by having no common theoretical basis for neuroscience research.
>
> Jim Bower
>
> Published in The American Scientist
>
> Are We Ready for Hilbert?
>
> James M. Bower
>
> 23 Problems in Systems Neuroscience. Edited by J. Leo van Hemmen and Terrence J. Sejnowski. xvi + 514 pp. Oxford University Press, 2006. $79.95.
>
> 23 Problems in Systems Neuroscience grew out of a symposium held in Dresden in 2000, inspired by an address given by the great geometrist David Hilbert 100 years earlier. In his speech, Hilbert commemorated the start of the 20th century by delivering what is now regarded as one of the most influential mathematical expositions ever made.
> He outlined 23 essential problems that not only organized subsequent research in the field, but also clearly reflected Hilbert's axiomatic approach to the further development of mathematics. Anticipating his own success, he began, "Who of us would not be glad to lift the veil behind which the future lies hidden; to cast a glance at the next advances of our science and at the secrets of its development during future centuries?"
>
> I take seriously the premise represented in this new volume's title and preface that it is intended to "serve as a source of inspirations for future explorers of the brain." Unfortunately, if the contributors sought to exert a "Hilbertian" influence on the field by highlighting 23 of the most important problems in systems neuroscience, they have, in my opinion, failed. In failing, however, this book clearly illustrates fundamental differences between neuroscience (and biology in general) today and mathematics (and physics) in 1900.
>
> Implicit in Hilbert's approach is the necessity for some type of formal structure underlying the problems at hand, allowing other investigators to understand their natures and then collaboratively explore a general path to their solutions. Yet there is little consistency in the form of the problems presented in this book. Instead, many (perhaps most) of the chapters are organized, at best, around vague questions such as, "How does the cerebral cortex work?" At worst, the authors simply recount what is, in effect, a story promoting their own point of view.
>
> The very first chapter, by Gilles Laurent, is a good example of the latter. After starting with a well-worn plea for considering the results of the nonmammalian, nonvisual systems he works on, Laurent summarizes a series of experiments (many of them his own) supporting his now-well-known position regarding the importance of synchrony in neuronal coding.
> This chapter could have presented a balanced discussion of the important questions surrounding the nature of the neural code (as attempted in one chapter by David McAlpine and Alan R. Palmer and another by C. van Vreeswijk), or even referenced and discussed some of the recently published papers questioning his interpretations. Instead, the author chose to attempt to convince us of his own particular solution.
>
> I don't mean to pick on Laurent, as his chapter takes a standard form in symposia volumes; rather, his approach illustrates the general point that much of "systems neuroscience" (and neuroscience in general) revolves around this kind of storytelling. The chapter by Bruno A. Olshausen and David J. Field makes this point explicitly, suggesting that our current "story based" view of the function of the well-studied visual cortex depends on (1) a biased sampling of neurons, (2) a bias in the kind of stimuli we present and (3) a bias in the kinds of theories we like to construct.
>
> In fairness, several chapters do attempt to address real problems in a concise and unbiased way. The chapter by L. F. Abbott, for example, positing, I think correctly, that the control of the flow of information in neural systems is a central (and unsolved) problem, is characteristically clear, circumscribed and open-minded. Refreshingly, Abbott's introduction states, "In the spirit of this volume, the point of this contribution is to raise a question, not to answer it. . . . I have my prejudices, which will become obvious, but I do not want to rule out any of these as candidates, nor do I want to leave the impression that the list is complete or that the problem is in any sense solved." Given his physics background, Abbott may actually understand enough about Hilbert's contribution to have sought its spirit.
> Most chapters, however, require considerable detective work, and probably also a near-professional understanding of the field, to find anything approaching Hilbert's enumeration of fundamental research problems.
>
> In some sense I don't think the authors are completely to blame. Although many are prominent in the field, this lack of focus on more general and well defined problems is, I believe, endemic in biology as a whole. While this may slowly be changing, the question of how, and even if, biology can move from a fundamentally descriptive, story-based science to one from which Hilbertian-style problems can be extracted may be THE problem in systems neuroscience. A few chapters do briefly raise this issue. For example, in their enjoyable article on synesthesia, V. S. Ramachandran and Edward M. Hubbard identify their approach as not fashionable in psychology partly because of "the lingering pernicious effect of behaviorism" and partly because "psychologists like to ape mature quantitative physics - even if the time isn't ripe."
>
> Laurenz Wiskott, in his chapter on possible mechanisms for size and shift invariance in visual (and perhaps other) cortices, raises what may be the more fundamental question as to whether biology is even amenable to the form of quantification and explanation that has been so successful in physics:
>
> "Either the brain solves all invariance problems in a similar way based on a few basic principles or it solves each invariance problem in a specific way that is different from all others. In the former case [asking] the more general question would be appropriate. . . . In the latter case, that is, if all invariance problems have their specific solution, the more general question would indeed be a set of questions and as such not appropriate to be raised and discussed here."
> He then moderates the dichotomy by stating diplomatically, "There is, of course, a third and most likely alternative, and that is that the truth lies somewhere between these two extremes." Thus, Wiskott leaves unanswered the fundamental question about the generality of brain mechanisms or computational algorithms. As in mathematics 100 years ago, answering basic questions in systems neuroscience is tied up in assumptions regarding appropriate methodology. For Hilbert's colleagues, this was obvious and constituted much of the debate following his address; this fundamental issue, however, is only rarely discussed in biology.
>
> Indeed, I want to be careful not to give the impression that these kinds of big-picture issues are given prominence in this volume - they are not. Rather, as is typical for books generated by these kinds of symposia, many of the chapters are simply filled with the particular details of a particular subject, although several authors should be commended for at least discussing their favorite systems in several species. However, given the lack of overall coordination, one wonders what impact this volume will have.
>
> One way to gauge the answer is to look for evidence that the meeting presentations influenced the other participants. As an exercise, I summarized the major points and concerns each author raised in their chapters and then checked that list against the assumptions and assertions made by the other authors writing on similar subjects. The resulting tally, I would assert, provides very little evidence that these authors attended the same meeting - or perhaps even that they are part of the same field!
>
> For example, the article titled "What Is Fed Back?" by Jean Bullier identifies, I think correctly, what will become a major shift in thinking about how brains are organized.
> As Bullier notes, there is growing evidence that the internal state of the brain has a much more profound effect on the way the brain processes sensory information than previously suspected. Yet this fundamental issue is scarcely mentioned in the other chapters, quite a few of which are firmly based on the old feed-forward "behaviorist" model of brain function. Similarly, the chapter by Olshausen and Field is followed immediately by a paper by Steven W. Zucker on visual processing that depends on many of the assumptions that Olshausen and Field call into question.
>
> One hundred years ago, Hilbert's 23 questions organized a field. The chapters in this book make pretty clear that we are still very far away from having a modern-day Hilbert, or even a committee of "experts", come up with a list of 23 fundamental questions that are accepted, or perhaps even understood, by the field of neuroscience as a whole.
>
> Asking good questions that come with well developed requirements is the starting point of good science. At least that is what we tell our graduate students.
>
> .. Danny
>
> =======================
> Daniel L. Silver, Ph.D.    danny.silver at acadiau.ca
> Professor, Jodrey School of Computer Science, Acadia University
> Office 314, Carnegie Hall, Wolfville, NS Canada B4P 2R6
> p: 902-585-1413    f: 902-585-1067
>
> From: Geoffrey Hinton
> Date: Sunday, 26 January, 2014 3:43 PM
> To: Brad Wyble
> Cc: Connectionists list
> Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare
>
> I can no longer resist making one point.
>
> A lot of the discussion is about telling other people what they should NOT be doing. I think people should just get on and do whatever they think might work. Obviously they will focus on approaches that make use of their particular skills. We won't know until afterwards which approaches led to major progress and which were dead ends. Maybe a fruitful approach is to
> model every connection in a piece of retina in order to distinguish between detailed theories of how cells get to be direction selective. Maybe it's building huge and very artificial neural nets that are much better than other approaches at some difficult task. Probably it's both of these and many others too. The way to really slow down the expected rate of progress in understanding how the brain works is to insist that there is one right approach and that nearly all the money should go to that approach.
>
> Geoff
>
> On Sat, Jan 25, 2014 at 3:00 PM, Brad Wyble wrote:
>
> I am extremely pleased to see such vibrant discussion here, and my thanks to Juyang for getting the ball rolling.
>
> Jim, I appreciate your comments and I agree in large measure, but I have always disagreed with you as regards the necessity of simulating everything down to a lowest common denominator. Like you, I enjoy drawing lessons from the history of other disciplines, but unlike you, I don't think the analogy between neuroscience and physics is all that clear cut. The two fields deal with vastly different levels of complexity and therefore I don't think it should be expected that they will (or should) follow the same trajectory.
>
> To take your Purkinje cell example, I imagine that there are those who view any such model that lacks an explicit simulation of the RNA as being incomplete. To such a person, your models would also be unfit for the literature. So would we then change the standards such that no model can be published unless it includes an explicit simulation of the RNA? And why stop there? Where does it end? In my opinion, we can't make effective progress in this field if everyone is bound to the molecular level.
>
> I really think that neuroscience presents a fundamental challenge that is not present in physics, which is that progress can only occur when theory is developed at different levels of abstraction that overlap with one another.
> The challenge is not how to force everyone to operate at the same level of formal specificity, but how to allow effective communication between researchers operating at different levels.
>
> In aid of meeting this challenge, I think that our field should take more inspiration from engineering, a model-based discipline that already has to work simultaneously at many different scales of complexity and abstraction.
>
> Best,
> Brad Wyble
>
> On Sat, Jan 25, 2014 at 9:59 AM, james bower wrote:
>
> Thanks for your comments Thomas, and good luck with your effort.
>
> I can't refrain myself from making the probably culturist remark that this seems a very practical approach.
>
> I have for many years suggested that those interested in advancing biology in general and neuroscience in particular to a "paradigmatic", as distinct from a descriptive / folkloric, science would benefit from understanding this transition as physics went through it in the 15th and 16th centuries. In many ways, I think that is where we are today, although with perhaps the decided disadvantage that we have a lot of physicists around who, again in my view, don't really understand the origins of their own science. By that, I mean that they don't understand how much of their current scientific structure, for example the relatively clean separation between "theorists" and "experimentalists", is dependent on the foundation built by those (like Newton) who were both in an earlier time. Once you have a solid underlying computational foundation for a science, then you have the luxury of this kind of specialization - as there is a framework that ties it all together. The Higgs effort being a very visible recent example.
>
> Neuroscience has nothing of the sort. As I point out in the article I linked to in my first posting - while it was first proposed 40 years ago (by Rodolfo Llinas) that the cerebellar Purkinje cell had active dendrites (i.e.
> that there were voltage-dependent ion channels in the dendrite, not directly associated with synapses, that governed its behavior), and 40 years of anatomically and physiologically realistic modeling has been necessary to start to understand what they do - many cerebellar modeling efforts today simply ignore these channels. While that again, to many on this list, may seem too far buried in the details, these voltage-dependent channels make the Purkinje cell the computational device that it is.
>
> Recently, I was asked to review a cerebellar modeling paper in which the authors actually acknowledged that their model lacked these channels because they would have been too computationally expensive to include. Sadly for those authors, I was asked to review the paper for the usual reason - that several of our papers were referenced accordingly. They likely won't make that mistake again - as after of course complimenting them on the fact that they were honest (and knowledgeable) enough to have remarked on the fact that their Purkinje cells weren't really Purkinje cells - I had to reject the paper for the same reason.
>
> As I said, they likely won't make that mistake again - and will very likely get away with it.
>
> Imagine a comparable situation in a field (like physics) which has established a structural base for its enterprise. "We found it computationally expedient to ignore the second law of thermodynamics in our computations - sorry." BTW, I know that details are ignored all the time in physics as one deals with descriptions at different levels of scale - although even there, the field clearly would like to have a way to link across different levels of scale. I would claim, however, that that is precisely the "trick" that biology uses to "beat" the second law - linking all levels of scale together - another reason why you can't ignore the details in biological models if you really want to understand how biology works.
> (too cryptic a comment perhaps).
>
> Anyway, my advice would be to consider how physics made this transition many years ago, and ask the question how neuroscience (and biology) can now. Key points, I think, are:
>
> - you need to produce students who are REALLY both experimental and theoretical (like Newton). (And that doesn't mean programs that "import" physicists and give them enough biology to believe they know what they are doing, or programs that link experimentalists to physicists to solve their computational problems.)
> - you need to base the efforts on models (and therefore mathematics) of sufficient complexity to capture the physical reality of the system being studied (as Kepler was forced to do to make the sun-centric model of the solar system even close to as accurate as the previous earth-centered system)
> - you need to build a new form of collaboration and communication that can support the complexity of those models. Fundamentally, we continue to use the publication system (short papers in a journal) that was invented as part of the transformation for physics way back then. Our laboratories are also largely isolated and non-cooperative, more appropriate for studying simpler things (like those in physics). Fortunately for us, we have a new communication tool (the Internet) although, as can be expected, we are mostly using it to reimplement old style communication systems (e-journals) with a few twists (supplemental materials).
> - funding agencies need to insist that anyone doing theory needs to be linked to the experimental side REALLY, and vice versa. I proposed a number of years ago to NIH that they would make it into the history books if they simply required, the following Monday, that any submitted experimental grant include a REAL theoretical and computational component - Sadly, they interpreted that as meaning that P.I.s should state "an hypothesis" - which itself is remarkable, because most of the "hypotheses"
> I see stated in Federal grants are actually statements of what the P.I. believes to be true. Don't get me started on
> human imaging studies. arggg
> - As long as we are talking about what funding agencies can do, how about the following structure for grants: all
> grants need to be submitted collaboratively by two laboratories who have different theories (better, models) about
> how a particular part of the brain works. The grant should support a set of experiments that both parties agree
> distinguish between their two points of view. All results need to be published with joint authorship. In effect that
> is how physics works - given its underlying structure.
> - You need to get rid, as quickly as possible, of the pressure to "translate" neuroscience research explicitly into
> clinical significance - we are not even close to being able to do that intentionally - and the pressure (which is
> essentially a giveaway to the pharma and bio-tech industries anyway) is forcing neurobiologists to link to what is
> arguably the least scientific form of research there is - clinical research. It just has to be the case that society
> needs to understand that an investment in basic research will eventually result in all the wonderful outcomes for
> humans we would all like, but this distortion now is killing real neuroscience just at a critical time, when we may
> finally have the tools to make the transition to a paradigmatic science.
>
> As some of you know, I have been all about trying to do these things for many years - with the GENESIS project, with
> the original CNS graduate program at Caltech, with the CNS meetings (even originally with NIPS), and with the first
> "Methods in Computational Neuroscience Course" at the Marine Biological Laboratory, whose latest incarnation in
> Brazil (LASCON) is actually wrapping up next week, and of course with my own research and students.
> Of course, I have not been alone in this, but it is remarkable how little impact all that has had on neuroscience or
> neuro-engineering. I have to say, honestly, that the strong tendency seems to be for these efforts to snap back to
> the non-realistic, non-biologically based modeling and theoretical efforts.
>
> Perhaps Canada, in its usual practical and reasonable way (sorry), can figure out how to do this right.
>
> I hope so.
>
> Jim
>
> p.s. I have also been proposing recently that we scuttle the "intro neuroscience" survey courses in our graduate
> programs (religious instruction) and instead organize an introductory course built around the history of the
> discovery of the origin of the action potential that culminated in the first (and last) Nobel prize work in
> computational neuroscience for the Hodgkin-Huxley model. The 50th anniversary of that prize was celebrated last year,
> and the year before I helped to organize a meeting celebrating the 60th anniversary of the publication of the
> original papers (which I care much more about anyway). That meeting was, I believe, the first meeting in neuroscience
> ever organized around a single (mathematical) model or theory - and in organizing it, I required all the speakers to
> show the HH model on their first slide, indicating which term or feature of the model their work was related to.
> Again, a first - but possible, as this is about the only "community model" we have.
>
> Most neuroscience textbooks today don't include that equation (second-order differential) and present the HH model
> primarily as a description of the action potential. Most theorists regard the HH model as a prime example of how
> progress can be made by ignoring the biological details. Both views and interpretations are historically and
> practically incorrect.
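[For readers who have never seen the "community model" under discussion written out, here is a minimal sketch of the
standard Hodgkin-Huxley squid-axon equations. The parameter values and rate functions are the conventional textbook
ones (Hodgkin & Huxley, 1952), and the single compartment and forward-Euler integration are simplifications of this
sketch, not anything specified in the thread.]

```python
import math

# Standard HH squid-axon parameters; units: uF/cm^2, mS/cm^2, mV, ms.
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent opening/closing rates for the three gating variables.
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_ext=10.0, t_stop=50.0, dt=0.01):
    """Integrate C dV/dt = -I_Na - I_K - I_L + I_ext with forward Euler."""
    v = -65.0
    # Initialize each gating variable at its steady state for the resting potential.
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    trace = []
    for _ in range(int(t_stop / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)   # fast sodium current
        i_k = G_K * n**4 * (v - E_K)          # delayed-rectifier potassium current
        i_l = G_L * (v - E_L)                 # passive leak
        v += dt * (-(i_na + i_k + i_l) + i_ext) / C_M
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        trace.append(v)
    return trace

trace = simulate()
print(f"peak membrane potential: {max(trace):.1f} mV")
```

[Run as-is, the sustained 10 uA/cm^2 current step should produce repetitive spiking with peaks overshooting 0 mV - the
quantitative behavior HH fit to their squid-axon voltage-clamp data.]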
> In my opinion, if you can't handle the math in the HH model, you shouldn't be a neurobiologist, and if you don't
> understand the profound impact of HH's knowledge and experimental study of the squid giant axon on the model, you
> shouldn't be a neuro-theorist either. Just saying. :-)
>
> On Jan 25, 2014, at 6:58 AM, Thomas Trappenberg wrote:
>
> James, enjoyed your writing.
>
> So, what to do? We are trying to get organized in Canada and are thinking how we fit in with your (US) and the
> European approaches and big money. My thought is that our advantage might be flexibility by not having a single theme
> but rather a general supporting structure for theory and theory-experimental interactions. I believe the ultimate
> place where we want to be is to take theoretical proposals more seriously and try to make specific experiments for
> them; like the Higgs project. (Any other suggestions? Canadians, see http://www.neuroinfocomp.ca if you are not
> already on there.)
>
> Also, with regards to big data, I believe that one very fascinating thing about the brain is that it can function
> with 'small data'.
>
> Cheers, Thomas
>
> On 2014-01-25 12:09 AM, "james bower" wrote:
>
> Ivan, thanks for the response.
>
> Actually, the talks at the recent Neuroscience Meeting about the Brain Project either excluded modeling altogether -
> or declared we in the US could leave it to the Europeans. I am not in the least bit nationalistic - but collecting
> data without having models (rather than imaginings) to indicate what to collect is simply foolish, with many examples
> from history to demonstrate the foolishness. In fact, one of the primary proponents (and likely beneficiaries) of
> this Brain Project, who gave the big talk at Neuroscience on the project (showing lots of pretty pictures), started
> his talk by asking: "what have we really learned since Cajal, except that there are also inhibitory neurons?"
> Shocking, not only because Cajal actually suggested that there might be inhibitory neurons, in fact. To quote:
> "Stupid is as stupid does."
>
> Forbes magazine estimated that finding the Higgs boson cost over $13BB, conservatively. The Higgs experiment was
> absolutely the opposite of a Big Data experiment - in fact, can you imagine the amount of money and time that would
> have been required if one had simply decided to collect all data at all possible energy levels? The Higgs experiment
> is all the more remarkable because it had the nearly unified support of the high energy physics community; not that
> there weren't and aren't skeptics, but still, remarkable that the large majority could agree on the undertaking and
> effort. The reason is, of course, that there was a theory - that dealt with the particulars and the details - not
> generalities. In contrast, there is a GREAT DEAL of skepticism (me included) about the Brain Project - its politics
> and its effects (or lack thereof) - within neuroscience. (Of course, many people are burying their concerns in favor
> of tin cups - hoping.) Neuroscience has had genome envy forever - the connectome is their response - who says it's
> all in the connections? (sorry, "connectionists") Where is the theory? Hebb? You should read Hebb if you haven't -
> rather remarkable treatise. But very far from a theory.
>
> If you want an honest answer to your question - I have not seen any good evidence so far that the approach works, and
> I deeply suspect that the nervous system is very much NOT like any machine we have built or designed to date. I don't
> believe that Newton would have accomplished what he did had he not, first, been a remarkable experimentalist,
> tinkering with real things. I feel the same way about Neuroscience.
> Having spent almost 30 years building realistic models of its cells and networks (and also doing experiments, as
> described in the article I linked to), we have made some small progress - but only by avoiding abstractions and
> paying attention to the details. Of course, most experimentalists and even most modelers have paid little or no
> attention. We have a sociological and structural problem that, in my opinion, only the right kind of models can fix,
> coupled with a real commitment to the biology - in all its complexity. And, as the model I linked tries to make
> clear, we also have to all agree to start working on common "community models". But like bighorn sheep, much safer to
> stand on your own peak and make a lot of noise.
>
> You can predict with great accuracy the movement of the planets in the sky using circles linked to other circles -
> nice and easy math, and a very adaptable model (just add more circles when you need more accuracy, and invent
> entities like equant points, etc.). Problem is, without getting into the nasty math and reality of ellipses, you
> can't possibly know anything about gravity, or the origins of the solar system, or its various and eventual
> perturbations.
>
> As I have been saying for 30 years: Beware Ptolemy and curve fitting.
>
> The details of reality matter.
>
> Jim
>
> On Jan 24, 2014, at 7:02 PM, Ivan Raikov wrote:
>
> I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling
> approaches. While it is true that complete and detailed understanding of neurophysiology and anatomy is at the heart
> of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical
> physics, and a lot can be learned about information representation and transmission in the brain using mathematical
> theories about distributed communicating processes.
> As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they
> can also be used to understand at least some of the fundamental properties of brain structures and processes?
>
> -Ivan Raikov
>
> On Sat, Jan 25, 2014 at 8:31 AM, james bower wrote:
> [snip]
>
> An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the
> sensors to the inside - rather than seeing this as the actual feedback loop. Might to some sound like a semantic
> quibble, but I assure you it is not.
>
> If you believe as I do that the brain solves very hard problems, in very sophisticated ways, that involve, in some
> sense, the construction of complex models about the world and how it operates in the world, and that those models are
> manifest in the complex architecture of the brain - then simplified solutions are missing the point.
>
> What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay
> tremendous attention experimentally and in our models to the actual detailed anatomy and physiology of the brain's
> circuits and cells.
>
> Dr. James M. Bower Ph.D.
> Professor of Computational Neurobiology
> Barshop Institute for Longevity and Aging Studies
> 15355 Lambda Drive
> University of Texas Health Science Center
> San Antonio, Texas 78245
> Phone: 210 382 0553
> Email: bower at uthscsa.edu
> Web: http://www.bower-lab.org
> twitter: superid101
> linkedin: Jim Bower
>
> CONFIDENTIAL NOTICE:
> The contents of this email and any attachments to it may be privileged or contain privileged and confidential
> information. This information is only for the viewing or use of the intended recipient.
> If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any
> disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information
> contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and
> all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either
> case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making
> any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error,
> please notify the sender by e-mail immediately.
> --
> Brad Wyble
> Assistant Professor
> Psychology Department
> Penn State University
> http://wyblelab.com
From collins at phys.psu.edu Mon Jan 27 16:20:11 2014
From: collins at phys.psu.edu (John Collins)
Date: Mon, 27 Jan 2014 16:20:11 -0500
Subject: Connectionists: Brain-like computing fanfare and big data fanfare
In-Reply-To:
References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1ADA4840-8CE9-4BD5-B17F-ECA5DFB7171E@mail.nih.gov> <7748E2B7-6E44-4CB6-BD97-8D7CB9507CA6@uthscsa.edu>
Message-ID: <52E6CD8B.7040405@phys.psu.edu>

On 01/27/2014 02:51 PM, Marcello Pelillo wrote:
> The "philosophical" grounds for this apparently irrational behavior
> (sorry, back to philosophy...)
> can be found in the well-known Duhem-Quine thesis:
>
> "In sum, the physicist can never subject an isolated hypothesis to experimental test, but only a whole group of
> hypotheses; when the experiment is in disagreement with his predictions, what he learns is that at least one of the
> hypotheses constituting this group is unacceptable and ought to be modified; but the experiment does not designate
> which one should be changed."
>
> P. Duhem, The Aim and Structure of Physical Theory, 1914.

That's not the whole story. For modern physics, a common happening is that when theory and experiment disagree, it is
the experiment that is wrong, at least if the theory is well established. (Faster-than-light neutrinos are only one
example.)

John Collins

From bower at uthscsa.edu Mon Jan 27 17:57:55 2014
From: bower at uthscsa.edu (james bower)
Date: Mon, 27 Jan 2014 16:57:55 -0600
Subject: Connectionists: Brain-like computing fanfare and big data fanfare
In-Reply-To:
References:
Message-ID: <9A7A48A1-6328-46AD-A274-BE6E29F32ADB@uthscsa.edu>

Tsvi,

Nice list, and I think a productive approach - the nature of the questions is obviously important. It's another long
story, but might I ask that you, and the community, consider how the answer to each of these questions might change
if, for example, the nervous system already "knows" in advance what it is looking for, and "learning", in the usual
sense, is not involved.

In our own work, to my great surprise, evidence has emerged from both detailed physiological models of the olfactory
cortex and also from studies of the classification systems that humans use to describe odors, that the olfactory
system might already know a great deal in advance about the metabolic structure of the organic world it is trying to
detect, and that that prior knowledge plays a key role in recognition.
These results, first obtained 15 years ago, were never published except in thesis form, in large part because they
were so antithetical to current thinking about how the olfactory system worked in particular, and learning worked in
general, that we decided it wasn't worth the effort. A previous paper showing that olfactory receptive fields changed
in the olfactory bulb, using one of the first-ever awake behaving multi-single-unit recording procedures, took 5 years
to get published. However, the metabolic hypothesis (as we have called it) was recently a subject of a meeting in
Germany, for which I append at the end of this comment a brief description of the idea.

You might find it interesting that these two lines of evidence pointing in the same direction were obtained completely
independently, and that the modeling result in particular was completely unexpected. What we set out to do, in what
was and I think still is the most detailed biological model of certainly olfactory cortex ever made, was to duplicate
the pattern of current source densities found during (natural) cortical oscillations. (BTW: the first version of this
model 25 years ago showed that these oscillations are not "driven" from anywhere, but in fact are an intrinsic
property of the network itself.) It turned out that the only way the experimental data could be reconstructed was if
there were independent subnetworks in the cortex.

For 25 previous years, I had assumed that the olfactory cortex was some kind of associative learning network
(influence of the NN community actually), based on its apparently highly diffuse and topographically unorganized set
of intrinsic excitatory connections. Turns out, the model predicted that this apparent diffuseness may be concealing
what is actually a highly organized network structure - but not of the usual topographic type. I suspect, although we
don't know, that these subnets reflect the structure of the metabolic world.
So, after 25 years, I was forced by the modeling work to completely change how I was thinking about how the system
worked. This is the value and power of this type of modeling: to fundamentally change what you think about how
something works.

Perhaps it doesn't need to be pointed out that this is also obviously the kind of result that could support and drive
the kind of experimental effort that the Brain project is intent on undertaking, except that instead of blind data
collection, the data collection is organized in the context of a particular hypothesis. My guess is (and we could
probably use the model to test this) that finding these subnetworks with blind data collection would be much more
difficult or perhaps even impossible.

Anyway, a good list of questions, but as with any list of questions, they make assumptions about how the system works.
A question like: "what do we have to assume about the intrinsic connectivity of olfactory cortex to duplicate the
pattern of current source density distributions following electrical shock of the lateral olfactory tract in a
detailed biological model of the olfactory cortex?" makes many fewer functional assumptions. But, in this case (and in
most cases of this type of modeling we have done), what falls out is something we didn't know was there, with, it
would seem, significant potential functional significance.

To return again to Newton - while he clearly was interested in why the moon remained in a circular orbit around the
earth, he had no idea that the apparent force between them had a regular relationship to the distance until he first
invented (or stole, depending) the calculus and actually saw the relationship. It had nothing to do with the
inspiration of an apple falling in his sister's orchard. That was a story that he apparently made up subsequently to
impress others with his insight and genius. :-)

Jim

Metabolic hypothesis - summary meeting report
The Structure of Olfactory Space
Hannover, Germany, September 2013
Question: Is the olfactory system a chemical classifier, or a detector of natural biological chemical processes?

In the first century BC, the Roman poet and philosopher Lucretius speculated about olfaction: "Thus simple 'tis to see
that whatsoever can touch the senses pleasingly are made of smooth and rounded elements, whilst those which seem the
bitter and the sharp, are held entwined by elements more crook'd."

This intuition, that the olfactory system generates olfactory percepts by interpreting the general chemical structure
of odorant molecules, continues to underlie much olfactory research. Practically, it is manifest in the continued
reliance on monomolecular odorant stimuli, most often presented as chemical families (alcohols and aldehydes) varying
along a single chemical metric (e.g. carbon chain length). The results, at multiple levels of scale from single
receptor neurons to networks, typically show individual elements responding to a large and complex range of compounds,
leading in turn to the suggestion that the olfactory system uses a distributed combinatorial code to learn to
recognize objects. Perceptually, however, compounds with highly different chemical structures can elicit similar
odors, while small changes in chemical structure can render a highly odorous compound completely odorless. For these
and other reasons, traditional approaches to classifying the perception of odorant molecules based on their physical
structure continue to have minimal predictive value.

We believe, as an alternative, it is worth considering whether the olfactory system may not be a chemical classifier
in the traditional sense, but instead has evolved to detect known chemical patterns reflecting biologically important
signals in nature. In this view, "odor perceptual space"
is predicted to be organized around the chemical structure of the organic world including, for example, the chemical
signature of specific metabolic pathways (from traditional food sources), chemical patterns generated by one species
to specifically attract other species (allomones, kairomones, or even compounds given off by fruit to signal
ripeness), or stimuli signaling the interactions of "consortia of organisms" (microbial digestion of plant or animal
tissue).

What we are proposing as the "Metabolomics Hypothesis" makes several specific predictions:
- The core prediction is that the olfactory system will be organized around biologically significant mixtures of
molecules, in effect seeking evidence for the presence of particular chemical interactions within the environment.
- This structure may be apparent as early as single olfactory receptor proteins, which could, for example, bind
odorants that are metabolically related, even if structurally dissimilar.
- As a special case, molecules employed as signals between species (allomones, kairomones, or molecules signaling the
ripeness of fruit, for example) might induce responses in a broad number of receptors.
- Receptor neuron projections to the olfactory bulb, as well as bulbar projections to the olfactory cortex, may be
more ordered than previously assumed, reflecting this structure.
- This hypothesis further predicts that metabolic relatedness is more likely to predict perception and perceptual
interactions (cross-adaptation, for example) than would either simple structural similarity or chemical class.
- Finally, and perhaps most importantly, we would predict that this structural knowledge of the chemical world may be
"built into" the olfactory system at the outset, providing a non-learned basis for olfactory perception. Such an
existing structure would relegate "learning" to changes in aversion/preference (hedonic) scale based on individual
experience.

While preliminary evidence exists for each of these predictions (c.f.
Chee, 2003; Vanier, 2001), further experimental work is necessary to test this new hypothesis. That work will depend,
however, on the use of panels or mixtures of odorants with known behavioral significance.

On Jan 27, 2014, at 12:31 PM, Tsvi Achler wrote:

> Jim has referred twice now to a list of problems and brain-like phenomena that models should strive to emulate. In
> my mind this gets to the heart of the matter. However, there was a discussion of one or two points and then it
> fizzled. The brain shows many electrophysiological but also behavioral phenomena.
>
> I would like to revive that discussion (and include not just neuroscience phenomena) in a list, to show how
> significant these issues are and the size of the gap in our knowledge, and to focus more specifically on what is
> brain-like.
>
> Let me motivate this even further. The biggest bottleneck to understanding the brain is understanding how the
> brain/neurons perform recognition. Recognition is an essential foundation upon which cognition and intelligence are
> based. Without recognition the brain cannot interact with the world. Thus a better knowledge of recognition will
> open up the brain for better understanding.
>
> Here is my humble list, and I would like to open it to discussions, opinions, suggestions, and additions.
>
> 1) Dynamics. Let's be very specific. Oscillations are observed during recognition (as Jim and others mentioned) and
> they are not satisfactorily accounted for. Since single oscillation generators have not been found, I interpret this
> to mean the oscillations are likely due to some type of feedforward-feedback connections functioning during
> recognition.
>
> 2) Difficulty with similarity. Discriminating between similar patterns takes longer and is more prone to error.
> This is not primarily a spatial search phenomenon, because it occurs in all modalities including olfaction, which
> has very poor spatial resolution. Thus it appears to be a fundamental part of the neural mechanisms of recognition.
> 3) Asymmetry. This is related to signal-to-noise-like phenomena, to which difficulty with similarity belongs.
> Asymmetry is a special case of difficulty with similarity, where a similar pattern with more information will
> predominate over one with less.
>
> 4) Biased competition (priming). Prior expectation affects recognition time and accuracy.
>
> 5) Recall-ability. The same neural recognition network that can perform recognition likely performs recall. This is
> suggested by studies where sensory region activation can be observed when recognition patterns are imagined, and by
> the existence of mirror neurons.
>
> 6) Update-ability. The brain can learn new information (online, outside the IID assumption) and immediately use it.
> It does not have to retrain on all old information (an IID requirement for feed-forward neural networks).
>
> If we do not seriously consider networks that inherently display these properties, I believe the neural network
> community will continue rehashing ideas and see limited progress.
>
> My strong yet humble opinions,
>
> -Tsvi
From bower at uthscsa.edu Mon Jan 27 17:59:30 2014
From: bower at uthscsa.edu (james bower)
Date: Mon, 27 Jan 2014 16:59:30 -0600
Subject: Connectionists: How the brain works
In-Reply-To: <74f0a77c354efc.52e69f3a@rug.nl>
References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <7620ed00356603.52e69019@rug.nl> <7620d2e13555a8.52e69056@rug.nl> <7620fa11354358.52e69092@rug.nl> <7670dc7635390e.52e690d0@rug.nl> <76f084913553ed.52e6910c@rug.nl> <74f0a77c354efc.52e69f3a@rug.nl>
Message-ID:

As long as we are tilting at windmills: the Human Brain Project suffers from a different kind of megalomania than the
BRAIN project in the US, but megalomania just the same. IMHO.

Jim

On Jan 27, 2014, at 11:02 AM, M.A.Wiering wrote:

> I very much like the part in the deep belief network of having real and imaginary data.
>
> I think there are also many opposing forces working in the brain. So maybe we can indeed make a complete complex
> system that is able to compute behavior at many levels, from physical behaviors, Cognitive Neuroscience,
> Neuroscience, to physical devices.
> > I think the Human Brain Project is one small step in this direction, but a big step > for humankind. > > Best wishes, > Marco Wiering > University of Groningen, the Netherlands > =========== > > On 27-01-14, Geoffrey Hinton wrote: >> >> Actually, evolution did invent the time-shared wheel. >> >> To go over rough ground it needs to be 6 feet in diameter with very soft suspension. >> The way to do this without being too heavy or large is to time-share two small sections of the rim, each connected to the axle by "spokes" (in compression rather than tension) that can easily change their length. The swapping in and out of the spokes is not as energy-efficient as a wheel, but it solves the problem of supplying the rim with nutrients. >> >> Geoff >> >> >> >> On Mon, Jan 27, 2014 at 9:28 AM, Balázs Kégl wrote: >> > While it is at least worth considering whether the arm-from-fin argument applies to the nervous system, because we don't understand how the brain works, we can't really answer the question of whether there is some simpler version that would have worked just as well. Accordingly, as with the radio analogy, in principle, asking whether a simpler version would work as well depends on first figuring out how the actual system works. As I have said, abstract models are less likely to be helpful there, because they don't directly address the components. >> >> Wouldn't the airplane/bird analogy work here? Does being able to design an airplane help in understanding how birds fly? I think it does. Evolution didn't invent the wheel, so it had to go a complex (and not necessarily very efficient) way to 'design' locomotion, which means that airplane engines don't really explain how birds propel themselves. On the other hand, both have wings, and controlling the flying devices looks pretty similar in the two cases. 
In the same way, if some artificial network can reproduce intelligent traits, we might be able to guide what we're looking for in the brain (a model whose necessity we agree on). Of course, the scientific process rarely works in this way, but it's because you need computers for this kind of 'experimentation', and computers are quite new. >> >> Balázs >> >> >> -- >> Balazs Kegl >> Research Scientist (DR2) >> Linear Accelerator Laboratory >> CNRS / University of Paris Sud >> http://users.web.lal.in2p3.fr/kegl >> >> >> >> >> >> Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dsontag at cs.nyu.edu Mon Jan 27 18:30:59 2014 From: dsontag at cs.nyu.edu (David Sontag) Date: Mon, 27 Jan 2014 18:30:59 -0500 Subject: Connectionists: NYU Moore-Sloan Data Science Fellows Message-ID: Hi all, Please see below for a postdoctoral research opportunity in data science at NYU. The original deadline has passed, but we will consider your application if your statement and letters of recommendation are received by next Monday (February 3rd). The positions will be highly competitive, and machine learning candidates are expected to have a strong publication record at top machine learning conferences (e.g. NIPS, UAI, ICML, and AI Stats) or journals. In your application, be sure to explicitly mention the names of one or more faculty members associated with the NYU Center for Data Science with whom you think you might collaborate. More details are below. Best, David ---- NYU Moore-Sloan Data Science Fellows http://cds.nyu.edu/opportunities/ The NYU Center for Data Science is pleased to invite applications for its inaugural class of Data Science Fellows. The positions are a prominent feature of the Moore-Sloan Data Science Environment at NYU and Data Science Fellows will be expected to work at the boundaries between the data-science methods and the sciences. These positions are part of a multi-institutional effort funded in part by a generous grant from the Moore and Sloan Foundations. As part of the multi-institutional effort, fellows will be encouraged to develop collaborations with partners at the University of California, Berkeley, and the University of Washington. Fellows are expected to lead independent, original research programs with impact in one or more scientific domains (natural science or social science) and in one or more methodological domains (computer science, statistics, and applied mathematics). 
Ideal candidates will have earned a PhD in one of these disciplines, with experience in one of the other areas (for instance, a PhD in machine learning with applications to biology, or a PhD in politics with extensive use of statistical inference). Superior candidates will bring a research agenda that can take advantage of the unique intellectual opportunities afforded by NYU, and will have experience working with researchers across different fields. Appointments will initially be for two years, with an expectation of renewal for a third year upon satisfactory performance. Fellowships offer a competitive salary and benefits, with funds to support research and travel. There is some flexibility about the start date, but September 1, 2014 is expected. Fellowship applicants should send a curriculum vitae, list of publications, and brief statement of research interests (no longer than 4 pages) to datascience-group at nyu.edu, and also arrange to have three letters of recommendation sent by January 6, 2014. The statement of research interests should mention the names of one or more faculty members associated with the NYU Center for Data Science who would have substantial intellectual overlap with the applicant's interests and likely program of research. More information about the Center can be found at http://cds.nyu.edu/. From bwyble at gmail.com Mon Jan 27 21:39:47 2014 From: bwyble at gmail.com (Brad Wyble) Date: Mon, 27 Jan 2014 21:39:47 -0500 Subject: Connectionists: Best practices in model publication Message-ID: Dear connectionists, I wanted to get some feedback regarding some recent ideas concerning the publication of models, because I think that our current practices are slowing down the progress of theory. At present, at least in many psychology journals, it is often expected that a computational modelling paper includes experimental evidence in favor of a small handful of its own predictions. 
While I am certainly in favor of model testing, I have come to suspect that the practice of including empirical validation within the same paper as the initial model is problematic for several reasons: It encourages the creation of only those predictions that are easy to test with the techniques available to the modeller. It strongly encourages a practice of running an experiment, designing a model to fit those results, and then claiming this as a bona fide prediction. It encourages a practice of running a battery of experiments and reporting only those that match the model's output. It encourages the creation of predictions which cannot fail, and which are therefore less informative. It encourages a mindset that a model is a failure if all of its predictions are not validated, when in fact we actually learn more from a failed prediction than from a successful one. It makes it easier for experimentalists to ignore models, since such modelling papers are "self contained". I was thinking that, instead of the current practice, it should be permissible and even encouraged that a modelling paper not include empirical validation, but instead include a broader array of predictions. Thus instead of 3 successfully tested predictions from the PI's own lab, a model might include 10 untested predictions for a variety of different experimental techniques. This practice will, I suspect, lead to the development of bolder theories, stronger tests, and most importantly, tighter ties between empiricists and theoreticians. I am certainly not advocating that modellers shouldn't test their own models, but rather that it should be permissible to publish a model without testing it first. The testing paper could come later. I also realize that this shift in publication expectations wouldn't prevent the problems described above, but it would at least not reward them. I also think that modellers should make a concerted effort to target empirical journals to increase the visibility of models. 
This effort should coincide with a shift in writing style to make such models more accessible to non modellers. What do people think of this? If there is broad agreement, what would be the best way to communicate this desire to journal editors? Any advice welcome! -Brad -- Brad Wyble Assistant Professor Psychology Department Penn State University http://wyblelab.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mo2259 at columbia.edu Mon Jan 27 21:48:39 2014 From: mo2259 at columbia.edu (Mark Orr) Date: Mon, 27 Jan 2014 21:48:39 -0500 Subject: Connectionists: Best practices in model publication In-Reply-To: References: Message-ID: Brad, Kathleen Carley, at CMU, has a paper on this idea (from the 1990s), suggesting the same practice. See http://www2.econ.iastate.edu/tesfatsi/EmpValid.Carley.pdf Mark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bwyble at gmail.com Mon Jan 27 23:29:52 2014 From: bwyble at gmail.com (Brad Wyble) Date: Mon, 27 Jan 2014 23:29:52 -0500 Subject: Connectionists: Best practices in model publication In-Reply-To: References: Message-ID: Thank you Mark, I hadn't seen this paper. She includes this other point that should have been in my list: "From a practical point of view, as noted the time required to build and analyze a computational model is quite substantial and validation may require teams. To delay model presentation until validation has occurred retards the development of the scientific field." ----Carley (1999) And here is a citation for this paper: Carley, Kathleen M., 1999. Validating Computational Models. CASOS Working Paper, CMU. -Brad On Mon, Jan 27, 2014 at 9:48 PM, Mark Orr wrote: > Brad, > Kathleen Carley, at CMU, has a paper on this idea (from the 1990s), > suggesting the same practice. See > http://www2.econ.iastate.edu/tesfatsi/EmpValid.Carley.pdf > > Mark -- Brad Wyble Assistant Professor Psychology Department Penn State University http://wyblelab.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.mingus at colorado.edu Mon Jan 27 23:30:53 2014 From: brian.mingus at colorado.edu (Brian J Mingus) Date: Mon, 27 Jan 2014 21:30:53 -0700 Subject: Connectionists: How the brain works In-Reply-To: <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> Message-ID: > Do you need to model quantum mechanics? No (sorry Penrose) - One of the straw men raised when talking about realistic models is always: 'at what level do you stop, quantum mechanics?'. The answer is really quite simple: you need to model biological systems at the level that evolution (selection) operates and not lower. In some sense, what all biological investigation is about is how evolution has patterned matter. Therefore, if evolution doesn't manipulate at a particular level, it is not necessary to understand how the machine works. It seems that the efficiency of photosynthesis in leaves suggests that natural selection has led to a biological system that exploits quantum computation at least once. Consciousness is also such a bag of worms that we can't rule out that qualia owes its totally non-obvious and a priori unpredicted existence to concepts derived from quantum mechanics, such as nested observers, or entanglement. 
As far as I know, my litmus test for a model is the only way to tell whether low-level quantum effects are required: if the model, which has not been exposed to a corpus containing consciousness philosophy, then goes on to independently recreate consciousness philosophy, despite the fact that it is composed of (for example) point neurons, then we can be sure that low-level quantum mechanical details are not important. Note, however, that such a model might still rely on nested observers or entanglement. I'll let a quantum physicist chime in on that - although I will note that, according to news articles I've read, we keep managing to entangle larger and larger objects - up to the size of molecules at this time, IIRC. Brian Mingus http://grey.colorado.edu/mingus On Mon, Jan 27, 2014 at 6:56 AM, james bower wrote: > A couple of points I have actually been asked (under the wire) to comment > on: > > Do you need to model quantum mechanics? No (sorry Penrose) - One of the > straw men raised when talking about realistic models is always: 'at what > level do you stop, quantum mechanics?'. The answer is really quite simple: > you need to model biological systems at the level that evolution > (selection) operates and not lower. In some sense, what all biological > investigation is about is how evolution has patterned matter. Therefore, > if evolution doesn't manipulate at a particular level, it is not necessary > to understand how the machine works. > > Thomas's original post regarding radios is actually illuminating on this > point. It starts by saying: "I guess it uses an antenna to sense an > electromagnetic wave that is then amplified so that an electromagnet can > drive a membrane to produce an airwave that can be sensed by our ear. Hope > this captures some essential aspects." This level of description actually captures the essential structure of a radio - i.e. 
the level of function of > the radio at which the radio's designers chose the components and their > properties. So, how proteins interact is important; the fact that that > interaction depends on the behavior of electrons is not. The behavior of > electrons of course constrains how a particular molecule might interact - > but evolution does not change the structure of electrons (at least as far > as I know). > > Another question that has been raised, somewhat defensively, has to do > with the other level of modeling: whether abstract models with no > relationship to the actual structure of the nervous system, by definition, > cannot capture how the brain works. There are several answers to this > question. The first and most obvious has to do with scientific process - > how would you ever know? If a model is not testable at the level of the > machine and its parts, then there is no way to know, and the model is not > useful for what I am trying to do, which is to understand the machinery. > Put another way, if a model does not help in understanding "the > engineering" (if you allow me the shorthand), then it is also not useful > in figuring out how the brain really works. > > There is another issue here as well, and that has to do with > the likelihood that the model is correct. This gets into issues of > complexity theory, a subject many members of this list know > better than I do. However, I believe that one of the insights attributed > to Kolmogorov (who has been mentioned previously) is that there is a > relationship between the complexity of a problem and its solution. If > there is a solution to a problem the brain solves that is simpler than the > brain itself, then there is some constraint that has forced the brain to be > more complex than it needs to be: examples usually given include > the component parts it has been "forced" 
to work with, for example, > or constraints imposed by the supposed sequential nature of evolutionary > processes (making an arm from a fin). > > While it is at least worth considering whether the arm-from-fin argument > applies to the nervous system, because we don't understand how the brain > works, we can't really answer the question of whether there is some simpler > version that would have worked just as well. Accordingly, as with the > radio analogy, in principle, asking whether a simpler version would work as > well depends on first figuring out how the actual system works. As I > have said, abstract models are less likely to be helpful there, because > they don't directly address the components. However, while there is now > finally some actual scientific work on this, still, most neurobiologists > and even a lot of neural network types don't seem to take into account how > expensive brains are to run and the extreme pressure that has likely put on > the brain to reach a ridiculous level of efficiency. Accordingly, I am > betting on the likelihood that the brain is not just some hacked solution, > but in fact may be an optimal solution to the problems it solves > (remember, species also "pick" (again, forgive the shorthand) the problems > they solve based on the structures they already have). So, I myself assume > that if there were a simpler physical solution, evolution would have found > it. At least, I can say with certainty that wherever it has been > possible to measure the physical "sophistication" of the nervous system, > it is operating very close to the constraints posed by physics (single-photon > detection by the eye, the ear operating just above the level of > Brownian noise, etc.). No reason, therefore, not to assume until shown > otherwise that the insides aren't also "optimized". And, as I have said, > we won't be able to even address the question, really, until we know how the > thing works. 
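[The Kolmogorov notion invoked above can be stated a bit more precisely. The following is the standard textbook formulation, added here for context rather than taken from the post: the complexity of an object is the length of the shortest program that produces it, so a solver (such as a brain) cannot, in general, be compressed below the complexity of the problems it solves.]

```latex
% Kolmogorov complexity of a string x, relative to a universal machine U:
% the length of the shortest program p that makes U output x.
K_U(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}
% Invariance theorem: for universal machines U and V there is a constant
% c_{UV}, independent of x, such that
K_U(x) \;\le\; K_V(x) + c_{UV}
```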
> > That said, even if a simpler solution is as effective as the brain, I > am interested in the brain. Engineers are interested in building other > devices, and they would likely always prefer (for obvious reasons) to be as > simple as possible. As I have said, I believe evolution is under the same > constraint, actually, but in any event the question I have chosen to try to > understand (perhaps foolishly) is how brains really work - not whether something > else could do the same thing using a simpler form. > > Finally, the above argument also relates to the issue of simple redundancy: > brains almost certainly can't afford redundancy in the way that > engineers have traditionally built in redundancy. To give credit where > credit is due, one of the lessons that I did take from neural networks is > that there are many more sophisticated ways to achieve fault tolerance than > simple redundancy. Fault tolerance (this is how I actually think about > learning, in fact) is a key requirement of brains. But it is almost certainly not > accomplished by simple redundancy. > > Jim > > > > On Jan 27, 2014, at 3:09 AM, Axel Hutt wrote: > > I fully agree with Thomas and appreciate, once more, the comparison with > physics. > > Neuroscientists want to understand how the brain works, but maybe this is > the wrong question. In physics, > researchers do not ask this question, since, honestly, who knows how > electrons REALLY move around > the atom core? Even if we knew it, this knowledge would not be necessary, since > we have quantum theory that tells us something about > probabilities in certain states, and this more abstract (almost mesoscopic) > description is sufficient to > explain large-scale phenomena. Even on a smaller scale, quantum theory > works fine, but only since physicists > DO NOT ASK what electrons do in detail, but accept the concept. On the > contrary, today's neuroscience asks > questions about all the details - what, why? 
Probably it is better to come up > with a "concept" or more abstract > model of neural coding, for sure on multiple description levels. But I > guess looking into too much > (neurophysiological) detail slows us down, and we need to ask other > questions, maybe directed more > towards experimental phenomena. > > A good example is the work of Hubel and Wiesel and the concept of columns > (I know Jim and others do not like > the concept), based on experimental data and deriving such a kind of > concept. Of course these columns are not there > 'physically' (there are no borders), but they represent more abstract > functional units which allow one to explain certain > dynamical features at the level of neural populations (e.g. in the work on > visual hallucinations). Today this concept > is largely attacked since biologists 'do not see it'. But, again going > back to physics, the trajectories of single electrons in > atoms have not been measured yet, and so the probability density of their > location has not been computed from > the single trajectories; yet the resulting concept of probability orbits > of electrons is well established today since it > works well. > > Another analogy from physics (sorry to bore you, but I find the comparison > important): do you believe that an > object changes when you look at it (quantum theory says so)? No, surely > not, since you do not experience/measure it. > But, hey, the underlying quantum theory is a good description of things. > What I want to say: in neuroscience we > need more theory based on physiological (multi-scale) experiments that > describes the found data and permits us to accept > more (apparently) abstract models, and to get rid of our dogmatic view on how > to do research. If an abstract description > explains several different phenomena well, then per se it is a good > concept (e.g. like the neural columns concept). 
> > Well, I have to go back to theoretical work, but it was very nice and > stimulating attending this discussion. > > Axel > > ------------------------------ > > Some of our discussion seems to be about 'How the brain works'. I am of > course not smart enough to answer this question. So let me try another > system. > How does a radio work? I guess it uses an antenna to sense an > electromagnetic wave that is then amplified so that an electromagnet can > drive a membrane to produce an airwave that can be sensed by our ear. Hope > this captures some essential aspects. > Now that you know, can you repair it when it doesn't work? > I believe that there can be explanations on different levels, and I think > they can be useful in different circumstances. Maybe my above explanation > is good for generally curious people, but if you want to build a super-good-sounding > radio, you need to know much more about electronics, even > quantitatively. And of course, if you want to explain how the > electromagnetic force comes about, you might need to dig down into quantum > theory. And to take my point in the other direction, even knowing all the > electronic components in a computer does not tell you how a word processor > works. > A multilayer perceptron is not the brain, but it captures some interesting > insight into how mappings between different representations can be learned > from examples. Is this how the brain works? It clearly does not explain > everything, and I am not even sure if it really captures much, if anything, of > the brain. But if we want to create smarter drugs, then we have to know how > ion channels and cell metabolism work. And if we want to help stroke > patients, we have to understand how the brain can be reorganized. We need > to work on several levels. > Terry Sejnowski told us that the new Obama initiative is like the moon > project. 
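[Thomas's multilayer-perceptron remark is easy to make concrete. Below is a minimal sketch of my own — the architecture, seed, and learning rate are arbitrary illustrative choices, not anything from the thread — of a tiny network learning the XOR mapping between input and output representations purely from examples.]

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a mapping no single-layer perceptron can learn, but an MLP can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with four units; small random initial weights.
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)      # hidden representation
    out = sigmoid(h @ W2 + b2)    # network output
    return h, out

_, out = forward(X)
loss_before = float(np.mean((out - y) ** 2))

lr = 0.5
for _ in range(5000):
    h, out = forward(X)
    # Backpropagation for squared error with sigmoid units.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
loss_after = float(np.mean((out - y) ** 2))
print(loss_before, loss_after)
```

[The interest here is exactly the point Thomas makes: the mapping is learned from examples alone, even though nothing in the code "knows" what XOR is — while nothing about the sketch is claimed to resemble actual cortical machinery.]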
When this program was initiated we had no idea how to accomplish > this, but dreams (and money) can be very motivating. > This is a nice point, but I don't understand what a connection plan would > give us. I think without knowing precisely where and how strong connections > are made, and how each connection would influence postsynaptic or glial > cells, etc., such information is useless. So why not have the goal of > finding a cure for epilepsy? > I do strongly believe we need theory in neuroscience. Only being > descriptive is not enough. BTW, theoretical physics is physics. Physics > would not be at the level where it is without theory. And of course, theory > is meaningless without experiments. I think our point on this list is that > theory must find its way into mainstream neuroscience, much more than it > currently does. I have the feeling that we are digging our own grave by > infighting and some narrow 'I know it all' mentality. Just try to publish > something which is not mainstream, even though it has solid experimental backing. > Cheers, Thomas > > > > > -- > > Dr. rer. nat. Axel Hutt, HDR > INRIA CR Nancy - Grand Est > Equipe NEUROSYS (Head) > 615, rue du Jardin Botanique > 54603 Villers-les-Nancy Cedex > France > http://www.loria.fr/~huttaxel > > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From randal.a.koene at gmail.com Mon Jan 27 23:54:08 2014 From: randal.a.koene at gmail.com (Randal Koene) Date: Mon, 27 Jan 2014 20:54:08 -0800 Subject: Connectionists: Best practices in model publication In-Reply-To: References: Message-ID: Hi Brad, This reminds me of theoretical physics, where proposed models are expounded in papers, often without the ability to immediately carry out empirical tests of all the predictions. Subsequently, experiments are often designed to compare and contrast different models. Perhaps a way to advance this is indeed to make the analogy with physics? Cheers, Randal Dr. Randal A. Koene Randal.A.Koene at gmail.com - Randal.A.Koene at carboncopies.org http://randalkoene.com - http://carboncopies.org On Mon, Jan 27, 2014 at 8:29 PM, Brad Wyble wrote: > Thank you Mark, I hadn't seen this paper. She includes this other point > that should have been in my list: > > "From a practical point of view, as noted the time required to build > and analyze a computational model is quite substantial and validation may > require teams. 
To delay model presentation until validation has occurred > retards the development of the scientific field. " ----Carley (1999) > > > And here is a citation for this paper. > Carley, Kathleen M., 1999. Validating Computational Models. CASOS Working > Paper, CMU > > -Brad > > > > > On Mon, Jan 27, 2014 at 9:48 PM, Mark Orr wrote: > >> Brad, >> Kathleen Carley, at CMU, has a paper on this idea (from the 1990s), >> suggesting the same practice. See >> http://www2.econ.iastate.edu/tesfatsi/EmpValid.Carley.pdf >> >> Mark >> >> On Jan 27, 2014, at 9:39 PM, Brad Wyble wrote: >> >> Dear connectionists, >> >> I wanted to get some feedback regarding some recent ideas concerning the >> publication of models because I think that our current practices are >> slowing down the progress of theory. At present, at least in many >> psychology journals, it is often expected that a computational modelling >> paper includes experimental evidence in favor of a small handful of its >> own predictions. While I am certainly in favor of model testing, I have >> come to the suspicion that the practice of including empirical validation >> within the same paper as the initial model is problematic for several >> reasons: >> >> It encourages the creation only of predictions that are easy to test with >> the techniques available to the modeller. >> >> It strongly encourages a practice of running an experiment, designing a >> model to fit those results, and then claiming this as a bona fide >> prediction. >> >> It encourages a practice of running a battery of experiments and >> reporting only those that match the model's output. >> >> It encourages the creation of predictions which cannot fail, and are >> therefore less informative >> >> It encourages a mindset that a model is a failure if all of its >> predictions are not validated, when in fact we actually learn more from a >> failed prediction than a successful one. 
>> >> It makes it easier for experimentalists to ignore models, since such >> modelling papers are "self contained". >> >> I was thinking that, instead of the current practice, it should be >> permissible and even encouraged that a modelling paper should not include >> empirical validation, but instead include a broader array of predictions. >> Thus instead of 3 successfully tested predictions from the PI's own lab, a >> model might include 10 untested predictions for a variety of different >> experimental techniques. This practice will, I suspect, lead to the >> development of bolder theories, stronger tests, and most importantly, >> tighter ties between empiricists and theoreticians. >> >> I am certainly not advocating that modellers shouldn't test their own >> models, but rather that it should be permissible to publish a model without >> testing it first. The testing paper could come later. >> >> I also realize that this shift in publication expectations wouldn't >> prevent the problems described above, but it would at least not reward >> them. >> >> I also think that modellers should make a concerted effort to target >> empirical journals to increase the visibility of models. This effort >> should coincide with a shift in writing style to make such models more >> accessible to non modellers. >> >> What do people think of this? If there is broad agreement, what would be >> the best way to communicate this desire to journal editors? >> >> Any advice welcome! >> >> -Brad >> >> >> >> -- >> Brad Wyble >> Assistant Professor >> Psychology Department >> Penn State University >> >> http://wyblelab.com >> >> >> > > > -- > Brad Wyble > Assistant Professor > Psychology Department > Penn State University > > http://wyblelab.com > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From levine at uta.edu Mon Jan 27 23:51:15 2014 From: levine at uta.edu (Levine, Daniel S) Date: Mon, 27 Jan 2014 22:51:15 -0600 Subject: Connectionists: Best practices in model publication In-Reply-To: References: Message-ID: <581625BB6C84AB4BBA1C969C69E269EC01C62554D4B6@MAVMAIL2.uta.edu> Brad, As a resident modeler within a psychology department (though my students run behavioral experiments) I am sensitive to this issue and heartily agree with you. As we become a mature science there will need to be an acceptance of theory having a life of its own and being a roughly equal partner with experiment, as it is in physics. There is often a long time lag from a successful simulation to setting up and successfully running an experiment to test its predictions, and that time lag shouldn't slow down the publication of the theory itself. After all, there are mountains of existing data in the literature that need to be understood in the context of a sound theory, and a published theory can suggest experimental tests to other researchers who are reading it. Best, Dan Levine ________________________________________ From: Connectionists [connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Brad Wyble [bwyble at gmail.com] Sent: Monday, January 27, 2014 8:39 PM To: Connectionists Subject: Connectionists: Best practices in model publication Dear connectionists, I wanted to get some feedback regarding some recent ideas concerning the publication of models because I think that our current practices are slowing down the progress of theory. At present, at least in many psychology journals, it is often expected that a computational modelling paper includes experimental evidence in favor of a small handful of its own predictions.
While I am certainly in favor of model testing, I have come to the suspicion that the practice of including empirical validation within the same paper as the initial model is problematic for several reasons: It encourages the creation only of predictions that are easy to test with the techniques available to the modeller. It strongly encourages a practice of running an experiment, designing a model to fit those results, and then claiming this as a bona fide prediction. It encourages a practice of running a battery of experiments and reporting only those that match the model's output. It encourages the creation of predictions which cannot fail, and are therefore less informative. It encourages a mindset that a model is a failure if all of its predictions are not validated, when in fact we actually learn more from a failed prediction than a successful one. It makes it easier for experimentalists to ignore models, since such modelling papers are "self contained". I was thinking that, instead of the current practice, it should be permissible and even encouraged that a modelling paper should not include empirical validation, but instead include a broader array of predictions. Thus instead of 3 successfully tested predictions from the PI's own lab, a model might include 10 untested predictions for a variety of different experimental techniques. This practice will, I suspect, lead to the development of bolder theories, stronger tests, and most importantly, tighter ties between empiricists and theoreticians. I am certainly not advocating that modellers shouldn't test their own models, but rather that it should be permissible to publish a model without testing it first. The testing paper could come later. I also realize that this shift in publication expectations wouldn't prevent the problems described above, but it would at least not reward them. I also think that modellers should make a concerted effort to target empirical journals to increase the visibility of models.
This effort should coincide with a shift in writing style to make such models more accessible to non modellers. What do people think of this? If there is broad agreement, what would be the best way to communicate this desire to journal editors? Any advice welcome! -Brad -- Brad Wyble Assistant Professor Psychology Department Penn State University http://wyblelab.com From ivan.g.raikov at gmail.com Tue Jan 28 03:51:48 2014 From: ivan.g.raikov at gmail.com (Ivan Raikov) Date: Tue, 28 Jan 2014 17:51:48 +0900 Subject: Connectionists: How the brain works Message-ID: My summary of the history of physics was quite wrong: the idea of infinitesimals and their application has been around since the time of Archimedes: http://www.idsia.ch/~juergen/archimedes.html http://en.wikipedia.org/wiki/Infinitesimal The moral is, it takes a while for fundamental ideas in science to promulgate :-) On Mon, Jan 27, 2014 at 3:38 PM, Ivan Raikov wrote: > > Speaking of radio and electromagnetic waves, it is perhaps the case that > neuroscience has not yet reached the maturity of 19th century physics: > while the discovery of electromagnetism is attributed to great > experimentalists such as Ampere and Faraday, and its mathematical model is > attributed to one of the greatest modelers in physics, Maxwell, none of it > happened in isolation. There was a lot of duplicated experimental work and > simultaneous independent discoveries in that time period, and Maxwell's > equations were readily accepted and quickly refined by a number of > physicists after he first postulated them. So in a sense physics had a > consensus community model of electromagnetism already in the first half of > the 19th century. Neuroscience is perhaps more akin to physics in the 17th > century, when Newton's infinitesimal calculus was rejected and even mocked > by the scientific establishment on the continent, and many years would pass > until calculus was understood and widely accepted. 
So a unifying theory of > neuroscience may not come until a lot of independent and reproducible > experimentation brings it about. > > -Ivan > > > > On Mon, Jan 27, 2014 at 1:39 PM, Thomas Trappenberg wrote: > >> Some of our discussion seems to be about 'How the brain works'. I am of >> course not smart enough to answer this question. So let me try another >> system. >> >> How does a radio work? I guess it uses an antenna to sense an >> electromagnetic wave that is then amplified so that an electromagnet can >> drive a membrane to produce an airwave that can be sensed by our ear. Hope >> this captures some essential aspects. >> >> Now that you know, can you repair it when it doesn't work? >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bwyble at gmail.com Tue Jan 28 08:25:04 2014 From: bwyble at gmail.com (Brad Wyble) Date: Tue, 28 Jan 2014 08:25:04 -0500 Subject: Connectionists: Best practices in model publication In-Reply-To: References: Message-ID: Thanks Randal, that's a great suggestion. I'll ask my colleagues in physics for their perspective as well. -Brad On Mon, Jan 27, 2014 at 11:54 PM, Randal Koene wrote: > Hi Brad, > This reminds me of theoretical physics, where proposed models are > expounded in papers, often without the ability to immediately carry out > empirical tests of all the predictions. Subsequently, experiments are often > designed to compare and contrast different models. > Perhaps a way to advance this is indeed to make the analogy with physics? > Cheers, > Randal > > Dr. Randal A. Koene > Randal.A.Koene at gmail.com - Randal.A.Koene at carboncopies.org > http://randalkoene.com - http://carboncopies.org > > > On Mon, Jan 27, 2014 at 8:29 PM, Brad Wyble wrote: > >> Thank you Mark, I hadn't seen this paper. 
>>> >>> -Brad >>> >>> >>> >>> -- >>> Brad Wyble >>> Assistant Professor >>> Psychology Department >>> Penn State University >>> >>> http://wyblelab.com >>> >>> >>> >> >> >> -- >> Brad Wyble >> Assistant Professor >> Psychology Department >> Penn State University >> >> http://wyblelab.com >> > > -- Brad Wyble Assistant Professor Psychology Department Penn State University http://wyblelab.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rloosemore at susaro.com Tue Jan 28 09:32:06 2014 From: rloosemore at susaro.com (Richard Loosemore) Date: Tue, 28 Jan 2014 09:32:06 -0500 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> Message-ID: <52E7BF66.3070000@susaro.com> On 1/27/14, 11:30 PM, Brian J Mingus wrote: > Consciousness is also such a bag of worms that we can't rule out that > qualia owes its totally non-obvious and a priori unpredicted existence > to concepts derived from quantum mechanics, such as nested observers, > or entanglement. > > As far as I know, my litmus test for a model is the only way to tell > whether low-level quantum effects are required: if the model, which > has not been exposed to a corpus containing consciousness philosophy, > then goes on to independently recreate consciousness philosophy, > despite the fact that it is composed of (for example) point neurons, > then we can be sure that low-level quantum mechanical details are not > important. > > Note, however, that such a model might still rely on nested observers > or entanglement. I'll let a quantum physicist chime in on that - > although I will note that according to news articles I've read that we > keep managing to entangle larger and larger objects - up to the size > of molecules at this time, IIRC. 
> > > Brian Mingus > http://grey.colorado.edu/mingus > Speaking as someone who is both a physicist and a cognitive scientist, AND someone who has written papers resolving that whole C-word issue, I can tell you that the quantum story isn't nearly clear enough in the minds of physicists, yet, so how it can be applied to the C question is beyond me. Frankly, it does NOT apply: saying anything about observers and entanglement does not at any point touch the kind of statements that involve talk about qualia etc. So let's let that sleeping dog lie.... (?). As for using the methods/standards of physics over here in cog sci ..... I think it best to listen to George Bernard Shaw on this one: "Never do unto others as you would they do unto you: their tastes may not be the same." Our tastes (requirements/constraints/issues) are quite different, so what happens elsewhere cannot be directly, slavishly imported. Richard Loosemore Wells College Aurora NY USA From brian.mingus at colorado.edu Tue Jan 28 10:34:50 2014 From: brian.mingus at colorado.edu (Brian J Mingus) Date: Tue, 28 Jan 2014 08:34:50 -0700 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: <52E7BF66.3070000@susaro.com> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> Message-ID: Hi Richard, > I can tell you that the quantum story isn't nearly clear enough in the minds of physicists, yet, so how it can be applied to the C question is beyond me. Frankly, it does NOT apply: saying anything about observers and entanglement does not at any point touch the kind of statements that involve talk about qualia etc. I'm not sure I see the argument you're trying to make here. If you have an outcome measure that you agree correlates with consciousness, then we have a framework for scientifically studying it.
Here's my setup: If you create a society of models and do not expose them to a corpus containing consciousness philosophy and they then, in a reasonably short amount of time, independently rewrite it, they are almost certainly conscious. This design explicitly rules out a generative model that accidentally spits out consciousness philosophy. Another approach is to accept that our brains are so similar that you and I are almost certainly both conscious, and to then perform experiments on each other and study our subjective reports. Another approach is to perform experiments on your own brain and to write first person reports about your experience. These three approaches each have tradeoffs, and each provide unique information. The first approach, in particular, might ultimately allow us to draw some of the strongest possible conclusions. For example, it allows for the scientific study of the extent to which quantum effects may or may not be relevant. I'm very interested in hearing any counterarguments as to why this general approach won't work. If it *can't* work, then I would argue that perhaps we should not create full models of ourselves, but should instead focus on upgrading ourselves. From that perspective, getting this to work is extremely important, despite however futuristic it may seem. > So let's let that sleeping dog lie.... (?). Not gonna' happen. :) Brian Mingus http://grey.colorado.edu On Tue, Jan 28, 2014 at 7:32 AM, Richard Loosemore wrote: > On 1/27/14, 11:30 PM, Brian J Mingus wrote: > >> Consciousness is also such a bag of worms that we can't rule out that >> qualia owes its totally non-obvious and a priori unpredicted existence to >> concepts derived from quantum mechanics, such as nested observers, or >> entanglement. 
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From mikkelsen.kaare at gmail.com Tue Jan 28 10:03:14 2014 From: mikkelsen.kaare at gmail.com (Kaare Mikkelsen) Date: Tue, 28 Jan 2014 16:03:14 +0100 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: <52E7BF66.3070000@susaro.com> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> Message-ID: Speaking as another physicist trying to bridge the gap between physics and neuroscience I must also say that how the most abstract ideas from quantum mechanics could meaningfully (read: scientifically) be applied to macroscopic neuroscience, given our present level of understanding of either field, is beyond me. To me, it is at the point where the connection is impossible to prove or disprove, but seems very unlikely. I do not see how valid scientific results can come in that direction, seeing as there is no theory, no reasonable path towards a theory, and absolutely no way of measuring anything. -------------------------------------------------------------------- Kaare Mikkelsen, M. Sc. Institut for Fysik og Astronomi Ny Munkegade 120 8000 Aarhus C Lok.: 1520-629 Tlf.: 87 15 56 37 -------------------------------------------------------------------- On 28 January 2014 15:32, Richard Loosemore wrote: > On 1/27/14, 11:30 PM, Brian J Mingus wrote: > >> Consciousness is also such a bag of worms that we can't rule out that >> qualia owes its totally non-obvious and a priori unpredicted existence to >> concepts derived from quantum mechanics, such as nested observers, or >> entanglement. 
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From poirazi at imbb.forth.gr Tue Jan 28 04:34:02 2014 From: poirazi at imbb.forth.gr (Yiota Poirazi) Date: Tue, 28 Jan 2014 11:34:02 +0200 Subject: Connectionists: Dendrites 2014, July 1-4, Heraklion, Crete: Call for Abstracts Message-ID: <52E7798A.9060600@imbb.forth.gr> Dendrites 2014 International Workshop on Dendrites Heraklion, Crete, Greece. July 1-4, 2014 http://dendrites2014.gr/ info at dendrites2014.gr Dendrites provide the substrate for inter-neuronal communication, and their nonlinear properties play a key role in information processing. Dendrites 2014 aims to bring together scientists from around the world to present their research on dendrites, ranging from the molecular to the anatomical and biophysical levels. With the backdrop of an informal yet spectacular setting on Crete, the meeting has been carefully planned to not only satisfy our scientific curiosity but also foster discussion and encourage interaction between attendees well beyond traditional presentations. In this spirit, the workshop will also provide a soft skills day for the training of young researchers in subjects such as research design and publication/dissemination. FORMAT AND SPEAKERS The meeting consists of the Main Event (July 1-3) and the Soft Skills Day (July 4). Invited speakers for the Main Event include: Susumu Tonegawa (Nobel Laureate) Angus Silver Tiago Branco Alcino J. Silva Michael Häusser Julietta U. Frey Stefan Remy Kristen Harris For the Soft Skills Day, Kerri Smith, Nature podcast editor, is going to present on communication and dissemination of scientific results. Alcino J. Silva (UCLA) will present his recent work on developing tools for integrating and planning research in Neuroscience. CALL FOR ABSTRACTS We are soliciting abstracts for oral and poster presentations.
We welcome both experimental and theoretical contributions addressing novel findings related to dendrites that fall into one or more of the following domains: * morphological and functional characterizations, * dendritic integration and compartmentalization, * dendritic channel distribution and their functional implications, * molecular pathways and signaling networks, * RNA trafficking and local protein synthesis, * functional or structural plasticity and homeostasis, * the role of dendrites in complex processes, including learning/memory, neural computations etc. Electronic abstract submission is hosted on the Frontiers platform, where abstracts will be published as a pdf ebook. Authors wishing to give oral presentations are required to submit an extended abstract describing the nature, scope and main results of the work in more detail. All submissions will be acknowledged by e-mail. *One of the authors has to register for the main event as a presenting author. *In case an abstract is not accepted for presentation, the registration fee will be refunded. Instructions for on-line submission can be found at http://dendrites2014.gr/call/. *Important dates * * Abstract submission opens: January 21, 2014, registration of presenting author required. * Abstract submission closes: March 10, 2014. * Notification of abstract acceptance: late March 2014. * Notification of oral/poster presentation: early April 2014. ORGANIZING COMMITTEE Panayiota Poirazi, IMBB, FORTH. Benjamin Auffarth, IMBB, FORTH. Daniel Wójcik, Nencki Institute of Experimental Biology. Dendrites 2014 will be hosted by FORTH, and is being organized as part of the Marie Curie Initial Training Network NAMASEN and the ERC Starting Grant dEMORY. FOR FURTHER INFORMATION Please see the conference web site (http://dendrites2014.gr/), subscribe to our twitter (https://twitter.com/search?q=dendrites2014&f=realtime) or RSS feeds (http://dendrites2014.gr/rss_feed/news.xml), or send an email to info at dendrites2014.org.
-- Panayiota Poirazi, Ph.D. Director of Research Computational Biology Laboratory Institute of Molecular Biology and Biotechnology (IMBB) Foundation of Research and Technology-Hellas (FORTH) Vassilika Vouton P.O.Box 1385 GR 711 10 Heraklion, Crete GREECE Tel: +30 2810 391139 Fax: +30 2810 391101 Email: poirazi at imbb.forth.gr http://www.dendrites.gr http://www.imbb.forth.gr/personal_page/poirazi.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From neuroanalysis at gmail.com Tue Jan 28 04:03:50 2014 From: neuroanalysis at gmail.com (Avi Peled) Date: Tue, 28 Jan 2014 11:03:50 +0200 Subject: Connectionists: How the brain works In-Reply-To: References: Message-ID: Ivan, I agree - and even more, neuroscientific application to psychiatry is even more in its infancy - but unfortunately patients are suffering immensely and cannot wait - the attached paper tries to tackle the problem of neuroscientific psychiatry - comments are welcome Abraham On Tue, Jan 28, 2014 at 10:51 AM, Ivan Raikov wrote: > > My summary of the history of physics was quite wrong: the idea of > infinitesimals and their application has been around since the time of > Archimedes: > > http://www.idsia.ch/~juergen/archimedes.html > > http://en.wikipedia.org/wiki/Infinitesimal > > The moral is, it takes a while for fundamental ideas in science to > promulgate :-) > > On Mon, Jan 27, 2014 at 3:38 PM, Ivan Raikov wrote: > >> >> Speaking of radio and electromagnetic waves, it is perhaps the case that >> neuroscience has not yet reached the maturity of 19th century physics: >> while the discovery of electromagnetism is attributed to great >> experimentalists such as Ampere and Faraday, and its mathematical model is >> attributed to one of the greatest modelers in physics, Maxwell, none of it >> happened in isolation.
There was a lot of duplicated experimental work and >> simultaneous independent discoveries in that time period, and Maxwell's >> equations were readily accepted and quickly refined by a number of >> physicists after he first postulated them. So in a sense physics had a >> consensus community model of electromagnetism already in the first half of >> the 19th century. Neuroscience is perhaps more akin to physics in the 17th >> century, when Newton's infinitesimal calculus was rejected and even mocked >> by the scientific establishment on the continent, and many years would pass >> until calculus was understood and widely accepted. So a unifying theory of >> neuroscience may not come until a lot of independent and reproducible >> experimentation brings it about. >> >> -Ivan >> >> >> >> On Mon, Jan 27, 2014 at 1:39 PM, Thomas Trappenberg wrote: >> >>> Some of our discussion seems to be about 'How the brain works'. I am of >>> course not smart enough to answer this question. So let me try another >>> system. >>> >>> How does a radio work? I guess it uses an antenna to sense an >>> electromagnetic wave that is then amplified so that an electromagnet can >>> drive a membrane to produce an airwave that can be sensed by our ear. Hope >>> this captures some essential aspects. >>> >>> Now that you know, can you repair it when it doesn't work? >>> >>> > -- Abraham Peled M.D. - Psychiatry Chair of Dept' SM, Mental Health Center Clinical Assistant Professor 'Technion' Israel Institute of Technology Book author of 'Optimizers 2050' and 'NeuroAnalysis' Email: neuroanalysis at gmail.com Web: http://neuroanalysis.org.il/ Web www.shaar-menashe.org Phone: +972522844050 Fax: +97246334869 CONFIDENTIALITY NOTICE: Information contained in this message and any attachments is intended only for the addressee(s). 
If you believe that you have received this message in error, please notify the sender immediately by return electronic mail, and please delete it without further review, disclosure, or copying. -------------- next part -------------- A non-text attachment was scrubbed... Name: Globalopathies YMEHY7358.pdf Type: application/pdf Size: 404118 bytes Desc: not available URL: From boris.gutkin at ens.fr Tue Jan 28 10:40:00 2014 From: boris.gutkin at ens.fr (boris gutkin) Date: Tue, 28 Jan 2014 16:40:00 +0100 Subject: Connectionists: Faculty, postdoc, doctoral positions in cognitive neuroscience, theoretical neuroscience Higher School of Economics, Moscow Message-ID: <6E304202-7DB4-4DFA-B95C-9E5D658942A4@ens.fr> 1. Postdoctoral Position (3 years) Postdoctoral Positions in Cognitive Neuroscience, 30 000 - 40 000 euro (1 350 000 RUB - 1 800 000 RUB): 1. Transcranial magnetic stimulation (TMS). The aim of the project is to investigate complex spatio-temporal dynamics of electro- and magnetoencephalographic (EEG/MEG) oscillations and evoked responses during different experimental paradigms. The emphasis would be placed on studying the interplay between ongoing/pre-stimulus oscillations and the subsequent behavioral/electrophysiological responses in different perceptual, motor and cognitive tasks. In addition to EEG/MEG, Transcranial Magnetic Stimulation (TMS) is planned to be used for probing cortical excitability. The project will adapt concepts from statistical physics such as long-range temporal correlations, synchronization, and entropy in order to comprehensively describe cortical brain mechanisms. Candidates should hold a PhD degree, e.g., in neuroscience, psychology, computer science, biomedical engineering, physics, or applied mathematics. Experience with multi-channel EEG/MEG/LFP recordings and data analysis (e.g. with Matlab/Python) as well as experience with TMS is an advantage.
Group leader - Vadim Nikulin 2. Neuroeconomics. Our international group studies the brain mechanisms of decision making, social influence and persuasive communication. People are exposed to hundreds of persuasive messages per day in one form or another: from TV commercials to scientific publications. Persuasion has been a focus of extensive psychological research but has been nearly ignored by cognitive neuroscience. Understanding the neuronal mechanisms of effective persuasion might provide important insights for a realistic psychological model of persuasive communication. Group leader - Vasily Klucharev Selected Publications: - Klucharev V, Munneke MA, Smidts A, Fernandez G (2011) Downregulation of the posterior medial frontal cortex prevents social conformity. J Neurosci 31:11934-11940. - Klucharev V, Hytönen K, Rijpkema M, Smidts A, Fernández G (2009) Reinforcement learning signal predicts social conformity. Neuron 61(1):140-51. - Klucharev V, Smidts A, Fernández G (2008) Brain mechanisms of persuasion: how 'expert power' modulates memory and attitudes. Soc Cogn Affect Neurosci 3(4):353-66. 3. Neuroscience of speech and language. The postholder will conduct research in the area of cognitive neuroscience (specifically speech and language function) as well as take part in a wide range of interdisciplinary collaborations with researchers at HSE and external collaborators in Moscow, RF and abroad. The postholder's research will be centered on neural mechanisms underlying speech and language processing in the human brain. Ideal candidates will therefore have experience in neuroimaging and in language research. Previous experience in using one or more of such techniques as EEG, MEG, TMS, fMRI, eye-tracking, psychophysiological/behavioural testing is essential. Eligible candidates should hold a PhD or similar degree in a relevant discipline, including (but not limited to) psychology, neuroinformatics or neuroscience. Group leader - Yury Shtyrov 4.
Theoretical Neuroscience - computational and mathematical approaches to understanding neural function and cognition. Research interests of the group are wide-ranging, carried out in collaboration with the experimental labs at the research center and the faculty of applied mathematics. Research themes include models of social decision making, computational neuroeconomics, information processing in neurons and circuits, as well as computational approaches to drug addiction and the role of oscillations in cognition. The group is linked with the Group for Neural Theory at the Ecole Normale Supérieure in Paris, where research internships and visitorships can be made available. We are seeking highly qualified and motivated candidates with backgrounds in quantitative disciplines: applied mathematics, physics, computer science or engineering. Programming skills and the ability to carry out interdisciplinary projects are required. Ability to work with data is recommended. Candidates will be trained in model building and analysis and will be offered advanced training in neuroscience and cognitive psychology. Candidates will be expected to develop independent research projects and collaborations under the direction of the group's leading scientists. Publications in international peer-reviewed venues are expected as tangible outcomes of the research. Group leader - Boris Gutkin General Requirements: - Ph.D. in psychology, neuroscience, social psychology, neuroinformatics, psycholinguistics or a related area - Fluent English (knowledge of Russian is not required) - Ability and high motivation to conduct high-quality research publishable in reputable peer-reviewed journals and international university presses - Experience in human neuroimaging Please provide a CV, at least 2 letters of reference forwarded directly, a statement of research interest and a recent research paper.
All materials should be addressed to Anna Shestakova anna.shestakova at helsinki.fi no later than February 15, 2014, or until the position is filled. 2. PhD Student Positions PhD student positions in Cognitive Neuroscience, 18 000 - 22 000 euro (800 000 RUB - 1 000 000 RUB): 1. Transcranial magnetic stimulation lab (TMS). TMS is applied in various multidisciplinary projects: neuroscience of decision making (neuroeconomics), spatio-temporal dynamics of ongoing neuronal oscillations as predictors and descriptors of perceptual, motor and cognitive brain activity, and language studies. Group leader - Vadim Nikulin 2. Neuroeconomics. Our international group studies the brain mechanisms of decision making, social influence and persuasive communication. People are exposed to hundreds of persuasive messages per day in one form or another: from TV commercials to scientific publications. Persuasion has been a focus of extensive psychological research but has been nearly ignored by cognitive neuroscience. Understanding the neuronal mechanisms of effective persuasion might provide important insights for a realistic psychological model of persuasive communication. Group leader - Vasily Klucharev Selected Publications: - Klucharev V, Munneke MA, Smidts A, Fernandez G (2011) Downregulation of the posterior medial frontal cortex prevents social conformity. J Neurosci 31:11934-11940. - Klucharev V, Hytönen K, Rijpkema M, Smidts A, Fernández G (2009) Reinforcement learning signal predicts social conformity. Neuron 61(1):140-51. - Klucharev V, Smidts A, Fernández G (2008) Brain mechanisms of persuasion: how 'expert power' modulates memory and attitudes. Soc Cogn Affect Neurosci 3(4):353-66. 3. Neuroscience of speech and language.
The postholder will conduct research in the area of cognitive neuroscience (specifically speech and language function) as well as take part in a wide range of interdisciplinary collaborations with researchers at HSE and external collaborators in Moscow, RF and abroad. The postholder's research will be centered on neural mechanisms underlying speech and language processing in the human brain. Ideal candidates will therefore have experience in neuroimaging and in language research. Previous experience in using one or more of such techniques as EEG, MEG, TMS, fMRI, eye-tracking, psychophysiological/behavioural testing is essential. Eligible candidates should hold a PhD or similar degree in a relevant discipline, including (but not limited to) psychology, neuroinformatics or neuroscience. Group leader - Yury Shtyrov 4. Theoretical Neuroscience - computational and mathematical approaches to understanding neural function and cognition. Research interests of the group are wide-ranging, carried out in collaboration with the experimental labs at the research center and the faculty of applied mathematics. Research themes include models of social decision making, computational neuroeconomics, information processing in neurons and circuits, as well as computational approaches to drug addiction and the role of oscillations in cognition. The group is linked with the Group for Neural Theory at the Ecole Normale Supérieure in Paris, where research internships and visitorships can be made available. We are seeking highly qualified and motivated candidates with backgrounds in quantitative disciplines: applied mathematics, physics, computer science or engineering. Programming skills and the ability to carry out interdisciplinary projects are required. Ability to work with data is recommended. Candidates will be trained in model building and analysis and will be offered advanced training in neuroscience and cognitive psychology.
Candidates will be expected to develop independent research projects and collaborations under the direction of the group's leading scientists. Publications in international peer-reviewed venues are expected as tangible outcomes of the research. Group leader - Boris Gutkin General Requirements: - MS in psychology, neuroscience, social psychology, neuroinformatics, psycholinguistics or a related area - Fluent English (knowledge of Russian is not required) - Experience in neuroimaging - High motivation to conduct high-quality research publishable in reputable peer-reviewed journals and international university presses Please provide a CV, at least 2 letters of reference forwarded directly, and a statement of research interest. All materials should be addressed to Anna Shestakova anna.shestakova at helsinki.fi no later than February 15, 2014, or until the position is filled. Working Conditions: The HSE is a young, dynamic, fast-growing Russian research university providing unique research opportunities (http://hse.ru/en, http://psy.hse.ru/en/) Work Conditions: - Access to brain-navigated TMS, multichannel EEG, MEG, gaze-tracking, etc. http://psy.hse.ru/en/res-center - Internationally competitive compensation, 13% flat income tax rate and other benefits - Generous travel support and research grants provided by the university's Centre for Advanced Studies (www.cas.hse.ru) - Heavy emphasis on high-quality research From ccchow at pitt.edu Tue Jan 28 11:30:24 2014 From: ccchow at pitt.edu (Carson Chow) Date: Tue, 28 Jan 2014 11:30:24 -0500 Subject: Connectionists: Best practices in model publication In-Reply-To: References: Message-ID: <52E7DB20.2080509@pitt.edu> Hi Brad, Philip Anderson, Nobel laureate in Physics, once wrote that theory and experimental results should never be in the same paper.
His reason was to protect the experiment: if the theory turns out to be wrong (as is often the case), people often forget about the data as well. Carson On 1/28/14 8:25 AM, Brad Wyble wrote: > Thanks Randal, that's a great suggestion. I'll ask my colleagues in > physics for their perspective as well. > > -Brad > > > > > On Mon, Jan 27, 2014 at 11:54 PM, Randal Koene > > wrote: > > Hi Brad, > This reminds me of theoretical physics, where proposed models are > expounded in papers, often without the ability to immediately > carry out empirical tests of all the predictions. Subsequently, > experiments are often designed to compare and contrast different > models. > Perhaps a way to advance this is indeed to make the analogy with > physics? > Cheers, > Randal > > Dr. Randal A. Koene > Randal.A.Koene at gmail.com - > Randal.A.Koene at carboncopies.org > > http://randalkoene.com - http://carboncopies.org > > > On Mon, Jan 27, 2014 at 8:29 PM, Brad Wyble > wrote: > > Thank you Mark, I hadn't seen this paper. She includes this > other point that should have been in my list: > > "From a practical point of view, as noted the time required to > build > and analyze a computational model is quite substantial and > validation may > require teams. To delay model presentation until validation > has occurred > retards the development of the scientific field. " ----Carley > (1999) > > And here is a citation for this paper. > Carley, Kathleen M., 1999. Validating Computational Models. > CASOS Working Paper, CMU > > -Brad > > > > > On Mon, Jan 27, 2014 at 9:48 PM, Mark Orr > wrote: > > Brad, > Kathleen Carley, at CMU, has a paper on this idea (from > the 1990s), suggesting the same practice.
See > http://www2.econ.iastate.edu/tesfatsi/EmpValid.Carley.pdf > > Mark > > On Jan 27, 2014, at 9:39 PM, Brad Wyble wrote: > >> Dear connectionists, >> >> I wanted to get some feedback regarding some recent ideas >> concerning the publication of models because I think that >> our current practices are slowing down the progress of >> theory. At present, at least in many psychology >> journals, it is often expected that a computational >> modelling paper includes experimental evidence in favor >> of a small handful of its own predictions. While I am >> certainly in favor of model testing, I have come to the >> suspicion that the practice of including empirical >> validation within the same paper as the initial model is >> problematic for several reasons: >> >> It encourages the creation only of predictions that are >> easy to test with the techniques available to the modeller. >> >> It strongly encourages a practice of running an >> experiment, designing a model to fit those results, and >> then claiming this as a bona fide prediction. >> >> It encourages a practice of running a battery of >> experiments and reporting only those that match the >> model's output. >> >> It encourages the creation of predictions which cannot >> fail, and are therefore less informative >> >> It encourages a mindset that a model is a failure if all >> of its predictions are not validated, when in fact we >> actually learn more from a failed prediction than a >> successful one. >> >> It makes it easier for experimentalists to ignore models, >> since such modelling papers are "self contained". >> >> I was thinking that, instead of the current practice, it >> should be permissible and even encouraged that a >> modelling paper should not include empirical validation, >> but instead include a broader array of predictions. 
Thus >> instead of 3 successfully tested predictions from the >> PI's own lab, a model might include 10 untested >> predictions for a variety of different experimental >> techniques. This practice will, I suspect, lead to the >> development of bolder theories, stronger tests, and most >> importantly, tighter ties between empiricists and >> theoreticians. >> >> I am certainly not advocating that modellers shouldn't >> test their own models, but rather that it should be >> permissible to publish a model without testing it first. >> The testing paper could come later. >> >> I also realize that this shift in publication >> expectations wouldn't prevent the problems described >> above, but it would at least not reward them. >> >> I also think that modellers should make a concerted >> effort to target empirical journals to increase the >> visibility of models. This effort should coincide with a >> shift in writing style to make such models more >> accessible to non modellers. >> >> What do people think of this? If there is broad >> agreement, what would be the best way to communicate this >> desire to journal editors? >> >> Any advice welcome! >> >> -Brad >> >> >> >> -- >> Brad Wyble >> Assistant Professor >> Psychology Department >> Penn State University >> >> http://wyblelab.com > > > > > -- > Brad Wyble > Assistant Professor > Psychology Department > Penn State University > > http://wyblelab.com > > > > > > -- > Brad Wyble > Assistant Professor > Psychology Department > Penn State University > > http://wyblelab.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rloosemore at susaro.com Tue Jan 28 14:05:31 2014 From: rloosemore at susaro.com (Richard Loosemore) Date: Tue, 28 Jan 2014 14:05:31 -0500 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> Message-ID: <52E7FF7B.7090100@susaro.com> Brian, Everything hinges on the definition of the concept ("consciousness") under consideration. In the chapter I wrote in Wang & Goertzel's "Theoretical Foundations of Artificial General Intelligence" I pointed out (echoing Chalmers) that too much is said about C without a clear enough understanding of what is meant by it .... and then I went on to clarify what exactly could be meant by it, and thereby came to a resolution of the problem (with testable predictions). So I think the answer to the question you pose below is that: (a) Yes, in general, having an outcome measure that correlates with C ... that is good, but only with a clear and unambiguous meaning for C itself (which I don't think anyone has, so therefore it is, after all, of no value to look for outcome measures that correlate), and (b) All three of the approaches you mention are sidelined and finessed by the approach I used in the abovementioned paper, where I clarify the definition by clarifying first why we have so much difficulty defining it. In other words, there is a fourth way, and that is to explain it as ... well, I have to leave that dangling because there is too much subtlety to pack into an elevator pitch. (The title is the best I can do: "Human and Machine Consciousness as a Boundary Effect in the Concept Analysis Mechanism"). Certainly though, the weakness of all quantum mechanics 'answers' is that they are stranded on the wrong side of the explanatory gap. Richard Loosemore Reference Loosemore, R.P.W. (2012).
Human and Machine Consciousness as a Boundary Effect in the Concept Analysis Mechanism. In: P. Wang & B. Goertzel (Eds), Theoretical Foundations of Artificial General Intelligence. Atlantis Press. http://richardloosemore.com/docs/2012a_Consciousness_rpwl.pdf On 1/28/14, 10:34 AM, Brian J Mingus wrote: > Hi Richard, > > > I can tell you that the quantum story isn't nearly enough clear in > the minds of physicists, yet, so how it can be applied to the C > question is beyond me. Frankly, it does NOT apply: saying anything > about observers and entanglement does not at any point touch the kind > of statements that involve talk about qualia etc. > > I'm not sure I see the argument you're trying to make here. If you > have an outcome measure that you agree correlates with consciousness, > then we have a framework for scientifically studying it. > > Here's my setup: If you create a society of models and do not expose > them to a corpus containing consciousness philosophy and they then, in > a reasonably short amount of time, independently rewrite it, they are > almost certainly conscious. This design explicitly rules out a > generative model that accidentally spits out consciousness philosophy. > > Another approach is to accept that our brains are so similar that you > and I are almost certainly both conscious, and to then perform > experiments on each other and study our subjective reports. > > Another approach is to perform experiments on your own brain and to > write first person reports about your experience. > > These three approaches each have tradeoffs, and each provide unique > information. The first approach, in particular, might ultimately allow > us to draw some of the strongest possible conclusions. For example, it > allows for the scientific study of the extent to which quantum effects > may or may not be relevant. > > I'm very interested in hearing any counterarguments as to why this > general approach won't work.
If it /can't/ work, then I would argue > that perhaps we should not create full models of ourselves, but should > instead focus on upgrading ourselves. From that perspective, getting > this to work is extremely important, despite however futuristic it may > seem. > > > So let's let that sleeping dog lie.... (?). > > Not gonna' happen. :) > > Brian Mingus > http://grey.colorado.edu > > On Tue, Jan 28, 2014 at 7:32 AM, Richard Loosemore > > wrote: > > On 1/27/14, 11:30 PM, Brian J Mingus wrote: > > Consciousness is also such a bag of worms that we can't rule > out that qualia owes its totally non-obvious and a priori > unpredicted existence to concepts derived from quantum > mechanics, such as nested observers, or entanglement. > > As far as I know, my litmus test for a model is the only way > to tell whether low-level quantum effects are required: if the > model, which has not been exposed to a corpus containing > consciousness philosophy, then goes on to independently > recreate consciousness philosophy, despite the fact that it is > composed of (for example) point neurons, then we can be sure > that low-level quantum mechanical details are not important. > > Note, however, that such a model might still rely on nested > observers or entanglement. I'll let a quantum physicist chime > in on that - although I will note that according to news > articles I've read that we keep managing to entangle larger > and larger objects - up to the size of molecules at this time, > IIRC. > > > Brian Mingus > http://grey.colorado.edu/mingus > > Speaking as someone is both a physicist and a cognitive scientist, > AND someone who has written papers resolving that whole C-word > issue, I can tell you that the quantum story isn't nearly enough > clear in the minds of physicists, yet, so how it can be applied to > the C question is beyond me. 
Frankly, it does NOT apply: saying > anything about observers and entanglement does not at any point > touch the kind of statements that involve talk about qualia etc. > So let's let that sleeping dog lie.... (?). > > As for using the methods/standards of physics over here in cog sci > ..... I think it best to listen to George Bernard Shaw on this > one: "Never do unto others as you would they do unto you: their > tastes may not be the same." > > Our tastes (requirements/constraints/issues) are quite different, > so what happens elsewhere cannot be directly, slavishly imported. > > > Richard Loosemore > > Wells College > Aurora NY > USA > > From randy.oreilly at colorado.edu Tue Jan 28 14:12:35 2014 From: randy.oreilly at colorado.edu (Randall O'Reilly) Date: Tue, 28 Jan 2014 12:12:35 -0700 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> Message-ID: I'm glad to hear this counterpoint to all this physics envy - I took a deep dive into the current state of theory in quantum physics a while back, and was pretty shocked at what a mess it is! Sure, it works ("shut up and calculate" is a mantra) but from a conceptual level, there are some pretty serious unresolved issues, which don't seem to be very widely appreciated in the lay press, with all those rah-rah unified theory books. The core issue is relevant for the discussion here: physics does NOT actually have anything approaching a "mechanistic" model - it is all a descriptive calculational tool. In other words, you can compute the right answers, but this is clearly not how "nature computes physics". Indeed, the notion of finding such a mechanistic model is considered naive and has long since been abandoned.
Translating this to our field: the Bayesians have won, and nobody cares about how neurons actually work! As long as you can compute the "behavioral" outcome of experiments (to high precision for sure), the underlying hardware is irrelevant. And those calculations seem a lot like the epicycles: you need to compute more and more terms in infinite sums to reach ever-closer approximations to the truth, with the seemingly arbitrary renormalization procedure added in to make sure everything converges. Do we think that nature is using the same technique? Anyway, I wrote up a critique and submitted it to a physics journal: http://arxiv.org/abs/1109.0880 Not surprisingly, the paper was not accepted, but the review did not undermine any of the major claims of the paper, and just reiterated the "standard" lines about the whole entanglement issue, denying the validity of the various papers cited raising serious questions about this. I did make some friends in the "alternative" physics community from that paper, and I am currently (very slowly) working on a "neural network" inspired model of quantum physics, described here: http://grey.colorado.edu/WELD/index.php/WELDBook/Main - in this model, everything emerges from interacting wave equations, just like we think everything in the brain emerges from interacting neurons. - Randy On Jan 28, 2014, at 8:03 AM, Kaare Mikkelsen wrote: > Speaking as another physicist trying to bridge the gap between physics and neuroscience I must also say that how the most abstract ideas from quantum mechanics could meaningfully (read: scientifically) be applied to macroscopic neuroscience, given our present level of understanding of either field, is beyond me. To me, it is at the point where the connection is impossible to prove or disprove, but seems very unlikely. I do not see how valid scientific results can come in that direction, seeing as there is no theory, no reasonable path towards a theory, and absolutely no way of measuring anything.
> > -------------------------------------------------------------------- > Kaare Mikkelsen, M. Sc. > Institut for Fysik og Astronomi > Ny Munkegade 120 > 8000 > Aarhus C > Lok.: 1520-629 > Tlf.: 87 15 56 37 > -------------------------------------------------------------------- > > > On 28 January 2014 15:32, Richard Loosemore wrote: > On 1/27/14, 11:30 PM, Brian J Mingus wrote: > Consciousness is also such a bag of worms that we can't rule out that qualia owes its totally non-obvious and a priori unpredicted existence to concepts derived from quantum mechanics, such as nested observers, or entanglement. > > As far as I know, my litmus test for a model is the only way to tell whether low-level quantum effects are required: if the model, which has not been exposed to a corpus containing consciousness philosophy, then goes on to independently recreate consciousness philosophy, despite the fact that it is composed of (for example) point neurons, then we can be sure that low-level quantum mechanical details are not important. > > Note, however, that such a model might still rely on nested observers or entanglement. I'll let a quantum physicist chime in on that - although I will note that according to news articles I've read that we keep managing to entangle larger and larger objects - up to the size of molecules at this time, IIRC. > > > Brian Mingus > http://grey.colorado.edu/mingus > > Speaking as someone is both a physicist and a cognitive scientist, AND someone who has written papers resolving that whole C-word issue, I can tell you that the quantum story isn't nearly enough clear in the minds of physicists, yet, so how it can be applied to the C question is beyond me. Frankly, it does NOT apply: saying anything about observers and entanglement does not at any point touch the kind of statements that involve talk about qualia etc. So let's let that sleeping dog lie.... (?). > > As for using the methods/standards of physics over here in cog sci ..... 
I think it best to listen to George Bernard Shaw on this one: "Never do unto others as you would they do unto you: their tastes may not be the same." > > Our tastes (requirements/constraints/issues) are quite different, so what happens elsewhere cannot be directly, slavishly imported. > > > Richard Loosemore > > Wells College > Aurora NY > USA > > From brian.mingus at colorado.edu Tue Jan 28 15:09:14 2014 From: brian.mingus at colorado.edu (Brian J Mingus) Date: Tue, 28 Jan 2014 13:09:14 -0700 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: <52E7FF7B.7090100@susaro.com> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> <52E7FF7B.7090100@susaro.com> Message-ID: Hi Richard, thanks for the feedback. > Yes, in general, having an outcome measure that correlates with C ... that is good, but only with a clear and unambigous meaning for C itself (which I don't think anyone has, so therefore it is, after all, of no value to look for outcome measures that correlate) Actually, the outcome measure I described is independent of a clear and unambiguous meaning for C itself, and in an interesting way: the models, like us, essentially reinvent the entire literature, and have a conversation as we do, inventing almost all the same positions that we've invented (including the one in your paper). I will read your paper and see if it changes my position. At the present time, however, I can't imagine any information that would solve the so-called zombie problem. I'm not a big fan of integrative information theory - I don't think hydrogen atoms are conscious, and I don't think naive bayes trained on a large corpus and run in generative mode is conscious. 
Thus, if the model doesn't go through the same philosophical reasoning that we've collectively gone through with regards to subjective experience, then I'm going to wonder if its experience is anything like mine at all. Touching back on QM, if we create a point neuron-based model that doesn't wax philosophical on consciousness, I'm going to wonder if we should add lower levels of analysis. I will take a look at your paper, and see if it changes my view on this at all. Cheers, Brian Mingus http://grey.colorado.edu/mingus On Tue, Jan 28, 2014 at 12:05 PM, Richard Loosemore wrote: > > > Brian, > > Everything hinges on the definition of the concept ("consciousness") under > consideration. > > In the chapter I wrote in Wang & Goertzel's "Theoretical Foundations of > Artificial General Intelligence" I pointed out (echoing Chalmers) that too > much is said about C without a clear enough understanding of what is meant > by it .... and then I went on to clarify what exactly could be meant by it, > and thereby came to a resolution of the problem (with testable > predictions). So I think the answer to the question you pose below is > that: > > (a) Yes, in general, having an outcome measure that correlates with C ... > that is good, but only with a clear and unambigous meaning for C itself > (which I don't think anyone has, so therefore it is, after all, of no value > to look for outcome measures that correlate), and > > (b) All three of the approaches you mention are sidelined and finessed by > the approach I used in the abovementioned paper, where I clarify the > definition by clarifying first why we have so much difficulty defining it. > In other words, there is a fourth way, and that is to explain it as ... > well, I have to leave that dangling because there is too much subtlety to > pack into an elevator pitch. (The title is the best I can do: " Human and > Machine Consciousness as a Boundary Effect in the Concept Analysis > Mechanism "). 
> > Certainly though, the weakness of all quantum mechanics 'answers' is that > they are stranded on the wrong side of the explanatory gap. > > > Richard Loosemore > > > Reference > Loosemore, R.P.W. (2012). Human and Machine Consciousness as a Boundary > Effect in the Concept Analysis Mechanism. In: P. Wang & B. Goertzel (Eds), > Theoretical Foundations of Artificial General Intelligence. Atlantis Press. > http://richardloosemore.com/docs/2012a_Consciousness_rpwl.pdf > > > > On 1/28/14, 10:34 AM, Brian J Mingus wrote: > > Hi Richard, > > > I can tell you that the quantum story isn't nearly enough clear in the > minds of physicists, yet, so how it can be applied to the C question is > beyond me. Frankly, it does NOT apply: saying anything about observers > and entanglement does not at any point touch the kind of statements that > involve talk about qualia etc. > > I'm not sure I see the argument you're trying to make here. If you have > an outcome measure that you agree correlates with consciousness, then we > have a framework for scientifically studying it. > > Here's my setup: If you create a society of models and do not expose > them to a corpus containing consciousness philosophy and they then, in a > reasonably short amount of time, independently rewrite it, they are almost > certainly conscious. This design explicitly rules out a generative model > that accidentally spits out consciousness philosophy. > > Another approach is to accept that our brains are so similar that you > and I are almost certainly both conscious, and to then perform experiments > on each other and study our subjective reports. > > Another approach is to perform experiments on your own brain and to > write first person reports about your experience. > > These three approaches each have tradeoffs, and each provide unique > information. The first approach, in particular, might ultimately allow us > to draw some of the strongest possible conclusions.
For example, it allows > for the scientific study of the extent to which quantum effects may or may > not be relevant. > > I'm very interested in hearing any counterarguments as to why this > general approach won't work. If it *can't* work, then I would argue that > perhaps we should not create full models of ourselves, but should instead > focus on upgrading ourselves. From that perspective, getting this to work > is extremely important, despite however futuristic it may seem. > > > So let's let that sleeping dog lie.... (?). > > Not gonna' happen. :) > > Brian Mingus > http://grey.colorado.edu > > On Tue, Jan 28, 2014 at 7:32 AM, Richard Loosemore > wrote: > >> On 1/27/14, 11:30 PM, Brian J Mingus wrote: >> >>> Consciousness is also such a bag of worms that we can't rule out that >>> qualia owes its totally non-obvious and a priori unpredicted existence to >>> concepts derived from quantum mechanics, such as nested observers, or >>> entanglement. >>> >>> As far as I know, my litmus test for a model is the only way to tell >>> whether low-level quantum effects are required: if the model, which has not >>> been exposed to a corpus containing consciousness philosophy, then goes on >>> to independently recreate consciousness philosophy, despite the fact that >>> it is composed of (for example) point neurons, then we can be sure that >>> low-level quantum mechanical details are not important. >>> >>> Note, however, that such a model might still rely on nested observers or >>> entanglement. I'll let a quantum physicist chime in on that - although I >>> will note that according to news articles I've read that we keep managing >>> to entangle larger and larger objects - up to the size of molecules at this >>> time, IIRC. 
>>> >>> >>> Brian Mingus >>> http://grey.colorado.edu/mingus >>> >>> Speaking as someone who is both a physicist and a cognitive scientist, AND >> someone who has written papers resolving that whole C-word issue, I can >> tell you that the quantum story isn't nearly enough clear in the minds of >> physicists, yet, so how it can be applied to the C question is beyond me. >> Frankly, it does NOT apply: saying anything about observers and >> entanglement does not at any point touch the kind of statements that >> involve talk about qualia etc. So let's let that sleeping dog lie.... (?). >> >> As for using the methods/standards of physics over here in cog sci ..... >> I think it best to listen to George Bernard Shaw on this one: "Never do >> unto others as you would they do unto you: their tastes may not be the >> same." >> >> Our tastes (requirements/constraints/issues) are quite different, so what >> happens elsewhere cannot be directly, slavishly imported. >> >> >> Richard Loosemore >> >> Wells College >> Aurora NY >> USA >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccchow at pitt.edu Tue Jan 28 15:49:00 2014 From: ccchow at pitt.edu (Carson Chow) Date: Tue, 28 Jan 2014 15:49:00 -0500 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> Message-ID: <52E817BC.30607@pitt.edu> Randy, I just skimmed your screed against physics and while I agree that most physicists don't pay attention to the foundations of quantum mechanics and I sympathize with your angst, I still must disagree with some of your assertions. It is not true that only entangled photons can demonstrate nonlocality. Once two massive particles are outside of their respective light cones, they are no longer causally connected.
If you have two measuring devices set up very far apart and you can keep the two particles coherent then they will violate Bell's inequalities. The fact that it took them a long time to get to the space-like separation is irrelevant. All that matters is that once they are far enough apart they are causally separated but Bell's inequalities will still be violated. I also believe that Alain Aspect's experiments show photon entanglement. QED is manifestly Lorentz covariant so photons do travel at the speed of light in the theory. Also, the Lagrangian for all quantum field theories has the same form as Maxwell's equations that you like so much. I also don't really follow what you are so disturbed about with regards to the locality of photons. The normal modes of photons are indeed pure Fourier modes but the photons that you know and love come in wave packets, which imparts locality to them. I'm not sure why this bothers you. I think you also impart some advantage to semi-classical calculations over QM that doesn't seem warranted. All calculations in QFT are perturbational and there are basically two small parameters you can use - the coupling constant, e.g. alpha, or Planck's constant hbar. A semi-classical calculation (also called a loop expansion) just uses small hbar. The agreement with experiments like the Lamb shift or electron-photon scattering, etc., does improve as you go to higher order in the loop expansion. As mathematics, quantum mechanics is beautifully self-consistent and rather simple. All you need is a unitary transformation of a state function in Hilbert space together with the Born rule. You may find that distasteful as a representation of reality but I find that much more satisfying than our confusing nonlinear classical world. I think the biggest puzzle in quantum mechanics is the origin of the Born rule. Why is the L2 norm squared of the amplitude the probability? Anyway, I had no idea you were thinking about these things.
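Carson's point about Bell-inequality violation for space-like separated measurements can be made concrete with the standard CHSH combination. The sketch below is illustrative (the code and the particular angle choices are ours, not from the original message); it uses only the textbook quantum prediction E(a, b) = -cos(a - b) for spin measurements on a singlet pair:

```python
import numpy as np

def singlet_correlation(a, b):
    # Quantum prediction for spin measurements on a singlet pair,
    # with analyzer angles a and b: E(a, b) = -cos(a - b).
    return -np.cos(a - b)

# Angle settings that maximize the CHSH combination.
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

S = (singlet_correlation(a, b) - singlet_correlation(a, b_prime)
     + singlet_correlation(a_prime, b) + singlet_correlation(a_prime, b_prime))

# Any local hidden-variable theory satisfies |S| <= 2 (the CHSH bound);
# the quantum prediction is |S| = 2*sqrt(2) ~ 2.828, regardless of how
# far apart the detectors are or how long the particles took to separate.
print(abs(S))
```

Note that the detector separation never enters the formula, which is exactly Carson's point: only the coherence of the pair matters, not the travel time to the space-like separation.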
cheers, Carson On 1/28/14 2:12 PM, Randall O'Reilly wrote: > I'm glad to hear this counterpoint to all this physics envy - I took a deep dive into the current state of theory in quantum physics a while back, and was pretty shocked at what a mess it is! Sure, it works ("shut up and calculate" is a mantra) but from a conceptual level, there are some pretty serious unresolved issues, which don't seem to be very widely appreciated in the lay press, with all those rah-rah unified theory books. > > The core issue is relevant for the discussion here: physics does NOT actually have anything approaching a "mechanistic" model - it is all a descriptive calculational tool. In other words, you can compute the right answers, but this is clearly not how "nature computes physics". Indeed, the notion of finding such a mechanistic model is considered naive and has long since been abandoned. > > Translating this to our field: the Bayesians have won, and nobody cares about how neurons actually work! As long as you can compute the "behavioral" outcome of experiments (to high precision for sure), the underlying hardware is irrelevant. And those calculations seem a lot like the epicycles: you need to compute more and more terms in infinite sums to reach ever-closer approximations to the truth, with the seemingly arbitrary renormalization procedure added in to make sure everything converges. Do we think that nature is using the same technique? > > Anyway, I wrote up a critique and submitted it to a physics journal: http://arxiv.org/abs/1109.0880 > Not surprisingly, the paper was not accepted, but the review did not undermine any of the major claims of the paper, and just reiterated the "standard" lines about the whole entanglement issue, denying the validity of the various papers cited raising serious questions about this. > > I did make some friends in the "alternative" physics community from that paper, and I am currently (very slowly) working on a "neural network"
inspired model of quantum physics, described here: http://grey.colorado.edu/WELD/index.php/WELDBook/Main - in this model, everything emerges from interacting wave equations, just like we think everything in the brain emerges from interacting neurons. > > - Randy > > On Jan 28, 2014, at 8:03 AM, Kaare Mikkelsen wrote: > >> Speaking as another physicist trying to bridge the gap between physics and neuroscience I must also say that how the most abstract ideas from quantum mechanics could meaningfully (read: scientifically) be applied to macroscopic neuroscience, given our present level of understanding of either field, is beyond me. To me, it is at the point where the connection is impossible to prove or disprove, but seems very unlikely. I do not see how valid scientific results can come in that direction, seeing as there is no theory, no reasonable path towards a theory, and absolutely no way of measuring anything. >> >> -------------------------------------------------------------------- >> Kaare Mikkelsen, M. Sc. >> Institut for Fysik og Astronomi >> Ny Munkegade 120 >> 8000 >> Aarhus C >> Lok.: 1520-629 >> Tlf.: 87 15 56 37 >> -------------------------------------------------------------------- >> >> >> On 28 January 2014 15:32, Richard Loosemore wrote: >> On 1/27/14, 11:30 PM, Brian J Mingus wrote: >> Consciousness is also such a bag of worms that we can't rule out that qualia owes its totally non-obvious and a priori unpredicted existence to concepts derived from quantum mechanics, such as nested observers, or entanglement. >> >> As far as I know, my litmus test for a model is the only way to tell whether low-level quantum effects are required: if the model, which has not been exposed to a corpus containing consciousness philosophy, then goes on to independently recreate consciousness philosophy, despite the fact that it is composed of (for example) point neurons, then we can be sure that low-level quantum mechanical details are not important.
>> >> Note, however, that such a model might still rely on nested observers or entanglement. I'll let a quantum physicist chime in on that - although I will note that according to news articles I've read that we keep managing to entangle larger and larger objects - up to the size of molecules at this time, IIRC. >> >> >> Brian Mingus >> http://grey.colorado.edu/mingus >> >> Speaking as someone is both a physicist and a cognitive scientist, AND someone who has written papers resolving that whole C-word issue, I can tell you that the quantum story isn't nearly enough clear in the minds of physicists, yet, so how it can be applied to the C question is beyond me. Frankly, it does NOT apply: saying anything about observers and entanglement does not at any point touch the kind of statements that involve talk about qualia etc. So let's let that sleeping dog lie.... (?). >> >> As for using the methods/standards of physics over here in cog sci ..... I think it best to listen to George Bernard Shaw on this one: "Never do unto others as you would they do unto you: their tastes may not be the same." >> >> Our tastes (requirements/constraints/issues) are quite different, so what happens elsewhere cannot be directly, slavishly imported. >> >> >> Richard Loosemore >> >> Wells College >> Aurora NY >> USA >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rloosemore at susaro.com Tue Jan 28 15:50:05 2014 From: rloosemore at susaro.com (Richard Loosemore) Date: Tue, 28 Jan 2014 15:50:05 -0500 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> <52E7FF7B.7090100@susaro.com> Message-ID: <52E817FD.2020400@susaro.com> On 1/28/14, 3:09 PM, Brian J Mingus wrote: > Hi Richard, thanks for the feedback. > > > Yes, in general, having an outcome measure that correlates with C ... 
> that is good, but only with a clear and unambiguous meaning for C > itself (which I don't think anyone has, so therefore it is, after all, > of no value to look for outcome measures that correlate) > > Actually, the outcome measure I described is independent of a clear > and unambiguous meaning for C itself, and in an interesting way: the > models, like us, essentially reinvent the entire literature, and have > a conversation as we do, inventing almost all the same positions that > we've invented (including the one in your paper). > I can tell you in advance that the theory I propose in that paper makes a prediction there. If your models (I assume you mean models of the human cognitive system) have precisely the right positioning for their 'concept analysis mechanism' (and they almost certainly would have to... it is difficult to avoid), then they would indeed "reinvent the entire literature, and have a conversation as we do, inventing almost all the same positions that we've invented". However, I can say *why* they should do this, as a tightly-argued consequence of the theory itself, and I can also say why they should express those same confusions about consciousness that we do. I think that is the key. I don't think the naked fact that a model-of-cognition reinvents the philosophy of mind would actually tell us anything, sadly. There is no strong logical compulsion there. It would boot me little to know that they had done that. Anyhow, look forward to hearing your thoughts if/when you get a chance. Richard Loosemore -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary at eng.ucsd.edu Tue Jan 28 15:52:06 2014 From: gary at eng.ucsd.edu (Gary Cottrell) Date: Tue, 28 Jan 2014 21:52:06 +0100 Subject: Connectionists: How the brain works In-Reply-To: References: Message-ID: I think everyone is a tad over-reacting - read the RFP below from NIH. It does include theory as a major component...
The purpose of this FOA is to provide resources for integrated development of experimental, analytic and theoretical capabilities for large-scale analysis of neural systems and circuits. We seek applications for exploratory studies that use new and emerging methods for large scale recording and manipulation of neural circuits across multiple brain regions. Applications should propose to elucidate the contributions of dynamic circuit activity to a specific behavioral or neural system. Studies should incorporate rich information on cell-types, on circuit functionality and connectivity, and should be performed in conjunction with sophisticated analysis of ethologically relevant behaviors. Applications should propose teams of investigators that seek to cross boundaries of interdisciplinary collaboration by bridging fields and linking theory and data analysis to experimental design. Exploratory studies supported by this FOA are intended to develop experimental capabilities and theoretical frameworks in preparation for a future competition for large scale awards. 
On Jan 28, 2014, at 10:03 AM, Avi Peled wrote: > > Ivan I agree - and even more, neuroscientific application to psychiatry is even more at its infancy - but unfortunately patients are suffering immensely and cannot wait - the attached paper tries to tackle the problem of neuroscientific psychiatry - comments are welcome > > Abraham > > > On Tue, Jan 28, 2014 at 10:51 AM, Ivan Raikov wrote: > > My summary of the history of physics was quite wrong: the idea of infinitesimals and their application has been around since the time of Archimedes: > > http://www.idsia.ch/~juergen/archimedes.html > > http://en.wikipedia.org/wiki/Infinitesimal > > The moral is, it takes a while for fundamental ideas in science to promulgate :-) > > On Mon, Jan 27, 2014 at 3:38 PM, Ivan Raikov wrote: > > Speaking of radio and electromagnetic waves, it is perhaps the case that neuroscience has not yet reached the maturity of 19th century physics: while the discovery of electromagnetism is attributed to great experimentalists such as Ampere and Faraday, and its mathematical model is attributed to one of the greatest modelers in physics, Maxwell, none of it happened in isolation. There was a lot of duplicated experimental work and simultaneous independent discoveries in that time period, and Maxwell's equations were readily accepted and quickly refined by a number of physicists after he first postulated them. So in a sense physics had a consensus community model of electromagnetism already in the first half of the 19th century. Neuroscience is perhaps more akin to physics in the 17th century, when Newton's infinitesimal calculus was rejected and even mocked by the scientific establishment on the continent, and many years would pass until calculus was understood and widely accepted. So a unifying theory of neuroscience may not come until a lot of independent and reproducible experimentation brings it about. 
> > -Ivan > > > > On Mon, Jan 27, 2014 at 1:39 PM, Thomas Trappenberg wrote: > Some of our discussion seems to be about 'How the brain works'. I am of course not smart enough to answer this question. So let me try another system. > > How does a radio work? I guess it uses an antenna to sense an electromagnetic wave that is then amplified so that an electromagnet can drive a membrane to produce an airwave that can be sensed by our ear. Hope this captures some essential aspects. > > Now that you know, can you repair it when it doesn't work? > > > > > > > -- > Abraham Peled M.D. - Psychiatry > Chair of Dept' SM, Mental Health Center > Clinical Assistant Professor 'Technion' Israel Institute of Technology > Book author of "Optimizers 2050" and "NeuroAnalysis" > Email: neuroanalysis at gmail.com > Web: http://neuroanalysis.org.il/ > Web: www.shaar-menashe.org > Phone: +972522844050 > Fax: +97246334869 > > CONFIDENTIALITY NOTICE: Information contained in this message and any attachments is intended only for the addressee(s). If you believe that you have received this message in error, please notify the sender immediately by return electronic mail, and please delete it without further review, disclosure, or copying. > [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271] Gary Cottrell 858-534-6640 FAX: 858-534-7029 My schedule is here: http://tinyurl.com/b7gxpwo Computer Science and Engineering 0404 IF USING FED EX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington."
-Barack Obama "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said. "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht. "Physical reality is great, but it has a lousy search function." -Matt Tong "Only connect!" -E.M. Forster "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton "There is nothing objective about objective functions" - Jay McClelland "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it." -David Mermin Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccchow at pitt.edu Tue Jan 28 16:01:16 2014 From: ccchow at pitt.edu (Carson Chow) Date: Tue, 28 Jan 2014 16:01:16 -0500 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> <52E7FF7B.7090100@susaro.com> Message-ID: <52E81A9C.2080303@pitt.edu> Brian, Quantum mechanics can be completely simulated on a classical computer so if quantum mechanics does matter for C then it must be a matter of computational efficiency and nothing more. We also believe that BQP (i.e. the set of problems solved efficiently on a quantum computer) is bigger than BPP (the set of problems solved efficiently on a classical computer) but not by much. I'm not fully up to date on this but I think factoring and boson sampling are about the only two examples that are in BQP and not in BPP. We also believe that BPP is much smaller than NP, so if C does require QM then for some reason it sits in a small sliver of complexity space.
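Carson's opening claim, that quantum mechanics can be simulated exactly on a classical computer, is easy to illustrate: a brute-force simulator just stores all 2^n complex amplitudes of an n-qubit state, applies gates as linear maps, and reads out probabilities with the Born rule. The exponential memory cost is also why the simulation, while possible, is inefficient. A minimal sketch (the code and names are ours, purely for illustration):

```python
import numpy as np

def apply_gate(state, gate, target, n):
    # View the 2**n amplitude vector as an n-axis tensor, apply the 2x2
    # gate along the target qubit's axis, then flatten back.
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, target, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

n = 3                                   # memory cost: 2**n amplitudes
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                          # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
state = apply_gate(state, H, 0, n)            # superpose qubit 0

# Born rule: outcome probabilities are the squared moduli of amplitudes.
probs = np.abs(state) ** 2
```

Everything here is ordinary linear algebra, which is the point: the question is never whether a classical machine can compute quantum mechanics, only whether it can do so efficiently as n grows.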
best, Carson PS I do like your self-consistent test for confirming consciousness. I once proposed that we could just run Turing machines and see which ones asked why they exist as a test of C. Kind of similar to your idea. On 1/28/14 3:09 PM, Brian J Mingus wrote: > Hi Richard, thanks for the feedback. > > > Yes, in general, having an outcome measure that correlates with C ... > that is good, but only with a clear and unambigous meaning for C > itself (which I don't think anyone has, so therefore it is, after all, > of no value to look for outcome measures that correlate) > > Actually, the outcome measure I described is independent of a clear > and unambiguous meaning for C itself, and in an interesting way: the > models, like us, essentially reinvent the entire literature, and have > a conversation as we do, inventing almost all the same positions that > we've invented (including the one in your paper). > > I will read your paper and see if it changes my position. At the > present time, however, I can't imagine any information that would > solve the so-called zombie problem. I'm not a big fan of integrative > information theory - I don't think hydrogen atoms are conscious, and I > don't think naive bayes trained on a large corpus and run in > generative mode is conscious. Thus, if the model doesn't go through > the same philosophical reasoning that we've collectively gone through > with regards to subjective experience, then I'm going to wonder if its > experience is anything like mine at all. > > Touching back on QM, if we create a point neuron-based model that > doesn't wax philosophical on consciousness, I'm going to wonder if we > should add lower levels of analysis. > > I will take a look at your paper, and see if it changes my view on > this at all. 
> > Cheers, > > Brian Mingus > > http://grey.colorado.edu/mingus > > > > On Tue, Jan 28, 2014 at 12:05 PM, Richard Loosemore > > wrote: > > > > Brian, > > Everything hinges on the definition of the concept > ("consciousness") under consideration. > > In the chapter I wrote in Wang & Goertzel's "Theoretical > Foundations of Artificial General Intelligence" I pointed out > (echoing Chalmers) that too much is said about C without a clear > enough understanding of what is meant by it .... and then I went > on to clarify what exactly could be meant by it, and thereby came > to a resolution of the problem (with testable predictions). So I > think the answer to the question you pose below is that: > > (a) Yes, in general, having an outcome measure that correlates > with C ... that is good, but only with a clear and unambigous > meaning for C itself (which I don't think anyone has, so therefore > it is, after all, of no value to look for outcome measures that > correlate), and > > (b) All three of the approaches you mention are sidelined and > finessed by the approach I used in the abovementioned paper, where > I clarify the definition by clarifying first why we have so much > difficulty defining it. In other words, there is a fourth way, > and that is to explain it as ... well, I have to leave that > dangling because there is too much subtlety to pack into an > elevator pitch. (The title is the best I can do: " Human and > Machine Consciousness as a Boundary Effect in the Concept Analysis > Mechanism "). > > Certainly though, the weakness of all quantum mechanics 'answers' > is that they are stranded on the wrong side of the explanatory gap. > > > Richard Loosemore > > > Reference > Loosemore, R.P.W. (2012). Human and Machine Consciousness as a > Boundary Effect in the Concept Analysis Mechanism. In: P. Wang & > B. Goertzel (Eds), Theoretical Foundations of Artifical General > Intelligence. Atlantis Press. 
> http://richardloosemore.com/docs/2012a_Consciousness_rpwl.pdf > > > > On 1/28/14, 10:34 AM, Brian J Mingus wrote: >> Hi Richard, >> >> > I can tell you that the quantum story isn't nearly enough clear >> in the minds of physicists, yet, so how it can be applied to the >> C question is beyond me. Frankly, it does NOT apply: saying >> anything about observers and entanglement does not at any point >> touch the kind of statements that involve talk about qualia etc. >> >> I'm not sure I see the argument you're trying to make here. If >> you have an outcome measure that you agree correlates with >> consciousness, then we have a framework for scientifically >> studying it. >> >> Here's my setup: If you create a society of models and do not >> expose them to a corpus containing consciousness philosophy and >> they then, in a reasonably short amount of time, independently >> rewrite it, they are almost certainly conscious. This design >> explicitly rules out a generative model that accidentally spits >> out consciousness philosophy. >> >> Another approach is to accept that our brains are so similar that >> you and I are almost certainly both conscious, and to then >> perform experiments on each other and study our subjective reports. >> >> Another approach is to perform experiments on your own brain and >> to write first person reports about your experience. >> >> These three approaches each have tradeoffs, and each provide >> unique information. The first approach, in particular, might >> ultimately allow us to draw some of the strongest possible >> conclusions. For example, it allows for the scientific study of >> the extent to which quantum effects may or may not be relevant. >> >> I'm very interested in hearing any counterarguments as to why >> this general approach won't work. If it /can't/ work, then I >> would argue that perhaps we should not create full models of >> ourselves, but should instead focus on upgrading ourselves. 
From >> that perspective, getting this to work is extremely important, >> despite however futuristic it may seem. >> >> > So let's let that sleeping dog lie.... (?). >> >> Not gonna' happen. :) >> >> Brian Mingus >> http://grey.colorado.edu >> >> On Tue, Jan 28, 2014 at 7:32 AM, Richard Loosemore >> > wrote: >> >> On 1/27/14, 11:30 PM, Brian J Mingus wrote: >> >> Consciousness is also such a bag of worms that we can't >> rule out that qualia owes its totally non-obvious and a >> priori unpredicted existence to concepts derived from >> quantum mechanics, such as nested observers, or entanglement. >> >> As far as I know, my litmus test for a model is the only >> way to tell whether low-level quantum effects are >> required: if the model, which has not been exposed to a >> corpus containing consciousness philosophy, then goes on >> to independently recreate consciousness philosophy, >> despite the fact that it is composed of (for example) >> point neurons, then we can be sure that low-level quantum >> mechanical details are not important. >> >> Note, however, that such a model might still rely on >> nested observers or entanglement. I'll let a quantum >> physicist chime in on that - although I will note that >> according to news articles I've read that we keep >> managing to entangle larger and larger objects - up to >> the size of molecules at this time, IIRC. >> >> >> Brian Mingus >> http://grey.colorado.edu/mingus >> >> Speaking as someone is both a physicist and a cognitive >> scientist, AND someone who has written papers resolving that >> whole C-word issue, I can tell you that the quantum story >> isn't nearly enough clear in the minds of physicists, yet, so >> how it can be applied to the C question is beyond me. >> Frankly, it does NOT apply: saying anything about observers >> and entanglement does not at any point touch the kind of >> statements that involve talk about qualia etc. So let's let >> that sleeping dog lie.... (?). 
>> >> As for using the methods/standards of physics over here in >> cog sci ..... I think it best to listen to George Bernard >> Shaw on this one: "Never do unto others as you would they do >> unto you: their tastes may not be the same." >> >> Our tastes (requirements/constraints/issues) are quite >> different, so what happens elsewhere cannot be directly, >> slavishly imported. >> >> >> Richard Loosemore >> >> Wells College >> Aurora NY >> USA >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Tue Jan 28 16:14:52 2014 From: bower at uthscsa.edu (james bower) Date: Tue, 28 Jan 2014 15:14:52 -0600 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: <52E817FD.2020400@susaro.com> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> <52E7FF7B.7090100@susaro.com> <52E817FD.2020400@susaro.com> Message-ID: Ok, had enough here - back to work. It is emblematic, however, for me of the larger problem that a discussion that started out by raising concerns about abstract models, disconnected from the physical reality of the machine we are supposed to be understanding, has turned into a debate about quantum theory and consciousness. I rest my case. The very best to everyone and to all of us as we try to figure this out. I have no doubt that everyone is sincere and truly believes in the approach they are taking. For my part, I will stick with the nuts and bolts. Jim Bower P.S. Last one - personally I take Darwin's view that the question of consciousness isn't that interesting. On Jan 28, 2014, at 2:50 PM, Richard Loosemore wrote: > On 1/28/14, 3:09 PM, Brian J Mingus wrote: >> >> Hi Richard, thanks for the feedback. >> >> > Yes, in general, having an outcome measure that correlates with C ...
that is good, but only with a clear and unambigous meaning for C itself (which I don't think anyone has, so therefore it is, after all, of no value to look for outcome measures that correlate) >> >> Actually, the outcome measure I described is independent of a clear and unambiguous meaning for C itself, and in an interesting way: the models, like us, essentially reinvent the entire literature, and have a conversation as we do, inventing almost all the same positions that we've invented (including the one in your paper). >> > > I can tell you in advance that the theory I propose in that paper makes a prediction there. If your models (I assume you mean models of the human cognitive system) have precisely the right positioning for their 'concept analysis mechanism' (and they almost certainly would have to... it is difficult to avoid), then they would indeed "reinvent the entire literature, and have a conversation as we do, inventing almost all the same positions that we've invented". > > However, I can say *why* they should do this, as a tightly-argued consequence of the theory itself, and I can also say why they should express those same confusions about consciousness that we do. > > I think that is the key. I don't think the naked fact that a model-of-cognition reinvents the philosophy of mind would actually tell us anything, sadly. There is no strong logical compulsion there. It would boot me little to know that they had done that. > > Anyhow, look forward to hearing your thoughts if/when you get a chance. > > Richard Loosemore Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 
15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccchow at pitt.edu Tue Jan 28 16:19:31 2014 From: ccchow at pitt.edu (Carson Chow) Date: Tue, 28 Jan 2014 16:19:31 -0500 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: <52E81A9C.2080303@pitt.edu> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> <52E7FF7B.7090100@susaro.com> <52E81A9C.2080303@pitt.edu> Message-ID: <52E81EE3.505@pitt.edu> Actually, we don't know if BQP is contained in NP so maybe there is hope for QM after all. 
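For readers following the complexity-class shorthand in this exchange, the unconditional inclusions known from standard complexity theory can be written out (this summary is added for reference; it is not from the messages themselves):

```latex
\mathsf{P} \;\subseteq\; \mathsf{BPP} \;\subseteq\; \mathsf{BQP} \;\subseteq\; \mathsf{PSPACE}
```

None of these inclusions is known to be strict, BQP and NP are not known to contain one another in either direction, and even BPP ⊆ NP is unproven (the best known classical bound places BPP inside the second level of the polynomial hierarchy).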
On 1/28/14 4:01 PM, Carson Chow wrote: > Brian, > > Quantum mechanics can be completely simulated on a classical computer > so if quantum mechanics does matter for C then it must be a matter of > computational efficiency and nothing more. We also know that BQP > (i.e. set of problems solved efficiently on a quantum computer) is > bigger than BPP (set of problems solved efficiently on a classical > computer) but not by much. I'm not fully up to date on this but I > think factoring and boson sampling are about the only two examples that > are in BQP and not in BPP. We also know that BPP is much smaller than > NP, so if C does require QM then for some reason it sits in a small > sliver of complexity space. > > best, > Carson > > PS I do like your self-consistent test for confirming consciousness. I > once proposed that we could just run Turing machines and see which > ones asked why they exist as a test of C. Kind of similar to your idea. > > > On 1/28/14 3:09 PM, Brian J Mingus wrote: >> Hi Richard, thanks for the feedback. >> >> > Yes, in general, having an outcome measure that correlates with C >> ... that is good, but only with a clear and unambiguous meaning for C >> itself (which I don't think anyone has, so therefore it is, after >> all, of no value to look for outcome measures that correlate) >> >> Actually, the outcome measure I described is independent of a clear >> and unambiguous meaning for C itself, and in an interesting way: the >> models, like us, essentially reinvent the entire literature, and have >> a conversation as we do, inventing almost all the same positions that >> we've invented (including the one in your paper). >> >> I will read your paper and see if it changes my position. At the >> present time, however, I can't imagine any information that would >> solve the so-called zombie problem. 
I'm not a big fan of integrative >> information theory - I don't think hydrogen atoms are conscious, and >> I don't think naive bayes trained on a large corpus and run in >> generative mode is conscious. Thus, if the model doesn't go through >> the same philosophical reasoning that we've collectively gone through >> with regards to subjective experience, then I'm going to wonder if >> its experience is anything like mine at all. >> >> Touching back on QM, if we create a point neuron-based model that >> doesn't wax philosophical on consciousness, I'm going to wonder if we >> should add lower levels of analysis. >> >> I will take a look at your paper, and see if it changes my view on >> this at all. >> >> Cheers, >> >> Brian Mingus >> >> http://grey.colorado.edu/mingus >> >> >> >> On Tue, Jan 28, 2014 at 12:05 PM, Richard Loosemore >> > wrote: >> >> >> >> Brian, >> >> Everything hinges on the definition of the concept >> ("consciousness") under consideration. >> >> In the chapter I wrote in Wang & Goertzel's "Theoretical >> Foundations of Artificial General Intelligence" I pointed out >> (echoing Chalmers) that too much is said about C without a clear >> enough understanding of what is meant by it .... and then I went >> on to clarify what exactly could be meant by it, and thereby came >> to a resolution of the problem (with testable predictions). So >> I think the answer to the question you pose below is that: >> >> (a) Yes, in general, having an outcome measure that correlates >> with C ... that is good, but only with a clear and unambigous >> meaning for C itself (which I don't think anyone has, so >> therefore it is, after all, of no value to look for outcome >> measures that correlate), and >> >> (b) All three of the approaches you mention are sidelined and >> finessed by the approach I used in the abovementioned paper, >> where I clarify the definition by clarifying first why we have so >> much difficulty defining it. 
In other words, there is a fourth >> way, and that is to explain it as ... well, I have to leave that >> dangling because there is too much subtlety to pack into an >> elevator pitch. (The title is the best I can do: " Human and >> Machine Consciousness as a Boundary Effect in the Concept >> Analysis Mechanism "). >> >> Certainly though, the weakness of all quantum mechanics 'answers' >> is that they are stranded on the wrong side of the explanatory gap. >> >> >> Richard Loosemore >> >> >> Reference >> Loosemore, R.P.W. (2012). Human and Machine Consciousness as a >> Boundary Effect in the Concept Analysis Mechanism. In: P. Wang & >> B. Goertzel (Eds), Theoretical Foundations of Artifical General >> Intelligence. Atlantis Press. >> http://richardloosemore.com/docs/2012a_Consciousness_rpwl.pdf >> >> >> >> On 1/28/14, 10:34 AM, Brian J Mingus wrote: >>> Hi Richard, >>> >>> > I can tell you that the quantum story isn't nearly enough >>> clear in the minds of physicists, yet, so how it can be applied >>> to the C question is beyond me. Frankly, it does NOT apply: >>> saying anything about observers and entanglement does not at >>> any point touch the kind of statements that involve talk about >>> qualia etc. >>> >>> I'm not sure I see the argument you're trying to make here. If >>> you have an outcome measure that you agree correlates with >>> consciousness, then we have a framework for scientifically >>> studying it. >>> >>> Here's my setup: If you create a society of models and do not >>> expose them to a corpus containing consciousness philosophy and >>> they then, in a reasonably short amount of time, independently >>> rewrite it, they are almost certainly conscious. This design >>> explicitly rules out a generative model that accidentally spits >>> out consciousness philosophy. 
>>> >>> Another approach is to accept that our brains are so similar >>> that you and I are almost certainly both conscious, and to then >>> perform experiments on each other and study our subjective reports. >>> >>> Another approach is to perform experiments on your own brain and >>> to write first person reports about your experience. >>> >>> These three approaches each have tradeoffs, and each provide >>> unique information. The first approach, in particular, might >>> ultimately allow us to draw some of the strongest possible >>> conclusions. For example, it allows for the scientific study of >>> the extent to which quantum effects may or may not be relevant. >>> >>> I'm very interested in hearing any counterarguments as to why >>> this general approach won't work. If it /can't/ work, then I >>> would argue that perhaps we should not create full models of >>> ourselves, but should instead focus on upgrading ourselves. From >>> that perspective, getting this to work is extremely important, >>> despite however futuristic it may seem. >>> >>> > So let's let that sleeping dog lie.... (?). >>> >>> Not gonna' happen. :) >>> >>> Brian Mingus >>> http://grey.colorado.edu >>> >>> On Tue, Jan 28, 2014 at 7:32 AM, Richard Loosemore >>> > wrote: >>> >>> On 1/27/14, 11:30 PM, Brian J Mingus wrote: >>> >>> Consciousness is also such a bag of worms that we can't >>> rule out that qualia owes its totally non-obvious and a >>> priori unpredicted existence to concepts derived from >>> quantum mechanics, such as nested observers, or >>> entanglement. 
>>> >>> As far as I know, my litmus test for a model is the only >>> way to tell whether low-level quantum effects are >>> required: if the model, which has not been exposed to a >>> corpus containing consciousness philosophy, then goes on >>> to independently recreate consciousness philosophy, >>> despite the fact that it is composed of (for example) >>> point neurons, then we can be sure that low-level >>> quantum mechanical details are not important. >>> >>> Note, however, that such a model might still rely on >>> nested observers or entanglement. I'll let a quantum >>> physicist chime in on that - although I will note that >>> according to news articles I've read that we keep >>> managing to entangle larger and larger objects - up to >>> the size of molecules at this time, IIRC. >>> >>> >>> Brian Mingus >>> http://grey.colorado.edu/mingus >>> >>> Speaking as someone is both a physicist and a cognitive >>> scientist, AND someone who has written papers resolving that >>> whole C-word issue, I can tell you that the quantum story >>> isn't nearly enough clear in the minds of physicists, yet, >>> so how it can be applied to the C question is beyond me. >>> Frankly, it does NOT apply: saying anything about >>> observers and entanglement does not at any point touch the >>> kind of statements that involve talk about qualia etc. So >>> let's let that sleeping dog lie.... (?). >>> >>> As for using the methods/standards of physics over here in >>> cog sci ..... I think it best to listen to George Bernard >>> Shaw on this one: "Never do unto others as you would they >>> do unto you: their tastes may not be the same." >>> >>> Our tastes (requirements/constraints/issues) are quite >>> different, so what happens elsewhere cannot be directly, >>> slavishly imported. >>> >>> >>> Richard Loosemore >>> >>> Wells College >>> Aurora NY >>> USA >>> >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
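Carson's premise above, that quantum mechanics can be simulated exactly on a classical computer at a cost that grows exponentially with system size, is easy to demonstrate with a brute-force state-vector simulation. The sketch below is illustrative only (it assumes NumPy) and prepares a Bell pair, the maximally entangled two-qubit state discussed throughout this thread:

```python
import numpy as np

# Brute-force classical simulation of a small quantum system.  The state
# of n qubits is a vector of 2**n complex amplitudes, so this is exact
# but exponentially expensive: Carson's "computational efficiency and
# nothing more" caveat.

def apply_single(state, gate, target, n):
    # Apply a one-qubit gate to qubit `target` (qubit 0 = most significant).
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],           # control = qubit 0
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

n = 2
state = np.zeros(2 ** n)
state[0] = 1.0                         # start in |00>
state = apply_single(state, H, 0, n)   # superpose qubit 0
state = CNOT @ state                   # entangle: Bell state (|00>+|11>)/sqrt(2)
print(state)                           # nonzero amplitudes only at |00> and |11>
```

The 2**n size of the state vector is the whole caveat: simulable in principle, exponential in memory and time in practice.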
URL: From ccchow at pitt.edu Tue Jan 28 16:34:14 2014 From: ccchow at pitt.edu (Carson Chow) Date: Tue, 28 Jan 2014 16:34:14 -0500 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> <52E7FF7B.7090100@susaro.com> <52E817FD.2020400@susaro.com> Message-ID: <52E82256.2080906@pitt.edu> Jim, Before you check out there is one question that I wanted you to address. I think you believe that the brain is irreducibly complex so that Kolmogorov complexity of the brain is the brain itself. Is this true? If this is true then the only model of the brain is the brain itself. We can reconstruct it by faithfully simulating it but there is no abstraction of it that is simpler than the whole thing, right? In such a situation, what does it mean to understand the brain? Also, while I agree that we should focus only on things that evolution can affect I don't think we can say how optimal the brain is. Three billion years may be a long time to us but nature can only search a miniscule sample of genome space in that time. All we can really say is that the brain is locally optimal conditioned on its entire history. Thus, we have no idea of how many possible ways to construct brains that have performance capabilities similar to mammals. best, Carson On 1/28/14 4:14 PM, james bower wrote: > Ok, had enough here - back to work. > > It is emblematic, however, for me of the larger problem that a > discussion that started out by raising concerns about abstract models, > disconnected from the physical realty of machine we are supposed to be > understanding, has turned into a debate about quantum theory and > consciousness. > > I rest my case > > The very best to everyone and to all of us as we try to figure this > out. I have no doubt that everyone is sincere and truly believes in > the approach they are taking. 
For my part, I will stick with the nuts > and bolts. > > Jim Bower > > P.S. Last one - personally I take Darwin's view that the question of > consciousness isn't that interesting. > > > > > On Jan 28, 2014, at 2:50 PM, Richard Loosemore > wrote: > >> On 1/28/14, 3:09 PM, Brian J Mingus wrote: >>> Hi Richard, thanks for the feedback. >>> >>> > Yes, in general, having an outcome measure that correlates with C >>> ... that is good, but only with a clear and unambiguous meaning for C >>> itself (which I don't think anyone has, so therefore it is, after >>> all, of no value to look for outcome measures that correlate) >>> >>> Actually, the outcome measure I described is independent of a clear >>> and unambiguous meaning for C itself, and in an interesting way: the >>> models, like us, essentially reinvent the entire literature, and >>> have a conversation as we do, inventing almost all the same >>> positions that we've invented (including the one in your paper). >>> >> >> I can tell you in advance that the theory I propose in that paper >> makes a prediction there. If your models (I assume you mean models >> of the human cognitive system) have precisely the right positioning >> for their 'concept analysis mechanism' (and they almost certainly >> would have to... it is difficult to avoid), then they would indeed >> "reinvent the entire literature, and have a conversation as we do, >> inventing almost all the same positions that we've invented". >> >> However, I can say *why* they should do this, as a tightly-argued >> consequence of the theory itself, and I can also say why they should >> express those same confusions about consciousness that we do. >> >> I think that is the key. I don't think the naked fact that a >> model-of-cognition reinvents the philosophy of mind would actually >> tell us anything, sadly. There is no strong logical compulsion >> there. It would boot me little to know that they had done that. 
>> >> Anyhow, look forward to hearing your thoughts if/when you get a chance. >> >> Richard Loosemore > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > *Phone: 210 382 0553* > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From christos.dimitrakakis at gmail.com Tue Jan 28 16:49:35 2014 From: christos.dimitrakakis at gmail.com (Christos Dimitrakakis) Date: Tue, 28 Jan 2014 22:49:35 +0100 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: <52E81EE3.505@pitt.edu> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> <52E7FF7B.7090100@susaro.com> <52E81A9C.2080303@pitt.edu> <52E81EE3.505@pitt.edu> Message-ID: <52E825EF.1000001@gmail.com> >> are in BQP and not in BPP. We also know that BPP is much smaller than >> NP, so if C does require QM then for some reason it sits in a small Well, P <= BPP <= NP - we can guess that BPP is smaller than NP, but even the outer relationship has not been resolved yet. -- Christos Dimitrakakis http://www.cse.chalmers.se/~chrdimi/ From randy.oreilly at colorado.edu Tue Jan 28 18:02:23 2014 From: randy.oreilly at colorado.edu (Randall O'Reilly) Date: Tue, 28 Jan 2014 16:02:23 -0700 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: <52E817BC.30607@pitt.edu> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> <52E817BC.30607@pitt.edu> Message-ID: <7D18CDFF-631B-4029-B949-830474742A30@colorado.edu> Carson -- yeah I need to fix that part about the massive particles -- that has been pointed out to me and was just sloppy on my part. I don't think it changes the fundamental points though -- I just focus on the photon case because it is where all the experimental action is, and the issues seem particularly clear there. I don't know why it is so hard to communicate in the context of physics this difference between a mechanistic model and a calculational tool, which is at the core of my issue with QM. I completely agree that QM in Hilbert space is a clean, clear, nice calculational tool. 
But I just don't see how nature could do anything like it. Going up to the neuroscience level, it seems perfectly clear to everyone to say that a Bayesian or other abstract mathematical model can capture the relevant behavior, but it doesn't capture the way that the brain actually does it (through neurons and ion channels and whatnot). There may not be (and unless you're incredibly lucky, almost certainly isn't) a direct isomorphism between the variables in the abstract model and those in the brain. But you can capture the behavior very accurately. This is just two different levels of description of the same phenomenon. In physics, things are simple enough that you can have multiple different ways of describing the exact same thing that are *completely isomorphic* -- they are truly indistinguishable. The example in the paper is the Coulomb vs. Lorenz gauge formulations of EM, but there are many others. I claim that the Lorenz gauge is something that nature could compute (it is a local, simple wave equation, easily implemented in a cellular automaton, etc). My key point then is that there isn't an equivalent representation of QM that has these same properties. It is all done in Fourier space with perturbative expansions and whatnot, which are not something that nature could locally, tractably compute. Maybe I'm just crazy but this just seems self-evident to me, but I have the hardest time communicating it! Cheers, - Randy On Jan 28, 2014, at 1:49 PM, Carson Chow wrote: > Randy, I just skimmed your screed against physics and while I agree that most physicists don't pay attention to the foundations of quantum mechanics and I sympathize with your angst, I still must disagree with some of your assertions. > > It is not true that only entangled photons can demonstrate nonlocality. Once two massive particles are outside of their respective light cones, they are no longer causally related by the speed-of-light. 
If you have two measuring devices set up very far apart and you can keep the two particles coherent then they will violate Bell's inequalities. The fact that it took them a long time to get to the space-like separation is irrelevant. All that matters is that once they are far enough apart they are causally separated but Bell's inequalities will still be violated. I also believe that Alain Aspect's experiments show photon entanglement. > > QED is manifestly Lorentz covariant so photons do travel at the speed of light in the theory. Also, the Lagrangian for all quantum field theories has the same form as Maxwell's equations that you like so much. I also don't really follow what you are so disturbed about with regards to the locality of photons. The normal modes of photons are indeed pure Fourier modes but the photons that you know and love come in wave packets, which imparts locality to them. I'm not sure why this bothers you. > > I think you also impart some advantage to semi-classical calculations over QM that don't seem warranted. All calculations in QFT are perturbational and there are basically two small parameters you can use - the coupling constant, e.g. alpha, or Planck's constant hbar. A semi-classical calculation (also called a loop expansion) just uses small hbar. The agreement with experiments like the Lamb shift or electron-photon scattering, etc. does improve as you go to higher order in the loop expansion. > > As mathematics, quantum mechanics is beautifully self-consistent and rather simple. All you need is unitary transformation of a state function in Hilbert space together with the Born rule. You may find that distasteful as a representation of reality but I find that much more satisfying than our confusing nonlinear classical world. I think the biggest puzzle in quantum mechanics is the origin of the Born rule. Why is the L2 norm squared of the amplitude probability? > > Anyway, I had no idea you were thinking about these things. 
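Carson's Bell-inequality point can be made concrete numerically: for the singlet state, the CHSH combination of correlators reaches 2√2, while any local hidden-variable account is bounded by 2. A minimal check (assuming NumPy; the measurement angles are the standard CHSH choices, not anything taken from the thread):

```python
import numpy as np

# Quantum CHSH value for the singlet state: |S| = 2*sqrt(2), above the
# local-hidden-variable (Bell) bound of 2.

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def meas(theta):
    # Spin measurement along an axis at angle theta in the x-z plane.
    return np.cos(theta) * Z + np.sin(theta) * X

# Singlet state (|01> - |10>)/sqrt(2), the canonical entangled pair.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def corr(a, b):
    # Correlator <A(a) (x) B(b)> in the singlet state; equals -cos(a - b).
    return singlet @ np.kron(meas(a), meas(b)) @ singlet

a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = corr(a1, b1) - corr(a1, b2) + corr(a2, b1) + corr(a2, b2)
print(abs(S))  # 2.828..., i.e. 2*sqrt(2) > 2
```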
> > cheers, > Carson > > > On 1/28/14 2:12 PM, Randall O'Reilly wrote: >> I'm glad to hear this counterpoint to all this physics envy - I took a deep dive into the current state of theory in quantum physics a while back, and was pretty shocked at what a mess it is! Sure, it works ("shut up and calculate" is a mantra) but from a conceptual level, there are some pretty serious unresolved issues, which don't seem to be very widely appreciated in the lay press, with all those rah-rah unified theory books. >> >> The core issue is relevant for the discussion here: physics does NOT actually have anything approaching a "mechanistic" model - it is all a descriptive calculational tool. In other words, you can compute the right answers, but this is clearly not how "nature computes physics". Indeed, the notion of finding such a mechanistic model is considered naive and has long since been abandoned. >> >> Translating this to our field: the Bayesians have won, and nobody cares about how neurons actually work! As long as you can compute the "behavioral" outcome of experiments (to high precision for sure), the underlying hardware is irrelevant. And those calculations seem a lot like the epicycles: you need to compute more and more terms in infinite sums to reach ever-closer approximations to the truth, with the seemingly arbitrary renormalization procedure added in to make sure everything converges. Do we think that nature is using the same technique? >> >> Anyway, I wrote up a critique and submitted it to a physics journal: >> http://arxiv.org/abs/1109.0880 >> >> Not surprisingly, the paper was not accepted, but the review did not undermine any of the major claims of the paper, and just reiterated the "standard" lines about the whole entanglement issue, denying the validity of the various papers cited raising serious questions about this. >> >> I did make some friends in the "alternative" 
physics community from that paper, and I am currently (very slowly) working on a "neural network" inspired model of quantum physics, described here: >> http://grey.colorado.edu/WELD/index.php/WELDBook/Main >> - in this model, everything emerges from interacting wave equations, just like we think everything in the brain emerges from interacting neurons. >> >> - Randy >> >> On Jan 28, 2014, at 8:03 AM, Kaare Mikkelsen >> >> wrote: >> >>> Speaking as another physicist trying to bridge the gap between physics and neuroscience I must also say that how the most abstract ideas from quantum mechanics could meaningfully (read: scientifically) be applied to macroscopic neuroscience, given our present level of understanding of either field, is beyond me. To me, it is at the point where the connection is impossible to prove or disprove, but seems very unlikely. I do not see how valid scientific results can come in that direction, seeing as there is no theory, no reasonable path towards a theory, and absolutely no way of measuring anything. >>> >>> -------------------------------------------------------------------- >>> Kaare Mikkelsen, M. Sc. >>> Institut for Fysik og Astronomi >>> Ny Munkegade 120 >>> 8000 >>> Aarhus C >>> Lok.: 1520-629 >>> Tlf.: 87 15 56 37 >>> -------------------------------------------------------------------- >>> >>> >>> On 28 January 2014 15:32, Richard Loosemore >>> >>> wrote: >>> On 1/27/14, 11:30 PM, Brian J Mingus wrote: >>> Consciousness is also such a bag of worms that we can't rule out that qualia owes its totally non-obvious and a priori unpredicted existence to concepts derived from quantum mechanics, such as nested observers, or entanglement. 
>>> >>> As far as I know, my litmus test for a model is the only way to tell whether low-level quantum effects are required: if the model, which has not been exposed to a corpus containing consciousness philosophy, then goes on to independently recreate consciousness philosophy, despite the fact that it is composed of (for example) point neurons, then we can be sure that low-level quantum mechanical details are not important. >>> >>> Note, however, that such a model might still rely on nested observers or entanglement. I'll let a quantum physicist chime in on that - although I will note that according to news articles I've read that we keep managing to entangle larger and larger objects - up to the size of molecules at this time, IIRC. >>> >>> >>> Brian Mingus >>> >>> http://grey.colorado.edu/mingus >>> >>> >>> Speaking as someone is both a physicist and a cognitive scientist, AND someone who has written papers resolving that whole C-word issue, I can tell you that the quantum story isn't nearly enough clear in the minds of physicists, yet, so how it can be applied to the C question is beyond me. Frankly, it does NOT apply: saying anything about observers and entanglement does not at any point touch the kind of statements that involve talk about qualia etc. So let's let that sleeping dog lie.... (?). >>> >>> As for using the methods/standards of physics over here in cog sci ..... I think it best to listen to George Bernard Shaw on this one: "Never do unto others as you would they do unto you: their tastes may not be the same." >>> >>> Our tastes (requirements/constraints/issues) are quite different, so what happens elsewhere cannot be directly, slavishly imported. 
>>> >>> >>> Richard Loosemore >>> >>> Wells College >>> Aurora NY >>> USA >>> >>> >>> >> > From ccchow at pitt.edu Tue Jan 28 18:21:40 2014 From: ccchow at pitt.edu (Carson Chow) Date: Tue, 28 Jan 2014 18:21:40 -0500 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: <52E825EF.1000001@gmail.com> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> <52E7FF7B.7090100@susaro.com> <52E81A9C.2080303@pitt.edu> <52E81EE3.505@pitt.edu> <52E825EF.1000001@gmail.com> Message-ID: <52E83B84.2080802@pitt.edu> You're absolutely right. I was implicitly assuming P != NP but of course that's the million dollar question! Anyway, if QM has anything to do with C then somehow C is in BQP but not in BPP. On 1/28/14 4:49 PM, Christos Dimitrakakis wrote: >>> are in BQP and not in BPP. We also know that BPP is much smaller than >>> NP, so if C does require QM then for some reason it sits in a small > > Well, P <= BPP <= NP - we can guess that BPP is smaller than NP, but > even the outer relationship has not been resolved yet. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Tue Jan 28 18:29:39 2014 From: bower at uthscsa.edu (james bower) Date: Tue, 28 Jan 2014 17:29:39 -0600 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: <52E82256.2080906@pitt.edu> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> <52E7FF7B.7090100@susaro.com> <52E817FD.2020400@susaro.com> <52E82256.2080906@pitt.edu> Message-ID: <3BCD5774-9EE4-4B12-A7BC-DFB62600A49B@uthscsa.edu> On Jan 28, 2014, at 3:34 PM, Carson Chow wrote: > Jim, > > Before you check out there is one question that I wanted you to address. 
I think you believe that the brain is irreducibly complex so that Kolmogorov complexity of the brain is the brain itself. Is this true? Yes, this is my operating assumption. One thing to note about that, which I think is important. In my view a major issue with neuroscience is that I believe we actually have a very poor understanding of what the brain actually does. While we have technology that can study its anatomical structure at angstrom spatial scale, and we routinely measure its electrical activity at microsecond and millisecond levels, in most cases our description of its overall behavior is at scales of minutes to hours and not very sophisticated at all. Yes, we can measure physical movement of eyes, limbs, etc at very high spatial and temporal resolution - and clearly progress has been made in linking those kinds of brain behavior to neural activity (although perhaps not as much as some think) - however at the level of the behavioral function of the overall machine I believe our understanding is far too crude and as challenging as figuring out the brain's structure (another way to answer your question). (Branch point I won't take to how much "cognitive" analysis is responsible for this :-) ). Anyway, how is that related to the complexity question? One can certainly design a model or system that, for example, controls the movements of robotic limbs that has much less than the apparent complexity of the systems in the brain that control arm movement. While I think it is still fair to say that after many years of effort, robots designed to mimic the behavior of actual biological organisms are still clearly distinguishable from the biological case. However, it seems to me that figuring out IF the Kolmogorov complexity of the problem the brain solves is equivalent to the complexity of the brain, requires an understanding of the complexity of the problem the brain solves in total. 
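The "irreducibly complex" claim above can be made concrete: Kolmogorov complexity is uncomputable, but compressed size gives a crude, computable upper bound on it, and an object whose Kolmogorov complexity is (close to) itself is one that no description compresses. A toy illustration using only the Python standard library (the byte strings are of course stand-ins, not brain data):

```python
import os
import zlib

# Compressed size as a crude upper bound on Kolmogorov complexity.
# "The K-complexity of the brain is the brain itself" would mean the
# brain is near-incompressible, like the random bytes below, rather
# than regular, like the repeated pattern.

structured = b"spike" * 10_000     # highly regular: a short program generates it
random_like = os.urandom(50_000)   # no structure to exploit

print(len(structured), "->", len(zlib.compress(structured)))     # shrinks drastically
print(len(random_like), "->", len(zlib.compress(random_like)))   # stays around 50000
```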
Of course, in my animal behavior course, students eventually figure out that in some overall sense, the problem that male and female brains are trying to solve is whether he/she will or won't let me commingle gametes (why it takes 19-year-olds half a semester to realize that has always been beyond me). However, in fact, the behavior, decision making, etc. involved in getting to that point and then beyond is vastly more complex, I think, than we know. (Perhaps one reason why we like to short-circuit all that stuff with drugs and rock and roll :-) ). But why do I assume the K-C relationship? As I mentioned before, in those few cases where we can actually authoritatively measure the performance of the nervous system against constraints given by the actual physics (mostly those measurements have been made at the receptor level), the nervous system's performance comes right up against the physical limits. NOT ONLY THAT, there is evidence that further computation in the brain can actually improve the resolution (so-called hyperacuity). There is also a thermodynamic argument - very simple (perhaps too simple). The brain has 10 to the 12th neurons (let's say), 10 to the 14th (conservatively) connections. And its abilities are obviously remarkable. Yet you have to heat it up to make it work. It doesn't generate enough energy to keep itself warm. Perhaps not everyone knows that, by volume, the largest fraction of the brain (by a lot) is actually the system that supports the neurons - the glia and the vascular system - providing the energy (glucose, remarkably enough) and the heat. Sounds to me like rather remarkable engineering (if you excuse the inference). As everyone here probably does know, heat generation is a principal limitation on the design of modern chips - one other overall measure of this structure's likely sophistication. The first talk I gave in a Neural Networks context was at a small meeting at the Miramar Hotel in Santa Barbara.
This meeting had been organized (by Caltech and AT&T) as a follow-on to a "Hopfest" that had taken place a few months earlier at Caltech. I gave a talk about the structure of olfactory cortex and the likely origin of the theta (7-12 Hz) and gamma (40 Hz) oscillations in cerebral cortex - by talking about the results of what I think was the first anatomically realistic model of the cerebral cortex that had been built (by Matt Wilson as a master's degree student on an IBM XT computer). In describing the model, I stated that there appeared to be a very diffuse pattern of intrinsic connections with no order (note my previous post, where I said that I now believe, based on the 3rd generation of that model, that this is wrong - anyway). The audience didn't care about oscillations at all - but they did pick up on the idea that there was possibly a Hopfield-like network in the real brain, and they asked if this was possible. I told them that it was almost certainly not possible, but that evening in the bar "proved it" by calculating how large the brain would be if all the neurons were connected to every other neuron. Only considering the axons - excluding the rest of the neurons, the glia, the vasculature, etc. - the answer turned out to be 20 km in diameter. Not much room for redundancy there. So, sorry, once again long-winded - but my assumption, for all these reasons, is that the K-complexity of the brain is the brain itself. (And let the howling begin.) > If this is true then the only model of the brain is the brain itself. We can reconstruct it by faithfully simulating it but there is no abstraction of it that is simpler than the whole thing, right? In such a situation, what does it mean to understand the brain? An outstanding and important question, and something I worry about all the time while, at the same time, I don't. And here is why. And this, I think, is a critical distinction in the world of modeling AND in this debate.
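The bar-napkin wiring calculation described above can be redone explicitly. The parameter values below (axon radius, mean axon length of half the diameter) are my own guesses, not figures from the thread, so the resulting number differs from the 20 km quoted; under any reasonable choice, though, the conclusion is the same: full all-to-all connectivity is volumetrically impossible. A rough Python sketch:

```python
import math

# If every one of N neurons sent an axon to every other neuron, how big
# would the brain be, counting only axon volume? Assumed parameters:
N = 1e12              # neurons, the count used in the discussion
r = 1e-7              # axon radius in meters (0.1 micron) -- my assumption
a = math.pi * r ** 2  # axon cross-sectional area, m^2

# ~N^2 axons, each assumed to span about half the structure's diameter D.
# Solve (pi/6) * D^3 = N^2 * (D/2) * a for D:
D = N * math.sqrt(3 * a / math.pi)
print(f"required diameter: about {D / 1000:.0f} km")
```

Even ignoring cell bodies, glia, and vasculature, the diameter comes out in the hundreds of kilometers with these parameters; thinner assumed axons shrink the figure toward the 20 km in the thread, but the order-of-magnitude point stands either way.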
It is not my intent to publish in my lifetime an accurate model of the brain and its function, or even of the cerebellar Purkinje cell, and I don't think anyone else will either. My interest has always been in the process - in being on the right path - that is really the fundamental foundation of what I have been trying to say for the last 3 days. We need to think of this endeavor as a quest - and I don't think there will be any shortcuts. Building GENESIS, working to start the CNS graduate program at Caltech, starting NIPS, starting the CNS program, starting the summer courses in Woods Hole and then Europe and then Latin America, starting the Journal of Computational Neuroscience, publishing all our models openly, inviting criticism, etc. - was all about starting a process - which I think we are at the very beginning of - absolutely. It is my deep belief that if we as humans start on a quest, with the right tools, headed in the right direction, working collaboratively and cooperatively, we can make progress. We did build the pyramids, thousands of years ago. Pretty remarkable. But I think throughout human history, we have often tried to take shortcuts - and we also have a strong tendency to let our egos drive the enterprise - to be the one who found out how visual cortex worked, or at least convinced some number of poor experimentalists that we are right. In my view, not much chance. So, in fact, those on this list who have criticized me for claiming that there is one true way (my way) to figure this system out are not understanding. All I said, and what I deeply believe, is that if your model or theory isn't directly connected to the actual physical structure of the machine, it isn't likely to contribute to the long-haul process of figuring out how that machine works.
And if your experimental efforts aren't linked to models and theories, you have no way of knowing that the data you are collecting are actually the information that is most needed now to advance our understanding for the next increment. And if the strong tendency of the field is to have everyone work on their own models, claim their models account for all the data (just as everyone else tends to claim their models do) and at the same time claim their model is unique - then we aren't using the right process. And I think that progress is all about process. Anyway, I am being redundant - but I think that your question is fundamental, and also honestly why there is so much resistance to this approach to modeling - in our heart of hearts I suspect we all know that this task is not doable in our lifetime, and who knows, may not be doable at all. (Physicists looking for their keys under the light post.) But what the heck - it's interesting. > > Also, while I agree that we should focus only on things that evolution can affect I don't think we can say how optimal the brain is. See above - and of course we can't - all we can say is that we haven't come close to building a machine that does what it does IN TOTAL (yes, we can beat humans at Jeopardy - but who cares? Watson couldn't find its way to the studio and certainly couldn't beat me in polo). Furthermore, I don't believe that we yet even know what kind of machine the brain is. I wrote a paper years ago, I think for Trends in Neurosciences, making the point that humans have always thought that the brain was built and functioned based on the same principles as the most sophisticated technology of the day (humors for the aqueduct-loving Greeks, machinery for Descartes, and a parallel distributed analog computer for us). Are we any closer? Who knows? Sadly, we often seem to forget that these comparisons are metaphorical, not actual.
> Three billion years may be a long time to us but nature can only search a minuscule sample of genome space in that time. Oh boy - you really don't want me to write pages more about how misunderstood the evolutionary process is, and how completely unclear it is (in large part because that field suffers from the same kind of short-cut modeling) how DNA structure is related to cellular behavior - even not considering evolution - remember "junk DNA"? - we really don't want to discuss that. Another manifestation of the same unfortunate human tendency to take shortcuts - (perhaps this tendency exists because the brain is always looking for ways to reduce its own energy costs :-) ). > All we can really say is that the brain is locally optimal conditioned on its entire history. Thus, we have no idea of how many possible ways there are to construct brains that have performance capabilities similar to mammals. One thing we do know - which is another little bit of circumstantial evidence - is that when brain components (like the retina) evolve to perform a similar type of function in two animals without a common seeing ancestor (the cephalopod and the primate), the retina that is generated is remarkably similar in structure - in fact, my understanding is that, except that the cephalopod retina is pointed in the "right" direction - i.e., towards the light source - it is essentially indistinguishable in its architecture. Tends to suggest that there may not be that many optimal solutions - at least, it suggests that the same solution evolved independently. (By the way, the problem of convergent evolution is actually a huge confound, often ignored in computational modeling of DNA evolution and history.) Well, I tried to sign off briefly - but couldn't help myself, as I think these are core questions. The senior faculty on this list have no doubt tired of these kinds of arguments - and many in particular have tired of me making them.
However, I don't engage in these debates to impress my colleagues (long since unimpressed). I engage in them so that students can think in new and fresh ways about this stuff, and not bury them, just to get to work. The question you asked is key - its implications are profound and nobody knows - but I have my suspicions. Best, and my apologies for once again being long-winded. Jim > > best, > Carson > > > > On 1/28/14 4:14 PM, james bower wrote: >> Ok, had enough here - back to work. >> >> It is emblematic, however, for me of the larger problem that a discussion that started out by raising concerns about abstract models, disconnected from the physical reality of the machine we are supposed to be understanding, has turned into a debate about quantum theory and consciousness. >> >> I rest my case. >> >> The very best to everyone and to all of us as we try to figure this out. I have no doubt that everyone is sincere and truly believes in the approach they are taking. For my part, I will stick with the nuts and bolts. >> >> Jim Bower >> >> p.s. Last one - personally I take Darwin's view that the question of consciousness isn't that interesting. >> >> >> >> >> On Jan 28, 2014, at 2:50 PM, Richard Loosemore wrote: >> >>> On 1/28/14, 3:09 PM, Brian J Mingus wrote: >>>> >>>> Hi Richard, thanks for the feedback. >>>> >>>> > Yes, in general, having an outcome measure that correlates with C ... that is good, but only with a clear and unambiguous meaning for C itself (which I don't think anyone has, so therefore it is, after all, of no value to look for outcome measures that correlate) >>>> >>>> Actually, the outcome measure I described is independent of a clear and unambiguous meaning for C itself, and in an interesting way: the models, like us, essentially reinvent the entire literature, and have a conversation as we do, inventing almost all the same positions that we've invented (including the one in your paper).
>>>> >>> >>> I can tell you in advance that the theory I propose in that paper makes a prediction there. If your models (I assume you mean models of the human cognitive system) have precisely the right positioning for their 'concept analysis mechanism' (and they almost certainly would have to... it is difficult to avoid), then they would indeed "reinvent the entire literature, and have a conversation as we do, inventing almost all the same positions that we've invented". >>> >>> However, I can say *why* they should do this, as a tightly-argued consequence of the theory itself, and I can also say why they should express those same confusions about consciousness that we do. >>> >>> I think that is the key. I don't think the naked fact that a model-of-cognition reinvents the philosophy of mind would actually tell us anything, sadly. There is no strong logical compulsion there. It would boot me little to know that they had done that. >>> >>> Anyhow, look forward to hearing your thoughts if/when you get a chance. >>> >>> Richard Loosemore >> >> >> >> >> >> Dr. James M. Bower Ph.D. >> >> Professor of Computational Neurobiology >> >> Barshop Institute for Longevity and Aging Studies. >> >> 15355 Lambda Drive >> >> University of Texas Health Science Center >> >> San Antonio, Texas 78245 >> >> >> Phone: 210 382 0553 >> >> Email: bower at uthscsa.edu >> >> Web: http://www.bower-lab.org >> >> twitter: superid101 >> >> linkedin: Jim Bower >> >> >> CONFIDENTIAL NOTICE: >> >> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. 
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or >> >> any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be >> >> immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. >> >> >> > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower From achler at gmail.com Tue Jan 28 19:33:12 2014 From: achler at gmail.com (Tsvi Achler) Date: Tue, 28 Jan 2014 16:33:12 -0800 Subject: Connectionists: Brain-like computing fanfare and big data fanfare Message-ID: Jim, I am not surprised that olfactory activation changes significantly between awake and anesthetized animals. There are hypothesized to be two types of top-down feedback loops in the olfactory bulb which originate from the brain, e.g. (Meisami 1991). Also, I am not surprised recognition is not continuous with slightly changing molecules. Molecule folding and configuration (secondary and tertiary structure) are significant in determining the properties that molecules, receptors, and receptor-molecule complexes exhibit. I also suspect slightly changing receptors will yield radically different responses. Thanks for the feedback. I look forward to a continuing vibrant discussion of network brain-like properties and limits. Sincerely, -Tsvi Meisami, E. (1991). Chemoreception. Neural and integrative animal physiology, C. L. Prosser. New York, Wiley-Liss: p335-434 ix, 776 p. On Mon, Jan 27, 2014 at 2:57 PM, james bower wrote: > Tsvi, > > Nice list and I think a productive approach - the nature of the questions is > obviously important. > > It's another long story, but might I ask that you, and the community, consider > how the answer to each of these questions might change if, for example, the > nervous system already 'knows' in advance what it is looking for and > 'learning' in the usual sense, is not involved.
> > In our own work, to my great surprise, evidence has emerged from both > detailed physiological models of the olfactory cortex and also from studies > of the classification systems that humans use to describe odors, that the > olfactory system might already know a great deal in advance about the > metabolic structure of the organic world it is trying to detect and that > that prior knowledge plays a key role in recognition. These results, first > obtained 15 years ago, were never published except in thesis form in large > part because they were so antithetical to current thinking about how the > olfactory system worked in particular and learning worked in general that we > decided it wasn't worth the effort. A previous paper showing that olfactory > receptive fields changed in the olfactory bulb, using one of the first ever > awake behaving multi-single unit recording procedures, took 5 years to get > published. > > However, the metabolic hypothesis (as we have called it) was recently a > subject in a meeting in Germany for which I append at the end of this > comment a brief description of the idea. > > You might find it interesting that these two lines of evidence pointing in > the same direction were obtained completely independently, and that the > modeling result in particular was completely unexpected. What we set out to > do in what was and I think still is the most detailed biological model of > certainly olfactory cortex ever made, was to duplicate the pattern of > current source densities found during (natural) cortical oscillations. (BTW: > the first version of this model 25 years ago showed that these oscillations > are not 'driven' from anywhere, but in fact are an intrinsic property of the > network itself). It turned out that the only way the experimental data > could be reconstructed was if there were independent subnetworks in the > cortex. 
For the previous 25 years, I had assumed that the olfactory cortex was > some kind of associative learning network (influence of the NN community > actually), based on its apparently highly diffuse and topographically > unorganized set of intrinsic excitatory connections. Turns out, the model > predicted that this apparent diffuseness may be concealing what is actually > a highly organized network structure - but not of the usual topographic > type. I suspect, although we don't know, that these subnets reflect the > structure of the metabolic world. > > So, after 25 years, I was forced by the modeling work to completely change > how I was thinking about how the system worked. > > This is the value and power of this type of modeling: to fundamentally > change what you think about how something works. > > Perhaps it doesn't need to be pointed out that this is also obviously the > kind of result that could support and drive the kind of experimental effort > that the Brain project is intent on undertaking. Except that instead of > blind data collection - the data collection is organized in the context of a > particular hypothesis. My guess is (and we could probably use the model to > test this) that finding these subnetworks with blind data collection would > be much more difficult or perhaps even impossible. > > Anyway, a good list of questions, but as with any list of questions, they > make assumptions about how the system works. > > A question like: "what do we have to assume about the intrinsic connectivity > of olfactory cortex to duplicate the pattern of current source density > distributions following electrical shock of the lateral olfactory tract in a > detailed biological model of the olfactory cortex" makes many fewer > functional assumptions.
But, in this case (and in most cases of this type > of modeling we have done), what falls out is something we didn't know was > there, with, it would seem, significant potential functional significance. > > To return again to Newton - while he clearly was interested in why the moon > remained in a circular orbit around the earth, he had no idea that the > apparent force between them had a regular relationship to the distance, > until he first invented (or stole, depending) the calculus and actually saw > the relationship. > > Had nothing to do with the inspiration of an apple falling in his sister's > orchard. That was a story that he apparently made up subsequently to > impress others with his insight and genius. > > :-) > > Jim > > > > Metabolic hypothesis - summary meeting report > > The Structure of Olfactory Space > > Hannover, Germany, Sept. 2013. > > > > Question: Is the olfactory system a chemical classifier, or a detector of > natural biological chemical processes? > > In the first century BC, the Roman poet and philosopher Lucretius speculated > about olfaction: "Thus simple 'tis to see that whatsoever can touch the > senses pleasingly are made of smooth and rounded elements, whilst those > which seem the bitter and the sharp, are held Entwined by elements more > crook'd". This intuition that the olfactory system generates olfactory > percepts by interpreting the general chemical structure of odorant molecules > continues to underlie much olfactory research. Practically, it is manifest > in the continued reliance on monomolecular odorant stimuli most often > presented as chemical families (alcohols and aldehydes) varying along a > single chemical metric (e.g. carbon chain length).
The results, at > multiple levels of scale from single receptor neurons to networks, typically > show individual elements responding to a large and complex range of > compounds, leading in turn to the suggestion that the olfactory system uses > a distributed combinatorial code to learn to recognize objects. > Perceptually however, compounds with highly different chemical structures > can elicit similar odors, while small changes in chemical structure can > render a highly odorous compound completely odorless. For these and other > reasons, traditional approaches to classifying the perception of odorant > molecules based on their physical structure continue to have minimal > predictive value. > > We believe, as an alternative, it is worth considering whether the olfactory > system may not be a chemical classifier in the traditional sense, but > instead has evolved to detect known chemical patterns reflecting > biologically important signals in nature. In this view, "odor perceptual > space" is predicted to be organized around the chemical structure of the > organic world including, for example, the chemical signature of specific > metabolic pathways (from traditional food sources), chemical patterns > generated by one species to specifically attract other species (allomones, > kairomones, or even compounds given off by fruit to signal ripeness), or > stimuli signaling the interactions of "consortia of organisms" (microbial > digestion of plant or animal tissue). 
> > What we are proposing as the "Metabolomics Hypothesis" makes several > specific predictions: The core prediction is that the olfactory system will > be organized around biologically significant mixtures of molecules, in > effect, seeking evidence for the presence of particular chemical > interactions within the environment; This structure may be apparent as early > as single olfactory receptor proteins which could, for example, bind > odorants that are metabolically related, even if structurally dissimilar; > As a special case, molecules employed as signals between species (allomones, > kairomones, or molecules signaling the ripeness of fruit, for example) might > induce responses in a broad number of receptors; Receptor neuron projections > to the olfactory bulb as well as bulbar projections to the olfactory cortex > may be more ordered than previously assumed, reflecting this structure; This > hypothesis further predicts that metabolic relatedness is more likely to > predict perception and perceptual interactions (cross-adaptation, for > example) than would either simple structural similarity or chemical class. > Finally, and perhaps most importantly, we would predict that this structural > knowledge of the chemical world may be 'built into' the olfactory system at > the outset, providing a non-learned basis for olfactory perception. Such an > existing structure would relegate 'learning' to changes in > aversive/preference (hedonic) scale based on individual experience. While > preliminary evidence exists for each of these predictions (c.f. Chee, 2003; > Vanier, 2001), further experimental work is necessary to test this new > hypothesis. That work will depend, however, on the use of panels or > mixtures of odorants with known behavioral significance. > > On Jan 27, 2014, at 12:31 PM, Tsvi Achler wrote: > > Jim has referred twice now to a list of problems and brain-like phenomena > that models should strive to emulate.
In my mind this gets to the heart of > the matter. However, there was a discussion of one or two points and then it > fizzled. The brain shows many electrophysiological but also behavioral > phenomena. > > I would like to revive that discussion (and include not just neuroscience > phenomena) in a list to show how significant these issues are and the size of > the gap in our knowledge, and to focus more specifically on what is brain-like. > > Let me motivate this even further. The biggest bottleneck to understanding > the brain is understanding how the brain/neurons perform recognition. > Recognition is an essential foundation upon which cognition and intelligence > are based. Without recognition the brain cannot interact with the world. > Thus a better knowledge of recognition will open up the brain for better > understanding. > > Here is my humble list, and I would like to open it to discussions, > opinions, suggestions, and additions. > > 1) Dynamics. Let's be very specific. Oscillations are observed during > recognition (as Jim and others mentioned) and they are not satisfactorily > accounted for. Since single oscillation generators have not been found, I > interpret this to mean the oscillations are likely due to some type of > feedforward-feedback connections functioning during recognition. > > 2) Difficulty with Similarity. When discriminating between similar patterns, > recognition takes longer and is more prone to error. This is not primarily > a spatial search phenomenon because it occurs in all modalities, including > olfaction, which has very poor spatial resolution. Thus it appears to be a > fundamental part of the neural mechanisms of recognition. > > 3) Asymmetry. This is related to signal-to-noise-like phenomena, to which > difficulty with similarity belongs. Asymmetry is a special case of > difficulty with similarity, where a similar pattern with more information > will dominate the one with less. > > 4) Biased competition (priming).
Prior expectation affects recognition time > and accuracy. > > 5) Recall-ability. The same neural recognition network that can perform > recognition likely performs recall. This is suggested by studies where > sensory region activation can be observed when recognition patterns are > imagined, and by the existence of mirror neurons. > > 6) Update-ability. The brain can learn new information (online, outside the > IID assumption) and immediately use it. It does not have to retrain on all > old information (an IID requirement for feed-forward neural networks). > > If we do not seriously consider networks that inherently display these > properties, I believe the neural network community will continue rehashing > ideas and see limited progress. > > My strong yet humble opinions, > > -Tsvi > > > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > From pul8 at psu.edu Tue Jan 28 19:52:54 2014 From: pul8 at psu.edu (Ping Li) Date: Tue, 28 Jan 2014 19:52:54 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <52E6CD8B.7040405@phys.psu.edu> References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1ADA4840-8CE9-4BD5-B17F-ECA5DFB7171E@mail.nih.gov> <7748E2B7-6E44-4CB6-BD97-8D7CB9507CA6@uthscsa.edu> <52E6CD8B.7040405@phys.psu.edu> Message-ID: Hi John, In psychology, it's often the opposite -- when the theory (or model, in this case) and experiment don't agree, it's the theory that's to blame. Hence we have to "simulate the data", "replicate the empirical findings", and "match with empirical evidence" (phrases used in almost all cognitive modeling papers -- just finished another one myself)... That's why Brad pointed out it's so hard to publish modeling papers without corresponding experiments or other empirical data. Best, Ping That's not the whole story.
For modern physics, a common happening is that when theory and experiment disagree, it is the experiment that is wrong, at least if the theory is well established. (Faster-than-light neutrinos are only one example.) > > John Collins From chris.mcnorgan at gmail.com Tue Jan 28 19:55:43 2014 From: chris.mcnorgan at gmail.com (Chris McNorgan) Date: Tue, 28 Jan 2014 19:55:43 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare Message-ID: <52E8518F.4040002@gmail.com> I receive these emails in digest form, so I wasn't on top of the current conversation. Given that neural networks operate over neuronally-inspired processing units, I always found the lack of crosstalk between neuroscience and connectionist modeling literatures surprising. Though not nearly as ambitious as the BRAIN initiative, I've recently published a paper in Brain Connectivity demonstrating the sorts of interesting things that can come out of applying connectionist modeling techniques directly to Big Data from resting state fMRI. I'm not sure what sort of reception it will ultimately get from the neuroimaging community, but it seems topical and perhaps it will be of some interest to readers of this mailing list. http://www.ncbi.nlm.nih.gov/pubmed/24117388 A Connectionist Approach to Mapping the Human Connectome Permits Simulations of Neural Activity Within an Artificial Brain. McNorgan C, Joanisse MF. Abstract: Data-driven models drawn from statistical correlations between brain activity and behavior are used to inform theory-driven models, such as those described by computational models, which provide a mechanistic account of these correlations.
This article introduces a novel multivariate approach for bootstrapping neurologically-plausible computational models that accurately encode cortical effective connectivity from resting state functional neuroimaging data (rs-fMRI). We show that a network modularity algorithm finds comparable resting state networks within connectivity matrices produced by our approach and by the benchmark method. Unlike existing methods, however, ours permits simulation of brain activation that is a direct reflection of this cortical connectivity. Cross-validation of our model suggests that neural activity in some regions may be more consistent between individuals, providing novel insight into brain function. We suggest this method makes an important contribution toward modeling macro-scale human brain activity, and it has the potential to advance our understanding of complex neurological disorders and the development of neural connectivity. Cheers, Chris McNorgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbednar at inf.ed.ac.uk Tue Jan 28 20:02:37 2014 From: jbednar at inf.ed.ac.uk (James A. Bednar) Date: Wed, 29 Jan 2014 01:02:37 GMT Subject: Connectionists: "Abstract" vs "Biologically realistic" modelling In-Reply-To: <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> References: Message-ID: <201401290102.s0T12bCY021704@hebb.inf.ed.ac.uk> An embedded and charset-unspecified text was scrubbed... Name: not available URL: From bower at uthscsa.edu Tue Jan 28 21:13:02 2014 From: bower at uthscsa.edu (james bower) Date: Tue, 28 Jan 2014 20:13:02 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: Message-ID: On Jan 28, 2014, at 6:33 PM, Tsvi Achler wrote: > Jim, > > I am not surprised that olfactory activation changes significantly > between awake and anesthetized animals. actually the results showed changing receptive fields during the awake state - Bhalla, U. and Bower, J.M.
(1997) Multi-day recordings from olfactory bulb neurons in awake freely moving rats: Spatially and temporally organized variability in odorant response properties. J. Computational Neuroscience. 4: 221-256. Jim Bower > There are hypothesized to be > two types of top-down feedback loops in the olfactory bulb which > originate from the brain, e.g. (Meisami 1991). > Also I am not surprised recognition is not continuous with slightly > changing molecules. Molecule folding and configuration (secondary and > tertiary structure) are significant in determining the properties that > molecules, receptors, and receptors with molecules exhibit. > I also suspect slightly changing receptors will also yield radically > different responses. > Thanks for the feedback. I look forward to a continuing vibrant > discussion of network brain-like properties and limits. > Sincerely, > -Tsvi > > Meisami, E. (1991). Chemoreception. Neural and integrative animal > physiology C. L. Prosser. New York, Wiley-Liss: p335-434 ix, 776 p. > > On Mon, Jan 27, 2014 at 2:57 PM, james bower wrote: >> Tsvi, >> >> Nice list and I think a productive approach - the nature of the questions is >> obviously important. >> >> It's another long story, but might I ask that you, and the community, consider >> how the answer to each of these questions might change, if for example, the >> nervous system already 'knows' in advance what it is looking for and >> 'learning' in the usual sense, is not involved. >> >> In our own work, to my great surprise, evidence has emerged from both >> detailed physiological models of the olfactory cortex and also from studies >> of the classification systems that humans use to describe odors, that the >> olfactory system might already know a great deal in advance about the >> metabolic structure of the organic world it is trying to detect and that >> that prior knowledge plays a key role in recognition.
These results, first >> obtained 15 years ago, were never published except in thesis form in large >> part because they were so antithetical to current thinking about how the >> olfactory system worked in particular and learning worked in general that we >> decided it wasn't worth the effort. A previous paper showing that olfactory >> receptive fields changed in the olfactory bulb, using one of the first ever >> awake behaving multi-single unit recording procedures, took 5 years to get >> published. >> >> However, the metabolic hypothesis (as we have called it) was recently a >> subject in a meeting in Germany for which I append at the end of this >> comment a brief description of the idea. >> >> You might find it interesting that these two lines of evidence pointing in >> the same direction were obtained completely independently, and that the >> modeling result in particular was completely unexpected. What we set out to >> do in what was, and I think still is, certainly the most detailed biological model of >> olfactory cortex ever made, was to duplicate the pattern of >> current source densities found during (natural) cortical oscillations. (BTW: >> the first version of this model 25 years ago showed that these oscillations >> are not 'driven' from anywhere, but in fact are an intrinsic property of the >> network itself). It turned out that the only way the experimental data >> could be reconstructed was if there were independent subnetworks in the >> cortex. For 25 previous years, I had assumed that the olfactory cortex was >> some kind of associative learning network (influence of the NN community >> actually), based on its apparently highly diffuse and topographically >> unorganized set of intrinsic excitatory connections. Turns out, the model >> predicted that this apparent diffuseness may be concealing what is actually >> a highly organized network structure - but not of the usual topographic >> type.
I suspect, although we don't know, that these subnets reflect the >> structure of the metabolic world. >> >> So, after 25 years, I was forced by the modeling work to completely change >> how I was thinking about how the system worked. >> >> This is the value and power of this type of modeling, to fundamentally >> change what you think about how something works. >> >> Perhaps it doesn't need to be pointed out, that this is also obviously the >> kind of result that could support and drive the kind of experimental effort >> that the Brain project is intent on undertaking. Except that instead of >> blind data collection - the data collection is organized in the context of a >> particular hypothesis. My guess is (and we could probably use the model to >> test this) that finding these subnetworks with blind data collection would >> be much more difficult or perhaps even impossible. >> >> Anyway, a good list of questions, but as with any list of questions, they >> make assumptions about how the system works. >> >> A question like: "what do we have to assume about the intrinsic connectivity >> of olfactory cortex to duplicate the pattern of current source density >> distributions following electrical shock of the lateral olfactory tract in a >> detailed biological model of the olfactory cortex" makes many fewer >> functional assumptions. But, in this case (and in most cases of this type >> of modeling we have done), what falls out is something we didn't know was >> there, with, it would seem, significant potential functional implications >> >> to return again to Newton - while he clearly was interested in why the moon >> remained in a circular orbit around the earth, he had no idea that the >> apparent force between them had a regular relationship to the distance, >> until he first invented (or stole, depending) the calculus and actually saw >> the relationship. >> >> Had nothing to do with the inspiration of an apple falling in his sister's >> orchard.
That was a story that he apparently made up subsequently to >> impress others with his insight and genius. >> >> :-) >> >> Jim >> >> >> >> Metabolic hypothesis - Summary meeting report >> >> The Structure of Olfactory Space >> >> Hannover, Germany, Sept. 2013. >> >> >> >> Question: Is the olfactory system a chemical classifier, or a detector of >> natural biological chemical processes? >> >> In the first century BC, the Roman poet and philosopher Lucretius speculated >> about olfaction: "Thus simple 'tis to see that whatsoever can touch the >> senses pleasingly are made of smooth and rounded elements, whilst those >> which seem the bitter and the sharp, are held Entwined by elements more >> crook'd". This intuition that the olfactory system generates olfactory >> percepts by interpreting the general chemical structure of odorant molecules >> continues to underlie much olfactory research. Practically, it is manifest >> in the continued reliance on monomolecular odorant stimuli most often >> presented as chemical families (alcohols and aldehydes) varying along a >> single chemical metric (e.g. carbon chain length). The results, at >> multiple levels of scale from single receptor neurons to networks, typically >> show individual elements responding to a large and complex range of >> compounds, leading in turn to the suggestion that the olfactory system uses >> a distributed combinatorial code to learn to recognize objects. >> Perceptually, however, compounds with highly different chemical structures >> can elicit similar odors, while small changes in chemical structure can >> render a highly odorous compound completely odorless. For these and other >> reasons, traditional approaches to classifying the perception of odorant >> molecules based on their physical structure continue to have minimal >> predictive value.
>> >> We believe, as an alternative, it is worth considering whether the olfactory >> system may not be a chemical classifier in the traditional sense, but >> instead has evolved to detect known chemical patterns reflecting >> biologically important signals in nature. In this view, "odor perceptual >> space" is predicted to be organized around the chemical structure of the >> organic world including, for example, the chemical signature of specific >> metabolic pathways (from traditional food sources), chemical patterns >> generated by one species to specifically attract other species (allomones, >> kairomones, or even compounds given off by fruit to signal ripeness), or >> stimuli signaling the interactions of "consortia of organisms" (microbial >> digestion of plant or animal tissue). >> >> What we are proposing as the "Metabolomics Hypothesis" makes several >> specific predictions: The core prediction is that the olfactory system will >> be organized around biologically significant mixtures of molecules, in >> effect, seeking evidence for the presence of particular chemical >> interactions within the environment; This structure may be apparent as early >> as single olfactory receptor proteins which could, for example, bind >> odorants that are metabolically related, even if structurally dissimilar; >> As a special case, molecules employed as signals between species (allomones, >> kairomones, or molecules signaling the ripeness of fruit, for example) might >> induce responses in a broad number of receptors; Receptor neuron projections >> to the olfactory bulb as well as bulbar projections to the olfactory cortex >> may be more ordered than previously assumed, reflecting this structure; This >> hypothesis further predicts that metabolic relatedness is more likely to >> predict perception and perceptual interactions (cross adaptation for >> example) than would either simple structural similarity, or chemical class.
>> Finally, and perhaps most importantly, we would predict that this structural >> knowledge of the chemical world may be 'built into' the olfactory system at >> the outset, providing a non-learned basis for olfactory perception. Such an >> existing structure would relegate 'learning' to changes in >> aversive/preference (hedonic) scale based on individual experience. While >> preliminary evidence exists for each of these predictions (cf. Chee, 2003; >> Vanier, 2001), further experimental work is necessary to test this new >> hypothesis. That work will depend, however, on the use of panels or >> mixtures of odorants with known behavioral significance. >> >> On Jan 27, 2014, at 12:31 PM, Tsvi Achler wrote: >> >> Jim has referred twice now to a list of problems and brain-like phenomena >> that models should strive to emulate. In my mind this gets to the heart of >> the matter. However, there was a discussion of one or two points and then it >> fizzled. The brain shows many electrophysiological but also behavioral >> phenomena. >> >> I would like to revive that discussion (and include not just neuroscience >> phenomena) in a list to show how significant these issues are and the size of >> the gap in our knowledge, and to focus more specifically on what is brain-like. >> >> Let me motivate this even further. The biggest bottleneck to understanding >> the brain is understanding how the brain/neurons perform recognition. >> Recognition is an essential foundation upon which cognition and intelligence >> are based. Without recognition the brain cannot interact with the world. >> Thus a better knowledge of recognition will open up the brain for better >> understanding. >> >> Here is my humble list, and I would like to open it to discussions, >> opinions, suggestions, and additions. >> >> 1) Dynamics. Let's be very specific.
>> Oscillations are observed during >> recognition (as Jim and others mentioned) and they are not satisfactorily >> accounted for. Since single oscillation generators have not been found, I >> interpret this to mean that the oscillations are likely due to some type of >> feedforward-feedback connections functioning during recognition. >> >> 2) Difficulty with Similarity. Discriminating between similar patterns >> takes longer and is more prone to error. This is not primarily >> a spatial search phenomenon because it occurs in all modalities, including >> olfaction, which has very poor spatial resolution. Thus it appears to be a >> fundamental part of the neural mechanisms of recognition. >> >> 3) Asymmetry. This is related to signal-to-noise like phenomena to which >> difficulty with similarity belongs. Asymmetry is a special case of >> difficulty with similarity, where a similar pattern with more information >> will predominate over the one with less. >> >> 4) Biased competition (priming). Prior expectation affects recognition time >> and accuracy. >> >> 5) Recall-ability. The same neural recognition network that can perform >> recognition likely performs recall. This is suggested by studies where >> sensory region activation can be observed when recognition patterns are >> imagined, and by the existence of mirror neurons. >> >> 6) Update-ability. The brain can learn new information (online outside the >> IID assumption) and immediately use it. It does not have to retrain on all >> old information (IID requirement for feed-forward neural networks). >> >> If we do not seriously consider networks that inherently display these >> properties, I believe the neural network community will continue rehashing >> ideas and see limited progress. >> >> My strong yet humble opinions, >> >> -Tsvi >> >> >> >> Dr. James M. Bower Ph.D. >> >> Professor of Computational Neurobiology >> >> Barshop Institute for Longevity and Aging Studies.
>> >> 15355 Lambda Drive >> >> University of Texas Health Science Center >> >> San Antonio, Texas 78245 >> >> Phone: 210 382 0553 >> >> Email: bower at uthscsa.edu >> >> Web: http://www.bower-lab.org >> >> twitter: superid101 >> >> linkedin: Jim Bower >> >> CONFIDENTIAL NOTICE: >> >> The contents of this email and any attachments to it may be privileged or >> contain privileged and confidential information. This information is only >> for the viewing or use of the intended recipient. If you have received this >> e-mail in error or are not the intended recipient, you are hereby notified >> that any disclosure, copying, distribution or use of, or the taking of any >> action in reliance upon, any of the information contained in this e-mail, or >> any of the attachments to this e-mail, is strictly prohibited and that this >> e-mail and all of the attachments to this e-mail, if any, must be >> immediately returned to the sender or destroyed and, in either case, this >> e-mail and all attachments to this e-mail must be immediately deleted from >> your computer without making any copies hereof and any and all hard copies >> made must be destroyed. If you have received this e-mail in error, please >> notify the sender by e-mail immediately. >> >> Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Tue Jan 28 22:07:28 2014 From: bower at uthscsa.edu (james bower) Date: Tue, 28 Jan 2014 21:07:28 -0600 Subject: Connectionists: "Abstract" vs "Biologically realistic" modelling In-Reply-To: <201401290102.s0T12bCY021704@hebb.inf.ed.ac.uk> References: <201401290102.s0T12bCY021704@hebb.inf.ed.ac.uk> Message-ID: On Jan 28, 2014, at 7:02 PM, James A. Bednar wrote: > | Date: 2014-01-25 12:09 AM > | From: james bower > | > | Having spent almost 30 years building realistic models of its cells > | and networks (and also doing experiments, as described in the > | article I linked to) we have made some small progress - but only by > | avoiding abstractions and paying attention to the details. > > Jim, > > I think it's disingenuous to claim that *any* model can avoid > abstractions, First, you might want to read the paper I linked to earlier, so that you understand what I am saying.
here it is again: https://www.dropbox.com/s/r046g03w8ev5kkm/272602_1_En_5%20copy.pdf With respect to that article, here is a list of properties of Purkinje cells that, if you don't have them, means you aren't modeling a Purkinje cell, based on 40 years of modeling and experimental studies, including but not only our own: - no dendrite - a dendrite when present that simply does a voltage sum - or in other words a dendrite with no active conductive properties - a soma that has a simple fixed defined threshold to fire - or in other words, a soma with no active conductive properties - a dendrite that isn't morphologically based on an actual reconstructed Purkinje cell dendrite. - a soma that doesn't generate action potentials independently on its own - a dendrite whose principal effect is to stop the soma from firing As far as its response in the context of a realistic network model: - a dendrite that only receives excitatory input, without inhibitory input - input from parallel fibers directly driving somatic spiking These are equivalent, or should be, to making a model of the interaction of subatomic particles and ignoring conservation laws. You couldn't get away with it in physics - happens all the time in modeling Purkinje cells and the cerebellum (even last week). Neuroscience should be accumulating these biological equivalents to the laws of physics - but it isn't. (would even be useful to the Neural Network, connectionists, machine learning community, who mostly have a pre-Hodgkin-Huxley view of neurons). But there is another, and in my view more important, definition of 'realistic' models, which again has to do with process. The vast majority of neurobiological models are designed to demonstrate that an idea somebody had about how the system worked is plausible. No problem with that, whatsoever, in engineering (it's what engineers do) - but a significant problem in 'reverse engineering' the brain.
Realistic models, in my nomenclature (and I was one of the first people to use the term actually) aren't defined explicitly by how adorned they are or not with biological stuff. They are models whose construction and parameter tuning is primarily and fundamentally aimed at replicating basic biological data. Not synthesized biological data (i.e. ocular dominance columns or orientation selectivity), but basic recorded responses that don't have known functional implications. Better yet, biological data recorded under completely artificial circumstances and conditions which nevertheless reveal complex behavior that isn't understood. In the case of realistic single cell models, that data is often from voltage clamp experiments. Clamping the voltage of a neuron at a fixed level by injecting current is a highly artificial thing to do - yet, in many cells and in particular the Purkinje cell, it reveals a complex pattern of activity reflecting in a complex way the biophysical structure of that neuron. Once model parameters have been tuned to replicate voltage clamp data - then, one freezes these parameters and applies synaptic input to see if one can replicate the basic response properties of the cell (e.g. its variable rate of action potential generation). In our experience at that point we have always started to find all sorts of things that you didn't know were there. In the case of the Purkinje cell, for example, we found out that it didn't matter where on its huge dendrite you applied a synaptic input, that input had the same effect on the soma. If you want to know why that is interesting (and what happened next) take the time to read the paper.
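[Editor's note: the two-stage workflow described above - fit parameters to voltage-clamp data, freeze them, then drive the model with synaptic input - can be illustrated with a deliberately minimal sketch. The cell below is a single passive compartment, nothing remotely like a real Purkinje cell, and every parameter name and value is hypothetical.]

```python
# Toy two-stage workflow: (1) fit model parameters to voltage-clamp data,
# (2) freeze them and drive the model with input current.
# Single passive compartment only; all values are hypothetical.

def clamp_current(v, g_leak, e_leak):
    """Steady-state holding current needed to clamp a passive cell at v."""
    return g_leak * (v - e_leak)

# --- Stage 1: "voltage clamp" data (synthesized here from known params) ---
true_g, true_e = 0.05, -65.0                      # uS, mV (hypothetical)
holds = [-80.0, -70.0, -60.0, -50.0, -40.0]       # holding potentials, mV
data = [clamp_current(v, true_g, true_e) for v in holds]

# Fit g_leak and e_leak by ordinary least squares on I = g*V - g*E.
n = len(holds)
sx, sy = sum(holds), sum(data)
sxx = sum(v * v for v in holds)
sxy = sum(v * i for v, i in zip(holds, data))
g_fit = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope -> g_leak
e_fit = -((sy - g_fit * sx) / n) / g_fit           # -intercept/g -> e_leak

# --- Stage 2: freeze parameters, apply input (current clamp) ---
def run(i_syn, g, e, c=1.0, dt=0.1, steps=2000):
    """Forward-Euler integration of C dV/dt = -g*(V - e) + I_syn."""
    v = e
    for _ in range(steps):
        v += dt * (-g * (v - e) + i_syn) / c
    return v

v_rest = run(0.0, g_fit, e_fit)   # no input: cell sits at rest (~e_fit)
v_drive = run(0.5, g_fit, e_fit)  # steady input depolarizes by ~i_syn/g
```

With a real conductance-based model the structure is the same, only the fit is nonlinear: stage 1 tunes channel kinetics against clamp currents, and stage 2 runs with those parameters frozen, so any match to the cell's firing is a test rather than a fit.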
So, the point is this - again, the real litmus test for realistic modeling is whether the model was tuned and designed to produce a particular functional result - and then adorned with biology, or if replicating the basic biology independent of function was the first step and remains the reference step for the modeling work. Again to return to Newton - he apparently built a 'realistic' (by my definition) model of the moon orbiting the earth. He applied mathematical analysis to figure out the size of the force holding it in its orbit. He then realized that that force appeared to be the inverse square of the distance. Actually, the force he calculated the first time wasn't - it was less - and therefore, Newton at age 19, apparently thought that there was some other force (Kepler's vortex force) in the mix as well. It wasn't until many years later, after turning most of his attention to alchemy, when informed that another scientist was about to report the inverse square relationship that he became interested again - that interest turned into the work that ended up with his treatise on mechanics. So, the bottom line - if you want to understand the nervous system and you have an idea about how the cerebellum is involved in learning - and you build a model that implements your scheme - then, you are engaged in a Ptolemaic effort, not a realistic one. If most of the 'predictions' of your model are actually 'postdictions' of well known phenomena that you are using to convince people your functional idea is right, then again, you are in the domain of Ptolemy. If you are building a model to solve the traveling salesman problem, or to perform better voice recognition on a chip - good for you - that's how engineering works.
No problem with that at all - and actually, if you picked something up from a neuroscience lecture that gave you a new idea about how to make your neural networks chip - absolutely no problem whatsoever with that either - as we all know, biology has served as an important source of creativity in engineering historically. However, if you want to claim that your model also reveals something important about how brains work, then the model must either be 'realistic' first, or be able to link to such a model. It goes without saying that these types of realistic models can be built at many levels, as long as the model has biological components. (you won't convince me with mean field theories of cerebral cortex). It also goes without saying, of course, that we don't have the technology or the knowledge for that matter to build one model of everything - although personally, I believe eventually we will have to, and reflecting that view, Version 3.0 of GENESIS was specifically built to link broadly across many different levels of scale. The critical question, therefore, is whether the model is built in such a way that the biology can tell you something you didn't know before you started (just like the earth moon model told Newton) - or, is the biology just dressing up something you already believed to be true and just wanted to convince the rest of us. Building the model out of realistic components, and then testing it on theory-neutral biological data, is more likely to lead to the former. At least it has over and over again for us. Jim > and in particular that your type of "realistic" > multicompartmental single-cell and network modelling could ever do so. > > *Real* morphologically complex cells are embedded in complex networks, > which are embedded in complex organisms, which are embedded in complex > environments, which are embedded in complex ecosystems. Evolution > acts on the net result of *all* of this, indirectly via a process of > development.
Certain species thrive in certain ecosystems if their > proteins, cells, networks, nervous systems, bodies, and communities > allow them to function in that environment well enough to reproduce. > The details of *all* of these things matter. > > Are all of these details represented realistically in your models? > No, and they shouldn't be -- you pose questions that can be addressed > by the things you do include, abstract away the rest, and all is well > and good. But other different yet no less realistic models are built > to address different questions, paying attention to different sets of > details (such as large-scale development and plasticity, for my own > models), and again abstract away the rest. > > I am happy to join with you to decry truly unrealistic models, which > would be those that respect none of the details at any level. Down > with unrealistic models! But there is no meaningful sense in which > any model can be claimed to avoid abstraction, and no level that > exclusively owns biological realism. > > Jim Bednar > > ________________________________________________ > > Dr. James A. Bednar > Director, Doctoral Training Centre in > Neuroinformatics and Computational Neuroscience > University of Edinburgh School of Informatics > 10 Crichton Street, Edinburgh, EH8 9AB UK > http://anc.ed.ac.uk/dtc > http://homepages.inf.ed.ac.uk/jbednar > ________________________________________________ > > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 
15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower -------------- next part -------------- An HTML attachment was scrubbed... URL: From bwyble at gmail.com Tue Jan 28 22:53:06 2014 From: bwyble at gmail.com (Brad Wyble) Date: Tue, 28 Jan 2014 22:53:06 -0500 Subject: Connectionists: "Abstract" vs "Biologically realistic" modelling In-Reply-To: References: <201401290102.s0T12bCY021704@hebb.inf.ed.ac.uk> Message-ID: I'd like to throw in a few cents. > However, if you want to claim that your model also reveals something > important about how brains work, then the model must either be 'realistic' > first, or be able to link to such a model. > > I think that whether this is true depends on your definition of "how brains work".
For my purposes, the litmus test of a model that reflects "how brains work" in some fundamental sense is reflected in their ability to generate novel predictions that are correct. Your warnings about postdiction are right on target, since the predictions must be truly de novo at the time they are created to ensure that you are not engaging in curve fitting. Therefore to me it seems as if we are using very different goals in the modelling process. I want a model to reflect the behavior of the system, and I use neural models because the added constraint of neural plausibility enormously accelerates my search through the space of possible models by preventing me from following innumerable dead ends (though there are still quite a lot of neurally plausible dead ends). Jim, you seem to want a model that reflects the wiring of the brain first and foremost, and the functionality comes in a close second. I think it's fine for both of those goals to exist and I don't think it's necessary to label one of them as useless, or even less efficient. -Brad PS. Thanks for a very interesting debate! > It goes without saying that these types of realistic models can be built > at many levels, as long as the model has biological components. (you won't > convince me with mean field theories of cerebral cortex). It also goes > without saying, of course, that we don't have the technology or the > knowledge for that matter to build one model of everything - although > personally, I believe eventually we will have to, and reflecting that view, > Version 3.0 of GENESIS was specifically built to link broadly across many > different levels of scale. > > The critical question therefore, is whether the model is built in such a > way that the biology can tell you something you didn't know before you > started (just like the earth moon model told Newton) - or, is the biology > just dressing up something you already believed to be true and just wanted > to convince the rest of us. 
Building the model out of realistic > components, and then testing it on theory- neutral biological data, is more > likely to lead to the former. At least it has over and over again for us. > > > Jim > > > > and in particular that your type of "realistic" > multicompartmental single-cell and network modelling could ever do so. > > *Real* morphologically complex cells are embedded in complex networks, > which are embedded in complex organisms, which are embedded in complex > environments, which are embedded in complex ecosystems. Evolution > acts on the net result of *all* of this, indirectly via a process of > development. Certain species thrive in certain ecosystems if their > proteins, cells, networks, nervous systems, bodies, and communities > allow them to function in that environment well enough to reproduce. > The details of *all* of these things matter. > > Are all of these details represented realistically in your models? > No, and they shouldn't be -- you pose questions that can be addressed > by the things you do include, abstract away the rest, and all is well > and good. But other different yet no less realistic models are built > to address different questions, paying attention to different sets of > details (such as large-scale development and plasticity, for my own > models), and again abstract away the rest. > > I am happy to join with you to decry truly unrealistic models, which > would be those that respect none of the details at any level. Down > with unrealistic models! But there is no meaningful sense in which > any model can be claimed to avoid abstraction, and no level that > exclusively owns biological realism. > > Jim Bednar > > ________________________________________________ > > Dr. James A. 
Bednar > Director, Doctoral Training Centre in > Neuroinformatics and Computational Neuroscience > University of Edinburgh School of Informatics > 10 Crichton Street, Edinburgh, EH8 9AB UK > http://anc.ed.ac.uk/dtc > http://homepages.inf.ed.ac.uk/jbednar > ________________________________________________ > > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > > *Phone: 210 382 0553 <210%20382%200553>* > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > > CONFIDENTIAL NOTICE: > > The contents of this email and any attachments to it may be privileged > or contain privileged and confidential information. This information is > only for the viewing or use of the intended recipient. If you have received > this e-mail in error or are not the intended recipient, you are hereby > notified that any disclosure, copying, distribution or use of, or the > taking of any action in reliance upon, any of the information contained in > this e-mail, or > > any of the attachments to this e-mail, is strictly prohibited and that > this e-mail and all of the attachments to this e-mail, if any, must be > > immediately returned to the sender or destroyed and, in either case, > this e-mail and all attachments to this e-mail must be immediately deleted > from your computer without making any copies hereof and any and all hard > copies made must be destroyed. If you have received this e-mail in error, > please notify the sender by e-mail immediately. 
> > > > -- Brad Wyble Assistant Professor Psychology Department Penn State University http://wyblelab.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhb0 at lehigh.edu Tue Jan 28 23:00:19 2014 From: mhb0 at lehigh.edu (Mark H. Bickhard) Date: Tue, 28 Jan 2014 23:00:19 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1ADA4840-8CE9-4BD5-B17F-ECA5DFB7171E@mail.nih.gov> <7748E2B7-6E44-4CB6-BD97-8D7CB9507CA6@uthscsa.edu> <52E6CD8B.7040405@phys.psu.edu> Message-ID: I would like to offer a couple of observations and comments. I am in strong agreement with the emphasis on and necessity for theories, in interaction with empirical data and methodology, though I do find it useful in general to differentiate between theories and models -- both are useful and needed, but not identical. This is not a universally accepted distinction in the philosophy of science, but one way to think of the difference I have in mind is to consider theories as resources and frameworks for the construction of models. For example, does the brain function in terms of the processing of semantic information, or via endogenously oscillatory processes (at multiple spatial and temporal scales, within complex topologies) that engage in modulations amongst themselves? The former assumption is almost universal today, but there are serious problems with it. I advocate the latter (paper available, for those who might be interested: http://www.lehigh.edu/~mhb0/ModelCNSFunctioning19Jan14.pdf). This raises some related issues. E.g., is there a possibility that the field is stuck in a framework with a false presupposition? If so, we might be in a position similar to that of attempting to determine (model?)
the mass of fire (phlogiston). That example is from a while ago (and the assumption that fire is a substance goes back millennia, at least in the Western tradition), but more recently we find people devoting careers to figuring out the nature and properties of "associations" or of how two-layer Perceptrons could engage in full human-level pattern recognition (and, later, overlooking for a decade or so that the in-principle proof of impossibility did not apply to multiple layer connectionist models). False presuppositions are rather common in history. I argue that the framework of semantic information processing is such a false presupposition. Even without the problems of false presuppositions, there are also the problems of missing conceptual frameworks. You could not "reverse engineer" a table without notions of atoms, molecules, van der Waals forces, and so on -- it is not a problem solvable via reverse engineering alone without regard for the conceptual resources available. Presumably, a similar point holds for understanding how the brain works, only more so, and it seems likely that we are in such a position of conceptual impoverishment, if not (also) in a position of working within a false framework of presuppositions. I have argued that we are in fact working with false presuppositions and with an impoverishment of conceptual resources, and have made some attempts to contribute to resolving those problems. The history of science provides strong "inductive" support that all science has been and still is caught in such problems and that major advances require changes at those levels. Including science(s) of the brain. But my arguments have tended to be more "in-principle" (and, thus, have driven me into theory, and even [horrors!] philosophy).
This speaks to the points made in the discussion about theory as a differentiated subdomain within physics, and that it perhaps should also be recognized as such in studying the brain (and in psychology and cognitive science, etc. more generally). I fully agree. Recall, however, that the first Nobel for purely theoretical work in physics was for Dirac in 1933 (they would not and did not award the Nobel to Einstein for his purely theoretical work). So, physics is a positive model in this regard, but it too suffered from a serious hesitancy to make and honor the distinction. Mark Mark H. Bickhard Lehigh University 17 Memorial Drive East Bethlehem, PA 18015 mark at bickhard.name http://bickhard.ws/ On Jan 28, 2014, at 7:52 PM, Ping Li wrote: Hi John, In psychology, it's often the opposite -- when the theory (or model, in this case) and experiment don't agree, it's the theory that's to blame. Hence we have to "simulate the data", "replicate the empirical findings", and "match with empirical evidence" (phrases used in almost all cognitive modeling papers -- just finished another one myself)... That's why Brad pointed out it's so hard to publish modeling papers without corresponding experiments or other empirical data. Best, Ping That's not the whole story. For modern physics, a common happening is that when theory and experiment disagree, it is the experiment that is wrong, at least if the theory is well established. (Faster-than-light neutrinos are only one example.) John Collins -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mo2259 at columbia.edu Tue Jan 28 23:26:57 2014 From: mo2259 at columbia.edu (Mark Orr) Date: Tue, 28 Jan 2014 23:26:57 -0500 Subject: Connectionists: "Abstract" vs "Biologically realistic" modelling In-Reply-To: References: <201401290102.s0T12bCY021704@hebb.inf.ed.ac.uk> Message-ID: <2159EEDF-0D46-4FA7-89A0-1246C03C020A@columbia.edu> Jim, With all the talk of physics, let's not forget Feynman's eloquence in explaining an old and unsolved problem in physics: turbulence, where the understanding of the parts does not add up to much of an understanding of the whole. "How vivid is the claret, pressing its existence into the consciousness that watches it! If our small minds, for some convenience, divide this glass of wine, this universe, into parts--physics, biology, geology, astronomy, psychology, and so on--remember that nature does not know it! So let us put it all back together, not forgetting ultimately what it is for. Let it give us one more final pleasure: drink it and forget it all!" -Richard Feynman, from Six Easy Pieces On Jan 28, 2014, at 10:07 PM, james bower wrote: > > On Jan 28, 2014, at 7:02 PM, James A. Bednar wrote: > >> | Date: 2014-01-25 12:09 AM >> | From: james bower >> | >> | Having spent almost 30 years building realistic models of its cells >> | and networks (and also doing experiments, as described in the >> | article I linked to) we have made some small progress - but only by >> | avoiding abstractions and paying attention to the details. >> >> Jim, >> >> I think it's disingenuous to claim that *any* model can avoid >> abstractions, > > First, you might want to read the paper I linked to earlier, so that you understand what I am saying.
> > here it is again: https://www.dropbox.com/s/r046g03w8ev5kkm/272602_1_En_5%20copy.pdf > > With respect to that article, here is a list of model features, any one of which means you aren't modeling a Purkinje cell, based on 40 years of modeling and experimental studies not only including our own:
> - no dendrite
> - a dendrite when present that simply does a voltage sum
> - or in other words a dendrite with no active conductive properties
> - a soma that has a simple fixed defined threshold to fire
> - or in other words, a soma with no active conductive properties
> - a dendrite that isn't morphologically based on an actual reconstructed Purkinje cell dendrite
> - a soma that doesn't generate action potentials independently on its own
> - a dendrite whose principal effect is to stop the soma from firing
> > As far as its response in the context of a realistic network model:
> - a dendrite that only receives excitatory input, without inhibitory input
> - input from parallel fibers directly driving somatic spiking
> > These are equivalent, or should be, to making a model of the interaction of subatomic particles and ignoring conservation laws. > > You couldn't get away with it in physics - it happens all the time in modeling Purkinje cells and the cerebellum (even last week). > > Neuroscience should be accumulating these biological equivalents to the laws of physics - but it isn't. (This would even be useful to the neural network, connectionist, and machine learning communities, who mostly have a pre-Hodgkin-Huxley view of neurons.) > > But there is another, and more important, definition, in my view, of "realistic" models, which again has to do with process. > > The vast majority of neurobiological models are designed to demonstrate that an idea somebody had about how the system worked is plausible. No problem with that whatsoever in engineering (it's what engineers do) - but a significant problem in "reverse engineering" the brain.
> > Realistic models, in my nomenclature (and I was one of the first people to use the term actually) aren't defined explicitly by how adorned they are or not with biological stuff. They are models whose construction and parameter tuning is primarily and fundamentally aimed at replicating basic biological data. Not synthesized biological data (i.e., ocular dominance columns or orientation selectivity), but basic recorded responses that don't have known functional implications. Better yet, biological data recorded under completely artificial circumstances and conditions which nevertheless reveal complex behavior that isn't understood. In the case of realistic single cell models, that data is often from voltage clamp experiments. Clamping the voltage of a neuron at a fixed level by injecting current is a highly artificial thing to do - yet, in many cells and in particular the Purkinje cell, it reveals a complex pattern of activity reflecting in a complex way the biophysical structure of that neuron. Once model parameters have been tuned to replicate voltage clamp data - then, one freezes these parameters and applies synaptic input to see if one can replicate the basic response properties of the cell (e.g. its variable rate of action potential generation). In our experience at that point we have always started to find all sorts of things that you didn't know were there. In the case of the Purkinje cell, for example, we found out that it didn't matter where on its huge dendrite you applied a synaptic input: that input had the same effect on the soma. If you want to know why that is interesting (and what happened next) take the time to read the paper.
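[Editor's note] The two-stage discipline described here -- first tune parameters against voltage-clamp recordings, then freeze them and see whether the model reproduces the cell's response to input -- can be caricatured in a few lines. Below is a minimal sketch in Python, assuming a toy single-compartment cell with one passive leak conductance; the function names and the synthetic "clamp" data are invented for illustration, and a real GENESIS-style Purkinje model would instead fit many active conductances over a reconstructed morphology:

```python
# Stage 1 fits a parameter to (artificial) voltage-clamp data;
# Stage 2 freezes it and simulates the response to injected current.

def clamp_current(g_leak, v_clamp, e_leak=-65.0):
    """Steady-state current needed to hold a passive compartment at v_clamp."""
    return g_leak * (v_clamp - e_leak)

def fit_g_leak(clamp_data, e_leak=-65.0):
    """Least-squares estimate of the leak conductance from (v, i) clamp pairs."""
    num = sum((v - e_leak) * i for v, i in clamp_data)
    den = sum((v - e_leak) ** 2 for v, _ in clamp_data)
    return num / den

def simulate(g_leak, i_inj, v0=-65.0, e_leak=-65.0, c_m=1.0, dt=0.1, steps=1000):
    """Forward-Euler membrane response using the *frozen* fitted parameter."""
    v = v0
    for _ in range(steps):
        v += dt * (-g_leak * (v - e_leak) + i_inj) / c_m
    return v

# Stage 1: synthetic clamp measurements generated with a "true" g_leak of 0.3
data = [(v, clamp_current(0.3, v)) for v in (-80.0, -70.0, -50.0, -30.0)]
g = fit_g_leak(data)

# Stage 2: parameters frozen; now drive the model with injected current.
# The membrane should settle near e_leak + i_inj / g_leak = -55 mV.
v_steady = simulate(g, i_inj=3.0)
```

The essential point is the freeze between the two stages: nothing inside `simulate` is re-tuned to make the stage-2 response come out right, which is what lets the second stage act as a test rather than a fit.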
> > So, the point is this - again, the real litmus test for realistic modeling is whether the model was tuned and designed to produce a particular functional result - and then adorned with biology, or if replicating the basic biology independent of function was the first step and remains the reference step for the modeling work. > > Again to return to Newton - he apparently built a "realistic" (by my definition) model of the moon orbiting the earth. He applied mathematical analysis to figure out the size of the force holding it in its orbit. He then realized that that force appeared to be the inverse square of the distance. Actually, the force he calculated the first time wasn't - it was less - and therefore Newton, at age 19, apparently thought that there was some other force (Kepler's vortex force) in the mix as well. It wasn't until many years later, after turning most of his attention to alchemy, when informed that another scientist was about to report the inverse square relationship that he became interested again - that interest turned into the work that ended up with his treatise on mechanics. > > So, the bottom line - if you want to understand the nervous system and you have an idea about how the cerebellum is involved in learning - and you build a model that implements your scheme - then, you are engaged in a Ptolemaic effort, not a realistic one. If most of the "predictions" of your model are actually "postdictions" of well-known phenomena that you are using to convince people your functional idea is right, then again, you are in the domain of Ptolemy. If you are building a model to solve the traveling salesman problem, or to perform better voice recognition on a chip - good for you - that's how engineering works.
No problem with that at all - and actually, if you picked something up from a neuroscience lecture that gave you a new idea about how to make your neural network chip - absolutely no problem whatsoever with that either - as we all know, biology has served as an important source of creativity in engineering historically. However, if you want to claim that your model also reveals something important about how brains work, then the model must either be "realistic" first, or be able to link to such a model. > > It goes without saying that these types of realistic models can be built at many levels, as long as the model has biological components. (you won't convince me with mean field theories of cerebral cortex). It also goes without saying, of course, that we don't have the technology or the knowledge for that matter to build one model of everything - although personally, I believe eventually we will have to, and reflecting that view, Version 3.0 of GENESIS was specifically built to link broadly across many different levels of scale. > > The critical question therefore, is whether the model is built in such a way that the biology can tell you something you didn't know before you started (just like the earth moon model told Newton) - or, is the biology just dressing up something you already believed to be true and just wanted to convince the rest of us. Building the model out of realistic components, and then testing it on theory-neutral biological data, is more likely to lead to the former. At least it has over and over again for us. > > > Jim > > > >> and in particular that your type of "realistic" >> multicompartmental single-cell and network modelling could ever do so. >> >> *Real* morphologically complex cells are embedded in complex networks, >> which are embedded in complex organisms, which are embedded in complex >> environments, which are embedded in complex ecosystems.
Evolution >> acts on the net result of *all* of this, indirectly via a process of >> development. Certain species thrive in certain ecosystems if their >> proteins, cells, networks, nervous systems, bodies, and communities >> allow them to function in that environment well enough to reproduce. >> The details of *all* of these things matter. >> >> Are all of these details represented realistically in your models? >> No, and they shouldn't be -- you pose questions that can be addressed >> by the things you do include, abstract away the rest, and all is well >> and good. But other different yet no less realistic models are built >> to address different questions, paying attention to different sets of >> details (such as large-scale development and plasticity, for my own >> models), and again abstract away the rest. >> >> I am happy to join with you to decry truly unrealistic models, which >> would be those that respect none of the details at any level. Down >> with unrealistic models! But there is no meaningful sense in which >> any model can be claimed to avoid abstraction, and no level that >> exclusively owns biological realism. >> >> Jim Bednar >> >> ________________________________________________ >> >> Dr. James A. Bednar >> Director, Doctoral Training Centre in >> Neuroinformatics and Computational Neuroscience >> University of Edinburgh School of Informatics >> 10 Crichton Street, Edinburgh, EH8 9AB UK >> http://anc.ed.ac.uk/dtc >> http://homepages.inf.ed.ac.uk/jbednar >> ________________________________________________ >> >> -- >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. >> > > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. 
> > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From neuroanalysis at gmail.com Wed Jan 29 02:25:20 2014 From: neuroanalysis at gmail.com (Avi Peled) Date: Wed, 29 Jan 2014 09:25:20 +0200 Subject: Connectionists: How the brain works In-Reply-To: References: Message-ID: Cool Quotations right on target link - http://www.ninds.nih.gov/funding/funding_announcements/rfa/RFA-NS-14-009.htm R U Interested? On Tue, Jan 28, 2014 at 10:52 PM, Gary Cottrell wrote: > I think everyone is a tad over-reacting - read the RFP below from NIH. It > does include theory as a major component...
> > The purpose of this FOA is to provide resources for integrated development > of experimental, analytic and theoretical capabilities for large-scale > analysis of neural systems and circuits. We seek applications for > exploratory studies that use new and emerging methods for large scale > recording and manipulation of neural circuits across multiple brain > regions. Applications should propose to elucidate the contributions of > dynamic circuit activity to a specific behavioral or neural system. > Studies should incorporate rich information on cell-types, on circuit > functionality and connectivity, and should be performed in conjunction with > sophisticated analysis of ethologically relevant behaviors. Applications > should propose teams of investigators that seek to cross boundaries of > interdisciplinary collaboration by bridging fields and linking theory and > data analysis to experimental design. Exploratory studies supported by > this FOA are intended to develop experimental capabilities and theoretical > frameworks in preparation for a future competition for large scale awards. 
> > On Jan 28, 2014, at 10:03 AM, Avi Peled wrote: > > > Ivan I agree - and even more, neuroscientific application to psychiatry > is even more at its infancy - but unfortunately patients are suffering > immensely and cannot wait - the attached paper tries to tackle the problem > of neuroscientific psychiatry - comments are welcome > > Abraham > > > On Tue, Jan 28, 2014 at 10:51 AM, Ivan Raikov wrote: > >> >> My summary of the history of physics was quite wrong: the idea of >> infinitesimals and their application has been around since the time of >> Archimedes: >> >> http://www.idsia.ch/~juergen/archimedes.html >> >> http://en.wikipedia.org/wiki/Infinitesimal >> >> The moral is, it takes a while for fundamental ideas in science to >> promulgate :-) >> >> On Mon, Jan 27, 2014 at 3:38 PM, Ivan Raikov wrote: >> >>> >>> Speaking of radio and electromagnetic waves, it is perhaps the case that >>> neuroscience has not yet reached the maturity of 19th century physics: >>> while the discovery of electromagnetism is attributed to great >>> experimentalists such as Ampere and Faraday, and its mathematical model is >>> attributed to one of the greatest modelers in physics, Maxwell, none of it >>> happened in isolation. There was a lot of duplicated experimental work and >>> simultaneous independent discoveries in that time period, and Maxwell's >>> equations were readily accepted and quickly refined by a number of >>> physicists after he first postulated them. So in a sense physics had a >>> consensus community model of electromagnetism already in the first half of >>> the 19th century. Neuroscience is perhaps more akin to physics in the 17th >>> century, when Newton's infinitesimal calculus was rejected and even mocked >>> by the scientific establishment on the continent, and many years would pass >>> until calculus was understood and widely accepted. 
So a unifying theory of >>> neuroscience may not come until a lot of independent and reproducible >>> experimentation brings it about. >>> >>> -Ivan >>> >>> >>> >>> On Mon, Jan 27, 2014 at 1:39 PM, Thomas Trappenberg wrote: >>> >>>> Some of our discussion seems to be about 'How the brain works'. I am of >>>> course not smart enough to answer this question. So let me try another >>>> system. >>>> >>>> How does a radio work? I guess it uses an antenna to sense an >>>> electromagnetic wave that is then amplified so that an electromagnet can >>>> drive a membrane to produce an airwave that can be sensed by our ear. Hope >>>> this captures some essential aspects. >>>> >>>> Now that you know, can you repair it when it doesn't work? >>>> >>>> >> > > > -- > Abraham Peled M.D. - Psychiatry > Chair of Dept' SM, Mental Health Center > Clinical Assistant Professor 'Technion' Israel Institute of Technology > Book author of 'Optimizers 2050' and 'NeuroAnalysis' > Email: neuroanalysis at gmail.com > Web: http://neuroanalysis.org.il/ > Web www.shaar-menashe.org > Phone: +972522844050 > Fax: +97246334869 > > CONFIDENTIALITY NOTICE: Information contained in this message and any > attachments is intended only for the addressee(s). If you believe that you > have received this message in error, please notify the sender immediately > by return electronic mail, and please delete it without further review, > disclosure, or copying. > > > > [I am in Dijon, France on sabbatical this year. To call me, Skype works > best (gwcottrell), or dial +33 788319271] > > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > > My schedule is here: http://tinyurl.com/b7gxpwo > > Computer Science and Engineering 0404 > IF USING FED EX INCLUDE THE FOLLOWING LINE: > CSE Building, Room 4130 > University of California San Diego > 9500 Gilman Drive # 0404 > La Jolla, Ca. 92093-0404 > > Things may come to those who wait, but only the things left by those who > hustle. 
-- Abraham Lincoln > > "Of course, none of this will be easy. If it was, we would already > know everything there was about how the brain works, and presumably my > life would be simpler here. It could explain all kinds of things that go on > in Washington." -Barack Obama > > "Probably once or twice a week we are sitting at dinner and Richard says, > 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. > Bargmann said. > > "A grapefruit is a lemon that saw an opportunity and took advantage of > it." - note written on a door in Amsterdam on Lijnbaansgracht. > > "Physical reality is great, but it has a lousy search function." -Matt Tong > > "Only connect!" -E.M. Forster > > "You always have to believe that tomorrow you might write the matlab > program that solves everything - otherwise you never will." -Geoff Hinton > > "There is nothing objective about objective functions" - Jay McClelland > > "I am awaiting the day when people remember the fact that discovery does > not work by deciding what you want and then discovering it." > -David Mermin > > Email: gary at ucsd.edu > Home page: http://www-cse.ucsd.edu/~gary/ > > -- Abraham Peled M.D. - Psychiatry Chair of Dept' SM, Mental Health Center Clinical Assistant Professor 'Technion' Israel Institute of Technology Book author of 'Optimizers 2050' and 'NeuroAnalysis' Email: neuroanalysis at gmail.com Web: http://neuroanalysis.org.il/ Web www.shaar-menashe.org Phone: +972522844050 Fax: +97246334869 CONFIDENTIALITY NOTICE: Information contained in this message and any attachments is intended only for the addressee(s). If you believe that you have received this message in error, please notify the sender immediately by return electronic mail, and please delete it without further review, disclosure, or copying. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Johan.Suykens at esat.kuleuven.be Wed Jan 29 04:12:35 2014 From: Johan.Suykens at esat.kuleuven.be (Johan Suykens) Date: Wed, 29 Jan 2014 10:12:35 +0100 Subject: Connectionists: Physics In-Reply-To: References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> Message-ID: <52E8C603.7060308@esat.kuleuven.be> Dear all, Related to the quantum measurement problem, derivation of the Born rule, and connections with neural networks, support vector machines, kernel methods, learning theory, variational principles, convex optimization, and others, I would like to bring to your attention the following recent result: Johan A.K. Suykens, Generating quantum-measurement probabilities from an optimality principle, Phys. Rev. A 87, 052134 (2013) http://pra.aps.org/abstract/PRA/v87/i5/e052134 http://homes.esat.kuleuven.be/~sistawww/cgi-bin/newsearch.pl?Name=Suykens+J Hopefully this might also contribute to connecting different research fields. Best regards, Johan ---------------------- Prof. Dr.ir. Johan Suykens Katholieke Universiteit Leuven Departement Elektrotechniek - ESAT-STADIUS Kasteelpark Arenberg 10 B-3001 Leuven (Heverlee) Belgium Tel: 32/16/32 18 02 Fax: 32/16/32 19 70 Email: Johan.Suykens at esat.kuleuven.be http://www.esat.kuleuven.be/stadius/members/suykens.html On 01/28/2014 04:03 PM, Kaare Mikkelsen wrote: > Speaking as another physicist trying to bridge the gap between physics > and neuroscience I must also say that how the most abstract ideas from > quantum mechanics could meaningfully (read: scientifically) be applied > to macroscopic neuroscience, given our present level of understanding of > either field, is beyond me. To me, it is at the point where the > connection is impossible to prove or disprove, but seems very unlikely.
> I do not see how valid scientific results can come in that direction, > seeing as there is no theory, no reasonable path towards a theory, and > absolutely no way of measuring anything. > > -------------------------------------------------------------------- > Kaare Mikkelsen, M. Sc. > Institut for Fysik og Astronomi > Ny Munkegade 120 > 8000 > Aarhus C > Lok.: 1520-629 > Tlf.: 87 15 56 37 > -------------------------------------------------------------------- > > > On 28 January 2014 15:32, Richard Loosemore > wrote: > > On 1/27/14, 11:30 PM, Brian J Mingus wrote: > > Consciousness is also such a bag of worms that we can't rule out > that qualia owes its totally non-obvious and a priori > unpredicted existence to concepts derived from quantum > mechanics, such as nested observers, or entanglement. > > As far as I know, my litmus test for a model is the only way to > tell whether low-level quantum effects are required: if the > model, which has not been exposed to a corpus containing > consciousness philosophy, then goes on to independently recreate > consciousness philosophy, despite the fact that it is composed > of (for example) point neurons, then we can be sure that > low-level quantum mechanical details are not important. > > Note, however, that such a model might still rely on nested > observers or entanglement. I'll let a quantum physicist chime in > on that - although I will note that, according to news articles > I've read, we keep managing to entangle larger and larger > objects - up to the size of molecules at this time, IIRC. > > > Brian Mingus > http://grey.colorado.edu/mingus > > Speaking as someone who is both a physicist and a cognitive scientist, > AND someone who has written papers resolving that whole C-word > issue, I can tell you that the quantum story isn't nearly clear enough > in the minds of physicists yet, so how it can be applied to > the C question is beyond me.
Frankly, it does NOT apply: saying > anything about observers and entanglement does not at any point > touch the kind of statements that involve talk about qualia etc. > So let's let that sleeping dog lie.... (?). > > As for using the methods/standards of physics over here in cog sci > ..... I think it best to listen to George Bernard Shaw on this one: > "Never do unto others as you would they do unto you: their tastes > may not be the same." > > Our tastes (requirements/constraints/issues) are quite different, > so what happens elsewhere cannot be directly, slavishly imported. > > > Richard Loosemore > > Wells College > Aurora NY > USA > > From icais at cuas.at Wed Jan 29 06:50:02 2014 From: icais at cuas.at (icais) Date: Wed, 29 Jan 2014 11:50:02 +0000 Subject: Connectionists: ICAIS'14 1st CFP In-Reply-To: <825BADE2E9C8674B82CEBD8B5F91F6474F55BC85@EXMBX01.technikum.local> References: <825BADE2E9C8674B82CEBD8B5F91F6474F55BC85@EXMBX01.technikum.local> Message-ID: <825BADE2E9C8674B82CEBD8B5F91F6474F55C449@EXMBX01.technikum.local> [Apologies if you receive multiple copies of this CFP] -------------------------------------------------------------------------------------------------- * * * CALL FOR PAPERS * * * The 2014 International Conference on Adaptive & Intelligent Systems - ICAIS'14 September 08th - 10th, 2014 Bournemouth, UK http://computing.bournemouth.ac.uk/ICAIS/ icais at bournemouth.ac.uk Sponsored by - IEEE Computational Intelligence Society - The International Neural Network Society -------------------------------------------------------------------------------------------------- * * * PLENARY TALKS * * * Prof. Ludmila I Kuncheva, Bangor University, UK Prof. João Gama, University of Porto, Porto, Portugal * * * AIMS OF THE CONFERENCE * * * The ICAIS'14 conference aims at bringing together international researchers, developers and practitioners from different horizons to discuss the latest advances in system learning and adaptation.
ICAIS'14 will serve as a space to present the current state of the art but also future research avenues in this area. Topics of the conference cover three aspects: Algorithms & theories of adaptation and learning, Adaptation issues in Software & System Engineering, Real-world Applications. ICAIS'14 will feature contributed papers as well as world-renowned guest speakers (see webpage), interactive breakout sessions, and instructional workshops. * * * IMPORTANT DATES * * * - Workshop & Special Session proposal: April 13, 2014 - Full paper submission: June 10, 2014 - Acceptance notification: July 01, 2014 - Final camera-ready: July 11, 2014 * * * CONFERENCE PROCEEDINGS * * * Proceedings will be published by Springer in Lecture Notes in Artificial Intelligence Series. * * * MAIN TOPICS (but not limited to) * * * - Track 1: Self-X Systems o Self-adaptation o Self-organization and behavior emergence o Self-managing o Self-healing o Self-monitoring o Multi-agent systems o Self-X software agents o Self-X robots o Self-organizing sensor networks o Evolving systems - Track 2: Incremental Learning o Online incremental learning o Self-growing neural networks o Adaptive and life-long learning o Plasticity and stability o Forgetting o Unlearning o Novelty detection o Perception and evolution o Drift handling o Adaptation in changing environments - Track 3: Online Processing o Adaptive rule-based systems o Adaptive identification systems o Adaptive decision systems o Adaptive preference learning o Time series prediction o Online and single-pass data mining o Online classification o Online clustering o Online regression o Online feature selection and reduction o Online information routing - Track 4: Dynamic and Evolving Models in Computational Intelligence o (Dynamic) Neural networks architectures o (Dynamic) Evolutionary computation o (Dynamic) Swarm intelligence o (Dynamic) Immune and bacterial systems o Uncertainty and fuzziness modeling for adaptation o Approximate
reasoning and adaptation o Chaotic systems - Track 5: Software & System Engineering o Autonomic computing o Organic computing o Evolution o Adaptive software architecture o Software change o Software agents o Engineering of complex systems o Adaptive software engineering processes o Component-based development - Track 6: Applications - Adaptivity and Learning o Smart systems o Ambient / ubiquitous environments o Distributed intelligence o Robotics o Industrial applications o Internet applications o Business applications o Supply chain management o etc. * * * SUBMISSION * * * Papers must be in PDF, not exceeding 10 pages and conforming to Springer-Verlag Lecture Notes guidelines. Author instructions and style files can be downloaded at http://www.springer.de/comp/lncs/authors.html. Papers must be submitted through the submission system ( http://computing.bournemouth.ac.uk/ICAIS ). Short papers describing novel research visions, work-in-progress or less mature results are also welcome. All submissions will be peer-reviewed by at least 3 qualified reviewers. Selection criteria will include: relevance, significance, impact, originality, technical soundness, and quality of presentation. Preference will be given to submissions that take strong or challenging positions on important emergent topics. At least one author has to attend the conference to present the paper.
* * * ORGANIZATION COMMITTEE * * * General Chair: - Abdelhamid Bouchachia, Bournemouth University, UK International Advisory Committee: - Nikola Kasabov, Auckland University, New Zealand - Xin Yao, University of Birmingham, UK - Djamel Ziou, University of Sherbrooke, Canada - Plamen Angelov, University of Lancaster, UK - Witold Pedrycz, University of Edmonton, Canada - Janusz Kacprzyk, Polish Academy of Sciences, Poland Organization Committee: - Hammadi Nait-Charif, Bournemouth University, UK - Emili Balaguer-Ballester, Bournemouth University, UK - Damien Fay, Bournemouth University, UK - Jane McAlpine, Bournemouth University, UK Publicity Chair: - Markus Prossegger, Carinthia University of Applied Sciences, Austria -------------- next part -------------- An HTML attachment was scrubbed... URL: From wduch at is.umk.pl Wed Jan 29 05:26:35 2014 From: wduch at is.umk.pl (=?UTF-8?Q?W=C5=82odzis=C5=82aw_Duch?=) Date: Wed, 29 Jan 2014 11:26:35 +0100 Subject: Connectionists: Brain-like computing fanfare and big data fanfare Message-ID: <00c701cf1cdc$9738f930$c5aaeb90$@is.umk.pl> Dear all, QM has yet to show some advantages over strong synchronization in classical models that unifies the activity of the whole network. There is another aspect to this discussion: we need to go beyond naïve interpretation of assigning functions to activity of single structures. We have to use a formalism similar to the quantum mechanical representation theory in Hilbert space, decomposing brain activations into combinations of other activations. I wrote a bit about it in sec. 2 of "Neurolinguistic Approach to Natural Language Processing", Neural Networks 21(10), 1500-1510, 2008. QM seems to be attractive because we do not understand how to make a transition between brain activations and subjective experience, described in some psychological spaces, outside and inside (3rd and 1st person) points of view.
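[Editor's note: Duch's suggestion of decomposing brain activations into combinations of other activations is, at bottom, the change-of-basis operation of representation theory: expand a state vector in an orthonormal basis and read off the coefficients, a_j = <phi_j, psi>. A minimal sketch, assuming nothing beyond linear algebra; the two-dimensional "activation" and basis below are invented purely for illustration.]

```python
import math

def dot(u, v):
    """Inner product of two real vectors."""
    return sum(a * b for a, b in zip(u, v))

s = 1 / math.sqrt(2)
basis = [(s, s), (s, -s)]        # orthonormal basis of R^2 (hypothetical "modes")
activation = (0.9, 0.1)          # toy "activation pattern" to decompose

# Expansion coefficients: a_j = <phi_j, psi>
coeffs = [dot(phi, activation) for phi in basis]

# Reconstruct the activation as a combination of the basis activations
reconstruction = [sum(c * phi[i] for c, phi in zip(coeffs, basis))
                  for i in range(2)]
print(coeffs, reconstruction)    # reconstruction recovers (0.9, 0.1)
```

The same arithmetic carries over unchanged to classical distributed systems, which is presumably Duch's point about the formalism being useful outside QM.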
I have tried to explain it in a paper for APA, Mind-Brain Relations, Geometric Perspective and Neurophenomenology, American Philosophical Association Newsletter 12(1), 1-7, 2012. QM formalism of representation theory may be useful also for classical distributed computing systems. Best regards, Włodek Duch ____________________ Google W. Duch From: Connectionists [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Carson Chow Sent: Tuesday, January 28, 2014 10:01 PM To: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Physics and Psychology (and the C-word) Brian, Quantum mechanics can be completely simulated on a classical computer, so if quantum mechanics does matter for C then it must be a matter of computational efficiency and nothing more. We also know that BQP (i.e. the set of problems solved efficiently on a quantum computer) is bigger than BPP (the set of problems solved efficiently on a classical computer) but not by much. I'm not fully up to date on this but I think factoring and boson sampling are about the only two examples that are in BQP and not in BPP. We also know that BPP is much smaller than NP, so if C does require QM then for some reason it sits in a small sliver of complexity space. best, Carson PS I do like your self-consistent test for confirming consciousness. I once proposed that we could just run Turing machines and see which ones asked why they exist as a test of C. Kind of similar to your idea. On 1/28/14 3:09 PM, Brian J Mingus wrote: Hi Richard, thanks for the feedback. > Yes, in general, having an outcome measure that correlates with C ...
that is good, but only with a clear and unambigous meaning for C itself (which I don't think anyone has, so therefore it is, after all, of no value to look for outcome measures that correlate) Actually, the outcome measure I described is independent of a clear and unambiguous meaning for C itself, and in an interesting way: the models, like us, essentially reinvent the entire literature, and have a conversation as we do, inventing almost all the same positions that we've invented (including the one in your paper). I will read your paper and see if it changes my position. At the present time, however, I can't imagine any information that would solve the so-called zombie problem. I'm not a big fan of integrative information theory - I don't think hydrogen atoms are conscious, and I don't think naive bayes trained on a large corpus and run in generative mode is conscious. Thus, if the model doesn't go through the same philosophical reasoning that we've collectively gone through with regards to subjective experience, then I'm going to wonder if its experience is anything like mine at all. Touching back on QM, if we create a point neuron-based model that doesn't wax philosophical on consciousness, I'm going to wonder if we should add lower levels of analysis. I will take a look at your paper, and see if it changes my view on this at all. Cheers, Brian Mingus http://grey.colorado.edu/mingus On Tue, Jan 28, 2014 at 12:05 PM, Richard Loosemore wrote: Brian, Everything hinges on the definition of the concept ("consciousness") under consideration. In the chapter I wrote in Wang & Goertzel's "Theoretical Foundations of Artificial General Intelligence" I pointed out (echoing Chalmers) that too much is said about C without a clear enough understanding of what is meant by it .... and then I went on to clarify what exactly could be meant by it, and thereby came to a resolution of the problem (with testable predictions). 
So I think the answer to the question you pose below is that: (a) Yes, in general, having an outcome measure that correlates with C ... that is good, but only with a clear and unambiguous meaning for C itself (which I don't think anyone has, so therefore it is, after all, of no value to look for outcome measures that correlate), and (b) All three of the approaches you mention are sidelined and finessed by the approach I used in the abovementioned paper, where I clarify the definition by clarifying first why we have so much difficulty defining it. In other words, there is a fourth way, and that is to explain it as ... well, I have to leave that dangling because there is too much subtlety to pack into an elevator pitch. (The title is the best I can do: " Human and Machine Consciousness as a Boundary Effect in the Concept Analysis Mechanism "). Certainly though, the weakness of all quantum mechanics 'answers' is that they are stranded on the wrong side of the explanatory gap. Richard Loosemore Reference Loosemore, R.P.W. (2012). Human and Machine Consciousness as a Boundary Effect in the Concept Analysis Mechanism. In: P. Wang & B. Goertzel (Eds), Theoretical Foundations of Artificial General Intelligence. Atlantis Press. http://richardloosemore.com/docs/2012a_Consciousness_rpwl.pdf On 1/28/14, 10:34 AM, Brian J Mingus wrote: Hi Richard, > I can tell you that the quantum story isn't nearly enough clear in the minds of physicists, yet, so how it can be applied to the C question is beyond me. Frankly, it does NOT apply: saying anything about observers and entanglement does not at any point touch the kind of statements that involve talk about qualia etc. I'm not sure I see the argument you're trying to make here. If you have an outcome measure that you agree correlates with consciousness, then we have a framework for scientifically studying it.
Here's my setup: If you create a society of models and do not expose them to a corpus containing consciousness philosophy and they then, in a reasonably short amount of time, independently rewrite it, they are almost certainly conscious. This design explicitly rules out a generative model that accidentally spits out consciousness philosophy. Another approach is to accept that our brains are so similar that you and I are almost certainly both conscious, and to then perform experiments on each other and study our subjective reports. Another approach is to perform experiments on your own brain and to write first person reports about your experience. These three approaches each have tradeoffs, and each provide unique information. The first approach, in particular, might ultimately allow us to draw some of the strongest possible conclusions. For example, it allows for the scientific study of the extent to which quantum effects may or may not be relevant. I'm very interested in hearing any counterarguments as to why this general approach won't work. If it can't work, then I would argue that perhaps we should not create full models of ourselves, but should instead focus on upgrading ourselves. From that perspective, getting this to work is extremely important, despite however futuristic it may seem. > So let's let that sleeping dog lie.... (?). Not gonna' happen. :) Brian Mingus http://grey.colorado.edu On Tue, Jan 28, 2014 at 7:32 AM, Richard Loosemore wrote: On 1/27/14, 11:30 PM, Brian J Mingus wrote: Consciousness is also such a bag of worms that we can't rule out that qualia owes its totally non-obvious and a priori unpredicted existence to concepts derived from quantum mechanics, such as nested observers, or entanglement. 
As far as I know, my litmus test for a model is the only way to tell whether low-level quantum effects are required: if the model, which has not been exposed to a corpus containing consciousness philosophy, then goes on to independently recreate consciousness philosophy, despite the fact that it is composed of (for example) point neurons, then we can be sure that low-level quantum mechanical details are not important. Note, however, that such a model might still rely on nested observers or entanglement. I'll let a quantum physicist chime in on that - although I will note that according to news articles I've read, we keep managing to entangle larger and larger objects - up to the size of molecules at this time, IIRC. Brian Mingus http://grey.colorado.edu/mingus Speaking as someone who is both a physicist and a cognitive scientist, AND someone who has written papers resolving that whole C-word issue, I can tell you that the quantum story isn't nearly clear enough in the minds of physicists yet, so how it can be applied to the C question is beyond me. Frankly, it does NOT apply: saying anything about observers and entanglement does not at any point touch the kind of statements that involve talk about qualia etc. So let's let that sleeping dog lie.... (?). As for using the methods/standards of physics over here in cog sci ..... I think it best to listen to George Bernard Shaw on this one: "Never do unto others as you would they do unto you: their tastes may not be the same." Our tastes (requirements/constraints/issues) are quite different, so what happens elsewhere cannot be directly, slavishly imported. Richard Loosemore Wells College Aurora NY USA -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bower at uthscsa.edu Wed Jan 29 10:49:52 2014 From: bower at uthscsa.edu (james bower) Date: Wed, 29 Jan 2014 09:49:52 -0600 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <3CB75368-ECE9-42B3-BBEB-E695E46DBAB0@paradise.caltech.edu> <52E2A265.9000306@cse.msu.edu> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1ADA4840-8CE9-4BD5-B17F-ECA5DFB7171E@mail.nih.gov> <7748E2B7-6E44-4CB6-BD97-8D7CB9507CA6@uthscsa.edu> <52E6CD8B.7040405@phys.psu.edu> Message-ID: Mark, Interesting points - a couple of comments: On Jan 28, 2014, at 10:00 PM, Mark H. Bickhard wrote: > but one way to think of the difference I have in mind is to consider theories as resources and frameworks for the construction of models. This is precisely the direction that, in neuroscience, I think is a large part of the problem (and you seem to agree). People come up with theories about how a particular part of the brain works (often based on what are almost anecdotal descriptions of its architecture) and then build models as a kind of existence proof for the idea. These theories and models then take on a life of their own. One example in the cerebellum is the learning theory invented by David Marr and Jim Albus (independently) in the late 60s. While Marr later disparaged his own model and precisely its anecdotal nature in the introduction to his epistemological book "Vision", the Marr/Albus theory and the pseudo-realistic, i.e., Ptolemaic (see previous definition) models that were built based on the theory have generated literally thousands of experimental papers - whose methods were, in effect, manipulated to support the presumed function. I would be happy to send anyone interested references. To this day the majority of cerebellar cortical models that have been published implement one or another modern version of the original theory.
The models all violate the "conservation laws" of Purkinje cells that I mentioned previously. Furthermore, the Marr/Albus theory cannot be right if those "conservation laws" are correct. As I keep returning to in the case of Newton and the inverse square law: he did not have a theory of gravitation for which he then decided to build a model to demonstrate how it works. He posed a physical question to a model of a physical system and something unexpected popped out. What popped out became the inspiration for Newton's theory of mechanics. So, the point again is that while today physicists can invent theories, and implement them as models, they are not working in the theoretical or modeling vacuum that exists for neuroscience. The field is absolutely stuck in a framework based on false presuppositions - many of them. But the solution is not the a priori generation of a new framework - it is to go back to the system itself, using tools that allow the system itself to generate the framework. Once we have that - speculation and theory can flow much more freely. We don't really even know what kind of machine this is - and I don't believe we know very much about what it does either. Jim > > For example, does the brain function in terms of the processing of semantic information, or via endogenously oscillatory processes (at multiple spatial and temporal scales, within complex topologies) that engage in modulations amongst themselves? The former assumption is almost universal today, but there are serious problems with it. I advocate the latter (paper available, for those who might be interested: http://www.lehigh.edu/~mhb0/ModelCNSFunctioning19Jan14.pdf). > > This raises some related issues. E.g., is there a possibility that the field is stuck in a framework with a false presupposition? If so, we might be in a position similar to that of attempting to determine (model?) the mass of fire (phlogiston).
That example is from a while ago (and the assumption that fire is a substance goes back millennia, at least in the Western tradition), but more recently we find people devoting careers to figuring the nature and properties of "associations" or of how two-layer Perceptrons could engage in full human level pattern recognition (and, later, overlooking for a decade or so that the in-principle proof of impossibility did not apply to multiple layer connectionist models). False presuppositions are rather common in history. I argue that the framework of semantic information processing is such a false presupposition. > > Even without the problems of false presuppositions, there are also the problems of missing conceptual frameworks. You could not "reverse engineer" a table without notions of atoms, molecules, van der Waals forces, and so on ? it is not a problem solvable via reverse engineering alone without regard for the conceptual resources available. Presumably, a similar point holds for understanding how the brain works, only more so, and it seems likely that we are in such a position of conceptual impoverishment, if not (also) in a position of working within a false framework of presuppositions. > > I have argued that we are in fact working with false presuppositions and with an impoverishment of conceptual resources, and have made some attempts to contribute to resolving those problems. The history of science provides strong "inductive" support that all science has been and still is caught in such problems and that major advances require changes at those levels. Including science(s) of the brain. But my arguments have tended to be more "in-principle" (and, thus, have driven me into theory, and even [horrors!] philosophy). > > This speaks to the points made in the discussion about theory as a differentiated subdomain within physics, and that it perhaps should also be recognized as such in studying the brain (and in psychology and cognitive science, etc. 
more generally). I fully agree. Recall, however, that the first Nobel for purely theoretical work in physics was for Dirac in 1933 (they would not and did not award the Nobel to Einstein for his purely theoretical work). So, physics is a positive model in this regard, but it too suffered from a serious hesitancy to make and honor the distinction. > > Mark > > Mark H. Bickhard > Lehigh University > 17 Memorial Drive East > Bethlehem, PA 18015 > mark at bickhard.name > http://bickhard.ws/ > > On Jan 28, 2014, at 7:52 PM, Ping Li wrote: > > > > Hi John, > In psychology, it's often the opposite -- when the theory (or model, in this case) and experiment don't agree, it's the theory that's to blame. Hence we have to "simulate the data", "replicate the empirical findings", and "match with empirical evidence" (phrases used in almost all cognitive modeling papers -- just finished another one myself)... That's why Brad pointed out it's so hard to publish modeling papers without corresponding experiments or other empirical data. > > Best, > Ping > > > That's not the whole story. For modern physics, a common happening is that when theory and experiment disagree, it is the experiment that is wrong, at least if the theory is well established. (Faster-than-light neutrinos are only one example.) > > John Collins > > > > > > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. 
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Wed Jan 29 10:53:31 2014 From: bower at uthscsa.edu (james bower) Date: Wed, 29 Jan 2014 09:53:31 -0600 Subject: Connectionists: "Abstract" vs "Biologically realistic" modelling In-Reply-To: <2159EEDF-0D46-4FA7-89A0-1246C03C020A@columbia.edu> References: <201401290102.s0T12bCY021704@hebb.inf.ed.ac.uk> <2159EEDF-0D46-4FA7-89A0-1246C03C020A@columbia.edu> Message-ID: <72EF4B39-35D7-4C8C-9836-5FD96BE5FA52@uthscsa.edu> Again, scale relationships are likely to work much differently in biological systems, where, I think, the linkage between mechanisms at different levels of scale is one of the ways that biological systems create their tremendous efficiency (optimality) and ?beat? the second law. :-) Jim On Jan 28, 2014, at 10:26 PM, Mark Orr wrote: > Jim, > With all the talk of physics, let's not forget Feynman's eloquence in explaining an unsolved and old problem in physics: turbulence. Where the understanding of the parts does not lead up to much of an understanding of the whole. > > "How vivid is the claret, pressing its existence into the consciousness that watches it! 
If our small minds, for some convenience, divide this glass of wine, this universe, into parts--physics, biology, geology, astronomy, psychology, and so on--remember that nature does not know it! So let us put it all back together, not forgetting ultimately what it is for. Let it give us one more final pleasure: drink it and forget it all!" > > -Richard Feynman, from Six Easy Pieces > > > On Jan 28, 2014, at 10:07 PM, james bower wrote: > >> >> On Jan 28, 2014, at 7:02 PM, James A. Bednar wrote: >> >>> | Date: 2014-01-25 12:09 AM >>> | From: james bower >>> | >>> | Having spent almost 30 years building realistic models of its cells >>> | and networks (and also doing experiments, as described in the >>> | article I linked to) we have made some small progress - but only by >>> | avoiding abstractions and paying attention to the details. >>> >>> Jim, >>> >>> I think it's disingenuous to claim that *any* model can avoid >>> abstractions, >> >> First, you might want to read the paper I linked to earlier, so that you understand what I am saying. >> >> Here it is again: https://www.dropbox.com/s/r046g03w8ev5kkm/272602_1_En_5%20copy.pdf >> >> With respect to that article, here is a list of model properties that, based on 40 years of modeling and experimental studies (not only our own), mean you aren't modeling a Purkinje cell: >> >> - no dendrite >> - a dendrite when present that simply does a voltage sum >> - or in other words a dendrite with no active conductive properties >> - a soma that has a simple fixed defined threshold to fire >> - or in other words, a dendrite with no active conductive properties >> - a dendrite that isn't morphologically based on an actual reconstructed Purkinje cell dendrite.
>> - a soma that doesn't generate action potentials independently on its own >> - a dendrite whose principal effect is to stop the soma from firing >> >> As far as its response in the context of a realistic network model: >> >> - a dendrite that only receives excitatory input, without inhibitory input >> - input from parallel fibers directly driving somatic spiking >> >> >> These are equivalent, or should be, to making a model of the interaction of subatomic particles and ignoring conservation laws. >> >> You couldn't get away with it in physics - it happens all the time in modeling Purkinje cells and the cerebellum (even last week). >> >> Neuroscience should be accumulating these biological equivalents to the laws of physics - but it isn't. (They would even be useful to the neural network, connectionist, and machine learning communities, who mostly have a pre-Hodgkin-Huxley view of neurons.) >> >> But there is another, and in my view more important, definition of "realistic" models, which again has to do with process. >> >> The vast majority of neurobiological models are designed to demonstrate that an idea somebody had about how the system worked is plausible. No problem with that whatsoever in engineering (it's what engineers do) - but a significant problem in "reverse engineering" the brain. >> >> Realistic models, in my nomenclature (and I was one of the first people to use the term actually), aren't defined explicitly by how adorned they are or not with biological stuff. They are models whose construction and parameter tuning is primarily and fundamentally aimed at replicating basic biological data. Not synthesized biological data (i.e. ocular dominance columns or orientation selectivity), but basic recorded responses that don't have known functional implications. Better yet, biological data recorded under completely artificial circumstances and conditions which nevertheless reveal complex behavior that isn't understood.
In the case of realistic single cell models, that data is often from voltage clamp experiments. Clamping the voltage of a neuron at a fixed level by injecting current is a highly artificial thing to do - yet, in many cells and in particular the Purkinje cell, it reveals a complex pattern of activity reflecting in a complex way the biophysical structure of that neuron. Once model parameters have been tuned to replicate voltage clamp data - then one freezes these parameters and applies synaptic input to see if one can replicate the basic response properties of the cell (e.g. its variable rate of action potential generation). In our experience, at that point we have always started to find all sorts of things that we didn't know were there. In the case of the Purkinje cell, for example, we found out that it didn't matter where on its huge dendrite you applied a synaptic input; that input had the same effect on the soma. If you want to know why that is interesting (and what happened next) take the time to read the paper. >> >> So, the point is this - again, the real litmus test for realistic modeling is whether the model was tuned and designed to produce a particular functional result - and then adorned with biology - or if replicating the basic biology independent of function was the first step and remains the reference step for the modeling work. >> >> Again to return to Newton - he apparently built a "realistic" (by my definition) model of the moon orbiting the earth. He applied mathematical analysis to figure out the size of the force holding it in its orbit. He then realized that that force appeared to be the inverse square of the distance. Actually, the force he calculated the first time wasn't - it was less - and therefore Newton, at age 19, apparently thought that there was some other force (Kepler's vortex force) in the mix as well.
It wasn't until many years later, after turning most of his attention to alchemy, when informed that another scientist was about to report the inverse square relationship, that he became interested again - and that interest turned into the work that ended up with his treatise on mechanics. >> >> So, the bottom line - if you want to understand the nervous system and you have an idea about how the cerebellum is involved in learning - and you build a model that implements your scheme - then you are engaged in a Ptolemaic effort, not a realistic one. If most of the "predictions" of your model are actually "postdictions" of well known phenomena that you are using to convince people your functional idea is right, then again, you are in the domain of Ptolemy. If you are building a model to solve the traveling salesman problem, or to perform better voice recognition on a chip - good for you - that's how engineering works. No problem with that at all - and actually, if you picked something up from a neuroscience lecture that gave you a new idea about how to make your neural networks chip - absolutely no problem whatsoever with that either - as we all know, biology has served as an important source of creativity in engineering historically. However, if you want to claim that your model also reveals something important about how brains work, then the model must either be "realistic" first, or be able to link to such a model. >> >> It goes without saying that these types of realistic models can be built at many levels, as long as the model has biological components. (You won't convince me with mean field theories of cerebral cortex.) It also goes without saying, of course, that we don't have the technology or the knowledge for that matter to build one model of everything - although personally, I believe eventually we will have to, and reflecting that view, Version 3.0 of GENESIS was specifically built to link broadly across many different levels of scale.
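[Editor's note: Bower's Newton example can be rerun in a few lines. The check is to compare the centripetal acceleration actually needed to hold the Moon in its orbit with surface gravity scaled down by the inverse square of the distance. A minimal sketch, using modern textbook constants (Newton's own figures were cruder); the roughly 1% agreement is the inverse-square relationship that "popped out" of his model.]

```python
import math

# Modern textbook values (assumed for illustration)
g = 9.81             # surface gravity, m/s^2
R_earth = 6.371e6    # Earth radius, m
r_moon = 3.844e8     # mean Earth-Moon distance, m
T = 27.32 * 86400    # sidereal month, s

# Centripetal acceleration needed to keep the Moon on its orbit: a = 4*pi^2*r / T^2
a_orbit = 4 * math.pi**2 * r_moon / T**2

# Acceleration predicted by scaling surface gravity by the inverse square of distance
a_inverse_square = g * (R_earth / r_moon)**2

print(a_orbit, a_inverse_square)  # both come out near 2.7e-3 m/s^2
```

The two numbers agree to about 1%; with Newton's cruder value for the Earth's radius the first calculation came out noticeably low, which is the discrepancy Bower alludes to.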
>> >> The critical question therefore, is whether the model is built in such a way that the biology can tell you something you didn't know before you started (just like the earth-moon model told Newton) - or, is the biology just dressing up something you already believed to be true and just wanted to convince the rest of us. Building the model out of realistic components, and then testing it on theory-neutral biological data, is more likely to lead to the former. At least it has over and over again for us. >> >> >> Jim >> >> >> >>> and in particular that your type of "realistic" >>> multicompartmental single-cell and network modelling could ever do so. >>> >>> *Real* morphologically complex cells are embedded in complex networks, >>> which are embedded in complex organisms, which are embedded in complex >>> environments, which are embedded in complex ecosystems. Evolution >>> acts on the net result of *all* of this, indirectly via a process of >>> development. Certain species thrive in certain ecosystems if their >>> proteins, cells, networks, nervous systems, bodies, and communities >>> allow them to function in that environment well enough to reproduce. >>> The details of *all* of these things matter. >>> >>> Are all of these details represented realistically in your models? >>> No, and they shouldn't be -- you pose questions that can be addressed >>> by the things you do include, abstract away the rest, and all is well >>> and good. But other different yet no less realistic models are built >>> to address different questions, paying attention to different sets of >>> details (such as large-scale development and plasticity, for my own >>> models), and again abstract away the rest. >>> >>> I am happy to join with you to decry truly unrealistic models, which >>> would be those that respect none of the details at any level. Down >>> with unrealistic models!
But there is no meaningful sense in which >>> any model can be claimed to avoid abstraction, and no level that >>> exclusively owns biological realism. >>> >>> Jim Bednar >>> >>> ________________________________________________ >>> >>> Dr. James A. Bednar >>> Director, Doctoral Training Centre in >>> Neuroinformatics and Computational Neuroscience >>> University of Edinburgh School of Informatics >>> 10 Crichton Street, Edinburgh, EH8 9AB UK >>> http://anc.ed.ac.uk/dtc >>> http://homepages.inf.ed.ac.uk/jbednar >>> ________________________________________________ >>> >>> -- >>> The University of Edinburgh is a charitable body, registered in >>> Scotland, with registration number SC005336. >>> >> >> >> >> >> >> Dr. James M. Bower Ph.D. >> >> Professor of Computational Neurobiology >> >> Barshop Institute for Longevity and Aging Studies. >> >> 15355 Lambda Drive >> >> University of Texas Health Science Center >> >> San Antonio, Texas 78245 >> >> >> Phone: 210 382 0553 >> >> Email: bower at uthscsa.edu >> >> Web: http://www.bower-lab.org >> >> twitter: superid101 >> >> linkedin: Jim Bower >> >> >> CONFIDENTIAL NOTICE: >> >> The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient. If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or >> >> any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be >> >> immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. 
If you have received this e-mail in error, please notify the sender by e-mail immediately. >> >> >> > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Wed Jan 29 11:23:56 2014 From: bower at uthscsa.edu (james bower) Date: Wed, 29 Jan 2014 10:23:56 -0600 Subject: Connectionists: "Abstract" vs "Biologically realistic" modelling In-Reply-To: References: <201401290102.s0T12bCY021704@hebb.inf.ed.ac.uk> Message-ID: I bet you guys wish I still had to attend faculty meetings, meet with the various deanlets about new safety measures for the laboratory, and spend hours talking about ways to reorganize the graduate program given new budgetary constraints.
I also never have to have another discussion about the ROI for every square inch of my laboratory. He he. Thus, returning to intellectual pursuits :-) : In addition to the on-list conversation, there has also been a robust off-list conversation going on. Remarkably enough, it turns out that several members of the list have actually taken the time to read the paper on Purkinje cells that I linked to - and several have made the valid point that work on the cerebellum has the considerable benefit that we actually know a lot (and have for more than 100 years) about its basic architecture. Which raises another issue. To return to Newton: from historical documents it would appear, as I have said, that a key insight that eventually drove his work on mechanics was obtained when he was quite young, by examining celestial mechanics with what was, in effect, a "realistic" model of the moon orbiting the earth. He did not, as Kepler had, take on the problem of planetary motion - instead, he chose a system with quite good data and, importantly, one in which the orbit was actually almost circular. Not coincidentally, there has never been any debate that the moon actually does revolve around the earth. In other words, he chose the right system to ask the question. In neuroscience, and in computational neuroscience in particular, the majority of the models, theories, and even experimental work involves the visual system. The reason is probably obvious, as we value ours a lot. However, as several people have pointed out off-list, I think accurately, we still to this day know much less about the visual system's architecture than we do about that of the cerebellum or, for example, the three-layered cortex used by the olfactory system. Not to be controversial, but if I were NIH, I would suspend funding for work on the visual system, or neocortex in general, and get the field to focus on "archi-cortex" and the systems that use it (olfaction being the primary one).
I would also suspend work on the hippocampus until we actually understand the role the olfactory system played in the evolution, and even the current function, of that structure. In fact, from the point of view of comparative mammalian studies, the olfactory system is much more behaviorally important (even for us) than the visual system anyway. (Have you ever gone out on a date twice with someone whose smell you found offensive - nope.) Furthermore, there are good reasons to believe that the olfactory system "invented" the fundamental architecture of cerebral cortical networks. Accordingly, I have suggested for many years that if you want to understand how cerebral cortex works, you first need to understand this structure in the context of the sensory system that invented it (olfaction), not the ones that parasitized it (e.g. vision). I can provide papers for anyone interested. So, NOW not only am I telling you HOW to study the brain, but I am also telling you what parts of the brain it makes the most sense to study, given the current state of neuroscience. :-) (Does his arrogance know no bounds :-) ). In biology, the right choice of system (squid axon, lobster stomatogastric system, Tritonia swim system, cerebellum, olfactory system) has always been critically important to making progress. Jim On Jan 28, 2014, at 9:53 PM, Brad Wyble wrote: > I'd like to throw in a few cents. > > However, if you want to claim that your model also reveals something important about how brains work, then the model must either be "realistic" first, or be able to link to such a model. > > > I think that whether this is true depends on your definition of "how brains work". For my purposes, the litmus test of a model that reflects "how brains work" in some fundamental sense is its ability to generate novel predictions that are correct.
Your warnings about postdiction are right on target, since the predictions must be truly de novo at the time they are created to ensure that you are not engaging in curve fitting. > > Therefore to me it seems as if we are using very different goals in the modelling process. I want a model to reflect the behavior of the system, and I use neural models because the added constraint of neural plausibility enormously accelerates my search through the space of possible models by preventing me from following innumerable dead ends (though there are still quite a lot of neurally plausible dead ends). > > Jim, you seem to want a model that reflects the wiring of the brain first and foremost, and the functionality comes in a close second. I think it's fine for both of those goals to exist and I don't think it's necessary to label one of them as useless, or even less efficient. > > -Brad > > PS. Thanks for a very interesting debate! > > > > > > > > > It goes without saying that these types of realistic models can be built at many levels, as long as the model has biological components. (you won't convince me with mean field theories of cerebral cortex). It also goes without saying, of course, that we don't have the technology or the knowledge for that matter to build one model of everything - although personally, I believe eventually we will have to, and reflecting that view, Version 3.0 of GENESIS was specifically built to link broadly across many different levels of scale. > > The critical question therefore, is whether the model is built in such a way that the biology can tell you something you didn't know before you started (just like the earth-moon model told Newton) - or, is the biology just dressing up something you already believed to be true and just wanted to convince the rest of us. Building the model out of realistic components, and then testing it on theory-neutral biological data, is more likely to lead to the former. At least it has over and over again for us.
> > > Jim > > > >> and in particular that your type of "realistic" >> multicompartmental single-cell and network modelling could ever do so. >> >> *Real* morphologically complex cells are embedded in complex networks, >> which are embedded in complex organisms, which are embedded in complex >> environments, which are embedded in complex ecosystems. Evolution >> acts on the net result of *all* of this, indirectly via a process of >> development. Certain species thrive in certain ecosystems if their >> proteins, cells, networks, nervous systems, bodies, and communities >> allow them to function in that environment well enough to reproduce. >> The details of *all* of these things matter. >> >> Are all of these details represented realistically in your models? >> No, and they shouldn't be -- you pose questions that can be addressed >> by the things you do include, abstract away the rest, and all is well >> and good. But other different yet no less realistic models are built >> to address different questions, paying attention to different sets of >> details (such as large-scale development and plasticity, for my own >> models), and again abstract away the rest. >> >> I am happy to join with you to decry truly unrealistic models, which >> would be those that respect none of the details at any level. Down >> with unrealistic models! But there is no meaningful sense in which >> any model can be claimed to avoid abstraction, and no level that >> exclusively owns biological realism. >> >> Jim Bednar >> >> ________________________________________________ >> >> Dr. James A. 
Bednar >> Director, Doctoral Training Centre in >> Neuroinformatics and Computational Neuroscience >> University of Edinburgh School of Informatics >> 10 Crichton Street, Edinburgh, EH8 9AB UK >> http://anc.ed.ac.uk/dtc >> http://homepages.inf.ed.ac.uk/jbednar >> ________________________________________________ >> >> -- >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. >> > > > > > Dr. James M. Bower Ph.D. > > Professor of Computational Neurobiology > > Barshop Institute for Longevity and Aging Studies. > > 15355 Lambda Drive > > University of Texas Health Science Center > > San Antonio, Texas 78245 > > > Phone: 210 382 0553 > > Email: bower at uthscsa.edu > > Web: http://www.bower-lab.org > > twitter: superid101 > > linkedin: Jim Bower > > > > -- > Brad Wyble > Assistant Professor > Psychology Department > Penn State University > > http://wyblelab.com Dr. James M. Bower Ph.D.
Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.mingus at colorado.edu Wed Jan 29 13:02:19 2014 From: brian.mingus at colorado.edu (Brian J Mingus) Date: Wed, 29 Jan 2014 11:02:19 -0700 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <00c701cf1cdc$9738f930$c5aaeb90$@is.umk.pl> References: <00c701cf1cdc$9738f930$c5aaeb90$@is.umk.pl> Message-ID: Hi Włodek, Suggesting that we use a Hilbert space instead of a standard vector space for semantic representation seems similar to the question of whether we should use spiking or rate-coded neurons.
It would seem that if all of the information is encoded in the rate, the Hilbert space representation could then be compressed into a simpler vector space representation. I did recently see a paper suggesting that different parts of the brain use different codes, i.e., some convey information in spikes, and some convey information in the rate (I can't find the paper at this time). In that case, a Hilbert space representation might be simpler to the extent that it captures all of the information, either way. I am not a physicist, so I may not have a deep enough understanding of what a Hilbert space is - could you perhaps explain what new information it might be capable of representing over and above the vector space representation, which is essentially what is used in rate-coded deep neural nets? Thanks, Brian http://grey.colorado.edu/mingus On Wed, Jan 29, 2014 at 3:26 AM, Włodzisław Duch wrote: > Dear all, > > > > QM has yet to show some advantages over strong synchronization in > classical models that unifies the activity of the whole network. There is > another aspect to this discussion: we need to go beyond naïve > interpretation of assigning functions to activity of single structures. We > have to use a formalism similar to the quantum mechanical representation > theory in Hilbert space, decomposing brain activations into combinations of > other activations. I wrote a bit about it in sec. 2 of "Neurolinguistic > Approach to Natural Language Processing", > Neural Networks 21(10), 1500-1510, 2008. > > > > QM seems to be attractive because we do not understand how to make a > transition between brain activations and subjective experience, described > in some psychological spaces, outside and inside (3rd and 1st person) > points of view.
I have tried to explain it in a paper for the APA, *Mind-Brain > Relations, Geometric Perspective and Neurophenomenology*, American Philosophical Association Newsletter > 12(1), 1-7, 2012. > > QM formalism of representation theory may be useful also for classical > distributed computing systems. > > > > Best regards, Włodek Duch > > ____________________ > > Google W. Duch > > > > > > > > *From:* Connectionists [mailto: > connectionists-bounces at mailman.srv.cs.cmu.edu] *On Behalf Of *Carson Chow > *Sent:* Tuesday, January 28, 2014 10:01 PM > *To:* connectionists at mailman.srv.cs.cmu.edu > *Subject:* Re: Connectionists: Physics and Psychology (and the C-word) > > > > Brian, > > Quantum mechanics can be completely simulated on a classical computer, so > if quantum mechanics does matter for C then it must be a matter of > computational efficiency and nothing more. We also know that BQP (i.e. the set > of problems solved efficiently on a quantum computer) is bigger than BPP > (the set of problems solved efficiently on a classical computer) but not by > much. I'm not fully up to date on this, but I think factoring and boson > sampling are about the only two examples that are in BQP and not in BPP. We > also know that BPP is much smaller than NP, so if C does require QM then > for some reason it sits in a small sliver of complexity space. > > best, > Carson > > PS I do like your self-consistent test for confirming consciousness. I > once proposed that we could just run Turing machines and see which ones > asked why they exist as a test of C. Kind of similar to your idea. > > On 1/28/14 3:09 PM, Brian J Mingus wrote: > > Hi Richard, thanks for the feedback. > > > > > Yes, in general, having an outcome measure that correlates with C ...
> that is good, but only with a clear and unambiguous meaning for C itself > (which I don't think anyone has, so therefore it is, after all, of no value > to look for outcome measures that correlate) > > > > Actually, the outcome measure I described is independent of a clear and > unambiguous meaning for C itself, and in an interesting way: the models, > like us, essentially reinvent the entire literature, and have a > conversation as we do, inventing almost all the same positions that we've > invented (including the one in your paper). > > > > I will read your paper and see if it changes my position. At the present > time, however, I can't imagine any information that would solve the > so-called zombie problem. I'm not a big fan of integrated information > theory - I don't think hydrogen atoms are conscious, and I don't think > naive Bayes trained on a large corpus and run in generative mode is > conscious. Thus, if the model doesn't go through the same philosophical > reasoning that we've collectively gone through with regard to subjective > experience, then I'm going to wonder if its experience is anything like > mine at all. > > > > Touching back on QM, if we create a point neuron-based model that doesn't > wax philosophical on consciousness, I'm going to wonder if we should add > lower levels of analysis. > > > > I will take a look at your paper, and see if it changes my view on this at > all. > > > > Cheers, > > > > Brian Mingus > > > > http://grey.colorado.edu/mingus > > > > > > On Tue, Jan 28, 2014 at 12:05 PM, Richard Loosemore > wrote: > > > > Brian, > > Everything hinges on the definition of the concept ("consciousness") under > consideration. > > In the chapter I wrote in Wang & Goertzel's "Theoretical Foundations of > Artificial General Intelligence" I pointed out (echoing Chalmers) that too > much is said about C without a clear enough understanding of what is meant > by it ....
and then I went on to clarify what exactly could be meant by it, > and thereby came to a resolution of the problem (with testable > predictions). So I think the answer to the question you pose below is > that: > > (a) Yes, in general, having an outcome measure that correlates with C ... > that is good, but only with a clear and unambiguous meaning for C itself > (which I don't think anyone has, so therefore it is, after all, of no value > to look for outcome measures that correlate), and > > (b) All three of the approaches you mention are sidelined and finessed by > the approach I used in the abovementioned paper, where I clarify the > definition by clarifying first why we have so much difficulty defining it. > In other words, there is a fourth way, and that is to explain it as ... > well, I have to leave that dangling because there is too much subtlety to > pack into an elevator pitch. (The title is the best I can do: "Human and > Machine Consciousness as a Boundary Effect in the Concept Analysis > Mechanism"). > > Certainly though, the weakness of all quantum mechanics 'answers' is that > they are stranded on the wrong side of the explanatory gap. > > > Richard Loosemore > > > Reference > Loosemore, R.P.W. (2012). Human and Machine Consciousness as a Boundary > Effect in the Concept Analysis Mechanism. In: P. Wang & B. Goertzel (Eds), > Theoretical Foundations of Artificial General Intelligence. Atlantis Press. > http://richardloosemore.com/docs/2012a_Consciousness_rpwl.pdf > > > > > On 1/28/14, 10:34 AM, Brian J Mingus wrote: > > Hi Richard, > > > > > I can tell you that the quantum story isn't nearly clear enough in the > minds of physicists, yet, so how it can be applied to the C question is > beyond me. Frankly, it does NOT apply: saying anything about observers > and entanglement does not at any point touch the kind of statements that > involve talk about qualia etc. > > > > I'm not sure I see the argument you're trying to make here.
If you have an > outcome measure that you agree correlates with consciousness, then we have > a framework for scientifically studying it. > > > > Here's my setup: If you create a society of models and do not expose them > to a corpus containing consciousness philosophy, and they then, in a > reasonably short amount of time, independently rewrite it, they are almost > certainly conscious. This design explicitly rules out a generative model > that accidentally spits out consciousness philosophy. > > > > Another approach is to accept that our brains are so similar that you and > I are almost certainly both conscious, and to then perform experiments on > each other and study our subjective reports. > > > > Another approach is to perform experiments on your own brain and to write > first-person reports about your experience. > > > > These three approaches each have tradeoffs, and each provide unique > information. The first approach, in particular, might ultimately allow us > to draw some of the strongest possible conclusions. For example, it allows > for the scientific study of the extent to which quantum effects may or may > not be relevant. > > > > I'm very interested in hearing any counterarguments as to why this general > approach won't work. If it *can't* work, then I would argue that perhaps > we should not create full models of ourselves, but should instead focus on > upgrading ourselves. From that perspective, getting this to work is > extremely important, however futuristic it may seem. > > > > > So let's let that sleeping dog lie.... (?). > > > > Not gonna happen.
:) > > > > Brian Mingus > > http://grey.colorado.edu > > > > On Tue, Jan 28, 2014 at 7:32 AM, Richard Loosemore > wrote: > > On 1/27/14, 11:30 PM, Brian J Mingus wrote: > > Consciousness is also such a bag of worms that we can't rule out that > qualia owes its totally non-obvious and a priori unpredicted existence to > concepts derived from quantum mechanics, such as nested observers, or > entanglement. > > As far as I know, my litmus test for a model is the only way to tell > whether low-level quantum effects are required: if the model, which has not > been exposed to a corpus containing consciousness philosophy, then goes on > to independently recreate consciousness philosophy, despite the fact that > it is composed of (for example) point neurons, then we can be sure that > low-level quantum mechanical details are not important. > > Note, however, that such a model might still rely on nested observers or > entanglement. I'll let a quantum physicist chime in on that - although I > will note that according to news articles I've read, we keep managing > to entangle larger and larger objects - up to the size of molecules at this > time, IIRC. > > > Brian Mingus > http://grey.colorado.edu/mingus > > Speaking as someone who is both a physicist and a cognitive scientist, AND > someone who has written papers resolving that whole C-word issue, I can > tell you that the quantum story isn't nearly clear enough in the minds of > physicists, yet, so how it can be applied to the C question is beyond me. > Frankly, it does NOT apply: saying anything about observers and > entanglement does not at any point touch the kind of statements that > involve talk about qualia etc. So let's let that sleeping dog lie.... (?). > > As for using the methods/standards of physics over here in cog sci ..... I > think it best to listen to George Bernard Shaw on this one: "Never do unto > others as you would they do unto you: their tastes may not be the same."
> > Our tastes (requirements/constraints/issues) are quite different, so what > happens elsewhere cannot be directly, slavishly imported. > > > Richard Loosemore > > Wells College > Aurora NY > USA > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From collins at phys.psu.edu Wed Jan 29 15:59:19 2014 From: collins at phys.psu.edu (John Collins) Date: Wed, 29 Jan 2014 15:59:19 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <1fa801cf1082$f7844f30$e68ced90$@csc.kth.se> <52E2CBE2.6090803@cse.msu.edu> <0552FEDA-DAC6-46A1-BC4A-934FB04BF4CC@uthscsa.edu> <1ADA4840-8CE9-4BD5-B17F-ECA5DFB7171E@mail.nih.gov> <7748E2B7-6E44-4CB6-BD97-8D7CB9507CA6@uthscsa.edu> <52E6CD8B.7040405@phys.psu.edu> Message-ID: <52E96BA7.2020106@phys.psu.edu> Ping, You are undoubtedly correct about the typical cases you encounter in psychology. My remark was only intended to counter the idea that, when experiment and theory disagree, it is always the theory that is to be blamed. The situation is not universal in physics. E.g., in the 1960s and early 1970s, the situation in high-energy physics was very like what you describe for psychology. John On 01/28/2014 07:52 PM, Ping Li wrote: > > Hi John, > In psychology, it's often the opposite -- when the theory (or model, in > this case) and experiment don't agree, it's the theory that's to blame. > Hence we have to "simulate the data", "replicate the empirical > findings", and "match with empirical evidence" (phrases used in almost > all cognitive modeling papers -- just finished another one myself)... > That's why Brad pointed out it's so hard to publish modeling papers > without corresponding experiments or other empirical data. > > Best, > Ping > > > That's not the whole story. For modern physics, a common happening is > that when theory and experiment disagree, it is the experiment that is > wrong, at least if the theory is well established. 
(Faster-than-light > neutrinos are only one example.) > > John Collins From collins at phys.psu.edu Wed Jan 29 17:33:54 2014 From: collins at phys.psu.edu (John Collins) Date: Wed, 29 Jan 2014 17:33:54 -0500 Subject: Connectionists: Best practices in model publication In-Reply-To: References: Message-ID: <52E981D2.3020004@phys.psu.edu> As a physics colleague of Brad's, I'll take his hint to give a perspective on his questions. Of course, current practice at one period in one area of science may be totally inappropriate in another situation. I'll refer primarily to elementary particle physics. Some differences between what Brad describes and what I see in physics are: 1. In physics, theorists are notably self-consciously interested in ideas that are general rather than just the modeling of a particular phenomenon. Good ideas can relate many experimental situations, and the predictivity of a theoretical idea may greatly exceed the initial expectations of its author. Some of these (e.g., the Standard "Model") are amazingly successful. 2. Both theory and experiment are so difficult that specialization is inevitable. 3. Some of the time scales for making experimental measurements are long: well over a decade. 4. In many modern physics theories, a lot hangs on self-consistency of the theoretical framework. Given an initial idea (induced from data) much work is sometimes needed to convert it to an implementable theory or method. This can proceed almost autonomously from day-to-day contact with real data. (String theory is a well-known extreme example of this.) (N.B. Interesting gaps can remain in well-established theoretical work and can be unperceived by many practitioners, as Randy found.) Another tendency is for theorists to provide software for simulations rather than simply computing predictions. Then experimentalists apply these to make theoretical predictions to compare with their own actual data. 
This is in addition to the simulations the experimental groups themselves construct to model their complicated detectors. For a recent example, see the article at http://arxiv.org/abs/1312.5353, and search in the pdf file for "simulation". (It's a long paper, I'm afraid.) John Collins On 01/28/2014 08:25 AM, Brad Wyble wrote: > Thanks Randal, that's a great suggestion. I'll ask my colleagues in > physics for their perspective as well. > > -Brad > > > On Mon, Jan 27, 2014 at 11:54 PM, Randal Koene > wrote: > > Hi Brad, > This reminds me of theoretical physics, where proposed models are > expounded in papers, often without the ability to immediately carry > out empirical tests of all the predictions. Subsequently, > experiments are often designed to compare and contrast different models. > Perhaps a way to advance this is indeed to make the analogy with > physics? From bower at uthscsa.edu Wed Jan 29 18:52:32 2014 From: bower at uthscsa.edu (james bower) Date: Wed, 29 Jan 2014 17:52:32 -0600 Subject: Connectionists: Best practices in model publication In-Reply-To: <52E7DB20.2080509@pitt.edu> References: <52E7DB20.2080509@pitt.edu> Message-ID: Interesting. With respect to the cortical column discussion we didn't yet have (whether they exist or not), there were actually two papers published by Vernon Mountcastle in the late 1950s in which the cortical column idea was introduced. The first included mostly the data, the second mostly the idea. I once plotted literature citations for the two papers. For the first 10 years, the data paper was cited much more than the theory paper. However, 15 years out they crossed, and now the data paper is almost never cited. So, as mentioned earlier with Marr and Albus, perhaps it is a kind of theory envy in neuroscience, but it is not at all unusual to have exactly the opposite be the case: the data is forgotten and the theory persists.
Perhaps all this is leading to an interesting article (or perhaps a book with a series of essays) on how physics and biology are similar and different. Anyone interested? As some of you probably know, Thomas Kuhn explicitly excluded biology from his analysis (as a physicist, obviously, he knew that field better). I have often heard biologists say that his analysis does not apply to biology, at which point I tell them that it does; he just didn't talk very much about pre-paradigmatic science. However, I believe that the conversation we have been having about the epistemology of physics and biology might be of considerably more general interest. I know that this has been commented on by others before (Richard F for example), but I don't know of any volume of essays on the subject. Could be interesting. Jim On Jan 28, 2014, at 10:30 AM, Carson Chow wrote: > Hi Brad, > > Philip Anderson, Nobel Prize in Physics, once wrote that theory and experimental results should never be in the same paper. His reason was the protection of the experiment: if the theory turns out to be wrong (as is often the case), then people often forget about the data. > > Carson > > > > On 1/28/14 8:25 AM, Brad Wyble wrote: >> Thanks Randal, that's a great suggestion. I'll ask my colleagues in physics for their perspective as well. >> >> -Brad >> >> >> >> >> On Mon, Jan 27, 2014 at 11:54 PM, Randal Koene wrote: >> Hi Brad, >> This reminds me of theoretical physics, where proposed models are expounded in papers, often without the ability to immediately carry out empirical tests of all the predictions. Subsequently, experiments are often designed to compare and contrast different models. >> Perhaps a way to advance this is indeed to make the analogy with physics? >> Cheers, >> Randal >> >> Dr. Randal A.
Koene >> Randal.A.Koene at gmail.com - Randal.A.Koene at carboncopies.org >> http://randalkoene.com - http://carboncopies.org >> >> >> On Mon, Jan 27, 2014 at 8:29 PM, Brad Wyble wrote: >> Thank you Mark, I hadn't seen this paper. She includes this other point that should have been in my list: >> >> "From a practical point of view, as noted the time required to build >> and analyze a computational model is quite substantial and validation may >> require teams. To delay model presentation until validation has occurred >> retards the development of the scientific field. " ----Carley (1999) >> >> >> And here is a citation for this paper. >> Carley, Kathleen M., 1999. Validating Computational Models. CASOS Working Paper, CMU >> >> -Brad >> >> >> >> >> On Mon, Jan 27, 2014 at 9:48 PM, Mark Orr wrote: >> Brad, >> Kathleen Carley, at CMU, has a paper on this idea (from the 1990s), suggesting the same practice. See http://www2.econ.iastate.edu/tesfatsi/EmpValid.Carley.pdf >> >> Mark >> >> On Jan 27, 2014, at 9:39 PM, Brad Wyble wrote: >> >>> Dear connectionists, >>> >>> I wanted to get some feedback regarding some recent ideas concerning the publication of models because I think that our current practices are slowing down the progress of theory. At present, at least in many psychology journals, it is often expected that a computational modelling paper includes experimental evidence in favor of a small handful of its own predictions. While I am certainly in favor of model testing, I have come to the suspicion that the practice of including empirical validation within the same paper as the initial model is problematic for several reasons: >>> >>> It encourages the creation only of predictions that are easy to test with the techniques available to the modeller. >>> >>> It strongly encourages a practice of running an experiment, designing a model to fit those results, and then claiming this as a bona fide prediction. 
>>> >>> It encourages a practice of running a battery of experiments and reporting only those that match the model's output. >>> >>> It encourages the creation of predictions which cannot fail, and are therefore less informative >>> >>> It encourages a mindset that a model is a failure if all of its predictions are not validated, when in fact we actually learn more from a failed prediction than a successful one. >>> >>> It makes it easier for experimentalists to ignore models, since such modelling papers are "self contained". >>> >>> I was thinking that, instead of the current practice, it should be permissible and even encouraged that a modelling paper should not include empirical validation, but instead include a broader array of predictions. Thus instead of 3 successfully tested predictions from the PI's own lab, a model might include 10 untested predictions for a variety of different experimental techniques. This practice will, I suspect, lead to the development of bolder theories, stronger tests, and most importantly, tighter ties between empiricists and theoreticians. >>> >>> I am certainly not advocating that modellers shouldn't test their own models, but rather that it should be permissible to publish a model without testing it first. The testing paper could come later. >>> >>> I also realize that this shift in publication expectations wouldn't prevent the problems described above, but it would at least not reward them. >>> >>> I also think that modellers should make a concerted effort to target empirical journals to increase the visibility of models. This effort should coincide with a shift in writing style to make such models more accessible to non modellers. >>> >>> What do people think of this? If there is broad agreement, what would be the best way to communicate this desire to journal editors? >>> >>> Any advice welcome! 
>>> -Brad >>> >>> -- >>> Brad Wyble >>> Assistant Professor >>> Psychology Department >>> Penn State University >>> >>> http://wyblelab.com > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower
From achler at gmail.com Wed Jan 29 19:41:55 2014 From: achler at gmail.com (Tsvi Achler) Date: Wed, 29 Jan 2014 16:41:55 -0800 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: <00c701cf1cdc$9738f930$c5aaeb90$@is.umk.pl> References: <00c701cf1cdc$9738f930$c5aaeb90$@is.umk.pl> Message-ID: I can't resist commenting anymore. It is interesting that the subject of quantum mechanics is being discussed here. From what I understand, Einstein never liked quantum mechanics because it replaces mechanics that are unknown with statistics. Subsequently, electrons and atomic particle locations are represented by probability density functions. In this way, as-yet-unknown mechanisms can be quantified and characterized statistically, moving the physics forward to some degree. However, neuroscience has an equivalent: Bayesian networks. Using a statistical framework, neural connections and activations are modeled with statistical distributions. Using the conditional density function, the priors and their distributions, and Bayes' equation, the posterior probability (recognition) is calculated. Bayesian methods have been extremely successful in explaining cognitive and neural data. However, they are not true neural networks because the connections are statistically defined. In some sense I think connectionists are being squeezed by Bayesian networks because they are now the go-to for cognitive modeling, not neural networks. I think the neural network community is in the unique position to truly explain neural function, and that's its biggest contribution. Here are my 2 cents: I think if we do not stay close to the biology, we will be swallowed up by statistical methods. I would like to hear what others think. Sincerely, -Tsvi On Wed, Jan 29, 2014 at 2:26 AM, Włodzisław Duch wrote: > Dear all, > > > > QM has yet to show some advantages over strong synchronization in classical > models that unifies the activity of the whole network.
There is another > aspect to this discussion: we need to go beyond naïve interpretation of > assigning functions to activity of single structures. We have to use a > formalism similar to the quantum mechanical representation theory in Hilbert > space, decomposing brain activations into combinations of other activations. > I wrote a bit about it in sec. 2 of "Neurolinguistic Approach to Natural > Language Processing", Neural Networks 21(10), 1500-1510, 2008. > > > > QM seems to be attractive because we do not understand how to make a > transition between brain activations and subjective experience, described in > some psychological spaces, outside and inside (3rd and 1st person) points of > view. I have tried to explain it in a paper for APA, "Mind-Brain Relations, > Geometric Perspective and Neurophenomenology", American Philosophical > Association Newsletter 12(1), 1-7, 2012. > > QM formalism of representation theory may be useful also for classical > distributed computing systems. > > > > Best regards, Włodek Duch > > ____________________ > > Google W. Duch > > > > From: Connectionists [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] > On Behalf Of Carson Chow > Sent: Tuesday, January 28, 2014 10:01 PM > To: connectionists at mailman.srv.cs.cmu.edu > Subject: Re: Connectionists: Physics and Psychology (and the C-word) > > > > Brian, > > Quantum mechanics can be completely simulated on a classical computer, so if > quantum mechanics does matter for C then it must be a matter of computational > efficiency and nothing more. We also know that BQP (i.e. the set of problems > solved efficiently on a quantum computer) is bigger than BPP (the set of > problems solved efficiently on a classical computer), but not by much. I'm > not fully up to date on this, but I think factoring and boson sampling are about the only two examples that are in BQP and not in BPP.
We also know > that BPP is much smaller than NP, so if C does require QM then for some > reason it sits in a small sliver of complexity space. > > best, > Carson > > PS I do like your self-consistent test for confirming consciousness. I once > proposed that we could just run Turing machines and see which ones asked why > they exist as a test of C. Kind of similar to your idea. > > On 1/28/14 3:09 PM, Brian J Mingus wrote: > > Hi Richard, thanks for the feedback. > > > >> Yes, in general, having an outcome measure that correlates with C ... that >> is good, but only with a clear and unambiguous meaning for C itself (which I >> don't think anyone has, so therefore it is, after all, of no value to look >> for outcome measures that correlate) > > > > Actually, the outcome measure I described is independent of a clear and > unambiguous meaning for C itself, and in an interesting way: the models, > like us, essentially reinvent the entire literature, and have a conversation > as we do, inventing almost all the same positions that we've invented > (including the one in your paper). > > > > I will read your paper and see if it changes my position. At the present > time, however, I can't imagine any information that would solve the > so-called zombie problem. I'm not a big fan of integrated information > theory - I don't think hydrogen atoms are conscious, and I don't think naive > Bayes trained on a large corpus and run in generative mode is conscious. > Thus, if the model doesn't go through the same philosophical reasoning that > we've collectively gone through with regards to subjective experience, then > I'm going to wonder if its experience is anything like mine at all. > > > > Touching back on QM, if we create a point neuron-based model that doesn't > wax philosophical on consciousness, I'm going to wonder if we should add > lower levels of analysis. > > > > I will take a look at your paper, and see if it changes my view on this at > all.
> > > > Cheers, > > > > Brian Mingus > > > > http://grey.colorado.edu/mingus > > > > > > > > On Tue, Jan 28, 2014 at 12:05 PM, Richard Loosemore > wrote: > > > > Brian, > > Everything hinges on the definition of the concept ("consciousness") under > consideration. > > In the chapter I wrote in Wang & Goertzel's "Theoretical Foundations of > Artificial General Intelligence" I pointed out (echoing Chalmers) that too > much is said about C without a clear enough understanding of what is meant > by it .... and then I went on to clarify what exactly could be meant by it, > and thereby came to a resolution of the problem (with testable predictions). > So I think the answer to the question you pose below is that: > > (a) Yes, in general, having an outcome measure that correlates with C ... > that is good, but only with a clear and unambiguous meaning for C itself > (which I don't think anyone has, so therefore it is, after all, of no value > to look for outcome measures that correlate), and > > (b) All three of the approaches you mention are sidelined and finessed by > the approach I used in the abovementioned paper, where I clarify the > definition by clarifying first why we have so much difficulty defining it. > In other words, there is a fourth way, and that is to explain it as ... > well, I have to leave that dangling because there is too much subtlety to > pack into an elevator pitch. (The title is the best I can do: "Human and > Machine Consciousness as a Boundary Effect in the Concept Analysis Mechanism".) > > Certainly though, the weakness of all quantum mechanics 'answers' is that > they are stranded on the wrong side of the explanatory gap. > > > Richard Loosemore > > > Reference > Loosemore, R.P.W. (2012). Human and Machine Consciousness as a Boundary > Effect in the Concept Analysis Mechanism. In: P. Wang & B. Goertzel (Eds), > Theoretical Foundations of Artificial General Intelligence. Atlantis Press.
> http://richardloosemore.com/docs/2012a_Consciousness_rpwl.pdf > > > > > On 1/28/14, 10:34 AM, Brian J Mingus wrote: > > Hi Richard, > > > >> I can tell you that the quantum story isn't nearly clear enough in the >> minds of physicists, yet, so how it can be applied to the C question is >> beyond me. Frankly, it does NOT apply: saying anything about observers and >> entanglement does not at any point touch the kind of statements that involve >> talk about qualia etc. > > > > I'm not sure I see the argument you're trying to make here. If you have an > outcome measure that you agree correlates with consciousness, then we have a > framework for scientifically studying it. > > > > Here's my setup: If you create a society of models and do not expose them to > a corpus containing consciousness philosophy and they then, in a reasonably > short amount of time, independently rewrite it, they are almost certainly > conscious. This design explicitly rules out a generative model that > accidentally spits out consciousness philosophy. > > > > Another approach is to accept that our brains are so similar that you and I > are almost certainly both conscious, and to then perform experiments on each > other and study our subjective reports. > > > > Another approach is to perform experiments on your own brain and to write > first-person reports about your experience. > > > > These three approaches each have tradeoffs, and each provides unique > information. The first approach, in particular, might ultimately allow us to > draw some of the strongest possible conclusions. For example, it allows for > the scientific study of the extent to which quantum effects may or may not > be relevant. > > > > I'm very interested in hearing any counterarguments as to why this general > approach won't work. If it can't work, then I would argue that perhaps we > should not create full models of ourselves, but should instead focus on > upgrading ourselves.
From that perspective, getting this to work is > extremely important, however futuristic it may seem. > > > >> So let's let that sleeping dog lie.... (?). > > > > Not gonna' happen. :) > > > > Brian Mingus > > http://grey.colorado.edu > > > > On Tue, Jan 28, 2014 at 7:32 AM, Richard Loosemore > wrote: > > On 1/27/14, 11:30 PM, Brian J Mingus wrote: > > Consciousness is also such a bag of worms that we can't rule out that qualia > owe their totally non-obvious and a priori unpredicted existence to concepts > derived from quantum mechanics, such as nested observers, or entanglement. > > As far as I know, my litmus test for a model is the only way to tell whether > low-level quantum effects are required: if the model, which has not been > exposed to a corpus containing consciousness philosophy, then goes on to > independently recreate consciousness philosophy, despite the fact that it is > composed of (for example) point neurons, then we can be sure that low-level > quantum mechanical details are not important. > > Note, however, that such a model might still rely on nested observers or > entanglement. I'll let a quantum physicist chime in on that - although I > will note that, according to news articles I've read, we keep managing to > entangle larger and larger objects - up to the size of molecules at this > time, IIRC. > > > Brian Mingus > http://grey.colorado.edu/mingus > > Speaking as someone who is both a physicist and a cognitive scientist, AND > someone who has written papers resolving that whole C-word issue, I can tell > you that the quantum story isn't nearly clear enough in the minds of > physicists, yet, so how it can be applied to the C question is beyond me. > Frankly, it does NOT apply: saying anything about observers and > entanglement does not at any point touch the kind of statements that involve > talk about qualia etc. So let's let that sleeping dog lie.... (?). > > As for using the methods/standards of physics over here in cog sci .....
I > think it best to listen to George Bernard Shaw on this one: "Never do unto > others as you would they do unto you: their tastes may not be the same." > > Our tastes (requirements/constraints/issues) are quite different, so what > happens elsewhere cannot be directly, slavishly imported. > > > Richard Loosemore > > Wells College > Aurora NY > USA > > > > From brian.mingus at colorado.edu Wed Jan 29 20:10:44 2014 From: brian.mingus at colorado.edu (Brian J Mingus) Date: Wed, 29 Jan 2014 18:10:44 -0700 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <00c701cf1cdc$9738f930$c5aaeb90$@is.umk.pl> Message-ID: Hi Tsvi, What does it take for an algorithm to feel? If you have the answer to that question, then perhaps you can create mean-field, Bayesian, and neural network algorithms with differing amounts of detail, all of which feel human. Brian http://grey.colorado.edu/mingus From terry at salk.edu Wed Jan 29 20:06:38 2014 From: terry at salk.edu (Terry Sejnowski) Date: Wed, 29 Jan 2014 17:06:38 -0800 (PST) Subject: Connectionists: Brain-like computing fanfare and big data fanfare Message-ID: There are some misconceptions about the BRAIN Initiative that seem to be driving this discussion, and some facts might help focus the issues. The NIH advisory group issued an interim report that made recommendations to the director of NIH, which was used to issue the RFAs for FY 2014: http://acd.od.nih.gov/presentations/BRAIN-Interim-Report.pdf See section 5 on Page 33: "Theory, Modeling and Statistics Will Be Essential to Understanding the Brain" There are many opportunities for those on this list to contribute to the BRAIN Initiative. Here are a few: p. 32: 4.
The Importance of Behavior "High Priority Research Area for FY2014: Link Neuronal Activity to Behavior. The clever use of virtual reality, machine learning, and miniaturized recording devices has the potential to dramatically increase our understanding of how neuronal activity underlies cognition and behavior. This path can be enabled by developing technologies to quantify and interpret animal behavior, at high temporal and spatial resolution, reliably, objectively, over long periods of time, under a broad set of conditions, and in combination with concurrent measurement and manipulation of neuronal activity." p. 35: New Statistical and Quantitative Approaches to New Kinds of Data "As new kinds of data become available through advances in molecular sensors and optical recording, equal effort must be expended to extract maximum insight from these novel data sets. Data analytic and theoretical problems are likely to emerge that we cannot anticipate at the present time. Resources should be available for experts from essential disciplines such as statistics, optimization, signal processing and machine learning to develop new approaches to identifying and analyzing the relevant signals." p. 16: "The BRAIN Initiative will deliver transformative scientific tools and methods that should accelerate all of basic neuroscience, translational neuroscience, and direct disease studies, as well as biology beyond neuroscience. It will deliver a foundation of knowledge about the function of the normal brain, its cellular components, the wiring of its circuits, its patterns of electrical activity at local and global scales, the causes and effects of those activity patterns, and the expression of brain activity in behavior. 
Through the interaction of experiment and theory, the BRAIN Initiative should elucidate the computational logic as well as the specific mechanisms of brain function at different spatial and temporal scales, defining the connections between molecules, neurons, circuits, activity, and behavior." p. 5: Recommendation #1. Generate a Census of Cell Types. We do not know how many types of neurons and glia there are. Two neurons with the same morphology could project to different areas and have different functions. Classification of cell types will depend on analyzing the high-dimensional transcriptome and proteome of single cells and combining this with anatomical and physiological data. Why is this important? Experimenters need to label and manipulate each cell type to discover its function. There are deep issues of what constitutes cell identity that need to be settled, which will depend on analyzing the distribution of heterogeneous data in high-dimensional spaces. p. 50: 8d. Establish Platforms for Sharing Data The goal of the BRAIN Initiative is not to create big data sets. This is already happening in neuroscience, as it is in every area of science. One of the recommendations is to make it easier for neuroscientists to share their data so others can analyze them. This is an interim report. The final report, due in June 2014, will have more specific priorities, milestones and goals for each of the recommendations. Note also that there are 3 agencies involved in the BRAIN Initiative. NSF has not yet announced its program. DARPA has announced 2 BAAs: http://www.cccblog.org/2013/12/02/darpa-announces-two-programs-as-part-of-white-house-brain-initiative/ Finally, new money for the BRAIN Initiative was set aside in the budget that was recently passed by Congress.
Terry ----- From ivan.g.raikov at gmail.com Wed Jan 29 20:29:35 2014 From: ivan.g.raikov at gmail.com (Ivan Raikov) Date: Thu, 30 Jan 2014 10:29:35 +0900 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <00c701cf1cdc$9738f930$c5aaeb90$@is.umk.pl> Message-ID: Well, I think it is not so clear that we will be swallowed up by statistical methods, because exact Bayesian inference is NP-hard. I think you will have a really hard time arguing that the brain solves NP-hard problems. In fact, I think a central question is how the heck does the brain *avoid* having to solve NP-hard problems :-) Of course, there are many kinds of inexact inference algorithms based on random sampling, which have polynomial time complexity, but they usually have problems with optimality. So I think statistics is just one more tool for those cases when there is no data. Ivan On Thu, Jan 30, 2014 at 9:41 AM, Tsvi Achler wrote: > I can't resist commenting anymore. > > It is interesting that the subject of quantum mechanics is being > discussed here. From what I understand, Einstein never liked quantum > mechanics because it replaces mechanics that are unknown with > statistics. Subsequently electrons and atomic particle locations are > represented by probabilistic density functions. In this way, yet > unknown mechanisms can be quantified and characterized statistically > moving the physics forward to some degree. > > However neuroscience has an equivalent. It is Bayesian networks. > Using a statistical framework, neural connections and activations are > modeled with statistical distributions. Using the conditional > density function, priors, their distributions, and the Bayes equation > the posterior probability (recognition) is calculated. > > Bayesian methods have been extremely successful in explaining > cognitive and neural data. However they are not true neural networks > because the connections are statistically defined.
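Tsvi's description of Bayesian recognition and Ivan's contrast between exact and sampling-based inference can be sketched concretely. The toy model below is purely illustrative (its variables, numbers, and function names are invented for this sketch, not taken from any post in the thread): it computes a posterior "recognition" probability exactly via Bayes' rule, and approximates the same quantity by likelihood-weighted sampling, the kind of polynomial-time-per-sample approximation Ivan mentions.

```python
# Toy sketch (assumed example, not anyone's actual model): posterior
# recognition via Bayes' rule, exact and by likelihood-weighted sampling.
import random

# Hypothetical binary category C with prior P(C=1), and one noisy binary
# feature F with likelihoods P(F=1 | C=c). All numbers are made up.
PRIOR_C1 = 0.3
LIK_F1 = {1: 0.9, 0: 0.2}  # P(F=1 | C=c)

def exact_posterior(f_obs=1):
    """P(C=1 | F=f_obs) by direct enumeration (feasible only for tiny models)."""
    num = PRIOR_C1 * (LIK_F1[1] if f_obs else 1 - LIK_F1[1])
    den = num + (1 - PRIOR_C1) * (LIK_F1[0] if f_obs else 1 - LIK_F1[0])
    return num / den

def sampled_posterior(f_obs=1, n=100_000, seed=0):
    """Approximate the same posterior by likelihood weighting: draw C from
    the prior, weight each draw by the likelihood of the observation."""
    rng = random.Random(seed)
    w_c1 = w_total = 0.0
    for _ in range(n):
        c = 1 if rng.random() < PRIOR_C1 else 0
        w = LIK_F1[c] if f_obs else 1 - LIK_F1[c]
        w_total += w
        if c == 1:
            w_c1 += w
    return w_c1 / w_total

print(exact_posterior(1))    # 0.27 / 0.41, about 0.659
print(sampled_posterior(1))  # close to the exact value for large n
```

For a two-valued model the exact computation is trivial; the point of the sketch is that exact enumeration grows exponentially with the number of variables, while the sampling loop stays cheap per sample at the cost of an approximation error that shrinks as n grows.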
In some sense I > think connectionists are being squeezed by Bayesian networks because > they are now the go-to for cognitive modeling, not neural networks. > > I think the neural network community is in the unique position to > truly explain neural function, and that's its biggest contribution. > Here are my 2 cents: I think if we do not stay close to the biology we > will be swallowed up by statistical methods. > > I would like to hear what others think. > > Sincerely, > -Tsvi > > On Wed, Jan 29, 2014 at 2:26 AM, Włodzisław Duch wrote: > > Dear all, > > > > > > > > QM has yet to show some advantages over strong synchronization in > classical > > models that unify the activity of the whole network. There is another > > aspect to this discussion: we need to go beyond naïve interpretation of > > assigning functions to activity of single structures. We have to use a > > formalism similar to the quantum mechanical representation theory in > Hilbert > > space, decomposing brain activations into combinations of other > activations. > > I wrote a bit about it in sec. 2 of "Neurolinguistic Approach to Natural > > Language Processing", Neural Networks 21(10), 1500-1510, 2008. > > > > > > > > QM seems to be attractive because we do not understand how to make a > > transition between brain activations and subjective experience, > described in > > some psychological spaces, outside and inside (3rd and 1st person) > points of > > view. I have tried to explain it in a paper for APA, Mind-Brain > Relations, > > Geometric Perspective and Neurophenomenology, American Philosophical > > Association Newsletter 12(1), 1-7, 2012 > > > > QM formalism of representation theory may be useful also for classical > > distributed computing systems. > > > > > > > > Best regards, Włodek Duch > > > > ____________________ > > > > Google W.
Duch > > > > > > > > > > > > From: Connectionists [mailto: > connectionists-bounces at mailman.srv.cs.cmu.edu] > > On Behalf Of Carson Chow > > Sent: Tuesday, January 28, 2014 10:01 PM > > To: connectionists at mailman.srv.cs.cmu.edu > > Subject: Re: Connectionists: Physics and Psychology (and the C-word) > > > > > > > > Brian, > > > > Quantum mechanics can be completely simulated on a classical computer so > if > > quantum mechanics do matter for C then it must be a matter of > computational > > efficiency and nothing more. We also know that BQP (i.e. set of problems > > solved efficiently on a quantum computer) is bigger than BPP (set of > > problems solved efficiently on a classical computer) but not by much. > I'm > > not fully up to date on this but I think factoring and boson sampling are > > about the only two examples that are in BQP and not in BPP. We also know > > that BPP is much smaller than NP, so if C does require QM then for some > > reason it sits in a small sliver of complexity space. > > > > best, > > Carson > > > > PS I do like your self-consistent test for confirming consciousness. I > once > > proposed that we could just run Turing machines and see which ones asked > why > > they exist as a test of C. Kind of similar to your idea. > > > > On 1/28/14 3:09 PM, Brian J Mingus wrote: > > > > Hi Richard, thanks for the feedback. > > > > > > > >> Yes, in general, having an outcome measure that correlates with C ...
> that > >> is good, but only with a clear and unambigous meaning for C itself > (which I > >> don't think anyone has, so therefore it is, after all, of no value to > look > >> for outcome measures that correlate) > > > > > > > > Actually, the outcome measure I described is independent of a clear and > > unambiguous meaning for C itself, and in an interesting way: the models, > > like us, essentially reinvent the entire literature, and have a > conversation > > as we do, inventing almost all the same positions that we've invented > > (including the one in your paper). > > > > > > > > I will read your paper and see if it changes my position. At the present > > time, however, I can't imagine any information that would solve the > > so-called zombie problem. I'm not a big fan of integrative information > > theory - I don't think hydrogen atoms are conscious, and I don't think > naive > > bayes trained on a large corpus and run in generative mode is conscious. > > Thus, if the model doesn't go through the same philosophical reasoning > that > > we've collectively gone through with regards to subjective experience, > then > > I'm going to wonder if its experience is anything like mine at all. > > > > > > > > Touching back on QM, if we create a point neuron-based model that doesn't > > wax philosophical on consciousness, I'm going to wonder if we should add > > lower levels of analysis. > > > > > > > > I will take a look at your paper, and see if it changes my view on this > at > > all. > > > > > > > > Cheers, > > > > > > > > Brian Mingus > > > > > > > > http://grey.colorado.edu/mingus > > > > > > > > > > > > On Tue, Jan 28, 2014 at 12:05 PM, Richard Loosemore < > rloosemore at susaro.com> > > wrote: > > > > > > > > Brian, > > > > Everything hinges on the definition of the concept ("consciousness") > under > > consideration. 
> > > > In the chapter I wrote in Wang & Goertzel's "Theoretical Foundations of > > Artificial General Intelligence" I pointed out (echoing Chalmers) that > too > > much is said about C without a clear enough understanding of what is > meant > > by it .... and then I went on to clarify what exactly could be meant by > it, > > and thereby came to a resolution of the problem (with testable > predictions). > > So I think the answer to the question you pose below is that: > > > > (a) Yes, in general, having an outcome measure that correlates with C ... > > that is good, but only with a clear and unambiguous meaning for C itself > > (which I don't think anyone has, so therefore it is, after all, of no > value > > to look for outcome measures that correlate), and > > > > (b) All three of the approaches you mention are sidelined and finessed by > > the approach I used in the abovementioned paper, where I clarify the > > definition by clarifying first why we have so much difficulty defining > it. > > In other words, there is a fourth way, and that is to explain it as ... > > well, I have to leave that dangling because there is too much subtlety to > > pack into an elevator pitch. (The title is the best I can do: " Human > and > > Machine Consciousness as a Boundary Effect in the Concept Analysis > Mechanism > > "). > > > > Certainly though, the weakness of all quantum mechanics 'answers' is that > > they are stranded on the wrong side of the explanatory gap. > > > > > > Richard Loosemore > > > > > > Reference > > Loosemore, R.P.W. (2012). Human and Machine Consciousness as a Boundary > > Effect in the Concept Analysis Mechanism. In: P. Wang & B. Goertzel > (Eds), > > Theoretical Foundations of Artificial General Intelligence. Atlantis > Press.
> > http://richardloosemore.com/docs/2012a_Consciousness_rpwl.pdf > > > > > > > > > > On 1/28/14, 10:34 AM, Brian J Mingus wrote: > > > > Hi Richard, > > > > > > > >> I can tell you that the quantum story isn't nearly enough clear in the > >> minds of physicists, yet, so how it can be applied to the C question is > >> beyond me. Frankly, it does NOT apply: saying anything about > observers and > >> entanglement does not at any point touch the kind of statements that > involve > >> talk about qualia etc. > > > > > > > > I'm not sure I see the argument you're trying to make here. If you have > an > > outcome measure that you agree correlates with consciousness, then we > have a > > framework for scientifically studying it. > > > > > > > > Here's my setup: If you create a society of models and do not expose > them to > > a corpus containing consciousness philosophy and they then, in a > reasonably > > short amount of time, independently rewrite it, they are almost certainly > > conscious. This design explicitly rules out a generative model that > > accidentally spits out consciousness philosophy. > > > > > > > > Another approach is to accept that our brains are so similar that you > and I > > are almost certainly both conscious, and to then perform experiments on > each > > other and study our subjective reports. > > > > > > > > Another approach is to perform experiments on your own brain and to write > > first person reports about your experience. > > > > > > > > These three approaches each have tradeoffs, and each provide unique > > information. The first approach, in particular, might ultimately allow > us to > > draw some of the strongest possible conclusions. For example, it allows > for > > the scientific study of the extent to which quantum effects may or may > not > > be relevant. > > > > > > > > I'm very interested in hearing any counterarguments as to why this > general > > approach won't work. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From rloosemore at susaro.com Wed Jan 29 20:38:34 2014 From: rloosemore at susaro.com (Richard Loosemore) Date: Wed, 29 Jan 2014 20:38:34 -0500 Subject: Connectionists: Best practices in model publication In-Reply-To: References: <52E7DB20.2080509@pitt.edu> Message-ID: <52E9AD1A.4080101@susaro.com> On 1/29/14, 6:52 PM, james bower wrote: > Interesting > > With respect to the cortical column discussion we didn't yet have > (whether they exist or not), there were actually two papers published > by Vernon Mountcastle in the late 1950s in which the cortical column > idea was introduced. > > The first included mostly the data, the second mostly the idea. > > I once plotted literature citations for the two papers. For the first > 10 years, the data paper was cited much more than the theory paper.
> However, 15 years out they crossed and now the data paper is almost > never cited. > > So, as mentioned earlier with Marr and Albus, perhaps it is a kind of > theory envy in neuroscience, but it is not at all unusual in > neuroscience to have exactly the opposite be the case, that the data > is forgotten and the theory persists. > > Perhaps all this is leading to an interesting article (or perhaps book > with a series of essays) on how physics and biology are similar and > different. Anyone interested? > I mentioned earlier in the discussion that I myself *have* explicitly considered this issue: trying to understand what it is about cognitive systems that could make them not directly amenable to the methods of physics. I published a paper about it in 2007 and then a chapter expanding the same idea in 2012 (refs below). The conclusions I came to have been (and I am sure will continue to be) completely ignored. Nobody wants to hear that this or that methodology is what they *should* be adopting. Nobody will ever give a hoot about such a message. It is considered, in the psychological/cognitive sciences, to be almost rude (not to say gauche) to tell other people how they should be doing science. So compile a book about "physics and biology are similar and different" if you feel inclined, but all it will do is act as a bookcase-weight. Richard Loosemore Refs: Loosemore, R.P.W. (2007). Complex Systems, Artificial Intelligence and Theoretical Psychology. In B. Goertzel & P. Wang (Eds.), Proceedings of the 2006 AGI Workshop. IOS Press, Amsterdam. Loosemore, R.P.W. (2012b). The Complex Cognitive Systems Manifesto. In: The Yearbook of Nanotechnology, Volume III: Nanotechnology, the Brain, and the Future, S. Hays, J. S. Robert, C. A. Miller, and I. Bennett (Eds). New York, NY: Springer, (2012) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bwyble at gmail.com Thu Jan 30 00:07:53 2014 From: bwyble at gmail.com (Brad Wyble) Date: Thu, 30 Jan 2014 00:07:53 -0500 Subject: Connectionists: Best practices in model publication In-Reply-To: <52E981D2.3020004@phys.psu.edu> References: <52E981D2.3020004@phys.psu.edu> Message-ID: Thanks for your input John, I appreciate it extremely. One of the points that I think is perhaps most important to improve on in our field is the scope of the predictions. In physics, predictions seem to occur on a much grander scale, and are very inspiring, and this itself engenders respect for the theoretical work. In neuroscience, the predictions are expected to be easily testable, which leads necessarily to predictions that are smaller in scope, and testable with technology that is already at hand. -Brad On Wed, Jan 29, 2014 at 5:33 PM, John Collins wrote: > As a physics colleague of Brad's, I'll take his hint to give a perspective > on his questions. Of course, current practice at one period in one area of > science may be totally inappropriate in another situation. I'll refer > primarily to elementary particle physics. > > Some differences between what Brad describes and what I see in physics > are: 1. In physics, theorists are notably self-consciously interested in > ideas that are general rather than just the modeling of a particular > phenomenon. Good ideas can relate many experimental situations, and the > predictivity of a theoretical idea may greatly exceed the initial > expectations of its author. Some of these (e.g., the Standard "Model") are > amazingly successful. 2. Both theory and experiment are so difficult that > specialization is inevitable. 3. Some of the time scales for making > experimental measurements are long: well over a decade. 4. In many modern > physics theories, a lot hangs on self-consistency of the theoretical > framework. Given an initial idea (induced from data) much work is > sometimes needed to convert it to an implementable theory or method.
This > can proceed almost autonomously from day-to-day contact with real data. > (String theory is a well-known extreme example of this.) > > (N.B. Interesting gaps can remain in well-established theoretical work and > can be unperceived by many practitioners, as Randy found.) > > Another tendency is for theorists to provide software for simulations > rather than simply computing predictions. Then experimentalists apply > these to make theoretical predictions to compare with their own actual > data. This is in addition to the simulations the experimental groups > themselves construct to model their complicated detectors. For a recent > example, see the article at http://arxiv.org/abs/1312.5353, and search in > the pdf file for "simulation". (It's a long paper, I'm afraid.) > > John Collins > > > > > On 01/28/2014 08:25 AM, Brad Wyble wrote: > >> Thanks Randal, that's a great suggestion. I'll ask my colleagues in >> physics for their perspective as well. >> >> -Brad >> >> >> On Mon, Jan 27, 2014 at 11:54 PM, Randal Koene > > wrote: >> >> Hi Brad, >> This reminds me of theoretical physics, where proposed models are >> expounded in papers, often without the ability to immediately carry >> out empirical tests of all the predictions. Subsequently, >> experiments are often designed to compare and contrast different >> models. >> Perhaps a way to advance this is indeed to make the analogy with >> physics? >> > > -- Brad Wyble Assistant Professor Psychology Department Penn State University http://wyblelab.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From christos.dimitrakakis at gmail.com Thu Jan 30 01:20:52 2014 From: christos.dimitrakakis at gmail.com (Christos Dimitrakakis) Date: Thu, 30 Jan 2014 07:20:52 +0100 Subject: Connectionists: Best practices in model publication In-Reply-To: References: <52E7DB20.2080509@pitt.edu> Message-ID: <52E9EF44.9050908@gmail.com> For what it's worth, I think that the publication landscape in biology and especially neuroscience is much closer to economics than physics. This is mainly due to our relative inability to isolate processes and perform independent experiments. > > However, I believe that the conversation we have been having about the > epistemology of physics and biology might be of considerable more > general interest. I know that this has been commented on by others > before (Richard F for example), but I don't know of any volume of essays > on the subject. > From christos.dimitrakakis at gmail.com Thu Jan 30 01:31:49 2014 From: christos.dimitrakakis at gmail.com (Christos Dimitrakakis) Date: Thu, 30 Jan 2014 07:31:49 +0100 Subject: Connectionists: Best practices in model publication In-Reply-To: <52E9AD1A.4080101@susaro.com> References: <52E7DB20.2080509@pitt.edu> <52E9AD1A.4080101@susaro.com> Message-ID: <52E9F1D5.5050700@gmail.com> I wouldn't complain too much about people ignoring your article, Richard - god knows I have a few about which I can say the same :) Anyway, the relationships between global and local mechanisms are studied both formally and informally - most notably in economics and decision theory, which are of course closely linked both to psychology and artificial intelligence. The models that are studied are sometimes amenable to theoretical analysis, but more and more frequently simulations are used. I really do not see the empirical disconnect that you claim, though of course most people are happier either tinkering or doing theoretical analysis.
-- Christos Dimitrakakis http://www.cse.chalmers.se/~chrdimi/ From pelillo at dsi.unive.it Thu Jan 30 07:46:54 2014 From: pelillo at dsi.unive.it (Marcello Pelillo) Date: Thu, 30 Jan 2014 13:46:54 +0100 (CET) Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <00c701cf1cdc$9738f930$c5aaeb90$@is.umk.pl> Message-ID: On Thu, 30 Jan 2014, Ivan Raikov wrote: > Well, I think it is not so clear that we will be swallowed up by statistical methods, because exact Bayesian inference is NP-hard. > I think you will have a really hard time arguing that the brain solves NP-hard problems. > In fact, I think a central question is how the heck does the brain *avoid* having to solve NP-hard problems :-) > Of course, there are many kinds of inexact inference algorithms based on random sampling, which have polynomial time complexity, > but they usually have problems with optimality. So I think statistics is just one more tool for those cases when there is no data. > In using the argument from NP-hardness we should keep in mind that classical complexity theory deals with the worst-case scenario. In practice, however, one is interested in typical-case behavior, which is of course much more difficult to characterize. A while ago, Monasson et al. published a Nature paper precisely on this issue, and showed that intractable problems such as K-SAT exhibit phase transition phenomena. R. Monasson, R. Zecchina, S. Kirkpatrick, B. Selman, and L.
Troyansky, Determining Computational Complexity from Characteristic Phase Transitions, Nature 400, 133 (1999). In 1999 we also ran a NIPS workshop on this topic: http://www.dsi.unive.it/~nips99/ I think there were follow-ups to these studies but I'm not sure how far they have gone, though... Marcello --- Marcello Pelillo, FIEEE, FIAPR Professor of Computer Science Computer Vision and Pattern Recognition Lab, Director Center for Knowledge, Interaction and Intelligent Systems (KIIS), Director DAIS Ca' Foscari University, Venice Via Torino 155, 30172 Venezia Mestre, Italy Tel: (39) 041 2348.440 Fax: (39) 041 2348.419 E-mail: marcello.pelillo at gmail.com URL: http://www.dsi.unive.it/~pelillo > Ivan > > > From bower at uthscsa.edu Thu Jan 30 07:55:26 2014 From: bower at uthscsa.edu (james bower) Date: Thu, 30 Jan 2014 06:55:26 -0600 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: <87sis56bk8.fsf@cs.nuim.ie> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> <52E7FF7B.7090100@susaro.com> <52E817FD.2020400@susaro.com> <52E82256.2080906@pitt.edu> <3BCD5774-9EE4-4B12-A7BC-DFB62600A49B@uthscsa.edu> <87sis56bk8.fsf@cs.nuim.ie> Message-ID: <244EE72E-9BAB-486A-881B-6F20106D7F1F@uthscsa.edu> Hi Barak, Yes, was trying to be funny I suppose. Didn't mean to imply at all that the mammalian arrangement didn't make sense or vice versa. Aquatic animals also live in a space with lower photon flux. The fact that the neuronal structure of the retina itself is remarkably similar, but the entire structure is flipped, I think makes the point even more strongly that evolution found the same neuronal solution in two sets of animals independently. But thanks - isn't biology remarkable. Jim On Jan 30, 2014, at 3:52 AM, Barak A. Pearlmutter wrote: > Jim, > > As a pure aside... > >> ... accept (sic) that the Cepholapod retina is pointed in the 'right'
>> direction - i.e. towards the light source ... > > I think this is a bit of a "myth of science". It is true that the > mammalian retina has the circuitry and blood vessels between the lens > and the photoreceptors, while the cephalopod retina has them the other > way around. But the conclusion that this is a design flaw in the > mammalian eye is not quite so clear. There are four advantages that I > know of to putting the photoreceptors behind the circuitry. > > (1) If the back of the eyeball is reflective (as in a cat) light gets > two chances to be caught by a photoreceptor. Resolution when doing this > is degraded if the photoreceptors are moved away from the reflective > surface. > > (2) It is easier to keep the sheet of photoreceptors smooth. > > (3) If blood vessels are between the eyeball and the photoreceptors, the > heartbeat can make the photoreceptors wiggle. > > (4) The photoreceptors are very metabolically demanding, and can be > easier to maintain (nourish, clear toxic wastes) from the eyeball. > > --Barak. > -- > Barak A. Pearlmutter > Hamilton Institute & Dept Comp Sci, NUI Maynooth, Co. Kildare, Ireland > http://www.bcl.hamilton.ie/~barak/ Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower CONFIDENTIAL NOTICE: The contents of this email and any attachments to it may be privileged or contain privileged and confidential information. This information is only for the viewing or use of the intended recipient.
If you have received this e-mail in error or are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of, or the taking of any action in reliance upon, any of the information contained in this e-mail, or any of the attachments to this e-mail, is strictly prohibited and that this e-mail and all of the attachments to this e-mail, if any, must be immediately returned to the sender or destroyed and, in either case, this e-mail and all attachments to this e-mail must be immediately deleted from your computer without making any copies hereof and any and all hard copies made must be destroyed. If you have received this e-mail in error, please notify the sender by e-mail immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose at psychology.rutgers.edu Thu Jan 30 08:25:40 2014 From: jose at psychology.rutgers.edu (Stephen José Hanson) Date: Thu, 30 Jan 2014 08:25:40 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: Message-ID: <1391088340.3251.11.camel@sam> Terry, I think the issue for many of us was the avoidance of neuroimaging methods, other than to do research on new methods with better resolution (217). Clearly the Human Connectome RFP and funded groups are dominated by fMRI resting state, while the BRAIN Initiative is focused on cellular/circuit research. So, prior to all the Jim emails, it does seem there is a concern amongst imagers about why the advisory group decided to exclude neuroimaging. Steve On Wed, 2014-01-29 at 17:06 -0800, Terry Sejnowski wrote: > There are some misconceptions about the BRAIN Initiative that seem to be > driving this discussion and some facts might help focus these issues.
> > The NIH advisory group issued an interim report that made recommendations > to the director of NIH, which was used to issue the RFAs for FY 2014: > > http://acd.od.nih.gov/presentations/BRAIN-Interim-Report.pdf > > See section 5 on Page 33: > "Theory, Modeling and Statistics Will Be Essential to > Understanding the Brain" > > There are many opportunities for those on this list to contribute to the > BRAIN Initiative. Here are a few: > > p. 32: 4. The Importance of Behavior > "High Priority Research Area for FY2014: Link Neuronal Activity to > Behavior. The clever use of virtual reality, machine learning, and > miniaturized recording devices has the potential to dramatically increase > our understanding of how neuronal activity underlies cognition and > behavior. This path can be enabled by developing technologies to quantify > and interpret animal behavior, at high temporal and spatial resolution, > reliably, objectively, over long periods of time, under a broad set of > conditions, and in combination with concurrent measurement and > manipulation of neuronal activity." > > p. 35: New Statistical and Quantitative Approaches to New Kinds of Data > "As new kinds of data become available through advances in molecular > sensors and optical recording, equal effort must be expended to extract > maximum insight from these novel data sets. Data analytic and theoretical > problems are likely to emerge that we cannot anticipate at the present > time. Resources should be available for experts from essential disciplines > such as statistics, optimization, signal processing and machine learning > to develop new approaches to identifying and analyzing the relevant > signals." > > p. 16: "The BRAIN Initiative will deliver transformative scientific tools > and methods that should accelerate all of basic neuroscience, > translational neuroscience, and direct disease studies, as well as biology > beyond neuroscience.
It will deliver a foundation of knowledge about the > function of the normal brain, its cellular components, the wiring of its > circuits, its patterns of electrical activity at local and global scales, > the causes and effects of those activity patterns, and the expression of > brain activity in behavior. Through the interaction of experiment and > theory, the BRAIN Initiative should elucidate the computational logic as > well as the specific mechanisms of brain function at different spatial and > temporal scales, defining the connections between molecules, neurons, > circuits, activity, and behavior." > > p. 5: Recommendation #1. Generate a Census of Cell Types. We do not know > how many types of neurons and glia there are. Two neurons with the same > morphology could project to different areas and have different functions. > Classification of cell types will depend on analyzing the high-dimensional > transcriptome and proteome of single cells and combining this with > anatomical and physiological data. Why is this important? Experimenters > need to label and manipulate each cell type to discover its function. > There are deep issues of what constitutes cell identity that need to be > settled, which will depend on analyzing the distribution of heterogeneous > data in high-dimensional spaces. > > p. 50 8d. Establish Platforms for Sharing Data > The goal of the BRAIN Initiative is not to create big data sets. This is > already happening in neuroscience, as it is in every area of science. One > of the recommendations is to make it easier for neuroscientists to share > their data so others can analyze them. > > This is an interim report. The final report, due in June 2014, will have > more specific priorities, milestones and goals for each of the > recommendations. > > Note also that there are 3 agencies involved in the BRAIN Initiative. > NSF has not yet announced its program.
DARPA has announced 2 BAAs: > > http://www.cccblog.org/2013/12/02/darpa-announces-two-programs-as-part-of-white-house-brain-initiative/ > > Finally, new money for the BRAIN Initiative was set aside in the budget > that was recently passed by Congress. > > Terry > > ----- > -- Stephen José Hanson Director RUBIC (Rutgers Brain Imaging Center) Professor of Psychology Member of Cognitive Science Center (NB) Member EE Graduate Program (NB) Member CS Graduate Program (NB) Rutgers University email: jose at psychology.rutgers.edu web: psychology.rutgers.edu/~jose lab: www.rumba.rutgers.edu fax: 866-434-7959 voice: 973-353-3313 (RUBIC) -------------- next part -------------- An HTML attachment was scrubbed... URL: From bwyble at gmail.com Thu Jan 30 09:45:13 2014 From: bwyble at gmail.com (Brad Wyble) Date: Thu, 30 Jan 2014 09:45:13 -0500 Subject: Connectionists: Best practices in model publication In-Reply-To: References: <52E981D2.3020004@phys.psu.edu> Message-ID: Hi Shimon, > Unless I am mistaken in my interpretation of the stances expressed by > different contributors to this thread, this kind of prediction is supposed, > on at least one account, to be impossible/imprudent/whatever in trying to > understand the brain. Yet, here it is: veridicality as a mathematically > guaranteed generic property of neural representations. I guess sticking > exclusively to Genesis at the expense of an occasional glimpse at what > Prophets are up to may not be the most productive way ahead, after all :-) > > I suppose it was inevitable that this discussion would turn towards religion. By the way, I think that your example neatly epitomizes the plight of theory in our field. Even on those infrequent occasions when we get something right, credit is rarely given.
-Brad > > Shimon Edelman > Professor, Department of Psychology, 232 Uris Hall > Cornell University, Ithaca, NY 14853-7601 > http://kybele.psych.cornell.edu/~edelman > @shimonedelman > > -- Brad Wyble Assistant Professor Psychology Department Penn State University http://wyblelab.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From edelman at cornell.edu Thu Jan 30 09:37:32 2014 From: edelman at cornell.edu (Shimon Edelman) Date: Thu, 30 Jan 2014 14:37:32 +0000 Subject: Connectionists: Best practices in model publication In-Reply-To: References: <52E981D2.3020004@phys.psu.edu> Message-ID: <41EB2712-B546-4C99-9D41-4390184A5A01@cornell.edu> On Jan 30, 2014, at 12:07 AM, Brad Wyble wrote: > Thanks for your input John, I appreciate it extremely. One of the points that I think is perhaps most important to improve on in our field is the scope of the predictions. In physics, predictions seem to occur on a much grander scale, and are very inspiring, and this itself engenders respect for the theoretical work. > > In neuroscience, the predictions are expected to be easily testable, which leads necessarily to predictions that are smaller in scope, and testable with technology that is already at hand. > > -Brad Speaking of predictions, here's one, regarding the veridicality of the representation of some metric properties of the world in brains. In several papers and one book, published between 1994 and 1999, I predicted (from certain mathematical properties of smooth functions) that the ensemble activity of graded responses of neurons in the primate inferotemporal cortex will be found to capture similarity patterns over families of 3D shapes. Here's how the prediction was formulated in my 1998 BBS paper: "1.
The cell will respond equally to different views of its preferred object, but its response will decrease with parameter-space distance from the point corresponding the shape of the preferred object (three such cells have been reported by Logothetis et al. 1995). 2. The responses of a number of cells, each tuned to a different reference object, will carry enough information to classify novel stimuli of the same general category as the reference objects. 3. If the pattern of stimuli has a simple low-dimensional characterization in some underlying parameter space (as in Fig. 6, left), it will be recoverable from the ensemble response of a number of cells, using multidimensional scaling." Prediction #1 was really an explanation of some findings that just began to emerge when it was formulated. Predictions #2 and #3, however, were really about the future :-) I am particularly fond of #3. A couple of years later, it was tested and found correct ("Inferotemporal neurons represent low-dimensional configurations of parameterized shapes", Hans Op de Beeck, Johan Wagemans and Rufin Vogels, Nature Neuroscience 4:1244, 2001). Funny enough, that paper opened with this sentence: "Behavioral studies with parameterized shapes have shown that the similarities among these complex stimuli can be represented using a low number of dimensions." - apparently, a theoretical prediction wasn't good enough; they (or, more likely, the Nat Neuro reviewers) preferred to kick off with an empirical finding, as if that had existed in a theoretical vacuum... Unless I am mistaken in my interpretation of the stances expressed by different contributors to this thread, this kind of prediction is supposed, on at least one account, to be impossible/imprudent/whatever in trying to understand the brain. Yet, here it is: veridicality as a mathematically guaranteed generic property of neural representations.
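[Editorial note: prediction #3 describes a concrete analysis: take the ensemble responses of cells tuned to reference points in a shape-parameter space, compute pairwise distances between the response vectors, and apply multidimensional scaling. A minimal numpy-only sketch follows; the Gaussian tuning curves and all parameter values are illustrative assumptions of this note, not taken from the 1998 BBS paper.]

```python
import numpy as np

# Editorial sketch of prediction #3 (hypothetical parameters throughout):
# model cells with assumed Gaussian tuning to points in a 1-D shape-parameter
# space, then recover the stimulus configuration from the ensemble responses
# with classical (Torgerson) multidimensional scaling.

params = np.linspace(0.0, 1.0, 20)    # true 1-D stimulus parameters
centers = np.linspace(0.0, 1.0, 8)    # preferred shapes of 8 model cells
sigma = 0.35                          # assumed tuning width

# Ensemble response matrix: one row per stimulus, one column per cell.
resp = np.exp(-(params[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

# Pairwise Euclidean distances between ensemble response vectors.
dist = np.linalg.norm(resp[:, None, :] - resp[None, :, :], axis=-1)

def classical_mds(d, k=1):
    """Embed a distance matrix in k dimensions via double centering."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j        # double-centered Gram matrix
    w, v = np.linalg.eigh(b)           # eigenvalues in ascending order
    top = np.argsort(w)[::-1][:k]
    return v[:, top] * np.sqrt(np.maximum(w[top], 0.0))

recovered = classical_mds(dist, k=1)[:, 0]

# The recovered coordinate should track the true parameter up to sign/scale.
print(abs(np.corrcoef(params, recovered)[0, 1]))
```

Under these assumptions the recovered one-dimensional coordinate correlates strongly with the true shape parameter, which is the sense in which prediction #3 says the underlying configuration is "recoverable" from the ensemble response.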
I guess sticking exclusively to Genesis at the expense of an occasional glimpse at what Prophets are up to may not be the most productive way ahead, after all ;-) Cheers, --Shimon p.s. My papers are all available here: http://kybele.psych.cornell.edu/~edelman/archive.html Shimon Edelman Professor, Department of Psychology, 232 Uris Hall Cornell University, Ithaca, NY 14853-7601 http://kybele.psych.cornell.edu/~edelman @shimonedelman From barak at cs.nuim.ie Thu Jan 30 04:52:55 2014 From: barak at cs.nuim.ie (Barak A. Pearlmutter) Date: Thu, 30 Jan 2014 09:52:55 +0000 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: <3BCD5774-9EE4-4B12-A7BC-DFB62600A49B@uthscsa.edu> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> <52E7FF7B.7090100@susaro.com> <52E817FD.2020400@susaro.com> <52E82256.2080906@pitt.edu> <3BCD5774-9EE4-4B12-A7BC-DFB62600A49B@uthscsa.edu> Message-ID: <87sis56bk8.fsf@cs.nuim.ie> Jim, As a pure aside... > ... accept (sic) that the Cepholapod retina is pointed in the "right" > direction - i.e. towards the light source ... I think this is a bit of a "myth of science". It is true that the mammalian retina has the circuitry and blood vessels between the lens and the photoreceptors, while the cephalopod retina has them the other way around. But the conclusion that this is a design flaw in the mammalian eye is not quite so clear. There are four advantages that I know of to putting the photoreceptors behind the circuitry. (1) If the back of the eyeball is reflective (as in a cat) light gets two chances to be caught by a photoreceptor. Resolution when doing this is degraded if the photoreceptors are moved away from the reflective surface. (2) It is easier to keep the sheet of photoreceptors smooth. (3) If blood vessels are between the eyeball and the photoreceptors, the heartbeat can make the photoreceptors wiggle.
(4) The photoreceptors are very metabolically demanding, and can be easier to maintain (nourish, clear toxic wastes) from the eyeball. --Barak. -- Barak A. Pearlmutter Hamilton Institute & Dept Comp Sci, NUI Maynooth, Co. Kildare, Ireland http://www.bcl.hamilton.ie/~barak/ From bower at uthscsa.edu Thu Jan 30 11:29:02 2014 From: bower at uthscsa.edu (james bower) Date: Thu, 30 Jan 2014 10:29:02 -0600 Subject: Connectionists: Physics and Psychology (and the C-word) In-Reply-To: <244EE72E-9BAB-486A-881B-6F20106D7F1F@uthscsa.edu> References: <547820602.1495848.1390813780785.JavaMail.root@inria.fr> <11A62D04-E423-41ED-8DA0-E04C185B46FA@uthscsa.edu> <52E7BF66.3070000@susaro.com> <52E7FF7B.7090100@susaro.com> <52E817FD.2020400@susaro.com> <52E82256.2080906@pitt.edu> <3BCD5774-9EE4-4B12-A7BC-DFB62600A49B@uthscsa.edu> <87sis56bk8.fsf@cs.nuim.ie> <244EE72E-9BAB-486A-881B-6F20106D7F1F@uthscsa.edu> Message-ID: <9939366D-EF3F-4608-92EF-786C4FE9055C@uthscsa.edu> An interesting point has been made off list with respect to the squid and the mammalian retina. Given their remarkable structural similarity, maybe from a computational point of view it is irrelevant which way the thing points. I think an argument can be made for that point of view if you are interested in building a physical device. However, if your primary objective is to understand how the nervous system is "engineered", then, clearly, as pointed out nicely by Barak, there are additional constraints that one needs to take into account. One big one, likely, is that it costs biology to be inefficient - brains can't simply burn fossil fuels. Of course, it is probably a bad idea that we burn fossil fuels to run our own inefficient machines. Another reason to pay attention to biology, I would say. Jim Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies.
15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower -------------- next part -------------- An HTML attachment was scrubbed... URL: From bower at uthscsa.edu Thu Jan 30 11:46:43 2014 From: bower at uthscsa.edu (james bower) Date: Thu, 30 Jan 2014 10:46:43 -0600 Subject: Connectionists: Best practices in model publication In-Reply-To: <41EB2712-B546-4C99-9D41-4390184A5A01@cornell.edu> References: <52E981D2.3020004@phys.psu.edu> <41EB2712-B546-4C99-9D41-4390184A5A01@cornell.edu> Message-ID: The diminished role of true predictions in neuroscience is, obviously, another reflection of the lack of a computational base for the field. The vast majority of neuroscience papers are data descriptions, usually with a nod to something that sounds vaguely hypothesis- or theory-like in the introduction and discussion.
In many cases, in fact, the attempts to tie the work into some larger context are laughable. However, modelers and theorists don't help the situation by loose use of the word "prediction" either. Perhaps we should all adopt the distinction Shimon makes between "explanation of findings" and true predictions. While in the limit this is a complex epistemological issue that philosophers of science have debated for many years, in practical day-to-day application it is, I think, often a pretty clear distinction. Sadly, under current circumstances, it seems to me that many theorists and modelers, to get any attention, feel the need to assert the importance of their work by listing all its "predictions". I have sat next to many an experimentalist in workshops and meetings who just roll their eyes at these kinds of presentations. Jim On Jan 30, 2014, at 8:37 AM, Shimon Edelman wrote: > > On Jan 30, 2014, at 12:07 AM, Brad Wyble wrote: > >> Thanks for your input John, I appreciate it extremely. One of the points that I think is perhaps most important to improve on in our field is the scope of the predictions. In physics, predictions seem to occur on a much grander scale, and are very inspiring, and this itself engenders respect for the theoretical work. >> >> In neuroscience, the predictions are expected to be easily testable, which leads necessarily to predictions that are smaller in scope, and testable with technology that is already at hand. >> >> -Brad > > > Speaking of predictions, here's one, regarding the veridicality of the representation of some metric properties of the world in brains. In several papers and one book, published between 1994 and 1999, I predicted (from certain mathematical properties of smooth functions) that the ensemble activity of graded responses of neurons in the primate inferotemporal cortex will be found to capture similarity patterns over families of 3D shapes. Here's how the prediction was formulated in my 1998 BBS paper: > > > "1.
The cell will respond equally to different views of its > preferred object, but its response will decrease with parameter-space distance from the point corresponding the > shape of the preferred object (three such cells have been > reported by Logothetis et al. 1995). > 2. The responses of a number of cells, each tuned to a > different reference object, will carry enough information to > classify novel stimuli of the same general category as the > reference objects. > 3. If the pattern of stimuli has a simple low-dimensional > characterization in some underlying parameter space (as in > Fig. 6, left), it will be recoverable from the ensemble response > of a number of cells, using multidimensional scaling." > > Prediction #1 was really an explanation of some findings that just began to emerge when it was formulated. Predictions #2 and #3, however, were really about the future :-) > I am particularly fond of #3. A couple of years later, it was tested and found correct ("Inferotemporal neurons represent low-dimensional configurations of parameterized shapes", Hans Op de Beeck, Johan Wagemans and Rufin Vogels, Nature Neuroscience 4:1244, 2001). Funny enough, that paper opened with this sentence: "Behavioral studies with parameterized shapes have shown that the similarities among these complex stimuli can be represented using a low number of dimensions." - apparently, a theoretical prediction wasn't good enough; they (or, more likely, the Nat Neuro reviewers) preferred to kick off with an empirical finding, as if that had existed in a theoretical vacuum... > > Unless I am mistaken in my interpretation of the stances expressed by different contributors to this thread, this kind of prediction is supposed, on at least one account, to be impossible/imprudent/whatever in trying to understand the brain. Yet, here it is: veridicality as a mathematically guaranteed generic property of neural representations.
I guess sticking exclusively to Genesis at the expense of an occasional glimpse at what Prophets are up to may not be the most productive way ahead, after all ;-) > > Cheers, > > --Shimon > > p.s. My papers are all available here: > http://kybele.psych.cornell.edu/~edelman/archive.html > > > Shimon Edelman > Professor, Department of Psychology, 232 Uris Hall > Cornell University, Ithaca, NY 14853-7601 > http://kybele.psych.cornell.edu/~edelman > @shimonedelman > > > > Dr. James M. Bower Ph.D. Professor of Computational Neurobiology Barshop Institute for Longevity and Aging Studies. 15355 Lambda Drive University of Texas Health Science Center San Antonio, Texas 78245 Phone: 210 382 0553 Email: bower at uthscsa.edu Web: http://www.bower-lab.org twitter: superid101 linkedin: Jim Bower -------------- next part -------------- An HTML attachment was scrubbed...
URL: From p.geurts at ulg.ac.be Thu Jan 30 11:04:56 2014 From: p.geurts at ulg.ac.be (Pierre Geurts) Date: Thu, 30 Jan 2014 17:04:56 +0100 Subject: Connectionists: Post-doctoral position in machine learning for bioimaging and bioinformatics Message-ID: <9A21D641-A4A1-4E0C-A21E-7BEB25257C64@ulg.ac.be> Job Advertisement: Position at the post-doctoral level for 1 year at the Systems and Modeling research unit at the GIGA Biomedical Research Center and at the Montefiore Institute of Electrical Engineering and Computer Science, at the University of Liège, Belgium. Within the European FP7 Marie Curie Work Programme coordinated by Prof. dr. S. Heymans (Maastricht University), we are looking for an excellent, highly competitive researcher at the post-doctoral level. He/she should have worked for at least 3 years outside Belgium during the past years. Topic: Development & application of machine learning, image analysis, and bioinformatics techniques for high-throughput and high-content analysis for microRNA-medicines for Cardiac Metabolic Diseases. Keywords: Machine Learning - Image Analysis - Bioimage Informatics - Bioinformatics Environment: The GIGA Research Center, with excellent research groups in biology, bioinformatics, and machine learning grouped together. Project: The researcher will be working closely with a team of software developers, researchers in machine learning & bioinformatics, and biologists, to advance high-throughput automated analysis of bioimaging & microRNA data. The applicant will be involved in the application of existing algorithms and the design of novel algorithms and their large-scale evaluation for cellular image recognition and phenotype quantification to discover novel microRNAs implicated in heart failure. She/He will have access to a high-performance computing environment and various bioinformatics tools including the in-house developed Cytomine platform able to process terabytes of imaging data.
Capacities & Experiences: Applicants should hold a PhD and have strong knowledge in machine learning, bioinformatics and/or computer vision. Programming and data analysis skills are also highly desirable. The candidates should be highly motivated, with a strong interest in large-scale biomedical applications. A working knowledge of the English language is mandatory. This project is a collaborative project (exchange of people) between Maastricht University (Netherlands), Cenix Biosciences (Dresden, Germany) and SystMod (GIGA, Liège, Belgium). Interested candidates should send a CV, a brief statement of research and development interests, three relevant publications, and the names and contact details of two references by e-mail (with subject "Postdoc CardiomiR") to: raphael.maree at ulg.ac.be; pierre.geurts at ulg.ac.be Related links: http://www.montefiore.ulg.ac.be/systmod http://www.giga.ulg.ac.be http://ec.europa.eu/research/participants/portal/page/people?callIdentifier=FP7-PEOPLE-2012-IAPP http://www.cenix.com http://www.cytomine.be/ http://www.montefiore.ulg.ac.be/~maree http://www.montefiore.ulg.ac.be/~geurts From aurel at ee.columbia.edu Thu Jan 30 13:25:23 2014 From: aurel at ee.columbia.edu (Aurel A. Lazar) Date: Thu, 30 Jan 2014 13:25:23 -0500 Subject: Connectionists: Brain-like computing fanfare and big data fanfare In-Reply-To: References: <00c701cf1cdc$9738f930$c5aaeb90$@is.umk.pl> Message-ID: Brian, the theoretical representation of the auditory and visual space with massively parallel neural circuits (receptive fields + spiking neurons) in the spike domain is well understood. The mathematical formalism uses the representation (encoding) of space-time signals in Hilbert space(s).
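[Editorial note: one concrete instance of the spike-domain encoding referred to above is time encoding with an integrate-and-fire neuron, in which a continuous signal is represented entirely by spike times. The sketch below is illustrative only; the bias, threshold, and test signal are assumptions of this note, not parameters from the cited work.]

```python
import numpy as np

# Editorial sketch: an ideal integrate-and-fire neuron maps a continuous
# signal to a sequence of spike times. All parameters are illustrative.

def iaf_encode(u, dt, bias=1.0, delta=0.2):
    """Spike whenever the running integral of (u + bias) reaches delta."""
    y, spikes = 0.0, []
    for i, ui in enumerate(u):
        y += (ui + bias) * dt
        if y >= delta:
            spikes.append(i * dt)
            y -= delta        # reset by subtraction, so no charge is lost
    return np.array(spikes)

dt = 1e-4
t = np.arange(0.0, 2 * np.pi, dt)
u = np.sin(t)                 # toy band-limited input; bias keeps u + bias > 0
spike_times = iaf_encode(u, dt)
print(len(spike_times))
```

Between consecutive spikes the integral of (u + bias) equals the threshold delta (the so-called t-transform), so for band-limited inputs the spike times can in principle be inverted to recover the signal; that reconstruction step is omitted here.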
You can find pointers to the literature, decoding demonstrations of encoded video streams, and Matlab and/or Python/PyCUDA open source code at: http://www.bionet.ee.columbia.edu/research/nce Best, Aurel http://www.bionet.ee.columbia.edu On Jan 29, 2014, at 1:02 PM, Brian J Mingus wrote: > Hi Włodek, > > Suggesting that we use a Hilbert space instead of a standard vector space for semantic representation seems similar to the question of whether we should use spiking or rate-coded neurons. It would seem that if all of the information is encoded in the rate, that the Hilbert space representation could then be compressed into a simpler vector space representation. I did recently see a paper that suggested that different parts of the brain seem to use different codes, i.e., some convey information in spikes, and some convey information in the rate (I can't find the paper at this time). In that case, a Hilbert space representation might be simpler to the extent that it captures all of the information, either way. > > I am not a physicist so I may not have a deep enough understanding of what a Hilbert space is - could you perhaps explain what new information it might be capable of representing over and above the vector space representation, which is essentially what is used in rate coded deep neural nets? > > Thanks, > > Brian > > http://grey.colorado.edu/mingus > > > > On Wed, Jan 29, 2014 at 3:26 AM, Włodzisław Duch wrote: > Dear all, > > > > QM has yet to show some advantages over strong synchronization in classical models that unifies the activity of the whole network. There is another aspect to this discussion: we need to go beyond naïve interpretation of assigning functions to activity of single structures. We have to use a formalism similar to the quantum mechanical representation theory in Hilbert space, decomposing brain activations into combinations of other activations. I wrote a bit about it in sec.
2 of "Neurolinguistic Approach to Natural Language Processing", Neural Networks 21(10), 1500-1510, 2008. > > > > QM seems to be attractive because we do not understand how to make a transition between brain activations and subjective experience, described in some psychological spaces, outside and inside (3rd and 1st person) points of view. I have tried to explain it in a paper for APA, Mind-Brain Relations, Geometric Perspective and Neurophenomenology, American Philosophical Association Newsletter 12(1), 1-7, 2012 > > > QM formalism of representation theory may be useful also for classical distributed computing systems. > > > > Best regards, Włodek Duch > > ____________________ > > Google W. Duch > > > > > > > > From: Connectionists [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Carson Chow > Sent: Tuesday, January 28, 2014 10:01 PM > To: connectionists at mailman.srv.cs.cmu.edu > Subject: Re: Connectionists: Physics and Psychology (and the C-word) > > > > Brian, > > Quantum mechanics can be completely simulated on a classical computer, so if quantum mechanics does matter for C then it must be a matter of computational efficiency and nothing more. We also know that BQP (i.e. the set of problems solved efficiently on a quantum computer) is bigger than BPP (the set of problems solved efficiently on a classical computer) but not by much. I'm not fully up to date on this but I think factoring and boson sampling are about the only two examples that are in BQP and not in BPP. We also know that BPP is much smaller than NP, so if C does require QM then for some reason it sits in a small sliver of complexity space. > > best, > Carson > > PS I do like your self-consistent test for confirming consciousness. I once proposed that we could just run Turing machines and see which ones asked why they exist as a test of C. Kind of similar to your idea. > > > On 1/28/14 3:09 PM, Brian J Mingus wrote: > > Hi Richard, thanks for the feedback.
> > > Yes, in general, having an outcome measure that correlates with C ... that is good, but only with a clear and unambiguous meaning for C itself (which I don't think anyone has, so therefore it is, after all, of no value to look for outcome measures that correlate) > > > > Actually, the outcome measure I described is independent of a clear and unambiguous meaning for C itself, and in an interesting way: the models, like us, essentially reinvent the entire literature, and have a conversation as we do, inventing almost all the same positions that we've invented (including the one in your paper). > > > > I will read your paper and see if it changes my position. At the present time, however, I can't imagine any information that would solve the so-called zombie problem. I'm not a big fan of integrated information theory - I don't think hydrogen atoms are conscious, and I don't think naive bayes trained on a large corpus and run in generative mode is conscious. Thus, if the model doesn't go through the same philosophical reasoning that we've collectively gone through with regards to subjective experience, then I'm going to wonder if its experience is anything like mine at all. > > > > Touching back on QM, if we create a point neuron-based model that doesn't wax philosophical on consciousness, I'm going to wonder if we should add lower levels of analysis. > > > > I will take a look at your paper, and see if it changes my view on this at all. > > > > Cheers, > > > > Brian Mingus > > > > http://grey.colorado.edu/mingus > > > > > > On Tue, Jan 28, 2014 at 12:05 PM, Richard Loosemore wrote: > > > > Brian, > > Everything hinges on the definition of the concept ("consciousness") under consideration. > > In the chapter I wrote in Wang & Goertzel's "Theoretical Foundations of Artificial General Intelligence" I pointed out (echoing Chalmers) that too much is said about C without a clear enough understanding of what is meant by it ....
and then I went on to clarify what exactly could be meant by it, and thereby came to a resolution of the problem (with testable predictions). So I think the answer to the question you pose below is that: > > (a) Yes, in general, having an outcome measure that correlates with C ... that is good, but only with a clear and unambiguous meaning for C itself (which I don't think anyone has, so therefore it is, after all, of no value to look for outcome measures that correlate), and > > (b) All three of the approaches you mention are sidelined and finessed by the approach I used in the abovementioned paper, where I clarify the definition by clarifying first why we have so much difficulty defining it. In other words, there is a fourth way, and that is to explain it as ... well, I have to leave that dangling because there is too much subtlety to pack into an elevator pitch. (The title is the best I can do: "Human and Machine Consciousness as a Boundary Effect in the Concept Analysis Mechanism".) > > Certainly though, the weakness of all quantum mechanics 'answers' is that they are stranded on the wrong side of the explanatory gap. > > > Richard Loosemore > > > Reference > Loosemore, R.P.W. (2012). Human and Machine Consciousness as a Boundary Effect in the Concept Analysis Mechanism. In: P. Wang & B. Goertzel (Eds), Theoretical Foundations of Artificial General Intelligence. Atlantis Press. > http://richardloosemore.com/docs/2012a_Consciousness_rpwl.pdf > > > > > On 1/28/14, 10:34 AM, Brian J Mingus wrote: > > Hi Richard, > > > > > I can tell you that the quantum story isn't nearly clear enough in the minds of physicists, yet, so how it can be applied to the C question is beyond me. Frankly, it does NOT apply: saying anything about observers and entanglement does not at any point touch the kind of statements that involve talk about qualia etc. > > > > I'm not sure I see the argument you're trying to make here.
If you have an outcome measure that you agree correlates with consciousness, then we have a framework for scientifically studying it. > > > > Here's my setup: If you create a society of models and do not expose them to a corpus containing consciousness philosophy and they then, in a reasonably short amount of time, independently rewrite it, they are almost certainly conscious. This design explicitly rules out a generative model that accidentally spits out consciousness philosophy. > > > > Another approach is to accept that our brains are so similar that you and I are almost certainly both conscious, and to then perform experiments on each other and study our subjective reports. > > > > Another approach is to perform experiments on your own brain and to write first person reports about your experience. > > > > These three approaches each have tradeoffs, and each provide unique information. The first approach, in particular, might ultimately allow us to draw some of the strongest possible conclusions. For example, it allows for the scientific study of the extent to which quantum effects may or may not be relevant. > > > > I'm very interested in hearing any counterarguments as to why this general approach won't work. If it can't work, then I would argue that perhaps we should not create full models of ourselves, but should instead focus on upgrading ourselves. From that perspective, getting this to work is extremely important, however futuristic it may seem. > > > > > So let's let that sleeping dog lie.... (?). > > > > Not gonna happen. :) > > > > Brian Mingus > > http://grey.colorado.edu > > > > On Tue, Jan 28, 2014 at 7:32 AM, Richard Loosemore wrote: > > On 1/27/14, 11:30 PM, Brian J Mingus wrote: > > Consciousness is also such a bag of worms that we can't rule out that qualia owes its totally non-obvious and a priori unpredicted existence to concepts derived from quantum mechanics, such as nested observers, or entanglement.
> > As far as I know, my litmus test for a model is the only way to tell whether low-level quantum effects are required: if the model, which has not been exposed to a corpus containing consciousness philosophy, then goes on to independently recreate consciousness philosophy, despite the fact that it is composed of (for example) point neurons, then we can be sure that low-level quantum mechanical details are not important. > > Note, however, that such a model might still rely on nested observers or entanglement. I'll let a quantum physicist chime in on that - although I will note that, according to news articles I've read, we keep managing to entangle larger and larger objects - up to the size of molecules at this time, IIRC. > > > Brian Mingus > http://grey.colorado.edu/mingus > > Speaking as someone who is both a physicist and a cognitive scientist, AND someone who has written papers resolving that whole C-word issue, I can tell you that the quantum story isn't nearly clear enough in the minds of physicists, yet, so how it can be applied to the C question is beyond me. Frankly, it does NOT apply: saying anything about observers and entanglement does not at any point touch the kind of statements that involve talk about qualia etc. So let's let that sleeping dog lie.... (?). > > As for using the methods/standards of physics over here in cog sci ..... I think it best to listen to George Bernard Shaw on this one: "Never do unto others as you would they do unto you: their tastes may not be the same." > > Our tastes (requirements/constraints/issues) are quite different, so what happens elsewhere cannot be directly, slavishly imported. > > > Richard Loosemore > > Wells College > Aurora NY > USA > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed...
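[Chow's point above, that quantum mechanics can be simulated exactly on a classical computer, just not efficiently, can be illustrated with a brute-force statevector sketch. This is an illustrative addition, not part of the original thread; the 2**n-sized state vector is precisely the inefficiency he refers to.]

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 unitary `gate` to qubit `target` of an n-qubit statevector."""
    state = state.reshape([2] * n)                # expose one axis per qubit
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)         # restore qubit ordering
    return state.reshape(2 ** n)

# An n-qubit state needs 2**n complex amplitudes: classically simulable,
# but memory and time double with every added qubit.
n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # start in |000>

hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
for q in range(n):
    state = apply_single_qubit_gate(state, hadamard, q, n)

# Hadamards on every qubit give the uniform superposition: all 2**n
# outcomes equally likely.
print(np.allclose(np.abs(state) ** 2, 1 / 2 ** n))  # -> True
```

Rerunning with larger n shows the cost doubling per qubit, which is the "matter of computational efficiency" in question: simulable in principle, intractable at scale.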
URL: From ahu at cs.stir.ac.uk Thu Jan 30 18:51:05 2014 From: ahu at cs.stir.ac.uk (Dr Amir Hussain) Date: Thu, 30 Jan 2014 23:51:05 +0000 Subject: Connectionists: Cognitive Computation journal (Springer): Table of Contents, Vol.5, No.4 / Dec 2013 Issue Message-ID: Dear Colleagues: (with advance apologies for any cross-postings) We are delighted to announce the publication of Volume 5, No.4/Dec 2013 Issue, of Springer's Cognitive Computation journal - www.springer.com/12559 In addition to regular papers (which include an invited paper by Professor Ron Sun, Rensselaer Polytechnic Institute, USA, titled: Moral Judgment, Human Motivation, and Neural Networks), this Issue comprises a Special Issue titled: Advanced Cognitive Systems Based on Nonlinear Analysis. Guest Editors: Carlos M. Travieso and Jesús B. Alonso. (The Guest Editorial is available here: http://link.springer.com/content/pdf/10.1007%2Fs12559-013-9237-9.pdf ) The individual list of published articles (Table of Contents) for this Issue can be viewed here (and also at the end of this message, followed by an overview of the previous Issues/Archive listings): http://link.springer.com/journal/12559/5/4/page/1 You may also be interested in the journal's Seminal Special Issue (Sep 2013 Issue): In Memory of John G Taylor: A Polymath Scholar, by Guest Editors: Vassilis Cutsuridis and Amir Hussain (the Guest Editorial is available here: http://link.springer.com/content/pdf/10.1007%2Fs12559-013-9226-z.pdf and full listing of articles can be found at: http://link.springer.com/journal/12559/5/3/page/1) A list of the journal's most downloaded articles (which can always be read for FREE) can be found here: http://www.springer.com/biomed/neuroscience/journal/12559?hideChart=1#realtime Other 'Online First' published articles not yet in a print issue can be viewed here: http://www.springerlink.com/content/121361/?Content+Status=Accepted All previous Volumes and Issues of the journal can be viewed here:
http://link.springer.com/journal/volumesAndIssues/12559 ======================================================= NEW: ISI Impact Factor for Cognitive Computation of 0.867 for 2012 (5 year IF: 1.137) ======================================================= As you will know, Cognitive Computation was selected for coverage in Thomson Reuters' products and services in 2011. Beginning with V.1 (1) 2009, this publication is now indexed and abstracted in: • Science Citation Index Expanded (also known as SciSearch®) • Journal Citation Reports/Science Edition • Current Contents®/Engineering Computing and Technology • Neuroscience Citation Index® Cognitive Computation received its first Impact Factor of 1.0 in 2011 (Thomson Reuters Journal Citation Reports® 2011), followed by 0.867 in 2012 (Thomson Reuters Journal Citation Reports® 2012) ============================================ Reminder: New Cognitive Computation "LinkedIn" Group: ============================================ To further strengthen the bonds amongst the interdisciplinary audience of Cognitive Computation, we have set up a "Cognitive Computation LinkedIn group", which has over 700 members already! We warmly invite you to join us at: http://www.linkedin.com/groups?gid=3155048 For further information on the journal and to sign up for electronic "Table of Contents alerts" please visit the Cognitive Computation homepage: http://www.springer.com/12559 or follow us on Twitter at: http://twitter.com/CognComput for the latest On-line First Issues. For any questions with regards to LinkedIn and/or Twitter, please contact Springer's Publishing Editor: Dr. Martijn Roelandse: martijn.roelandse at springer.com Finally, we would like to invite you to submit short or regular papers describing original research or timely review of important areas - our aim is to peer review all papers within approximately six to eight weeks of receipt.
We also welcome relevant high-quality proposals for Special Issues - five are already planned for 2014-15 (for CFPs, see: http://www.springer.com/biomed/neuroscience/journal/12559?detailsPage=press ) With our very best wishes for the New Year to all aspiring readers and authors of Cognitive Computation, Professor Amir Hussain, PhD (Editor-in-Chief: Cognitive Computation) E-mail: ahu at cs.stir.ac.uk (University of Stirling, Scotland, UK) Professor Igor Aleksander, PhD (Honorary Editor-in-Chief: Cognitive Computation) (Imperial College, London, UK) --------------------------------------------------------------------------------------------------------------- Table of Contents Alert -- Cognitive Computation Vol 5 No 4, Dec 2013 --------------------------------------------------------------------------------------------------------------- Special Issue on Advanced Cognitive Systems Based on Nonlinear Analysis Carlos M. Travieso & Jesús B. Alonso http://link.springer.com/article/10.1007/s12559-013-9237-9 Characterizing Neurological Disease from Voice Quality Biomechanical Analysis Pedro Gómez-Vilda , Victoria Rodellar-Biarge , Víctor Nieto-Lluis , Cristina Muñoz-Mulas , Luis Miguel Mazaira-Fernández , Rafael Martínez-Olalla , Agustín Álvarez-Marquina , Carlos Ramírez-Calvo & Mario Fernández-Fernández http://link.springer.com/article/10.1007/s12559-013-9207-2 Auditory-Inspired Morphological Processing of Speech Spectrograms: Applications in Automatic Speech Recognition and Speech Enhancement Joyner Cadore , Francisco J. Valverde-Albacete , Ascensión Gallardo-Antolín & Carmen Peláez-Moreno http://link.springer.com/article/10.1007/s12559-012-9196-6 Detecting Speech Polarity with High-Order Statistics Thomas Drugman & Thierry Dutoit http://link.springer.com/article/10.1007/s12559-012-9167-y Nonlinear Dynamics for Hypernasality Detection in Spanish Vowels and Words J. R. Orozco-Arroyave , J. F. Vargas-Bonilla , J. D. Arias-Londoño , S. Murillo-Rendón , G.
Castellanos-Domínguez & J. F. Garcés http://link.springer.com/article/10.1007/s12559-012-9166-z Improving Automatic Detection of Obstructive Sleep Apnea Through Nonlinear Analysis of Sustained Speech José Luis Blanco , Luis A. Hernández , Rubén Fernández & Daniel Ramos http://link.springer.com/article/10.1007/s12559-012-9168-x Voice Quality Modification Using a Harmonics Plus Noise Model Àngel Calzada Defez & Joan Claudi Socoró Carrió http://link.springer.com/article/10.1007/s12559-012-9193-9 A Fast Gradient Approximation for Nonlinear Blind Signal Processing Jordi Solé-Casals & Cesar F. Caiafa http://link.springer.com/article/10.1007/s12559-012-9192-x Improved Convolutive and Under-Determined Blind Audio Source Separation with MRF Smoothing Rafał Zdunek http://link.springer.com/article/10.1007/s12559-012-9185-9 A Real-Time Speech Enhancement Framework in Noisy and Reverberated Acoustic Scenarios Rudy Rotili , Emanuele Principi , Stefano Squartini & Björn Schuller http://link.springer.com/article/10.1007/s12559-012-9176-x Global Selection of Features for Nonlinear Dynamics Characterization of Emotional Speech Patricia Henríquez Rodríguez , Jesús B. Alonso Hernández , Miguel A. Ferrer Ballester , Carlos M. Travieso González & Juan R. Orozco-Arroyave http://link.springer.com/article/10.1007/s12559-012-9157-0 Children's Emotion Recognition from Spontaneous Speech Using a Reduced Set of Acoustic and Linguistic Features Santiago Planet & Ignasi Iriondo http://link.springer.com/article/10.1007/s12559-012-9174-z Low-variance Multitaper Mel-frequency Cepstral Coefficient Features for Speech and Speaker Recognition Systems Md.
Jahangir Alam , Patrick Kenny & Douglas O'Shaughnessy http://link.springer.com/article/10.1007/s12559-012-9197-5 Enhancing the Feature Extraction Process for Automatic Speech Recognition with Fractal Dimensions Aitzol Ezeiza , Karmele López de Ipiña , Carmen Hernández & Nora Barroso http://link.springer.com/article/10.1007/s12559-012-9165-0 Meteorological Prediction Implemented on Field-Programmable Gate Array José L. Vásquez , Santiago T. Pérez , Carlos M. Travieso & Jesús B. Alonso http://link.springer.com/article/10.1007/s12559-012-9158-z Automatic Apnea Identification by Transformation of the Cepstral Domain Carlos M. Travieso , Jesús B. Alonso , Marcos del Pozo-Baños , Jaime R. Ticay-Rivas & Karmele Lopez-de-Ipiña http://link.springer.com/article/10.1007/s12559-012-9184-x -------Regular Papers-------------------------------------------------------------------- (Invited Paper) Moral Judgment, Human Motivation, and Neural Networks Ron Sun http://link.springer.com/article/10.1007/s12559-012-9181-0 A Twin Multi-Class Classification Support Vector Machine Yitian Xu , Rui Guo & Laisheng Wang http://link.springer.com/article/10.1007/s12559-012-9179-7 Binocular Energy Estimation Based on Properties of the Human Visual System Rafik Bensalma & Mohamed-Chaker Larabi http://link.springer.com/article/10.1007/s12559-012-9187-7 A Perceptual Visual Feature Extraction Method Achieved by Imitating V1 and V4 of the Human Visual System Sungho Kim , Soon Kwon & In So Kweon http://link.springer.com/article/10.1007/s12559-012-9194-8 Erratum to: Blame the Opponent!
Effects of Multimodal Discrediting Moves in Public Debates Francesca D'Errico & Isabella Poggi http://link.springer.com/article/10.1007/s12559-012-9190-z --------------------------------------------------- Previous Issues/Archive: Overview: --------------------------------------------------- All previous Volumes and Issues can be viewed here: http://link.springer.com/journal/volumesAndIssues/12559 Alternatively, the full listing of the Inaugural Vol. 1, No. 1 / March 2009, can be viewed here (which included invited authoritative reviews by leading researchers in their areas - including keynote papers from London University's John Taylor, Igor Aleksander and Stanford University's James McClelland, and invited papers from Ron Sun, Pentti Haikonen, Geoff Underwood, Kevin Gurney, Claudius Gros, Anil Seth and Tom Ziemke): http://www.springerlink.com/content/1866-9956/1/1/ The full listing of Vol. 1, No. 2 / June 2009, can be viewed here (which included invited reviews and original research contributions from leading researchers, including Rodney Douglas, Giacomo Indiveri, Jürgen Schmidhuber, Thomas Wennekers, Pentti Kanerva and Friedemann Pulvermüller): http://www.springerlink.com/content/1866-9956/1/2/ The full listing of Vol.1, No. 3 / Sep 2009, can be viewed here: http://www.springerlink.com/content/1866-9956/1/3/ The full listing of Vol. 1, No. 4 / Dec 2009, can be viewed here: http://www.springerlink.com/content/1866-9956/1/4/ The full listing of Vol.2, No. 1 / March 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/1/ The full listing of Vol.2, No. 2 / June 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/2/ The full listing of Vol.2, No. 3 / Aug 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/3/ The full listing of Vol.2, No.
4 / Dec 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/4/ The full listing of Vol.3, No.1 / Mar 2011 (Special Issue on: Saliency, Attention, Active Visual Search and Picture Scanning, edited by John Taylor and Vassilis Cutsuridis), can be viewed here: http://www.springerlink.com/content/1866-9956/3/1/ The Guest Editorial can be viewed here: http://www.springerlink.com/content/hu2245056415633l/ The full listing of Vol.3, No.2 / June 2011 can be viewed here: http://www.springerlink.com/content/1866-9956/3/2/ The full listing of Vol. 3, No. 3 / Sep 2011 (Special Issue on: Cognitive Behavioural Systems, Guest Edited by: Anna Esposito, Alessandro Vinciarelli, Simon Haykin, Amir Hussain and Marcos Faundez-Zanuy), can be viewed here: http://www.springerlink.com/content/1866-9956/3/3/ The Guest Editorial for the special issue can be viewed here: http://www.springerlink.com/content/h4718567520t2h84/ The full listing of Vol. 3, No. 4 / Dec 2011 can be viewed here: http://www.springerlink.com/content/1866-9956/3/4/ The full listing of Vol. 4, No.1 / Mar 2012 can be viewed here: http://www.springerlink.com/content/1866-9956/4/1/ The full listing of Vol. 4, No.2 / June 2012 can be viewed here: http://www.springerlink.com/content/1866-9956/4/2/ The full listing of Vol. 4, No.3 / Sep 2012 (Special Issue on: Computational Creativity, Intelligence and Autonomy, Edited by: J. Mark Bishop and Yasemin J. Erden) can be viewed here: http://www.springerlink.com/content/1866-9956/4/3/ The full listing of Vol. 4, No.4 / Dec 2012 (Special Issue titled: "Cognitive & Emotional Information Processing", Edited by: Stefano Squartini, Björn Schuller and Amir Hussain, which is followed by a number of regular papers), can be viewed here: http://link.springer.com/journal/12559/4/4/page/1 The full listing of Vol.
5, No.1 / March 2013 (Special Issue titled: Computational Intelligence and Applications, Guest Editors: Zhigang Zeng & Haibo He, which is followed by a number of regular papers), can be viewed here: http://link.springer.com/journal/12559/5/1/page/1 The full listing of Vol. 5, No.2 / June 2013 (Special Issue titled: Advances on Brain Inspired Computing, Guest Editors: Stefano Squartini, Sanqing Hu & Qingshan Liu, which is followed by a number of regular papers), can be viewed here: http://link.springer.com/journal/12559/5/2/page/1 The full listing of Vol. 5, No.3 / Sep 2013 (Special Issue titled: In Memory of John G Taylor: A Polymath Scholar, Guest Editors: Vassilis Cutsuridis & Amir Hussain, which is followed by a number of regular papers), can be viewed here: http://link.springer.com/journal/12559/5/3/page/1 -------------------------------------------------------------------------------------------- The University of Stirling is ranked in the top 50 in the world in The Times Higher Education 100 Under 50 table, which ranks the world's best 100 universities under 50 years old. The University of Stirling is a charity registered in Scotland, number SC 011159. -- The University of Stirling has been ranked in the top 12 of UK universities for graduate employment*. 94% of our 2012 graduates were in work and/or further study within six months of graduation. *The Telegraph From birgit.ahrens at bcf.uni-freiburg.de Fri Jan 31 14:59:19 2014 From: birgit.ahrens at bcf.uni-freiburg.de (Birgit Ahrens) Date: Fri, 31 Jan 2014 20:59:19 +0100 Subject: Connectionists: Open PhD positions at Bernstein Center Freiburg, Germany Message-ID: <001b01cf1ebe$edcb58e0$c9620aa0$@bcf.uni-freiburg.de> Dear Computational Neuroscience Community, Please find below our job posting for PhD positions at the Bernstein Center Freiburg (BCF), Germany.
Best regards Birgit Ahrens PhD position in Neurotechnology and Computational Neuroscience A PhD position is available in the lab of Carsten Mehring at the Bernstein Center of the University of Freiburg, Germany. This is a new lab, established to study sensorimotor behavior and brain-machine interfaces. Research topics include: motor adaptation and motor skill learning; brain-machine and machine-brain interfaces; neuronal dynamics; decision making. The primary research tools are behavioural experiments (using virtual reality), electrophysiology and neuroimaging (EEG, ECoG, fNIRS), transcranial electrical stimulation (tDCS & tACS), advanced neural signal analysis and computational modeling. We invite applications to join the lab for a 3-4 year PhD project, and to enter the PhD program "iCoNeT" at the Bernstein Center Freiburg. The project is financed by a fellowship of the DAAD that can only be awarded to non-German applicants who have not been staying in Germany for more than 15 months at the time of their application. The Bernstein Center Freiburg concentrates research in Computational Neuroscience and Neurotechnology at the University of Freiburg. The projects are highly interdisciplinary and span across mathematical-theoretical approaches on the function and dynamics of neuronal networks, neuroanatomy, experimentally driven neurophysiology and the development of technologies for medical application. Please apply using our online form at https://yoda.bcf.uni-freiburg.de/ and indicate "Mehring" as preferred project. The deadline for applications is February 28, 2014. Further details on: www.bcf.uni-freiburg.de/jobs PhD position in Experimental Epilepsy Research The goal of Prof. Carola Haas' group of researchers from medicine and biology is to understand the interplay between molecular, cellular and functional determinants leading to focal epilepsies in the mammalian brain.
Our main tools are in vivo animal models and in vitro approaches to study the contribution of new neurons to epileptogenesis in the dentate gyrus network. We invite applications to join the lab for a 3-4 year PhD project, and to enter the PhD program "iCoNeT" at the Bernstein Center Freiburg. The project is financed by a fellowship of the DAAD that can only be awarded to non-German applicants who have not been staying in Germany for more than 15 months at the time of their application. The successful applicant should have prior training in neuroscience and/or experience with molecular biological techniques to trace cellular progeny. Good knowledge of the English language, high motivation for independent work, and the ability to work in an international team are mandatory. The Bernstein Center Freiburg concentrates research in Computational Neuroscience and Neurotechnology at the University of Freiburg. The projects are highly interdisciplinary and span across mathematical-theoretical approaches on the function and dynamics of neuronal networks, neuroanatomy, experimentally driven neurophysiology and the development of technologies for medical application. Please apply using our online form at https://yoda.bcf.uni-freiburg.de/ and indicate "Haas" as preferred project. The deadline for applications is February 28, 2014. Further details on: www.bcf.uni-freiburg.de/jobs PhD position on Structure and Dynamics of Cortical Networks in the Computational Neuroscience lab of Prof. Stefan Rotter Our goal is to understand the interplay between network topology and spiking activity dynamics in the neocortex and other parts of the mammalian brain, and to explore the possibilities and constraints of dynamical brain function. Our main tools are mathematical/numerical network modeling and statistical data analysis, often used side by side within the framework of stochastic point processes and statistical graph theory.
In collaboration with physiologists and anatomists, we seek to develop new perspectives for the model-based analysis and interpretation of neuronal signals. We are a young group of researchers from mathematics, physics, computer science and biology and invite applications to join the lab for a 3-4 year PhD project, and to enter the PhD program in Computational Neuroscience at the Bernstein Center Freiburg. The Bernstein Center Freiburg performs research in Computational Neuroscience and Neurotechnology at the University of Freiburg, Germany. The projects are highly interdisciplinary and span from mathematical-theoretical approaches to the function and dynamics of neuronal networks, through neuroanatomy and experimentally driven neurophysiology, to the development of technologies for medical application. Further details on: www.bcf.uni-freiburg.de/jobs PhD position on Closed-Loop Control of Neuronal Networks Using Machine-Learning Techniques in the Biomicrotechnology lab of Prof. Ulrich Egert We are currently offering a PhD position for a candidate with a background in experimental network neuroscience. The project investigates the potential of machine learning for developing controllers that interact with neuronal networks. The project is part of the Cluster of Excellence "BrainLinks-BrainTools'' (www.brainlinks.uni-freiburg.de ) together with the Bernstein Center Freiburg (www.bcf.uni-freiburg.de ) and will combine neuroscience and engineering. Our aim is to identify the fundamental principles and boundary conditions relevant to controlling network activity with machine learning algorithms and through various points of intervention. We use cultured neuronal networks on microelectrode arrays as a model system to test out concepts of network control. Eventually these concepts will be expanded and adapted to in vivo applications to improve the efficacy of neurotechnical implants, such as in deep brain stimulation.
It is essential that the candidate has a background in neuroscience, ideally in experimental neurophysiology, an MSc degree and a strong interest in network analysis. The international PhD training program of the Bernstein Center Freiburg will help you fill in any knowledge gaps that you may have. Further details on: www.bcf.uni-freiburg.de/jobs PhD position on sensorimotor processing in the basal ganglia A PhD position is available in the new junior research group of Robert Schmidt in the Cluster of Excellence BrainLinks-BrainTools in Freiburg (Germany). We are currently assembling a young, ambitious research team to study neural foundations of action selection, initiation and execution at the intersection of computational and experimental neuroscience. The research project is centered on the analysis of electrophysiological recordings from rats performing behavioral tasks. Our goal is to gain understanding of basal ganglia processing of sensory- and movement-related information (see e.g. Schmidt et al., 2013. Canceling actions involves a race between basal ganglia pathways. Nat. Neurosci. 16: 1118-1124.). We want to integrate advanced data analysis methods with computational modelling and clinical applications (e.g. for Parkinson's Disease). The project includes close collaborations with computational and experimental groups in Freiburg (e.g. Ad Aertsen and Arvind Kumar at the Bernstein Center Freiburg), and internationally (e.g. Joshua Berke at University of Michigan, USA and Nicolas Mallet at CNRS Bordeaux, France). The ideal candidate has solid neurobiological knowledge, programming skills (e.g. Matlab or Python), and mathematical expertise. High motivation and interest in neuroscientific research are mandatory. Experience in the analysis of neurophysiological data and computational modelling is a big plus.
Applicants with degrees from interdisciplinary programs such as computational neuroscience or cognitive science are highly welcome, but applicants from other disciplines such as biology or physics are also strongly encouraged to apply. The position is for three years (65% TV-L E13) and is starting as soon as possible. Please send your CV together with contact details of at least two referees and a scientific research statement (max. 2 pages) as PDF files to basal-ganglia at brainlinks-braintools.uni-freiburg.de . -- Dr. Birgit Ahrens -- Coordinator for the Teaching & Training Programs Bernstein Center Freiburg Albert-Ludwig University of Freiburg Hansastr. 9a D - 79104 Freiburg Germany Phone: +49 (0) 761 203-9575 Fax: +49 (0) 761 203-9559 Email: birgit.ahrens at bcf.uni-freiburg.de Web: www.bcf.uni-freiburg.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From birgit.ahrens at bcf.uni-freiburg.de Fri Jan 31 16:25:45 2014 From: birgit.ahrens at bcf.uni-freiburg.de (Birgit Ahrens) Date: Fri, 31 Jan 2014 22:25:45 +0100 Subject: Connectionists: Open Postdoc positions at Bernstein Center Freiburg, Germany Message-ID: <004601cf1ecb$00c0e670$0242b350$@bcf.uni-freiburg.de> Dear Computational Neuroscience Community, Please find below our job posting for Postdoc positions at the Bernstein Center Freiburg (BCF), Germany. Best regards Birgit Ahrens Postdoc Position on Structure and Dynamics of Cortical Networks in the Computational Neuroscience lab of Prof. Stefan Rotter Our goal is to understand the interplay between network topology and spiking activity dynamics in the neocortex and other parts of the mammalian brain, and to explore the possibilities and constraints of dynamical brain function. Our main tools are mathematical/numerical network modeling and statistical data analysis, often used side by side within the framework of stochastic point processes and statistical graph theory. 
In collaboration with physiologists and anatomists, we seek to develop new perspectives for the model-based analysis and interpretation of neuronal signals. We are a young group of researchers from mathematics, physics, computer science and biology and invite applications to join the lab for a 2-3 year PostDoc project, and to enter the PostDoc program in Computational Neuroscience at the Bernstein Center Freiburg. The Bernstein Center Freiburg performs research in Computational Neuroscience and Neurotechnology at the University of Freiburg, Germany. The projects are highly interdisciplinary and span from mathematical-theoretical approaches to the function and dynamics of neuronal networks, through neuroanatomy and experimentally driven neurophysiology, to the development of technologies for medical application. Further details on: www.bcf.uni-freiburg.de/jobs Postdoc position in Non-Clinical Epilepsy Research in the Biomicrotechnology lab of Prof. Ulrich Egert We are currently offering a Postdoc position (2 years) in the Laboratory for Biomicrotechnology (http://www.bcf.uni-freiburg.de/people/details/egert) at the University of Freiburg. The project investigates mechanisms underlying mesiotemporal lobe epilepsy from the perspective of the dysfunction of interaction between subnetworks in the hippocampal formation (Froriep et al. 2012, Epilepsia). We aim to use targeted stimulation to reduce the circuit's susceptibility to seizures. It is essential that you have a background in neuroscience, ideally in experimental neurophysiology in vivo as well as a PhD degree. You should further be competent in data analysis and have an affinity for the network perspective. The project is part of the Cluster of Excellence "BrainLinks-BrainTools'' (www.brainlinks.uni-freiburg.de ) together with the Bernstein Center Freiburg (www.bcf.uni-freiburg.de ) and will combine neurophysiology, computational neuroscience and neurotechnology.
Further details on: www.bcf.uni-freiburg.de/jobs Postdoc position in Neurotechnology & Computational Neuroscience in the lab of Prof. Stefan Rotter We are looking for a postdoctoral researcher to join an international team of scientists and engineers in the NeuroSeeker project (see http://www.neuroseeker.eu/). The goal of the project is to develop and apply new methods and software to improve the yield of novel high-resolution probes for recording activity from many neurons simultaneously. Candidates should hold a PhD in physics, applied mathematics, computer science or biology, with proven experience in software engineering and user-oriented application programming (C++ and/or Python). Specific knowledge and scientific publications in the fields of "statistical analysis of neuronal data" and/or "numerical methods and data structures in the neurosciences" are a requirement. Funding is already available; the starting date is negotiable. The Bernstein Center Freiburg concentrates research in Computational Neuroscience and Neurotechnology at the University of Freiburg, Germany. The projects are highly interdisciplinary and span from mathematical-theoretical approaches to the function and dynamics of neuronal networks, through neuroanatomy and experimentally driven neurophysiology, to the development of technologies for medical application. Further details on: www.bcf.uni-freiburg.de/jobs -- Dr. Birgit Ahrens -- Coordinator for the Teaching & Training Programs Bernstein Center Freiburg Albert-Ludwig University of Freiburg Hansastr. 9a D - 79104 Freiburg Germany Phone: +49 (0) 761 203-9575 Fax: +49 (0) 761 203-9559 Email: birgit.ahrens at bcf.uni-freiburg.de Web: www.bcf.uni-freiburg.de -------------- next part -------------- An HTML attachment was scrubbed...
URL: From c.addyman at bbk.ac.uk Fri Jan 31 18:33:19 2014 From: c.addyman at bbk.ac.uk (Caspar Addyman) Date: Fri, 31 Jan 2014 23:33:19 +0000 Subject: Connectionists: Fwd: Best practices in model publication In-Reply-To: References: Message-ID: Hi Brad, I think you raise some interesting questions about whether models should stand or fall on the basis of their predictions. But in terms of slow progress in the field I think there is a prior issue that models and their results should be made more accessible. Bob French and I recently got frustrated with some of the bad habits of the modelling community in this regard and ended up writing a 'manifesto'. Addyman, C., & French, R. M. (2012). Computational Modeling in Cognitive Science: A Manifesto for Change. *Topics in Cognitive Science*, *4*(3), 332-341. doi:10.1111/j.1756-8765.2012.01206.x [pdf - http://bit.ly/1ee7GYR ] We were more interested in the pragmatic steps one can take to give other people direct access to your model so that non-specialists (and other modellers) can get a genuine feel for exactly what your model is doing when it 'simulates' an experiment. There was an interesting follow-up paper by Richard Cooper and Olivia Guest that perhaps addresses your questions about the separation of levels of explanation. Cooper, R. P., & Guest, O. (2013). Implementations are not specifications: Specification, replication and experimentation in computational cognitive modeling. *Cognitive Systems Research*, 1-8. doi:10.1016/j.cogsys.2013.05.001 All the best, Caspar Dr. Caspar Addyman Centre for Brain and Cognitive Development Birkbeck, University of London Malet Street London WC1E 7HX Tel: +447876140050 Twitter: @BrainStraining http://www.cbcd.bbk.ac.uk/people/scientificstaff/caspar http://yourbrainondrugs.net http://boozerlyzer.net http://babylaughter.net -------------- next part -------------- An HTML attachment was scrubbed... URL: