Connectionists: Final CFP for workshop on Cognitive and Neural Models for Automated Processing of Speech and Text 2010

Benjamin Schrauwen bschrauw at elis.UGent.be
Fri Apr 9 11:16:32 EDT 2010


Final Call for Papers for the workshop on 
"Cognitive and Neural Models for Automated Processing of Speech and Text" 
(CONAS 2010)

July 9 and 10, 2010, Ghent, Belgium.

Website: http://conas.elis.ugent.be

INVITED SPEAKERS:
- Yoshua Bengio, University of Montreal
- Tomaso Poggio, MIT
- Gordon Pipa, Max Planck Institute for Brain Research
- Stefan Kiebel, Max Planck Institute for Human Cognitive and Brain Sciences
- Adam Sanborn, Gatsby Computational Neuroscience Unit, University College London
- Ruslan Salakhutdinov, MIT
- Alex Graves, Technical University of Munich
- Peter Tino, University of Birmingham 

IMPORTANT DATES:
Submission of full papers:    April 23, 2010
Notification of acceptance:   June 1, 2010
Registration before:          June 23, 2010

The aim of this workshop is to elucidate the potential role of deep and recurrent models as an intermediary between the engineering and the empirical aspects of speech and language processing.

-------------------------------------

We would like to kindly invite you to submit papers to the workshop on "Cognitive and Neural Models for Automated Processing of Speech and Text" (CONAS 2010). This is a 2-day workshop on the role of cognitive, neural and deep models in speech recognition and language understanding, held in Ghent (Belgium) on July 9-10th, 2010.

*Background and Scope*

Speech recognition and language understanding is one of the prime fields where application engineering meets -- or should meet -- cognitive neuroscience. However, the interdisciplinary connections are not as tight as one might wish. On the application-engineering side, as we go up the linguistic hierarchy, we find a diversity of computational methods, ranging from signal processing through statistical pattern recognition to grammar- and logic-based semantic representation formalisms -- with a remarkably limited use of artificial neural networks. On the cognitive-neuroscience side, we find a diversity of human performance phenomena under study, from acoustic perception through various memory competences to the temporal structure of syntactic and semantic parsing, repair processes, and the planning and execution of oral communication. It is fair to say that engineers do not care much how their computational methods could be realized by brains, nor do neuroscientists, as a rule, ground their models in executable algorithms.

In this situation, computational models based on recurrent and/or deep neural networks may serve as an interface between application engineering and brain/cognition modeling.

On the one hand, these models can, in principle, model complex spatio-temporal data and thus should eventually be capable of acquiring (and possibly exceeding) the functionalities that are today realized by a diversity of other formalisms. On the other hand, deep and recurrent models appear intrinsically suited to being mapped onto biological neural systems, although there is still a large complexity gap between artificial and biological neural networks. While neural models are currently not much used in speech and language applications, it may be time to reconsider their role in the light of recent developments in powerful RNN-based and deep learning architectures.

The aim of this workshop is to elucidate this potential role of deep and recurrent models as an intermediary between the engineering and the empirical aspects of speech and language processing. We thus solicit contributions on topics of the following kind (the list is indicative, not exhaustive):

- novel deep or recurrent NN architectures for multi-scale temporal data processing (e.g. based on reservoir computing, multidirectional RNNs, temporal deep belief networks)
- implementations of complex statistical data processing in RNNs (e.g. Bayesian or dynamical Bayesian networks)
- short-term, working, and long-term memory mechanisms in RNNs
- implementing grammar in deep and recurrent NNs
- learning and adaptation in deep or recurrent NNs
- beyond supervised training
- technological solutions for speech / handwriting / language processing
- implementation and parallelization aspects
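To make the reservoir computing approach mentioned in the topic list concrete, here is a minimal echo state network sketch in Python/NumPy. All parameter values (reservoir size, spectral radius, ridge coefficient) and the toy task are illustrative choices, not part of the CFP or the ORGANIC project: a fixed random recurrent network is driven by the input, and only a linear readout is trained.

```python
# Minimal echo state network (ESN) sketch -- illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Fixed random input and reservoir weights (only the readout is trained).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
# Scale the recurrent matrix so its spectral radius is below 1
# (the usual "echo state property" heuristic).
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(300)
u = np.sin(0.2 * t)
X = run_reservoir(u[:-1])   # reservoir states, shape (299, n_res)
y = u[1:]                   # next-sample targets

# Train the linear readout by ridge regression (the only learned part).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

mse = np.mean((X @ W_out - y) ** 2)
print(f"one-step prediction MSE: {mse:.2e}")
```

Because the recurrent weights stay fixed, training reduces to a single linear regression, which is what makes reservoir computing attractive as a cheap baseline for temporal processing tasks.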

*Organizers*

The workshop is organized by the consortium of the European FP7 project ORGANIC (www.reservoir-computing.org), and is funded through this project. The objective of ORGANIC is to establish RNN models as a viable alternative to the mainstream statistical models in speech recognition, using the principles of reservoir computing as a starting point.

*Program Board*

Benjamin Schrauwen, University of Ghent (chair)
Herbert Jaeger, Jacobs University Bremen
Wolfgang Maass, Technical University Graz
Peter F. Dominey, INSERM Lyon
Jean-Pierre Martens, University of Ghent
Welf Wustlich, Planet intelligent systems GmbH



