<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div dir="ltr">
<div>NEURAL NETWORKS<br>
</div>
<div><a href="https://www.journals.elsevier.com/neural-networks">https://www.journals.elsevier.com/neural-networks</a></div>
<div><br>
</div>
<div>Special issue on</div>
<div><b>Advances in Deep Learning Based Speech Processing<br>
</b></div>
<div><br>
</div>
<div><b>Deadline: June 30, 2020</b></div>
<div><br>
</div>
<div>Deep learning has triggered a revolution in speech
processing. The revolution started from the successful
application of deep neural networks to automatic speech
recognition, and quickly spread to other topics of speech
processing, including speech analysis, speech denoising and
separation, speaker and language recognition, speech synthesis,
and spoken language understanding. This tremendous success
has been achieved thanks to the
advances in neural network technologies as well as the explosion
of speech data and fast development of computing power.<br>
<br>
Despite this success, deep learning based speech processing
still faces many challenges to wide real-world deployment. For
example, when the distance between a speaker and a microphone
array is larger than 10 meters, the word error rate of a speech
recognizer may exceed 50%; end-to-end deep learning based
speech processing systems have shown potential advantages over
hybrid systems, but they require large-scale labelled speech
data; deep learning based speech synthesis can produce speech
that rivals human speech and far surpasses traditional methods,
but the models are unstable, lack controllability, and remain
too large and slow to deploy on mobile and IoT devices.<br>
<br>
Therefore, new methods and algorithms in deep learning and
speech processing are needed to tackle the above challenges, as
well as to yield novel insights into new directions and
applications.<br>
<br>
This special issue aims to accelerate research progress by
providing a forum for researchers and practitioners to present
their latest contributions that advance theoretical and
practical aspects of deep learning based speech processing
techniques. The special issue will feature theoretical articles
with novel insights, creative solutions to key research
challenges, and state-of-the-art speech processing
algorithms/systems that demonstrate competitive performance and
potential industrial impact. Ideas addressing emerging
problems and directions are also welcome.</div>
<div><br>
</div>
<br>
<div><b>Topics of interest</b> for this special issue include, but
are not limited to: <br>
</div>
<div>• Speaker separation</div>
<div>• Speech denoising</div>
<div>• Speech recognition</div>
<div>• Speaker and language recognition</div>
<div>• Speech synthesis</div>
<div>• Audio and speech analysis</div>
<div>• Multimodal speech processing</div>
<div><br>
</div>
<div><br>
</div>
<div><b>Submission instructions: </b></div>
<div>Prospective authors should follow the standard author
instructions for Neural Networks, and submit manuscripts online
at <a
href="https://www.editorialmanager.com/neunet/default.aspx">https://www.editorialmanager.com/neunet/default.aspx</a>.</div>
<div>Authors should select "VSI: Speech Based on DL" when they
reach the "Article Type" step and the "Request Editor" step in
the submission process.<br>
<br>
</div>
<div><br>
</div>
<div><b>Important dates: </b></div>
<div>June 30, 2020 - Submission deadline<br>
September 30, 2020 - First decision notification<br>
November 30, 2020 - Revised version deadline<br>
December 31, 2020 - Final decision notification<br>
March 2021 - Publication</div>
<div><br>
</div>
<div><br>
</div>
<div><b>Guest Editors: </b></div>
Xiao-Lei Zhang, Northwestern Polytechnical University, China<br>
Lei Xie, Northwestern Polytechnical University, China<br>
Eric Fosler-Lussier, Ohio State University, USA<br>
Emmanuel Vincent, Inria, France</div>
</body>
</html>