<div dir="ltr"><b>Last Call for Papers:
</b><br><div class="gmail_quote"><div dir="ltr">IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (<b>IF 11.683</b>)<br><b><span style="color:rgb(7,55,99)">Special Issue on Deep Representation and Transfer Learning for Smart and Connected Health</span></b>
<br><div> <br></div><b>IMPORTANT DATES
</b><br>30 September 2019 – Deadline for manuscript submission
<br>31 December 2019 – Reviewers’ comments to authors
<br>28 February 2020 – Submission of revised papers
<br>30 April 2020 – Final decision of acceptance
<br>May-June 2020 – Tentative publication date
<br><b>
<br>AIMS AND SCOPE:
</b><br>Deep neural networks (DNNs) are among the most effective learning
systems available. However, determining how best to learn a set of data
representations that is ideal for a given task remains an open challenge.
Representation and Transfer Learning (RTL) can facilitate the learning
of complex data representations by improving the generalization
performance of DNNs and by reusing features learned by a model in a
source domain when adapting that model to a new domain.
Nonetheless, contemporary RTL theory struggles with issues
such as: the inherent trade-off between retaining too much information
from the input and learning universal features; limited data or changes
in the joint distribution of the input features and output labels in the
target domain; and dataset bias. New theoretical mechanisms and
algorithms are therefore required to improve the performance and
learning process of DNNs.
<br>Smart and Connected Health (SCH), an emerging and complex domain,
stands to benefit from new advances in RTL. For instance, RTL can
overcome the lack of labelled data in SCH by (i)
training a model to learn universal data representations on larger
corpora from a different domain and then adapting the model for use in an
SCH context, or (ii) using RTL in conjunction with generative neural
networks to synthesize new healthcare-related data. However, applying
RTL to the design of SCH applications requires overcoming problems such
as the rejection of unrelated information, dataset bias, and neural
network co-adaptation.
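To make the pre-train-then-adapt idea in (i) concrete, the sketch below shows it in miniature (illustrative only; the data, model, and function names are hypothetical and not part of this call): a simple perceptron is pre-trained on a large "source" dataset, and its weights then seed training on a small "target" dataset with a shifted decision boundary, standing in for the scarce-label SCH setting.

```python
# Minimal transfer-learning sketch: pre-train on a large source task,
# then fine-tune the same weights on a small target task.
import random

random.seed(0)  # deterministic data for reproducibility

def make_data(n, boundary):
    """Linearly separable 2-D points labelled by the line x + y = boundary."""
    data = []
    for _ in range(n):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        label = 1 if x + y > boundary else -1
        data.append(((x, y), label))
    return data

def train(data, w, b, epochs=20, lr=0.1):
    """Plain perceptron updates, starting from the given weights."""
    for _ in range(epochs):
        for (x, y), label in data:
            if label * (w[0] * x + w[1] * y + b) <= 0:  # misclassified
                w = [w[0] + lr * label * x, w[1] + lr * label * y]
                b += lr * label
    return w, b

def accuracy(data, w, b):
    correct = sum(1 for (x, y), t in data
                  if t * (w[0] * x + w[1] * y + b) > 0)
    return correct / len(data)

# (i) Pre-train on a large source corpus (boundary at 0.0).
source = make_data(500, 0.0)
w, b = train(source, [0.0, 0.0], 0.0)

# Adapt to a small target set with a shifted boundary (0.2),
# reusing the pre-trained weights instead of starting from scratch.
target_train = make_data(20, 0.2)
target_test = make_data(200, 0.2)
w, b = train(target_train, w, b, epochs=5)

print(f"target accuracy: {accuracy(target_test, w, b):.2f}")
```

Because the pre-trained weights already encode a nearly correct boundary, only a few epochs on the small target set are needed; training from scratch on 20 points would be far noisier.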
<br>This special issue invites novel contributions addressing
theoretical aspects of representation and transfer learning,
theoretical work aimed at improving the generalization performance of
deep models, and new theory that explains and interprets both
concepts. State-of-the-art work on applying RTL to develop smart
and connected health intelligent systems is also welcome. Topics of
interest for this special issue include, but are not limited to:
<br>
<br><b>Theoretical Methods:
</b><br>• Distributed representation learning;
<br>• Transfer learning;
<br>• Invariant feature learning;
<br>• Domain adaptation;
<br>• Neural network interpretability theory;
<br>• Deep neural networks;
<br>• Deep reinforcement learning;
<br>• Imitation learning;
<br>• Continuous domain adaptation learning;
<br>• Optimization and learning algorithms for DNNs;
<br>• Zero- and one-shot learning;
<br>• Domain invariant learning;
<br>• RTL in generative and adversarial learning;
<br>• Multi-task learning and ensemble learning;
<br>• New learning criteria and evaluation metrics in RTL.
<br><b>Application Areas: </b>
<br>• Health monitoring;
<br>• Health diagnosis and interpretation;
<br>• Early health detection and prediction;
<br>• Virtual patient monitoring;
<br>• RTL in medicine;
<br>• Biomedical information processing;
<br>• Affect recognition and mining;
<br>• Health and affective data synthesis;
<br>• RTL for virtual reality in healthcare;
<br>• Physiological information processing;
<br>• Affective human-machine interaction.
<br>
<br><b>GUEST EDITORS
</b><br>Vasile Palade, Coventry University, UK
<br>Stefan Wermter, University of Hamburg, Germany
<br>Ariel Ruiz-Garcia, Coventry University, UK
<br>Antonio de Padua Braga, Federal University of Minas Gerais, Brazil
<br>Clive Cheong Took, Royal Holloway (University of London), UK
<br>
<br><b>SUBMISSION INSTRUCTIONS
</b><br>1. Read the Information for Authors at <a href="https://cis.ieee.org/publications/t-neural-networks-and-learning-systems" rel="nofollow" target="_blank">https://cis.ieee.org/publications/t-neural-networks-and-learning-systems</a>.
<br>2. Submit your manuscript at the TNNLS webpage (<a href="http://mc.manuscriptcentral.com/tnnls" rel="nofollow" target="_blank">http://mc.manuscriptcentral.com/tnnls</a>)
and follow the submission procedure. Please clearly indicate on the
first page of the manuscript and in the cover letter that the manuscript
is submitted to this special issue. Send an email to the guest editor
Vasile Palade (<a href="mailto:vasile.palade@coventry.ac.uk" rel="nofollow" target="_blank">vasile.palade@coventry.ac.uk</a>) with subject “TNNLS special issue submission” to notify about your submission.
<br>3. Early submissions are welcome. We will start the review process as soon as we receive your contribution.
<br>
<br>For any other questions, please contact Ariel Ruiz-Garcia (<a href="mailto:ariel.ruiz-garcia@coventry.ac.uk" rel="nofollow" target="_blank">ariel.ruiz-garcia@coventry.ac.uk</a>).<br>
</div></div></div>