Connectionists: CFP: IEEE TNNLS Special Issue on Deep Representation and Transfer Learning for Smart and Connected Health

Ariel Ruiz-Garcia ariel.9arcia at gmail.com
Mon Aug 26 14:22:37 EDT 2019


*Last Call for Papers: *
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (*IF 11.683*)
*Special Issue on Deep Representation and Transfer Learning for Smart and
Connected Health*

*IMPORTANT DATES *
30 September 2019 – Deadline for manuscript submission
31 December 2019 – Reviewers' comments to authors
28 February 2020 – Submission of revised papers
30 April 2020 – Final decision of acceptance
May-June 2020 – Tentative publication date

*AIMS AND SCOPE: *
Deep neural networks (DNNs) are among the most effective learning systems
available. However, how best to learn a set of data representations that is
ideal for a given task remains an open challenge. Representation and
Transfer Learning (RTL) can facilitate the learning of complex data
representations by improving the generalization performance of DNNs and by
reusing features learned by a model in a source domain while adapting the
model to a new domain. Nonetheless, contemporary RTL theory cannot yet deal
with issues such as the inherent trade-off between retaining too much
information from the input and learning universal features; limited data or
changes in the joint distribution of the input features and output labels
in the target domain; and dataset bias, among others. Therefore, new
theoretical mechanisms and algorithms are required to improve the
performance and learning process of DNNs.
Smart and Connected Health (SCH), an emerging and complex domain, can
benefit from new advances in RTL. For instance, RTL can overcome the
limitations imposed by the lack of labelled data in SCH by (i) training a
model to learn universal data representations on larger corpora in a
different domain and then adapting the model for use in an SCH context, or
(ii) using RTL in conjunction with generative neural networks to generate
new healthcare-related data (a minimal illustration of route (i) is
sketched below). Nonetheless, using RTL to design SCH applications requires
overcoming problems such as rejection of unrelated information, dataset
bias and neural network co-adaptation.
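
As a purely illustrative aside, and not part of the call itself, the
following minimal Python/PyTorch sketch shows adaptation route (i) above: a
model pretrained on a large source-domain corpus (ImageNet, via
torchvision) is reused, its transferred layers are frozen, and only a new
task head is fine-tuned on a small labelled target set. The class count,
data loader and architecture are assumptions made for the example, not
requirements of the special issue.

# Minimal transfer-learning sketch (illustrative assumptions throughout):
# adapt an ImageNet-pretrained ResNet-18 to a small, hypothetical SCH
# classification task by freezing the transferred features and fine-tuning
# only a new task-specific head. Requires torch and torchvision (>= 0.13
# for the `weights` argument; older versions use `pretrained=True`).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # assumed number of target classes in the SCH domain
device = "cuda" if torch.cuda.is_available() else "cpu"

# (i) Reuse universal representations learned in the source domain.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False  # freeze the transferred features

# Replace the task-specific head for the target (SCH) domain.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model = model.to(device)

# Fine-tune only the new head on the (small) labelled target dataset.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune(loader, epochs=5):
    # `loader`: a DataLoader over the hypothetical labelled SCH dataset.
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
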
This special issue invites novel contributions addressing theoretical
aspects of representation and transfer learning, theoretical work aimed at
improving the generalization performance of deep models, and new theory
attempting to explain and interpret both concepts. State-of-the-art work on
applying RTL to develop smart and connected health intelligent systems is
also welcome. Topics of interest for this special issue include, but are
not limited to:

*Theoretical Methods: *
•    Distributed representation learning;
•    Transfer learning;
•    Invariant feature learning;
•    Domain adaptation;
•    Neural network interpretability theory;
•    Deep neural networks;
•    Deep reinforcement learning;
•    Imitation learning;
•    Continuous domain adaptation learning;
•    Optimization and learning algorithms for DNNs;
•    Zero- and one-shot learning;
•    Domain invariant learning;
•    RTL in generative and adversarial learning;
•    Multi-task learning and ensemble learning;
•    New learning criteria and evaluation metrics in RTL;
*Application Areas: *
•    Health monitoring;
•    Health diagnosis and interpretation;
•    Early health detection and prediction;
•    Virtual patient monitoring;
•    RTL in medicine;
•    Biomedical information processing;
•    Affect recognition and mining;
•    Health and affective data synthesis;
•    RTL for virtual reality in healthcare;
•    Physiological information processing;
•    Affective human-machine interaction;

*GUEST EDITORS *
Vasile Palade, Coventry University, UK
Stefan Wermter, University of Hamburg, Germany
Ariel Ruiz-Garcia, Coventry University, UK
Antonio de Padua Braga, Federal University of Minas Gerais, Brazil
Clive Cheong Took, Royal Holloway (University of London), UK

*SUBMISSION INSTRUCTIONS   *
1.    Read the Information for Authors at
https://cis.ieee.org/publications/t-neural-networks-and-learning-systems.
2.    Submit your manuscript at the TNNLS webpage (
http://mc.manuscriptcentral.com/tnnls) and follow the submission procedure.
Please clearly indicate on the first page of the manuscript and in the
cover letter that the manuscript is submitted to this special issue. Send
an email to the guest editor Vasile Palade (vasile.palade at coventry.ac.uk)
with the subject “TNNLS special issue submission” to notify him of your
submission.
3.    Early submissions are welcome. We will start the review process as
soon as we receive your contribution.

For any other questions, please contact Ariel Ruiz-Garcia
(ariel.9arcia at gmail.com or ariel.ruiz-garcia at coventry.ac.uk).