<div dir="ltr"><div class="gmail_default" style="font-size:small">
<p><b style="font-size:13pt;font-family:'Times New Roman',serif">Special
Issue in the journal Neurocomputing:</b><br></p>
<p><font color="#222222"><font face="Times New Roman, serif"><font style="font-size:13pt"><b>Multiobjective
Reinforcement Learning: Theory and Applications</b></font></font></font></p>
<p>
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">***
One month extension for submitting papers ***</font></font></font></p>
<p>
<font color="#222222"><font face="arial, sans-serif"><font style="font-size:12pt"><font face="Times New Roman, serif"><font style="font-size:10pt">There
will be a month’s extension to the deadline for submitting papers
for the upcoming special issue of the</font></font> <font face="Times New Roman, serif"><font style="font-size:10pt">Neurocomputing
journal on the topic “Multiobjective Reinforcement Learning: Theory
and Applications”</font></font><font face="Times New Roman, serif"><font style="font-size:10pt">.</font></font></font></font></font></p>
<p><font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">The
full Call For Papers is included below. The submission website for
the</font></font></font> <font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">journal
is located at: </font></font></font><a href="http://ees.elsevier.com/neucom/default.asp" target="_blank"><font color="#1155cc"><font face="Times New Roman, serif"><font style="font-size:10pt"><span lang="en-GB">http://ees.elsevier.com/neucom/default.asp</span></font></font></font></a></p>
<p>
<font color="#222222"><font face="arial, sans-serif"><font style="font-size:12pt"><font face="Times New Roman, serif"><font style="font-size:10pt">To
ensure that all manuscripts are correctly identified for inclusion
in the special issue, it is important that authors
select </font></font><font face="Times New Roman, serif"><font style="font-size:10pt"><b>SI:Multiobjective
RL</b></font></font><font face="Times New Roman, serif"><font style="font-size:10pt"> when
they reach the </font></font><font face="Times New Roman, serif"><font style="font-size:10pt"><b>“Article
Type”</b></font></font><font face="Times New Roman, serif"><font style="font-size:10pt"> step
in the submission process.</font></font></font></font></font></p>
<hr size="2">
<p><font color="#222222"><font face="Times New Roman, serif"><font style="font-size:13pt"><b>Multiobjective
Reinforcement Learning: Theory and Applications</b></font></font></font></p>
<p>
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">Many
real-life problems involve multiple objectives. In network routing,
for example, the criteria include energy consumption, latency, and
channel capacity, which inherently conflict. When system designers
want to optimize more than one objective, it is not always clear a
priori which objectives are correlated or how they influence one
another. Because objectives can conflict, there usually exists no
single optimal solution. In those cases, it is desirable to obtain a
set of trade-off solutions between the
objectives.</font></font></font></p>
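Such trade-off sets are commonly characterized via Pareto dominance. As a minimal illustrative sketch (the function names and the toy routing objectives are our own, not part of this call), a set of non-dominated solutions can be filtered like this:

```python
# Sketch of Pareto-dominance filtering for trade-off solution sets.
# All names and the toy data are hypothetical; maximization is assumed
# for every objective (minimized objectives are negated).

def dominates(a, b):
    """True if objective vector a Pareto-dominates b: at least as good
    everywhere and strictly better in at least one objective."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated (trade-off) solutions."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

# Toy example: objectives = (channel capacity, -latency).
candidates = [(10, -5), (8, -2), (6, -1), (7, -4)]
print(pareto_front(candidates))  # (7, -4) is dominated by (8, -2)
```

Here (7, -4) is removed because (8, -2) is better on both objectives; the remaining three solutions are incomparable trade-offs.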
<p>
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">Over
the last decade, this problem has also gained the attention of many
researchers in the field of Reinforcement Learning (RL). RL addresses
sequential decision problems in initially (possibly) unknown
stochastic environments, where the goal is to maximize the agent's
reward even when the environment is not completely observable. Until
now, no journal special issue or book on reinforcement learning has
covered the topic of multiobjective reinforcement
learning.</font></font></font></p>
<p><font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt"><b>State
of the art</b></font></font></font></p>
<p>
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">We
consider the extension of RL to multiobjective (stochastic) rewards
(also called utilities in decision theory). Techniques from
multi-objective optimization are often used in multi-objective RL to
improve the exploration-exploitation tradeoff. Multi-objective
optimization (MOO), a sub-area of multi-criteria decision making
(MCDM), considers the optimization of more than one objective
simultaneously; a decision maker then determines which solutions are
relevant to the user, or when to present candidate solutions to the
user for further consideration. Currently, MOO algorithms are seldom
used for stochastic optimization, which makes this an unexplored but
very promising research area. The resulting algorithms are a hybrid
between MCDM and stochastic optimization: RL algorithms are enriched
with the intuition and computational efficiency of MOO in handling
multi-objective problems.</font></font></font></p>
<p><font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt"><b>Aim
and scope</b></font></font></font></p>
<p>
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">The
main goal of this special issue is to solicit research on
multi-objective reinforcement learning. We encourage submissions</font></font></font></p>
<ul type="disc">
<li><p style="margin-bottom:0in">
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">describing
applications of MO methods in RL with a focus on optimization in
difficult environments that are possibly dynamic, uncertain and
partially observable.</font></font></font></p>
</li><li><p>
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">offering
theoretical insights in online or offline learning approaches for
multi-objective problem domains.</font></font></font></p>
</li></ul>
<p><font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt"><b>Topics
of interest</b></font></font></font></p>
<p>
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">We
enthusiastically solicit papers on relevant topics such as:</font></font></font></p>
<ul type="disc">
<li><p style="margin-bottom:0in">
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">Reinforcement
learning algorithms for solving multi-objective sequential decision
making problems</font></font></font></p>
</li><li><p style="margin-bottom:0in">
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">Dynamic
programming techniques and adaptive dynamic programming techniques
handling multiple objectives</font></font></font></p>
</li><li><p style="margin-bottom:0in">
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">Theoretical
results on the learnability of optimal policies, convergence of
algorithms in qualitative settings, etc.</font></font></font></p>
</li><li><p style="margin-bottom:0in">
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">Decision
making in dynamic and uncertain multi-objective environments</font></font></font></p>
</li><li><p style="margin-bottom:0in">
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">Applications
and benchmark problems for multi-objective reinforcement learning.</font></font></font></p>
</li><li><p style="margin-bottom:0in">
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">Novel
frameworks for multi-objective reinforcement learning</font></font></font></p>
</li><li><p>
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">Real-world
applications in engineering, business, computer science, biological
sciences, scientific computation, etc. in dynamic and uncertain
environments solved with multi-objective reinforcement learning</font></font></font></p>
</li></ul>
<p><font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt"><b>Important
dates</b></font></font></font></p>
<ul type="disc">
<li><p style="margin-bottom:0in">
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">Submissions
open: December 1<sup>st </sup>2015</font></font></font></p>
</li><li><p style="margin-bottom:0in">
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">Submissions
close: March 5<sup>th</sup> 2016</font></font></font></p>
</li><li><p style="margin-bottom:0in">
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">Notification
of acceptance: May 15<sup>th </sup>2016</font></font></font></p>
</li><li><p style="margin-bottom:0in">
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">Final
manuscript due: August 1<sup>st</sup> 2016</font></font></font></p>
</li><li><p>
<font color="#222222"><font face="Times New Roman, serif"><font style="font-size:10pt">Expected
publication date (online): November 2016</font></font></font></p>
</li></ul></div></div>