<!DOCTYPE html>
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-forward-container">
<div class="moz-forward-container">
<meta http-equiv="content-type"
content="text/html; charset=UTF-8">
<br>
<div class="moz-forward-container">
<div class="moz-forward-container">
<div class="moz-forward-container">
<div id="website_title">
<div><b>DEEPK 2024<br>
<i>International Workshop on Deep Learning and
Kernel Machines</i></b><br>
</div>
<div><br>
</div>
<div>March 7-8, 2024, Arenberg Castle, Leuven, Belgium<br>
<a class="moz-txt-link-freetext"
href="https://www.esat.kuleuven.be/stadius/E/DEEPK2024"
moz-do-not-send="true">https://www.esat.kuleuven.be/stadius/E/DEEPK2024</a></div>
<div><br>
</div>
<div><b><i>- Main scope -</i></b><br>
</div>
<div><br>
</div>
<div>Major progress and impact have been achieved through
deep learning architectures, with many exciting
applications such as generative models and
transformers. At the same time, this progress raises
new questions on the fundamental possibilities and
limitations of these models, with respect to
representations, scalability, learning and
generalization. Kernel-based methods have often
provided a deeper understanding and solid foundations,
complementary to the powerful and flexible deep
learning architectures. Recent examples are
understanding the generalization of over-parameterized
models through the double descent phenomenon and
conceiving attention mechanisms in transformers as
kernel machines. The aim of DEEPK 2024 is to provide a
multi-disciplinary forum where researchers from
different communities can meet and find new synergies
between deep learning and kernel machines, at the
level of both theory and applications. <br>
</div>
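<div><br>
</div>
<div>As a simple illustration of the latter connection,
softmax attention can be read as a kernel smoother with the
asymmetric kernel k(q, k<sub>j</sub>) = exp(q·k<sub>j</sub>/√d).
The short NumPy sketch below is one minimal, illustrative way
to make the two views coincide (the function names are chosen
here for clarity, not taken from any workshop material):<br>
</div>
<pre>import numpy as np

def softmax_attention(Q, K, V):
    """Standard scaled dot-product attention."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V

def kernel_smoother_attention(Q, K, V):
    """The same map written as a kernel smoother:
    out_i = sum_j k(q_i, k_j) v_j / sum_j k(q_i, k_j),
    with the asymmetric kernel k(q, k) = exp(q.k / sqrt(d))."""
    d = Q.shape[-1]
    kern = np.exp(Q @ K.T / np.sqrt(d))               # asymmetric kernel matrix
    return (kern @ V) / kern.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 5, 4))              # 5 tokens of dimension 4
assert np.allclose(softmax_attention(Q, K, V),
                   kernel_smoother_attention(Q, K, V))</pre>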
<div><br>
</div>
<div><b><i>- Topics - </i></b><br>
</div>
<div><br>
</div>
<div>Topics include but are not limited to:<br>
<ul>
<li>Deep learning and generalization </li>
<li>Double descent phenomenon and over-parameterized
models </li>
<li>Transformers and asymmetric kernels </li>
<li>Attention mechanisms, kernel singular value
decomposition </li>
<li>Learning with asymmetric kernels </li>
<li>Duality and deep learning </li>
<li>Regularization schemes, normalization </li>
<li>Neural tangent kernel </li>
<li>Deep learning and Gaussian processes </li>
<li>Transformers, support vector machines and least
squares support vector machines </li>
<li>Autoencoders, neural networks and kernel methods
</li>
<li>Kernel methods in GANs, variational
autoencoders, diffusion models, Generative Flow
Networks </li>
<li>Generative kernel machines </li>
<li>Deep Kernel PCA, deep kernel machines, deep
eigenvalues, deep eigenvectors </li>
<li>Restricted Boltzmann machines, restricted kernel
machines, deep learning, energy-based models </li>
<li>Disentanglement and explainability </li>
<li>Tensors, kernels and deep learning </li>
<li>Convolutional kernels </li>
<li>Sparsity, robustness, low-rank representations,
compression </li>
<li>Nyström method, Nyströmformer </li>
<li>Efficient training methods </li>
<li>Lagrange duality, Fenchel duality, estimation in
Hilbert spaces, reproducing kernel Hilbert spaces,
vector-valued reproducing kernel Hilbert spaces,
Krein spaces, Banach spaces, RKHS and C*-algebra</li>
<li>Applications</li>
</ul>
<br>
</div>
<div><b><i>- Invited Speakers -</i></b><br>
</div>
<div>
<ul>
<li><a href="http://misha.belkin-wang.org/"
moz-do-not-send="true">Mikhail Belkin</a>
(University of California San Diego)<br>
</li>
<li><a href="https://www.epfl.ch/labs/lions/"
moz-do-not-send="true">Volkan Cevher</a> (EPFL)<br>
</li>
<li><a href="https://perso.telecom-paristech.fr/fdalche/">Florence d'Alché-Buc</a>
(Télécom Paris, Institut Polytechnique de Paris)<br>
</li>
<li><a href="https://lear.inrialpes.fr/people/mairal/">Julien Mairal</a> (INRIA)<br>
</li>
<li><a href="https://www.iit.it/people/massimiliano-pontil">Massimiliano Pontil</a>
(IIT and University College London)<br>
</li>
<li><a href="https://www.maths.usyd.edu.au/ut/people?who=DX_Zhou&amp;sms=y">Ding-Xuan Zhou</a>
(University of Sydney)<br>
</li>
</ul>
</div>
<div><br>
</div>
<div><b><i>- Call for abstracts -</i></b></div>
<div><br>
</div>
<div>The DEEPK 2024 program will include <b>oral and
poster sessions</b>. Interested participants are
cordially invited to submit an <b>extended abstract
(max. 2 pages)</b> for their contribution. Please
prepare your extended abstract in LaTeX according to
the provided style file, and submit it in PDF format.
Further information on extended abstracts is given at
<a href="https://www.esat.kuleuven.be/stadius/E/DEEPK2024/call_for_abstracts.php">https://www.esat.kuleuven.be/stadius/E/DEEPK2024/call_for_abstracts.php</a>.</div>
<div><br>
</div>
<div><b><i>- Schedule - </i></b><br>
</div>
<div>
<ul>
<li><b>Extended abstract submission deadline:</b><br>
<b>Feb 15, 2024 (extended from Feb 8, 2024)</b></li>
<li>Notification of acceptance and presentation
format (oral/poster):<br>
Feb 22, 2024 </li>
<li>Deadline for registration:<br>
Feb 29, 2024 <br>
</li>
<li><b>International Workshop DEEPK 2024:</b><br>
<span style="color:#990000;font-weight:bold"> </span><span
style="font-weight: bold;">March 7-8, 2024</span>
</li>
</ul>
</div>
<div><b><i>- Organizing committee - </i></b><br>
</div>
<div><br>
</div>
<div>Johan Suykens (Chair), Alex Lambert, Panos
Patrinos, Qinghua Tao, Francesco Tonin</div>
<div><br>
</div>
<div><b><i>- Other info -</i></b></div>
<div><br>
</div>
<div>Please consult the DEEPK 2024 website <a
href="https://www.esat.kuleuven.be/stadius/E/DEEPK2024">https://www.esat.kuleuven.be/stadius/E/DEEPK2024</a>
for information on the program, registration, location,
and venue. The event is co-sponsored by the ERC Advanced
Grant E-DUALITY and KU Leuven.<br>
</div>
</div>
</body>
</html>