Connectionists: CFP: AVSEC-3 Challenge Workshop & IEEE JSTSP Special Issue

Amir Hussain hussain.doctor at gmail.com
Mon Apr 1 05:10:09 EDT 2024


Dear Connectionists (with apologies for any cross-postings),



We are pleased to announce the third edition of the International Challenge on Audio-Visual Speech Enhancement (AVSEC-3), a Satellite Workshop of INTERSPEECH 2024, on 1st September 2024 at Kos Island, Greece (submission deadline: 20 June 2024; see details below and at http://challenge.cogmhear.org).


Contributions are also invited to the related Special Issue of the IEEE Journal of Selected Topics in Signal Processing (JSTSP) on "Deep Multimodal Speech Enhancement and Separation" (paper submission deadline: 30 September 2024; see CFP below and at https://signalprocessingsociety.org/publications-resources/special-issue-deadlines/ieee-jstsp-special-issue-deep-multimodal-speech-enhancement-and-separation).


AVSEC-3 Workshop: Important Dates:

16th February 2024: Release of training and development data.

22nd March 2024: Release of low-latency baseline system.

10th April 2024: Evaluation data release.

10th April 2024: Leaderboard open for submissions.

6th May 2024: Paper submission opens.

20th June 2024: Deadline for challenge submissions.

28th June 2024: Paper submission closes.

12th July 2024: Acceptance notification.

26th July 2024: Early release of evaluation results.

1st August 2024: Camera-ready papers due.


The AVSEC Challenge sets the first benchmark in the field by providing a carefully crafted dataset and scalable protocol for human listening evaluation of audio-visual speech enhancement systems. The open AVSEC framework aims to foster collaborative research and innovation to facilitate the development and evaluation of next-generation audio-visual speech enhancement and separation systems, including multimodal assistive hearing and communication technologies.


The success of the two previous editions of the Challenge (organized as part of IEEE SLT 2022 and IEEE ASRU 2023) demonstrates a consistent trend of system improvement, yet highlights an enduring intelligibility gap when compared to clean speech. We anticipate that the third edition will further enhance system performance and provide a networking and collaborative platform for deliberating on the scope, challenges and opportunities in co-designing and evaluating future speech and hearing technologies.


To register for the challenge please follow the guidelines on the website:

https://challenge.cogmhear.org/#/getting-started/register


We welcome submissions from participants of the second (AVSEC-2) and third editions (AVSEC-3) of the Challenge and also invite submissions on related research topics, including but not limited to the following:

- Low-latency approaches to audio-visual speech enhancement and separation.

- Human auditory-inspired models of multi-modal speech perception and enhancement.

- Energy-efficient audio-visual speech enhancement and separation methods.

- Machine learning for diverse target listeners and diverse listening scenarios.

- Audio quality and intelligibility assessment of audio-visual speech enhancement systems.

- Objective metrics to predict quality and intelligibility from audio-visual stimuli.

- Understanding human speech perception in competing speaker scenarios.

- Clinical applications and live demonstrators of audio-visual speech enhancement and separation (e.g., multi-modal hearing assistive technologies for hearing-impaired listeners; speech-enabled communication aids to support autistic people with speech disorders).

- Accessibility and human-centric factors in the design and evaluation of innovative multimodal technologies, including multi-modal corpus development, public perceptions, ethics considerations, standards, societal, economic and political impacts.


Accepted Workshop papers (both short 2-page papers and full-length papers of 4-6 pages) will be published in ISCA Proceedings. Authors of selected papers (including winners and runners-up of each Challenge Track) will be invited to submit significantly extended papers for consideration in a Special Issue of the IEEE Journal of Selected Topics in Signal Processing (JSTSP) on "Deep Multimodal Speech Enhancement and Separation" (CFP available below and here: https://signalprocessingsociety.org/publications-resources/special-issue-deadlines/ieee-jstsp-special-issue-deep-multimodal-speech-enhancement-and-separation; full manuscript submission deadline: 30 September 2024).


CFP: IEEE Journal of Selected Topics in Signal Processing (JSTSP) Special Issue on: Deep Multimodal Speech Enhancement and Separation


https://signalprocessingsociety.org/publications-resources/special-issue-deadlines/ieee-jstsp-special-issue-deep-multimodal-speech-enhancement-and-separation


Manuscript Due: 30 September 2024

Publication Date: May 2025

Scope

Voice is the most commonly used modality by humans to communicate and psychologically blend into society. Recent technological advances have triggered the development of various voice-related applications in the information and communications technology market. However, noise, reverberation, and interfering speech are detrimental to effective communication, both between humans and between humans and machines, leading to performance degradation of associated voice-enabled services. To address the formidable speech-in-noise challenge, a range of speech enhancement (SE) and speech separation (SS) techniques are normally employed as important front-end speech processing units to handle distortions in input signals, in order to provide more intelligible speech for automatic speech recognition (ASR), synthesis and dialogue systems. Emerging advances in artificial intelligence (AI) and machine learning, particularly deep neural networks, have led to remarkable improvements in SE and SS solutions. A growing number of researchers have explored various extensions of these methods by utilising a variety of modalities as auxiliary inputs to the main speech processing task, to access additional information from heterogeneous signals. In particular, multi-modal SE and SS systems have been shown to deliver enhanced performance in challenging noisy environments by augmenting the conventional speech modality with complementary information from multi-sensory inputs, such as video, noise type, signal-to-noise ratio (SNR), bone-conducted speech (vibrations), speaker, text information, electromyography, and electromagnetic midsagittal articulometer (EMMA) data. Various integration schemes, including early and late fusion, cross-attention mechanisms, and self-supervised learning algorithms, have also been successfully explored.

Topics

This timely special issue aims to collate the latest advances in multi-modal SE and SS systems that exploit both conventional and unconventional modalities to further improve state-of-the-art performance on benchmark problems. We particularly welcome submissions on novel deep neural network based algorithms and architectures, including new feature processing methods for multimodal and cross-modal speech processing. We also encourage submissions that address practical issues related to multimodal data recording, energy-efficient system design and real-time low-latency solutions, such as for assistive hearing and speech communication applications.

Special Issue research topics of interest relate to open problems that need to be addressed. These include, but are not limited to, the following:

  *   Novel acoustic features and architectures for multi-modal SE (MM-SE) and multi-modal SS (MM-SS) solutions.

  *   Self-supervised and unsupervised learning techniques for MM-SE and MM-SS systems.

  *   Adversarial learning for MM-SE and MM-SS.

  *   Large language model-based generative approaches for MM-SE and MM-SS.

  *   Low-delay, low-power, low-complexity MM-SE and MM-SS models.

  *   Integration of multiple data acquisition devices for multimodal learning and novel learning algorithms robust to imperfect data.

  *   Few-shot/zero-shot learning and adaptation algorithms for MM-SE and MM-SS systems with a small amount of training and adaptation data.

  *   Approaches that effectively reduce model size and inference cost without reducing the speech quality and intelligibility of processed signals.

  *   Novel objective functions, including psychoacoustic and perceptually motivated loss functions, for MM-SE and MM-SS.

  *   Holistic evaluation metrics for MM-SE and MM-SS systems.

  *   Real-world applications and use cases of MM-SE and MM-SS, including human-human and human-machine communications.

  *   Challenges and solutions in the integration of MM-SE and MM-SS into existing systems.

We encourage submissions that not only propose novel approaches but also substantiate the findings with rigorous evaluations, including real-world datasets. Studies that provide insights into the challenges involved and the impact of MM-SE and MM-SS on end-users are particularly welcome.

Submission Guidelines

Manuscripts should be original and should not have been previously published or currently under consideration for publication elsewhere. All submissions will be peer-reviewed according to the IEEE Signal Processing Society review process. Authors should prepare their manuscripts according to the Instructions for Authors available from the Signal Processing Society website.

Follow the instructions given on the IEEE JSTSP webpage (https://signalprocessingsociety.org/publications-resources/ieee-journal-selected-topics-signal-processing) and submit manuscripts at https://mc.manuscriptcentral.com/jstsp-ieee.

Important Dates

Manuscript Submission Deadline: 30 September 2024

First Review Due: 15 December 2024

Revised Manuscript Due: 15 January 2025

Second Review Due: 15 February 2025

Final Decision: 28 February 2025


Guest Editors

For further information, please contact the guest editors at:

Amir Hussain, Edinburgh Napier University, UK (Lead GE)

Yu Tsao, Academia Sinica, Taiwan (co-Lead GE)

John H.L. Hansen, University of Texas at Dallas, USA

Naomi Harte, Trinity College Dublin, Ireland

Shinji Watanabe, Carnegie Mellon University, USA

Isabel Trancoso, Instituto Superior Técnico, IST, Univ. Lisbon, Portugal

Shixiong Zhang, Tencent AI Lab, USA



We look forward to your submissions.


Many thanks in advance,


Prof Amir Hussain

Edinburgh Napier University, Scotland, UK

E-mail: A.Hussain at napier.ac.uk

https://www.napier.ac.uk/people/amir-hussain

http://cogmhear.org


