Connectionists: CFP: CVPR Workshop on Language and Vision

Siddharth N siddharth at iffsid.com
Sun Apr 16 12:07:41 EDT 2017


Second Workshop on Language and Vision @CVPR17
----------------------------------------------

 http://languageandvision.com/

 July 21, 2017 @ Honolulu, HI
 in conjunction with CVPR 2017

CALL FOR PAPERS:

We are calling for 1-4 page extended abstracts to be showcased at a
poster session along with talk spotlights.

Submission deadline: May 31st, 2017, in the timezone of your choice

The interaction between language and vision, despite gaining traction
of late, remains largely unexplored. The topic is particularly
relevant to the vision community because humans routinely perform
tasks that involve both modalities, largely without even noticing.
Every time you ask for an object, ask someone to imagine a scene, or
describe what you're seeing, you perform a task that bridges a
linguistic and a visual representation. The importance of
vision-language interaction is also evident in the many approaches
that cross the two domains, such as image grammars. More concretely,
we have recently seen renewed interest in one-shot learning of object
and event models. Humans go further still, using our linguistic
abilities to perform zero-shot learning without seeing a single
example: you can recognize a picture of a zebra after hearing the
description "horse-like animal with black and white stripes" without
ever having seen one.

Furthermore, integrating language with vision opens the possibility
of expanding the horizons and tasks of the vision community. We have
seen significant growth in image- and video-to-text tasks, but many
other potential applications of such integration, among them question
answering, dialogue systems, and grounded language acquisition,
remain largely unexplored. Going beyond such novel tasks, language
can make a deeper contribution to vision: it provides a prism through
which to understand the world. A major difference between human and
machine vision is that humans form a coherent and global
understanding of a scene. This process is facilitated by our ability
to inform perception with high-level knowledge, which provides
resilience against errors in low-level perception. Language also
provides a framework through which one can learn about the world: it
can describe many phenomena succinctly, thereby helping to filter out
irrelevant details.

Topics covered (non-exhaustive):

language as a mechanism to structure and reason about visual perception,
language as a learning bias to aid vision in both machines and humans,
novel tasks which combine language and vision,
dialogue as a means of sharing knowledge about visual perception,
stories as a means of abstraction,
transfer learning across language and vision,
understanding the relationship between language and vision in humans,
reasoning visually about language problems,
visual captioning, dialogue, and question-answering,
visual synthesis from language,
sequence learning towards bridging vision and language,
joint video and language alignment and parsing, and
video sentiment analysis.

The workshop will also include presentations related to the MSR Video to
Language Challenge, covering data collection, benchmarking, and performance
evaluation.

 http://ms-multimedia-challenge.com/

This challenge aims to foster the development of new techniques for video
understanding, in particular video captioning, with the goal of automatically
generating a complete, natural, and salient sentence describing a video.

Submitted extended abstracts are non-archival and will not be included
in the Proceedings of CVPR 2017. In the interest of fostering a freer
exchange of ideas, we welcome both novel and previously published work.

We also accept full-length submissions, which will similarly not be
included in the Proceedings of CVPR 2017; at the authors' option, we
will provide a link to the corresponding arXiv submission.

Organized by:

Andrei Barbu, MIT
Tao Mei, Microsoft Research, China
Siddharth Narayanaswamy, University of Oxford
Puneet Kumar Dokania, University of Oxford
Quanshi Zhang, UCLA
Nishant Shukla, UCLA
Jiebo Luo, University of Rochester
Rahul Sukthankar, Google Research and CMU

