<div dir="ltr"><span id="gmail-docs-internal-guid-560f7168-7fff-2c3a-de7d-943984183239"><br><p dir="ltr" style="line-height:1.656;text-align:center;margin-top:0pt;margin-bottom:0pt"><span style="font-size:12pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">Multimodal Algorithmic Reasoning Workshop (MAR 2025)</span></p><p dir="ltr" style="line-height:1.656;text-align:center;margin-top:0pt;margin-bottom:0pt"><span style="font-size:12pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">June 11 or 12th, 2025, Nashville, TN</span></p><p dir="ltr" style="line-height:1.656;text-align:center;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">Held in conjunction with CVPR 2025</span></p><p dir="ltr" style="line-height:1.656;text-align:center;margin-top:0pt;margin-bottom:0pt"><a href="https://marworkshop.github.io/cvpr25/" style="text-decoration-line:none"><span style="font-size:10pt;font-family:Arial,sans-serif;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;text-decoration-line:underline;vertical-align:baseline">https://marworkshop.github.io/cvpr25/</span></a></p><br><br><p dir="ltr" style="line-height:1.656;text-align:center;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">CALL FOR CONTRIBUTIONS</span></p><br><p dir="ltr" style="line-height:1.656;text-align:justify;background-color:rgb(255,253,250);margin-top:0pt;margin-bottom:0pt;padding:12pt 0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">Deep learning–powered AI systems have rapidly advanced in their data modeling capabilities, yielding compelling applications that often seem to rival human intelligence. Despite these impressive achievements, questions remain about whether these systems possess the foundational elements of general intelligence, or whether they simply excel at task-specific computations without human-like understanding. 
Addressing these questions calls for new methods of both developing and assessing such models.

In this workshop, we aim to bring together researchers working in neural algorithmic learning, multimodal reasoning, and cognitive models of intelligence to showcase cutting-edge research, tackle current challenges, and highlight critical yet underexplored problems in perception and language modeling that lie at the core of achieving true artificial general intelligence. A key focus is the emerging field of multimodal algorithmic reasoning, which explores neural representations of algorithms to devise novel solutions for real-world tasks. These span a wide range of areas, including multimodal learning; algorithms over foundation models for solving analysis, synthesis, or planning problems; mathematical problem-solving; procedural learning in robotic manipulation; and more.
Our goal is to delve deeply into this exciting intersection of multimodal algorithmic learning and cognitive science, reflecting on the current progress in machine intelligence while examining the gaps that distinguish it from human cognition. Through talks by leading researchers and faculty, we aim to inspire participants to explore the "missing rungs" on the ladder to true intelligence.

We invite you to submit high-quality papers that propose innovative approaches, theoretical insights, or practical applications advancing this exciting field, and that foster meaningful discussions and collaborations.

___________________________________________________________________________
IMPORTANT DATES & DETAILS
Submission deadline: ***March 12, 2025*** (11:59 PM PDT)
Rebuttal period: March 25-26, 2025
Paper decisions to authors: April 3, 2025
Camera-ready deadline: April 7, 2025
style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">___________________________________________________________________________</span></p><br><br><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">TOPICS</span></p><br><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">We invite submissions of high-quality research papers in the topics related to multimodal algorithmic reasoning. The topics for MAR 2025 include, but are not limited to:</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Multimodal machine reasoning</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Algorithmic reasoning in vision, including program synthesis, planning, and procedural learning</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Neural architectures and approaches for mathematical reasoning</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Architectures for aligning/integrating multimodal foundation models, including vision, language, audio, and 3D content.</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Architectures for solving abstract multimodal reasoning/language-based IQ puzzles, e.g., using sketches, diagrams, audio-visual clips, etc.</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* New tasks, datasets, benchmarks, and models for multimodal reasoning including algorithmic reasoning, neuro-symbolic reasoning, abstract reasoning, mathematical reasoning, etc.</span></p><p dir="ltr" 
style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Extreme generalization to new tasks and few-shot concept induction</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Synthetic data and automatic verification for reasoning</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Multimodal agents including programmable agent, tool-use agent, etc., for reasoning tasks</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Position papers on novel perspectives to understand AI and human problem solving</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Studies comparing AI and human problem solving skills, including but not limited to: i) Perspectives from psychology, neuroscience, and educational science, ii) Children's cognitive development, and iii) Limitations of large vision-and-language models</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Vision-and-language applications.</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">___________________________________________________________________________</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">SUBMISSION INSTRUCTIONS FOR PAPER TRACK</span></p><br><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">We have two tracks for paper submissions:</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span 
style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">      1. Papers with IEEE/CVF workshop proceedings (≤ 8 pages)</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">      2. Papers without workshop proceedings (≤ 8 pages)</span></p><br><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(33,33,33);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">For track 1, we are inviting only original, previously unpublished papers, and dual submissions are not allowed. </span><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(47,49,56);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">The page limits described above are excluding the references. Papers accepted to track 2 will not be included in the proceedings, however will be publicly shared on the workshop website. The submissions to this track can be novel/ongoing work (limited to 4 pages) or accepted/previously published papers (limited to 8 pages), both excluding references. Please see the workshop website for more details.</span></p><br><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* </span><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(33,33,33);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">All submissions are handled via the workshop’s CMT website: </span><a href="https://cmt3.research.microsoft.com/MAR2025/" style="text-decoration-line:none"><span style="font-size:10pt;font-family:Arial,sans-serif;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;text-decoration-line:underline;vertical-align:baseline">https://cmt3.research.microsoft.com/MAR2025/</span></a><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(33,33,33);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">.</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Submissions should be made in PDF format and should follow the official CVPR 2025 template and guidelines. 
</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* All submissions should maintain author anonymity and should abide by the CVPR conference guidelines for double-blind review.</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Accepted papers will be presented as either an oral, spotlight, or poster presentation. At least one author of each accepted submission must present the paper at the workshop. </span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Presentation of accepted papers at our workshop will follow the same policy as that for accepted papers at the CVPR main conference</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Papers accepted in track 1 will be part of the CVPR 2025 workshop proceedings.</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">* Authors may optionally upload supplementary materials, the deadline for which is the same as that of the main paper and should be submitted separately.</span></p><br><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">___________________________________________________________________________</span><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline"><br></span><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">WORKSHOP ORGANIZERS</span></p><br><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><a href="http://users.cecs.anu.edu.au/~cherian/" style="text-decoration-line:none"><span style="font-size:10pt;font-family:Arial,sans-serif;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;text-decoration-line:underline;vertical-align:baseline">Anoop Cherian</span></a><span 
style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">, Mitsubishi Electric Research Laboratories</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><a href="https://www.merl.com/people/kpeng" style="text-decoration-line:none"><span style="font-size:10pt;font-family:Arial,sans-serif;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;text-decoration-line:underline;vertical-align:baseline">Kuan-Chuan Peng</span></a><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">, Mitsubishi Electric Research Laboratories</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><a href="https://www.merl.com/people/slohit" style="text-decoration-line:none"><span style="font-size:10pt;font-family:Arial,sans-serif;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;text-decoration-line:underline;vertical-align:baseline">Suhas Lohit</span></a><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">, Mitsubishi Electric Research Laboratories</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><a href="https://sites.google.com/view/hongluzhou/" style="text-decoration-line:none"><span style="font-size:10pt;font-family:Arial,sans-serif;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;text-decoration-line:underline;vertical-align:baseline">Honglu Zhou</span></a><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">, Salesforce AI Research</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><a href="http://www.mit.edu/~k2smith/" style="text-decoration-line:none"><span style="font-size:10pt;font-family:Arial,sans-serif;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;text-decoration-line:underline;vertical-align:baseline">Kevin A. Smith</span></a><span style="font-size:10pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">, Massachusetts Institute of Technology</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><a href="https://www.merl.com/people/tmarks" style="text-decoration-line:none"><span style="font-size:10pt;font-family:Arial,sans-serif;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;text-decoration-line:underline;vertical-align:baseline">Tim K. 
Joshua B. Tenenbaum, Massachusetts Institute of Technology (http://web.mit.edu/cocosci/josh.html)

___________________________________________________________________________
CONTACT
Email: smart101@googlegroups.com
Website: https://marworkshop.github.io/cvpr25/