Connectionists: Deadline approaching: Implications of neural nets for linguistic theory

Volya Kapatsinski vkapatsi at uoregon.edu
Mon Jun 17 07:37:57 EDT 2024


Just a reminder that the deadline to submit abstracts for our special issue of Linguistics Vanguard on the Implications of Neural Networks and other Learning Models for Linguistic Theory is in two weeks (July 1). More information below.

Vsevolod (Volya) Kapatsinski

Special collection: Implications of Neural Networks and other Learning Models for Linguistic Theory (more information here: https://blogs.uoregon.edu/ublab/lmlt/)

Managing Editor: Vsevolod Kapatsinski (University of Oregon)
Co-editor: Gašper Beguš (University of California, Berkeley)

This Linguistics Vanguard (https://www.degruyter.com/journal/key/lingvan/9/1/html#overview) special collection is motivated by the recent breakthroughs in the application of neural networks to language data. Linguistics Vanguard publishes short (3000-4000 word) articles on cutting-edge topics in linguistics and neighboring areas. The inclusion of multimodal and interactive content (including, but not limited to, audio and video, images, maps, software code, raw data, hyperlinks to external databases, and other media enhancing the traditional written word) is particularly encouraged. Contributors to special collections should follow the general submission guidelines for the journal (https://www.degruyter.com/journal/key/lingvan/html#overview).

Overview of the special issue topic:
Neural network models of language have been around for several decades and had become the de facto standard in psycholinguistics by the 1990s. There have also been several important attempts to incorporate neural network insights into linguistic theory (e.g., Bates & MacWhinney, 1989; Bybee, 1985; Bybee & McClelland, 2005; Heitmeier et al., 2021; Smolensky & Legendre, 2006).

However, until recently, neural network models did not approximate the generative capacity of a human speaker or writer. This changed in the last few years, when large language models (e.g., the GPT family), which embody largely the same principles but are trained on vastly larger amounts of data, achieved a breakthrough: the language they generate is now usually indistinguishable from language generated by a human.

The accomplishments of these models have led both to calls for further integration between linguistic theory and neural networks (Beguš, 2020; Kapatsinski, 2023; Kirov & Cotterell, 2018; Pater, 2019; Piantadosi, 2023) and to criticism suggesting that the way they work is fundamentally unlike human language learning and processing (e.g., Bender et al., 2021; Chomsky et al., 2023).

The present special collection for Linguistics Vanguard aims to foster a productive discussion among linguists, cognitive scientists, neural network modelers, neuroscientists, and proponents of other approaches to learning theory (e.g., Bayesian probabilistic inference, instance-based lazy learning, reinforcement learning, active inference; Jamieson et al., 2022; Tenenbaum et al., 2011; Sajid et al., 2021). We call for contributions addressing the central question of linguistic theory (Why are languages the way they are?) by means of a computational modeling approach. Reflections and position papers motivating the best ways to approach this question computationally are also welcome.

Contributions are encouraged to compare different models trained on the same data approximating human experience. Contributions should explicitly address the ways in which the training data of the model(s) they discuss resembles and differs from human experience. Contributions can involve either hypothesis testing via minimally different versions of the same well-motivated model (e.g., Kapatsinski, 2023) or comparisons of state-of-the-art models from different intellectual traditions (e.g., Albright & Hayes, 2003; Sajid et al., 2021) on how well they answer the question above. Insightful position papers will also be accepted.

Research topics within this broad area include:
1) the learning mechanisms and biases needed for modeling humanlike processing from humanlike experience
2) biases and mechanisms required for modeling trajectories of language change through iterated learning and/or use, or linguistic typology
More information is available here: https://blogs.uoregon.edu/ublab/lmlt/

Contributors are asked to submit a one-page, non-anonymous abstract (plus one additional page for figures and references) in .pdf format via the following link: https://oregon.qualtrics.com/jfe/form/SV_e8LaCg8EqKHzjQG.

The abstract should have the title as the top line; author names, affiliations, and emails as the second line; and the body of the abstract as a separate paragraph (or three). Please contact the managing editor, Vsevolod (Volya) Kapatsinski (vkapatsi at uoregon.edu), with any questions.
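For those preparing the abstract in LaTeX, a minimal sketch of this layout follows. The document class, margins, and placeholder names are illustrative assumptions, not journal requirements; any tool that produces an equivalent one-page .pdf is fine.

  \documentclass[11pt]{article}
  \usepackage[margin=1in]{geometry}
  \pagestyle{empty} % a one-page abstract; no page number needed
  \begin{document}
  \begin{center}
  {\bfseries Title of the Abstract}\\ % top line: title
  % second line: author names, affiliations, emails (placeholders below)
  Author One (University A, one@example.edu), Author Two (University B, two@example.edu)
  \end{center}
  Body of the abstract as a separate paragraph (or three).
  % Figures and references, if any, go on the one additional page allowed.
  \end{document}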

Abstracts will be evaluated for relevance to the special collection topic and for overall quality. Contributors of selected abstracts will be invited to submit a full paper (3000-4000 words), which will undergo peer review.

Timeline:
Abstracts due by July 1, 2024
Notification of authors by August 1, 2024
Full papers due by November 1, 2024
Reviews to be completed by January 31, 2025
Publication by March 2025


Best,
Volya

--
Vsevolod Kapatsinski
Professor
Department of Linguistics, University of Oregon
visiting at Department of English, University of Freiburg
Area Editor, Linguistics Vanguard (cog, exp, comp)
blogs.uoregon.edu/ublab/