Sign Language Grammars, Parsing Models, & the Brain
Interdisciplinary Workshop, 6-7 November 2025, Max Planck Institute for Human Cognitive & Brain Sciences

Carlo Cecchetto

The Challenge of Simultaneity for Formal Accounts of Sign Language Grammars

Abstract: In sign languages, different articulators (e.g., the two hands, facial expressions, etc.) can simultaneously represent different characters involved in the same action or different facets of the same event. In this talk, I will ask to what extent these massive cases of simultaneity can be integrated into the core grammatical system of sign languages and to what extent they can be described with the analytical tools of spoken language linguistics. My answer to these questions will build on the assumption that, as in bimodal bilingual productions, signers can simultaneously utter two propositional units or two constituents within the same clause.

David P. Corina

Neurobiological Perspectives on Sign Production: Implications for Predictive Models of Sign Language Processing

Abstract: Neurolinguistic research on signed languages has significantly advanced our understanding of sign language structure and processing. In this talk, I present two case studies that offer novel insights into the production of American Sign Language (ASL), highlighting distinct roles for the somatosensory and proprioceptive systems. The first case involves electrocorticography (ECoG) recordings from a deaf ASL signer undergoing awake neurosurgery. We analyze the neural responses associated with articulatory features, specifically handshape and location, focusing on their timing and cortical specificity during sign production. The second case features a deaf individual with conduction aphasia, whose sign language errors suggest a disruption in somatosensory feedback mechanisms involved in hierarchical motor planning. Together, these findings support a forward model of sign language production, emphasizing the central role of articulatory location and motivating further exploration of holistic “postures” in sign perception.

Karen Emmorey

From Perception to Phonology: Neural Tuning for Visual-Manual Phonological Structure

Abstract: Many linguistic and psycholinguistic studies have demonstrated that sign languages exhibit phonological structure that is parallel, but not identical, to that of spoken languages. I will present ERP and fMRI data that are beginning to reveal how the brain adapts to phonological structure in a visual-manual language. Our ERP studies demonstrate that deaf ASL signers, in contrast to hearing non-signers, exhibit form-priming effects on the N400 component that differ for shared handshape vs. place of articulation (location on the body). An fMRI adaptation study reveals repetition suppression (reduced BOLD response) in bilateral parietal cortex for signs with handshape overlap (compared to no phonological overlap); for place of articulation, repetition suppression was observed bilaterally in the extrastriate body area (EBA). These effects were not observed for hearing non-signers. Thus, the EBA appears to become neurally tuned to where the hands are positioned on the body (place of articulation) rather than to hand configurations, and neural representations of linguistic handshapes may reside in parietal cortex. Overall, these studies indicate how psycholinguistic processes and linguistic units are mapped onto a functional neuroanatomical network for a visual-manual language.

Vadim Kimmelman

Can Computer Vision Provide Insight Into the Nature of Nonmanual Markers in Sign Languages?

Abstract: Recent developments in Computer Vision have made it possible to track and measure the movements of signers in video recordings, including the nonmanual components: head and eyebrow movements, eye blinks and eye gaze, mouth shapes, etc. Previous theoretical and empirical research in sign linguistics has demonstrated the crucial role that nonmanual markers play in sign language grammar and use. These technological developments offer the possibility to investigate the subtle formal properties of the markers, that is, their kinematics. These formal properties have consequences for theoretical accounts of nonmanual markers, and possibly for future experimental research on them. In my talk, I will discuss some case studies of nonmanuals using Computer Vision, their theoretical implications, and methodological limitations.
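As a rough illustration of the kind of measurement such tooling makes possible (not part of the talk itself), the sketch below uses the MediaPipe Face Mesh model and OpenCV to extract a per-frame eyebrow-height trace from a signing video; the input file name and the specific landmark indices are illustrative assumptions, not values taken from the presented studies.

# Minimal sketch: tracking a nonmanual marker (eyebrow raising) with Computer Vision.
# Assumes MediaPipe Face Mesh and OpenCV; landmark indices are approximate choices.
import cv2
import mediapipe as mp

BROW_LEFT, BROW_RIGHT, NOSE_TIP = 105, 334, 1  # approximate Face Mesh indices

face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False, max_num_faces=1, refine_landmarks=True
)

cap = cv2.VideoCapture("signer_recording.mp4")  # hypothetical input video
brow_raise_trace = []  # per-frame eyebrow height relative to the nose tip

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        brow_raise_trace.append(None)  # no face detected in this frame
        continue
    lm = results.multi_face_landmarks[0].landmark
    # Normalised vertical distance between the brows and the nose tip:
    # larger values suggest raised eyebrows, a candidate nonmanual marker.
    brow_y = (lm[BROW_LEFT].y + lm[BROW_RIGHT].y) / 2
    brow_raise_trace.append(lm[NOSE_TIP].y - brow_y)

cap.release()
face_mesh.close()

The resulting trace can then be aligned with manual signs or clause boundaries to examine the kinematics of the marker over time.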

Note: Due to logistical reasons, this presentation will be given remotely.

Rachel I. Mayberry

Childhood Language Shapes the Adult Brain-Language System: Insights from American Sign Language

Abstract: Young children typically develop language quickly and show responses in the brain language system shortly after birth, giving the impression that the human language faculty requires little or no linguistic experience to become fully functional. But is this an accurate picture of how the brain language system acquires its functionality? Children born deaf cannot hear the language spoken around them, and the lipreading signal is too impoverished to enable spontaneous language acquisition. The majority of children born deaf do not encounter sign language until they have matured well beyond infancy. This naturally occurring variation in linguistic experience as a function of biological maturation over childhood and adolescence provides a unique window into how the brain language system develops its capacity to parse linguistic structure. The results of behavioral, neurolinguistic, and anatomical studies investigating the sequelae of this unique situation all show that the adult ability to comprehend and process linguistic structure is shaped by the timing of language experience in relation to neural maturation.

Note: Due to logistical reasons, this presentation will be given remotely.

Marloes Oomen

Do Signers Interpret R-Loci as Regions or Points? Insights From an Online Probe Recognition Task

Abstract: Sign languages use space referentially by associating discourse referents with R(eferential)-loci. The traditional model of the referential use of space assumes that R-loci can, in principle, be set up anywhere in horizontal signing space. However, this implies that there are infinitely many potential R-loci, which poses a theoretical problem (it makes R-loci ‘unlistable’) and also seems empirically inaccurate. Recently, it has been proposed that R-loci represent regions—not points—in space, with referents associated with spatial regions via a recursive system of maximal contrast. In this talk, I present an online probe recognition task in which 30 deaf signers of Dutch Sign Language (NGT) participated. This reaction-time-based experiment was designed to discover which of these theoretical proposals is best supported empirically. The results offer us greater insight into the processing of anaphoric elements in languages rooted in the visuo-spatial modality.