
Comparing Children and Large Language Models in Word Sense Disambiguation: Insights and Challenges

Abstract

Understanding how children process ambiguous words is a challenge because sense disambiguation depends on both bottom-up and top-down aspects of sentence context. Here, we seek insight into this phenomenon by investigating how such a competence might arise in large distributional learners (Transformers) that purport to acquire sense representations from language input in a largely unsupervised fashion. We investigated how sense disambiguation might be achieved using model representations derived from naturalistic child-directed speech. We tested a large pool of Transformer models, varying in the size and nature of their pretraining input as well as in the size of their parameter space. Across three behavioral experiments from the developmental literature, we found that these models capture some essential properties of child sense disambiguation, although most still struggle with the more challenging tasks involving contrastive cues. We discuss implications both for theories of word learning and for using Transformers to capture child language processing.
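To illustrate the general probing approach the abstract describes, the sketch below shows one common way to test sense disambiguation with a Transformer's contextual representations: embed an ambiguous word in a probe sentence and compare it against embeddings of the same word in sense-disambiguating reference contexts. This is a minimal illustration, not the authors' method; the model choice (`bert-base-uncased`), the `word_embedding` helper, and the example sentences are all assumptions for demonstration.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Mean of final-layer hidden states over the subword tokens of `word`.

    Hypothetical helper for illustration; finds the first occurrence of the
    word's subword span in the tokenized sentence.
    """
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(word_ids) + 1):
        if ids[i : i + len(word_ids)] == word_ids:
            return hidden[i : i + len(word_ids)].mean(dim=0)
    raise ValueError(f"{word!r} not found in tokenized sentence")

# The probe uses "bat" ambiguously; the references pin down each sense.
probe = word_embedding("She swung the bat at the ball.", "bat")
animal = word_embedding("The bat flew out of the cave at night.", "bat")
sports = word_embedding("He hit a home run with his baseball bat.", "bat")

cos = torch.nn.functional.cosine_similarity
print(f"animal sense: {cos(probe, animal, dim=0):.3f}")
print(f"sports sense: {cos(probe, sports, dim=0):.3f}")  # expected to be higher
```

Under this kind of probe, a model "disambiguates" correctly when the probe embedding is closer to the contextually appropriate sense than to the alternative; the same representations can in principle be derived from models pretrained on child-directed speech rather than web text.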
