Individuals with Higher Levels of Autistic Traits Show More Lexically-Guided Perceptual Learning
Speech is inherently variable: people pronounce words differently depending on, for example, accent, dialect, gender, or age. To cope with this variation, we continuously tune in to individual speakers during social interaction. More specifically, we use lexical and phonotactic knowledge to decode ambiguous speech sounds, and we adjust our phonetic categories (in broad terms, our expectations of how phonemes ought to sound) to include such ambiguous sounds. This phenomenon, referred to as lexically-guided perceptual learning, is important for our ability to communicate effectively.
Lexically-guided perceptual learning is also relevant to a family of recent theories of autistic perception formulated within the Bayesian and predictive-coding computational frameworks. These accounts propose that autistic individuals exhibit fundamental limitations in calibrating their perceptual systems to the current environment using knowledge accrued from recent sensory experience. Previous studies have examined such limitations in non-linguistic domains. In this study, we examine whether they are also present in speech perception. If so, we would predict that autistic individuals, and individuals with higher levels of autistic traits, show less pronounced lexically-guided perceptual learning effects than individuals with lower levels of autistic traits.
We sought to test this prediction by examining the relationship between lexically-guided perceptual learning and autistic traits in adults.
In an ongoing study, we tested 47 adults (22 female) aged between 18 and 64 (M = 26.07, SD = 10.80). Each participant completed a lexically-guided perceptual learning paradigm from an existing study (Drozdova et al., 2015) and subsequently completed the Autism-Spectrum Quotient (AQ) questionnaire.
The lexically-guided perceptual learning paradigm had an ‘exposure-test’ design. Participants were first exposed to a short story containing an ambiguous sound from the [l/r] continuum. Each participant heard one of two versions of the story (learning conditions), in which the ambiguous sound replaced either all [r] sounds or all [l] sounds. Next, participants completed a phonetic-categorization task on a continuum of ambiguous [l/r] sounds.
We analysed the responses in the phonetic-categorization task with mixed-effects statistical modelling, using the number of responses consistent with the learning condition as the dependent variable, as in the original study. Here, the fitted statistical model also included effects of autistic symptomatology.
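As a rough illustration of this type of analysis (not the authors' actual code, data, or model specification), the sketch below fits a mixed-effects model to simulated categorization data, predicting condition-consistent responses from continuum step and an AQ score, with random intercepts per participant. The column names, the AQ range, the linear (rather than generalized) model, and the toy generative rule are all illustrative assumptions.

```python
# Hypothetical sketch of a mixed-effects analysis of lexically-guided
# perceptual learning; data and model structure are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_steps = 47, 6
rows = []
for s in range(n_subj):
    aq = int(rng.integers(5, 35))      # assumed AQ score range
    subj_bias = rng.normal(0, 0.5)     # per-participant random intercept
    for step in range(n_steps):
        # Toy rule: learning effect strongest at early continuum steps,
        # slightly larger for higher AQ (the direction reported here).
        mu = 6 + subj_bias - 0.8 * step + 0.05 * aq
        rows.append({"subject": s, "aq": aq, "step": step,
                     "consistent": max(0.0, mu + rng.normal(0, 1))})
df = pd.DataFrame(rows)

# Linear mixed model: fixed effects of step, AQ, and their interaction;
# random intercepts grouped by participant.
model = smf.mixedlm("consistent ~ step * aq", df, groups=df["subject"])
result = model.fit()
print(result.params["step:aq"])  # does AQ modulate the learning effect?
```

A generalized (e.g., logistic) mixed model would be the more typical choice for categorization responses; the linear form above is used only to keep the sketch short and self-contained.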
Analyses showed that, on average, our participants presented reliable lexically-guided perceptual learning effects, which, as in the original study, manifested at the first steps of the [l/r] continuum. Contrary to our prediction, individuals with higher levels of autistic traits presented more pronounced lexically-guided perceptual learning effects than individuals with lower levels of autistic traits.
The patterns of individual variability in lexically-guided perceptual learning in our study run in the opposite direction to the prediction derived from recent Bayesian and predictive-coding accounts of autistic perception. Our results challenge these accounts to accommodate individual differences in speech processing.
Our results are instead consistent with accounts proposing enhanced perceptual abilities in autistic individuals and in individuals with higher levels of autistic traits. Our current work explicitly examines differences between autistic and neurotypical individuals in lexically-guided perceptual learning.