Weak Organization of Semantic Categories in Young Children with ASD

Friday, May 12, 2017: 3:04 PM
Yerba Buena 8 (Marriott Marquis Hotel)
C. E. Venker1, E. Premo2, T. Mahr1, J. Edwards3, J. R. Saffran4 and S. Ellis-Weismer5, (1)Waisman Center, University of Wisconsin-Madison, Madison, WI, (2)University of Wisconsin - Madison, Madison, WI, (3)Hearing and Speech Sciences, University of Maryland, College Park, MD, (4)Psychology, University of Wisconsin-Madison, Madison, WI, (5)University of Wisconsin-Madison, Madison, WI

Young children with ASD show striking delays in vocabulary development, but we are only beginning to understand why this is the case. One under-explored possibility is that these children have trouble organizing early-learned words into semantic categories (e.g., clothing items, food, animals). Failing to categorize words based on semantic similarities could disrupt lexical retrieval, thereby contributing to the language delays experienced by so many young children with ASD.


Our objective was to determine whether young children with ASD organize semantic categories differently than typically developing (TD) children. To accomplish this objective, we designed an eye-gaze task using the looking-while-listening paradigm to measure the extent to which children looked at objects that were semantically related to a spoken label when the labeled object itself was not visible (i.e., whether children looked at a sock, rather than an apple, upon hearing Where's the hat?). We predicted that children with ASD would look less at semantically related objects than TD children, indicating weaker organization of semantic categories.


Participants were 24 TD children and 25 children with ASD. All children in the ASD group received a DSM-5 diagnosis of ASD as part of the research visit, based on the ADI-R, ADOS-2, and expert clinical judgment. Groups were matched on Auditory Comprehension growth scale values from the Preschool Language Scale, 5th Edition (p = .13). On average, TD children were 20 months old (SD = 1) and children with ASD were 32 months old (SD = 3).

For the eye-gaze task, children sat on their parent’s lap in front of a large television screen. In Target Present trials, children viewed two images (e.g., apple, hat) and heard one named (e.g., Where’s the hat?). In Target Absent trials, children viewed two images (e.g., apple, sock) and heard a label that was semantically related to one image (e.g., Where’s the hat?). Eye movements were coded offline.


Gaze from 300 to 1700 ms after noun onset was modeled using growth curve analysis, with time as the independent variable and the log odds of looking to the target as the dependent variable. The model for each group included linear, quadratic, and cubic orthogonal time terms and allowed for participant and participant × condition random effects.
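The abstract does not include analysis code, but two pieces of the modeling pipeline can be illustrated concretely. The sketch below (Python; function names, the 50-ms bin width, and the +0.5 smoothing constant are our assumptions, not the authors') builds orthogonal polynomial time terms, analogous to R's poly(), and computes the empirical log odds of looking to the target in a time bin.

```python
import numpy as np

def orthogonal_time_terms(time_bins, degree=3):
    """Orthogonal polynomial time terms (linear, quadratic, cubic),
    analogous to R's poly(), via QR decomposition of a Vandermonde matrix."""
    t = np.asarray(time_bins, dtype=float)
    V = np.vander(t, degree + 1, increasing=True)  # columns: 1, t, t^2, t^3
    Q, _ = np.linalg.qr(V)
    return Q[:, 1:]  # drop the constant column; keep linear/quadratic/cubic

def empirical_log_odds(looks_target, looks_away):
    """Empirical log odds of looking to the target in a time bin.
    The +0.5 smoothing (an assumption for illustration) avoids log(0)
    in bins where all looks go one way."""
    y = np.asarray(looks_target, dtype=float)
    n = np.asarray(looks_away, dtype=float)
    return np.log((y + 0.5) / (n + 0.5))

# Analysis window from the abstract: 300-1700 ms after noun onset.
# A 50-ms bin width is assumed here purely for illustration.
time_bins = np.arange(300, 1701, 50)
ot = orthogonal_time_terms(time_bins)  # 29 bins x 3 orthogonal time terms
```

In the actual analysis, terms like these would enter a mixed-effects growth curve model with participant and participant × condition random effects (e.g., via lme4 in R or a comparable mixed-model routine). Orthogonal (rather than raw) polynomials keep the time terms uncorrelated, so the linear-slope effect reported below can be interpreted independently of the quadratic and cubic terms.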

Not surprisingly, children in both groups looked less at the ‘target’ (i.e., the semantically related object) in Target Absent trials (ps < .001). However, the discrepancy between the Target Present and Target Absent conditions was more pronounced in the ASD group than in the TD group (see Figure), as indicated by a shallower linear slope in the Target Absent condition than in the Target Present condition in the ASD group alone (p = .002).


These findings suggest that children with ASD have weaker semantic organization for early-learned words than TD children, which could contribute to the difficulties they experience in learning language. Importantly, this was the case even though children in both groups recognized the words when the objects were correctly labeled. Learning semantic categories may be an important intervention goal for young children with ASD.