Adults very commonly adopt a special way of speaking when they talk to infants or young children. Characterised by higher pitch, a slower pace and exaggerated intonation, this speech register is known as ‘motherese’, or infant-directed speech (IDS). Beyond simply capturing the child’s attention or conveying affection, previous research has suggested that phonetic modifications in IDS, such as expanded vowels that are acoustically further apart and therefore more discriminable, facilitate speech and language development. Children with autism spectrum disorder (ASD), on the other hand, tend to show delayed speech development and atypical phonological processes. Given that formant exaggeration plays an important role in language acquisition, do the brains of children with and without autism code such speech information differently? What is the relationship between auditory processing of IDS and language learning? Could atypical responses to motherese act as an early biomarker for autism?
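To make the idea of vowel expansion concrete, the minimal sketch below shows how ‘acoustically further apart’ can be quantified as the area of the /i/–/a/–/u/ triangle in F1–F2 space. The formant values are illustrative, textbook-style numbers, not measurements from this study.

```python
import numpy as np

# Illustrative F1/F2 values (Hz) for the corner vowels /i/, /a/, /u/;
# rough textbook-style figures, NOT data from the study described here.
adult_directed = {"i": (300, 2300), "a": (750, 1200), "u": (350, 800)}
infant_directed = {"i": (280, 2600), "a": (850, 1300), "u": (320, 700)}  # exaggerated

def vowel_space_area(formants):
    """Area of the /i/-/a/-/u/ triangle in the F1-F2 plane (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = (formants[v] for v in ("i", "a", "u"))
    return 0.5 * abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

for label, formants in [("adult-directed", adult_directed),
                        ("infant-directed", infant_directed)]:
    print(f"{label:16s} vowel space area: {vowel_space_area(formants):,.0f} Hz^2")
# A larger area means the vowels sit further apart acoustically and are
# therefore easier to discriminate.
```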

Dr Chen and his collaborators used neuroimaging techniques to study the brain activity of 24 children with ASD and 24 typically-developing peers, in particular their neural coding of formant-exaggerated speech and non-speech sounds. Speech sounds, such as vowels, are produced by the human voice, whereas non-speech sounds share similar acoustic features but do not sound like the human voice (music, for example, is a ubiquitous non-speech form). Vowels and corresponding non-speech analogues with similar formant frequencies were used in the study.
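The paper describes the actual stimulus construction; purely as a toy illustration of the idea of a vowel and a non-speech analogue sharing the same formant frequencies, here is a minimal synthesis sketch. All frequencies, bandwidths and filter settings are assumptions, not the study’s parameters.

```python
import numpy as np
from scipy.signal import lfilter

fs = 16000                        # sampling rate (Hz), assumed
dur = 0.4                         # stimulus duration (s), assumed
t = np.arange(int(fs * dur)) / fs
f0 = 220.0                        # assumed fundamental (voice pitch)
formants = [270.0, 2900.0]        # assumed F1/F2 of an /i/-like vowel (Hz)
bandwidths = [60.0, 100.0]        # assumed formant bandwidths (Hz)

# Vowel-like stimulus: an impulse train (glottal-style source) shaped by
# second-order formant resonators.
source = np.zeros_like(t)
source[::int(fs / f0)] = 1.0
speech = source.copy()
for f, bw in zip(formants, bandwidths):
    r = np.exp(-np.pi * bw / fs)              # pole radius from bandwidth
    theta = 2 * np.pi * f / fs                # pole angle from centre frequency
    b, a = [1.0 - r], [1.0, -2.0 * r * np.cos(theta), r * r]
    speech = lfilter(b, a, speech)            # cascade of resonators

# Non-speech analogue: pure tones at the same formant frequencies, so the
# spectral peaks match but the result no longer sounds like a voice.
nonspeech = sum(np.sin(2 * np.pi * f * t) for f in formants)

# Normalise both to the same peak level for a fair comparison.
speech /= np.max(np.abs(speech))
nonspeech /= np.max(np.abs(nonspeech))
```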

 Key takeaways from the study
  • Children with ASD showed a lack of neural enhancement in response to motherese, even those without intellectual disability.
  • The auditory neural responses to motherese might also serve as a reliable predictor of verbal capacity in young children with ASD.
  • The current findings confirmed that the superior non-speech processing of frequency/spectral information seen in ASD extends beyond pitch.

This study made new discoveries about basic auditory processing and IDS perception in children with ASD. ‘In the speech domain, children with ASD showed no differences in their brain activity towards the exaggerated vs. non-exaggerated vowel formants,’ said Dr Chen. ‘In stark contrast, in the non-speech domain, children with ASD showed differences in neural synchronisation for processing acoustic formant changes embedded in non-speech. The results add neurophysiological evidence for speech-specific auditory processing atypicality in children with ASD.’

The researchers suggested that the lack of enhanced brain activity to motherese in children with ASD might exert a negative influence on their later speech and language development. These findings contribute to the understanding of the neurophysiological underpinnings of auditory and speech processing atypicalities in individuals with ASD.

‘Starting from their first month of life, typically-developing infants show increased attention to and preference for hearing IDS over adult-directed speech, which plays an important functional role in their socioemotional and language development,’ said Dr Chen, explaining how atypical auditory processing of IDS affects language learning. ‘Young children with autism, however, show substantial difficulties in the realm of social communication. The typical preference for the socially relevant linguistic stimuli of IDS tends to be altered in this clinical population. Children with ASD as a whole group do not demonstrate a similar preference.’

The figure presents the neural activation patterns of the ASD and typically-developing groups towards exaggerated vs. non-exaggerated vowel formants in both speech and non-speech conditions, measured by inter-trial phase coherence (ITPC), namely the trial-by-trial phase alignment of EEG oscillations.
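For readers unfamiliar with ITPC, the sketch below shows one generic way the measure is typically computed: band-pass each trial, extract the instantaneous phase, and take the length of the mean phase vector across trials. This is not the authors’ analysis pipeline; the sampling rate, frequency band and simulated data are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def itpc(trials, fs, band=(4.0, 7.0)):
    """Inter-trial phase coherence for one channel.

    trials : array of shape (n_trials, n_samples); band is the frequency
    range (Hz) whose phase alignment we quantify. Returns an array of shape
    (n_samples,) with values in [0, 1]: 1 = perfect phase alignment across
    trials, values near 0 = random phases.
    """
    # Band-pass each trial, then extract instantaneous phase
    b, a = butter(4, np.array(band) / (fs / 2), btype="band")
    filtered = filtfilt(b, a, trials, axis=1)
    phase = np.angle(hilbert(filtered, axis=1))
    # Project phases onto the unit circle and average across trials;
    # the length of the resulting mean vector is the ITPC.
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Toy demo: 40 simulated trials with a partially phase-locked 6 Hz response
fs, n_trials, n_samples = 250, 40, 500
t = np.arange(n_samples) / fs
rng = np.random.default_rng(0)
evoked = np.sin(2 * np.pi * 6 * t)                     # phase-locked component
trials = evoked + rng.normal(size=(n_trials, n_samples))
print(itpc(trials, fs).mean())   # noticeably above the chance level ~1/sqrt(40)
```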

In line with extant findings on pitch processing, a growing body of research has proposed a speech-specific view that speech and language learners with autism fail to engage or develop specialised networks for vocal processing and phonetic learning in speech sounds.

‘An interesting finding is that, unlike infants within the first year, older children both with and without ASD (aged four to eleven in this study) showed a diminishing effect of vowel hyperarticulation on neural enhancement. Importantly, infants’ perceptual weighting might change with age from listening to syllables to learning words. Thus, unlike younger infants, older children tend to show a decrease in neural activity in response to changes in the phonetic information of IDS, which becomes less important as the child grows older,’ Dr Chen remarked.

Noting that the current findings were based on Mandarin-speaking children with and without ASD (unlike in English, the monophthong /i/ in Mandarin can act as an independent syllable and can be combined with different lexical tones, such as the high-level tone used in this study), Dr Chen and his collaborators suggested that an important next step would be to apply the current experimental design to English-speaking children to check the robustness of the present conclusions.

As suggested by this research, atypical neural responses to motherese might act as a potential early biomarker of risk for children with ASD. Dr Chen added, ‘Given the linguistic functions of vowel space expansion and the role of IDS in social interaction, further longitudinal studies are also needed to examine the neural responses to different spectral and temporal aspects of IDS for a better understanding of their potential relationships with language development as well as communication outcomes in younger children with ASD.’

In addition to our PhD graduate Chen Fei (School of Foreign Languages, Hunan University), also working on this research were Zhang Hao (Speech-Language-Hearing Centre, Shanghai Jiao Tong University (SJTU)); Ding Hongwei (Department of English, SJTU); Wang Suiping (Department of Psychology, South China Normal University); Peng Gang (Department of Chinese and Bilingual Studies, PolyU); and Zhang Yang (Department of Speech-Language-Hearing Sciences, University of Minnesota Twin Cities). The National Science Foundation of China supported this work.

More about the research: https://www.researchgate.net/publication/350545157_Neural_coding_of_formant-exaggerated_speech_and_nonspeech_in_children_with_and_without_autism_spectrum_disorders
