  • Bai, F., Meyer, A. S., & Martin, A. E. (2022). Neural dynamics differentially encode phrases and sentences during spoken language comprehension. PLoS Biology, 20(7): e3001713. doi:10.1371/journal.pbio.3001713.


    Human language stands out in the natural world as a biological signal that uses a structured system to combine the meanings of small linguistic units (e.g., words) into larger constituents (e.g., phrases and sentences). However, the physical dynamics of speech (or sign) do not stand in a one-to-one relationship with the meanings listeners perceive. Instead, listeners infer meaning based on their knowledge of the language. The neural readouts of the perceptual and cognitive processes underlying these inferences are still poorly understood. In the present study, we used scalp electroencephalography (EEG) to compare the neural response to phrases (e.g., the red vase) and sentences (e.g., the vase is red), which were close in semantic meaning and had been synthesized to be physically indistinguishable. Differences in structure were well captured in the reorganization of neural phase responses in the delta (approximately <2 Hz) and theta bands (approximately 2 to 7 Hz), and in power and power connectivity changes in the alpha band (approximately 7.5 to 13.5 Hz). Consistent with predictions from a computational model, sentences showed more power, more power connectivity, and more phase synchronization than phrases did. Theta–gamma phase–amplitude coupling occurred, but did not differ between the syntactic structures. Spectral–temporal response function (STRF) modeling revealed different encoding states for phrases and sentences, over and above the acoustically driven neural response. Our findings provide a comprehensive description of how the brain encodes and separates linguistic structures in the dynamics of neural responses. They imply that phase synchronization and strength of connectivity are readouts for the constituent structure of language.
    The results provide a novel basis for future neurophysiological research on linguistic structure representation in the brain, and, together with our simulations, support time-based binding as a mechanism of structure encoding in neural dynamics.
  • Bai, F. (2022). Neural representation of speech segmentation and syntactic structure discrimination. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Tian, X., Ding, N., Teng, X., Bai, F., & Poeppel, D. (2018). Imagined speech influences perceived loudness of sound. Nature Human Behaviour, 2, 225-234. doi:10.1038/s41562-018-0305-8.


    The way top-down and bottom-up processes interact to shape our perception and behaviour is a fundamental question and remains highly controversial. How early in a processing stream do such interactions occur, and what factors govern such interactions? The degree of abstractness of a perceptual attribute (for example, orientation versus shape in vision, or loudness versus sound identity in hearing) may determine the locus of neural processing and interaction between bottom-up and internal information. Using an imagery-perception repetition paradigm, we find that imagined speech affects subsequent auditory perception, even for a low-level attribute such as loudness. This effect is observed in early auditory responses in magnetoencephalography and electroencephalography that correlate with behavioural loudness ratings. The results suggest that the internal reconstruction of neural representations without external stimulation is flexibly regulated by task demands, and that such top-down processes can interact with bottom-up information at an early perceptual stage to modulate perception.
