Linguistic structure and language familiarity sharpen phoneme encoding in the brain
How does the brain turn a physical signal like speech into meaning? It draws on two key sources: linguistic structure (e.g., phonemes, syntax) and statistical regularities learned from experience. Yet how these jointly shape neural representations of language remains unclear. We used MEG to track phonemic and acoustic encoding during spoken language comprehension in native Dutch, Mandarin Chinese, and Turkish speakers. Phoneme-level encoding was stronger during sentence comprehension than in word lists, and more robust within words than in random syllable sequences. Surprisingly, similar encoding emerged even in an uncomprehended language, but only with prior exposure. In contrast, acoustic-edge encoding was briefly suppressed early in comprehension. This suggests that the brain’s alignment to speech (in phase and power) is robustly tuned both by linguistic structure and by learned statistical patterns. Our findings show how structured knowledge and experience-based learning interact to shape neural responses to language, offering insight into how the brain processes complex, meaningful signals.