Peter Hagoort

Presentations

  • Arana, S., Marquand, A., Hulten, A., Hagoort, P., & Schoffelen, J.-M. (2019). Multiset canonical correlation analysis of MEG reveals stimulus-modality independent language areas. Poster presented at the 25th Annual Meeting of the Organization for Human Brain Mapping (OHBM 2019), Rome, Italy.
  • Callaghan, E., Peeters, D., & Hagoort, P. (2019). Prediction: When, where & how? An investigation into spoken language prediction in naturalistic virtual environments. Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Coopmans, C. W., Martin, A. E., De Hoop, H., & Hagoort, P. (2019). The interpretation of noun phrases and their structure: Views from constituency vs. dependency grammars. Talk presented at the workshop 'Doing experiments with theoretical linguistics'. Amsterdam, The Netherlands. 2019-04-04.
  • Coopmans, C. W., Martin, A. E., De Hoop, H., & Hagoort, P. (2019). The interpretation of noun phrases and their structure: Views from constituency vs. dependency grammars. Poster presented at Crossing the Boundaries: Language in Interaction Symposium, Nijmegen, The Netherlands.
  • Giglio, L., Hagoort, P., Federmeier, K. D., & Rommers, J. (2019). Memory benefits of expectation violations. Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Hagoort, P. (2019). Far beyond the back of the brain. Talk presented at the Cambridge Chaucer Club of MRC Cognition and Brain Sciences Unit, University of Cambridge. Cambridge, UK. 2019-03-14.
  • Hagoort, P. (2019). Language beyond the input given: A neurobiological account. Talk presented at the Psychology Distinguished Speaker Series at the University of California, Davis. Davis, CA, USA. 2019-05-02.
  • Hagoort, P. (2019). Swiebertje en de vrije wil [Swiebertje and free will]. Talk presented at the Stadhuis (town hall) in Oudewater. Oudewater, The Netherlands. 2019-02-06.
  • Hagoort, P. (2019). Waarom spiegelneuronen niet deugen [Why mirror neurons are no good]. Talk presented at Berichten uit de bovenkamer [Reports from the upper chamber], a KNAW symposium on brain and behaviour. Amsterdam, The Netherlands. 2019-05-13.
  • Hagoort, P. (2019). Which aspects of the brain make humans unique? Talk presented at the MPI Lunch Talk. Nijmegen, The Netherlands. 2019-02-08.
  • Hagoort, P. (2019). Far beyond the back of the brain. Talk presented at the 3rd Salzburg Mind-Brain Annual Meeting (SAMBA 2019). Salzburg, Austria. 2019-07-11 - 2019-07-12.
  • Heidlmayr, K., Weber, K., Takashima, A., & Hagoort, P. (2019). Shared situation models between production and comprehension: fMRI evidence on the neurocognitive processes underlying the construction and sharing of representations in discourse. Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Misersky, J., Slivac, K., Hagoort, P., & Flecken, M. (2019). The State of the Onion: Grammatical aspect modulates object representation in event comprehension. Poster presented at the 32nd Annual CUNY Conference on Human Sentence Processing, Boulder, CO, USA.
  • Misersky, J., Wu, T., Slivac, K., Hagoort, P., & Flecken, M. (2019). The State of the Onion: Language specific structures modulate object representation in event comprehension. Talk presented at the Workshop Crosslinguistic Perspectives on Processing and Learning (X-PPL). Zurich, Switzerland. 2019-11-04 - 2019-11-05.
  • Mongelli, V., Meijs, E. L., Van Gaal, S., & Hagoort, P. (2019). No language unification without neural feedback: How awareness affects combinatorial processes. Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Rommers, J., Hagoort, P., & Federmeier, K. D. (2019). Lingering word expectations in recognition memory. Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Schoffelen, J.-M., Oostenveld, R., Lam, N. H. L., Udden, J., Hulten, A., & Hagoort, P. (2019). MOUS, a 204-subject multimodal neuroimaging dataset to study language processing. Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Slivac, K., Hervais-Adelman, A., Hagoort, P., & Flecken, M. (2019). Can language cue the visual detection of biological motion? Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Slivac, K., Flecken, M., Hervais-Adelman, A., & Hagoort, P. (2019). Can language cue the visual detection of biological motion? Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.
  • Tan, Y., Lewis, A. G., & Hagoort, P. (2019). Catecholaminergic modulation of evoked power related to semantic processing. Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Tan, Y., & Hagoort, P. (2019). Catecholaminergic modulation of semantic processing in sentence comprehension. Talk presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019). Tenerife, Spain. 2019-09-25 - 2019-09-28.
  • Terporten, R., Kösem, A., Schoffelen, J.-M., Callaghan, E., Heidlmayr, K., Dai, B., & Hagoort, P. (2019). Alpha oscillations mark the interaction between language processing and cognitive control operations during sentence reading. Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Araújo, S., Konopka, A. E., Meyer, A. S., Hagoort, P., & Weber, K. (2018). Effects of verb position on sentence planning. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Talk presented at Psycholinguistics in Flanders (PiF 2018). Ghent, Belgium. 2018-06-04 - 2018-06-05.

    Abstract

    When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. Auditory feedback processing has been studied using perturbed auditory feedback. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. For example, when speakers hear themselves at a higher pitch than intended, they compensate by lowering their pitch. However, sometimes speakers follow the perturbation instead (i.e., raising their pitch in response to higher-than-expected pitch). Although most past studies observe some following responses, current theoretical frameworks cannot account for them. In addition, recent experimental work has suggested that following responses may be more common than has been assumed to date. In the current study, we performed two experiments (N = 39 and N = 24) to investigate whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. Participants vocalized while they tried to match a target pitch level. Meanwhile, the pitch in their auditory feedback was briefly (500 ms) perturbed in half of the vocalizations, increasing or decreasing pitch by 25 cents. None of the participants were aware of these manipulations. Subsequently, we analyzed the pitch contour of the participants’ vocalizations. The results suggest that whether a perturbation-related response is opposing or following unexpected feedback depends on ongoing fluctuations of the production system: It initially responds by doing the opposite of what it was doing. In addition, the results show that all speakers show both following and opposing responses, although the distribution of response types varies across individuals.
Both the interaction with ongoing fluctuations of the speech system and the non-trivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production system’s state at the time of perturbation. More generally, the current study indicates that looking beyond the average response can lead to a more complete view of the nature of feedback processing in motor control. Future work should explore whether the direction of feedback-based control in domains outside of speech production will also be conditional on the state of the motor system at the time of the perturbation.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.

    Abstract

    When talking, speakers continuously monitor the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. For example, when speakers hear themselves at a higher pitch than intended, they compensate by lowering their pitch. However, sometimes speakers follow the perturbation instead (i.e., raising their pitch in response to higher-than-expected pitch). Current theoretical frameworks cannot account for following responses. In the current study, we performed two experiments to investigate whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. Participants vocalized while the pitch in their auditory feedback was briefly (500 ms) perturbed in half of the vocalizations. None of the participants were aware of these manipulations. Subsequently, we analyzed the pitch contour of the participants’ vocalizations. The results suggest that whether a perturbation-related response is opposing or following unexpected feedback depends on ongoing fluctuations of the production system: It initially responds by doing the opposite of what it was doing. In addition, the results show that all speakers show both following and opposing responses, although the distribution of response types varies across individuals. Both the interaction with ongoing fluctuations and the non-trivial number of following responses suggest that current speech production models are inadequate. More generally, the current study indicates that looking beyond the average response can lead to a more complete view of the nature of feedback processing in motor control.
Future work should explore whether the direction of feedback-based control in domains outside of speech production will also be conditional on the state of the motor system at the time of the perturbation.
  • Hagoort, P. (2018). The mapping from language in the brain to the language of the brain. Talk presented at the Athenian Symposia - Cerebral Instantiation of Memory. Pasteur Hellenic Institute, Athens, Greece. 2018-03-30 - 2018-03-31.
  • Hagoort, P. (2018). Beyond semantics proper [Plenary lecture]. Talk presented at the Conference Cognitive Structures: Linguistic, Philosophical and Psychological Perspectives. Düsseldorf, Germany. 2018-09-12 - 2018-09-14.
  • Hagoort, P. (2018). On reducing language to biology. Talk presented at the Workshop Language in Mind and Brain. Munich, Germany. 2018-12-10 - 2018-12-11.
  • Hagoort, P. (2018). The language-ready brain. Talk presented at the NRW Akademie der Wissenschaften und der Künste. Düsseldorf, Germany. 2018-09-26.
  • Heidlmayr, K., Weber, K., Takashima, A., & Hagoort, P. (2018). The neural basis of shared discourse: fMRI evidence on the relation between speakers’ and listeners’ brain activity when processing language in different states of ambiguity. Poster presented at the Tenth Annual Meeting of the Society for the Neurobiology of Language (SNL 2018), Québec City, Canada.
  • Mongelli, V., Meijs, E. L., Van Gaal, S., & Hagoort, P. (2018). No sentence processing without feedback mechanisms: How awareness modulates semantic combinatorial operations. Poster presented at the 22nd meeting of the Association for the Scientific Study of Consciousness (ASSC 22), Krakow, Poland.
  • Ostarek, M., Van Paridon, J., Hagoort, P., & Huettig, F. (2018). Multi-voxel pattern analysis reveals conceptual flexibility and invariance in language. Poster presented at the Tenth Annual Meeting of the Society for the Neurobiology of Language (SNL 2018), Québec City, Canada.
  • Takashima, A., Meyer, A. S., Hagoort, P., & Weber, K. (2018). Lexical and syntactic memory representations for sentence production: Effects of lexicality and verb arguments. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
  • Takashima, A., Meyer, A. S., Hagoort, P., & Weber, K. (2018). Producing sentences in the MRI scanner: Effects of lexicality and verb arguments. Poster presented at the Tenth Annual Meeting of the Society for the Neurobiology of Language (SNL 2018), Québec City, Canada.
  • Terporten, R., Schoffelen, J.-M., Dai, B., Hagoort, P., & Kösem, A. (2018). The relation between alpha/beta oscillations and the encoding of sentence induced contextual information. Poster presented at the Tenth Annual Meeting of the Society for the Neurobiology of Language (SNL 2018), Québec City, Canada.
  • Arana, S., Schoffelen, J.-M., Mitchell, T., & Hagoort, P. (2017). Neurolinguistic decoding during sentence processing: Exploring the syntax-semantic interface. Poster presented at the Donders Discussions 2017, Nijmegen, The Netherlands.
  • Dai, B., Kösem, A., McQueen, J. M., Jensen, O., & Hagoort, P. (2017). Linguistic information of distracting speech modulates neural entrainment to target speech. Poster presented at the 47th Annual Meeting of the Society for Neuroscience (SfN), Washington, DC, USA.
  • Dai, B., Kösem, A., McQueen, J. M., Jensen, O., & Hagoort, P. (2017). Linguistic information of distracting speech modulates neural entrainment to target speech. Poster presented at the 13th International Conference for Cognitive Neuroscience (ICON), Amsterdam, The Netherlands.
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2017). Activity-silent short-term memory for language processing. Poster presented at the 1st Annual Conference on Cognitive Computational Neuroscience (CCN 2017), New York, NY, USA.
  • Franken, M. K., Eisner, F., Schoffelen, J.-M., Acheson, D. J., Hagoort, P., & McQueen, J. M. (2017). Audiovisual recalibration of vowel categories. Talk presented at Psycholinguistics in Flanders (PiF 2017). Leuven, Belgium. 2017-05-29 - 2017-05-30.

    Abstract

    One of the most daunting tasks of a listener is to map a continuous auditory stream onto known speech sound categories and lexical items. A major issue with this mapping problem is the variability in the acoustic realizations of sound categories, both within and across speakers. Past research has suggested listeners may use various sources of information, such as lexical knowledge or visual cues (e.g., lip-reading), to recalibrate these speech categories to the current speaker. Previous studies have focused on audiovisual recalibration of consonant categories. The present study explores whether vowel categorization, which is known to show less sharply defined category boundaries, also benefits from visual cues. Participants were exposed to videos of a speaker pronouncing one out of two vowels (Dutch vowels /e/ and /ø/), paired with audio that was ambiguous between the two vowels. The most ambiguous vowel token was determined on an individual basis by a categorization task at the beginning of the experiment. In one group of participants, this auditory token was paired with a video of an /e/ articulation, in the other group with an /ø/ video. After exposure to these videos, it was found in an audio-only categorization task that participants had adapted their categorization behavior as a function of the video exposure. The group that was exposed to /e/ videos showed a reduction of /ø/ classifications, suggesting they had recalibrated their vowel categories based on the available visual information. These results show that listeners indeed use visual information to recalibrate vowel categories, which is in line with previous work on audiovisual recalibration in consonant categories, and lexically-guided recalibration in both vowels and consonants. In addition, a secondary aim of the current study was to explore individual variability in audiovisual recalibration.
Phoneme categories vary not only in terms of boundary location, but also in terms of boundary sharpness, or how strictly categories are distinguished. The present study explores whether this sharpness is associated with the amount of audiovisual recalibration. The results tentatively suggest that a fuzzy boundary is associated with stronger recalibration, suggesting that listeners’ category sharpness may be related to the weight they assign to visual information in audiovisual speech perception. If listeners with fuzzy boundaries assign more weight to visual cues, given that vowel categories have less sharp boundaries than consonants, there ought to be audiovisual recalibration for vowels as well. This is exactly what was found in the current study.
  • Hagoort, P. (2017). Beyond Broca, brain, and binding. Talk presented at the Maastricht Brain Imaging Center Lecture series. Maastricht, The Netherlands. 2017-03-13.
  • Hagoort, P. (2017). Het belang van een tweetalige ontwikkeling voor vroegdoven [The importance of bilingual development for the prelingually deaf]. Talk presented at the mini-symposium 'Wetenschappers over onze doelgroepen' [Scientists on our target groups], organised as a farewell for Kees Knol, director of GGMD (Geestelijke Gezondheidszorg en Maatschappelijke Dienstverlening). Gouda, The Netherlands. 2017-05-09.
  • Hagoort, P. (2017). Language and reading: The consequences of the Kantian brain for the classroom. Talk presented at the symposium "From neuroscience to the classroom" at the Swedish Collegium for Advanced Study. Uppsala, Sweden. 2017-04-05 - 2017-04-06.

    Abstract

    The classroom is designed to teach children cultural inventions for which the brain is not evolutionarily designed. Hence the classroom environment has to implement cultural recycling of neuronal maps. To do this effectively it has to recruit existing neural infrastructure. Therefore, teaching programmes have to be tailored to the possibilities and limitations of the available neural architecture. A case in point is reading, a cultural invention a few thousand years old. Orthographies and reading methods need to make optimal use of visual cortex areas. I will discuss how the characteristics of different orthographies are tailored to the possibilities of complex cells in visual cortex. In addition, different reading methods will be evaluated in the light of our understanding of human brain organization. I will argue that a systematic investigation of culture-brain relations is much needed for optimizing the learning environment.
  • Hagoort, P. (2017). Science not silence. Talk presented at the March for Science event on Museumplein. Amsterdam, The Netherlands. 2017-04-22.
  • Hagoort, P. (2017). Singing in the brain: Over hersenen, poëzie en muziek [On brains, poetry and music]. Talk presented at the Studiedag Poëzie en Muziek [Study Day on Poetry and Music], Faculty of Arts, University of Gent. Gent, Belgium. 2017-03-23.
  • Mongelli, V., Meijs, E., Van Gaal, S., & Hagoort, P. (2017). I know what you mean (but I may not see it) - Semantic processing in absence of awareness. Talk presented at the NVP Winter Conference 2017. Egmond aan Zee, The Netherlands. 2017-12-14 - 2017-12-16.
  • Mongelli, V., Meijs, E. L., Van Gaal, S., & Hagoort, P. (2017). I know what you mean (but I may not see it): Semantic processing in absence of awareness. Poster presented at the 21st meeting of the Association for the Scientific Study of Consciousness (ASSC 21), Beijing, China.
  • Sharoh, D., Van Mourik, T., Bains, L., Segaert, K., Weber, K., Hagoort, P., & Norris, D. (2017). Approaching directed connectivity in the language network with Laminar fMRI. Poster presented at the 13th International Conference for Cognitive Neuroscience (ICON), Amsterdam, The Netherlands.
  • Sharoh, D., Van Mourik, T., Bains, L., Segaert, K., Weber, K., Hagoort, P., & Norris, D. (2017). Depth-dependent BOLD as a measure of directed connectivity during language processing. Poster presented at the 23rd Annual Meeting of the Organization for Human Brain Mapping (OHBM 2017), Vancouver, Canada.
  • Terporten, R., Kösem, A., Schoffelen, J.-M., & Hagoort, P. (2017). Alpha oscillations as neural marker for context induced constraints during sentence processing. Poster presented at the NVP Winter Conference 2017, Egmond aan Zee, The Netherlands.
  • Terporten, R., Schoffelen, J.-M., Dai, B., Hagoort, P., & Kösem, A. (2017). Alpha oscillations as neural marker for context induced constraints during sentence processing. Talk presented at the Donders Discussions 2017. Nijmegen, The Netherlands. 2017-10-26 - 2017-10-27.
  • Terporten, R., Schoffelen, J.-M., Dai, B., Hagoort, P., & Kösem, A. (2017). The relation between alpha/beta oscillations and the encoding of sentence induced contextual information. Poster presented at the 13th International Conference for Cognitive Neuroscience (ICON), Amsterdam, The Netherlands.
  • Tromp, J., Peeters, D., Meyer, A. S., & Hagoort, P. (2017). Combining Virtual Reality and EEG to study semantic and pragmatic processing in a naturalistic environment. Talk presented at the workshop 'Revising formal Semantic and Pragmatic theories from a Neurocognitive Perspective' (NeuroPragSem, 2017). Bochum, Germany. 2017-06-19 - 2017-06-20.
  • Uhlmann, M., Van den Broek, D., Fitz, H., Hagoort, P., & Petersson, K. M. (2017). Ambiguity resolution in a spiking network model of sentence comprehension. Poster presented at the 1st Annual Conference on Cognitive Computational Neuroscience (CCN 2017), New York, NY, USA.
  • Van den Broek, D., Uhlmann, M., Duarte, R., Fitz, H., Hagoort, P., & Petersson, K. M. (2017). The best spike filter kernel is a neuron. Poster presented at the 1st Annual Conference on Cognitive Computational Neuroscience (CCN 2017), New York, NY, USA.
  • Weber, K., Meyer, A. S., & Hagoort, P. (2017). Learning lexical-syntactic biases: An fMRI study on how we connect words and structures. Poster presented at the 13th International Conference for Cognitive Neuroscience (ICON), Amsterdam, The Netherlands.
  • Arana, S., Rommers, L., Hagoort, P., Snijders, T. M., & Kösem, A. (2016). The role of entrained oscillations during foreign language listening. Poster presented at the 2nd Workshop on Psycholinguistic Approaches to Speech Recognition in Adverse Conditions (PASRAC), Nijmegen, The Netherlands.
  • Belavina Kuerten, A., Mota, M., Segaert, K., & Hagoort, P. (2016). Syntactic priming effects in dyslexic children: A study in Brazilian Portuguese. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Dyslexia is a learning disorder caused primarily by a phonological processing deficit. So far, few studies have examined whether dyslexia deficits extend to syntactic processing. We investigated how dyslexic children process syntactic structures. In a self-paced reading syntactic priming paradigm, the passive voice was repeated in mini-blocks of five sentences. These were mixed with an equal number of filler mini-blocks (actives, intransitives); the verb was repeated within all mini-blocks. The data of 20 dyslexic children (mean age = 12.8 years), native speakers of Brazilian Portuguese, were compared to those of 25 non-dyslexic children (mean age = 10.4 years). A repeated-measures ANOVA on reading times for the verb revealed a significant effect of sentence repetition (p<.001) and a significant group × sentence repetition interaction (p<.001). Dyslexics demonstrated priming effects between all consecutive passive voice repetitions (all p<.05), whereas reading times for controls differed only between the first and second passive (p<.001). For active sentences, dyslexics showed priming effects only between the first and second sentences (p<.05) while controls did not show any significant effect, suggesting that the effects for passives are not solely due to the verb being repeated, but at least in part due to the repeated syntactic structure. These findings thus reveal syntactic processing differences between dyslexic and non-dyslexic children.
  • Dai, B., Kösem, A., McQueen, J. M., & Hagoort, P. (2016). Pure linguistic interference during comprehension of competing speech signals. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    In certain situations, human listeners have more difficulty in understanding speech in a multi-talker environment than in the presence of non-intelligible noise. The costs of speech-in-speech masking have been attributed to informational masking, i.e. to the competing processing of the target and the distractor speech’s information. It remains unclear what kind of information is competing, as intelligible speech and unintelligible speech-like signals (e.g. reversed, noise-vocoded, and foreign speech) differ both in linguistic content and in acoustic information. Thus, intelligible speech could be a stronger distractor than unintelligible speech because it presents closer acoustic information to the target speech, or because it carries competing linguistic information. In this study, we intended to isolate the linguistic component of speech-in-speech masking and we tested its influence on the comprehension of target speech. To do so, 24 participants performed a dichotic listening task in which the interfering stimuli consisted of 4-band noise-vocoded sentences that could become intelligible through training. The experiment included three steps: first, the participants were instructed to report the clear target speech from a mixture of one clear speech channel and one unintelligible noise-vocoded speech channel; second, they were trained on the interfering noise-vocoded sentences so that they became intelligible; third, they performed the dichotic listening task again. Crucially, before and after training, the distractor speech had the same acoustic features but not the same linguistic information. We thus predicted that the distracting noise-vocoded signal would interfere more with target speech comprehension after training than before training. To control for practice/fatigue effects, we used additional 2-band noise-vocoded sentences, which participants were not trained on, as interfering signals in the dichotic listening tasks. 
We expected that performance on these trials would not change after training, or would change less than that on trials with trained 4-band noise-vocoded sentences. Performance was measured under three SNR conditions: 0, -3, and -6 dB. The behavioral results are consistent with our predictions. The 4-band noise-vocoded signal interfered more with the comprehension of target speech after training (i.e. when it was intelligible) compared to before training (i.e. when it was unintelligible), but only at SNR -3dB. Crucially, the comprehension of the target speech did not change after training when the interfering signals consisted of unintelligible 2-band noise-vocoded speech sounds, ruling out a fatigue effect. In line with previous studies, the present results show that intelligible distractors interfere more with the processing of target speech. These findings further suggest that speech-in-speech interference originates, to a certain extent, from the parallel processing of competing linguistic content. A magnetoencephalography study with the same design is currently being performed, to specifically investigate the neural origins of informational masking.
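The interfering stimuli in the study above were made with noise vocoding. A minimal sketch of how an n-band noise vocoder works, assuming log-spaced band edges, Butterworth filters, and Hilbert-envelope extraction (the authors' actual stimulus parameters are not given in the abstract):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=7000.0, seed=0):
    """Replace the fine structure of `signal` in each frequency band with
    band-limited noise, keeping only the slow amplitude envelopes."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(signal))
    # Log-spaced band edges between f_lo and f_hi (an assumed choice)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo, hi], btype='band', fs=fs)
        band = filtfilt(b, a, signal)      # band-limited speech
        env = np.abs(hilbert(band))        # amplitude envelope of the band
        carrier = filtfilt(b, a, noise)    # band-limited noise carrier
        out += env * carrier               # modulate the noise with the envelope
    return out
```

With four bands the envelopes carry enough spectral detail for sentences to become intelligible after training; a 2-band version, as in the untrained control condition, keeps the same kind of acoustic structure at coarser spectral resolution.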
  • Dai, B., Kösem, A., McQueen, J. M., & Hagoort, P. (2016). Pure linguistic interference during comprehension of competing speech signals. Poster presented at the 8th Speech in Noise Workshop (SpiN), Groningen, The Netherlands.
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2016). Silent memory for language processing. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2016). Bilbao, Spain. 2016-09-01 - 2016-09-03.
  • Fitz, H., Van den Broek, D., Uhlmann, M., Duarte, R., Hagoort, P., & Petersson, K. M. (2016). Silent memory for language processing. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Integrating sentence meaning over time requires memory ranging from milliseconds (words) to seconds (sentences) and minutes (discourse). How do transient events like action potentials in the human language system support memory at these different temporal scales? Here we investigate the nature of processing memory in a neurobiologically motivated model of sentence comprehension. The model was a recurrent, sparsely connected network of spiking neurons. Synaptic weights were created randomly and there was no adaptation or learning. As input the network received word sequences generated from construction grammar templates and their syntactic alternations (e.g., active/passive transitives, transfer datives, caused motion). The language environment had various features such as tense, aspect, noun/verb number agreement, and pronouns which created positional variation in the input. Similar to natural speech, word durations varied between 50 ms and 0.5 s of real, physical time depending on their length. The model's task was to incrementally interpret these word sequences in terms of semantic roles. There were 8 target roles (e.g., Agent, Patient, Recipient) and the language generated roughly 1.2 million distinct utterances, from which a sequence of 10,000 words was randomly selected and filtered through the network. A set of readout neurons was then calibrated by means of logistic regression to decode the internal network dynamics onto the target semantic roles. In order to accomplish the role assignment task, network states had to encode and maintain past information from multiple cues that could occur several words apart. To probe the circuit's memory capacity, we compared models where network connectivity, the shape of synaptic currents, and properties of neuronal adaptation were systematically manipulated.
We found that task-relevant memory could be derived from a mechanism of neuronal spike-rate adaptation, modelled as a conductance that hyperpolarized the membrane following a spike and relaxed to baseline exponentially with a fixed time-constant. By acting directly on the membrane potential it provided processing memory that allowed the system to successfully interpret its sentence input. Near optimal performance was also observed when an exponential decay model of post-synaptic currents was added into the circuit, with time-constants approximating excitatory NMDA and inhibitory GABA-B receptor dynamics. Thus, the information flow was extended over time, creating memory characteristics comparable to spike-rate adaptation. Recurrent connectivity, in contrast, only played a limited role in maintaining information; an acyclic version of the recurrent circuit achieved similar accuracy. This indicates that random recurrent connectivity at the modelled spatial scale did not contribute additional processing memory to the task. Taken together, these results suggest that memory for language might be provided by activity-silent dynamic processes rather than the active replay of past input as in storage-and-retrieval models of working memory. Furthermore, memory in biological networks can take multiple forms on a continuum of time-scales. Therefore, the development of neurobiologically realistic, causal models will be critical for our understanding of the role of memory in language processing.
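The spike-rate adaptation mechanism described above can be illustrated with a single model neuron. This is only a sketch, not the authors' model (they used a conductance acting on the membrane potential inside a large recurrent network); here adaptation is simplified to a current-like variable, and all parameter values are assumptions:

```python
def simulate_adapting_lif(I=1.8, T=1.0, dt=1e-4, tau_m=0.02, tau_a=0.2,
                          v_th=1.0, v_reset=0.0, g_inc=0.3):
    """Leaky integrate-and-fire neuron with spike-rate adaptation: each
    spike increments an adaptation variable g that opposes the input drive
    and relaxes back to zero exponentially with time constant tau_a."""
    v, g, spikes = 0.0, 0.0, []
    for step in range(int(T / dt)):
        v += dt * (-v + I - g) / tau_m   # leaky membrane with effective drive I - g
        g += dt * (-g / tau_a)           # exponential decay of adaptation
        if v >= v_th:                    # threshold crossing = spike
            spikes.append(step * dt)
            v = v_reset
            g += g_inc                   # adaptation builds up with each spike
    return spikes
```

With constant input, successive inter-spike intervals lengthen as g accumulates, so the neuron's recent activity is carried 'silently' in g even between spikes — a single-cell analogue of the activity-silent processing memory the abstract argues for.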
  • Fitz, H., Hagoort, P., & Petersson, K. M. (2016). A spiking recurrent network for semantic processing. Poster presented at the Nijmegen Lectures 2016, Nijmegen, The Netherlands.
  • Franken, M. K., Schoffelen, J.-M., McQueen, J. M., Acheson, D. J., Hagoort, P., & Eisner, F. (2016). Neural correlates of auditory feedback processing during speech production. Poster presented at New Sounds 2016: 8th International Conference on Second-Language Speech, Aarhus, Denmark.

    Abstract

    An important aspect of L2 speech learning is the interaction between speech production and perception. One way to study this interaction is to provide speakers with altered auditory feedback to investigate how unexpected auditory feedback affects subsequent speech production. Although it is generally well established that speakers on average compensate for auditory feedback perturbations, even when unaware of the manipulation, the neural correlates of responses to perturbed auditory feedback are not well understood. In the present study, we provided speakers with auditory feedback that was intermittently pitch-shifted, while we measured the speakers' neural activity using magneto-encephalography (MEG). Participants were instructed to vocalize the Dutch vowel /e/ while they tried to match the pitch of a short tone. During vocalization, participants received auditory feedback through headphones. In half of the trials, the pitch in the feedback signal was shifted by -25 cents, starting at a jittered delay after speech onset and lasting for 500 ms. Trials with perturbed feedback and control trials (with normal feedback) were presented in random order. Post-experiment questionnaires showed that none of the participants was aware of the pitch manipulation. Behaviorally, the results show that participants on average compensated for the auditory feedback by shifting the pitch of their speech in the opposite (upward) direction. This suggests that even though participants were not aware of the pitch shift, they automatically compensated for the unexpected feedback signal. The MEG results show a right-lateralized response to both onset and offset of the pitch perturbation during speaking. We suggest this response relates to detection of the mismatch between the predicted and perceived feedback signals, which could subsequently drive behavioral adjustments.
These results are in line with recent models of speech motor control and provide further insights into the neural correlates of speech production and speech feedback processing.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2016). Neural mechanisms underlying auditory feedback processing during speech production. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Speech production is one of the most complex motor skills, and involves close interaction between perceptual and motor systems. One way to investigate this interaction is to provide speakers with manipulated auditory feedback during speech production. Using this paradigm, investigators have started to identify a neural network that underlies auditory feedback processing and monitoring during speech production. However, to date, still little is known about the neural mechanisms that underlie feedback processing. The present study set out to shed more light on the neural correlates of processing auditory feedback. Participants (N = 39) were seated in an MEG scanner and were asked to vocalize the vowel /e/ continuously throughout each trial (of 4 s) while trying to match a pre-specified pitch target of 4, 8 or 11 semitones above the participants’ baseline pitch level. They received auditory feedback through ear plugs. In half of the trials, the pitch in the auditory feedback was unexpectedly manipulated (raised by 25 cents) for 500 ms, starting between 500 ms and 1500 ms after speech onset. In the other trials, feedback was normal throughout the trial. In a second block of trials, participants listened passively to recordings of the auditory feedback they received during vocalization in the first block. Even though none of the participants reported being aware of any feedback perturbations, behavioral responses showed that participants on average compensated for the feedback perturbation by decreasing the pitch in their vocalizations, starting at about 100 ms after perturbation onset until about 100 ms after perturbation offset. MEG data were analyzed, time-locked to the onset of the feedback perturbation in the perturbation trials, and to matched time-points in the control trials. A cluster-based permutation test showed that the event-related field responses differed between the perturbation and the control condition.
This difference was mainly driven by an ERF response peaking at about 100 ms after perturbation onset and a larger response after perturbation offset. Both responses were localized to sensorimotor cortices, with the effect being larger in the right hemisphere. These results are in line with previous reports of right-lateralized pitch processing. In the passive listening condition, we found no differences between the perturbation and the control trials. This suggests that the ERF responses were not merely driven by the pitch change in the auditory input and instead reflect speech production processes. We suggest the observed ERF responses in sensorimotor cortex are an index of the mismatch between the self-generated forward model prediction of auditory input and the incoming auditory signal.
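The statistical step in this abstract, a cluster-based permutation test, can be sketched generically. This is not the actual MEG pipeline used (which would operate on full sensor-by-time data, e.g. in FieldTrip or MNE); the threshold, cluster-mass statistic, and sign-flip scheme below are illustrative assumptions for paired data with one time dimension.

```python
import numpy as np

def cluster_permutation_test(cond_a, cond_b, n_perm=500, t_thresh=2.0, seed=0):
    """Paired cluster-based permutation test over (subjects x timepoints).
    Contiguous timepoints with |t| above t_thresh form clusters; the largest
    summed |t| (cluster mass) is compared against a sign-flip null."""
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b                      # paired condition differences

    def max_cluster_mass(d):
        t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))
        mass = best = 0.0
        for v in np.abs(t):
            mass = mass + v if v > t_thresh else 0.0
            best = max(best, mass)
        return best

    observed = max_cluster_mass(diff)
    # null distribution: randomly flip the sign of each subject's difference
    null = [max_cluster_mass(diff * rng.choice([-1.0, 1.0], size=(diff.shape[0], 1)))
            for _ in range(n_perm)]
    p = (np.sum(np.array(null) >= observed) + 1) / (n_perm + 1)
    return observed, p

# simulated data: 20 subjects, 100 timepoints, an effect in samples 40-60
rng = np.random.default_rng(1)
a = rng.normal(size=(20, 100)); a[:, 40:60] += 1.0
b = rng.normal(size=(20, 100))
mass, p = cluster_permutation_test(a, b)
print(p < 0.05)
```

The appeal of the method, and why it suits ERF data, is that clustering over adjacent timepoints controls the family-wise error rate without a per-timepoint multiple-comparisons correction.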
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2016). Neural mechanisms underlying auditory feedback processing during speech production. Talk presented at the Donders Discussions 2016. Nijmegen, The Netherlands. 2016-11-23 - 2016-11-24.

    Abstract

    Speech production is one of the most complex motor skills, and involves close interaction between perceptual and motor systems. One way to investigate this interaction is to provide speakers with manipulated auditory feedback during speech production. Using this paradigm, investigators have started to identify a neural network that underlies auditory feedback processing and monitoring during speech production. However, to date, still little is known about the neural mechanisms that underlie feedback processing. The present study set out to shed more light on the neural correlates of processing auditory feedback. Participants (N = 39) were seated in an MEG scanner and were asked to vocalize the vowel /e/ continuously throughout each trial (of 4 s) while trying to match a pre-specified pitch target of 4, 8 or 11 semitones above the participants’ baseline pitch level. They received auditory feedback through ear plugs. In half of the trials, the pitch in the auditory feedback was unexpectedly manipulated (raised by 25 cents) for 500 ms, starting between 500 ms and 1500 ms after speech onset. In the other trials, feedback was normal throughout the trial. In a second block of trials, participants listened passively to recordings of the auditory feedback they received during vocalization in the first block. Even though none of the participants reported being aware of any feedback perturbations, behavioral responses showed that participants on average compensated for the feedback perturbation by decreasing the pitch in their vocalizations, starting at about 100 ms after perturbation onset until about 100 ms after perturbation offset. MEG data were analyzed, time-locked to the onset of the feedback perturbation in the perturbation trials, and to matched time-points in the control trials. A cluster-based permutation test showed that the event-related field responses differed between the perturbation and the control condition.
This difference was mainly driven by an ERF response peaking at about 100 ms after perturbation onset and a larger response after perturbation offset. Both responses were localized to sensorimotor cortices, with the effect being larger in the right hemisphere. These results are in line with previous reports of right-lateralized pitch processing. In the passive listening condition, we found no differences between the perturbation and the control trials. This suggests that the ERF responses were not merely driven by the pitch change in the auditory input and instead reflect speech production processes. We suggest the observed ERF responses in sensorimotor cortex are an index of the mismatch between the self-generated forward model prediction of auditory input and the incoming auditory signal.
  • Hagoort, P. (2016). Beyond the core networks of language. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.

    Abstract

    Speakers and listeners do more than exchange propositional content. They try to get things done with their utterances. For speakers this requires planning of utterances with knowledge about the listener in mind, whereas listeners need to make inferences that go beyond simulating sensorimotor aspects of propositional content. For example, the statement "It is hot in here" will usually not be answered with a statement of the kind "Yes, indeed it is 32 degrees Celsius", but rather with the answer "I will open the window", since the listener infers the speaker's intention behind her statement. I will discuss a series of studies that identify the network of brain regions involved in audience design and inferring speaker meaning. Likewise for indirect replies that require conversational implicatures, as in A: "Did you like my talk?" to which B replies: "It is hard to give a good presentation." I will show that in these cases the core language network needs to be extended with brain systems providing the necessary inferential machinery.
  • Hagoort, P. (2016). Cognitive enhancement: A few observations and remarks. Talk presented at the LUX. Nijmegen, The Netherlands. 2016-02.
  • Hagoort, P. (2016). Healthy Brain. Talk presented at the Meeting Ministry of Economic Affairs. Papendal, The Netherlands. 2016-09.
  • Hagoort, P. (2016). Healthy brain initiative. Talk presented at the Radboud University. Nijmegen, the Netherlands. 2016-06.
  • Hagoort, P. (2016). Het talige brein. Talk presented at the Studiedag Regionaal Instituut Dyslexie (RID). Arnhem, the Netherlands. 2016-11-19.
  • Hagoort, P. (2016). Het talige brein. Talk presented at Dyslexie Nederland. Amsterdam, The Netherlands. 2016-11-12.
  • Hagoort, P. (2016). Neuroanatomy of language [Session Chair]. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
  • Hagoort, P. (2016). Language from an embrained perspective: It is hard to give a good presentation. Talk presented at the FENS-Hertie Winter School on Neurobiology of language and communication. Obergurgl, Austria. 2016-01-03 - 2016-01-08.
  • Hagoort, P. (2016). De magie van het talige brein. Talk presented at the Akademie van Kunsten. Amsterdam, The Netherlands. 2016-01.
  • Hagoort, P. (2016). Dutch science on the move. Talk presented at the Donders Institute for Brain, Cognition and Behaviour. Nijmegen, The Netherlands. 2016-06.
  • Hagoort, P. (2016). The toolkit of cognitive neuroscience. Talk presented at the FENS-Hertie Winter School on Neurobiology of language and communication. Obergurgl, Austria. 2016-01-03 - 2016-01-08.
  • Hagoort, P. (2016). Towards team science. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
  • Hagoort, P. (2016). Wetenschap is emotie. Talk presented at the opening InScience Filmfestival. Nijmegen, The Netherlands. 2016-11-02.
  • Hagoort, P. (2016). The neurobiology of morphological processing. Talk presented at the MPI Workshop Morphology in the Parallel Architecture. Nijmegen, The Netherlands. 2016-03-18.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2016). How social opinion influences syntactic processing - an investigation using Virtual Reality. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Adapting your grammatical preferences to match those of your interlocutor, a phenomenon known as structural priming, can be influenced by the social opinion you have of your interlocutor. However, the direction and reliability of this effect are unclear, as different studies have reported seemingly contrary results. When investigating something as abstract as social opinion, there are numerous differences between the studies that could be causing the differing results. We have operationalized social opinion as ratings of favorability for a wide range of different avatars in a virtual reality study. This way we can accurately determine how the strength of the structural priming effect changes with differing social opinions. Our results show an inverted U-shaped curve in passive structure repetition as a function of favorability: the participants showed the largest priming effects for the avatar with average favorability ratings, with a decrease when interacting with the least- or most-favorable avatars. This result suggests that the relationship between social opinion and priming magnitude may not be a linear one, contrary to what the literature has been assuming. Instead, there is a 'happy medium' which evokes the highest priming effect, and on either side of this ideal is a decrease in priming.
  • Heyselaar, E., Segaert, K., Walvoort, S., Kessels, R., & Hagoort, P. (2016). The role of procedural memory in the skill for language: Evidence from syntactic priming in patients with amnesia. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Syntactic priming, the phenomenon in which participants adopt the linguistic behaviour of their partner, is widely used in psycholinguistics to investigate syntactic operations. Although the phenomenon of syntactic priming is well documented, the memory system that supports the retention of this syntactic information long enough to influence future utterances is not as widely investigated. We aim to shed light on this issue by assessing 17 patients with Korsakoff's amnesia on an active-passive syntactic priming task and comparing their performance to controls matched in age, education and premorbid intelligence. Patients with Korsakoff's amnesia display deficits in all subdomains of declarative memory, yet their implicit learning remains intact, making them an ideal patient group to use in this study. In line with the hypothesis that syntactic priming relies on procedural memory, the patient group showed strong priming tendencies (12.6% passive structure repetition). Our control group did not show a priming tendency, presumably due to cognitive interference between declarative and non-declarative memory systems. To verify the absence of the effect in the controls, we ran an independent group of 54 participants on the same paradigm; this group also showed no priming effect. The results are further discussed in relation to amnesia, aging and compensatory mechanisms.
  • Heyselaar, E., Segaert, K., Walvoort, S. J., Kessels, R. P., & Hagoort, P. (2016). The role of procedural memory in the skill for language: Evidence from syntactic priming in patients with amnesia. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Syntactic priming, the phenomenon in which participants adopt the linguistic behaviour of their partner, is widely used in psycholinguistics to investigate syntactic operations. Although the phenomenon of syntactic priming is well documented, the memory system that supports the retention of this syntactic information long enough to influence future utterances is not as widely investigated. We aim to shed light on this issue by assessing 17 patients with Korsakoff’s amnesia on an active-passive syntactic priming task and comparing their performance to controls matched in age, education and premorbid intelligence. Patients with Korsakoff's amnesia display deficits in all subdomains of declarative memory, yet their implicit learning remains intact, making them an ideal patient group to use in this study. We used the traffic-light design for the syntactic priming task: the actors in the prime trial photos were colour-coded and the participants were instructed to name the 'green' actor before the 'red' actor in the picture. This way we can control which syntactic structure the participant uses to describe the photo. For target trials, the photos were grey-scale so there was no bias towards one structure over another. This set-up allows us to ensure the primes are properly encoded. In addition to the priming task, we also measured declarative memory, implicit learning ability, and verbal IQ from all participants. Memory tests supported the claim that our 17 patients did have a severely impaired declarative memory system, yet a functional implicit/procedural one. The control group showed no deficit in any of the control measurements. In line with the hypothesis that syntactic priming relies on procedural memory, the patient group showed strong priming tendencies (12.6% passive structure repetition). Unexpectedly, our healthy control group did not show a priming tendency.
In order to verify the absence of a syntactic priming effect in the healthy controls, we ran an independent group of 54 participants with the exact same paradigm. The results replicated the earlier findings: there was no priming effect compared to baseline. This lack of priming ability in the healthy older population could be due to cognitive interference between declarative and non-declarative memory systems, which increases as we get older (the mean age of the control group was 62 years).
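Priming magnitude in designs like this one is typically quantified as the difference in the proportion of passive-structure targets produced after passive versus active primes. A minimal sketch with invented trial data (not the study's data; the 12.6% figure above would come out of a computation of roughly this shape):

```python
# (prime structure, target response) pairs; hypothetical toy data
trials = [("passive", "passive"), ("passive", "active"), ("active", "active"),
          ("passive", "passive"), ("active", "passive"), ("active", "active"),
          ("passive", "passive"), ("active", "active")]

def priming_effect(trials):
    """Passive-structure repetition effect:
    P(passive target | passive prime) - P(passive target | active prime)."""
    def p_passive(prime):
        targets = [t for p, t in trials if p == prime]
        return sum(t == "passive" for t in targets) / len(targets)
    return p_passive("passive") - p_passive("active")

print(round(priming_effect(trials), 3))  # → 0.5 for this toy data
```

In practice the effect is usually estimated with a mixed-effects logistic regression over trials rather than raw proportions, but the quantity of interest is the same contrast.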
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2016). Visual attention influences language processing. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Research into the interaction between attention and language has mainly focused on how language influences attention. But how does attention influence language? Considering we are constantly bombarded with attention-grabbing stimuli unrelated to the conversation we are conducting, this is certainly an interesting topic of investigation. In this study we aim to uncover how limiting attentional resources influences language behaviour. We focus on syntactic priming: a task which captures how participants adapt their syntactic choices to their partner. Participants simultaneously conducted a motion-object tracking (MOT) task, a task commonly used to tax attentional resources. We thus measured participants' ability to process syntax while their attention is not-, slightly-, or overly-taxed. We observed an inverted U-shaped curve on priming magnitude when conducting the MOT task concurrently with prime sentences, but no effect when conducted with target sentences. Our results illustrate how, during the prime phase of the syntactic priming task, attention differentially affects syntactic processing, whereas during the target phase there is no effect of attention on language behaviour. We explain these results in terms of the implicit learning necessary to prime and how different levels of attention taxation can either impair or enhance the way language is encoded.
  • Kösem, A., Bosker, H. R., Meyer, A. S., Jensen, O., & Hagoort, P. (2016). Neural entrainment reflects temporal predictions guiding speech comprehension. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Speech segmentation requires flexible mechanisms to remain robust to features such as speech rate and pronunciation. Recent hypotheses suggest that low-frequency neural oscillations entrain to ongoing syllabic and phrasal rates, and that neural entrainment provides a speech-rate invariant means to discretize linguistic tokens from the acoustic signal. How this mechanism functionally operates remains unclear. Here, we test the hypothesis that neural entrainment reflects temporal predictive mechanisms. It implies that neural entrainment is built on the dynamics of past speech information: the brain would internalize the rhythm of preceding speech to parse the ongoing acoustic signal at optimal time points. A direct prediction is that ongoing neural oscillatory activity should match the rate of preceding speech even if the stimulation changes, for instance when the speech rate suddenly increases or decreases. Crucially, the persistence of neural entrainment to past speech rate should modulate speech perception. We performed an MEG experiment in which native Dutch speakers listened to sentences with varying speech rates. The beginning of the sentence (carrier window) was either presented at a fast or a slow speech rate, while the last three words (target window) were displayed at an intermediate rate across trials. Participants had to report the perception of the last word of the sentence, which was ambiguous with regards to its vowel duration (short vowel /ɑ/ – long vowel /aː/ contrast). MEG data was analyzed in source space using beamformer methods. Consistent with previous behavioral reports, the perception of the ambiguous target word was influenced by the past speech rate; participants reported more /aː/ percepts after a fast speech rate, and more /ɑ/ after a slow speech rate. During the carrier window, neural oscillations efficiently tracked the dynamics of the speech envelope. 
During the target window, we observed oscillatory activity that corresponded in frequency to the preceding speech rate. Traces of neural entrainment to the past speech rate were significantly observed in medial prefrontal areas. Right superior temporal cortex also showed persisting oscillatory activity which correlated with the observed perceptual biases: participants whose perception was more influenced by the manipulation in speech rate also showed stronger remaining neural oscillatory patterns. The results show that neural entrainment lasts after rhythmic stimulation. The findings further provide empirical support for oscillatory models of speech processing, suggesting that neural oscillations actively encode temporal predictions for speech comprehension.
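The finding that target-window oscillatory activity "corresponded in frequency to the preceding speech rate" amounts, at its simplest, to locating a spectral peak at the old rate rather than at the current stimulation rate. A toy sketch with a synthetic signal; the sampling rate and the 7 Hz (carrier) vs. 5 Hz (target) rates are invented, and real MEG analyses would use beamformed source time courses rather than a single channel:

```python
import numpy as np

fs = 200.0                          # sampling rate in Hz (hypothetical)
t = np.arange(0, 2.0, 1 / fs)
# toy "neural" signal in the target window: an oscillation persisting at the
# preceding (fast) carrier rate of 7 Hz plus noise, while the stimulus itself
# now runs at an intermediate 5 Hz rate
signal = np.sin(2 * np.pi * 7.0 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

# Hann-windowed amplitude spectrum; the peak frequency reveals the entrained rate
spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)   # peak ≈ 7 Hz, matching the preceding rate rather than the 5 Hz input
```

Relating such a persisting spectral peak to the /ɑ/-/aː/ perceptual reports, per participant, is then a correlation between peak strength and the behavioral bias.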
  • Kösem, A., Bosker, H. R., Meyer, A. S., Jensen, O., & Hagoort, P. (2016). Neural entrainment to speech rhythms reflects temporal predictions and influences word comprehension. Poster presented at the 20th International Conference on Biomagnetism (BioMag 2016), Seoul, South Korea.
  • Lockwood, G., Drijvers, L., Hagoort, P., & Dingemanse, M. (2016). In search of the kiki-bouba effect. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    The kiki-bouba effect, where people map round shapes onto round sounds (such as [b] and [o]) and spiky shapes onto “spiky” sounds (such as [i] and [k]), is the most famous example of sound symbolism. Many behavioural variations have been reported since Köhler’s (1929) original experiments. These studies examine orthography (Cuskley, Simner, & Kirby, 2015), literacy (Bremner et al., 2013), and developmental disorders (Drijvers, Zaadnoordijk, & Dingemanse, 2015; Occelli, Esposito, Venuti, Arduino, & Zampini, 2013). Some studies have suggested that the cross-modal associations between linguistic sound and physical form in the kiki-bouba effect are quasi-synaesthetic (Maurer, Pathman, & Mondloch, 2006; Ramachandran & Hubbard, 2001). However, there is a surprising lack of neuroimaging data in the literature that explain how these cross-modal associations occur (with the exceptions of Kovic et al. (2010) and Asano et al. (2015)). We presented 24 participants with randomly generated spiky or round figures and 16 synthesised, reduplicated CVCV (vowels: [i] and [o], consonants: [f], [v], [t], [d], [s], [z], [k], and [g]) nonwords based on Cuskley et al. (2015). This resulted in 16 nonwords across four conditions: full match, vowel match, consonant match, and full mismatch. Participants were asked to rate on a scale of 1 to 7 how well the nonword fit the shape it was presented with. EEG was recorded throughout, with epochs time-locked to the auditory onset of the nonword. There was a significant behavioural effect of condition (p < 0.0001). Bonferroni t-tests show participants rated full match more highly than full mismatch nonwords. However, there was no reflection of this behavioural effect in the ERP waveforms. One possible reason for the absence of an ERP effect is that this effect may jitter over a broad latency range. Currently, oscillatory effects are being analysed, since these are less dependent on precise time-locking to the triggering events.
  • Lockwood, G., Hagoort, P., & Dingemanse, M. (2016). Synthesized size-sound sound symbolism. Talk presented at the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016). Philadelphia, PA, USA. 2016-08-10 - 2016-08-13.

    Abstract

    Studies of sound symbolism have shown that people can associate sound and meaning in consistent ways when presented with maximally contrastive stimulus pairs of nonwords such as bouba/kiki (rounded/sharp) or mil/mal (small/big). Recent work has shown the effect extends to antonymic words from natural languages and has proposed a role for shared cross-modal correspondences in biasing form-to-meaning associations. An important open question is how the associations work, and particularly what the role is of sound-symbolic matches versus mismatches. We report on a learning task designed to distinguish between three existing theories by using a spectrum of sound-symbolically matching, mismatching, and neutral (neither matching nor mismatching) stimuli. Synthesized stimuli allow us to control for prosody, and the inclusion of a neutral condition allows a direct test of competing accounts. We find evidence for a sound-symbolic match boost, but not for a mismatch difficulty compared to the neutral condition.
  • Schoot, L., Heyselaar, E., Hagoort, P., & Segaert, K. (2016). Maybe syntactic alignment is not affected by social goals? Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Although it is suggested that linguistic alignment can be influenced by speakers' relationship with their listener, previous studies provide inconsistent results. We tested whether speakers' desire to be liked affects syntactic alignment, and simultaneously assessed whether alignment affects perceived likeability. Primed participants (PPs) were therefore primed by another naive participant (Evaluator). PP and Evaluator took turns describing photographs with active/passive sentences. Unknown to PP, we controlled Evaluator's syntax by having them read out sentences. PPs' desire to be liked was manipulated by assigning pairs to a Control (secret evaluation by Evaluator), Evaluation (PPs were aware of evaluation), or Directed Evaluation (PPs knew about the evaluation and were instructed to make a positive impression) condition. PPs showed significant syntactic alignment (more passives produced after passive primes). However, there was no interaction with condition: PPs did not align more in the (Directed) Evaluation than in the Control condition. Our results thus do not support the conclusion that speakers' desire to be liked affects syntactic alignment. Furthermore, there was no reliable relationship between syntactic alignment and how likeable PPs appeared to their Evaluator: there was a negative effect in the Control and Evaluation conditions, but no relationship in the Directed Evaluation condition.
  • Schoot, L., Stolk, A., Hagoort, P., Garrod, S., Segaert, K., & Menenti, L. (2016). Finding your way in the zoo: How situation model alignment affects interpersonal neural coupling. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    INTRODUCTION: We investigated how speaker-listener alignment at the level of the situation model is reflected in inter-subject correlations in temporal and spatial patterns of brain activity, also known as between-brain neural coupling (Stephens et al., 2010). We manipulated the complexity of the situation models that needed to be communicated (simple vs complex situation model) to investigate whether this affects neural coupling between speaker and listener. Furthermore, we investigated whether the degree to which alignment was successful was positively related to the degree of between-brain coupling. METHOD: We measured neural coupling (using fMRI) between speakers describing abstract zoo maps, and listeners interpreting those descriptions. Each speaker described both a ‘simple’ map, a 6x6 grid including five animal locations, and a ‘complex’ map, an 8x8 grid including 7 animal locations, from memory, and with the order of map description randomized across speakers. Audio-recordings of the speakers’ utterances were then replayed to the listeners, who had to reconstruct the zoo maps on the basis of their speakers’ descriptions. On the group level, we used a GLM approach to model between-brain neural coupling as a function of condition (simple vs complex map). Communicative success, i.e. map reproduction accuracy, was added as a covariate. RESULTS: Whole brain analyses revealed a positive relationship between communicative success and the strength of speaker-listener neural coupling in the left inferior parietal cortex. That is, the more successful listeners were in reconstructing the map based on what their partner described, the stronger the correlation between that speaker and listener's BOLD signals in that area. Furthermore, within the left inferior parietal cortex, pairs in the complex situation model condition showed stronger between-brain neural coupling than pairs in the simple situation model condition. 
DISCUSSION: This is the first two-brain study to explore the effects of the complexity of the communicated situation model and the degree of communicative success on (language-driven) between-brain neural coupling. Interestingly, our effects were located in the inferior parietal cortex, previously associated with visuospatial imagery. This process likely plays a role in our task, in which the communicated situation models had a strong visuospatial component. Given that there was more coupling the more situation models were successfully aligned (i.e., map reproduction accuracy), it was surprising that we found stronger coupling in the complex than in the simple situation model condition. We plan ROI analyses in primary auditory, core language, and discourse processing regions. The present findings open the way for exploring the interaction between situation models and linguistic computations during communication.
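At its core, the between-brain coupling measure used in such two-brain studies is an inter-subject correlation of regional time series across the speaker-listener pair (Stephens et al., 2010). The sketch below is a simplified stand-in for the actual fMRI GLM approach; the lag parameter, the simulated "shared component", and all variable names are illustrative assumptions.

```python
import numpy as np

def neural_coupling(speaker_ts, listener_ts, lag=0):
    """Between-brain coupling as the Pearson correlation of a speaker's and a
    listener's regional BOLD time series, optionally shifting the listener by
    `lag` samples to respect the speak-then-listen delay."""
    s = speaker_ts[: len(speaker_ts) - lag] if lag else speaker_ts
    l = listener_ts[lag:]
    return np.corrcoef(s, l)[0, 1]

# toy data: speaker and listener share a stimulus-driven signal component
rng = np.random.default_rng(0)
shared = rng.normal(size=200)                     # shared communicative signal
speaker = shared + 0.5 * rng.normal(size=200)
listener = shared + 0.5 * rng.normal(size=200)
unrelated = rng.normal(size=200)                  # a non-interacting brain
print(neural_coupling(speaker, listener) > neural_coupling(speaker, unrelated))
```

In the study itself this correlation enters a group-level GLM with condition (simple vs. complex map) as a factor and map reproduction accuracy as a covariate, rather than being interpreted pair by pair.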
  • Sharoh, D., van Mourik, T., Bains, L. J., Segaert, K., Weber, K., Hagoort, P., & Norris, D. G. (2016). Investigation of depth-dependent BOLD during language processing. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Neocortex is known to be histologically organized with respect to depth, and neuronal connections across cortical layers form part of the brain's functional organization[1]. Efferent (outgoing) and afferent (incoming) inter-regional connections are found to originate and terminate at different depths, and this structure relates to the internal/external origin of neuronal activity. Specifically, efferent, inter-regional connections are associated with internally directed, top-down activity; afferent inter-regional connections are associated with bottom-up activity originating from external stimulation. The contribution of top-down and bottom-up neuronal activity to the BOLD signal can perhaps be inferred from depth-related fluctuations in BOLD. By dissociating top-down from bottom-up effects in fMRI, investigators could observe the relative contribution of internally and externally generated activity to the BOLD signal, and potentially test hypotheses regarding the directionality of BOLD connectivity. Previous investigation of depth-dependent BOLD has focused on human visual cortex[2]. In the present work, we have designed an experiment to serve as a proof of principle that (1) depth-dependent BOLD can be measured in higher cortical areas during a language processing task, and (2) that differences in the relative contribution of the BOLD signal at discrete depths, to the total BOLD signal, vary as a function of experimental condition. Data were collected on the Siemens 7T scanner at the Hahn Institute in Essen, Germany. Submillimeter (0.8 mm³), T1-weighted data were acquired using MP2RAGE, along with near whole-brain, submillimeter (0.9 × 0.9 × 0.943 mm, 112 slices) 3D-EPI task data. The field of view fully covered bilateral temporal and fusiform regions, but excluded superior brain areas on the order of several centimeters.
Participants were presented with an event-related paradigm involving the presentation of words, pseudowords and nonwords in visual and auditory modalities. Only the visual modality is discussed here. Cortical segmentation was performed using FreeSurfer's surface pipeline. We parcellated the gray matter volume into discrete depths, and the analysis of depth-dependent BOLD was performed with the Laminar Analysis Toolbox (van Mourik). Further analysis was performed using FreeSurfer, AFNI and in-house MATLAB code. Regions included in the depth-dependent analysis were determined by first-level analysis. To date, we have collected data from 10 participants; 4 were excluded due to equipment malfunction. In the first-level analysis (volume registration, smoothing, GLM, and significance testing), we observe fusiform activation for the Realword > Nonword and Pseudoword > Nonword contrasts. These contrasts additionally show activation along the middle temporal gyrus. The depth-dependent analysis was performed on fusiform clusters generated during the first-level analysis. These clusters appeared to show depth-dependent signal differences as a function of experimental condition. We suspect these differences may be related to layer-specific activation and reflect the relative contribution of top-down and bottom-up activity in the observed signal. These are preliminary results, part of an ongoing effort to establish novel, depth-dependent analysis techniques in higher cortical areas and within the language domain. Future analysis will investigate the nature of the depth-dependent differences and the connectivity profiles of depth-dependent variation among distal cortical regions. [1] Douglas, R. J., & Martin, K. A. C. (2004). Neuronal circuits of the neocortex. Annual Review of Neuroscience, 27, 419-451. [2] Kok, P., et al. (2016). Selective activation of the deep layers of the human primary visual cortex by top-down feedback. Current Biology, 26, 371-376.
  • Tan, Y., Acheson, D. J., & Hagoort, P. (2016). Moving beyond single words: Dissociating levels of linguistic representation in short-term memory (STM). Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    This study assessed the role of semantic, phonological, and grammatical levels of representation in short-term list recall through a 2 (meaningfulness) × 2 (phonological similarity) × 2 (grammaticality) manipulation. Dutch subjects (Experiments 1-2), English subjects (Experiments 3-4), and seven aphasic patients (Experiment 5) were required to recall lists consisting of adjective-noun word-pairs. Within each list, meaningfulness was manipulated by pairing adjectives and nouns in a meaningful or non-meaningful way; phonological similarity was manipulated through the degree of phonological overlap between words; grammaticality was manipulated through the order of the adjective and noun within each word pair in English (e.g., “salty meat” vs. “meat salty”) and through morphological agreement in Dutch. Overall, subjects showed better recall for words in the meaningful, phonologically-dissimilar, and grammatical conditions. Moreover, by relating these main effects to subjects' phonological and semantic STM capacity, we found that subjects with better phonological STM were less affected by the meaningfulness manipulation, while subjects with better semantic STM were less affected by the phonological manipulations. These results demonstrate that there are multiple routes to grouping information in STM via the combinatorial constraints afforded by language, and that subjects may benefit from additional cues at certain levels when memory load is high.
  • Udden, J., Hulten, A., Schoffelen, J.-M., Lam, N., Kempen, G., Petersson, K. M., & Hagoort, P. (2016). Dynamics of supramodal unification processes during sentence comprehension. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    It is generally assumed that structure building processes in the spoken and written modalities are subserved by modality-independent lexical, morphological, grammatical, and conceptual processes. We present a large-scale neuroimaging study (N=204) on whether the unification of sentence structure is supramodal in this sense, testing whether observations replicate across written and spoken sentence materials. The activity in the unification network should increase when it is presented with a challenging sentence structure, irrespective of the input modality. We build on the well-established findings that multiple non-local dependencies, overlapping in time, are challenging and that language users disprefer left- over right-branching sentence structures in written and spoken language, at least in the context of mainly right-branching languages such as English and Dutch. We thus focused our study with Dutch participants on a left-branching processing complexity measure. Supramodal effects of left-branching complexity were observed in a left-lateralized perisylvian network. The left inferior frontal gyrus (LIFG) and the left posterior middle temporal gyrus (LpMTG) were most clearly associated with left-branching processing complexity. The left anterior middle temporal gyrus (LaMTG) and left inferior parietal lobe (LIPL) were also significant, though less specifically. The LaMTG was also increasingly active for sentences with increasing right-branching processing complexity. A direct comparison between left- and right-branching processing complexity yielded activity in an LIFG ROI for the left > right-branching complexity contrast, while the right > left contrast showed no activation. Using a linear contrast testing for increases in the left-branching complexity effect over the sentence, we found significant activity in LIFG and LpMTG.
In other words, the activity in these regions increased from sentence onset to end, in parallel with the increase of the left-branching complexity measure. No similar increase was observed in LIPL. Thus, the observed functional segregation during sentence processing of LaMTG and LIPL vs. LIFG and LpMTG is consistent with our observation of differential activation changes in sensitivity to left- vs. right-branching structure. While LIFG, LpMTG, LaMTG and LIPL all contribute to the supramodal unification processes, the results suggest that these regions differ in their respective contributions to the subprocesses of unification. Our results speak to the high processing costs of (1) simultaneous unification and (2) maintenance of constituents that are not yet attached to the already unified part of the sentence. Sentences with high left- (compared to right-) branching complexity impose an added load on unification. We show that this added load leads to an increased BOLD response in left perisylvian regions. The results are relevant for understanding the neural underpinnings of the processing difficulty linked to multiple, overlapping non-local dependencies. In conclusion, we used the left- and right-branching complexity measures to index this processing difficulty and showed that the unification network operates with similar spatiotemporal dynamics over the course of the sentence, during unification of both written and spoken sentences.
  • Van den Broek, D., Uhlmann, M., Fitz, H., Hagoort, P., & Petersson, K. M. (2016). Spiking neural networks for semantic processing. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
  • Weber, K., Meyer, A. S., & Hagoort, P. (2016). The acquisition of verb-argument and verb-noun category biases in a novel word learning task. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    We show that language users readily learn the probabilities of novel lexical cues to syntactic information (verbs biasing towards a prepositional object dative vs. double-object dative and words biasing towards a verb vs. noun reading) and use these biases in a subsequent production task. In a one-hour exposure phase participants read 12 novel lexical items, embedded in 30 sentence contexts each, in their native language. The items were either strongly (100%) biased towards one grammatical frame or syntactic category assignment or unbiased (50%). The next day participants produced sentences with the newly learned lexical items. They were given the sentence beginning up to the novel lexical item. Their output showed that they were highly sensitive to the biases introduced in the exposure phase. Given this rapid learning and use of novel lexical cues, this paradigm opens up new avenues to test sentence processing theories. Thus, with close control on the biases participants are acquiring, competition between different frames or category assignments can be investigated using reaction times or neuroimaging methods. Generally, these results show that language users adapt to the statistics of the linguistic input, even to subtle lexically-driven cues to syntactic information.
  • Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Assessing speech production-perception interactions through individual differences. Talk presented at Psycholinguistics in Flanders. Marche-en-Famenne. 2015-05-21 - 2015-05-22.

    Abstract

    This study aims to test recent theoretical frameworks in speech motor control which claim that speech production targets are specified in auditory terms. According to such frameworks, people with better auditory acuity should have more precise speech targets. Participants performed speech perception and production tasks in a counterbalanced order. Speech perception acuity was assessed using an adaptive speech discrimination task, where participants discriminated between stimuli on a /ɪ/-/ɛ/ and a /ɑ/-/ɔ/ continuum. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording of the vowels /ɪ/, /ɛ/, /ɑ/ and /ɔ/ in 288 pseudowords (18 per vowel, each of which was repeated 4 times). We predicted that speech production variability would correlate inversely with discrimination performance. Results confirmed this prediction as better discriminators had more distinctive vowel production targets. In addition, participants with higher auditory acuity produced vowels with smaller within-phoneme variability but spaced farther apart in vowel space. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interactions.
  • Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Assessing the link between speech perception and production through individual differences. Poster presented at International Congress of Phonetic Sciences, Glasgow, UK.

    Abstract

    This study aims to test a prediction of recent theoretical frameworks in speech motor control: if speech production targets are specified in auditory terms, people with better auditory acuity should have more precise speech targets. To investigate this, we had participants perform speech perception and production tasks in a counterbalanced order. To assess speech perception acuity, we used an adaptive speech discrimination task. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording. We predicted that speech production variability would correlate inversely with discrimination performance. The results suggest that people do vary in their production and perceptual abilities, and that better discriminators have more distinctive vowel production targets, confirming our prediction. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interactions.
  • Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Effects of auditory feedback consistency on vowel production. Poster presented at Psycholinguistics in Flanders, Marche-en-Famenne.

    Abstract

    In investigations of feedback control during speech production, researchers have focused on two different kinds of responses to erroneous or unexpected auditory feedback. Compensation refers to online, feedback-based corrections of articulations. In contrast, adaptation refers to long-term changes in the speech production system after exposure to erroneous/unexpected feedback, which may last even after feedback is normal again. In the current study, we aimed to compare both types of feedback responses by investigating the conditions under which the system starts adapting in addition to merely compensating. Participants vocalized long vowels while they were exposed to either consistently altered auditory feedback, or to feedback that was unpredictably either altered or normal. Participants were not aware of the manipulation of auditory feedback. We predicted that both conditions would elicit compensation, whereas adaptation would be stronger when the altered feedback was consistent across trials. The results show that although there appeared to be somewhat more adaptation in the consistently altered feedback condition, substantial individual variability led to statistically unreliable effects at the group level. The results stress the importance of taking individual differences into account and show that people vary widely in how they respond to altered auditory feedback.
  • Franken, M. K., Eisner, F., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Following and Opposing Responses to Perturbed Auditory Feedback. Poster presented at Society for the Neurobiology of Language Annual Meeting 2015, Chicago, IL.