Publications

  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments sentences were presented that contained three different types of grammatical violations. In one experiment sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS), 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition, two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal.
  • Hagoort, P., Hald, L. A., Bastiaansen, M. C. M., & Petersson, K. M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304(5669), 438-441. doi:10.1126/science.1095455.

    Abstract

    Although the sentences that we hear or read have meaning, this does not necessarily mean that they are also true. Relatively little is known about the critical brain structures for, and the relative time course of, establishing the meaning and truth of linguistic expressions. We present electroencephalogram data that show the rapid parallel integration of both semantic and world knowledge during the interpretation of a sentence. Data from functional magnetic resonance imaging revealed that the left inferior prefrontal cortex is involved in the integration of both meaning and world knowledge. Finally, oscillatory brain responses indicate that the brain keeps a record of what makes a sentence hard to interpret.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk. Neuropraxis, 6, 204-205.
  • Hagoort, P. (2014). Nodes and networks in the neural architecture for language: Broca's region and beyond. Current Opinion in Neurobiology, 28, 136-141. doi:10.1016/j.conb.2014.07.013.

    Abstract

    Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke–Lichtheim–Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.
  • Hagoort, P. (2017). The core and beyond in the language-ready brain. Neuroscience and Biobehavioral Reviews, 81, 194-204. doi:10.1016/j.neubiorev.2017.01.048.

    Abstract

    In this paper a general cognitive architecture of spoken language processing is specified. This is followed by an account of how this cognitive architecture is instantiated in the human brain. Both the spatial aspects of the networks for language are discussed, as well as the temporal dynamics and the underlying neurophysiology. A distinction is proposed between networks for coding/decoding linguistic information and additional networks for getting from coded meaning to speaker meaning, i.e. for making the inferences that enable the listener to understand the intentions of the speaker.
  • Hagoort, P. (2019). The meaning making mechanism(s) behind the eyes and between the ears. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20190301. doi:10.1098/rstb.2019.0301.

    Abstract

    In this contribution, the following four questions are discussed: (i) where is meaning?; (ii) what is meaning?; (iii) what is the meaning of mechanism?; (iv) what are the mechanisms of meaning? I will argue that meanings are in the head. Meanings have multiple facets, but minimally one needs to make a distinction between single word meanings (lexical meaning) and the meanings of multi-word utterances. The latter ones cannot be retrieved from memory, but need to be constructed on the fly. A mechanistic account of the meaning-making mind requires an analysis at both a functional and a neural level, the reason being that these levels are causally interdependent. I will show that an analysis exclusively focusing on patterns of brain activation lacks explanatory power. Finally, I shall present an initial sketch of how the dynamic interaction between temporo-parietal areas and inferior frontal cortex might instantiate the interpretation of linguistic utterances in the context of a multimodal setting and ongoing discourse information.
  • Hagoort, P. (2019). The neurobiology of language beyond single word processing. Science, 366(6461), 55-58. doi:10.1126/science.aax0289.

    Abstract

    In this Review, I propose a multiple-network view for the neurobiological basis of distinctly human language skills. A much more complex picture of interacting brain areas emerges than in the classical neurobiological model of language. This is because using language is more than single-word processing, and much goes on beyond the information given in the acoustic or orthographic tokens that enter primary sensory cortices. This requires the involvement of multiple networks with functionally nonoverlapping contributions.

  • Hagoort, P., & Indefrey, P. (2014). The neurobiology of language beyond single words. Annual Review of Neuroscience, 37, 347-362. doi:10.1146/annurev-neuro-071013-013847.

    Abstract

    A hallmark of human language is that we combine lexical building blocks retrieved from memory in endless new ways. This combinatorial aspect of language is referred to as unification. Here we focus on the neurobiological infrastructure for syntactic and semantic unification. Unification is characterized by a high-speed temporal profile including both prediction and integration of retrieved lexical elements. A meta-analysis of numerous neuroimaging studies reveals a clear dorsal/ventral gradient in both left inferior frontal cortex and left posterior temporal cortex, with dorsal foci for syntactic processing and ventral foci for semantic processing. In addition to core areas for unification, further networks need to be recruited to realize language-driven communication to its full extent. One example is the theory of mind network, which allows listeners and readers to infer the intended message (speaker meaning) from the coded meaning of the linguistic utterance. This indicates that sensorimotor simulation cannot handle all of language processing.
  • Hagoort, P. (2000). What we shall know only tomorrow. Brain and Language, 71, 89-92. doi:10.1006/brln.1999.2221.
  • Hammarström, H., & Güldemann, T. (2014). Quantifying geographical determinants of large-scale distributions of linguistic features. Language Dynamics and Change, 4, 87-115. doi:10.1163/22105832-00401002.

    Abstract

    In the recent past the work on large-scale linguistic distributions across the globe has intensified considerably. Work on macro-areal relationships in Africa (Güldemann, 2010) suggests that the shape of convergence areas may be determined by climatic factors and geophysical features such as mountains, water bodies, coastlines, etc. Worldwide data is now available for geophysical features as well as linguistic features, including numeral systems and basic constituent order. We explore the possibility that the shape of areal aggregations of individual features in these two linguistic domains correlates with Köppen-Geiger climate zones. Furthermore, we test the hypothesis that the shape of such areal feature aggregations is determined by the contour of adjacent geophysical features like mountain ranges or coastlines. In these first basic tests, we do not find clear evidence that either Köppen-Geiger climate zones or the contours of geophysical features are major predictors for the linguistic data at hand.

  • Hammarström, H., & Donohue, M. (2014). Some principles on the use of macro-areas in typological comparison. Language Dynamics and Change, 4, 167-187. doi:10.1163/22105832-00401001.

    Abstract

    While the notion of the ‘area’ or ‘Sprachbund’ has a long history in linguistics, with geographically-defined regions frequently cited as a useful means to explain typological distributions, the problem of delimiting areas has not been well addressed. Lists of general-purpose, largely independent ‘macro-areas’ (typically continent size) have been proposed as a step to rule out contact as an explanation for various large-scale linguistic phenomena. This squib points out some problems in some of the currently widely-used predetermined areas, those found in the World Atlas of Language Structures (Haspelmath et al., 2005). Instead, we propose a principled division of the world’s landmasses into six macro-areas that arguably have better geographical independence properties.
  • Hammarström, H. (2014). [Review of the book A grammar of the great Andamanese language: An ethnolinguistic study by Anvita Abbi]. Journal of South Asian Languages and Linguistics, 1, 111-116. doi:10.1515/jsall-2014-0007.
  • Han, J.-I., & Verdonschot, R. G. (2019). Spoken-word production in Korean: A non-word masked priming and phonological Stroop task investigation. Quarterly Journal of Experimental Psychology, 72(4), 901-912. doi:10.1177/1747021818770989.

    Abstract

    Speech production studies have shown that the phonological unit initially used to fill the metrical frame during phonological encoding is language-specific: a phoneme for English and Dutch, an atonal syllable for Mandarin Chinese, and a mora for Japanese. However, only a few studies have chronometrically investigated speech production in Korean, and they obtained mixed results. Korean is particularly interesting as there might be both phonemic and syllabic influences during phonological encoding. The purpose of this study is to further examine the initial phonological preparation unit in Korean, employing a masked priming task (Experiment 1) and a phonological Stroop task (Experiment 2). The results showed significant onset (and onset-plus, that is, consonant-vowel [CV]) effects in both experiments, but there was no compelling evidence for a prominent role for the syllable. When the prime words were presented in three different forms related to the targets, namely without any change, with re-syllabified codas, and with nasalised codas, there were no significant differences in facilitation among the three forms. Alternatively, participants may not have had sufficient time to process the primes up to the point that re-syllabification or nasalisation could have been carried out. In addition, the results of the Stroop task demonstrated that the onset phoneme effect was not driven by any orthographic influence. These findings suggest that the onset segment, and not the syllable, is the initial (or proximate) phonological unit used in the segment-to-frame encoding process during speech planning in Korean.

    Additional information

    stimuli for experiment 1 and 2
  • Hao, X., Huang, Y., Song, Y., Kong, X., & Liu, J. (2017). Experience with the cardinal coordinate system contributes to the precision of cognitive maps. Frontiers in Psychology, 8: 1166. doi:10.3389/fpsyg.2017.01166.

    Abstract

    The coordinate system has been proposed as a fundamental and cross-culturally used spatial representation, through which people code location and direction information in the environment. Here we provided direct evidence demonstrating that daily experience with the cardinal coordinate system (i.e., east, west, north, and south) contributed to the representation of cognitive maps. Behaviorally, we found that individuals who relied more on the cardinal coordinate system for daily navigation made smaller errors in an indoor pointing task, suggesting that the cardinal coordinate system is an important element of cognitive maps. Neurally, the extent to which individuals relied on the cardinal coordinate system was positively correlated with the gray matter volume of the entorhinal cortex, suggesting that the entorhinal cortex may serve as the neuroanatomical basis of coordinate-based navigation (the entorhinal coordinate area, ECA). Further analyses on the resting-state functional connectivity revealed that the intrinsic interaction between the ECA and two hippocampal sub-regions, the subiculum and cornu ammonis, might be linked with the representation precision of cognitive maps. In sum, our study reveals an association between daily experience with the cardinal coordinate system and cognitive maps, and suggests that the ECA works in collaboration with hippocampal sub-regions to represent cognitive maps.
  • Harmon, Z., Idemaru, K., & Kapatsinski, V. (2019). Learning mechanisms in cue reweighting. Cognition, 189, 76-88. doi:10.1016/j.cognition.2019.03.011.

    Abstract

    Feedback has been shown to be effective in shifting attention across perceptual cues to a phonological contrast in speech perception (Francis, Baldwin & Nusbaum, 2000). However, the learning mechanisms behind this process remain obscure. We compare the predictions of supervised error-driven learning (Rescorla & Wagner, 1972) and reinforcement learning (Sutton & Barto, 1998) using computational simulations. Supervised learning predicts downweighting of an informative cue when the learner receives evidence that it is no longer informative. In contrast, reinforcement learning suggests that a reduction in cue weight requires positive evidence for the informativeness of an alternative cue. Experimental evidence supports the latter prediction, implicating reinforcement learning as the mechanism behind the effect of feedback on cue weighting in speech perception. Native English listeners were exposed to either bimodal or unimodal VOT distributions spanning the unaspirated/aspirated boundary (bear/pear). VOT is the primary cue to initial stop voicing in English. However, lexical feedback in training indicated that VOT was no longer predictive of voicing. Reduction in the weight of VOT was observed only when participants could use an alternative cue, F0, to predict voicing. Frequency distributions had no effect on learning. Overall, the results suggest that attention shifting in learning the phonetic cues to phonological categories is accomplished using simple reinforcement learning principles that also guide the choice of actions in other domains.
  • Harmon, Z., & Kapatsinski, V. (2017). Putting old tools to novel uses: The role of form accessibility in semantic extension. Cognitive Psychology, 98, 22-44. doi:10.1016/j.cogpsych.2017.08.002.

    Abstract

    An increase in frequency of a form has been argued to result in semantic extension (Bybee, 2003; Zipf, 1949). Yet, research on the acquisition of lexical semantics suggests that a form that frequently co-occurs with a meaning gets restricted to that meaning (Xu & Tenenbaum, 2007). The current work reconciles these positions by showing that – through its effect on form accessibility – frequency causes semantic extension in production, while at the same time causing entrenchment in comprehension. Repeatedly experiencing a form paired with a specific meaning makes one more likely to re-use the form to express related meanings, while also increasing one’s confidence that the form is never used to express those meanings. Recurrent pathways of semantic change are argued to result from a tug of war between the production-side pressure to reuse easily accessible forms and the comprehension-side confidence that one has seen all possible uses of a frequent form.
  • Harneit, A., Braun, U., Geiger, L. S., Zang, Z., Hakobjan, M., Van Donkelaar, M. M. J., Schweiger, J. I., Schwarz, K., Gan, G., Erk, S., Heinz, A., Romanczuk‐Seiferth, N., Witt, S., Rietschel, M., Walter, H., Franke, B., Meyer‐Lindenberg, A., & Tost, H. (2019). MAOA-VNTR genotype affects structural and functional connectivity in distributed brain networks. Human Brain Mapping, 40(18), 5202-5212. doi:10.1002/hbm.24766.

    Abstract

    Previous studies have linked the low expression variant of a variable number of tandem repeat polymorphism in the monoamine oxidase A gene (MAOA‐L) to the risk for impulsivity and aggression, brain developmental abnormalities, altered cortico‐limbic circuit function, and an exaggerated neural serotonergic tone. However, the neurobiological effects of this variant on human brain network architecture are incompletely understood. We studied healthy individuals and used multimodal neuroimaging (sample size range: 219–284 across modalities) and network‐based statistics (NBS) to probe the specificity of MAOA‐L‐related connectomic alterations to cortical‐limbic circuits and the emotion processing domain. We assessed the spatial distribution of affected links across several neuroimaging tasks and data modalities to identify potential alterations in network architecture. Our results revealed a distributed network of node links with a significantly increased connectivity in MAOA‐L carriers compared to the carriers of the high expression (H) variant. The hyperconnectivity phenotype primarily consisted of between‐lobe (“anisocoupled”) network links and showed a pronounced involvement of frontal‐temporal connections. Hyperconnectivity was observed across functional magnetic resonance imaging (fMRI) of implicit emotion processing (pFWE = .037), resting‐state fMRI (pFWE = .022), and diffusion tensor imaging (pFWE = .044) data, while no effects were seen in fMRI data of another cognitive domain, that is, spatial working memory (pFWE = .540). These observations are in line with prior research on the MAOA‐L variant and complement these existing data by novel insights into the specificity and spatial distribution of the neurogenetic effects. Our work highlights the value of multimodal network connectomic approaches for imaging genetics.
  • Hartung, F., Hagoort, P., & Willems, R. M. (2017). Readers select a comprehension mode independent of pronoun: Evidence from fMRI during narrative comprehension. Brain and Language, 170, 29-38. doi:10.1016/j.bandl.2017.03.007.

    Abstract

    Perspective is a crucial feature for communicating about events. Yet it is unclear how linguistically encoded perspective relates to cognitive perspective taking. Here, we tested the effect of perspective taking with short literary stories. Participants listened to stories with 1st or 3rd person pronouns referring to the protagonist, while undergoing fMRI. When comparing action events with 1st and 3rd person pronouns, we found no evidence for a neural dissociation depending on the pronoun. A split sample approach based on the self-reported experience of perspective taking revealed 3 comprehension preferences. One group showed a strong 1st person preference, another a strong 3rd person preference, while a third group engaged in 1st and 3rd person perspective taking simultaneously. Comparing brain activations of the groups revealed different neural networks. Our results suggest that comprehension is perspective dependent: not on the perspective suggested by the text, but on the reader’s (situational) preference.
  • Hartung, F., Withers, P., Hagoort, P., & Willems, R. M. (2017). When fiction is just as real as fact: No differences in reading behavior between stories believed to be based on true or fictional events. Frontiers in Psychology, 8: 1618. doi:10.3389/fpsyg.2017.01618.

    Abstract

    Experiments have shown that compared to fictional texts, readers read factual texts faster and have better memory for described situations. Reading fictional texts on the other hand seems to improve memory for exact wordings and expressions. Most of these studies used a ‘newspaper’ versus ‘literature’ comparison. In the present study, we investigated the effect of the reader’s expectation as to whether information is true or fictional with a subtler manipulation, by labelling short stories as either based on true or fictional events. In addition, we tested whether narrative perspective or individual preference in perspective taking affects reading true or fictional stories differently. In an online experiment, participants (final N = 1742) read one story which was introduced as based on true events or as fictional (factor fictionality). The story could be narrated in either 1st or 3rd person perspective (factor perspective). We measured immersion in and appreciation of the story, perspective taking, as well as memory for events. We found no evidence that knowing a story is fictional or based on true events influences reading behavior or experiential aspects of reading. We suggest that it is not whether a story is true or fictional, but rather expectations towards certain reading situations (e.g. reading a newspaper or literature) which affect behavior by activating appropriate reading goals. Results further confirm that narrative perspective partially influences perspective taking and experiential aspects of reading.
  • Haun, D. B. M., Rekers, Y., & Tomasello, M. (2014). Children conform to the behavior of peers; Other great apes stick with what they know. Psychological Science, 25, 2160-2167. doi:10.1177/0956797614553235.

    Abstract

    All primates learn things from conspecifics socially, but it is not clear whether they conform to the behavior of these conspecifics—if conformity is defined as overriding individually acquired behavioral tendencies in order to copy peers’ behavior. In the current study, chimpanzees, orangutans, and 2-year-old human children individually acquired a problem-solving strategy. They then watched several conspecific peers demonstrate an alternative strategy. The children switched to this new, socially demonstrated strategy in roughly half of all instances, whereas the other two great-ape species almost never adjusted their behavior to the majority’s. In a follow-up study, children switched much more when the peer demonstrators were still present than when they were absent, which suggests that their conformity arose at least in part from social motivations. These results demonstrate an important difference between the social learning of humans and great apes, a difference that might help to account for differences in human and nonhuman cultures.

    Additional information

    Haun_Rekers_Tomasello_2014_supp.pdf
  • Haworth, S., Shapland, C. Y., Hayward, C., Prins, B. P., Felix, J. F., Medina-Gomez, C., Rivadeneira, F., Wang, C., Ahluwalia, T. S., Vrijheid, M., Guxens, M., Sunyer, J., Tachmazidou, I., Walter, K., Iotchkova, V., Jackson, A., Cleal, L., Huffmann, J., Min, J. L., Sass, L., Timmers, P. R. H. J., UK10K consortium, Davey Smith, G., Fisher, S. E., Wilson, J. F., Cole, T. J., Fernandez-Orth, D., Bønnelykke, K., Bisgaard, H., Pennell, C. E., Jaddoe, V. W. V., Dedoussis, G., Timpson, N. J., Zeggini, E., Vitart, V., & St Pourcain, B. (2019). Low-frequency variation in TP53 has large effects on head circumference and intracranial volume. Nature Communications, 10: 357. doi:10.1038/s41467-018-07863-x.

    Abstract

    Cranial growth and development is a complex process which affects the closely related traits of head circumference (HC) and intracranial volume (ICV). The underlying genetic influences affecting these traits during the transition from childhood to adulthood are little understood, but might include both age-specific genetic influences and low-frequency genetic variation. To understand these influences, we model the developmental genetic architecture of HC, showing this is genetically stable and correlated with genetic determinants of ICV. Investigating up to 46,000 children and adults of European descent, we identify association with final HC and/or final ICV+HC at 9 novel common and low-frequency loci, illustrating that genetic variation from a wide allele frequency spectrum contributes to cranial growth. The largest effects are reported for low-frequency variants within TP53, with 0.5 cm wider heads in increaser-allele carriers versus non-carriers during mid-childhood, suggesting a previously unrecognized role of TP53 transcripts in human cranial development.

    Additional information

    Supplementary Information
  • Hayano, K. (2004). Kaiwa ni okeru ninshikiteki ken’i no koushou: Shuujoshi yo, ne, odoroki hyouji no bunpu to kinou [Negotiation of Epistemic Authority in Conversation: on the use of final particles yo, ne and surprise markers]. Studies in Pragmatics, 6, 17-28.
  • Hendricks, A. E., Bochukova, E. G., Marenne, G., Keogh, J. M., Atanassova, N., Bounds, R., Wheeler, E., Mistry, V., Henning, E., Körner, A., Muddyman, D., McCarthy, S., Hinney, A., Hebebrand, J., Scott, R. A., Langenberg, C., Wareham, N. J., Surendran, P., Howson, J. M., Butterworth, A. S., Danesh, J., Nordestgaard, B. G., Nielsen, S. F., Afzal, S., Papadia, S., Ashford, S., Garg, S., Palomino, R. I., Kwasniewska, A., Tachmazidou, I., O’Rahilly, S., Zeggini, E., Barroso, I., & Farooqi, I. S. (2017). Rare variant analysis of human and rodent obesity genes in individuals with severe childhood obesity. Scientific Reports, 7: 4394. doi:10.1038/s41598-017-03054-8.

    Abstract

    Obesity is a genetically heterogeneous disorder. Using targeted and whole-exome sequencing, we studied 32 human and 87 rodent obesity genes in 2,548 severely obese children and 1,117 controls. We identified 52 variants contributing to obesity in 2% of cases including multiple novel variants in GNAS, which were sometimes found with accelerated growth rather than short stature as described previously. Nominally significant associations were found for rare functional variants in BBS1, BBS9, GNAS, MKKS, CLOCK and ANGPTL6. The p.S284X variant in ANGPTL6 drives the association signal (rs201622589, MAF~0.1%, odds ratio = 10.13, p-value = 0.042) and results in complete loss of secretion in cells. Further analysis including additional case-control studies and population controls (N = 260,642) did not support association of this variant with obesity (odds ratio = 2.34, p-value = 2.59 × 10−3), highlighting the challenges of testing rare variant associations and the need for very large sample sizes. Further validation in cohorts with severe obesity and engineering the variants in model organisms will be needed to explore whether human variants in ANGPTL6 and other genes that lead to obesity when deleted in mice, do contribute to obesity. Such studies may yield druggable targets for weight loss therapies.
  • Hersh, T., King, B., & Lutton, B. V. (2014). Novel bioinformatics tools for analysis of gene expression in the skate, Leucoraja erinacea. The Bulletin, MDI Biological Laboratory, 53, 16-18.
  • Hervais-Adelman, A., Pefkou, M., & Golestani, N. (2014). Bilingual speech-in-noise: Neural bases of semantic context use in the native language. Brain and Language, 132, 1-6. doi:10.1016/j.bandl.2014.01.009.

    Abstract

    Bilingual listeners comprehend speech-in-noise better in their native than non-native language. This native-language benefit is thought to arise from greater use of top-down linguistic information to assist degraded speech comprehension. Using functional magnetic resonance imaging, we recently showed that left angular gyrus activation is modulated when semantic context is used to assist native language speech-in-noise comprehension (Golestani, Hervais-Adelman, Obleser, & Scott, 2013). Here, we extend the previous work, by reanalyzing the previous data alongside the results obtained in the non-native language of the same late bilingual participants. We found a behavioral benefit of semantic context in processing speech-in-noise in the native language only, and the imaging results also revealed a native language context effect in the left angular gyrus. We also find a complementary role of lower-level auditory regions during stimulus-driven processing. Our findings help to elucidate the neural basis of the established native language behavioral benefit of speech-in-noise processing.
  • Hervais-Adelman, A., Moser-Mercer, B., Murray, M. M., & Golestani, N. (2017). Cortical thickness increases after simultaneous interpretation training. Neuropsychologia, 98, 212-219. doi:10.1016/j.neuropsychologia.2017.01.008.

    Abstract

    Simultaneous interpretation is a complex cognitive task that not only demands multilingual language processing, but also requires application of extreme levels of domain-general cognitive control. We used MRI to longitudinally measure cortical thickness in simultaneous interpretation trainees before and after a Master's program in conference interpreting. We compared them to multilingual control participants scanned at the same interval of time. Increases in cortical thickness were specific to trainee interpreters. Increases were observed in regions involved in lower-level, phonetic processing (left posterior superior temporal gyrus, anterior supramarginal gyrus and planum temporale), in the higher-level formulation of propositional speech (right angular gyrus) and in the conversion of items from working memory into a sequence (right dorsal premotor cortex), and finally, in domain-general executive control and attention (right parietal lobule). Findings are consistent with the linguistic requirements of simultaneous interpretation and also with the more general cognitive demands on attentional control for expert performance in simultaneous interpreting. Our findings may also reflect beneficial, potentially protective effects of simultaneous interpretation training, which has previously been shown to confer enhanced skills in certain executive and attentional domains over and above those conferred by bilingualism.
  • Hervais-Adelman, A., Kumar, U., Mishra, R. K., Tripathi, V. N., Guleria, A., Singh, J. P., Eisner, F., & Huettig, F. (2019). Learning to read recycles visual cortical networks without destruction. Science Advances, 5(9): eaax0262. doi:10.1126/sciadv.aax0262.

    Abstract

    Learning to read is associated with the appearance of an orthographically sensitive brain region known as the visual word form area. It has been claimed that development of this area proceeds by impinging upon territory otherwise available for the processing of culturally relevant stimuli such as faces and houses. In a large-scale functional magnetic resonance imaging study of a group of individuals of varying degrees of literacy (from completely illiterate to highly literate), we examined cortical responses to orthographic and nonorthographic visual stimuli. We found that literacy enhances responses to other visual input in early visual areas and enhances representational similarity between text and faces, without reducing the extent of response to nonorthographic input. Thus, acquisition of literacy in childhood recycles existing object representation mechanisms but without destructive competition.

    Additional information

    aax0262_SM.pdf
  • Hessels, R. S., Hooge, I., Snijders, T. M., & Kemner, C. (2014). Is there a limit to the superiority of individuals with ASD in visual search? Journal of Autism and Developmental Disorders, 44, 443-451. doi:10.1007/s10803-013-1886-8.

    Abstract

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results can be explained in terms of load theory. We suggest that there is a limit to the superiority in visual search for individuals with ASD, related to the perceptual load of the stimuli. When perceptual load becomes so high that no additional task-(ir)relevant information can be processed, performance will be based on single stimulus identification, in which no differences between individuals with ASD and controls have been demonstrated.
  • Heyselaar, E., & Segaert, K. (2019). Memory encoding of syntactic information involves domain-general attentional resources. Evidence from dual-task studies. Quarterly Journal of Experimental Psychology, 72(6), 1285-1296. doi:10.1177/1747021818801249.

    Abstract

    We investigate the type of attention (domain-general or language-specific) used during syntactic processing. We focus on syntactic priming: in this task, participants listen to a sentence that describes a picture (prime sentence), followed by a picture the participants need to describe (target sentence). As a measure of syntactic processing, we measure the proportion of trials on which participants reuse the syntactic structure they heard in the prime sentence to describe the current target. Participants simultaneously conducted a motion-object tracking (MOT) task, a task commonly used to tax domain-general attentional resources. We manipulated the number of objects the participant had to track, and thus measured participants’ ability to process syntax while their attention was not, slightly, or overly taxed. Performance in the MOT task was significantly worse when it was conducted as part of a dual task than as a single task. We observed an inverted U-shaped curve in priming magnitude when the MOT task was conducted concurrently with prime sentences (i.e., memory encoding), but no effect when it was conducted with target sentences (i.e., memory retrieval). Our results illustrate how, during the encoding of syntactic information, domain-general attention differentially affects syntactic processing, whereas during the retrieval of syntactic information, domain-general attention does not influence syntactic processing.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2017). How social opinion influences syntactic processing – An investigation using virtual reality. PLoS One, 12(4): e0174405. doi:10.1371/journal.pone.0174405.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2017). In dialogue with an avatar, language behavior is identical to dialogue with a human partner. Behavior Research Methods, 49(1), 46-60. doi:10.3758/s13428-015-0688-7.

    Abstract

    The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioral research as its flexibility allows for a wide range of applications. This new method has not been as widely accepted in the field of psycholinguistics, however, possibly due to the assumption that language processing during human-computer interactions does not accurately reflect human-human interactions. Yet at the same time there is a growing need to study human-human language interactions in a tightly controlled context, which has not been possible using existing methods. VR, however, offers experimental control over parameters that cannot be (as finely) controlled in the real world. As such, in this study we aim to show that human-computer language interaction is comparable to human-human language interaction in virtual reality. In the current study we compare participants’ language behavior in a syntactic priming task with human versus computer partners: we used a human partner, a human-like avatar with human-like facial expressions and verbal behavior, and a computer-like avatar which had this humanness removed. As predicted, our study shows comparable priming effects between the human and human-like avatar suggesting that participants attributed human-like agency to the human-like avatar. Indeed, when interacting with the computer-like avatar, the priming effect was significantly decreased. This suggests that when interacting with a human-like avatar, sentence processing is comparable to interacting with a human partner. Our study therefore shows that VR is a valid platform for conducting language research and studying dialogue interactions in an ecologically valid manner.
  • Heyselaar, E., Segaert, K., Walvoort, S. J., Kessels, R. P., & Hagoort, P. (2017). The role of nondeclarative memory in the skill for language: Evidence from syntactic priming in patients with amnesia. Neuropsychologia, 101, 97-105. doi:10.1016/j.neuropsychologia.2017.04.033.

    Abstract

    Syntactic priming, the phenomenon in which participants adopt the linguistic behaviour of their partner, is widely used in psycholinguistics to investigate syntactic operations. Although the phenomenon of syntactic priming is well documented, the memory system that supports the retention of this syntactic information long enough to influence future utterances is not as widely investigated. We aim to shed light on this issue by assessing patients with Korsakoff's amnesia on an active-passive syntactic priming task and comparing their performance to that of controls matched in age, education, and premorbid intelligence. Patients with Korsakoff's syndrome display deficits in all subdomains of declarative memory, yet their nondeclarative memory remains intact, making them an ideal patient group to determine which memory system supports syntactic priming. In line with the hypothesis that syntactic priming relies on nondeclarative memory, the patient group shows strong priming tendencies (12.6% passive structure repetition). Our healthy control group did not show a priming tendency, presumably due to cognitive interference between declarative and nondeclarative memory. We discuss the results in relation to amnesia, aging, and compensatory mechanisms.
  • Hibar, D. P., Adams, H. H. H., Jahanshad, N., Chauhan, G., Stein, J. L., Hofer, E., Rentería, M. E., Bis, J. C., Arias-Vasquez, A., Ikram, M. K., Desrivieres, S., Vernooij, M. W., Abramovic, L., Alhusaini, S., Amin, N., Andersson, M., Arfanakis, K., Aribisala, B. S., Armstrong, N. J., Athanasiu, L., Axelsson, T., Beecham, A. H., Beiser, A., Bernard, M., Blanton, S. H., Bohlken, M. M., Boks, M. P., Bralten, J., Brickman, A. M., Carmichael, O., Chakravarty, M. M., Chen, Q., Ching, C. R. K., Chouraki, V., Cuellar-Partida, G., Crivello, F., den Brabander, A., Doan, N. T., Ehrlich, S., Giddaluru, S., Goldman, A. L., Gottesman, R. F., Grimm, O., Griswold, M. E., Guadalupe, T., Gutman, B. A., Hass, J., Haukvik, U. K., Hoehn, D., Holmes, A. J., Hoogman, M., Janowitz, D., Jia, T., Jørgensen, K. N., Mirza-Schreiber, N., Kasperaviciute, D., Kim, S., Klein, M., Krämer, B., Lee, P. H., Liewald, D. C. M., Lopez, L. M., Luciano, M., Macare, C., Marquand, A. F., Matarin, M., Mather, K. A., Mattheisen, M., McKay, D. R., Milaneschi, Y., Maniega, S. M., Nho, K., Nugent, A. C., Nyquist, P., Olde Loohuis, L. M., Oosterlaan, J., Papmeyer, M., Pirpamer, L., Pütz, B., Ramasamy, A., Richards, J. S., Risacher, S., Roiz-Santiañez, R., Rommelse, N., Ropele, S., Rose, E., Royle, N. A., Rundek, T., Sämann, P. G., Saremi, A., Satizabal, C. L., Schmaal, L., Schork, A. J., Shen, L., Shin, J., Shumskaya, E., Smith, A. V., Sprooten, E., Strike, L. T., Teumer, A., Tordesillas-Gutierrez, D., Toro, R., Trabzuni, D., Trompet, S., Vaidya, D., Van der Grond, J., Van der Lee, S. J., Van der Meer, D., Van Donkelaar, M. M. J., Van Eijk, K. R., van Erp, T. G. 
M., Van Rooij, D., Walton, E., Westlye, L. T., Whelan, C. D., Windham, B. G., Winkler, A. M., Wittfeld, K. M., Woldehawariat, G., Wolf, C., Wolfers, T., Yanek, L. R., Yang, J., Zijdenbos, A., Zwiers, M. P., Agartz, I., Almasy, L., Ames, D., Amouyel, P., Andreassen, O. A., Arepalli, S., Assareh, A. A., Barral, S., Bastin, M. E., Becker, D. M., Becker, J. T., Bennett, D. A., Blangero, J., Van Bokhoven, H., Boomsma, D. I., Brodaty, H., Brouwer, R. M., Brunner, H. G., Buckner, R. L., Buitelaar, J. K., Bulayeva, K. B., Cahn, W., Calhoun, V. D., Cannon, D. M., Cavalleri, G. L., Cheng, C.-Y., Cichon, S., Cookson, M. R., Corvin, A., Crespo-Facorro, B., Curran, J. E., Czisch, M., Dale, A. M., Davies, G. E., De Craen, A. J. M., De Geus, E. J. C., De Jager, P. L., De Zubicaray, G. i., Deary, I. J., Debette, S., DeCarli, C., Delanty, N., Depondt, C., DeStefano, A., Dillman, A., Djurovic, S., Donohoe, G., Drevets, W. C., Duggirala, R., Dyer, T. D., Enzinger, C., Erk, S., Espeseth, T., Fedko, I. O., Fernández, G., Ferrucci, L., Fisher, S. E., Fleischman, D. A., Ford, I., Fornage, M., Foroud, T. M., Fox, P. T., Francks, C., Fukunaga, M., Gibbs, J. R., Glahn, D. C., Gollub, R. L., Göring, H. H. H., Green, R. C., Gruber, O., Gudnason, V., Guelfi, S., Haberg, A. K., Hansell, N. K., Hardy, J., Hartman, C. A., Hashimoto, R., Hegenscheid, K., Heinz, A., Le Hellard, S., Hernandez, D. G., Heslenfeld, D. J., Ho, B.-C., Hoekstra, P. J., Hoffmann, W., Hofman, A., Holsboer, F., Homuth, G., Hosten, N., Hottenga, J.-J., Huentelman, M., Pol, H. E. H., Ikeda, M., Jack Jr., C. R., Jenkinson, M., Johnson, R., Jonsson, E. G., Jukema, J. W., Kahn, R. S., Kanai, R., Kloszewska, I., Knopman, D. S., Kochunov, P., Kwok, J. B., Lawrie, S. M., Lemaître, H., Liu, X., Longo, D. L., Lopez, O. L., Lovestone, S., Martinez, O., Martinot, J.-L., Mattay, V. S., McDonald, C., Mcintosh, A. M., McMahon, F., McMahon, K. L., Mecocci, P., Melle, I., Meyer-Lindenberg, A., Mohnke, S., Montgomery, G. W., Morris, D. 
W., Mosley, T. H., Mühleisen, T. W., Müller-Myhsok, B., Nalls, M. A., Nauck, M., Nichols, T. E., Niessen, W. J., Nöthen, M. M., Nyberg, L., Ohi, K., Olvera, R. L., Ophoff, R. A., Pandolfo, M., Paus, T., Pausova, Z., Penninx, B. W. J. H., Pike, G. B., Potkin, S. G., Psaty, B. M., Reppermund, S., Rietschel, M., Roffman, J. L., Romanczuk-Seiferth, N., Rotter, J. I., Ryten, M., Sacco, R. L., Sachdev, P. S., Saykin, A. J., Schmidt, R., Schmidt, H., Schofield, P. R., Sigursson, S., Simmons, A., Singleton, A., Sisodiya, S. M., Smith, C., Smoller, J. W., Soininen, H., Steen, V. M., Stott, D. J., Sussmann, J. E., Thalamuthu, A., Toga, A. W., Traynor, B. J., Troncoso, J., Tsolaki, M., Tzourio, C., Uitterlinden, A. G., Hernández, M. C. V., Van der Brug, M., Van der Lugt, A., Van der Wee, N. J. A., Van Haren, N. E. M., Van Tol, M.-J., Vardarajan, B. N., Vellas, B., Veltman, D. J., Völzke, H., Walter, H., Wardlaw, J. M., Wassink, T. H., Weale, M. e., Weinberger, D. R., Weiner, M., Wen, W., Westman, E., White, T., Wong, T. Y., Wright, C. B., Zielke, R. H., Zonderman, A. B., Martin, N. G., Van Duijn, C. M., Wright, M. J., Longstreth, W. W. T., Schumann, G., Grabe, H. J., Franke, B., Launer, L. J., Medland, S. E., Seshadri, S., Thompson, P. M., & Ikram, A. (2017). Novel genetic loci associated with hippocampal volume. Nature Communications, 8: 13624. doi:10.1038/ncomms13624.

    Abstract

    The hippocampal formation is a brain structure integrally involved in episodic memory, spatial navigation, cognition and stress responsiveness. Structural abnormalities in hippocampal volume and shape are found in several common neuropsychiatric disorders. To identify the genetic underpinnings of hippocampal structure, here we perform a genome-wide association study (GWAS) of 33,536 individuals and discover six independent loci significantly associated with hippocampal volume, four of them novel. Of the novel loci, three lie within genes (ASTN2, DPP4 and MAST4) and one is found 200 kb upstream of SHH. A hippocampal subfield analysis shows that a locus within the MSRB3 gene shows evidence of a localized effect along the dentate gyrus, subiculum, CA1 and fissure. Further, we show that genetic variants associated with decreased hippocampal volume are also associated with increased risk for Alzheimer’s disease (rg=−0.155). Our findings suggest novel biological pathways through which human genetic variation influences hippocampal volume and risk for neuropsychiatric illness.

    Additional information

    ncomms13624-s1.pdf ncomms13624-s2.xlsx
  • Hintz, F., Meyer, A. S., & Huettig, F. (2017). Predictors of verb-mediated anticipatory eye movements in the visual world. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(9), 1352-1374. doi:10.1037/xlm0000388.

    Abstract

    Many studies have demonstrated that listeners use information extracted from verbs to guide anticipatory eye movements to objects in the visual context that satisfy the selection restrictions of the verb. An important question is what underlies such verb-mediated anticipatory eye gaze. Based on empirical and theoretical suggestions, we investigated the influence of five potential predictors of this behavior: functional associations and general associations between verb and target object, as well as the listeners’ production fluency, receptive vocabulary knowledge, and non-verbal intelligence. In three eye-tracking experiments, participants looked at sets of four objects and listened to sentences where the final word was predictable or not predictable (e.g., “The man peels/draws an apple”). On predictable trials, the target object, but not the distractors, was functionally and associatively related to the verb. In Experiments 1 and 2, objects were presented before the verb was heard. In Experiment 3, participants were given a short preview of the display after the verb was heard. Functional associations and receptive vocabulary were found to be important predictors of verb-mediated anticipatory eye gaze independent of the amount of contextual visual input. General word associations did not predict anticipatory eye movements, and non-verbal intelligence was only a very weak predictor. Participants’ production fluency correlated positively with the likelihood of anticipatory eye movements when participants were given the long but not the short visual display preview. These findings fit best with a pluralistic approach to predictive language processing in which multiple mechanisms, mediating factors, and situational context dynamically interact.
  • Hirschmann, J., Schoffelen, J.-M., Schnitzler, A., & Van Gerven, M. A. J. (2017). Parkinsonian rest tremor can be detected accurately based on neuronal oscillations recorded from the subthalamic nucleus. Clinical Neurophysiology, 128, 2029-2036. doi:10.1016/j.clinph.2017.07.419.

    Abstract

    Objective: To investigate the possibility of tremor detection based on deep brain activity.
    Methods: We re-analyzed recordings of local field potentials (LFPs) from the subthalamic nucleus in 10 Parkinson's disease (PD) patients (12 body sides) with spontaneously fluctuating rest tremor. Power in several frequency bands was estimated and used as input to Hidden Markov Models (HMMs), which classified short data segments as either tremor-free rest or rest tremor. HMMs were compared to direct threshold application to individual power features.
    Results: Applying a threshold directly to band-limited power was insufficient for tremor detection (mean area under the curve [AUC] of the receiver operating characteristic: 0.64, STD: 0.19). Multi-feature HMMs, in contrast, allowed for accurate detection (mean AUC: 0.82, STD: 0.15), using four power features obtained from a single contact pair. Within-patient training yielded better accuracy than across-patient training (0.84 vs. 0.78, p = 0.03), yet tremor could often be detected accurately with either approach. High-frequency oscillations (>200 Hz) were the best performing individual feature.
    Conclusions: LFP-based markers of tremor are robust enough to allow for accurate tremor detection in short data segments, provided that appropriate statistical models are used.
    Significance: LFP-based markers of tremor could be useful control signals for closed-loop deep brain stimulation.
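
    The detection pipeline described in this abstract (band-limited power features feeding a two-state Hidden Markov Model) can be illustrated with a toy sketch. This is not the authors' implementation: the synthetic signal, the 4-6 Hz band, the 1 s segments, and the Gaussian emission and transition parameters are all illustrative assumptions, and the decoder below uses fixed parameters rather than fitting them to LFP data.

```python
import numpy as np

def band_power(segment, fs, f_lo, f_hi):
    """Power of `segment` in the [f_lo, f_hi] Hz band via a raw periodogram."""
    freqs = np.fft.rfftfreq(len(segment), 1.0 / fs)
    psd = np.abs(np.fft.rfft(segment)) ** 2
    return psd[(freqs >= f_lo) & (freqs <= f_hi)].sum()

def viterbi_2state(log_lik, log_trans, log_init):
    """Most likely state sequence for a 2-state HMM (Viterbi decoding)."""
    T = log_lik.shape[0]
    delta = np.empty((T, 2))           # best log-probability ending in each state
    psi = np.empty((T, 2), dtype=int)  # backpointers
    delta[0] = log_init + log_lik[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans  # scores[i, j]: i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_lik[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Synthetic "LFP": 10 s of noise, then 10 s with an added 5 Hz tremor-band rhythm.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(fs) / fs
segments = [0.5 * rng.standard_normal(fs) for _ in range(10)]
segments += [0.5 * rng.standard_normal(fs) + 2.0 * np.sin(2 * np.pi * 5 * t)
             for _ in range(10)]

# One log band-power feature (4-6 Hz) per 1 s segment.
feats = np.array([np.log10(band_power(s, fs, 4, 6)) for s in segments])

# Illustrative Gaussian emissions (means are assumed, not fitted), with
# "sticky" transitions discouraging rapid state flipping between segments.
mu = np.array([2.5, 4.5])  # state 0: tremor-free rest, state 1: rest tremor
log_lik = -0.5 * (feats[:, None] - mu) ** 2
log_trans = np.log(np.array([[0.95, 0.05], [0.05, 0.95]]))
log_init = np.log(np.array([0.5, 0.5]))

path = viterbi_2state(log_lik, log_trans, log_init)
```

    With well-separated features, the sticky transition matrix suppresses spurious single-segment state flips; this temporal smoothing is one practical advantage an HMM has over applying a threshold directly to each power feature, consistent with the comparison the abstract reports.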
  • Hoedemaker, R. S., & Gordon, P. C. (2017). The onset and time course of semantic priming during rapid recognition of visual words. Journal of Experimental Psychology: Human Perception and Performance, 43(5), 881-902. doi:10.1037/xhp0000377.

    Abstract

    In 2 experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (ocular lexical decision task), participants performed a lexical decision task using eye movement responses on a sequence of 4 words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a metalinguistic judgment. For both tasks, survival analyses showed that the earliest observable effect (divergence point [DP]) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective, rather than a prospective, priming mechanism and are consistent with compound-cue models of semantic priming.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). Embodied language comprehension: Encoding-based and goal-driven processes. Journal of Experimental Psychology: General, 143(2), 914-929. doi:10.1037/a0032348.

    Abstract

    Theories of embodied language comprehension have proposed that language is understood through perceptual simulation of the sensorimotor characteristics of its meaning. Strong support for this claim requires demonstration of encoding-based activation of sensorimotor representations that is distinct from task-related or goal-driven processes. Participants in 3 eye-tracking experiments were presented with triplets of either numbers or object and animal names. In Experiment 1, participants indicated whether the size of the referent of the middle object or animal name was in between the size of the 2 outer items. In Experiment 2, the object and animal names were encoded for an immediate recognition memory task. In Experiment 3, participants completed the same comparison task of Experiment 1 for both words and numbers. During the comparison tasks, word and number decision times showed a symbolic distance effect, such that response time was inversely related to the size difference between the items. A symbolic distance effect was also observed for animal and object encoding times in cases where encoding time likely reflected some goal-driven processes as well. When semantic size was irrelevant to the task (Experiment 2), it had no effect on word encoding times. Number encoding times showed a numerical distance priming effect: Encoding time increased with numerical difference between items. Together these results suggest that while activation of numerical magnitude representations is encoding-based as well as goal-driven, activation of size information associated with words is goal-driven and does not occur automatically during encoding. This conclusion challenges strong theories of embodied cognition which claim that language comprehension consists of activation of analog sensorimotor representations irrespective of higher level processes related to context or task-specific goals.
  • Hoedemaker, R. S., Ernst, J., Meyer, A. S., & Belke, E. (2017). Language production in a shared task: Cumulative semantic interference from self- and other-produced context words. Acta Psychologica, 172, 55-63. doi:10.1016/j.actpsy.2016.11.007.

    Abstract

    This study assessed the effects of semantic context in the form of self-produced and other-produced words on subsequent language production. Pairs of participants performed a joint picture naming task, taking turns while naming a continuous series of pictures. In the single-speaker version of this paradigm, naming latencies have been found to increase for successive presentations of exemplars from the same category, a phenomenon known as Cumulative Semantic Interference (CSI). As expected, the joint-naming task showed a within-speaker CSI effect, such that naming latencies increased as a function of the number of category exemplars named previously by the participant (self-produced items). Crucially, we also observed an across-speaker CSI effect, such that naming latencies slowed as a function of the number of category members named by the participant's task partner (other-produced items). The magnitude of the across-speaker CSI effect did not vary as a function of whether or not the listening participant could see the pictures their partner was naming. The observation of across-speaker CSI suggests that the effect originates at the conceptual level of the language system, as proposed by Belke's (2013) Conceptual Accumulation account. Whereas self-produced and other-produced words both resulted in a CSI effect on naming latencies, post-experiment free recall rates were higher for self-produced than other-produced items. Together, these results suggest that both speaking and listening result in implicit learning at the conceptual level of the language system but that these effects are independent of explicit learning as indicated by item recall.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). It takes time to prime: Semantic priming in the ocular lexical decision task. Journal of Experimental Psychology: Human Perception and Performance, 40(6), 2179-2197. doi:10.1037/a0037677.

    Abstract

    Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDTs) was replaced with an eye-movement response through a sequence of 3 words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative lexical decision (LD) on each word in the triplet. In Experiment 2, LD responses were delayed until all 3 letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, but limited during text reading, as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of tau (τ), meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases in which the LD is difficult, as indicated by longer response times. Compared with the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT.
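
    The ex-Gaussian decomposition referred to in this abstract models a response-time distribution as a Gaussian (mu, sigma) convolved with an exponential (tau), so that a priming effect concentrated in tau lives in the slow tail of the distribution. As a rough illustration only (the authors' actual fitting procedure is not given here, and maximum-likelihood fitting is more common than this moment-based shortcut), the three parameters can be recovered from the first three central moments:

```python
import numpy as np

def exgauss_moments(rt):
    """Moment-based ex-Gaussian decomposition of a response-time sample.

    The ex-Gaussian is Gaussian(mu, sigma) convolved with Exponential(tau).
    Because the Gaussian is symmetric, the sample's third central moment
    comes entirely from the exponential tail and equals 2 * tau**3, which
    yields closed-form estimators for tau, sigma, and mu.
    """
    rt = np.asarray(rt, dtype=float)
    mean = rt.mean()
    var = rt.var()
    m3 = np.mean((rt - mean) ** 3)          # third central moment
    tau = np.cbrt(max(m3, 0.0) / 2.0)       # tail parameter
    sigma = np.sqrt(max(var - tau ** 2, 0.0))
    mu = mean - tau
    return mu, sigma, tau

# Simulated gaze durations (ms): Gaussian core plus exponential slow tail.
rng = np.random.default_rng(1)
rt = rng.normal(300.0, 40.0, 50_000) + rng.exponential(120.0, 50_000)
mu, sigma, tau = exgauss_moments(rt)
```

    A priming manipulation that shifts the whole distribution shows up in mu, whereas one concentrated in the slow tail, as reported above, shows up in tau.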
  • Hoedemaker, R. S., & Meyer, A. S. (2019). Planning and coordination of utterances in a joint naming task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(4), 732-752. doi:10.1037/xlm0000603.

    Abstract

    Dialogue requires speakers to coordinate. According to the model of dialogue as joint action, interlocutors achieve this coordination by corepresenting their own and each other’s task share in a functionally equivalent manner. In two experiments, we investigated this corepresentation account using an interactive joint naming task in which pairs of participants took turns naming sets of objects on a shared display. Speaker A named the first, or the first and third object, and Speaker B named the second object. In control conditions, Speaker A named one, two, or all three objects and Speaker B remained silent. We recorded the timing of the speakers’ utterances and Speaker A’s eye movements. Interturn pause durations indicated that the speakers effectively coordinated their utterances in time. Speaker A’s speech onset latencies depended on the number of objects they named, but were unaffected by Speaker B’s naming task. This suggests speakers were not fully incorporating their partner’s task into their own speech planning. Moreover, Speaker A’s eye movements indicated that they were much less likely to attend to objects their partner named than to objects they named themselves. When speakers did inspect their partner’s objects, viewing times were too short to suggest that speakers were retrieving these object names as if they were planning to name the objects themselves. These results indicate that speakers prioritized planning their own responses over attending to their interlocutor’s task and suggest that effective coordination can be achieved without full corepresentation of the partner’s task.
  • Hoey, E. (2017). [Review of the book Temporality in Interaction]. Studies in Language, 41(1), 232-238. doi:10.1075/sl.41.1.08hoe.
  • Hoey, E. (2017). Sequence recompletion: A practice for managing lapses in conversation. Journal of Pragmatics, 109, 47-63. doi:10.1016/j.pragma.2016.12.008.

    Abstract

    Conversational interaction occasionally lapses as topics become exhausted or as participants are left with no obvious thing to talk about next. In this article I look at episodes of ordinary conversation to examine how participants resolve issues of speakership and sequentiality in lapse environments. In particular, I examine one recurrent phenomenon, sequence recompletion, whereby participants bring to completion a sequence of talk that was already treated as complete. Using conversation analysis, I describe four methods for sequence recompletion: turn-exiting, action redoings, delayed replies, and post-sequence transitions. With this practice, participants use verbal and vocal resources to locally manage their participation framework when ending one course of action and potentially starting up a new one.
  • Hoey, E. (2014). Sighing in interaction: Somatic, semiotic, and social. Research on Language and Social Interaction, 47(2), 175-200. doi:10.1080/08351813.2014.900229.

    Abstract

    Participants in interaction routinely orient to gaze, bodily comportment, and nonlexical vocalizations as salient for developing an analysis of the unfolding course of action. In this article, I address the respiratory phenomenon of sighing, the aim being to describe sighing as a situated practice that contributes to the achievement of particular actions in interaction. I report on the various actions sighs implement or construct and how their positioning and delivery informs participants’ understandings of their significance for interaction. Data are in American English.
  • Hogan-Brown, A. L., Hoedemaker, R. S., Gordon, P. C., & Losh, M. (2014). Eye-voice span during rapid automatized naming: Evidence of reduced automaticity in individuals with autism spectrum disorder and their siblings. Journal of Neurodevelopmental Disorders, 6(1): 33. doi:10.1186/1866-1955-6-33.

    Abstract

    Background: Individuals with autism spectrum disorder (ASD) and their parents demonstrate impaired performance in rapid automatized naming (RAN), a task that recruits a variety of linguistic and executive processes. Though the basic processes that contribute to RAN differences remain unclear, eye-voice relationships, as measured through eye tracking, can provide insight into cognitive and perceptual processes contributing to RAN performance. For example, in RAN, eye-voice span (EVS), the distance ahead the eyes are when articulation of a target item's label begins, is an indirect measure of automaticity of the processes underlying RAN. The primary objective of this study was to investigate automaticity in naming processes, as indexed by EVS during RAN. The secondary objective was to characterize RAN difficulties in individuals with ASD and their siblings. Methods: Participants (aged 15 – 33 years) included 21 individuals with ASD, 23 siblings of individuals with ASD, and 24 control subjects, group-matched on chronological age. Naming time, frequency of errors, and EVS were measured during a RAN task and compared across groups. Results: A stepwise pattern of RAN performance was observed, with individuals with ASD demonstrating the slowest naming across all RAN conditions, controls demonstrating the fastest naming, and siblings demonstrating intermediate performance. Individuals with ASD exhibited smaller EVSs than controls on all RAN conditions, and siblings exhibited smaller EVSs during number naming (the most highly automatized type of naming). EVSs were correlated with naming times in controls only, and only in the more automatized conditions. Conclusions: These results suggest that reduced automaticity in the component processes of RAN may underpin differences in individuals with ASD and their siblings. These findings also provide further support that RAN abilities are impacted by genetic liability to ASD. This study has important implications for understanding the underlying skills contributing to language-related deficits in ASD.
  • Holler, J., & Levinson, S. C. (2019). Multimodal language processing in human communication. Trends in Cognitive Sciences, 23(8), 639-652. doi:10.1016/j.tics.2019.05.006.

    Abstract

    Multiple layers of visual (and vocal) signals, plus their different onsets and offsets, represent a significant semantic and temporal binding problem during face-to-face conversation. Despite this complex unification process, multimodal messages appear to be processed faster than unimodal messages.

    Multimodal gestalt recognition and multilevel prediction are proposed to play a crucial role in facilitating multimodal language processing.

    The basis of the processing mechanisms involved in multimodal language comprehension is hypothesized to be domain general, coopted for communication, and refined with domain-specific characteristics.

    A new, situated framework for understanding human language processing is called for, one that takes into consideration the multilayered, multimodal nature of language and its production and comprehension in conversational interaction, which requires fast processing.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Ozyurek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.
  • Hömke, P., Holler, J., & Levinson, S. C. (2017). Eye blinking as addressee feedback in face-to-face conversation. Research on Language and Social Interaction, 50, 54-70. doi:10.1080/08351813.2017.1262143.

    Abstract

    Does blinking function as a type of feedback in conversation? To address this question, we built a corpus of Dutch conversations, identified short and long addressee blinks during extended turns, and measured their occurrence relative to the end of turn constructional units (TCUs), the location where feedback typically occurs. Addressee blinks were indeed timed to the end of TCUs. Also, long blinks were more likely than short blinks to occur during mutual gaze, with nods or continuers, and their occurrence was restricted to sequential contexts in which signaling understanding was particularly relevant, suggesting a special signaling capacity of long blinks.
  • Hoogman, M., Guadalupe, T., Zwiers, M. P., Klarenbeek, P., Francks, C., & Fisher, S. E. (2014). Assessing the effects of common variation in the FOXP2 gene on human brain structure. Frontiers in Human Neuroscience, 8: 473. doi:10.3389/fnhum.2014.00473.

    Abstract

    The FOXP2 transcription factor is one of the most well-known genes to have been implicated in developmental speech and language disorders. Rare mutations disrupting the function of this gene have been described in different families and cases. In a large three-generation family carrying a missense mutation, neuroimaging studies revealed significant effects on brain structure and function, most notably in the inferior frontal gyrus, caudate nucleus and cerebellum. After the identification of rare disruptive FOXP2 variants impacting on brain structure, several reports proposed that common variants at this locus may also have detectable effects on the brain, extending beyond disorder into normal phenotypic variation. These neuroimaging genetics studies used groups of between 14 and 96 participants. The current study assessed effects of common FOXP2 variants on neuroanatomy using voxel-based morphometry and volumetric techniques in a sample of >1300 people from the general population. In a first targeted stage we analyzed single nucleotide polymorphisms (SNPs) claimed to have effects in prior smaller studies (rs2253478, rs12533005, rs2396753, rs6980093, rs7784315, rs17137124, rs10230558, rs7782412, rs1456031), beginning with regions proposed in the relevant papers, then assessing impact across the entire brain. In the second gene-wide stage, we tested all common FOXP2 variation, focusing on volumetry of those regions most strongly implicated from analyses of rare disruptive mutations. Despite using a sample that is more than ten times that used for prior studies of common FOXP2 variation, we found no evidence for effects of SNPs on variability in neuroanatomy in the general population. Thus, the impact of this gene on brain structure may be largely limited to extreme cases of rare disruptive alleles. Alternatively, effects of common variants at this gene exist but are too subtle to be detected with standard volumetric techniques.
  • Horemans, I., & Schiller, N. O. (2004). Form-priming effects in nonword naming. Brain and Language, 90(1-3), 465-469. doi:10.1016/S0093-934X(03)00457-7.

    Abstract

    Form-priming effects from sublexical (syllabic or segmental) primes in masked priming can be accounted for in two ways. One is the sublexical pre-activation view according to which segments are pre-activated by the prime, and at the time the form-related target is to be produced, retrieval/assembly of those pre-activated segments is faster compared to an unrelated situation. However, it has also been argued that form-priming effects from sublexical primes might be due to lexical pre-activation. When the sublexical prime is presented, it activates all form-related words (i.e., cohorts) in the lexicon, necessarily including the form-related target, which—as a consequence—is produced faster than in the unrelated case. Note, however, that this lexical pre-activation account makes previous pre-lexical activation of segments necessary. This study reports a nonword naming experiment to investigate whether or not sublexical pre-activation is involved in masked form priming with sublexical primes. The results demonstrated a priming effect suggesting a nonlexical effect. However, this does not exclude an additional lexical component in form priming.
  • Hörpel, S. G., & Firzlaff, U. (2019). Processing of fast amplitude modulations in bat auditory cortex matches communication call-specific sound features. Journal of Neurophysiology, 121(4), 1501-1512. doi:10.1152/jn.00748.2018.
  • Houston, D. M., Jusczyk, P. W., Kuijpers, C., Coolen, R., & Cutler, A. (2000). Cross-language word segmentation by 9-month-olds. Psychonomic Bulletin & Review, 7, 504-509.

    Abstract

    Dutch-learning and English-learning 9-month-olds were tested, using the Headturn Preference Procedure, for their ability to segment Dutch words with strong/weak stress patterns from fluent Dutch speech. This prosodic pattern is highly typical for words of both languages. The infants were familiarized with pairs of words and then tested on four passages, two that included the familiarized words and two that did not. Both the Dutch- and the English-learning infants gave evidence of segmenting the targets from the passages, to an equivalent degree. Thus, English-learning infants are able to extract words from fluent speech in a language that is phonetically different from English. We discuss the possibility that this cross-language segmentation ability is aided by the similarity of the typical rhythmic structure of Dutch and English words.
  • Howe, L., Lawson, D. J., Davies, N. M., St Pourcain, B., Lewis, S. J., Smith, G. D., & Hemani, G. (2019). Genetic evidence for assortative mating on alcohol consumption in the UK Biobank. Nature Communications, 10: 5039. doi:10.1038/s41467-019-12424-x.

    Abstract

    Alcohol use is correlated within spouse-pairs, but it is difficult to disentangle effects of alcohol consumption on mate-selection from social factors or the shared spousal environment. We hypothesised that genetic variants related to alcohol consumption may, via their effect on alcohol behaviour, influence mate selection. Here, we find strong evidence that an individual’s self-reported alcohol consumption and their genotype at rs1229984, a missense variant in ADH1B, are associated with their partner’s self-reported alcohol use. Applying Mendelian randomization, we estimate that a unit increase in an individual’s weekly alcohol consumption increases partner’s alcohol consumption by 0.26 units (95% C.I. 0.15, 0.38; P = 8.20 × 10⁻⁶). Furthermore, we find evidence of spousal genotypic concordance for rs1229984, suggesting that spousal concordance for alcohol consumption existed prior to cohabitation. Although the SNP is strongly associated with ancestry, our results suggest some concordance independent of population stratification. Our findings suggest that alcohol behaviour directly influences mate selection.
  • Howe, L. J., Richardson, T. G., Arathimos, R., Alvizi, L., Passos-Bueno, M. R., Stanier, P., Nohr, E., Ludwig, K. U., Mangold, E., Knapp, M., Stergiakouli, E., St Pourcain, B., Smith, G. D., Sandy, J., Relton, C. L., Lewis, S. J., Hemani, G., & Sharp, G. C. (2019). Evidence for DNA methylation mediating genetic liability to non-syndromic cleft lip/palate. Epigenomics, 11(2), 133-145. doi:10.2217/epi-2018-0091.

    Abstract

    Aim: To determine if nonsyndromic cleft lip with or without cleft palate (nsCL/P) genetic risk variants influence liability to nsCL/P through gene regulation pathways, such as those involving DNA methylation. Materials & methods: nsCL/P genetic summary data and methylation data from four studies were used in conjunction with Mendelian randomization and joint likelihood mapping to investigate potential mediation of nsCL/P genetic variants. Results & conclusion: Evidence was found at VAX1 (10q25.3), LOC146880 (17q23.3) and NTN1 (17p13.1), that liability to nsCL/P and variation in DNA methylation might be driven by the same genetic variant, suggesting that genetic variation at these loci may increase liability to nsCL/P by influencing DNA methylation. Follow-up analyses using different tissues and gene expression data provided further insight into possible biological mechanisms.

    Additional information

    Supplementary material
  • Hoymann, G. (2014). [Review of the book Bridging the language gap, Approaches to Herero verbal interaction as development practice in Namibia by Rose Marie Beck]. Journal of African languages and linguistics, 35(1), 130-133. doi:10.1515/jall-2014-0004.
  • Hoymann, G. (2004). [Review of the book Botswana: The future of the minority languages ed. by Herman M. Batibo and Birgit Smieja]. Journal of African Languages and Linguistics, 25(2), 171-173. doi:10.1515/jall.2004.25.2.171.
  • Hubbard, R. J., Rommers, J., Jacobs, C. L., & Federmeier, K. D. (2019). Downstream behavioral and electrophysiological consequences of word prediction on recognition memory. Frontiers in Human Neuroscience, 13: 291. doi:10.3389/fnhum.2019.00291.

    Abstract

    When people process language, they can use context to predict upcoming information, influencing processing and comprehension as seen in both behavioral and neural measures. Although numerous studies have shown immediate facilitative effects of confirmed predictions, the downstream consequences of prediction have been less explored. In the current study, we examined those consequences by probing participants’ recognition memory for words after they read sets of sentences. Participants read strongly and weakly constraining sentences with expected or unexpected endings (“I added my name to the list/basket”), and later were tested on their memory for the sentence endings while EEG was recorded. Critically, the memory test contained words that were predictable (“list”) but were never read (participants saw “basket”). Behaviorally, participants showed successful discrimination between old and new items, but false alarmed to the expected-item lures more often than to new items, showing that predicted words or concepts can linger, even when predictions are disconfirmed. Although false alarm rates did not differ by constraint, event-related potentials (ERPs) differed between false alarms to strongly and weakly predictable words. Additionally, previously unexpected (compared to previously expected) endings that appeared on the memory test elicited larger N1 and LPC amplitudes, suggesting greater attention and episodic recollection. In contrast, highly predictable sentence endings that had been read elicited reduced LPC amplitudes during the memory test. Thus, prediction can facilitate processing in the moment, but can also lead to false memory and reduced recollection for predictable information.
  • Hubers, F., Cucchiarini, C., Strik, H., & Dijkstra, T. (2019). Normative data of Dutch idiomatic expressions: Subjective judgments you can bank on. Frontiers in Psychology, 10: 1075. doi:10.3389/fpsyg.2019.01075.

    Abstract

    The processing of idiomatic expressions is a topical issue in empirical research. Various factors have been found to influence idiom processing, such as idiom familiarity and idiom transparency. Information on these variables is usually obtained through norming studies. Studies investigating the effect of various properties on idiom processing have led to ambiguous results. This may be due to the variability of operationalizations of the idiom properties across norming studies, which in turn may affect the reliability of the subjective judgments. However, not all studies that collected normative data on idiomatic expressions investigated their reliability, and studies that did address the reliability of subjective ratings used various measures and produced mixed results. In this study, we investigated the reliability of subjective judgments, the relation between subjective and objective idiom frequency, and the impact of these dimensions on the participants’ idiom knowledge by collecting normative data of five subjective idiom properties (Frequency of Exposure, Meaning Familiarity, Frequency of Usage, Transparency, and Imageability) from 390 native speakers and objective corpus frequency for 374 Dutch idiomatic expressions. For reliability, we compared measures calculated in previous studies, with the D-coefficient, a metric taken from Generalizability Theory. High reliability was found for all subjective dimensions. One reliability metric, Krippendorff’s alpha, generally produced lower values, while similar values were obtained for three other measures (Cronbach’s alpha, Intraclass Correlation Coefficient, and the D-coefficient). Advantages of the D-coefficient are that it can be applied to unbalanced research designs, and to estimate the minimum number of raters required to obtain reliable ratings. Slightly higher coefficients were observed for so-called experience-based dimensions (Frequency of Exposure, Meaning Familiarity, and Frequency of Usage) than for content-based dimensions (Transparency and Imageability). In addition, fewer raters were required to obtain reliable ratings for the experience-based dimensions. Subjective and objective frequency appeared to be poorly correlated, while all subjective idiom properties and objective frequency turned out to affect idiom knowledge. Meaning Familiarity, Subjective and Objective Frequency of Exposure, Frequency of Usage, and Transparency positively contributed to idiom knowledge, while a negative effect was found for Imageability. We discuss these relationships in more detail, and give methodological recommendations with respect to the procedures and the measure to calculate reliability.

    Additional information

    Supplementary material
  • Huettig, F., & Pickering, M. (2019). Literacy advantages beyond reading: Prediction of spoken language. Trends in Cognitive Sciences, 23(6), 464-475. doi:10.1016/j.tics.2019.03.008.

    Abstract

    Literacy has many obvious benefits—it exposes the reader to a wealth of new information and enhances syntactic knowledge. However, we argue that literacy has an additional, often overlooked, benefit: it enhances people’s ability to predict spoken language, thereby aiding comprehension. Readers are under pressure to process information more quickly than listeners, and reading provides excellent conditions, in particular a stable environment, for training the predictive system. It also leads to increased awareness of words as linguistic units, and more fine-grained phonological and additional orthographic representations, which sharpen lexical representations and facilitate the retrieval of predicted representations. Thus, reading trains core processes and representations involved in language prediction that are common to both reading and listening.
  • Huettig, F., & Guerra, E. (2019). Effects of speech rate, preview time of visual context, and participant instructions reveal strong limits on prediction in language processing. Brain Research, 1706, 196-208. doi:10.1016/j.brainres.2018.11.013.

    Abstract

    There is a consensus among language researchers that people can predict upcoming language. But do people always predict when comprehending language? Notions that “brains … are essentially prediction machines” certainly suggest so. In three eye-tracking experiments we tested this view. Participants listened to simple Dutch sentences (‘Look at the displayed bicycle’) while viewing four objects (a target, e.g. a bicycle, and three unrelated distractors). We used the identical visual stimuli and the same spoken sentences but varied speech rates, preview time, and participant instructions. Target nouns were preceded by definite gender-marked determiners, which allowed participants to predict the target object because only the targets but not the distractors agreed in gender with the determiner. In Experiment 1, participants had four seconds preview and sentences were presented either in a slow or a normal speech rate. Participants predicted the targets as soon as they heard the determiner in both conditions. Experiment 2 was identical except that participants were given only a one second preview. Participants predicted the targets only in the slow speech condition. Experiment 3 was identical to Experiment 2 except that participants were explicitly told to predict. This led only to a small prediction effect in the normal speech condition. Thus, a normal speech rate only afforded prediction if participants had an extensive preview. Even the explicit instruction to predict the target resulted in only a small anticipation effect with a normal speech rate and a short preview. These findings are problematic for theoretical proposals that assume that prediction pervades cognition.
  • Huettig, F., Mishra, R. K., & Padakannaya, P. (2017). Editorial. Journal of Cultural Cognitive Science, 1(1), 1. doi:10.1007/s41809-017-0006-2.
  • Huettig, F., & Mishra, R. K. (2014). How literacy acquisition affects the illiterate mind - A critical examination of theories and evidence. Language and Linguistics Compass, 8(10), 401-427. doi:10.1111/lnc3.12092.

    Abstract

    At present, more than one-fifth of humanity is unable to read and write. We critically examine experimental evidence and theories of how (il)literacy affects the human mind. In our discussion we show that literacy has significant cognitive consequences that go beyond the processing of written words and sentences. Thus, cultural inventions such as reading shape general cognitive processing in non-trivial ways. We suggest that this has important implications for educational policy and guidance as well as research into cognitive processing and brain functioning.
  • Huisman, J. L. A., Majid, A., & Van Hout, R. (2019). The geographical configuration of a language area influences linguistic diversity. PLoS One, 14(6): e0217363. doi:10.1371/journal.pone.0217363.

    Abstract

    Like the transfer of genetic variation through gene flow, language changes constantly as a result of its use in human interaction. Contact between speakers is most likely to happen when they are close in space, time, and social setting. Here, we investigated the role of geographical configuration in this process by studying linguistic diversity in Japan, which comprises a large connected mainland (less isolation, more potential contact) and smaller island clusters of the Ryukyuan archipelago (more isolation, less potential contact). We quantified linguistic diversity using dialectometric methods, and performed regression analyses to assess the extent to which distance in space and time predict contemporary linguistic diversity. We found that language diversity in general increases as geographic distance increases and as time passes—as with biodiversity. Moreover, we found that (I) for mainland languages, linguistic diversity is most strongly related to geographic distance—a so-called isolation-by-distance pattern, and that (II) for island languages, linguistic diversity reflects the time since varieties separated and diverged—an isolation-by-colonisation pattern. Together, these results confirm previous findings that (linguistic) diversity is shaped by distance, but also go beyond this by demonstrating the critical role of geographic configuration.
  • Hulten, A., Schoffelen, J.-M., Udden, J., Lam, N. H. L., & Hagoort, P. (2019). How the brain makes sense beyond the processing of single words – An MEG study. NeuroImage, 186, 586-594. doi:10.1016/j.neuroimage.2018.11.035.

    Abstract

    Human language processing involves combinatorial operations that make human communication stand out in the animal kingdom. These operations rely on a dynamic interplay between the inferior frontal and the posterior temporal cortices. Using source reconstructed magnetoencephalography, we tracked language processing in the brain, in order to investigate how individual words are interpreted when part of sentence context. The large sample size in this study (n = 68) allowed us to assess how event-related activity is associated across distinct cortical areas, by means of inter-areal co-modulation within an individual. We showed that, within 500 ms of seeing a word, the word's lexical information has been retrieved and unified with the sentence context. This does not happen in a strictly feed-forward manner, but by means of co-modulation between the left posterior temporal cortex (LPTC) and left inferior frontal cortex (LIFC), for each individual word. The co-modulation of LIFC and LPTC occurs around 400 ms after the onset of each word, across the progression of a sentence. Moreover, these core language areas are supported early on by the attentional network. The results provide a detailed description of the temporal orchestration related to single word processing in the context of ongoing language.

    Additional information

    1-s2.0-S1053811918321165-mmc1.pdf
  • Hulten, A., Karvonen, L., Laine, M., & Salmelin, R. (2014). Producing speech with a newly learned morphosyntax and vocabulary: An MEG study. Journal of Cognitive Neuroscience, 26(8), 1721-1735. doi:10.1162/jocn_a_00558.
  • Hustá, C., Dalmaijer, E., Belopolsky, A., & Mathôt, S. (2019). The pupillary light response reflects visual working memory content. Journal of Experimental Psychology: Human Perception and Performance, 45(11), 1522-1528. doi:10.1037/xhp0000689.

    Abstract

    Recent studies have shown that the pupillary light response (PLR) is modulated by higher cognitive functions, presumably through activity in visual sensory brain areas. Here we use the PLR to test the involvement of sensory areas in visual working memory (VWM). In two experiments, participants memorized either bright or dark stimuli. We found that pupils were smaller when a prestimulus cue indicated that a bright stimulus should be memorized; this reflects a covert shift of attention during encoding of items into VWM. Crucially, we obtained the same result with a poststimulus cue, which shows that internal shifts of attention within VWM affect pupil size as well. Strikingly, the effect of VWM content on pupil size was most pronounced immediately after the poststimulus cue, and then dissipated. This suggests that a shift of attention within VWM momentarily activates an "active" memory representation, but that this representation quickly transforms into a "hidden" state that does not rely on sensory areas.

    Additional information

    Supplementary_xhp0000689.docx
  • Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2019). How in-group bias influences source memory for words learned from in-group and out-group speakers. Frontiers in Human Neuroscience, 13: 308. doi:10.3389/fnhum.2019.00308.

    Abstract

    Individuals rapidly extract information about others’ social identity, including whether or not they belong to their in-group. Group membership status has been shown to affect how attentively people encode information conveyed by those others. These findings are highly relevant for the field of psycholinguistics where there exists an open debate on how words are represented in the mental lexicon and how abstract or context-specific these representations are. Here, we used a novel word learning paradigm to test our proposal that the group membership status of speakers also affects how speaker-specific representations of novel words are. Participants learned new words from speakers who either attended their own university (in-group speakers) or did not (out-group speakers) and performed a task to measure their individual in-group bias. Then, their source memory of the new words was tested in a recognition test to probe the speaker-specific content of the novel lexical representations and assess how it related to individual in-group biases. We found that speaker group membership and participants’ in-group bias affected participants’ decision biases. The stronger the in-group bias, the more cautious participants were in their decisions. This applied particularly to in-group-related decisions. These findings indicate that social biases can influence recognition thresholds. Taking a broader scope, defining how information is represented is a topic of great overlap between the fields of memory and psycholinguistics. Nevertheless, researchers from these fields tend to stay within the theoretical and methodological borders of their own field, missing the chance to deepen their understanding of phenomena that are of common interest. Here we show how methodologies developed in the memory field can be implemented in language research to shed light on an important theoretical issue that relates to the composition of lexical representations.

    Additional information

    Supplementary material
  • Iacozza, S., Costa, A., & Duñabeitia, J. A. (2017). What do your eyes reveal about your foreign language? Reading emotional sentences in a native and foreign language. PLoS One, 12(10): e0186027. doi:10.1371/journal.pone.0186027.

    Abstract

    Foreign languages are often learned in emotionally neutral academic environments which differ greatly from the familiar context where native languages are acquired. This difference in learning contexts has been argued to lead to reduced emotional resonance when confronted with a foreign language. In the current study, we investigated whether the reactivity of the sympathetic nervous system in response to emotionally-charged stimuli is reduced in a foreign language. To this end, pupil sizes were recorded while reading aloud emotional sentences in the native or foreign language. Additionally, subjective ratings of emotional impact were provided after reading each sentence, allowing us to further investigate foreign language effects on explicit emotional understanding. Pupillary responses showed a larger effect of emotion in the native than in the foreign language. However, such a difference was not present for explicit ratings of emotionality. These results reveal that the sympathetic nervous system reacts differently depending on the language context, which in turn suggests a deeper emotional processing when reading in a native compared to a foreign language.

    Additional information

    pone.0186027.s001.docx
  • Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1-2), 101-144. doi:10.1016/j.cognition.2002.06.001.

    Abstract

    This paper presents the results of a comprehensive meta-analysis of the relevant imaging literature on word production (82 experiments). In addition to the spatial overlap of activated regions, we also analyzed the available data on the time course of activations. The analysis specified regions and time windows of activation for the core processes of word production: lexical selection, phonological code retrieval, syllabification, and phonetic/articulatory preparation. A comparison of the word production results with studies on auditory word/non-word perception and reading showed that the time course of activations in word production is, on the whole, compatible with the temporal constraints that perception processes impose on the production processes they affect in picture/word interference paradigms.
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken [The neural architecture of language: Which brain areas are involved in speaking]. Neuropraxis, 2(6), 230-237.
  • Indefrey, P., Hellwig, F. M., Herzog, H., Seitz, R. J., & Hagoort, P. (2004). Neural responses to the production and comprehension of syntax in identical utterances. Brain and Language, 89(2), 312-319. doi:10.1016/S0093-934X(03)00352-3.

    Abstract

    Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment, subjects described visually presented scenes using either sentences, sequences of noun phrases, or sequences of syntactically unrelated words. In the language comprehension part of the experiment, subjects were auditorily presented with the same kinds of utterances and judged whether they matched the visual scenes. We were able to replicate the previous finding of a region in caudal Broca's area that is sensitive to the complexity of syntactic encoding in language production. In language comprehension, no hemodynamic activation differences due to syntactic complexity were found. Given that correct performance in the judgment task did not require syntactic processing of the auditory stimuli, the results suggest that the degree to which listeners recruit syntactic processing resources in language comprehension may be a function of the syntactic demands of the task or the stimulus material.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Indefrey, P., Sahin, H., & Gullberg, M. (2017). The expression of spatial relationships in Turkish-Dutch bilinguals. Bilingualism: Language and Cognition, 20(3), 473-493. doi:10.1017/S1366728915000875.

    Abstract

    We investigated how two groups of Turkish-Dutch bilinguals and two groups of monolingual speakers of the two languages described static topological relations. The bilingual groups differed with respect to their first (L1) and second (L2) language proficiencies and a number of sociolinguistic factors. Using an elicitation tool that covers a wide range of topological relations, we first assessed the extensions of different spatial expressions (topological relation markers, TRMs) in the Turkish and Dutch spoken by monolingual speakers. We then assessed differences in the use of TRMs between the two bilingual groups and monolingual speakers. In both bilingual groups, differences compared to monolingual speakers were mainly observed for Turkish. Dutch-dominant bilinguals showed enhanced congruence between translation-equivalent Turkish and Dutch TRMs. Turkish-dominant bilinguals extended the use of a topologically neutral locative marker. Our results can be interpreted as showing different bilingual optimization strategies (Muysken, 2013) in bilingual speakers who live in the same environment but differ with respect to L2 onset, L2 proficiency, and perceived importance of the L1.
  • Indefrey, P. (2014). Time course of word production does not support a parallel input architecture. Language, Cognition and Neuroscience, 29(1), 33-34. doi:10.1080/01690965.2013.847191.

    Abstract

    Hickok's enterprise to unify psycholinguistic and motor control models is highly stimulating. Nonetheless, there are problems with the model with respect to the time course of neural activation in word production, the flexibility for continuous speech, and the need for non-motor feedback.

  • Ioumpa, K., Graham, S. A., Clausner, T., Fisher, S. E., Van Lier, R., & Van Leeuwen, T. M. (2019). Enhanced self-reported affect and prosocial behaviour without differential physiological responses in mirror-sensory synaesthesia. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190395. doi:10.1098/rstb.2019.0395.

    Abstract

    Mirror-sensory synaesthetes mirror the pain or touch that they observe in other people on their own bodies. This type of synaesthesia has been associated with enhanced empathy. We investigated whether the enhanced empathy of people with mirror-sensory synaesthesia influences the experience of situations involving touch or pain and whether it affects their prosocial decision making. Mirror-sensory synaesthetes (N = 18, all female), verified with a touch-interference paradigm, were compared with a similar number of age-matched control individuals (all female). Participants viewed arousing images depicting pain or touch; we recorded subjective valence and arousal ratings, and physiological responses, hypothesizing more extreme reactions in synaesthetes. The subjective impact of positive and negative images was stronger in synaesthetes than in control participants; the stronger the reported synaesthesia, the more extreme the picture ratings. However, there was no evidence for differential physiological or hormonal responses to arousing pictures. Prosocial decision making was assessed with an economic game assessing altruism, in which participants had to divide money between themselves and a second player. Mirror-sensory synaesthetes donated more money than non-synaesthetes, showing enhanced prosocial behaviour, and also scored higher on the Interpersonal Reactivity Index as a measure of empathy. Our study demonstrates the subjective impact of mirror-sensory synaesthesia and its stimulating influence on prosocial behaviour.

  • Ischebeck, A., Indefrey, P., Usui, N., Nose, I., Hellwig, F. M., & Taira, M. (2004). Reading in a regular orthography: An fMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16(5), 727-741. doi:10.1162/089892904970708.

    Abstract

    In order to separate the cognitive processes associated with phonological encoding and the use of a visual word form lexicon in reading, it is desirable to compare the processing of words presented in a visually familiar form with words in a visually unfamiliar form. Japanese Kana orthography offers this possibility. Two phonologically equivalent but visually dissimilar syllabaries allow the writing of, for example, foreign loanwords in two ways, only one of which is visually familiar. Familiarly written words, unfamiliarly written words, and pseudowords were presented in both Kana syllabaries (yielding six conditions in total) to participants during an fMRI measurement with a silent articulation task (Experiment 1) and a phonological lexical decision task (Experiment 2) using an event-related design. Consistent over two experimental tasks, the three different stimulus types (familiar, unfamiliar, and pseudoword) were found to activate selectively different brain regions previously associated with phonological encoding and word retrieval or meaning. Compatible with the predictions of the dual-route model for reading, pseudowords and visually unfamiliar words, which have to be read using phonological assembly, caused an increase in brain activity in left inferior frontal regions (BA 44/47), as compared to visually familiar words. Visually familiar and unfamiliar words were found to activate a range of areas associated with lexico-semantic processing more strongly than pseudowords, such as the left and right temporo-parietal region (BA 39/40), a region in the left middle/inferior temporal gyrus (BA 20/21), and the posterior cingulate (BA 31).
  • Ito, A., Martin, A. E., & Nieuwland, M. S. (2017). How robust are prediction effects in language comprehension? Failure to replicate article-elicited N400 effects. Language, Cognition and Neuroscience, 32, 954-965. doi:10.1080/23273798.2016.1242761.

    Abstract

    Current psycholinguistic theory proffers prediction as a central, explanatory mechanism in language processing. However, widely-replicated prediction effects may not mean that prediction is necessary in language processing. As a case in point, C. D. Martin et al. [2013. Bilinguals reading in their second language do not predict upcoming words as native readers do. Journal of Memory and Language, 69(4), 574-588. doi:10.1016/j.jml.2013.08.001] reported ERP evidence for prediction in native- but not in non-native speakers. Articles mismatching an expected noun elicited larger negativity in the N400 time window compared to articles matching the expected noun in native speakers only. We attempted to replicate these findings, but found no evidence for prediction irrespective of language nativeness. We argue that pre-activation of the phonological form of upcoming nouns, as evidenced in article-elicited effects, may not be a robust phenomenon. A view of prediction as a necessary computation in language comprehension must be re-evaluated.
  • Ito, A., Martin, A. E., & Nieuwland, M. S. (2017). On predicting form and meaning in a second language. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(4), 635-652. doi:10.1037/xlm0000315.

    Abstract

    We used event-related potentials (ERP) to investigate whether Spanish−English bilinguals preactivate form and meaning of predictable words. Participants read high-cloze sentence contexts (e.g., “The student is going to the library to borrow a . . .”), followed by the predictable word (book), a word that was form-related (hook) or semantically related (page) to the predictable word, or an unrelated word (sofa). Word stimulus onset asynchrony (SOA) was 500 ms (Experiment 1) or 700 ms (Experiment 2). In both experiments, all nonpredictable words elicited classic N400 effects. Form-related and unrelated words elicited similar N400 effects. Semantically related words elicited smaller N400s than unrelated words, which, however, did not depend on the cloze value of the predictable word. Thus, we found no N400 evidence for preactivation of form or meaning at either SOA, unlike native-speaker results (Ito, Corley et al., 2016). However, non-native speakers did show the post-N400 posterior positivity (LPC effect) for form-related words like native speakers, but only at the slower SOA. This LPC effect increased gradually with the cloze value of the predictable word. We do not interpret this effect as necessarily demonstrating prediction, but rather as evincing combined effects of top-down activation (contextual meaning) and bottom-up activation (form similarity) that result in activation of unseen words that fit the context well, thereby leading to an interpretation conflict reflected in the LPC. Although there was no evidence that non-native speakers preactivate form or meaning, non-native speakers nonetheless appear to use bottom-up and top-down information to constrain incremental interpretation much like native speakers do.
  • Ito, A., Martin, A. E., & Nieuwland, M. S. (2017). Why the A/AN prediction effect may be hard to replicate: A rebuttal to DeLong, Urbach & Kutas (2017). Language, Cognition and Neuroscience, 32(8), 974-983. doi:10.1080/23273798.2017.1323112.
  • Iyer, S., Sam, F. S., DiPrimio, N., Preston, G., Verheijen, J., Murthy, K., Parton, Z., Tsang, H., Lao, J., Morava, E., & Perlstein, E. O. (2019). Repurposing the aldose reductase inhibitor and diabetic neuropathy drug epalrestat for the congenital disorder of glycosylation PMM2-CDG. Disease Models & Mechanisms, 12(11): dmm040584. doi:10.1242/dmm.040584.

    Abstract

    Phosphomannomutase 2 deficiency, or PMM2-CDG, is the most common congenital disorder of glycosylation and affects over 1000 patients globally. There are no approved drugs that treat the symptoms or root cause of PMM2-CDG. To identify clinically actionable compounds that boost human PMM2 enzyme function, we performed a multispecies drug repurposing screen using a novel worm model of PMM2-CDG, followed by PMM2 enzyme functional studies in PMM2-CDG patient fibroblasts. Drug repurposing candidates from this study, and drug repurposing candidates from a previously published study using yeast models of PMM2-CDG, were tested for their effect on human PMM2 enzyme activity in PMM2-CDG fibroblasts. Of the 20 repurposing candidates discovered in the worm-based phenotypic screen, 12 were plant-based polyphenols. Insights from structure-activity relationships revealed epalrestat, the only antidiabetic aldose reductase inhibitor approved for use in humans, as a first-in-class PMM2 enzyme activator. Epalrestat increased PMM2 enzymatic activity in four PMM2-CDG patient fibroblast lines with genotypes R141H/F119L, R141H/E139K, R141H/N216I and R141H/F183S. PMM2 enzyme activity gains ranged from 30% to 400% over baseline, depending on genotype. Pharmacological inhibition of aldose reductase by epalrestat may shunt glucose from the polyol pathway to glucose-1,6-bisphosphate, which is an endogenous stabilizer and coactivator of PMM2 homodimerization. Epalrestat is a safe, oral and brain-penetrant drug that was approved 27 years ago in Japan to treat diabetic neuropathy in geriatric populations. We demonstrate that epalrestat is the first small molecule activator of PMM2 enzyme activity with the potential to treat peripheral neuropathy and correct the underlying enzyme deficiency in a majority of pediatric and adult PMM2-CDG patients.

    Additional information

    DMM040584supp.pdf
  • Janse, E., & Klitsch, J. (2004). Auditieve perceptie bij gezonde sprekers en bij sprekers met verworven taalstoornissen. Afasiologie, 26(1), 2-6.
  • Janse, E. (2004). Word perception in fast speech: Artificially time-compressed vs. naturally produced fast speech. Speech Communication, 42, 155-173. doi:10.1016/j.specom.2003.07.001.

    Abstract

    Natural fast speech differs from normal-rate speech with respect to its temporal pattern. Previous results showed that word intelligibility of heavily artificially time-compressed speech could not be improved by making its temporal pattern more similar to that of natural fast speech. This might have been due to the extrapolation of timing rules for natural fast speech to rates that are much faster than can be attained by human speakers. The present study investigates whether, at a speech rate that human speakers can attain, artificially time-compressed speech is easier to process if its timing pattern is similar to that of naturally produced fast speech. Our first experiment suggests, however, that word processing speed was slowed down, relative to linear compression. In a second experiment, word processing of artificially time-compressed speech was compared with processing of naturally produced fast speech. Even when naturally produced fast speech is perfectly intelligible, its less careful articulation, combined with the changed timing pattern, slows down processing, relative to linearly time-compressed speech. Furthermore, listeners preferred artificially time-compressed speech over naturally produced fast speech. These results suggest that linearly time-compressed speech has both a temporal and a segmental advantage over natural fast speech.
  • Janse, E., & Jesse, A. (2014). Working memory affects older adults’ use of context in spoken-word recognition. Quarterly Journal of Experimental Psychology, 67, 1842-1862. doi:10.1080/17470218.2013.879391.

    Abstract

    Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate, however, older listeners’ ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether working memory predicts older adults’ ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) mainly affected the speed of recognition, with only a marginal effect on detection accuracy. Contextual facilitation was modulated by older listeners’ working memory and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners’ immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.

  • Jansma, B. M., & Schiller, N. O. (2004). Monitoring syllable boundaries during speech production. Brain and Language, 90(1-3), 311-317. doi:10.1016/S0093-934X(03)00443-7.

    Abstract

    This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No 'canoe' vs. kaN.sel 'pulpit'; capital letters indicate pivotal consonants, dots mark syllable boundaries). First-syllable responses were faster than second-syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.
  • Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2004). Stem complexity and inflectional encoding in language production. Journal of Psycholinguistic Research, 33(5), 365-381. doi:10.1023/B:JOPR.0000039546.60121.a8.

    Abstract

    Three experiments are reported that examined whether stem complexity plays a role in inflecting polymorphemic words in language production. Experiment 1 showed that preparation effects for words with polymorphemic stems are larger when they are produced among words with constant inflectional structures compared to words with variable inflectional structures and simple stems. This replicates earlier findings for words with monomorphemic stems (Janssen et al., 2002). Experiments 2 and 3 showed that when inflectional structure is held constant, the preparation effects are equally large with simple and compound stems, and with compound and complex adjectival stems. These results indicate that inflectional encoding is blind to the complexity of the stem, which suggests that specific inflectional rather than generic morphological frames guide the generation of inflected forms in speaking words.
  • Janssen, C., Segers, E., McQueen, J. M., & Verhoeven, L. (2019). Comparing effects of instruction on word meaning and word form on early literacy abilities in kindergarten. Early Education and Development, 30(3), 375-399. doi:10.1080/10409289.2018.1547563.

    Abstract

    Research Findings: The present study compared effects of explicit instruction on and practice with the phonological form of words (form-focused instruction) versus explicit instruction on and practice with the meaning of words (meaning-focused instruction). Instruction was given via interactive storybook reading in the kindergarten classroom of children learning Dutch. We asked whether the 2 types of instruction had different effects on vocabulary development and 2 precursors of reading ability—phonological awareness and letter knowledge—and we examined effects on these measures of the ability to learn new words with minimal acoustic-phonetic differences. Learners showed similar receptive target-word vocabulary gain after both types of instruction, but learners who received form-focused vocabulary instruction showed more gain in semantic knowledge of target vocabulary, phonological awareness, and letter knowledge than learners who received meaning-focused vocabulary instruction. Level of ability to learn pairs of words with minimal acoustic-phonetic differences predicted gain in semantic knowledge of target vocabulary and in letter knowledge in the form-focused instruction group only. Practice or Policy: A focus on the form of words during instruction appears to have benefits for young children learning vocabulary.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2019). The effects of larynx height on vowel production are mitigated by the active control of articulators. Journal of Phonetics, 74, 1-17. doi:10.1016/j.wocn.2019.02.002.

    Abstract

    The influence of larynx position on vowel articulation is an important topic in understanding speech production, the present-day distribution of linguistic diversity and the evolution of speech and language in our lineage. We introduce here a realistic computer model of the vocal tract, constructed from actual human MRI data, which can learn, using machine learning techniques, to control the articulators in such a way as to produce speech sounds matching as closely as possible to a given set of target vowels. We systematically control the vertical position of the larynx and we quantify the differences between the target and produced vowels for each such position across multiple replications. We report that, indeed, larynx height does affect the accuracy of reproducing the target vowels and the distinctness of the produced vowel system, that there is a “sweet spot” of larynx positions that are optimal for vowel production, but that nevertheless, even extreme larynx positions do not result in a collapsed or heavily distorted vowel space that would make speech unintelligible. Together with other lines of evidence, our results support the view that the vowel space of human languages is influenced by our larynx position, but that other positions of the larynx may also be fully compatible with speech.

    Additional information

    Research Data via Github
  • Janssen, C., Segers, E., McQueen, J. M., & Verhoeven, L. (2017). Transfer from implicit to explicit phonological abilities in first and second language learners. Bilingualism: Language and Cognition, 20(4), 795-812. doi:10.1017/S1366728916000523.

    Abstract

    Children's abilities to process the phonological structure of words are important predictors of their literacy development. In the current study, we examined the interrelatedness between implicit (i.e., speech decoding) and explicit (i.e., phonological awareness) phonological abilities, and especially the role therein of lexical specificity (i.e., the ability to learn to recognize spoken words based on only minimal acoustic-phonetic differences). We tested 75 Dutch monolingual and 64 Turkish–Dutch bilingual kindergartners. SEM analyses showed that speech decoding predicted lexical specificity, which in turn predicted rhyme awareness in the first language learners but phoneme awareness in the second language learners. Moreover, in the latter group there was an impact of the second language: Dutch speech decoding and lexical specificity predicted Turkish phonological awareness, which in turn predicted Dutch phonological awareness. We conclude that language-specific phonological characteristics underlie different patterns of transfer from implicit to explicit phonological abilities in first and second language learners.
  • Janzen, G., & Van Turennout, M. (2004). Selective neural representation of objects relevant for navigation. Nature Neuroscience, 7(6), 673-677. doi:10.1038/nn1257.

    Abstract

    As people find their way through their environment, objects at navigationally relevant locations can serve as crucial landmarks. The parahippocampal gyrus has previously been shown to be involved in object and scene recognition. In the present study, we investigated the neural representation of navigationally relevant locations. Healthy human adults viewed a route through a virtual museum with objects placed at intersections (decision points) or at simple turns (non-decision points). Event-related functional magnetic resonance imaging (fMRI) data were acquired during subsequent recognition of the objects in isolation. Neural activity in the parahippocampal gyrus reflected the navigational relevance of an object's location in the museum. Parahippocampal responses were selectively increased for objects that occurred at decision points, independent of attentional demands. This increase occurred for forgotten as well as remembered objects, showing implicit retrieval of navigational information. The automatic storage of relevant object location in the parahippocampal gyrus provides a part of the neural mechanism underlying successful navigation.
  • Jesse, A., & McQueen, J. M. (2014). Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition. Quarterly Journal of Experimental Psychology, 67, 793-808. doi:10.1080/17470218.2013.834371.

    Abstract

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.
  • Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115. doi:10.1075/intp.5.2.04jes.

    Abstract

    Language processing is influenced by multiple sources of information. We examined whether performance in simultaneous interpreting would be improved when providing two sources of information, the auditory speech as well as corresponding lip movements, in comparison to presenting the auditory speech alone. Although there was an improvement in sentence recognition when presented with visible speech, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visual speech did not contribute to performance could be the presentation of the auditory signal without noise (Massaro, 1998). This hypothesis should be tested in the future. Furthermore, it should be investigated whether an effect of visible speech can be found in other contexts, where visual information could provide cues to emotions, prosody, or syntax.
  • Jones, G., & Rowland, C. F. (2017). Diversity not quantity in caregiver speech: Using computational modeling to isolate the effects of the quantity and the diversity of the input on vocabulary growth. Cognitive Psychology, 98, 1-21. doi:10.1016/j.cogpsych.2017.07.002.

    Abstract

    Children who hear large amounts of diverse speech learn language more quickly than children who do not. However, high correlations between the amount and the diversity of the input in speech samples make it difficult to isolate the influence of each. We overcame this problem by controlling the input to a computational model so that amount of exposure to linguistic input (quantity) and the quality of that input (lexical diversity) were independently manipulated. Sublexical, lexical, and multi-word knowledge were charted across development (Study 1), showing that while input quantity may be important early in learning, lexical diversity is ultimately more crucial, a prediction confirmed against children’s data (Study 2). The model trained on a lexically diverse input also performed better on nonword repetition and sentence recall tests (Study 3) and was quicker to learn new words over time (Study 4). A language input that is rich in lexical diversity outperforms equivalent richness in quantity for learned sublexical and lexical knowledge, for well-established language tests, and for acquiring words that have never been encountered before.
  • Jongman, S. R. (2017). Sustained attention ability affects simple picture naming. Collabra: Psychology, 3(1): 17. doi:10.1525/collabra.84.

    Abstract

    Sustained attention has previously been shown to be a requirement for language production. However, this is mostly evident for difficult conditions, such as a dual-task situation. The current study provides corroborating evidence that this relationship holds even for simple picture naming. Sustained attention ability, indexed both by participants’ reaction times and individuals’ hit rate (the proportion of correctly detected targets) on a digit discrimination task, correlated with picture naming latencies. Individuals with poor sustained attention were consistently slower and their RT distributions were more positively skewed when naming pictures compared to individuals with better sustained attention. Additionally, the need to sustain attention was manipulated by changing the speed of stimulus presentation. Research has suggested that fast event rates tax sustained attention resources to a larger degree than slow event rates. However, in this study the fast event rate did not result in increased difficulty, neither for the picture naming task nor for the sustained attention task. Instead, the results point to a speed-accuracy trade-off in the sustained attention task (lower accuracy but faster responses in the fast than in the slow event rate), and to a benefit for faster rates in the picture naming task (shorter naming latencies with no difference in accuracy). Performance on both tasks was largely comparable, supporting previous findings that sustained attention is called upon during language production.
  • Jongman, S. R., Roelofs, A., Scheper, A., & Meyer, A. S. (2017). Picture naming in typically developing and language impaired children: The role of sustained attention. International Journal of Language & Communication Disorders, 52(3), 323-333. doi:10.1111/1460-6984.12275.

    Abstract

    Children with specific language impairment (SLI) have problems not only with language performance but also with sustained attention, which is the ability to maintain alertness over an extended period of time. Although there is consensus that this ability is impaired with respect to processing stimuli in the auditory perceptual modality, conflicting evidence exists concerning the visual modality.
    Aims

    To address the outstanding issue whether the impairment in sustained attention is limited to the auditory domain, or if it is domain-general. Furthermore, to test whether children's sustained attention ability relates to their word-production skills.
    Methods & Procedures

    Groups of 7–9 year olds with SLI (N = 28) and typically developing (TD) children (N = 22) performed a picture-naming task and two sustained attention tasks, namely auditory and visual continuous performance tasks (CPTs).
    Outcomes & Results

    Children with SLI performed worse than TD children on picture naming and on both the auditory and visual CPTs. Moreover, performance on both the CPTs correlated with picture-naming latencies across developmental groups.
    Conclusions & Implications

    These results provide evidence for a deficit in both auditory and visual sustained attention in children with SLI. Moreover, the study indicates there is a relationship between domain-general sustained attention and picture-naming performance in both TD and language-impaired children. Future studies should establish whether this relationship is causal. If attention influences language, training of sustained attention may improve language production in children from both developmental groups.
  • Jongman, S. R., & Meyer, A. S. (2017). To plan or not to plan: Does planning for production remove facilitation from associative priming? Acta Psychologica, 181, 40-50. doi:10.1016/j.actpsy.2017.10.003.

    Abstract

    Theories of conversation propose that in order to have smooth transitions from one turn to the next, speakers already plan their response while listening to their interlocutor. Moreover, it has been argued that speakers align their linguistic representations (i.e. prime each other), thereby reducing the processing costs associated with concurrent listening and speaking. In two experiments, we assessed how identity and associative priming from spoken words onto picture naming were affected by a concurrent speech planning task. In a baseline (no name) condition, participants heard prime words that were identical, associatively related, or unrelated to target pictures presented two seconds after prime onset. Each prime was accompanied by a non-target picture and followed by its recorded name. The participant did not name the non-target picture. In the plan condition, the participants first named the non-target picture, instead of listening to the recording, and then the target. In Experiment 1, where the plan and no-plan conditions were tested between participants, priming effects of equal strength were found in the plan and no-plan condition. In Experiment 2, where the two conditions were tested within participants, the identity priming effect was maintained, but the associative priming effect was only seen in the no-plan but not in the plan condition. In this experiment, participants had to decide at the onset of each trial whether or not to name the non-target picture, rendering the task more complex than in Experiment 1. These decision processes may have interfered with the processing of the primes. Thus, associative priming can take place during speech planning, but only if the cognitive load is not too high.
  • Jordens, P. (2004). Systematiek en dynamiek bij de verwerving van Finietheid. Toegepaste Taalwetenschap in Artikelen, 71, 9-22.

    Abstract

    In early Dutch learner varieties, there is no evidence of finiteness being a functional category. There is no V2nd: no correlation between inflectional morphology and movement. Initially, learners express the illocutive function of finiteness through the use of illocutive markers, with the non-use of an illocutive marker expressing the default illocutive function of assertion. Illocutive markers are functioning as adjuncts with scope over the predicate. Illocutive markers become re-analysed as functional elements. The driving force is the acquisition of the auxiliary verbs that occur with past participles. It leads to a reanalysis of illocutive markers as two separate elements: an auxiliary verb and a scope adverb. The (modal) auxiliary carries illocutive function. Lexical verb-argument structure (including the external argument) occurs within the domain of the auxiliary verb. The predicate as the focus constituent occurs within the domain of a scope adverb. This reanalysis establishes a position for the external argument within the domain of AUX. The acquisition of AUX causes the acquisition of a (hierarchical) structure with a complement as a constituent which represents an underlying verb-argument structure, a predicate as the domain of elements that are in focus, and an external (specifier) position as a landing site for elements with topic function.
  • Jordens, P., & Bittner, D. (2017). Developing interlanguage: Driving forces in children learning Dutch and German. IRAL, 55(4), 365-392. doi:10.1515/iral-2017-0147.

    Abstract

    Spontaneous language learning, both in children learning their mother tongue and in adults learning a second language, shows that language development proceeds in a stage-wise manner. Given that a developmental stage is defined as a coherent linguistic system, utterances of language learners can be accounted for in terms of what Selinker (1972, Interlanguage, International Review of Applied Linguistics 10, 209-231) referred to as Interlanguage. This paper is a study of the early interlanguage systems of children learning Dutch and German as their mother tongue. These child learner systems, so it is claimed, are coherent lexical systems based on types of verb-argument structure that are either agentive (as in Dutch: kannie bal pakke 'cannot ball get', or German: mag nich nase putzen 'like not nose clean') or non-agentive (as in Dutch: popje valt bijna 'doll falls nearly', or in German: ente fällt 'duck falls'). At this lexical stage, functional morphology (e.g. morphological finiteness, tense), function words (e.g. auxiliary verbs, determiners) and word order variation are absent. For these typically developing children, both in Dutch and in German, it is claimed that developmental progress is driven by the acquisition of the formal properties of topicalization. It is, furthermore, argued that this feature seems to serve as the driving force in the instantiation of the functional, i.e. informational, linguistic properties of the target-language system.
  • Junge, C., & Cutler, A. (2014). Early word recognition and later language skills. Brain sciences, 4(4), 532-559. doi:10.3390/brainsci4040532.

    Abstract

    Recent behavioral and electrophysiological evidence has highlighted the long-term importance for language skills of an early ability to recognize words in continuous speech. We here present further tests of this long-term link in the form of follow-up studies conducted with two (separate) groups of infants who had earlier participated in speech segmentation tasks. Each study extends prior follow-up tests: Study 1 by using a novel follow-up measure that taps into online processing, Study 2 by assessing language performance relationships over a longer time span than previously tested. Results of Study 1 show that brain correlates of speech segmentation ability at 10 months are positively related to 16-month-olds’ target fixations in a looking-while-listening task. Results of Study 2 show that infant speech segmentation ability no longer directly predicts language profiles at the age of five. However, a meta-analysis across our results and those of similar studies (Study 3) reveals that age at follow-up does not moderate effect size. Together, the results suggest that infants’ ability to recognize words in speech certainly benefits early vocabulary development; further observed relationships of later language skills to early word recognition may be consequent upon this vocabulary size effect.
  • Junge, C., Cutler, A., & Hagoort, P. (2014). Successful word recognition by 10-month-olds given continuous speech both at initial exposure and test. Infancy, 19(2), 179-193. doi:10.1111/infa.12040.

    Abstract

    Most words that infants hear occur within fluent speech. To compile a vocabulary, infants therefore need to segment words from speech contexts. This study is the first to investigate whether infants (here: 10-month-olds) can recognize words when both initial exposure and test presentation are in continuous speech. Electrophysiological evidence attests that this indeed occurs: An increased extended negativity (word recognition effect) appears for familiarized target words relative to control words. This response proved constant at the individual level: Only infants who showed this negativity at test had shown such a response, within six repetitions after first occurrence, during familiarization.
  • Kakimoto, N., Shimamoto, H., Kitisubkanchana, J., Tsujimoto, T., Senda, Y., Iwamoto, Y., Verdonschot, R. G., Hasegawa, Y., & Murakami, S. (2019). T2 relaxation times of the retrodiscal tissue in patients with temporomandibular joint disorders and in healthy volunteers: A comparative study. Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology, 128(3), 311-318. doi:10.1016/j.oooo.2019.02.005.

    Abstract

    Objective. The aims of this study were to compare the temporomandibular joint (TMJ) retrodiscal tissue T2 relaxation times between patients with temporomandibular disorders (TMDs) and asymptomatic volunteers and to assess the diagnostic potential of this approach.
    Study Design. Patients with TMD (n = 173) and asymptomatic volunteers (n = 17) were examined by using a 1.5-T magnetic resonance scanner. The imaging protocol consisted of oblique sagittal, T2-weighted, 8-echo fast spin echo sequences in the closed mouth position. Retrodiscal tissue T2 relaxation times were obtained. Additionally, disc location and reduction, disc configuration, joint effusion, osteoarthritis, and bone edema or osteonecrosis were classified using MRI scans. The T2 relaxation times of each group were statistically compared.
    Results. Retrodiscal tissue T2 relaxation times were significantly longer in the patient groups than in the asymptomatic volunteers (P < .01), and were significantly longer in all of the morphologic categories. The most important variables affecting retrodiscal tissue T2 relaxation times were disc configuration, joint effusion, and osteoarthritis.
    Conclusion. Retrodiscal tissue T2 relaxation times of patients with TMD were significantly longer than those of healthy volunteers. This finding may lead to the development of a diagnostic marker to aid in the early detection of TMDs.