Publications

  • De Groot, F., Huettig, F., & Olivers, C. N. L. (2016). Revisiting the looking at nothing phenomenon: Visual and semantic biases in memory search. Visual Cognition, 24, 226-245. doi:10.1080/13506285.2016.1221013.

    Abstract

    When visual stimuli remain present during search, people spend more time fixating objects that are semantically or visually related to the target instruction than fixating unrelated objects. Are these semantic and visual biases also observable when participants search within memory? We removed the visual display prior to search while continuously measuring eye movements towards locations previously occupied by objects. The target absent trials contained objects that were either visually or semantically related to the target instruction. When the overall mean proportion of fixation time was considered, we found biases towards the location previously occupied by the target, but failed to find biases towards visually or semantically related objects. However, in two experiments, the pattern of biases towards the target over time provided a reliable predictor for biases towards the visually and semantically related objects. We therefore conclude that visual and semantic representations alone can guide eye movements in memory search, but that orienting biases are weak when the stimuli are no longer present.
  • De Groot, F., Huettig, F., & Olivers, C. N. L. (2016). When meaning matters: The temporal dynamics of semantic influences on visual attention. Journal of Experimental Psychology: Human Perception and Performance, 42(2), 180-196. doi:10.1037/xhp0000102.

    Abstract

    An important question is to what extent visual attention is driven by the semantics of individual objects, rather than by their visual appearance. This study investigates the hypothesis that timing is a crucial factor in the occurrence and strength of semantic influences on visual orienting. To assess the dynamics of such influences, the target instruction was presented either before or after visual stimulus onset, while eye movements were continuously recorded throughout the search. The results show a substantial but delayed bias in orienting towards semantically related objects compared to visually related objects when target instruction is presented before visual stimulus onset. However, this delay can be completely undone by presenting the visual information before the target instruction (Experiment 1). Moreover, the absence or presence of visual competition does not change the temporal dynamics of the semantic bias (Experiment 2). Visual orienting is thus driven by priority settings that dynamically shift between visual and semantic representations, with each of these types of bias operating largely independently. The findings bridge the divide between the visual attention and the psycholinguistic literature.
  • De Groot, F., Koelewijn, T., Huettig, F., & Olivers, C. N. L. (2016). A stimulus set of words and pictures matched for visual and semantic similarity. Journal of Cognitive Psychology, 28(1), 1-15. doi:10.1080/20445911.2015.1101119.

    Abstract

    Researchers in different fields of psychology have been interested in how vision and language interact, and what type of representations are involved in such interactions. We introduce a stimulus set that facilitates such research (available online). The set consists of 100 words each of which is paired with four pictures of objects: One semantically similar object (but visually dissimilar), one visually similar object (but semantically dissimilar), and two unrelated objects. Visual and semantic similarity ratings between corresponding items are provided for every picture for Dutch and for English. In addition, visual and linguistic parameters of each picture are reported. We thus present a stimulus set from which researchers can select, on the basis of various parameters, the items most optimal for their research question.

  • Grove, J., Ripke, S., Als, T. D., Mattheisen, M., Walters, R., Won, H., Pallesen, J., Agerbo, E., Andreassen, O. A., Anney, R., Belliveau, R., Bettella, F., Buxbaum, J. D., Bybjerg-Grauholm, J., Bækved-Hansen, M., Cerrato, F., Chambert, K., Christensen, J. H., Churchhouse, C., Dellenvall, K., Demontis, D., De Rubeis, S., Devlin, B., Djurovic, S., Dumont, A., Goldstein, J., Hansen, C. S., Hauberg, M. E., Hollegaard, M. V., Hope, S., Howrigan, D. P., Huang, H., Hultman, C., Klei, L., Maller, J., Martin, J., Martin, A. R., Moran, J., Nyegaard, M., Nærland, T., Palmer, D. S., Palotie, A., Pedersen, C. B., Pedersen, M. G., Poterba, T., Poulsen, J. B., St Pourcain, B., Qvist, P., Rehnström, K., Reichenberg, A., Reichert, J., Robinson, E. B., Roeder, K., Roussos, P., Saemundsen, E., Sandin, S., Satterstrom, F. K., Smith, G. D., Stefansson, H., Stefansson, K., Steinberg, S., Stevens, C., Sullivan, P. F., Turley, P., Walters, G. B., Xu, X., Autism Spectrum Disorders Working Group of The Psychiatric Genomics Consortium, BUPGEN, Major Depressive Disorder Working Group of the Psychiatric Genomics Consortium, Me Research Team, Geschwind, D., Nordentoft, M., Hougaard, D. M., Werge, T., Mors, O., Mortensen, P. B., Neale, B. M., Daly, M. J., & Børglum, A. D. (2019). Identification of common genetic risk variants for autism spectrum disorder. Nature Genetics, 51, 431-444. doi:10.1038/s41588-019-0344-8.

    Abstract

    Autism spectrum disorder (ASD) is a highly heritable and heterogeneous group of neurodevelopmental phenotypes diagnosed in more than 1% of children. Common genetic variants contribute substantially to ASD susceptibility, but to date no individual variants have been robustly associated with ASD. With a marked sample-size increase from a unique Danish population resource, we report a genome-wide association meta-analysis of 18,381 individuals with ASD and 27,969 controls that identified five genome-wide-significant loci. Leveraging GWAS results from three phenotypes with significantly overlapping genetic architectures (schizophrenia, major depression, and educational attainment), we identified seven additional loci shared with other traits at equally strict significance levels. Dissecting the polygenic architecture, we found both quantitative and qualitative polygenic heterogeneity across ASD subtypes. These results highlight biological insights, particularly relating to neuronal function and corticogenesis, and establish that GWAS performed at scale will be much more productive in the near term in ASD.

  • Guerrero, L., & Van Valin Jr., R. D. (2004). Yaqui and the analysis of primary object languages. International Journal of American Linguistics, 70(3), 290-319. doi:10.1086/425603.

    Abstract

    This study investigates three- and four-place predicates in Yaqui, which are characterized by having multiple object arguments. As with other Southern Uto-Aztecan languages, it has been said that Yaqui follows the Primary/Secondary Object pattern (Dryer 1986). In fact, Yaqui presents three patterns: verbs like nenka ‘sell’ follow the direct–indirect object pattern, verbs like miika ‘give’ follow the primary object pattern, and verbs like chijakta ‘sprinkle’ follow the locative alternation pattern; the primary object pattern is the exclusive one found with derived verbs. This paper shows that the contrast between direct object and primary object languages is not absolute but rather one of degree, and hence two “object” selection principles are needed to explain this mixed system. The two principles are not limited to Yaqui but are found in other languages as well, including English.
  • Guest, O., & Rougier, N. P. (2016). "What is computational reproducibility?" and "Diversity in reproducibility". IEEE CIS Newsletter on Cognitive and Developmental Systems, 13(2), 4 and 12.
  • Guest, O., Kanayet, F. J., & Love, B. C. (2019). Gerrymandering and computational redistricting. Journal of Computational Social Science, 2, 119-131. doi:10.1007/s42001-019-00053-9.

    Abstract

    Partisan gerrymandering poses a threat to democracy. Moreover, the complexity of the districting task may exceed human capacities. One potential solution is using computational models to automate the districting process by optimizing objective and open criteria, such as how spatially compact districts are. We formulated one such model that minimised pairwise distance between voters within a district. Using US Census Bureau data, we confirmed our prediction that the difference in compactness between the computed and actual districts would be greatest for states that are large and, therefore, difficult for humans to properly district given their limited capacities. The computed solutions highlighted differences in how humans and machines solve this task with machine solutions more fully optimised and displaying emergent properties not evident in human solutions. These results suggest a division of labour in which humans debate and formulate districting criteria whereas machines optimise the criteria to draw the district boundaries. We discuss how criteria can be expanded beyond notions of compactness to include other factors, such as respecting municipal boundaries, historic communities, and relevant legislation.
  • Gullberg, M., & Kita, S. (2009). Attention to speech-accompanying gestures: Eye movements and information uptake. Journal of Nonverbal Behavior, 33(4), 251-277. doi:10.1007/s10919-009-0073-2.

    Abstract

    There is growing evidence that addressees in interaction integrate the semantic information conveyed by speakers’ gestures. Little is known, however, about whether and how addressees’ attention to gestures and the integration of gestural information can be modulated. This study examines the influence of a social factor (speakers’ gaze to their own gestures), and two physical factors (the gesture’s location in gesture space and gestural holds) on addressees’ overt visual attention to gestures (direct fixations of gestures) and their uptake of gestural information. It also examines the relationship between gaze and uptake. The results indicate that addressees’ overt visual attention to gestures is affected both by speakers’ gaze and holds but for different reasons, whereas location in space plays no role. Addressees’ uptake of gesture information is only influenced by speakers’ gaze. There is little evidence of a direct relationship between addressees’ direct fixations of gestures and their uptake.
  • Gullberg, M. (2004). [Review of the book Pointing: Where language, culture and cognition meet ed. by Sotaro Kita]. Gesture, 4(2), 235-248. doi:10.1075/gest.4.2.08gul.
  • Gullberg, M. (1998). Gesture as a communication strategy in second language discourse: A study of learners of French and Swedish. Lund: Lund University Press.

    Abstract

    Gestures are often regarded as the most typical compensatory device used by language learners in communicative trouble. Yet gestural solutions to communicative problems have rarely been studied within any theory of second language use. The work presented in this volume aims to account for second language learners’ strategic use of speech-associated gestures by combining a process-oriented framework for communication strategies with a cognitive theory of gesture. Two empirical studies are presented. The production study investigates Swedish learners of French and French learners of Swedish and their use of strategic gestures. The results, which are based on analyses of both individual and group behaviour, contradict popular opinion as well as theoretical assumptions from both fields. Gestures are not primarily used to replace speech, nor are they chiefly mimetic. Instead, learners use gestures with speech, and although they do exploit mimetic gestures to solve lexical problems, they also use more abstract gestures to handle discourse-related difficulties and metalinguistic commentary. The influence of factors such as proficiency, task, culture, and strategic competence on gesture use is discussed, and the oral and gestural strategic modes are compared. In the evaluation study, native speakers’ assessments of learners’ gestures, and the potential effect of gestures on evaluations of proficiency are analysed and discussed in terms of individual communicative style. Compensatory gestures function at multiple communicative levels. This has implications for theories of communication strategies, and an expansion of the existing frameworks is discussed taking both cognitive and interactive aspects into account.
  • Gullberg, M. (2009). Gestures and the development of semantic representations in first and second language acquisition. Language, Interaction and Acquisition (formerly AILE: Acquisition et Interaction en Langue Etrangère), 1, 117-139.

    Abstract

    This paper argues that speech-associated gestures can usefully inform studies exploring development of meaning in first and second language acquisition. The example domain is caused motion or placement meaning (putting a cup on a table) where acquisition problems have been observed and where adult native gesture use reflects crosslinguistically different placement verb semantics. Against this background, the paper summarises three studies examining the development of semantic representations in Dutch children acquiring Dutch, and adult learners’ acquiring Dutch and French placement verbs. Overall, gestures change systematically with semantic development both in children and adults and (1) reveal what semantic elements are included in current semantic representations, whether target-like or not, and (2) highlight developmental shifts in those representations. There is little evidence that gestures chiefly act as a support channel. Instead, the data support the theoretical notion that speech and gesture form an integrated system, opening new possibilities for studying the processes of acquisition.
  • Gullberg, M. (2009). Reconstructing verb meaning in a second language: How English speakers of L2 Dutch talk and gesture about placement. Annual Review of Cognitive Linguistics, 7, 221-245. doi:10.1075/arcl.7.09gul.

    Abstract

    This study examines to what extent English speakers of L2 Dutch reconstruct the meanings of placement verbs when moving from a general L1 verb of caused motion (put) to two specific caused posture verbs (zetten/leggen ‘set/lay’) in the L2 and whether the existence of low-frequency cognate forms in the L1 (set/lay) alleviates the reconstruction problem. Evidence from speech and gesture indicates that English speakers have difficulties with the specific verbs in L2 Dutch, initially looking for means to express general caused motion in L1-like fashion through over-generalisation. The gesture data further show that targetlike forms are often used to convey L1-like meaning. However, the differentiated use of zetten for vertical placement and dummy verbs (gaan ‘go’ and doen ‘do’) and intransitive posture verbs (zitten/staan/liggen ‘sit, stand, lie’) for horizontal placement, and a positive correlation between appropriate verb use and target-like gesturing suggest a beginning sensitivity to the semantic parameters of the L2 verbs and possible reconstruction.
  • Gunz, P., Tilot, A. K., Wittfeld, K., Teumer, A., Shapland, C. Y., Van Erp, T. G. M., Dannemann, M., Vernot, B., Neubauer, S., Guadalupe, T., Fernandez, G., Brunner, H., Enard, W., Fallon, J., Hosten, N., Völker, U., Profico, A., Di Vincenzo, F., Manzi, G., Kelso, J., St Pourcain, B., Hublin, J.-J., Franke, B., Pääbo, S., Macciardi, F., Grabe, H. J., & Fisher, S. E. (2019). Neandertal introgression sheds light on modern human endocranial globularity. Current Biology, 29(1), 120-127. doi:10.1016/j.cub.2018.10.065.

    Abstract

    One of the features that distinguishes modern humans from our extinct relatives and ancestors is a globular shape of the braincase [1-4]. As the endocranium closely mirrors the outer shape of the brain, these differences might reflect altered neural architecture [4,5]. However, in the absence of fossil brain tissue, the underlying neuroanatomical changes as well as their genetic bases remain elusive. To better understand the biological foundations of modern human endocranial shape, we turn to our closest extinct relatives, the Neandertals. Interbreeding between modern humans and Neandertals has resulted in introgressed fragments of Neandertal DNA in the genomes of present-day non-Africans [6,7]. Based on shape analyses of fossil skull endocasts, we derive a measure of endocranial globularity from structural magnetic resonance imaging (MRI) scans of thousands of modern humans, and study the effects of introgressed fragments of Neandertal DNA on this phenotype. We find that Neandertal alleles on chromosomes 1 and 18 are associated with reduced endocranial globularity. These alleles influence expression of two nearby genes, UBR4 and PHLPP1, which are involved in neurogenesis and myelination, respectively. Our findings show how integration of fossil skull data with archaic genomics and neuroimaging can suggest developmental mechanisms that may contribute to the unique modern human endocranial shape.

  • Hagoort, P. (2000). De toekomstige eeuw der cognitieve neurowetenschap [inaugural lecture]. Katholieke Universiteit Nijmegen.

    Abstract

    Address delivered on 12 May 2000 upon acceptance of the professorship in neuropsychology at the Faculty of Social Sciences of the Katholieke Universiteit Nijmegen (KUN).
  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter. Psychologie, 17, 48-49.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments sentences were presented that contained three different types of grammatical violations. In one experiment sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS), 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal.
  • Hagoort, P., Hald, L. A., Bastiaansen, M. C. M., & Petersson, K. M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304(5669), 438-441. doi:10.1126/science.1095455.

    Abstract

    Although the sentences that we hear or read have meaning, this does not necessarily mean that they are also true. Relatively little is known about the critical brain structures for, and the relative time course of, establishing the meaning and truth of linguistic expressions. We present electroencephalogram data that show the rapid parallel integration of both semantic and world knowledge during the interpretation of a sentence. Data from functional magnetic resonance imaging revealed that the left inferior prefrontal cortex is involved in the integration of both meaning and world knowledge. Finally, oscillatory brain responses indicate that the brain keeps a record of what makes a sentence hard to interpret.
  • Hagoort, P. (Ed.). (2019). Human language: From genes and brains to behavior. Cambridge, MA: MIT Press.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk. Neuropraxis, 6, 204-205.
  • Hagoort, P., & Levelt, W. J. M. (2009). The speaking brain. Science, 326(5951), 372-373. doi:10.1126/science.1181675.

    Abstract

    How does intention to speak become the action of speaking? It involves the generation of a preverbal message that is tailored to the requirements of a particular language, and through a series of steps, the message is transformed into a linear sequence of speech sounds (1, 2). These steps include retrieving different kinds of information from memory (semantic, syntactic, and phonological), and combining them into larger structures, a process called unification. Despite general agreement about the steps that connect intention to articulation, there is no consensus about their temporal profile or the role of feedback from later steps (3, 4). In addition, since the discovery by the French physician Pierre Paul Broca (in 1865) of the role of the left inferior frontal cortex in speaking, relatively little progress has been made in understanding the neural infrastructure that supports speech production (5). One reason is that the characteristics of natural language are uniquely human, and thus the neurobiology of language lacks an adequate animal model. But on page 445 of this issue, Sahin et al. (6) demonstrate, by recording neuronal activity in the human brain, that different kinds of linguistic information are indeed sequentially processed within Broca's area.
  • Hagoort, P. (2019). The meaning making mechanism(s) behind the eyes and between the ears. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20190301. doi:10.1098/rstb.2019.0301.

    Abstract

    In this contribution, the following four questions are discussed: (i) where is meaning?; (ii) what is meaning?; (iii) what is the meaning of mechanism?; (iv) what are the mechanisms of meaning? I will argue that meanings are in the head. Meanings have multiple facets, but minimally one needs to make a distinction between single word meanings (lexical meaning) and the meanings of multi-word utterances. The latter ones cannot be retrieved from memory, but need to be constructed on the fly. A mechanistic account of the meaning-making mind requires an analysis at both a functional and a neural level, the reason being that these levels are causally interdependent. I will show that an analysis exclusively focusing on patterns of brain activation lacks explanatory power. Finally, I shall present an initial sketch of how the dynamic interaction between temporo-parietal areas and inferior frontal cortex might instantiate the interpretation of linguistic utterances in the context of a multimodal setting and ongoing discourse information.
  • Hagoort, P. (2019). The neurobiology of language beyond single word processing. Science, 366(6461), 55-58. doi:10.1126/science.aax0289.

    Abstract

    In this Review, I propose a multiple-network view for the neurobiological basis of distinctly human language skills. A much more complex picture of interacting brain areas emerges than in the classical neurobiological model of language. This is because using language is more than single-word processing, and much goes on beyond the information given in the acoustic or orthographic tokens that enter primary sensory cortices. This requires the involvement of multiple networks with functionally nonoverlapping contributions.

  • Hagoort, P. (2000). What we shall know only tomorrow. Brain and Language, 71, 89-92. doi:10.1006/brln.1999.2221.
  • Hammarström, H. (2016). Commentary: There is no demonstrable effect of desiccation [Commentary on "Language evolution and climate: The case of desiccation and tone'']. Journal of Language Evolution, 1, 65-69. doi:10.1093/jole/lzv015.
  • Hammarström, H. (2016). Linguistic diversity and language evolution. Journal of Language Evolution, 1, 19-29. doi:10.1093/jole/lzw002.

    Abstract

    What would your ideas about language evolution be if there was only one language left on earth? Fortunately, our investigation need not be that impoverished. In the present article, we survey the state of knowledge regarding the kinds of language found among humans, the language inventory, population sizes, time depth, grammatical variation, and other relevant issues that a theory of language evolution should minimally take into account.
  • Han, J.-I., & Verdonschot, R. G. (2019). Spoken-word production in Korean: A non-word masked priming and phonological Stroop task investigation. Quarterly Journal of Experimental Psychology, 72(4), 901-912. doi:10.1177/1747021818770989.

    Abstract

    Speech production studies have shown that the phonological unit initially used to fill the metrical frame during phonological encoding is language-specific: a phoneme for English and Dutch, an atonal syllable for Mandarin Chinese, and a mora for Japanese. However, only a few studies have chronometrically investigated speech production in Korean, and they obtained mixed results. Korean is particularly interesting as there might be both phonemic and syllabic influences during phonological encoding. The purpose of this study is to further examine the initial phonological preparation unit in Korean, employing a masked priming task (Experiment 1) and a phonological Stroop task (Experiment 2). The results showed that significant onset (and onset-plus, that is, consonant-vowel [CV]) effects were found in both experiments, but there was no compelling evidence for a prominent role for the syllable. When the prime words were presented in three different forms related to the targets, namely, without any change, with re-syllabified codas, and with nasalised codas, there were no significant differences in facilitation among the three forms. Alternatively, it is possible that participants may not have had sufficient time to process the primes up to the point that re-syllabification or nasalisation could have been carried out. In addition, the results of a Stroop task demonstrated that the onset phoneme effect was not driven by any orthographic influence. These findings suggest that the onset segment and not the syllable is the initial (or proximate) phonological unit used in the segment-to-frame encoding process during speech planning in Korean.

  • Hanulikova, A. (2009). Lexical segmentation in Slovak and German. Berlin: Akademie Verlag.

    Abstract

    All humans are equipped with perceptual and articulatory mechanisms which (in healthy humans) allow them to learn to perceive and produce speech. One basic question in psycholinguistics is whether humans share similar underlying processing mechanisms for all languages, or whether these are fundamentally different due to the diversity of languages and speakers. This book provides a cross-linguistic examination of speech comprehension by investigating word recognition in users of different languages. The focus is on how listeners segment the quasi-continuous stream of sounds that they hear into a sequence of discrete words, and how a universal segmentation principle, the Possible Word Constraint, applies in the recognition of Slovak and German.
  • Hao, X., Huang, Y., Li, X., Song, Y., Kong, X., Wang, X., Yang, Z., Zhen, Z., & Liu, J. (2016). Structural and functional neural correlates of spatial navigation: A combined voxel‐based morphometry and functional connectivity study. Brain and Behavior, 6(12): e00572. doi:10.1002/brb3.572.

    Abstract

    Introduction: Navigation is a fundamental and multidimensional cognitive function that individuals rely on to move around the environment. In this study, we investigated the neural basis of human spatial navigation ability. Methods: A large cohort of participants (N > 200) was examined behaviorally on their navigation ability, and structural and functional magnetic resonance imaging (MRI) was then used to explore the corresponding neural basis of spatial navigation. Results: The gray matter volume (GMV) of the bilateral parahippocampus (PHG), retrosplenial complex (RSC), entorhinal cortex (EC), hippocampus (HPC), and thalamus (THAL) was correlated with the participants’ self-reported navigational ability in general, and their sense of direction in particular. Further fMRI studies showed that the PHG, RSC, and EC selectively responded to visually presented scenes, whereas the HPC and THAL showed no selectivity, suggesting a functional division of labor among these regions in spatial navigation. The resting-state functional connectivity analysis further revealed a hierarchical neural network for navigation constituted by these regions, which can be further categorized into three relatively independent components (i.e., scene recognition component, cognitive map component, and the component of heading direction for locomotion, respectively). Conclusions: Our study combined multi-modality imaging data to illustrate that multiple brain regions may work collaboratively to extract, integrate, store, and orientate spatial information to guide navigation behaviors.

  • Harmon, Z., Idemaru, K., & Kapatsinski, V. (2019). Learning mechanisms in cue reweighting. Cognition, 189, 76-88. doi:10.1016/j.cognition.2019.03.011.

    Abstract

    Feedback has been shown to be effective in shifting attention across perceptual cues to a phonological contrast in speech perception (Francis, Baldwin & Nusbaum, 2000). However, the learning mechanisms behind this process remain obscure. We compare the predictions of supervised error-driven learning (Rescorla & Wagner, 1972) and reinforcement learning (Sutton & Barto, 1998) using computational simulations. Supervised learning predicts downweighting of an informative cue when the learner receives evidence that it is no longer informative. In contrast, reinforcement learning suggests that a reduction in cue weight requires positive evidence for the informativeness of an alternative cue. Experimental evidence supports the latter prediction, implicating reinforcement learning as the mechanism behind the effect of feedback on cue weighting in speech perception. Native English listeners were exposed to either bimodal or unimodal VOT distributions spanning the unaspirated/aspirated boundary (bear/pear). VOT is the primary cue to initial stop voicing in English. However, lexical feedback in training indicated that VOT was no longer predictive of voicing. Reduction in the weight of VOT was observed only when participants could use an alternative cue, F0, to predict voicing. Frequency distributions had no effect on learning. Overall, the results suggest that attention shifting in learning the phonetic cues to phonological categories is accomplished using simple reinforcement learning principles that also guide the choice of actions in other domains.
  • Harneit, A., Braun, U., Geiger, L. S., Zang, Z., Hakobjan, M., Van Donkelaar, M. M. J., Schweiger, J. I., Schwarz, K., Gan, G., Erk, S., Heinz, A., Romanczuk‐Seiferth, N., Witt, S., Rietschel, M., Walter, H., Franke, B., Meyer‐Lindenberg, A., & Tost, H. (2019). MAOA-VNTR genotype affects structural and functional connectivity in distributed brain networks. Human Brain Mapping, 40(18), 5202-5212. doi:10.1002/hbm.24766.

    Abstract

    Previous studies have linked the low expression variant of a variable number of tandem repeat polymorphism in the monoamine oxidase A gene (MAOA‐L) to the risk for impulsivity and aggression, brain developmental abnormalities, altered cortico‐limbic circuit function, and an exaggerated neural serotonergic tone. However, the neurobiological effects of this variant on human brain network architecture are incompletely understood. We studied healthy individuals and used multimodal neuroimaging (sample size range: 219–284 across modalities) and network‐based statistics (NBS) to probe the specificity of MAOA‐L‐related connectomic alterations to cortical‐limbic circuits and the emotion processing domain. We assessed the spatial distribution of affected links across several neuroimaging tasks and data modalities to identify potential alterations in network architecture. Our results revealed a distributed network of node links with a significantly increased connectivity in MAOA‐L carriers compared to the carriers of the high expression (H) variant. The hyperconnectivity phenotype primarily consisted of between‐lobe (“anisocoupled”) network links and showed a pronounced involvement of frontal‐temporal connections. Hyperconnectivity was observed across functional magnetic resonance imaging (fMRI) of implicit emotion processing (pFWE = .037), resting‐state fMRI (pFWE = .022), and diffusion tensor imaging (pFWE = .044) data, while no effects were seen in fMRI data of another cognitive domain, that is, spatial working memory (pFWE = .540). These observations are in line with prior research on the MAOA‐L variant and complement these existing data by novel insights into the specificity and spatial distribution of the neurogenetic effects. Our work highlights the value of multimodal network connectomic approaches for imaging genetics.
  • Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2016). Taking perspective: Personal pronouns affect experiential aspects of literary reading. PLoS One, 11(5): e0154732. doi:10.1371/journal.pone.0154732.

    Abstract

    Personal pronouns have been shown to influence cognitive perspective taking during comprehension. Studies using single sentences found that 3rd person pronouns facilitate the construction of a mental model from an observer’s perspective, whereas 2nd person pronouns support an actor’s perspective. The direction of the effect for 1st person pronouns seems to depend on the situational context. In the present study, we investigated how personal pronouns influence discourse comprehension when people read fiction stories and if this has consequences for affective components like emotion during reading or appreciation of the story. We wanted to find out if personal pronouns affect immersion and arousal, as well as appreciation of fiction. In a natural reading paradigm, we measured electrodermal activity and story immersion, while participants read literary stories with 1st and 3rd person pronouns referring to the protagonist. In addition, participants rated and ranked the stories for appreciation. Our results show that stories with 1st person pronouns lead to higher immersion. Two factors—transportation into the story world and mental imagery during reading—in particular showed higher scores for 1st person as compared to 3rd person pronoun stories. In contrast, arousal as measured by electrodermal activity seemed tentatively higher for 3rd person pronoun stories. The two measures of appreciation were not affected by the pronoun manipulation. Our findings underscore the importance of perspective for language processing, and additionally show which aspects of the narrative experience are influenced by a change in perspective.
  • Haun, D. B. M., & Call, J. (2009). Great apes’ capacities to recognize relational similarity. Cognition, 110, 147-159. doi:10.1016/j.cognition.2008.10.012.

    Abstract

    Recognizing relational similarity relies on the ability to understand that defining object properties might not lie in the objects individually, but in the relations of the properties of various objects to each other. This aptitude is highly relevant for many important human skills such as language, reasoning, categorization, and understanding analogy and metaphor. In the current study, we investigated the ability to recognize relational similarities by testing five species of great apes, including human children, in a spatial task. We found that all species performed better when related elements were connected by logico-causal as opposed to non-causal relations. Further, we found that only children above 4 years of age, bonobos, and chimpanzees, unlike younger children, gorillas, and orangutans, displayed some mastery of reasoning by non-causal relational similarity. We conclude that recognizing relational similarity is not in its entirety unique to the human species. The lack of a capability for language does not prohibit recognition of simple relational similarities. The data are discussed in the light of the phylogenetic tree of relatedness of the great apes.
  • Haun, D. B. M., & Rapold, C. J. (2009). Variation in memory for body movements across cultures. Current Biology, 19(23), R1068-R1069. doi:10.1016/j.cub.2009.10.041.

    Abstract

    There has been considerable controversy over the existence of cognitive differences across human cultures: some claim that human cognition is essentially universal [1,2], others that it reflects cultural specificities [3,4]. One domain of interest has been spatial cognition [5,6]. Despite the global universality of physical space, cultures vary as to how space is coded in their language. Some, for example, do not use egocentric ‘left, right, front, back’ constructions to code spatial relations, instead using allocentric notions like ‘north, south, east, west’ [4,6]: “The spoon is north of the bowl!” Whether or not spatial cognition also varies across cultures remains a contested question [7,8]. Here we investigate whether memory for movements of one's own body differs between cultures with contrastive strategies for coding spatial relations. Our results show that the ways in which we memorize movements of our own body differ in line with culture-specific preferences for how to conceive of spatial relations.
  • Havik, E., Roberts, L., Van Hout, R., Schreuder, R., & Haverkort, M. (2009). Processing subject-object ambiguities in L2 Dutch: A self-paced reading study with German L2 learners of Dutch. Language Learning, 59(1), 73-112. doi:10.1111/j.1467-9922.2009.00501.x.

    Abstract

    The results of two self-paced reading experiments are reported, which investigated the on-line processing of subject-object ambiguities in Dutch relative clause constructions like Dat is de vrouw die de meisjes heeft/hebben gezien by German advanced second language (L2) learners of Dutch. Native speakers of both Dutch and German have been shown to prefer a subject over an object reading of such temporarily ambiguous sentences, so these constructions provided an ideal opportunity for transfer of first language (L1) processing preferences to take place. We also investigated whether the participants' working memory span would affect their processing of the experimental items. The results suggest that processing decisions may be affected by working memory when task demands are high; in this case, the high working memory span learners patterned like the native speakers with lower working memory spans. However, when reading for comprehension alone, and when only structural information was available to guide parsing decisions, working memory span had no effect on the L2 learners' on-line processing, which differed from the native speakers' even though the L1 and the L2 are highly comparable.
  • Haworth, S., Shapland, C. Y., Hayward, C., Prins, B. P., Felix, J. F., Medina-Gomez, C., Rivadeneira, F., Wang, C., Ahluwalia, T. S., Vrijheid, M., Guxens, M., Sunyer, J., Tachmazidou, I., Walter, K., Iotchkova, V., Jackson, A., Cleal, L., Huffmann, J., Min, J. L., Sass, L., Timmers, P. R. H. J., UK10K consortium, Davey Smith, G., Fisher, S. E., Wilson, J. F., Cole, T. J., Fernandez-Orth, D., Bønnelykke, K., Bisgaard, H., Pennell, C. E., Jaddoe, V. W. V., Dedoussis, G., Timpson, N. J., Zeggini, E., Vitart, V., & St Pourcain, B. (2019). Low-frequency variation in TP53 has large effects on head circumference and intracranial volume. Nature Communications, 10: 357. doi:10.1038/s41467-018-07863-x.

    Abstract

    Cranial growth and development is a complex process which affects the closely related traits of head circumference (HC) and intracranial volume (ICV). The underlying genetic influences affecting these traits during the transition from childhood to adulthood are little understood, but might include both age-specific genetic influences and low-frequency genetic variation. To understand these influences, we model the developmental genetic architecture of HC, showing this is genetically stable and correlated with genetic determinants of ICV. Investigating up to 46,000 children and adults of European descent, we identify association with final HC and/or final ICV+HC at 9 novel common and low-frequency loci, illustrating that genetic variation from a wide allele frequency spectrum contributes to cranial growth. The largest effects are reported for low-frequency variants within TP53, with 0.5 cm wider heads in increaser-allele carriers versus non-carriers during mid-childhood, suggesting a previously unrecognized role of TP53 transcripts in human cranial development.

    Additional information

    Supplementary Information
  • Hayano, K. (2004). Kaiwa ni okeru ninshikiteki ken’i no koushou: Shuujoshi yo, ne, odoroki hyouji no bunpu to kinou [Negotiation of Epistemic Authority in Conversation: on the use of final particles yo, ne and surprise markers]. Studies in Pragmatics, 6, 17-28.
  • Heidlmayr, K., Doré-Mazars, K., Aparicio, X., & Isel, F. (2016). Multiple language use influences oculomotor task performance: Neurophysiological evidence of a shared substrate between language and motor control. PLoS One, 11(11): e0165029. doi:10.1371/journal.pone.0165029.

    Abstract

    In the present electroencephalographic study, we asked to what extent executive control processes are shared by the language and motor domains. The rationale was to examine whether executive control processes whose efficiency is reinforced by the frequent use of a second language can lead to a benefit in the control of eye movements, i.e. a non-linguistic activity. For this purpose, we administered an antisaccade task, i.e. a specific motor task involving control, to 19 highly proficient late French-German bilingual participants and to a control group of 20 French monolingual participants. In this task, an automatic saccade has to be suppressed while a voluntary eye movement in the opposite direction has to be carried out. Our main hypothesis was that an advantage in the antisaccade task should be observed in the bilinguals if some properties of the control processes are shared between the linguistic and motor domains. ERP data revealed clear differences between bilinguals and monolinguals. Critically, we showed an increased N2 effect size in bilinguals, thought to reflect more efficient conflict monitoring, combined with reduced effect sizes on markers reflecting inhibitory control, i.e. the cue-locked positivity, the target-locked P3, and the saccade-locked presaccadic positivity (PSP). Moreover, effective connectivity analyses (dynamic causal modelling; DCM) at the neuronal source level indicated that bilinguals rely more strongly on ACC-driven control while monolinguals rely on PFC-driven control. Taken together, our combined ERP and effective connectivity findings may reflect a dynamic interplay in bilinguals between strengthened conflict monitoring and subsequently more efficient inhibition. Finally, L2 proficiency and immersion experience constitute relevant factors of the language background that predict efficiency of inhibition. To conclude, the present study provided ERP and effective connectivity evidence for domain-general executive control involvement in handling multiple language use, leading to a control advantage in bilingualism.
  • Hendriks, L., Witteman, M. J., Frietman, L. C. G., Westerhof, G., Van Baaren, R. B., Engels, R. C. M. E., & Dijksterhuis, A. J. (2009). Imitation can reduce malnutrition in residents in assisted living facilities [Letter to the editor]. Journal of the American Geriatrics Society, 571(1), 187-188. doi:10.1111/j.1532-5415.2009.02074.x.
  • Hervais-Adelman, A., Kumar, U., Mishra, R. K., Tripathi, V. N., Guleria, A., Singh, J. P., Eisner, F., & Huettig, F. (2019). Learning to read recycles visual cortical networks without destruction. Science Advances, 5(9): eaax0262. doi:10.1126/sciadv.aax0262.

    Abstract

    Learning to read is associated with the appearance of an orthographically sensitive brain region known as the visual word form area. It has been claimed that development of this area proceeds by impinging upon territory otherwise available for the processing of culturally relevant stimuli such as faces and houses. In a large-scale functional magnetic resonance imaging study of a group of individuals of varying degrees of literacy (from completely illiterate to highly literate), we examined cortical responses to orthographic and nonorthographic visual stimuli. We found that literacy enhances responses to other visual input in early visual areas and enhances representational similarity between text and faces, without reducing the extent of response to nonorthographic input. Thus, acquisition of literacy in childhood recycles existing object representation mechanisms but without destructive competition.

    Additional information

    aax0262_SM.pdf
  • Heyselaar, E., & Segaert, K. (2019). Memory encoding of syntactic information involves domain-general attentional resources. Evidence from dual-task studies. Quarterly Journal of Experimental Psychology, 72(6), 1285-1296. doi:10.1177/1747021818801249.

    Abstract

    We investigate the type of attention (domain-general or language-specific) used during syntactic processing. We focus on syntactic priming: In this task, participants listen to a sentence that describes a picture (prime sentence), followed by a picture the participants need to describe (target sentence). We measure the proportion of times participants use the syntactic structure they heard in the prime sentence to describe the current target sentence as a measure of syntactic processing. Participants simultaneously conducted a multiple-object tracking (MOT) task, a task commonly used to tax domain-general attentional resources. We manipulated the number of objects the participant had to track; we thus measured participants' ability to process syntax while their attention was not, slightly, or overly taxed. Performance in the MOT task was significantly worse when conducted as a dual task compared to as a single task. We observed an inverted U-shaped curve on priming magnitude when conducting the MOT task concurrently with prime sentences (i.e., memory encoding), but no effect when conducted with target sentences (i.e., memory retrieval). Our results illustrate how, during the encoding of syntactic information, domain-general attention differentially affects syntactic processing, whereas during the retrieval of syntactic information domain-general attention does not influence syntactic processing.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2016). Encouraging prediction during production facilitates subsequent comprehension: Evidence from interleaved object naming in sentence context and sentence reading. Quarterly Journal of Experimental Psychology, 69(6), 1056-1063. doi:10.1080/17470218.2015.1131309.

    Abstract

    Many studies have shown that a supportive context facilitates language comprehension. A currently influential view is that language production may support prediction in language comprehension. Experimental evidence for this, however, is relatively sparse. Here we explored whether encouraging prediction in a language production task encourages the use of predictive contexts in an interleaved comprehension task. In Experiment 1a, participants listened to the first part of a sentence and provided the final word by naming aloud a picture. The picture name was predictable or not predictable from the sentence context. Pictures were named faster when they could be predicted than when this was not the case. In Experiment 1b, the same sentences, augmented by a final spill-over region, were presented in a self-paced reading task. No difference in reading times for predictive vs. non-predictive sentences was found. In Experiment 2, reading and naming trials were intermixed. In the naming task, the advantage for predictable picture names was replicated. More importantly, reading times for the spill-over region were now considerably faster for predictive vs. non-predictive sentences. We conjecture that these findings fit best with the notion that prediction in the service of language production encourages the use of predictive contexts in comprehension. Further research is required to identify the exact mechanisms by which production exerts its influence on comprehension.
  • Hoedemaker, R. S., & Meyer, A. S. (2019). Planning and coordination of utterances in a joint naming task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(4), 732-752. doi:10.1037/xlm0000603.

    Abstract

    Dialogue requires speakers to coordinate. According to the model of dialogue as joint action, interlocutors achieve this coordination by corepresenting their own and each other’s task share in a functionally equivalent manner. In two experiments, we investigated this corepresentation account using an interactive joint naming task in which pairs of participants took turns naming sets of objects on a shared display. Speaker A named the first, or the first and third object, and Speaker B named the second object. In control conditions, Speaker A named one, two, or all three objects and Speaker B remained silent. We recorded the timing of the speakers’ utterances and Speaker A’s eye movements. Interturn pause durations indicated that the speakers effectively coordinated their utterances in time. Speaker A’s speech onset latencies depended on the number of objects they named, but were unaffected by Speaker B’s naming task. This suggests speakers were not fully incorporating their partner’s task into their own speech planning. Moreover, Speaker A’s eye movements indicated that they were much less likely to attend to objects their partner named than to objects they named themselves. When speakers did inspect their partner’s objects, viewing times were too short to suggest that speakers were retrieving these object names as if they were planning to name the objects themselves. These results indicate that speakers prioritized planning their own responses over attending to their interlocutor’s task and suggest that effective coordination can be achieved without full corepresentation of the partner’s task.
  • Hogekamp, Z., Blomster, J. B., Bursalioglu, A., Calin, M. C., Çetinçelik, M., Haastrup, L., & Van den Berg, Y. H. M. (2016). Examining the Importance of the Teachers' Emotional Support for Students' Social Inclusion Using the One-with-Many Design. Frontiers in Psychology, 7: 1014. doi:10.3389/fpsyg.2016.01014.

    Abstract

    The importance of high quality teacher–student relationships for students' well-being has been long documented. Nonetheless, most studies focus either on teachers' perceptions of provided support or on students' perceptions of support. The degree to which teachers and students agree is often neither measured nor taken into account. In the current study, we will therefore use a dyadic analysis strategy called the one-with-many design. This design takes into account the nestedness of the data and looks at the importance of reciprocity when examining the influence of teacher support for students' academic and social functioning. Two samples of teachers and their students from Grade 4 (age 9–10 years) have been recruited in primary schools, located in Turkey and Romania. By using the one-with-many design we can first measure to what degree teachers' perceptions of support are in line with students' experiences. Second, this level of consensus is taken into account when examining the influence of teacher support for students' social well-being and academic functioning.
  • Holler, J., Shovelton, H., & Beattie, G. (2009). Do iconic gestures really contribute to the semantic information communicated in face-to-face interaction? Journal of Nonverbal Behavior, 33, 73-88.
  • Holler, J., & Wilkin, K. (2009). Communicating common ground: how mutually shared knowledge influences the representation of semantic information in speech and gesture in a narrative task. Language and Cognitive Processes, 24, 267-289.
  • Holler, J., & Levinson, S. C. (2019). Multimodal language processing in human communication. Trends in Cognitive Sciences, 23(8), 639-652. doi:10.1016/j.tics.2019.05.006.

    Abstract

    Multiple layers of visual (and vocal) signals, plus their different onsets and offsets, represent a significant semantic and temporal binding problem during face-to-face conversation.

    Despite this complex unification process, multimodal messages appear to be processed faster than unimodal messages.

    Multimodal gestalt recognition and multilevel prediction are proposed to play a crucial role in facilitating multimodal language processing.

    The basis of the processing mechanisms involved in multimodal language comprehension is hypothesized to be domain general, coopted for communication, and refined with domain-specific characteristics.

    A new, situated framework for understanding human language processing is called for that takes into consideration the multilayered, multimodal nature of language and its production and comprehension in conversational interaction requiring fast processing.
  • Holler, J., Kendrick, K. H., Casillas, M., & Levinson, S. C. (Eds.). (2016). Turn-Taking in Human Communicative Interaction. Lausanne: Frontiers Media. doi:10.3389/978-2-88919-825-2.

    Abstract

    The core use of language is in face-to-face conversation, which is characterized by rapid turn-taking. This turn-taking poses a number of central puzzles for the psychology of language.

    Consider, for example, that in large corpora the gap between turns is on the order of 100 to 300 ms, but the latencies involved in language production require minimally 600 ms (for a single word) to 1500 ms (for a simple sentence). This implies that participants in conversation are predicting the ends of incoming turns and preparing their responses in advance. But how is this done? What aspects of this prediction are done when? What happens when the prediction is wrong? What stops participants from coming in too early? If the system is running on prediction, why is there consistently a mode of 100 to 300 ms in response time?

    The timing puzzle raises further puzzles: it seems that comprehension must run parallel with the preparation for production, but it has been presumed that there are strict cognitive limitations on more than one central process running at a time. How is this bottleneck overcome? Far from being 'easy' as some psychologists have suggested, conversation may be one of the most demanding cognitive tasks in our everyday lives. Further questions naturally arise: how do children learn to master this demanding task, and what is the developmental trajectory in this domain?

    Research shows that aspects of turn-taking such as its timing are remarkably stable across languages and cultures, but the word order of languages varies enormously. How then does prediction of the incoming turn work when the verb (often the informational nugget in a clause) is at the end? Conversely, how can production work fast enough in languages that have the verb at the beginning, thereby requiring early planning of the whole clause? What happens when one changes modality, as in sign languages -- with the loss of channel constraints is turn-taking much freer? And what about face-to-face communication amongst hearing individuals -- do gestures, gaze, and other body behaviors facilitate turn-taking? One can also ask the phylogenetic question: how did such a system evolve? There seem to be parallels (analogies) in duetting bird species, and in a variety of monkey species, but there is little evidence of anything like this among the great apes.

    All this constitutes a neglected set of problems at the heart of the psychology of language and of the language sciences. This research topic welcomes contributions from right across the board, for example from psycholinguists, developmental psychologists, students of dialogue and conversation analysis, linguists interested in the use of language, phoneticians, corpus analysts and comparative ethologists or psychologists. We welcome contributions of all sorts, for example original research papers, opinion pieces, and reviews of work in subfields that may not be fully understood in other subfields.
  • Hoppenbrouwers, G., Seuren, P. A. M., & Weijters, A. (Eds.). (1985). Meaning and the lexicon. Dordrecht: Foris.
  • Horemans, I., & Schiller, N. O. (2004). Form-priming effects in nonword naming. Brain and Language, 90(1-3), 465-469. doi:10.1016/S0093-934X(03)00457-7.

    Abstract

    Form-priming effects from sublexical (syllabic or segmental) primes in masked priming can be accounted for in two ways. One is the sublexical pre-activation view according to which segments are pre-activated by the prime, and at the time the form-related target is to be produced, retrieval/assembly of those pre-activated segments is faster compared to an unrelated situation. However, it has also been argued that form-priming effects from sublexical primes might be due to lexical pre-activation. When the sublexical prime is presented, it activates all form-related words (i.e., cohorts) in the lexicon, necessarily including the form-related target, which—as a consequence—is produced faster than in the unrelated case. Note, however, that this lexical pre-activation account makes previous pre-lexical activation of segments necessary. This study reports a nonword naming experiment to investigate whether or not sublexical pre-activation is involved in masked form priming with sublexical primes. The results demonstrated a priming effect suggesting a nonlexical effect. However, this does not exclude an additional lexical component in form priming.
  • Hörpel, S. G., & Firzlaff, U. (2019). Processing of fast amplitude modulations in bat auditory cortex matches communication call-specific sound features. Journal of Neurophysiology, 121(4), 1501-1512. doi:10.1152/jn.00748.2018.
  • Houston, D. M., Jusczyk, P. W., Kuijpers, C., Coolen, R., & Cutler, A. (2000). Cross-language word segmentation by 9-month-olds. Psychonomic Bulletin & Review, 7, 504-509.

    Abstract

    Dutch-learning and English-learning 9-month-olds were tested, using the Headturn Preference Procedure, for their ability to segment Dutch words with strong/weak stress patterns from fluent Dutch speech. This prosodic pattern is highly typical for words of both languages. The infants were familiarized with pairs of words and then tested on four passages, two that included the familiarized words and two that did not. Both the Dutch- and the English-learning infants gave evidence of segmenting the targets from the passages, to an equivalent degree. Thus, English-learning infants are able to extract words from fluent speech in a language that is phonetically different from English. We discuss the possibility that this cross-language segmentation ability is aided by the similarity of the typical rhythmic structure of Dutch and English words.
  • Howe, L., Lawson, D. J., Davies, N. M., St Pourcain, B., Lewis, S. J., Smith, G. D., & Hemani, G. (2019). Genetic evidence for assortative mating on alcohol consumption in the UK Biobank. Nature Communications, 10: 5039. doi:10.1038/s41467-019-12424-x.

    Abstract

    Alcohol use is correlated within spouse-pairs, but it is difficult to disentangle effects of alcohol consumption on mate-selection from social factors or the shared spousal environment. We hypothesised that genetic variants related to alcohol consumption may, via their effect on alcohol behaviour, influence mate selection. Here, we find strong evidence that an individual’s self-reported alcohol consumption and their genotype at rs1229984, a missense variant in ADH1B, are associated with their partner’s self-reported alcohol use. Applying Mendelian randomization, we estimate that a unit increase in an individual’s weekly alcohol consumption increases partner’s alcohol consumption by 0.26 units (95% C.I. 0.15, 0.38; P = 8.20 × 10−6). Furthermore, we find evidence of spousal genotypic concordance for rs1229984, suggesting that spousal concordance for alcohol consumption existed prior to cohabitation. Although the SNP is strongly associated with ancestry, our results suggest some concordance independent of population stratification. Our findings suggest that alcohol behaviour directly influences mate selection.
  • Howe, L. J., Richardson, T. G., Arathimos, R., Alvizi, L., Passos-Bueno, M. R., Stanier, P., Nohr, E., Ludwig, K. U., Mangold, E., Knapp, M., Stergiakouli, E., St Pourcain, B., Smith, G. D., Sandy, J., Relton, C. L., Lewis, S. J., Hemani, G., & Sharp, G. C. (2019). Evidence for DNA methylation mediating genetic liability to non-syndromic cleft lip/palate. Epigenomics, 11(2), 133-145. doi:10.2217/epi-2018-0091.

    Abstract

    Aim: To determine if nonsyndromic cleft lip with or without cleft palate (nsCL/P) genetic risk variants influence liability to nsCL/P through gene regulation pathways, such as those involving DNA methylation. Materials & methods: nsCL/P genetic summary data and methylation data from four studies were used in conjunction with Mendelian randomization and joint likelihood mapping to investigate potential mediation of nsCL/P genetic variants. Results & conclusion: Evidence was found at VAX1 (10q25.3), LOC146880 (17q23.3), and NTN1 (17p13.1) that liability to nsCL/P and variation in DNA methylation might be driven by the same genetic variant, suggesting that genetic variation at these loci may increase liability to nsCL/P by influencing DNA methylation. Follow-up analyses using different tissues and gene expression data provided further insight into possible biological mechanisms.

    Additional information

    Supplementary material
  • Hoymann, G. (2004). [Review of the book Botswana: The future of the minority languages ed. by Herman M. Batibo and Birgit Smieja]. Journal of African Languages and Linguistics, 25(2), 171-173. doi:10.1515/jall.2004.25.2.171.
  • Huang, L., Zhou, G., Liu, Z., Dang, X., Yang, Z., Kong, X., Wang, X., Song, Y., Zhen, Z., & Liu, J. (2016). A Multi-Atlas Labeling Approach for Identifying Subject-Specific Functional Regions of Interest. PLoS One, 11(1): e0146868. doi:10.1371/journal.pone.0146868.

    Abstract

    The functional region of interest (fROI) approach has increasingly become a favored methodology in functional magnetic resonance imaging (fMRI) because it can circumvent inter-subject anatomical and functional variability, and thus increase the sensitivity and functional resolution of fMRI analyses. The standard fROI method requires human experts to meticulously examine and identify subject-specific fROIs within activation clusters. This process is time-consuming and heavily dependent on experts’ knowledge. Several algorithmic approaches have been proposed for identifying subject-specific fROIs; however, these approaches cannot easily incorporate prior knowledge of inter-subject variability. In the present study, we improved the multi-atlas labeling approach for defining subject-specific fROIs. In particular, we used a classifier-based atlas-encoding scheme and an atlas selection procedure to account for the large spatial variability across subjects. Using a functional atlas database for face recognition, we showed that with these two features, our approach efficiently circumvented inter-subject anatomical and functional variability and thus improved labeling accuracy. Moreover, in comparison with a single-atlas approach, our multi-atlas labeling approach showed better performance in identifying subject-specific fROIs.

    Additional information

    S1_Fig.tif S2_Fig.tif
  • Hubbard, R. J., Rommers, J., Jacobs, C. L., & Federmeier, K. D. (2019). Downstream behavioral and electrophysiological consequences of word prediction on recognition memory. Frontiers in Human Neuroscience, 13: 291. doi:10.3389/fnhum.2019.00291.

    Abstract

    When people process language, they can use context to predict upcoming information, influencing processing and comprehension as seen in both behavioral and neural measures. Although numerous studies have shown immediate facilitative effects of confirmed predictions, the downstream consequences of prediction have been less explored. In the current study, we examined those consequences by probing participants’ recognition memory for words after they read sets of sentences. Participants read strongly and weakly constraining sentences with expected or unexpected endings (“I added my name to the list/basket”), and later were tested on their memory for the sentence endings while EEG was recorded. Critically, the memory test contained words that were predictable (“list”) but were never read (participants saw “basket”). Behaviorally, participants showed successful discrimination between old and new items, but false alarmed to the expected-item lures more often than to new items, showing that predicted words or concepts can linger, even when predictions are disconfirmed. Although false alarm rates did not differ by constraint, event-related potentials (ERPs) differed between false alarms to strongly and weakly predictable words. Additionally, previously unexpected (compared to previously expected) endings that appeared on the memory test elicited larger N1 and LPC amplitudes, suggesting greater attention and episodic recollection. In contrast, highly predictable sentence endings that had been read elicited reduced LPC amplitudes during the memory test. Thus, prediction can facilitate processing in the moment, but can also lead to false memory and reduced recollection for predictable information.
  • Hubers, F., Cucchiarini, C., Strik, H., & Dijkstra, T. (2019). Normative data of Dutch idiomatic expressions: Subjective judgments you can bank on. Frontiers in Psychology, 10: 1075. doi:10.3389/fpsyg.2019.01075.

    Abstract

    The processing of idiomatic expressions is a topical issue in empirical research. Various factors have been found to influence idiom processing, such as idiom familiarity and idiom transparency. Information on these variables is usually obtained through norming studies. Studies investigating the effect of various properties on idiom processing have led to ambiguous results. This may be due to the variability of operationalizations of the idiom properties across norming studies, which in turn may affect the reliability of the subjective judgements. However, not all studies that collected normative data on idiomatic expressions investigated their reliability, and studies that did address the reliability of subjective ratings used various measures and produced mixed results. In this study, we investigated the reliability of subjective judgements, the relation between subjective and objective idiom frequency, and the impact of these dimensions on the participants’ idiom knowledge by collecting normative data of five subjective idiom properties (Frequency of Exposure, Meaning Familiarity, Frequency of Usage, Transparency, and Imageability) from 390 native speakers and objective corpus frequency for 374 Dutch idiomatic expressions. For reliability, we compared measures calculated in previous studies with the D-coefficient, a metric taken from Generalizability Theory. High reliability was found for all subjective dimensions. One reliability metric, Krippendorff’s alpha, generally produced lower values, while similar values were obtained for three other measures (Cronbach’s alpha, Intraclass Correlation Coefficient, and the D-coefficient). Advantages of the D-coefficient are that it can be applied to unbalanced research designs, and to estimate the minimum number of raters required to obtain reliable ratings. Slightly higher coefficients were observed for so-called experience-based dimensions (Frequency of Exposure, Meaning Familiarity, and Frequency of Usage) than for content-based dimensions (Transparency and Imageability). In addition, fewer raters were required to obtain reliable ratings for the experience-based dimensions. Subjective and objective frequency appeared to be poorly correlated, while all subjective idiom properties and objective frequency turned out to affect idiom knowledge. Meaning Familiarity, Subjective and Objective Frequency of Exposure, Frequency of Usage, and Transparency positively contributed to idiom knowledge, while a negative effect was found for Imageability. We discuss these relationships in more detail, and give methodological recommendations with respect to the procedures and the measure to calculate reliability.

    Additional information

    supplementary material
  • Hubers, F., Snijders, T. M., & De Hoop, H. (2016). How the brain processes violations of the grammatical norm: An fMRI study. Brain and Language, 163, 22-31. doi:10.1016/j.bandl.2016.08.006.

    Abstract

    Native speakers of Dutch do not always adhere to prescriptive grammar rules in their daily speech. These grammatical norm violations can elicit emotional reactions in language purists, mostly high-educated people, who claim that for them these constructions are truly ungrammatical. However, linguists generally assume that grammatical norm violations are in fact truly grammatical, especially when they occur frequently in a language. In an fMRI study we investigated the processing of grammatical norm violations in the brains of language purists, and compared them with truly grammatical and truly ungrammatical sentences. Grammatical norm violations were found to be unique in that their processing resembled not only the processing of truly grammatical sentences (in left medial Superior Frontal Gyrus and Angular Gyrus), but also that of truly ungrammatical sentences (in Inferior Frontal Gyrus), despite what theories of grammar would usually lead us to believe.
  • Huettig, F., & Pickering, M. (2019). Literacy advantages beyond reading: Prediction of spoken language. Trends in Cognitive Sciences, 23(6), 464-475. doi:10.1016/j.tics.2019.03.008.

    Abstract

    Literacy has many obvious benefits—it exposes the reader to a wealth of new information and enhances syntactic knowledge. However, we argue that literacy has an additional, often overlooked, benefit: it enhances people’s ability to predict spoken language, thereby aiding comprehension. Readers are under pressure to process information more quickly than listeners, and reading provides excellent conditions, in particular a stable environment, for training the predictive system. It also leads to increased awareness of words as linguistic units, and more fine-grained phonological and additional orthographic representations, which sharpen lexical representations and facilitate the retrieval of predicted representations. Thus, reading trains core processes and representations involved in language prediction that are common to both reading and listening.
  • Huettig, F., & Guerra, E. (2019). Effects of speech rate, preview time of visual context, and participant instructions reveal strong limits on prediction in language processing. Brain Research, 1706, 196-208. doi:10.1016/j.brainres.2018.11.013.

    Abstract

    There is a consensus among language researchers that people can predict upcoming language. But do people always predict when comprehending language? Notions that “brains … are essentially prediction machines” certainly suggest so. In three eye-tracking experiments we tested this view. Participants listened to simple Dutch sentences (‘Look at the displayed bicycle’) while viewing four objects (a target, e.g. a bicycle, and three unrelated distractors). We used the identical visual stimuli and the same spoken sentences but varied speech rates, preview time, and participant instructions. Target nouns were preceded by definite gender-marked determiners, which allowed participants to predict the target object because only the targets but not the distractors agreed in gender with the determiner. In Experiment 1, participants had four seconds preview and sentences were presented either in a slow or a normal speech rate. Participants predicted the targets as soon as they heard the determiner in both conditions. Experiment 2 was identical except that participants were given only a one second preview. Participants predicted the targets only in the slow speech condition. Experiment 3 was identical to Experiment 2 except that participants were explicitly told to predict. This led only to a small prediction effect in the normal speech condition. Thus, a normal speech rate only afforded prediction if participants had an extensive preview. Even the explicit instruction to predict the target resulted in only a small anticipation effect with a normal speech rate and a short preview. These findings are problematic for theoretical proposals that assume that prediction pervades cognition.
  • Huettig, F., & Janse, E. (2016). Individual differences in working memory and processing speed predict anticipatory spoken language processing in the visual world. Language, Cognition and Neuroscience, 31(1), 80-93. doi:10.1080/23273798.2015.1047459.

    Abstract

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. Several mechanisms of predictive language processing have been proposed. The possible influence of mediating factors such as working memory and processing speed however has hardly been explored. We sought to find evidence for such an influence using an individual differences approach. 105 participants from 32 to 77 years of age received spoken instructions (e.g., "Kijk naar de[COM] afgebeelde piano[COM]" - look at the displayed piano) while viewing four objects. Articles (Dutch “het” or “de”) were gender-marked such that the article agreed in gender only with the target. Participants could thus use gender information from the article to predict the upcoming target object. The average participant anticipated the target objects well in advance of the critical noun. Multiple regression analyses showed that working memory and processing speed had the largest mediating effects: Enhanced working memory abilities and faster processing speed supported anticipatory spoken language processing. These findings suggest that models of predictive language processing must take mediating factors such as working memory and processing speed into account. More generally, our results are consistent with the notion that working memory grounds language in space and time, linking linguistic and visual-spatial representations.
  • Huettig, F., & Mani, N. (2016). Is prediction necessary to understand language? Probably not. Language, Cognition and Neuroscience, 31(1), 19-31. doi:10.1080/23273798.2015.1072223.

    Abstract

    Many psycholinguistic experiments suggest that prediction is an important characteristic of language processing. Some recent theoretical accounts in the cognitive sciences (e.g., Clark, 2013; Friston, 2010) and psycholinguistics (e.g., Dell & Chang, 2014) appear to suggest that prediction is even necessary to understand language. In the present opinion paper we evaluate this proposal. We first critically discuss several arguments that may appear to be in line with the notion that prediction is necessary for language processing. These arguments include that prediction provides a unified theoretical principle of the human mind and that it pervades cortical function. We discuss whether evidence of human abilities to detect statistical regularities is necessarily evidence for predictive processing and evaluate suggestions that prediction is necessary for language learning. Five arguments are then presented that question the claim that all language processing is predictive in nature. We point out that not all language users appear to predict language and that suboptimal input makes prediction often very challenging. Prediction, moreover, is strongly context-dependent and impeded by resource limitations. We also argue that it may be problematic that most experimental evidence for predictive language processing comes from 'prediction-encouraging' experimental set-ups. Finally, we discuss possible ways that may lead to a further resolution of this debate. We conclude that languages can be learned and understood in the absence of prediction. Claims that all language processing is predictive in nature are premature.
  • Hugh-Jones, D., Verweij, K. J. H., St Pourcain, B., & Abdellaoui, A. (2016). Assortative mating on educational attainment leads to genetic spousal resemblance for causal alleles. Intelligence, 59, 103-108. doi:10.1016/j.intell.2016.08.005.

    Abstract

    We examined whether assortative mating for educational attainment (“like marries like”) can be detected in the genomes of ~1600 UK spouse pairs of European descent. Assortative mating on heritable traits like educational attainment increases the genetic variance and heritability of the trait in the population, which may increase social inequalities. We test for genetic assortative mating in the UK on educational attainment, a phenotype that is indicative of socio-economic status and has shown substantial levels of assortative mating. We use genome-wide allelic effect sizes from a large genome-wide association study on educational attainment (N ~ 300k) to create polygenic scores that are predictive of educational attainment in our independent sample (r = 0.23, p < 2 × 10−16). The polygenic scores significantly predict partners' educational outcome (r = 0.14, p = 4 × 10−8 and r = 0.19, p = 2 × 10−14, for prediction from males to females and vice versa, respectively), and are themselves significantly correlated between spouses (r = 0.11, p = 7 × 10−6). Our findings provide molecular genetic evidence for genetic assortative mating on education in the UK.
  • Huisman, J. L. A., Majid, A., & Van Hout, R. (2019). The geographical configuration of a language area influences linguistic diversity. PLoS One, 14(6): e0217363. doi:10.1371/journal.pone.0217363.

    Abstract

    Like the transfer of genetic variation through gene flow, language changes constantly as a result of its use in human interaction. Contact between speakers is most likely to happen when they are close in space, time, and social setting. Here, we investigated the role of geographical configuration in this process by studying linguistic diversity in Japan, which comprises a large connected mainland (less isolation, more potential contact) and smaller island clusters of the Ryukyuan archipelago (more isolation, less potential contact). We quantified linguistic diversity using dialectometric methods, and performed regression analyses to assess the extent to which distance in space and time predict contemporary linguistic diversity. We found that language diversity in general increases as geographic distance increases and as time passes—as with biodiversity. Moreover, we found that (I) for mainland languages, linguistic diversity is most strongly related to geographic distance—a so-called isolation-by-distance pattern, and that (II) for island languages, linguistic diversity reflects the time since varieties separated and diverged—an isolation-by-colonisation pattern. Together, these results confirm previous findings that (linguistic) diversity is shaped by distance, but also go beyond this by demonstrating the critical role of geographic configuration.
  • Hulten, A., Vihla, M., Laine, M., & Salmelin, R. (2009). Accessing newly learned names and meanings in the native language. Human Brain Mapping, 30, 979-989. doi:10.1002/hbm.20561.

    Abstract

    Ten healthy adults encountered pictures of unfamiliar archaic tools and successfully learned either their name, verbal definition of their usage, or both. Neural representation of the newly acquired information was probed with magnetoencephalography in an overt picture-naming task before and after learning, and in two categorization tasks after learning. Within 400 ms, activation proceeded from occipital through parietal to left temporal cortex, inferior frontal cortex (naming) and right temporal cortex (categorization). Comparison of naming of newly learned versus familiar pictures indicated that acquisition and maintenance of word forms are supported by the same neural network. Explicit access to newly learned phonology when such information was known strongly enhanced left temporal activation. By contrast, access to newly learned semantics had no comparable, direct neural effects. Both the behavioral learning pattern and neurophysiological results point to fundamentally different implementation of and access to phonological versus semantic features in processing pictured objects.
  • Hulten, A., Schoffelen, J.-M., Udden, J., Lam, N. H. L., & Hagoort, P. (2019). How the brain makes sense beyond the processing of single words – An MEG study. NeuroImage, 186, 586-594. doi:10.1016/j.neuroimage.2018.11.035.

    Abstract

    Human language processing involves combinatorial operations that make human communication stand out in the animal kingdom. These operations rely on a dynamic interplay between the inferior frontal and the posterior temporal cortices. Using source reconstructed magnetoencephalography, we tracked language processing in the brain, in order to investigate how individual words are interpreted when part of sentence context. The large sample size in this study (n = 68) allowed us to assess how event-related activity is associated across distinct cortical areas, by means of inter-areal co-modulation within an individual. We showed that, within 500 ms of seeing a word, the word's lexical information has been retrieved and unified with the sentence context. This does not happen in a strictly feed-forward manner, but by means of co-modulation between the left posterior temporal cortex (LPTC) and left inferior frontal cortex (LIFC), for each individual word. The co-modulation of LIFC and LPTC occurs around 400 ms after the onset of each word, across the progression of a sentence. Moreover, these core language areas are supported early on by the attentional network. The results provide a detailed description of the temporal orchestration related to single word processing in the context of ongoing language.

    Additional information

    1-s2.0-S1053811918321165-mmc1.pdf
  • Humphries, S., Holler, J., Crawford, T. J., Herrera, E., & Poliakoff, E. (2016). A third-person perspective on co-speech action gestures in Parkinson’s disease. Cortex, 78, 44-54. doi:10.1016/j.cortex.2016.02.009.

    Abstract

    A combination of impaired motor and cognitive function in Parkinson’s disease (PD) can impact on language and communication, with patients exhibiting a particular difficulty processing action verbs. Co-speech gestures embody a link between action and language and contribute significantly to communication in healthy people. Here, we investigated how co-speech gestures depicting actions are affected in PD, in particular with respect to the visual perspective—or the viewpoint—they depict. Gestures are closely related to mental imagery and motor simulations, but people with PD may be impaired in the way they simulate actions from a first-person perspective and may compensate for this by relying more on third-person visual features. We analysed the action-depicting gestures produced by mild-moderate PD patients and age-matched controls on an action description task and examined the relationship between gesture viewpoint, action naming, and performance on an action observation task (weight judgement). Healthy controls produced the majority of their action gestures from a first-person perspective, whereas PD patients produced a greater proportion of gestures from a third-person perspective. We propose that this reflects a compensatory reliance on third-person visual features in the simulation of actions in PD. Performance was also impaired in action naming and weight judgement, although this was unrelated to gesture viewpoint. Our findings provide a more comprehensive understanding of how action-language impairments in PD impact on action communication, on the cognitive underpinnings of this impairment, as well as elucidating the role of action simulation in gesture production.
  • Hustá, C., Dalmaijer, E., Belopolsky, A., & Mathôt, S. (2019). The pupillary light response reflects visual working memory content. Journal of Experimental Psychology: Human Perception and Performance, 45(11), 1522-1528. doi:10.1037/xhp0000689.

    Abstract

    Recent studies have shown that the pupillary light response (PLR) is modulated by higher cognitive functions, presumably through activity in visual sensory brain areas. Here we use the PLR to test the involvement of sensory areas in visual working memory (VWM). In two experiments, participants memorized either bright or dark stimuli. We found that pupils were smaller when a prestimulus cue indicated that a bright stimulus should be memorized; this reflects a covert shift of attention during encoding of items into VWM. Crucially, we obtained the same result with a poststimulus cue, which shows that internal shifts of attention within VWM affect pupil size as well. Strikingly, the effect of VWM content on pupil size was most pronounced immediately after the poststimulus cue, and then dissipated. This suggests that a shift of attention within VWM momentarily activates an "active" memory representation, but that this representation quickly transforms into a "hidden" state that does not rely on sensory areas.

    Additional information

    Supplementary_xhp0000689.docx
  • Hwang, S.-O., Tomita, N., Morgan, H., Ergin, R., İlkbaşaran, D., Seegers, S., Lepic, R., & Padden, C. (2016). Of the body and the hands: patterned iconicity for semantic categories. Language and Cognition, 9(4), 573-602. doi:10.1017/langcog.2016.28.

    Abstract

    This paper examines how gesturers and signers use their bodies to express concepts such as instrumentality and humanness. Comparing across eight sign languages (American, Japanese, German, Israeli, and Kenyan Sign Languages, Ha Noi Sign Language of Vietnam, Central Taurus Sign Language of Turkey, and Al-Sayyid Bedouin Sign Language of Israel) and the gestures of American non-signers, we find recurring patterns for naming entities in three semantic categories (tools, animals, and fruits & vegetables). These recurring patterns are captured in a classification system that identifies iconic strategies based on how the body is used together with the hands. Across all groups, tools are named with manipulation forms, where the head and torso represent those of a human agent. Animals tend to be identified with personification forms, where the body serves as a map for a comparable non-human body. Fruits & vegetables tend to be identified with object forms, where the hands act independently from the rest of the body to represent static features of the referent. We argue that these iconic patterns are rooted in using the body for communication, and provide a basis for understanding how meaningful communication emerges quickly in gesture and persists in emergent and established sign languages.
  • Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2019). How in-group bias influences source memory for words learned from in-group and out-group speakers. Frontiers in Human Neuroscience, 13: 308. doi:10.3389/fnhum.2019.00308.

    Abstract

    Individuals rapidly extract information about others’ social identity, including whether or not they belong to their in-group. Group membership status has been shown to affect how attentively people encode information conveyed by those others. These findings are highly relevant for the field of psycholinguistics, where there exists an open debate on how words are represented in the mental lexicon and how abstract or context-specific these representations are. Here, we used a novel word learning paradigm to test our proposal that the group membership status of speakers also affects how speaker-specific the representations of novel words are. Participants learned new words from speakers who either attended their own university (in-group speakers) or did not (out-group speakers) and performed a task to measure their individual in-group bias. Then, their source memory of the new words was tested in a recognition test to probe the speaker-specific content of the novel lexical representations and assess how it related to individual in-group biases. We found that speaker group membership and participants’ in-group bias affected participants’ decision biases. The stronger the in-group bias, the more cautious participants were in their decisions. This applied particularly to in-group-related decisions. These findings indicate that social biases can influence recognition thresholds. Taking a broader scope, defining how information is represented is a topic of great overlap between the fields of memory and psycholinguistics. Nevertheless, researchers from these fields tend to stay within the theoretical and methodological borders of their own field, missing the chance to deepen their understanding of phenomena that are of common interest. Here we show how methodologies developed in the memory field can be implemented in language research to shed light on an important theoretical issue that relates to the composition of lexical representations.

    Additional information

    Supplementary material
  • Iliadis, S. I., Sylvén, S., Hellgren, C., Olivier, J. D., Schijven, D., Comasco, E., Chrousos, G. P., Sundström Poromaa, I., & Skalkidou, A. (2016). Mid-pregnancy corticotropin-releasing hormone levels in association with postpartum depressive symptoms. Depression and Anxiety, 33(11), 1023-1030. doi:10.1002/da.22529.

    Abstract

    Background Peripartum depression is a common cause of pregnancy- and postpartum-related morbidity. The production of corticotropin-releasing hormone (CRH) from the placenta alters the profile of hypothalamus–pituitary–adrenal axis hormones and may be associated with postpartum depression. The purpose of this study was to assess, in nondepressed pregnant women, the possible association between CRH levels in pregnancy and depressive symptoms postpartum. Methods A questionnaire containing demographic data and the Edinburgh Postnatal Depression Scale (EPDS) was filled in in gestational weeks 17 and 32, and 6 weeks postpartum. Blood samples were collected in week 17 for assessment of CRH. A logistic regression model was constructed, using postpartum EPDS score as the dependent variable and log-transformed CRH levels as the independent variable. Confounding factors were included in the model. Subanalyses after exclusion of study subjects with preterm birth, newborns small for gestational age (SGA), and women on corticosteroids were performed. Results Five hundred thirty-five women without depressive symptoms during pregnancy were included. Logistic regression showed an association between high CRH levels in gestational week 17 and postpartum depressive symptoms, before and after controlling for several confounders (unadjusted OR = 1.11, 95% CI 1.01–1.22; adjusted OR = 1.13, 95% CI 1.02–1.26; per 0.1 unit increase in log CRH). Exclusion of women with preterm birth and newborns SGA as well as women who used inhalation corticosteroids during pregnancy did not alter the results. Conclusions This study suggests an association between high CRH levels in gestational week 17 and the development of postpartum depressive symptoms, among women without depressive symptoms during pregnancy.
  • Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1-2), 101-144. doi:10.1016/j.cognition.2002.06.001.

    Abstract

    This paper presents the results of a comprehensive meta-analysis of the relevant imaging literature on word production (82 experiments). In addition to the spatial overlap of activated regions, we also analyzed the available data on the time course of activations. The analysis specified regions and time windows of activation for the core processes of word production: lexical selection, phonological code retrieval, syllabification, and phonetic/articulatory preparation. A comparison of the word production results with studies on auditory word/non-word perception and reading showed that the time course of activations in word production is, on the whole, compatible with the temporal constraints that perception processes impose on the production processes they affect in picture/word interference paradigms.
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken. Neuropraxis, 2(6), 230-237.
  • Indefrey, P., Hellwig, F. M., Herzog, H., Seitz, R. J., & Hagoort, P. (2004). Neural responses to the production and comprehension of syntax in identical utterances. Brain and Language, 89(2), 312-319. doi:10.1016/S0093-934X(03)00352-3.

    Abstract

    Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment, subjects described visually presented scenes using either sentences, sequences of noun phrases, or sequences of syntactically unrelated words. In the language comprehension part of the experiment, subjects were auditorily presented with the same kinds of utterances and judged whether they matched the visual scenes. We were able to replicate the previous finding of a region in caudal Broca's area that is sensitive to the complexity of syntactic encoding in language production. In language comprehension, no hemodynamic activation differences due to syntactic complexity were found. Given that correct performance in the judgment task did not require syntactic processing of the auditory stimuli, the results suggest that the degree to which listeners recruit syntactic processing resources in language comprehension may be a function of the syntactic demands of the task or the stimulus material.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Indefrey, P. (2016). On putative shortcomings and dangerous future avenues: response to Strijkers & Costa. Language, Cognition and Neuroscience, 31(4), 517-520. doi:10.1080/23273798.2015.1128554.
  • Ioumpa, K., Graham, S. A., Clausner, T., Fisher, S. E., Van Lier, R., & Van Leeuwen, T. M. (2019). Enhanced self-reported affect and prosocial behaviour without differential physiological responses in mirror-sensory synaesthesia. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190395. doi:10.1098/rstb.2019.0395.

    Abstract

    Mirror-sensory synaesthetes mirror the pain or touch that they observe in other people on their own bodies. This type of synaesthesia has been associated with enhanced empathy. We investigated whether the enhanced empathy of people with mirror-sensory synaesthesia influences the experience of situations involving touch or pain and whether it affects their prosocial decision making. Mirror-sensory synaesthetes (N = 18, all female), verified with a touch-interference paradigm, were compared with a similar number of age-matched control individuals (all female). Participants viewed arousing images depicting pain or touch; we recorded subjective valence and arousal ratings, and physiological responses, hypothesizing more extreme reactions in synaesthetes. The subjective impact of positive and negative images was stronger in synaesthetes than in control participants; the stronger the reported synaesthesia, the more extreme the picture ratings. However, there was no evidence for differential physiological or hormonal responses to arousing pictures. Prosocial decision making was assessed with an economic game assessing altruism, in which participants had to divide money between themselves and a second player. Mirror-sensory synaesthetes donated more money than non-synaesthetes, showing enhanced prosocial behaviour, and also scored higher on the Interpersonal Reactivity Index as a measure of empathy. Our study demonstrates the subjective impact of mirror-sensory synaesthesia and its stimulating influence on prosocial behaviour.

  • Isaac, A., Wang, S., Van der Meij, L., Schlobach, S., Zinn, C., & Matthezing, H. (2009). Evaluating thesaurus alignments for semantic interoperability in the library domain. IEEE Intelligent Systems, 24(2), 76-86.

    Abstract

    Thesaurus alignments play an important role in realising efficient access to heterogeneous Cultural Heritage data. Current technology, however, provides only limited value for such access as it fails to bridge the gap between theoretical study and user needs that stem from practical application requirements. In this paper, we explore common real-world problems of a library, and identify solutions that would greatly benefit from a more application embedded study, development, and evaluation of matching technology.
  • Ischebeck, A., Indefrey, P., Usui, N., Nose, I., Hellwig, F. M., & Taira, M. (2004). Reading in a regular orthography: An fMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16(5), 727-741. doi:10.1162/089892904970708.

    Abstract

    In order to separate the cognitive processes associated with phonological encoding and the use of a visual word form lexicon in reading, it is desirable to compare the processing of words presented in a visually familiar form with words in a visually unfamiliar form. Japanese Kana orthography offers this possibility. Two phonologically equivalent but visually dissimilar syllabaries allow the writing of, for example, foreign loanwords in two ways, only one of which is visually familiar. Familiarly written words, unfamiliarly written words, and pseudowords were presented in both Kana syllabaries (yielding six conditions in total) to participants during an fMRI measurement with a silent articulation task (Experiment 1) and a phonological lexical decision task (Experiment 2) using an event-related design. Consistent over two experimental tasks, the three different stimulus types (familiar, unfamiliar, and pseudoword) were found to activate selectively different brain regions previously associated with phonological encoding and word retrieval or meaning. Compatible with the predictions of the dual-route model for reading, pseudowords and visually unfamiliar words, which have to be read using phonological assembly, caused an increase in brain activity in left inferior frontal regions (BA 44/47), as compared to visually familiar words. Visually familiar and unfamiliar words were found to activate a range of areas associated with lexico-semantic processing more strongly than pseudowords, such as the left and right temporo-parietal region (BA 39/40), a region in the left middle/inferior temporal gyrus (BA 20/21), and the posterior cingulate (BA 31).
  • Ito, A., Corley, M., Pickering, M. J., Martin, A. E., & Nieuwland, M. S. (2016). Predicting form and meaning: Evidence from brain potentials. Journal of Memory and Language, 86, 157-171. doi:10.1016/j.jml.2015.10.007.

    Abstract

    We used ERPs to investigate the pre-activation of form and meaning in language comprehension. Participants read high-cloze sentence contexts (e.g., “The student is going to the library to borrow a…”), followed by a word that was predictable (book), form-related (hook) or semantically related (page) to the predictable word, or unrelated (sofa). At a 500 ms SOA (Experiment 1), semantically related words, but not form-related words, elicited a reduced N400 compared to unrelated words. At a 700 ms SOA (Experiment 2), semantically related words and form-related words elicited reduced N400 effects, but the effect for form-related words occurred in very high-cloze sentences only. At both SOAs, form-related words elicited an enhanced, post-N400 posterior positivity (Late Positive Component effect). The N400 effects suggest that readers can pre-activate meaning and form information for highly predictable words, but form pre-activation is more limited than meaning pre-activation. The post-N400 LPC effect suggests that participants detected the form similarity between expected and encountered input. Pre-activation of word forms crucially depends upon the time that readers have to make predictions, in line with production-based accounts of linguistic prediction.
  • Iyer, S., Sam, F. S., DiPrimio, N., Preston, G., Verheijen, J., Murthy, K., Parton, Z., Tsang, H., Lao, J., Morava, E., & Perlstein, E. O. (2019). Repurposing the aldose reductase inhibitor and diabetic neuropathy drug epalrestat for the congenital disorder of glycosylation PMM2-CDG. Disease Models & Mechanisms, 12(11): dmm040584. doi:10.1242/dmm.040584.

    Abstract

    Phosphomannomutase 2 deficiency, or PMM2-CDG, is the most common congenital disorder of glycosylation and affects over 1000 patients globally. There are no approved drugs that treat the symptoms or root cause of PMM2-CDG. To identify clinically actionable compounds that boost human PMM2 enzyme function, we performed a multispecies drug repurposing screen using a novel worm model of PMM2-CDG, followed by PMM2 enzyme functional studies in PMM2-CDG patient fibroblasts. Drug repurposing candidates from this study, and drug repurposing candidates from a previously published study using yeast models of PMM2-CDG, were tested for their effect on human PMM2 enzyme activity in PMM2-CDG fibroblasts. Of the 20 repurposing candidates discovered in the worm-based phenotypic screen, 12 were plant-based polyphenols. Insights from structure-activity relationships revealed epalrestat, the only antidiabetic aldose reductase inhibitor approved for use in humans, as a first-in-class PMM2 enzyme activator. Epalrestat increased PMM2 enzymatic activity in four PMM2-CDG patient fibroblast lines with genotypes R141H/F119L, R141H/E139K, R141H/N216I and R141H/F183S. PMM2 enzyme activity gains ranged from 30% to 400% over baseline, depending on genotype. Pharmacological inhibition of aldose reductase by epalrestat may shunt glucose from the polyol pathway to glucose-1,6-bisphosphate, which is an endogenous stabilizer and coactivator of PMM2 homodimerization. Epalrestat is a safe, oral and brain penetrant drug that was approved 27 years ago in Japan to treat diabetic neuropathy in geriatric populations. We demonstrate that epalrestat is the first small molecule activator of PMM2 enzyme activity with the potential to treat peripheral neuropathy and correct the underlying enzyme deficiency in a majority of pediatric and adult PMM2-CDG patients.

    Additional information

    DMM040584supp.pdf
  • Jaeger, T. F., & Norcliffe, E. (2009). The cross-linguistic study of sentence production. Language and Linguistics Compass, 3, 866-887. doi:10.1111/j.1749-818x.2009.00147.x.

    Abstract

    The mechanisms underlying language production are often assumed to be universal, and hence not contingent on a speaker’s language. This assumption is problematic for at least two reasons. Given the typological diversity of the world’s languages, only a small subset of languages has actually been studied psycholinguistically. And, in some cases, these investigations have returned results that at least superficially raise doubt about the assumption of universal production mechanisms. The goal of this paper is to illustrate the need for more psycholinguistic work on a typologically more diverse set of languages. We summarize cross-linguistic work on sentence production (specifically: grammatical encoding), focusing on examples where such work has improved our theoretical understanding beyond what studies on English alone could have achieved. But cross-linguistic research has much to offer beyond the testing of existing hypotheses: it can guide the development of theories by revealing the full extent of the human ability to produce language structures. We discuss the potential for interdisciplinary collaborations, and close with a remark on the impact of language endangerment on psycholinguistic research on understudied languages.
  • Janse, E., & Klitsch, J. (2004). Auditieve perceptie bij gezonde sprekers en bij sprekers met verworven taalstoornissen. Afasiologie, 26(1), 2-6.
  • Janse, E. (2009). Neighbourhood density effects in auditory nonword processing in aphasic listeners. Clinical Linguistics and Phonetics, 23(3), 196-207. doi:10.1080/02699200802394989.

    Abstract

    This study investigates neighbourhood density effects on lexical decision performance (both accuracy and response times) of aphasic patients. Given earlier results on lexical activation and deactivation in Broca's and Wernicke's aphasia, the prediction was that smaller neighbourhood density effects would be found for Broca's aphasic patients, compared to age-matched non-brain-damaged control participants, whereas enlarged density effects were expected for Wernicke's aphasic patients. The results showed density effects for all three groups of listeners, and overall differences in performance between groups, but no significant interaction between neighbourhood density and listener group. Several factors are discussed to account for the present results.
  • Janse, E. (2009). Processing of fast speech by elderly listeners. Journal of the Acoustical Society of America, 125(4), 2361-2373. doi:10.1121/1.3082117.

    Abstract

    This study investigates the relative contributions of auditory and cognitive factors to the common finding that an increase in speech rate affects elderly listeners more than young listeners. Since a direct relation between non-auditory factors, such as age-related cognitive slowing, and fast speech performance has been difficult to demonstrate, the present study took an on-line, rather than off-line, approach and focused on processing time. Elderly and young listeners were presented with speech at two rates of time compression and were asked to detect pre-assigned target words as quickly as possible. A number of auditory and cognitive measures were entered in a statistical model as predictors of elderly participants’ fast speech performance: hearing acuity, an information processing rate measure, and two measures of reading speed. The results showed that hearing loss played a primary role in explaining elderly listeners’ increased difficulty with fast speech. However, non-auditory factors such as reading speed and the extent to which participants were affected by increased rate of presentation in a visual analog of the listening experiment also predicted fast speech performance differences among the elderly participants. These on-line results confirm that slowed information processing is indeed part of elderly listeners’ problem keeping up with fast language.
  • Janse, E., & Ernestus, M. (2009). Recognition of reduced speech and use of phonetic context in listeners with age-related hearing impairment [Abstract]. Journal of the Acoustical Society of America, 125(4), 2535.
  • Janse, E. (2004). Word perception in fast speech: Artificially time-compressed vs. naturally produced fast speech. Speech Communication, 42, 155-173. doi:10.1016/j.specom.2003.07.001.

    Abstract

    Natural fast speech differs from normal-rate speech with respect to its temporal pattern. Previous results showed that word intelligibility of heavily artificially time-compressed speech could not be improved by making its temporal pattern more similar to that of natural fast speech. This might have been due to the extrapolation of timing rules for natural fast speech to rates that are much faster than can be attained by human speakers. The present study investigates whether, at a speech rate that human speakers can attain, artificially time-compressed speech is easier to process if its timing pattern is similar to that of naturally produced fast speech. Our first experiment suggests, however, that word processing speed was slowed down, relative to linear compression. In a second experiment, word processing of artificially time-compressed speech was compared with processing of naturally produced fast speech. Even when naturally produced fast speech is perfectly intelligible, its less careful articulation, combined with the changed timing pattern, slows down processing, relative to linearly time-compressed speech. Furthermore, listeners preferred artificially time-compressed speech over naturally produced fast speech. These results suggest that linearly time-compressed speech has both a temporal and a segmental advantage over natural fast speech.
  • Jansma, B. M., & Schiller, N. O. (2004). Monitoring syllable boundaries during speech production. Brain and Language, 90(1-3), 311-317. doi:10.1016/S0093-934X(03)00443-7.

    Abstract

    This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No ‘canoe’ vs. kaN.sel ‘pulpit’; capital letters indicate pivotal consonants, dots mark syllable boundaries). First syllable responses were faster than second syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.
  • Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2004). Stem complexity and inflectional encoding in language production. Journal of Psycholinguistic Research, 33(5), 365-381. doi:10.1023/B:JOPR.0000039546.60121.a8.

    Abstract

    Three experiments are reported that examined whether stem complexity plays a role in inflecting polymorphemic words in language production. Experiment 1 showed that preparation effects for words with polymorphemic stems are larger when they are produced among words with constant inflectional structures compared to words with variable inflectional structures and simple stems. This replicates earlier findings for words with monomorphemic stems (Janssen et al., 2002). Experiments 2 and 3 showed that when inflectional structure is held constant, the preparation effects are equally large with simple and compound stems, and with compound and complex adjectival stems. These results indicate that inflectional encoding is blind to the complexity of the stem, which suggests that specific inflectional rather than generic morphological frames guide the generation of inflected forms in speaking words.
  • Janssen, C., Segers, E., McQueen, J. M., & Verhoeven, L. (2019). Comparing effects of instruction on word meaning and word form on early literacy abilities in kindergarten. Early Education and Development, 30(3), 375-399. doi:10.1080/10409289.2018.1547563.

    Abstract

    Research Findings: The present study compared effects of explicit instruction on and practice with the phonological form of words (form-focused instruction) versus explicit instruction on and practice with the meaning of words (meaning-focused instruction). Instruction was given via interactive storybook reading in the kindergarten classroom of children learning Dutch. We asked whether the 2 types of instruction had different effects on vocabulary development and 2 precursors of reading ability—phonological awareness and letter knowledge—and we examined effects on these measures of the ability to learn new words with minimal acoustic-phonetic differences. Learners showed similar receptive target-word vocabulary gain after both types of instruction, but learners who received form-focused vocabulary instruction showed more gain in semantic knowledge of target vocabulary, phonological awareness, and letter knowledge than learners who received meaning-focused vocabulary instruction. Level of ability to learn pairs of words with minimal acoustic-phonetic differences predicted gain in semantic knowledge of target vocabulary and in letter knowledge in the form-focused instruction group only. Practice or Policy: A focus on the form of words during instruction appears to have benefits for young children learning vocabulary.
  • Janssen, R., Nolfi, S., Haselager, W. F. G., & Sprinkhuizen-Kuyper, I. G. (2016). Cyclic Incrementality in Competitive Coevolution: Evolvability through Pseudo-Baldwinian Switching-Genes. Artificial Life, 22(3), 319-352. doi:10.1162/ARTL_a_00208.

    Abstract

    Coevolving systems are notoriously difficult to understand. This is largely due to the Red Queen effect that dictates heterospecific fitness interdependence. In simulation studies of coevolving systems, master tournaments are often used to obtain more informed fitness measures by testing evolved individuals against past and future opponents. However, such tournaments still contain certain ambiguities. We introduce the use of a phenotypic cluster analysis to examine the distribution of opponent categories throughout an evolutionary sequence. This analysis, adopted from widespread usage in the bioinformatics community, can be applied to master tournament data. This allows us to construct behavior-based category trees, obtaining a hierarchical classification of phenotypes that are suspected to interleave during cyclic evolution. We use the cluster data to establish the existence of switching-genes that control opponent specialization, suggesting the retention of dormant genetic adaptations, that is, genetic memory. Our overarching goal is to reiterate how computer simulations may have importance to the broader understanding of evolutionary dynamics in general. We emphasize a further shift from a component-driven to an interaction-driven perspective in understanding coevolving systems. As yet, it is unclear how the sudden development of switching-genes relates to the gradual emergence of genetic adaptability. Likely, context genes gradually provide the appropriate genetic environment wherein the switching-gene effect can be exploited.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2019). The effects of larynx height on vowel production are mitigated by the active control of articulators. Journal of Phonetics, 74, 1-17. doi:10.1016/j.wocn.2019.02.002.

    Abstract

    The influence of larynx position on vowel articulation is an important topic in understanding speech production, the present-day distribution of linguistic diversity and the evolution of speech and language in our lineage. We introduce here a realistic computer model of the vocal tract, constructed from actual human MRI data, which can learn, using machine learning techniques, to control the articulators in such a way as to produce speech sounds matching as closely as possible to a given set of target vowels. We systematically control the vertical position of the larynx and we quantify the differences between the target and produced vowels for each such position across multiple replications. We report that, indeed, larynx height does affect the accuracy of reproducing the target vowels and the distinctness of the produced vowel system, that there is a “sweet spot” of larynx positions that are optimal for vowel production, but that nevertheless, even extreme larynx positions do not result in a collapsed or heavily distorted vowel space that would make speech unintelligible. Together with other lines of evidence, our results support the view that the vowel space of human languages is influenced by our larynx position, but that other positions of the larynx may also be fully compatible with speech.

    Additional information

    Research Data via Github
  • Janzen, G., & Van Turennout, M. (2004). Selective neural representation of objects relevant for navigation. Nature Neuroscience, 7(6), 673-677. doi:10.1038/nn1257.

    Abstract

    As people find their way through their environment, objects at navigationally relevant locations can serve as crucial landmarks. The parahippocampal gyrus has previously been shown to be involved in object and scene recognition. In the present study, we investigated the neural representation of navigationally relevant locations. Healthy human adults viewed a route through a virtual museum with objects placed at intersections (decision points) or at simple turns (non-decision points). Event-related functional magnetic resonance imaging (fMRI) data were acquired during subsequent recognition of the objects in isolation. Neural activity in the parahippocampal gyrus reflected the navigational relevance of an object's location in the museum. Parahippocampal responses were selectively increased for objects that occurred at decision points, independent of attentional demands. This increase occurred for forgotten as well as remembered objects, showing implicit retrieval of navigational information. The automatic storage of relevant object location in the parahippocampal gyrus provides a part of the neural mechanism underlying successful navigation.
  • Järvikivi, J., Pyykkönen, P., & Niemi, J. (2009). Exploiting degrees of inflectional ambiguity: Stem form and the time course of morphological processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(1), 221-237. doi:10.1037/a0014355.

    Abstract

    The authors compared sublexical and supralexical approaches to morphological processing with unambiguous and ambiguous inflected words and words with ambiguous stems in 3 masked and unmasked priming experiments in Finnish. Experiment 1 showed equal facilitation for all prime types with a short 60-ms stimulus onset asynchrony (SOA) but significant facilitation for unambiguous words only with a long 300-ms SOA. Experiment 2 showed that all potential readings of ambiguous inflections were activated under a short SOA. Whereas the prime-target form overlap did not affect the results under a short SOA, it significantly modulated the results with a long SOA. Experiment 3 confirmed that the results from masked priming were modulated by the morphological structure of the words but not by the prime-target form overlap alone. The results support approaches in which early prelexical morphological processing is driven by morph-based segmentation and form is used to cue selection between 2 candidates only during later processing.

  • Jaspers, D., & Seuren, P. A. M. (2016). The Square of opposition in catholic hands: A chapter in the history of 20th-century logic. Logique et Analyse, 59(233), 1-35.

    Abstract

    The present study describes how three now almost forgotten mid-20th-century logicians, the American Paul Jacoby and the Frenchmen Augustin Sesmat and Robert Blanché, all three ardent Catholics, tried to restore traditional predicate logic to a position of respectability by expanding the classic Square of Opposition to a hexagon of logical relations, showing the logical and cognitive advantages of such an expansion. The nature of these advantages is discussed in the context of modern research regarding the relations between logic, language, and cognition. It is desirable to call attention to these attempts, as they are, though almost totally forgotten, highly relevant against the backdrop of the clash between modern and traditional logic. It is argued that this clash was and is unnecessary, as both forms of predicate logic are legitimate, each in its own right. The attempts by Jacoby, Sesmat, and Blanché are, moreover, of interest to the history of logic in a cultural context in that, in their own idiosyncratic ways, they fit into the general pattern of the Catholic cultural revival that took place roughly between the years 1840 and 1960. The Catholic Church had put up stiff resistance to modern mathematical logic, considering it dehumanizing and a threat to Catholic doctrine. Both the wider cultural context and the specific implications for logic are described and analyzed, in conjunction with the more general philosophical and doctrinal issues involved.
  • Jesse, A., & Janse, E. (2009). Seeing a speaker's face helps stream segregation for younger and elderly adults [Abstract]. Journal of the Acoustical Society of America, 125(4), 2361.
  • Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115. doi:10.1075/intp.5.2.04jes.

    Abstract

    Language processing is influenced by multiple sources of information. We examined whether the performance in simultaneous interpreting would be improved when providing two sources of information, the auditory speech as well as corresponding lip-movements, in comparison to presenting the auditory speech alone. Although there was an improvement in sentence recognition when presented with visible speech, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visual speech did not contribute to performance could be the presentation of the auditory signal without noise (Massaro, 1998). This hypothesis should be tested in the future. Furthermore, it should be investigated if an effect of visible speech can be found for other contexts, when visual information could provide cues for emotions, prosody, or syntax.
  • Jiang, T., Zhang, W., Wen, W., Zhu, H., Du, H., Zhu, X., Gao, X., Zhang, H., Dong, Q., & Chen, C. (2016). Reevaluating the two-representation model of numerical magnitude processing. Memory & Cognition, 44, 162-170. doi:10.3758/s13421-015-0542-2.

    Abstract

    One debate in mathematical cognition centers on the single-representation model versus the two-representation model. Using an improved number Stroop paradigm (i.e., systematically manipulating physical size distance), in the present study we tested the predictions of the two models for number magnitude processing. The results supported the single-representation model and, more importantly, explained how a design problem (failure to manipulate physical size distance) and an analytical problem (failure to consider the interaction between congruity and task-irrelevant numerical distance) might have contributed to the evidence used to support the two-representation model. This study, therefore, can help settle the debate between the single-representation and two-representation models.
