Publications

  • Güldemann, T., & Hammarström, H. (2020). Geographical axis effects in large-scale linguistic distributions. In M. Crevels, & P. Muysken (Eds.), Language Dispersal, Diversification, and Contact. Oxford: Oxford University Press.
  • Gussenhoven, C., & Chen, A. (2000). Universal and language-specific effects in the perception of question intonation. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP) (pp. 91-94). Beijing: China Military Friendship Publish.

    Abstract

    Three groups of monolingual listeners, with Standard Chinese, Dutch and Hungarian as their native language, judged pairs of trisyllabic stimuli which differed only in their pitch pattern. The segmental structure of the stimuli was made up by the experimenters and presented to subjects as being taken from a little-known language spoken on a South Pacific island. Pitch patterns consisted of a single rise-fall located on or near the second syllable. By and large, listeners selected the stimulus with the higher peak, the later peak, and the higher end rise as the one that signalled a question, regardless of language group. The result is argued to reflect innate, non-linguistic knowledge of the meaning of pitch variation, notably Ohala’s Frequency Code. A significant difference between groups is explained as due to the influence of the mother tongue.
  • Haan, E. H. F., Seijdel, N., Kentridge, R. W., & Heywood, C. A. (2020). Plasticity versus chronicity: Stable performance on category fluency 40 years post‐onset. Journal of Neuropsychology, 14(1), 20-27. doi:10.1111/jnp.12180.

    Abstract

    What is the long‐term trajectory of semantic memory deficits in patients who have suffered structural brain damage? Memory is, by definition, a changing faculty. The traditional view is that after an initial recovery period, the mature human brain has little capacity to repair or reorganize. More recently, it has been suggested that the central nervous system may be more plastic with the ability to change in neural structure, connectivity, and function. The latter observations are, however, largely based on normal learning in healthy subjects. Here, we report a patient who suffered bilateral ventro‐medial damage after presumed herpes encephalitis in 1971. He was seen regularly in the eighties, and we recently had the opportunity to re‐assess his semantic memory deficits. On semantic category fluency, he showed a very clear category‐specific deficit, performing better than control data on non‐living categories and significantly worse on living items. Recent testing showed that his impairments have remained unchanged for more than 40 years. We suggest cautiousness when extrapolating the concept of brain plasticity, as observed during normal learning, to plasticity in the context of structural brain damage.
  • Hagoort, P. (2000). De toekomstige eeuw der cognitieve neurowetenschap [inaugural lecture]. Katholieke Universiteit Nijmegen.

    Abstract

    Inaugural lecture delivered on 12 May 2000 upon acceptance of the post of professor of neuropsychology at the Faculty of Social Sciences, Katholieke Universiteit Nijmegen (KUN).
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments sentences were presented that contained three different types of grammatical violations. In one experiment sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS), 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal.
  • Hagoort, P. (2016). MUC (Memory, Unification, Control): A Model on the Neurobiology of Language Beyond Single Word Processing. In G. Hickok, & S. Small (Eds.), Neurobiology of language (pp. 339-347). Amsterdam: Elsevier. doi:10.1016/B978-0-12-407794-2.00028-6.

    Abstract

    A neurobiological model of language is discussed that overcomes the shortcomings of the classical Wernicke-Lichtheim-Geschwind model. It is based on a subdivision of language processing into three components: Memory, Unification, and Control. The functional components as well as the neurobiological underpinnings of the model are discussed. In addition, the need for extension beyond the classical core regions for language is shown. Attentional networks as well as networks for inferential processing are crucial to realize language comprehension beyond single word processing and beyond decoding propositional content.
  • Hagoort, P. (2020). Taal. In O. Van den Heuvel, Y. Van der Werf, B. Schmand, & B. Sabbe (Eds.), Leerboek neurowetenschappen voor de klinische psychiatrie (pp. 234-239). Amsterdam: Boom Uitgevers.
  • Hagoort, P. (2000). What we shall know only tomorrow. Brain and Language, 71, 89-92. doi:10.1006/brln.1999.2221.
  • Hagoort, P. (2016). Zij zijn ons brein. In J. Brockman (Ed.), Machines die denken: Invloedrijke denkers over de komst van kunstmatige intelligentie (pp. 184-186). Amsterdam: Maven Publishing.
  • Hahn, L. E., Ten Buuren, M., Snijders, T. M., & Fikkert, P. (2020). Learning words in a second language while cycling and listening to children’s songs: The Noplica Energy Center. International Journal of Music in Early Childhood, 15(1), 95-108. doi:10.1386/ijmec_00014_1.

    Abstract

    Children’s songs are a great source for linguistic learning. Here we explore whether children can acquire novel words in a second language by playing a game featuring children’s songs in a playhouse. The playhouse is designed by the Noplica foundation (www.noplica.nl) to advance language learning through unsupervised play. We present data from three experiments that serve to scientifically prove the functionality of one game of the playhouse: the Energy Center. For this game, children move three hand-bikes mounted on a panel within the playhouse. Once the children cycle, a song starts playing that is accompanied by musical instruments. In our experiments, children executed a picture-selection task to evaluate whether they acquired new vocabulary from the songs presented during the game. Two of our experiments were run in the field, one at a Dutch and one at an Indian pre-school. The third experiment features data from a more controlled laboratory setting. Our results partly confirm that the Energy Center is a successful means to support vocabulary acquisition in a second language. More research with larger sample sizes and longer access to the Energy Center is needed to evaluate the overall functionality of the game. Based on informal observations at our test sites, however, we are certain that children do pick up linguistic content from the songs during play, as many of the children repeat words and phrases from the songs they heard. We will follow up on these promising observations in future studies.
  • Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2020). Six-month-old infants recognize phrases in song and speech. Infancy, 25(5), 699-718. doi:10.1111/infa.12357.

    Abstract

    Infants exploit acoustic boundaries to perceptually organize phrases in speech. This prosodic parsing ability is well‐attested and is a cornerstone to the development of speech perception and grammar. However, infants also receive linguistic input in child songs. This study provides evidence that infants parse songs into meaningful phrasal units and replicates previous research for speech. Six‐month‐old Dutch infants (n = 80) were tested in the song or speech modality in the head‐turn preference procedure. First, infants were familiarized to two versions of the same word sequence: One version represented a well‐formed unit, and the other contained a phrase boundary halfway through. At test, infants were presented two passages, each containing one version of the familiarized sequence. The results for speech replicated the previously observed preference for the passage containing the well‐formed sequence, but only in a more fine‐grained analysis. The preference for well‐formed phrases was also observed in the song modality, indicating that infants recognize phrase structure in song. There were acoustic differences between stimuli of the current and previous studies, suggesting that infants are flexible in their processing of boundary cues while also providing a possible explanation for differences in effect sizes.

    Additional information

    infa12357-sup-0001-supinfo.zip
  • Hammarström, H. (2016). Commentary: There is no demonstrable effect of desiccation [Commentary on "Language evolution and climate: The case of desiccation and tone'']. Journal of Language Evolution, 1, 65-69. doi:10.1093/jole/lzv015.
  • Hammarström, H. (2016). Linguistic diversity and language evolution. Journal of Language Evolution, 1, 19-29. doi:10.1093/jole/lzw002.

    Abstract

    What would your ideas about language evolution be if there was only one language left on earth? Fortunately, our investigation need not be that impoverished. In the present article, we survey the state of knowledge regarding the kinds of language found among humans, the language inventory, population sizes, time depth, grammatical variation, and other relevant issues that a theory of language evolution should minimally take into account.
  • Hao, X., Huang, Y., Li, X., Song, Y., Kong, X., Wang, X., Yang, Z., Zhen, Z., & Liu, J. (2016). Structural and functional neural correlates of spatial navigation: A combined voxel‐based morphometry and functional connectivity study. Brain and Behavior, 6(12): e00572. doi:10.1002/brb3.572.

    Abstract

    Introduction: Navigation is a fundamental and multidimensional cognitive function that individuals rely on to move around the environment. In this study, we investigated the neural basis of human spatial navigation ability. Methods: A large cohort of participants (N > 200) was examined on their navigation ability behaviorally, and structural and functional magnetic resonance imaging (MRI) were then used to explore the corresponding neural basis of spatial navigation. Results: The gray matter volume (GMV) of the bilateral parahippocampus (PHG), retrosplenial complex (RSC), entorhinal cortex (EC), hippocampus (HPC), and thalamus (THAL) was correlated with the participants’ self-reported navigational ability in general, and their sense of direction in particular. Further fMRI studies showed that the PHG, RSC, and EC selectively responded to visually presented scenes, whereas the HPC and THAL showed no selectivity, suggesting a functional division of labor among these regions in spatial navigation. The resting-state functional connectivity analysis further revealed a hierarchical neural network for navigation constituted by these regions, which can be further categorized into three relatively independent components (i.e., scene recognition component, cognitive map component, and the component of heading direction for locomotion, respectively). Conclusions: Our study combined multi-modality imaging data to illustrate that multiple brain regions may work collaboratively to extract, integrate, store, and orientate spatial information to guide navigation behaviors.

    Additional information

    brb3572-sup-0001-FigS1-S4.docx
  • Harbusch, K., & Kempen, G. (2000). Complexity of linear order computation in Performance Grammar, TAG and HPSG. In Proceedings of Fifth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+5) (pp. 101-106).

    Abstract

    This paper investigates the time and space complexity of word order computation in the psycholinguistically motivated grammar formalism of Performance Grammar (PG). In PG, the first stage of syntax assembly yields an unordered tree ('mobile') consisting of a hierarchy of lexical frames (lexically anchored elementary trees). Associated with each lexical frame is a linearizer—a Finite-State Automaton that locally computes the left-to-right order of the branches of the frame. Linearization takes place after the promotion component may have raised certain constituents (e.g. Wh- or focused phrases) into the domain of lexical frames higher up in the syntactic mobile. We show that the worst-case time and space complexity of analyzing input strings of length n is O(n⁵) and O(n⁴), respectively. This result compares favorably with the time complexity of word-order computations in Tree Adjoining Grammar (TAG). A comparison with Head-Driven Phrase Structure Grammar (HPSG) reveals that PG yields a more declarative linearization method, provided that the FSA is rewritten as an equivalent regular expression.
  • Harmon, Z., & Kapatsinski, V. (2020). The best-laid plan of mice and men: Competition between top-down and preceding-item cues in plan execution. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 1674-1680). Montreal, QB: Cognitive Science Society.

    Abstract

    There is evidence that the process of executing a planned utterance involves the use of both preceding-context and top-down cues. Utterance-initial words are cued only by the top-down plan. In contrast, non-initial words are cued both by top-down cues and preceding-context cues. Co-existence of both cue types raises the question of how they interact during learning. We argue that this interaction is competitive: items that tend to be preceded by predictive preceding-context cues are harder to activate from the plan without this predictive context. A novel computational model of this competition is developed. The model is tested on a corpus of repetition disfluencies and shown to account for the influences on patterns of restarts during production. In particular, this model predicts a novel Initiation Effect: following an interruption, speakers re-initiate production from words that tend to occur in utterance-initial position, even when they are not initial in the interrupted utterance.
  • Harmon, Z., & Kapatsinski, V. (2016). Fuse to be used: A weak cue’s guide to attracting attention. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 520-525). Austin, TX: Cognitive Science Society.

    Abstract

    Several studies examined cue competition in human learning by testing learners on a combination of conflicting cues rooting for different outcomes, with each cue perfectly predicting its outcome. A common result has been that learners faced with cue conflict choose the outcome associated with the rare cue (the Inverse Base Rate Effect, IBRE). Here, we investigate cue competition including IBRE with sentences containing cues to meanings in a visual world. We do not observe IBRE. Instead we find that position in the sentence strongly influences cue salience. Faced with conflict between an initial cue and a non-initial cue, learners choose the outcome associated with the initial cue, whether frequent or rare. However, a frequent configuration of non-initial cues that are not sufficiently salient on their own can overcome a competing salient initial cue rooting for a different meaning. This provides a possible explanation for certain recurring patterns in language change.
  • Harmon, Z., & Kapatsinski, V. (2016). Determinants of lengths of repetition disfluencies: Probabilistic syntactic constituency in speech production. In R. Burkholder, C. Cisneros, E. R. Coppess, J. Grove, E. A. Hanink, H. McMahan, C. Meyer, N. Pavlou, Ö. Sarıgül, A. R. Singerman, & A. Zhang (Eds.), Proceedings of the Fiftieth Annual Meeting of the Chicago Linguistic Society (pp. 237-248). Chicago: Chicago Linguistic Society.
  • Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2016). Taking perspective: Personal pronouns affect experiential aspects of literary reading. PLoS One, 11(5): e0154732. doi:10.1371/journal.pone.0154732.

    Abstract

    Personal pronouns have been shown to influence cognitive perspective taking during comprehension. Studies using single sentences found that 3rd person pronouns facilitate the construction of a mental model from an observer’s perspective, whereas 2nd person pronouns support an actor’s perspective. The direction of the effect for 1st person pronouns seems to depend on the situational context. In the present study, we investigated how personal pronouns influence discourse comprehension when people read fiction stories and if this has consequences for affective components like emotion during reading or appreciation of the story. We wanted to find out if personal pronouns affect immersion and arousal, as well as appreciation of fiction. In a natural reading paradigm, we measured electrodermal activity and story immersion, while participants read literary stories with 1st and 3rd person pronouns referring to the protagonist. In addition, participants rated and ranked the stories for appreciation. Our results show that stories with 1st person pronouns lead to higher immersion. Two factors—transportation into the story world and mental imagery during reading—in particular showed higher scores for 1st person as compared to 3rd person pronoun stories. In contrast, arousal as measured by electrodermal activity seemed tentatively higher for 3rd person pronoun stories. The two measures of appreciation were not affected by the pronoun manipulation. Our findings underscore the importance of perspective for language processing, and additionally show which aspects of the narrative experience are influenced by a change in perspective.
  • Hashemzadeh, M., Kaufeld, G., White, M., Martin, A. E., & Fyshe, A. (2020). From language to language-ish: How brain-like is an LSTM representation of nonsensical language stimuli? In T. Cohn, Y. He, & Y. Liu (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 645-655). Association for Computational Linguistics.

    Abstract

    The representations generated by many models of language (word embeddings, recurrent neural networks and transformers) correlate to brain activity recorded while people read. However, these decoding results are usually based on the brain’s reaction to syntactically and semantically sound language stimuli. In this study, we asked: how does an LSTM (long short term memory) language model, trained (by and large) on semantically and syntactically intact language, represent a language sample with degraded semantic or syntactic information? Does the LSTM representation still resemble the brain’s reaction? We found that, even for some kinds of nonsensical language, there is a statistically significant relationship between the brain’s activity and the representations of an LSTM. This indicates that, at least in some instances, LSTMs and the human brain handle nonsensical data similarly.
  • Havron, N., Bergmann, C., & Tsuji, S. (2020). Preregistration in infant research - A primer. Infancy, 25(5), 734-754. doi:10.1111/infa.12353.

    Abstract

    Preregistration, the act of specifying a research plan in advance, is becoming more common in scientific research. Infant researchers contend with unique problems that might make preregistration particularly challenging. Infants are a hard‐to‐reach population, usually yielding small sample sizes, they can only complete a limited number of trials, and they can be excluded based on hard‐to‐predict complications (e.g., parental interference, fussiness). In addition, as effects themselves potentially change with age and population, it is hard to calculate an a priori effect size. At the same time, these very factors make preregistration in infant studies a valuable tool. A priori examination of the planned study, including the hypotheses, sample size, and resulting statistical power, increases the credibility of single studies and adds value to the field. Preregistration might also improve explicit decision making to create better studies. We present an in‐depth discussion of the issues uniquely relevant to infant researchers, and ways to contend with them in preregistration and study planning. We provide recommendations to researchers interested in following current best practices.

    Additional information

    Preprint version on OSF
  • De Heer Kloots, M., Carlson, D., Garcia, M., Kotz, S., Lowry, A., Poli-Nardi, L., de Reus, K., Rubio-García, A., Sroka, M., Varola, M., & Ravignani, A. (2020). Rhythmic perception, production and interactivity in harbour and grey seals. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 59-62). Nijmegen: The Evolution of Language Conferences.
  • Heidlmayr, K., Kihlstedt, M., & Isel, F. (2020). A review on the electroencephalography markers of Stroop executive control processes. Brain and Cognition, 146: 105637. doi:10.1016/j.bandc.2020.105637.

    Abstract

    The present article on executive control addresses the issue of the locus of the Stroop effect by examining neurophysiological components marking conflict monitoring, interference suppression, and conflict resolution. Our goal was to provide an overview of a series of determining neurophysiological findings including neural source reconstruction data on distinct executive control processes and sub-processes involved in the Stroop task. Consistently, a fronto-central N2 component is found to reflect conflict monitoring processes, with its main neural generator being the anterior cingulate cortex (ACC). Then, for cognitive control tasks that involve a linguistic component like the Stroop task, the N2 is followed by a centro-posterior N400 and subsequently a late sustained potential (LSP). The N400 is mainly generated by the ACC and the prefrontal cortex (PFC) and is thought to reflect interference suppression, whereas the LSP plausibly reflects conflict resolution processes. The present overview shows that ERPs constitute a reliable methodological tool for tracing with precision the time course of different executive processes and sub-processes involved in experimental tasks involving a cognitive conflict. Future research should shed light on the fine-grained mechanisms of control respectively involved in linguistic and non-linguistic tasks.
  • Heidlmayr, K., Doré-Mazars, K., Aparicio, X., & Isel, F. (2016). Multiple language use influences oculomotor task performance: Neurophysiological evidence of a shared substrate between language and motor control. PLoS One, 11(11): e0165029. doi:10.1371/journal.pone.0165029.

    Abstract

    In the present electroencephalographical study, we asked to which extent executive control processes are shared by both the language and motor domain. The rationale was to examine whether executive control processes whose efficiency is reinforced by the frequent use of a second language can lead to a benefit in the control of eye movements, i.e. a non-linguistic activity. For this purpose, we administered to 19 highly proficient late French-German bilingual participants and to a control group of 20 French monolingual participants an antisaccade task, i.e. a specific motor task involving control. In this task, an automatic saccade has to be suppressed while a voluntary eye movement in the opposite direction has to be carried out. Here, our main hypothesis is that an advantage in the antisaccade task should be observed in the bilinguals if some properties of the control processes are shared between linguistic and motor domains. ERP data revealed clear differences between bilinguals and monolinguals. Critically, we showed an increased N2 effect size in bilinguals, thought to reflect better efficiency to monitor conflict, combined with reduced effect sizes on markers reflecting inhibitory control, i.e. cue-locked positivity, the target-locked P3 and the saccade-locked presaccadic positivity (PSP). Moreover, effective connectivity analyses (dynamic causal modelling; DCM) on the neuronal source level indicated that bilinguals rely more strongly on ACC-driven control while monolinguals rely on PFC-driven control. Taken together, our combined ERP and effective connectivity findings may reflect a dynamic interplay between strengthened conflict monitoring, associated with subsequently more efficient inhibition in bilinguals. Finally, L2 proficiency and immersion experience constitute relevant factors of the language background that predict efficiency of inhibition. To conclude, the present study provided ERP and effective connectivity evidence for domain-general executive control involvement in handling multiple language use, leading to a control advantage in bilingualism.
  • Heidlmayr, K., Weber, K., Takashima, A., & Hagoort, P. (2020). No title, no theme: The joined neural space between speakers and listeners during production and comprehension of multi-sentence discourse. Cortex, 130, 111-126. doi:10.1016/j.cortex.2020.04.035.

    Abstract

    Speakers and listeners usually interact in larger discourses than single words or even single sentences. The goal of the present study was to identify the neural bases reflecting how the mental representation of the situation denoted in a multi-sentence discourse (situation model) is constructed and shared between speakers and listeners. An fMRI study using a variant of the ambiguous text paradigm was designed. Speakers (n=15) produced ambiguous texts in the scanner and listeners (n=27) subsequently listened to these texts in different states of ambiguity: preceded by a highly informative, intermediately informative or no title at all. Conventional BOLD activation analyses in listeners, as well as inter-subject correlation analyses between the speakers’ and the listeners’ hemodynamic time courses were performed. Critically, only the processing of disambiguated, coherent discourse with an intelligible situation model representation involved (shared) activation in bilateral lateral parietal and medial prefrontal regions. This shared spatiotemporal pattern of brain activation between the speaker and the listener suggests that the process of memory retrieval in medial prefrontal regions and the binding of retrieved information in the lateral parietal cortex constitutes a core mechanism underlying the communication of complex conceptual representations.

    Additional information

    supplementary data
  • Heilbron, M., Richter, D., Ekman, M., Hagoort, P., & De Lange, F. P. (2020). Word contexts enhance the neural representation of individual letters in early visual cortex. Nature Communications, 11: 321. doi:10.1038/s41467-019-13996-4.

    Abstract

    Visual context facilitates perception, but how this is neurally implemented remains unclear. One example of contextual facilitation is found in reading, where letters are more easily identified when embedded in a word. Bottom-up models explain this word advantage as a post-perceptual decision bias, while top-down models propose that word contexts enhance perception itself. Here, we arbitrate between these accounts by presenting words and nonwords and probing the representational fidelity of individual letters using functional magnetic resonance imaging. In line with top-down models, we find that word contexts enhance letter representations in early visual cortex. Moreover, we observe increased coupling between letter information in visual cortex and brain activity in key areas of the reading network, suggesting these areas may be the source of the enhancement. Our results provide evidence for top-down representational enhancement in word recognition, demonstrating that word contexts can modulate perceptual processing already at the earliest visual regions.

    Additional information

    Supplementary information
  • Heinrich, T., Ravignani, A., & Hanke, F. D. (2020). Visual timing abilities of a harbour seal (Phoca vitulina) and a South African fur seal (Arctocephalus pusillus pusillus) for sub- and supra-second time intervals. Animal Cognition, 23(5), 851-859. doi:10.1007/s10071-020-01390-3.

    Abstract

    Timing is an essential parameter influencing many behaviours. A previous study demonstrated a high sensitivity of a phocid, the harbour seal (Phoca vitulina), in discriminating time intervals. In the present study, we compared the harbour seal’s timing abilities with the timing abilities of an otariid, the South African fur seal (Arctocephalus pusillus pusillus). This comparison seemed essential as phocids and otariids differ in many respects and might, thus, also differ regarding their timing abilities. We determined time difference thresholds for sub- and suprasecond time intervals marked by a white circle on a black background displayed for a specific time interval on a monitor using a staircase method. Contrary to our expectation, the timing abilities of the fur seal and the harbour seal were comparable. Over a broad range of time intervals, 0.8–7 s in the fur seal and 0.8–30 s in the harbour seal, the difference thresholds followed Weber’s law. In this range, both animals could discriminate time intervals differing only by 12 % and 14 % on average. Timing might thus be a fundamental cue for pinnipeds in general to be used in various contexts, thereby complementing information provided by classical sensory systems. Future studies will help to clarify if timing is indeed involved in foraging decisions or the estimation of travel speed or distance.

    Additional information

    supplementary material
  • Hendrickx, I., Lefever, E., Croijmans, I., Majid, A., & Van den Bosch, A. (2016). Very quaffable and great fun: Applying NLP to wine reviews. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Vol 2 (pp. 306-312). Stroudsburg, PA: Association for Computational Linguistics.

    Abstract

    We automatically predict properties of wines on the basis of smell and flavor descriptions from experts’ wine reviews. We show wine experts are capable of describing their smell and flavor experiences in wine reviews in a sufficiently consistent manner, such that we can use their descriptions to predict properties of a wine based solely on language. The experimental results show promising F-scores when using lexical and semantic information to predict the color, grape variety, country of origin, and price of a wine. This demonstrates, contrary to popular opinion, that wine experts’ reviews really are informative.
  • Henson, R. N., Suri, S., Knights, E., Rowe, J. B., Kievit, R. A., Lyall, D. M., Chan, D., Eising, E., & Fisher, S. E. (2020). Effect of apolipoprotein E polymorphism on cognition and brain in the Cambridge Centre for Ageing and Neuroscience cohort. Brain and Neuroscience Advances, 4: 2398212820961704. doi:10.1177/2398212820961704.

    Abstract

    Polymorphisms in the apolipoprotein E (APOE) gene have been associated with individual differences in cognition, brain structure and brain function. For example, the ε4 allele has been associated with cognitive and brain impairment in old age and increased risk of dementia, while the ε2 allele has been claimed to be neuroprotective. According to the ‘antagonistic pleiotropy’ hypothesis, these polymorphisms have different effects across the lifespan, with ε4, for example, postulated to confer benefits on cognitive and brain functions earlier in life. In this stage 2 of the Registered Report – https://osf.io/bufc4, we report the results from the cognitive and brain measures in the Cambridge Centre for Ageing and Neuroscience cohort (www.cam-can.org). We investigated the antagonistic pleiotropy hypothesis by testing for allele-by-age interactions in approximately 600 people across the adult lifespan (18–88 years), on six outcome variables related to cognition, brain structure and brain function (namely, fluid intelligence, verbal memory, hippocampal grey-matter volume, mean diffusion within white matter and resting-state connectivity measured by both functional magnetic resonance imaging and magnetoencephalography). We found no evidence to support the antagonistic pleiotropy hypothesis. Indeed, Bayes factors supported the null hypothesis in all cases, except for the (linear) interaction between age and possession of the ε4 allele on fluid intelligence, for which the evidence for faster decline in older ages was ambiguous. Overall, these pre-registered analyses question the antagonistic pleiotropy of APOE polymorphisms, at least in healthy adults.

    Additional information

    supplementary material
  • Hestvik, A., Shinohara, Y., Durvasula, K., Verdonschot, R. G., & Sakai, H. (2020). Abstractness of human speech sound representations. Brain Research, 1732: 146664. doi:10.1016/j.brainres.2020.146664.

    Abstract

    We argue, based on a study of brain responses to speech sound differences in Japanese, that memory encoding of functional speech sounds (phonemes) is highly abstract. As an example, we provide evidence for a theory where the consonants /p t k b d g/ are not only made up of symbolic features but are underspecified with respect to voicing or laryngeal features, and that languages differ with respect to which feature value is underspecified. In a previous study we showed that voiced stops are underspecified in English [Hestvik, A., & Durvasula, K. (2016). Neurobiological evidence for voicing underspecification in English. Brain and Language], as shown by asymmetries in Mismatch Negativity responses to /t/ and /d/. In the current study, we test the prediction that the opposite asymmetry should be observed in Japanese, if voiceless stops are underspecified in that language. Our results confirm this prediction. This matches a linguistic architecture where phonemes are highly abstract and do not encode actual physical characteristics of the corresponding speech sounds, but rather different subsets of abstract distinctive features.
  • Hildebrand, M. S., Jackson, V. E., Scerri, T. S., Van Reyk, O., Coleman, M., Braden, R., Turner, S., Rigbye, K. A., Boys, A., Barton, S., Webster, R., Fahey, M., Saunders, K., Parry-Fielder, B., Paxton, G., Hayman, M., Coman, D., Goel, H., Baxter, A., Ma, A., Davis, N., Reilly, S., Delatycki, M., Liégeois, F. J., Connelly, A., Gecz, J., Fisher, S. E., Amor, D. J., Scheffer, I. E., Bahlo, M., & Morgan, A. T. (2020). Severe childhood speech disorder: Gene discovery highlights transcriptional dysregulation. Neurology, 94(20), e2148-e2167. doi:10.1212/WNL.0000000000009441.

    Abstract

    Objective: Determining the genetic basis of speech disorders provides insight into the neurobiology of human communication. Despite intensive investigation over the past 2 decades, the etiology of most speech disorders in children remains unexplained. To test the hypothesis that speech disorders have a genetic etiology, we performed genetic analysis of children with severe speech disorder, specifically childhood apraxia of speech (CAS). Methods: Precise phenotyping together with research genome or exome analysis were performed on children referred with a primary diagnosis of CAS. Gene coexpression and gene set enrichment analyses were conducted on high-confidence gene candidates. Results: Thirty-four probands ascertained for CAS were studied. In 11/34 (32%) probands, we identified highly plausible pathogenic single nucleotide (n = 10; CDK13, EBF3, GNAO1, GNB1, DDX3X, MEIS2, POGZ, SETBP1, UPF2, ZNF142) or copy number (n = 1; 5q14.3q21.1 locus) variants in novel genes or loci for CAS. Testing of parental DNA was available for 9 probands and confirmed that the variants had arisen de novo. Eight genes encode proteins critical for regulation of gene transcription, and analyses of transcriptomic data found CAS-implicated genes were highly coexpressed in the developing human brain. Conclusion: We identify the likely genetic etiology in 11 patients with CAS and implicate 9 genes for the first time. We find that CAS is often a sporadic monogenic disorder, and highly genetically heterogeneous. Highly penetrant variants implicate shared pathways in broad transcriptional regulation, highlighting the key role of transcriptional regulation in normal speech development. CAS is a distinctive, socially debilitating clinical disorder, and understanding its molecular basis is the first step towards identifying precision medicine approaches.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2020). Visual context constrains language-mediated anticipatory eye movements. Quarterly Journal of Experimental Psychology, 73(3), 458-467. doi:10.1177/1747021819881615.

    Abstract

    Contemporary accounts of anticipatory language processing assume that individuals predict upcoming information at multiple levels of representation. Research investigating language-mediated anticipatory eye gaze typically assumes that linguistic input restricts the domain of subsequent reference (visual target objects). Here, we explored the converse case: Can visual input restrict the dynamics of anticipatory language processing? To this end, we recorded participants’ eye movements as they listened to sentences in which an object was predictable based on the verb’s selectional restrictions (“The man peels a banana”). While listening, participants looked at different types of displays: The target object (banana) was either present or it was absent. On target-absent trials, the displays featured objects that had a similar visual shape as the target object (canoe) or objects that were semantically related to the concepts invoked by the target (monkey). Each trial was presented in a long preview version, where participants saw the displays for approximately 1.78 seconds before the verb was heard (pre-verb condition), and a short preview version, where participants saw the display approximately 1 second after the verb had been heard (post-verb condition), 750 ms prior to the spoken target onset. Participants anticipated the target objects in both conditions. Importantly, robust evidence for predictive looks to objects related to the (absent) target objects in visual shape and semantics was found in the post-verb but not in the pre-verb condition. These results suggest that visual information can restrict language-mediated anticipatory gaze and delineate theoretical accounts of predictive processing in the visual world.

    Additional information

    Supplemental Material
  • Hintz, F., Meyer, A. S., & Huettig, F. (2020). Activating words beyond the unfolding sentence: Contributions of event simulation and word associations to discourse reading. Neuropsychologia, 141: 107409. doi:10.1016/j.neuropsychologia.2020.107409.

    Abstract

    Previous studies have shown that during comprehension readers activate words beyond the unfolding sentence. An open question concerns the mechanisms underlying this behavior. One proposal is that readers mentally simulate the described event and activate related words that might be referred to as the discourse further unfolds. Another proposal is that activation between words spreads in an automatic, associative fashion. The empirical support for these proposals is mixed. Therefore, theoretical accounts differ with regard to how much weight they place on the contributions of these sources to sentence comprehension. In the present study, we attempted to assess the contributions of event simulation and lexical associations to discourse reading, using event-related brain potentials (ERPs). Participants read target words, which were preceded by associatively related words either appearing in a coherent discourse event (Experiment 1) or in sentences that did not form a coherent discourse event (Experiment 2). Contextually unexpected target words that were associatively related to the described events elicited a reduced N400 amplitude compared to contextually unexpected target words that were unrelated to the events (Experiment 1). In Experiment 2, a similar but reduced effect was observed. These findings support the notion that during discourse reading event simulation and simple word associations jointly contribute to language comprehension by activating words that are beyond contextually congruent sentence continuations.
  • Hintz*, F., Jongman*, S. R., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2020). Shared lexical access processes in speaking and listening? An individual differences study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(6), 1048-1063. doi:10.1037/xlm0000768.

    Abstract

    (* indicates joint first authorship.) Lexical access is a core component of word processing. In order to produce or comprehend a word, language users must access word forms in their mental lexicon. However, despite its involvement in both tasks, previous research has often studied lexical access in either production or comprehension alone. Therefore, it is unknown to which extent lexical access processes are shared across both tasks. Picture naming and auditory lexical decision are considered good tools for studying lexical access. Both of them are speeded tasks. Given these commonalities, another open question concerns the involvement of general cognitive abilities (e.g., processing speed) in both linguistic tasks. In the present study, we addressed these questions. We tested a large group of young adults enrolled in academic and vocational courses. Participants completed picture naming and auditory lexical decision tasks as well as a battery of tests assessing non-verbal processing speed, vocabulary, and non-verbal intelligence. Our results suggest that the lexical access processes involved in picture naming and lexical decision are related but less closely than one might have thought. Moreover, reaction times in picture naming and lexical decision depended at least as much on general processing speed as on domain-specific linguistic processes (i.e., lexical access processes).
  • Hintz, F., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2020). A behavioural dataset for studying individual differences in language skills. Scientific Data, 7: 429. doi:10.1038/s41597-020-00758-x.

    Abstract

    This resource contains data from 112 Dutch adults (18–29 years of age) who completed the Individual Differences in Language Skills test battery that included 33 behavioural tests assessing language skills and domain-general cognitive skills likely involved in language tasks. The battery included tests measuring linguistic experience (e.g. vocabulary size, prescriptive grammar knowledge), general cognitive skills (e.g. working memory, non-verbal intelligence) and linguistic processing skills (word production/comprehension, sentence production/comprehension). Testing was done in a lab-based setting resulting in high quality data due to tight monitoring of the experimental protocol and to the use of software and hardware that were optimized for behavioural testing. Each participant completed the battery twice (i.e., two test days of four hours each). We provide the raw data from all tests on both days as well as pre-processed data that were used to calculate various reliability measures (including internal consistency and test-retest reliability). We encourage other researchers to use this resource for conducting exploratory and/or targeted analyses of individual differences in language and general cognitive skills.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2016). Encouraging prediction during production facilitates subsequent comprehension: Evidence from interleaved object naming in sentence context and sentence reading. Quarterly Journal of Experimental Psychology, 69(6), 1056-1063. doi:10.1080/17470218.2015.1131309.

    Abstract

    Many studies have shown that a supportive context facilitates language comprehension. A currently influential view is that language production may support prediction in language comprehension. Experimental evidence for this, however, is relatively sparse. Here we explored whether encouraging prediction in a language production task encourages the use of predictive contexts in an interleaved comprehension task. In Experiment 1a, participants listened to the first part of a sentence and provided the final word by naming aloud a picture. The picture name was predictable or not predictable from the sentence context. Pictures were named faster when they could be predicted than when this was not the case. In Experiment 1b the same sentences, augmented by a final spill-over region, were presented in a self-paced reading task. No difference in reading times for predictive vs. non-predictive sentences was found. In Experiment 2, reading and naming trials were intermixed. In the naming task, the advantage for predictable picture names was replicated. More importantly, now reading times for the spill-over region were considerably faster for predictive vs. non-predictive sentences. We conjecture that these findings fit best with the notion that prediction in the service of language production encourages the use of predictive contexts in comprehension. Further research is required to identify the exact mechanisms by which production exerts its influence on comprehension.
  • Hintz, F., & Scharenborg, O. (2016). Neighbourhood density influences word recognition in native and non-native speech recognition in noise. In H. Van den Heuvel, B. Cranen, & S. Mattys (Eds.), Proceedings of the Speech Processing in Realistic Environments (SPIRE) workshop (pp. 46-47). Groningen.
  • Hintz, F., & Scharenborg, O. (2016). The effect of background noise on the activation of phonological and semantic information during spoken-word recognition. In Proceedings of Interspeech 2016: The 17th Annual Conference of the International Speech Communication Association (pp. 2816-2820).

    Abstract

    During spoken-word recognition, listeners experience phonological competition between multiple word candidates, which increases, relative to optimal listening conditions, when speech is masked by noise. Moreover, listeners activate semantic word knowledge during the word’s unfolding. Here, we replicated the effect of background noise on phonological competition and investigated to which extent noise affects the activation of semantic information in phonological competitors. Participants’ eye movements were recorded when they listened to sentences containing a target word and looked at three types of displays. The displays either contained a picture of the target word, or a picture of a phonological onset competitor, or a picture of a word semantically related to the onset competitor, each along with three unrelated distractors. The analyses revealed that, in noise, fixations to the target and to the phonological onset competitor were delayed and smaller in magnitude compared to the clean listening condition, most likely reflecting enhanced phonological competition. No evidence for the activation of semantic information in the phonological competitors was observed in noise and, surprisingly, also not in the clear. We discuss the implications of the lack of an effect and differences between the present and earlier studies.
  • Hoeksema, N., Villanueva, S., Mengede, J., Salazar-Casals, A., Rubio-García, A., Curcic-Blake, B., Vernes, S. C., & Ravignani, A. (2020). Neuroanatomy of the grey seal brain: Bringing pinnipeds into the neurobiological study of vocal learning. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 162-164). Nijmegen: The Evolution of Language Conferences.
  • Hoeksema, N., Wiesmann, M., Kiliaan, A., Hagoort, P., & Vernes, S. C. (2020). Bats and the comparative neurobiology of vocal learning. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 165-167). Nijmegen: The Evolution of Language Conferences.
  • Hofer, E., Roshchupkin, G. V., Adams, H. H. H., Knol, M. J., Lin, H., Li, S., Zare, H., Ahmad, S., Armstrong, N. J., Satizabal, C. L., Bernard, M., Bis, J. C., Gillespie, N. A., Luciano, M., Mishra, A., Scholz, M., Teumer, A., Xia, R., Jian, X., Mosley, T. H., Saba, Y., Pirpamer, L., Seiler, S., Becker, J. T., Carmichael, O., Rotter, J. I., Psaty, B. M., Lopez, O. L., Amin, N., Van der Lee, S. J., Yang, Q., Himali, J. J., Maillard, P., Beiser, A. S., DeCarli, C., Karama, S., Lewis, L., Harris, M., Bastin, M. E., Deary, I. J., Witte, A. V., Beyer, F., Loeffler, M., Mather, K. A., Schofield, P. R., Thalamuthu, A., Kwok, J. B., Wright, M. J., Ames, D., Trollor, J., Jiang, J., Brodaty, H., Wen, W., Vernooij, M. W., Hofman, A., Uitterlinden, A. G., Niessen, W. J., Wittfeld, K., Bülow, R., Völker, U., Pausova, Z., Pike, G. B., Maingault, S., Crivello, F., Tzourio, C., Amouyel, P., Mazoyer, B., Neale, M. C., Franz, C. E., Lyons, M. J., Panizzon, M. S., Andreassen, O. A., Dale, A. M., Logue, M., Grasby, K. L., Jahanshad, N., Painter, J. N., Colodro-Conde, L., Bralten, J., Hibar, D. P., Lind, P. A., Pizzagalli, F., Stein, J. L., Thompson, P. M., Medland, S. E., ENIGMA-consortium, Sachdev, P. S., Kremen, W. S., Wardlaw, J. M., Villringer, A., Van Duijn, C. M., Grabe, H. J., Longstreth, W. T., Fornage, M., Paus, T., Debette, S., Ikram, M. A., Schmidt, H., Schmidt, R., & Seshadri, S. (2020). Genetic correlations and genome-wide associations of cortical structure in general population samples of 22,824 adults. Nature Communications, 11: 4796. doi:10.1038/s41467-020-18367-y.
  • Hogekamp, Z., Blomster, J. B., Bursalioglu, A., Calin, M. C., Çetinçelik, M., Haastrup, L., & Van den Berg, Y. H. M. (2016). Examining the Importance of the Teachers' Emotional Support for Students' Social Inclusion Using the One-with-Many Design. Frontiers in Psychology, 7: 1014. doi:10.3389/fpsyg.2016.01014.

    Abstract

    The importance of high quality teacher–student relationships for students' well-being has been long documented. Nonetheless, most studies focus either on teachers' perceptions of provided support or on students' perceptions of support. The degree to which teachers and students agree is often neither measured nor taken into account. In the current study, we will therefore use a dyadic analysis strategy called the one-with-many design. This design takes into account the nestedness of the data and looks at the importance of reciprocity when examining the influence of teacher support for students' academic and social functioning. Two samples of teachers and their students from Grade 4 (age 9–10 years) have been recruited in primary schools, located in Turkey and Romania. By using the one-with-many design we can first measure to what degree teachers' perceptions of support are in line with students' experiences. Second, this level of consensus is taken into account when examining the influence of teacher support for students' social well-being and academic functioning.
  • Holler, J., Kendrick, K. H., Casillas, M., & Levinson, S. C. (Eds.). (2016). Turn-Taking in Human Communicative Interaction. Lausanne: Frontiers Media. doi:10.3389/978-2-88919-825-2.

    Abstract

    The core use of language is in face-to-face conversation. This is characterized by rapid turn-taking. This turn-taking poses a number of central puzzles for the psychology of language.

    Consider, for example, that in large corpora the gap between turns is on the order of 100 to 300 ms, but the latencies involved in language production require minimally 600 ms (for a single word) or 1500 ms (for a simple sentence). This implies that participants in conversation are predicting the ends of the incoming turn and preparing in advance. But how is this done? What aspects of this prediction are done when? What happens when the prediction is wrong? What stops participants coming in too early? If the system is running on prediction, why is there consistently a mode of 100 to 300 ms in response time?

    The timing puzzle raises further puzzles: it seems that comprehension must run parallel with the preparation for production, but it has been presumed that there are strict cognitive limitations on more than one central process running at a time. How is this bottleneck overcome? Far from being 'easy' as some psychologists have suggested, conversation may be one of the most demanding cognitive tasks in our everyday lives. Further questions naturally arise: how do children learn to master this demanding task, and what is the developmental trajectory in this domain?

    Research shows that aspects of turn-taking such as its timing are remarkably stable across languages and cultures, but the word order of languages varies enormously. How then does prediction of the incoming turn work when the verb (often the informational nugget in a clause) is at the end? Conversely, how can production work fast enough in languages that have the verb at the beginning, thereby requiring early planning of the whole clause? What happens when one changes modality, as in sign languages -- with the loss of channel constraints is turn-taking much freer? And what about face-to-face communication amongst hearing individuals -- do gestures, gaze, and other body behaviors facilitate turn-taking? One can also ask the phylogenetic question: how did such a system evolve? There seem to be parallels (analogies) in duetting bird species, and in a variety of monkey species, but there is little evidence of anything like this among the great apes.

    All this constitutes a neglected set of problems at the heart of the psychology of language and of the language sciences. This research topic welcomes contributions from right across the board, for example from psycholinguists, developmental psychologists, students of dialogue and conversation analysis, linguists interested in the use of language, phoneticians, corpus analysts and comparative ethologists or psychologists. We welcome contributions of all sorts, for example original research papers, opinion pieces, and reviews of work in subfields that may not be fully understood in other subfields.
  • Hörpel, S. G., & Firzlaff, U. (2020). Post-natal development of the envelope following response to amplitude modulated sounds in the bat Phyllostomus discolor. Hearing Research, 388: 107904. doi:10.1016/j.heares.2020.107904.

    Abstract

    Bats use a large repertoire of calls for social communication, which are often characterized by temporal amplitude and frequency modulations. As bats are considered to be among the few mammalian species capable of vocal learning, the perception of temporal sound modulations should be crucial for juvenile bats to develop social communication abilities. However, the post-natal development of auditory processing of temporal modulations has not been investigated in bats so far. Here we use the minimally invasive technique of recording auditory brainstem responses to measure the envelope following response (EFR) to sinusoidally amplitude-modulated noise (range of modulation frequencies: 11–130 Hz) in three juveniles (p8–p72) of the bat Phyllostomus discolor. In two out of three animals, we show that although amplitude modulation processing is basically developed at p8, EFRs matured further over a period of about two weeks until p33. Maturation of the EFR generally took longer for higher modulation frequencies (87–130 Hz) than for lower modulation frequencies (11–58 Hz).
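
    A minimal sketch (not the authors' code) of how sinusoidally amplitude-modulated noise stimuli of the kind described above could be generated; the sample rate, duration, and modulation depth used here are illustrative assumptions.

        # Hypothetical stimulus generation: broadband noise whose envelope
        # follows a sinusoid at the desired modulation frequency (SAM noise).
        import numpy as np

        def sam_noise(mod_freq_hz, dur_s=1.0, fs=192000, mod_depth=1.0, seed=0):
            """Sinusoidally amplitude-modulated broadband noise."""
            rng = np.random.default_rng(seed)
            t = np.arange(int(dur_s * fs)) / fs
            carrier = rng.standard_normal(t.size)                  # noise carrier
            envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_freq_hz * t)
            stim = carrier * envelope
            return stim / np.max(np.abs(stim))                     # normalise to +/-1

        # Modulation frequencies spanning the 11-130 Hz range used in the study
        stimuli = {f: sam_noise(f) for f in (11, 34, 58, 87, 110, 130)}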
  • Hostetter, A. B., Pouw, W., & Wakefield, E. M. (2020). Learning from gesture and action: An investigation of memory for where objects went and how they got there. Cognitive Science, 44(9): e12889. doi:10.1111/cogs.12889.

    Abstract

    Speakers often use gesture to demonstrate how to perform actions—for example, they might show how to open the top of a jar by making a twisting motion above the jar. Yet it is unclear whether listeners learn as much from seeing such gestures as they learn from seeing actions that physically change the position of objects (i.e., actually opening the jar). Here, we examined participants' implicit and explicit understanding about a series of movements that demonstrated how to move a set of objects. The movements were either shown with actions that physically relocated each object or with gestures that represented the relocation without touching the objects. Further, the end location that was indicated for each object covaried with whether the object was grasped with one or two hands. We found that memory for the end location of each object was better after seeing the physical relocation of the objects, that is, after seeing action, than after seeing gesture, regardless of whether speech was absent (Experiment 1) or present (Experiment 2). However, gesture and action built similar implicit understanding of how a particular handgrasp corresponded with a particular end location. Although gestures miss the benefit of showing the end state of objects that have been acted upon, the data show that gestures are as good as action in building knowledge of how to perform an action.

    Additional information

    additional analyses Open Data OSF
  • Houston, D. M., Jusczyk, P. W., Kuijpers, C., Coolen, R., & Cutler, A. (2000). Cross-language word segmentation by 9-month-olds. Psychonomic Bulletin & Review, 7, 504-509.

    Abstract

    Dutch-learning and English-learning 9-month-olds were tested, using the Headturn Preference Procedure, for their ability to segment Dutch words with strong/weak stress patterns from fluent Dutch speech. This prosodic pattern is highly typical for words of both languages. The infants were familiarized with pairs of words and then tested on four passages, two that included the familiarized words and two that did not. Both the Dutch- and the English-learning infants gave evidence of segmenting the targets from the passages, to an equivalent degree. Thus, English-learning infants are able to extract words from fluent speech in a language that is phonetically different from English. We discuss the possibility that this cross-language segmentation ability is aided by the similarity of the typical rhythmic structure of Dutch and English words.
  • Houwing, D. J., Schuttel, K., Struik, E. L., Arling, C., Ramsteijn, A. S., Heinla, I., & Olivier, J. D. (2020). Perinatal fluoxetine treatment and dams’ early life stress history alter affective behavior in rat offspring depending on serotonin transporter genotype and sex. Behavioural Brain Research, 392: 112657. doi:10.1016/j.bbr.2020.112657.

    Abstract

    Many women diagnosed with a major depression continue or initiate antidepressant treatment during pregnancy. Both maternal stress and selective serotonin reuptake inhibitor (SSRI) antidepressant treatment during pregnancy have been associated with changes in offspring behavior, including increased anxiety and depressive-like behavior. Our aim was to investigate the effects of the SSRI fluoxetine (FLX), with and without the presence of a maternal depression, on affective behavior in male and female rat offspring. As reduced serotonin transporter (SERT) availability has been associated with altered behavioral outcome, both offspring with normal (SERT+/+) and reduced (SERT+/−) SERT expression were included. For our animal model of maternal depression, SERT+/− dams exposed to early life stress were used. Perinatal FLX treatment and early life stress in dams (ELSD) had sex- and genotype-specific effects on affective behavior in the offspring. In female offspring, perinatal FLX exposure interacted with SERT genotype to increase anxiety and depressive-like behavior in SERT+/+, but not SERT+/−, females. In male offspring, ELSD reduced anxiety and interacted with SERT genotype to decrease depressive-like behavior in SERT+/−, but not SERT+/+, males. Altogether, SERT+/+ female offspring appear to be more sensitive than SERT+/− females to the effects of perinatal FLX exposure, while SERT+/− male offspring appear more sensitive than SERT+/+ males to the effects of ELSD on affective behavior. Our data suggest a role for offspring SERT genotype and sex in FLX and ELSD-induced effects on affective behavior, thereby contributing to our understanding of the effects of perinatal SSRI treatment on offspring behavior later in life.
  • Howe, L. J., Hemani, G., Lesseur, C., Gaborieau, V., Ludwig, K. U., Mangold, E., Brennan, P., Ness, A. R., St Pourcain, B., Smith, G. D., & Lewis, S. J. (2020). Evaluating shared genetic influences on nonsyndromic cleft lip/palate and oropharyngeal neoplasms. Genetic Epidemiology, 44(8), 924-933. doi:10.1002/gepi.22343.

    Abstract

    It has been hypothesised that nonsyndromic cleft lip/palate (nsCL/P) and cancer may share aetiological risk factors. Population studies have found inconsistent evidence for increased incidence of cancer in nsCL/P cases, but several genes (e.g., CDH1, AXIN2) have been implicated in the aetiologies of both phenotypes. We aimed to evaluate shared genetic aetiology between nsCL/P and oral cavity/oropharyngeal cancers (OC/OPC), which affect similar anatomical regions. Using a primary sample of 5,048 OC/OPC cases and 5,450 controls of European ancestry and a replication sample of 750 cases and 336,319 controls from UK Biobank, we estimate genetic overlap using nsCL/P polygenic risk scores (PRS), with Mendelian randomization analyses performed to evaluate potential causal mechanisms. In the primary sample, we found strong evidence for an association between a nsCL/P PRS and increased odds of OC/OPC (per standard deviation increase in score, odds ratio [OR]: 1.09; 95% confidence interval [CI]: 1.04, 1.13; p = .000053). Although confidence intervals overlapped with the primary estimate, we did not find confirmatory evidence of an association between the PRS and OC/OPC in UK Biobank (OR 1.02; 95% CI: 0.95, 1.10; p = .55). Mendelian randomization analyses provided evidence that major nsCL/P risk variants are unlikely to influence OC/OPC. Our findings suggest possible shared genetic influences on nsCL/P and OC/OPC.
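
    For illustration only (this is not the study's analysis pipeline), the sketch below shows how an association between a standardised polygenic risk score and case/control status can be expressed as an odds ratio per standard deviation using logistic regression; the column names and simulated data are hypothetical.

        # Hypothetical example: odds ratio per SD of a polygenic risk score.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        def prs_odds_ratio(df):
            """Logistic regression of case status on a z-scored PRS."""
            z = (df["prs"] - df["prs"].mean()) / df["prs"].std()
            X = sm.add_constant(z)
            fit = sm.Logit(df["case"], X).fit(disp=False)
            or_per_sd = np.exp(fit.params["prs"])
            ci_low, ci_high = np.exp(fit.conf_int().loc["prs"])
            return or_per_sd, (ci_low, ci_high), fit.pvalues["prs"]

        # Simulated placeholder data (5,000 individuals)
        rng = np.random.default_rng(1)
        df = pd.DataFrame({"prs": rng.normal(size=5000)})
        df["case"] = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.1 * df["prs"]))))
        print(prs_odds_ratio(df))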

    Additional information

    Supporting information
  • Howells, H., Puglisi, G., Leonetti, A., Vigano, L., Fornia, L., Simone, L., Forkel, S. J., Rossi, M., Riva, M., Cerri, G., & Bello, L. (2020). The role of left fronto-parietal tracts in hand selection: Evidence from neurosurgery. Cortex, 128, 297-311. doi:10.1016/j.cortex.2020.03.018.

    Abstract

    Strong right-hand preference on the population level is a uniquely human feature, although its neural basis is still not clearly defined. Recent behavioural and neuroimaging literature suggests that hand preference may be related to the orchestrated function and size of fronto-parietal white matter tracts bilaterally. Lesions to these tracts induced during tumour resection may provide an opportunity to test this hypothesis. In the present study, a cohort of seventeen neurosurgical patients with left hemisphere brain tumours was recruited to investigate whether resection of certain white matter tracts affects the choice of hand selected for the execution of a goal-directed task (assembly of jigsaw puzzles). Patients performed the puzzles, but also tests for basic motor ability, selective attention and visuo-constructional ability, preoperatively and one month after surgery. An atlas-based disconnectome analysis was conducted to evaluate whether resection of tracts was significantly associated with changes in hand selection. Diffusion tractography was also used to dissect fronto-parietal tracts (the superior longitudinal fasciculus) and the corticospinal tract. Results showed a shift in hand selection despite the absence of any motor or cognitive deficits; this shift was significantly associated with resections in the frontal and parietal lobes rather than in other lobes. In particular, the shift in hand selection was significantly associated with the resection of dorsal rather than ventral fronto-parietal white matter connections. Dorsal white matter pathways contribute bilaterally to control of goal-directed hand movements. We show that unilateral lesions that may unbalance the cooperation of the two hemispheres can alter the choice of hand selected to accomplish movements.
  • Huang, L., Zhou, G., Liu, Z., Dang, X., Yang, Z., Kong, X., Wang, X., Song, Y., Zhen, Z., & Liu, J. (2016). A Multi-Atlas Labeling Approach for Identifying Subject-Specific Functional Regions of Interest. PLoS One, 11(1): e0146868. doi:10.1371/journal.pone.0146868.

    Abstract

    The functional region of interest (fROI) approach has increasingly become a favored methodology in functional magnetic resonance imaging (fMRI) because it can circumvent inter-subject anatomical and functional variability, and thus increase the sensitivity and functional resolution of fMRI analyses. The standard fROI method requires human experts to meticulously examine and identify subject-specific fROIs within activation clusters. This process is time-consuming and heavily dependent on experts’ knowledge. Several algorithmic approaches have been proposed for identifying subject-specific fROIs; however, these approaches cannot easily incorporate prior knowledge of inter-subject variability. In the present study, we improved the multi-atlas labeling approach for defining subject-specific fROIs. In particular, we used a classifier-based atlas-encoding scheme and an atlas selection procedure to account for the large spatial variability across subjects. Using a functional atlas database for face recognition, we showed that with these two features, our approach efficiently circumvented inter-subject anatomical and functional variability and thus improved labeling accuracy. Moreover, in comparison with a single-atlas approach, our multi-atlas labeling approach showed better performance in identifying subject-specific fROIs.
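
    The sketch below is a deliberately simplified stand-in for the method described above: it illustrates only atlas selection by similarity and majority-vote label fusion, whereas the paper uses a classifier-based atlas-encoding scheme. All arrays are hypothetical placeholders.

        # Simplified multi-atlas label fusion for subject-specific fROIs.
        import numpy as np

        def fuse_labels(subject_map, atlas_maps, atlas_labels, n_select=5):
            """
            subject_map : (n_voxels,) activation map of the new subject
            atlas_maps  : (n_atlases, n_voxels) activation maps of atlas subjects
            atlas_labels: (n_atlases, n_voxels) integer fROI labels (0 = background)
            """
            # Select the atlases whose maps best match the subject (correlation)
            sims = np.array([np.corrcoef(subject_map, a)[0, 1] for a in atlas_maps])
            chosen = np.argsort(sims)[-n_select:]
            # Majority vote over the selected atlases at each voxel
            votes = atlas_labels[chosen]
            return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

        rng = np.random.default_rng(0)
        subject = rng.normal(size=1000)
        atlases = rng.normal(size=(20, 1000))
        labels = rng.integers(0, 3, size=(20, 1000))
        print(fuse_labels(subject, atlases, labels)[:10])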

    Additional information

    S1_Fig.tif S2_Fig.tif
  • Hubers, F., Redl, T., De Vos, H., Reinarz, L., & De Hoop, H. (2020). Processing prescriptively incorrect comparative particles: Evidence from sentence-matching and eye-tracking. Frontiers in Psychology, 11: 186. doi:10.3389/fpsyg.2020.00186.

    Abstract

    Speakers of a language sometimes use particular constructions which violate prescriptive grammar rules. Despite their prescriptive ungrammaticality, they can occur rather frequently. One such example is the comparative construction in Dutch and similarly in German, where the equative particle is used in comparative constructions instead of the prescriptively correct comparative particle (Dutch beter als Jan and German besser wie Jan ‘lit. better as John’). From a theoretical linguist’s point of view, these so-called grammatical norm violations are perfectly grammatical, even though they are not part of the language’s prescriptive grammar. In a series of three experiments using sentence-matching and eye-tracking methodology, we investigated whether grammatical norm violations are processed as truly grammatical, as truly ungrammatical, or whether they fall in between these two. We hypothesized that the latter would be the case. We analyzed our data using linear mixed effects models in order to capture possible individual differences. The results of the sentence-matching experiments, which were conducted in both Dutch and German, showed that the grammatical norm violation patterns with ungrammatical sentences in both languages. Our hypothesis was therefore not borne out. However, using the more sensitive eye-tracking method on Dutch speakers only, we found that the ungrammatical alternative leads to higher reading times than the grammatical norm violation. We also found significant individual variation regarding this very effect. We furthermore replicated the processing difference between the grammatical norm violation and the prescriptively correct variant. In summary, we conclude that while the results of the more sensitive eye-tracking experiment suggest that grammatical norm violations are not processed on a par with ungrammatical sentences, the results of all three experiments clearly show that grammatical norm violations cannot be considered grammatical, either.

    Additional information

    Supplementary Material
  • Hubers, F., Trompenaars, T., Collin, S., De Schepper, K., & De hoop, H. (2020). Hypercorrection as a by-product of education. Applied Linguistics, 41(4), 552-574. doi:10.1093/applin/amz001.

    Abstract

    Prescriptive grammar rules are taught in education, generally to ban the use of certain frequently encountered constructions in everyday language. This may lead to hypercorrection, meaning that the prescribed form in one construction is extended to another one in which it is in fact prohibited by prescriptive grammar. We discuss two such cases in Dutch: the hypercorrect use of the comparative particle dan ‘than’ in equative constructions, and the hypercorrect use of the accusative pronoun hen ‘them’ for a dative object. In two experiments, high school students of three educational levels were tested on their use of these hypercorrect forms (Experiment 1: n = 162; Experiment 2: n = 159). Our results indicate an overall large amount of hypercorrection across all levels of education, including pre-university level students who otherwise perform better in constructions targeted by prescriptive grammar rules. We conclude that while teaching prescriptive grammar rules to high school students seems to increase their use of correct forms in certain constructions, this comes at a cost of hypercorrection in others.
  • Hubers, F., Snijders, T. M., & De Hoop, H. (2016). How the brain processes violations of the grammatical norm: An fMRI study. Brain and Language, 163, 22-31. doi:10.1016/j.bandl.2016.08.006.

    Abstract

    Native speakers of Dutch do not always adhere to prescriptive grammar rules in their daily speech. These grammatical norm violations can elicit emotional reactions in language purists, mostly high-educated people, who claim that for them these constructions are truly ungrammatical. However, linguists generally assume that grammatical norm violations are in fact truly grammatical, especially when they occur frequently in a language. In an fMRI study we investigated the processing of grammatical norm violations in the brains of language purists, and compared them with truly grammatical and truly ungrammatical sentences. Grammatical norm violations were found to be unique in that their processing resembled not only the processing of truly grammatical sentences (in left medial Superior Frontal Gyrus and Angular Gyrus), but also that of truly ungrammatical sentences (in Inferior Frontal Gyrus), despite what theories of grammar would usually lead us to believe.
  • Hubers, F. (2020). Two of a kind: Idiomatic expressions by native speakers and second language learners. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Huettig, F., Guerra, E., & Helo, A. (2020). Towards understanding the task dependency of embodied language processing: The influence of colour during language-vision interactions. Journal of Cognition, 3(1): 41. doi:10.5334/joc.135.

    Abstract

    A main challenge for theories of embodied cognition is to understand the task dependency of embodied language processing. One possibility is that perceptual representations (e.g., typical colour of objects mentioned in spoken sentences) are not activated routinely but the influence of perceptual representation emerges only when context strongly supports their involvement in language. To explore this question, we tested the effects of colour representations during language processing in three visual-world eye-tracking experiments. On critical trials, participants listened to sentence-embedded words associated with a prototypical colour (e.g., ‘...spinach...’) while they inspected a visual display with four printed words (Experiment 1), coloured or greyscale line drawings (Experiment 2) and a ‘blank screen’ after a preview of coloured or greyscale line drawings (Experiment 3). Visual context always presented a word/object (e.g., frog) associated with the same prototypical colour (e.g., green) as the spoken target word and three distractors. When hearing spinach participants did not prefer the written word frog compared to other distractor words (Experiment 1). In Experiment 2, colour competitors attracted more overt attention compared to average distractors, but only for the coloured condition and not for greyscale trials. Finally, when the display was removed at the onset of the sentence, and in contrast to the previous blank-screen experiments with semantic competitors, there was no evidence of colour competition in the eye-tracking record (Experiment 3). These results fit best with the notion that the main role of perceptual representations in language processing is to contextualize language in the immediate environment.

    Additional information

    Data files and script
  • Huettig, F., & Janse, E. (2016). Individual differences in working memory and processing speed predict anticipatory spoken language processing in the visual world. Language, Cognition and Neuroscience, 31(1), 80-93. doi:10.1080/23273798.2015.1047459.

    Abstract

    It is now well established that anticipation of up-coming input is a key characteristic of spoken language comprehension. Several mechanisms of predictive language processing have been proposed. The possible influence of mediating factors such as working memory and processing speed, however, has hardly been explored. We sought to find evidence for such an influence using an individual differences approach. 105 participants from 32 to 77 years of age received spoken instructions (e.g., "Kijk naar de[COM] afgebeelde piano[COM]" – 'look at the displayed piano') while viewing four objects. Articles (Dutch “het” or “de”) were gender-marked such that the article agreed in gender only with the target. Participants could thus use gender information from the article to predict the upcoming target object. The average participant anticipated the target objects well in advance of the critical noun. Multiple regression analyses showed that working memory and processing speed had the largest mediating effects: Enhanced working memory abilities and faster processing speed supported anticipatory spoken language processing. These findings suggest that models of predictive language processing must take mediating factors such as working memory and processing speed into account. More generally, our results are consistent with the notion that working memory grounds language in space and time, linking linguistic and visual-spatial representations.
  • Huettig, F., & Mani, N. (2016). Is prediction necessary to understand language? Probably not. Language, Cognition and Neuroscience, 31(1), 19-31. doi:10.1080/23273798.2015.1072223.

    Abstract

    Many psycholinguistic experiments suggest that prediction is an important characteristic of language processing. Some recent theoretical accounts in the cognitive sciences (e.g., Clark, 2013; Friston, 2010) and psycholinguistics (e.g., Dell & Chang, 2014) appear to suggest that prediction is even necessary to understand language. In the present opinion paper we evaluate this proposal. We first critically discuss several arguments that may appear to be in line with the notion that prediction is necessary for language processing. These arguments include that prediction provides a unified theoretical principle of the human mind and that it pervades cortical function. We discuss whether evidence of human abilities to detect statistical regularities is necessarily evidence for predictive processing and evaluate suggestions that prediction is necessary for language learning. Five arguments are then presented that question the claim that all language processing is predictive in nature. We point out that not all language users appear to predict language and that suboptimal input makes prediction often very challenging. Prediction, moreover, is strongly context-dependent and impeded by resource limitations. We also argue that it may be problematic that most experimental evidence for predictive language processing comes from 'prediction-encouraging' experimental set-ups. Finally, we discuss possible ways that may lead to a further resolution of this debate. We conclude that languages can be learned and understood in the absence of prediction. Claims that all language processing is predictive in nature are premature.
  • Hugh-Jones, D., Verweij, K. J. H., St Pourcain, B., & Abdellaoui, A. (2016). Assortative mating on educational attainment leads to genetic spousal resemblance for causal alleles. Intelligence, 59, 103-108. doi:10.1016/j.intell.2016.08.005.

    Abstract

    We examined whether assortative mating for educational attainment (“like marries like”) can be detected in the genomes of ~1600 UK spouse pairs of European descent. Assortative mating on heritable traits like educational attainment increases the genetic variance and heritability of the trait in the population, which may increase social inequalities. We test for genetic assortative mating in the UK on educational attainment, a phenotype that is indicative of socio-economic status and has shown substantial levels of assortative mating. We use genome-wide allelic effect sizes from a large genome-wide association study on educational attainment (N ~ 300k) to create polygenic scores that are predictive of educational attainment in our independent sample (r = 0.23, p < 2 × 10⁻¹⁶). The polygenic scores significantly predict partners' educational outcome (r = 0.14, p = 4 × 10⁻⁸ and r = 0.19, p = 2 × 10⁻¹⁴, for prediction from males to females and vice versa, respectively), and are themselves significantly correlated between spouses (r = 0.11, p = 7 × 10⁻⁶). Our findings provide molecular genetic evidence for genetic assortative mating on education in the UK.
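
    A minimal sketch of the two computational steps mentioned in the abstract: building polygenic scores from genome-wide allelic effect sizes and correlating the scores between spouses. The genotype matrices and effect sizes below are simulated placeholders, so the resulting correlation will be near zero rather than the reported values.

        # Hypothetical polygenic scores and spousal correlation.
        import numpy as np
        from scipy import stats

        def polygenic_score(dosages, effect_sizes):
            """dosages: (n_people, n_snps) allele counts; effect_sizes: (n_snps,)."""
            return dosages @ effect_sizes

        rng = np.random.default_rng(2)
        n_pairs, n_snps = 1600, 2000
        effects = rng.normal(scale=0.01, size=n_snps)            # per-allele effects
        partner_a = rng.binomial(2, 0.3, size=(n_pairs, n_snps))
        partner_b = rng.binomial(2, 0.3, size=(n_pairs, n_snps))

        r, p = stats.pearsonr(polygenic_score(partner_a, effects),
                              polygenic_score(partner_b, effects))
        print(f"spousal PRS correlation r = {r:.3f}, p = {p:.2g}")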
  • Huizeling, E., Wang, H., Holland, C., & Kessler, K. (2020). Age-related changes in attentional refocusing during simulated driving. Brain sciences, 10(8): 530. doi:10.3390/brainsci10080530.

    Abstract

    We recently reported that refocusing attention between temporal and spatial tasks becomes more difficult with increasing age, which could impair daily activities such as driving (Callaghan et al., 2017). Here, we investigated the extent to which difficulties in refocusing attention extend to naturalistic settings such as simulated driving. A total of 118 participants in five age groups (18–30; 40–49; 50–59; 60–69; 70–91 years) were compared during continuous simulated driving, where they repeatedly switched from braking due to traffic ahead (a spatially focal yet temporally complex task) to reading a motorway road sign (a spatially more distributed task). Sequential-Task (switching) performance was compared to Single-Task performance (road sign only) to calculate age-related switch-costs. Electroencephalography was recorded in 34 participants (17 in the 18–30 and 17 in the 60+ years groups) to explore age-related changes in the neural oscillatory signatures of refocusing attention while driving. We indeed observed age-related impairments in attentional refocusing, evidenced by increased switch-costs in response times and by deficient modulation of theta and alpha frequencies. Our findings highlight virtual reality (VR) and Neuro-VR as important methodologies for future psychological and gerontological research.

    Additional information

    supplementary file
  • Humphries, S., Holler, J., Crawford, T. J., Herrera, E., & Poliakoff, E. (2016). A third-person perspective on co-speech action gestures in Parkinson’s disease. Cortex, 78, 44-54. doi:10.1016/j.cortex.2016.02.009.

    Abstract

    A combination of impaired motor and cognitive function in Parkinson’s disease (PD) can impact on language and communication, with patients exhibiting a particular difficulty processing action verbs. Co-speech gestures embody a link between action and language and contribute significantly to communication in healthy people. Here, we investigated how co-speech gestures depicting actions are affected in PD, in particular with respect to the visual perspective, or viewpoint, they depict. Gestures are closely related to mental imagery and motor simulations, but people with PD may be impaired in the way they simulate actions from a first-person perspective and may compensate for this by relying more on third-person visual features. We analysed the action-depicting gestures produced by mild-moderate PD patients and age-matched controls on an action description task and examined the relationship between gesture viewpoint, action naming, and performance on an action observation task (weight judgement). Healthy controls produced the majority of their action gestures from a first-person perspective, whereas PD patients produced a greater proportion of gestures from a third-person perspective. We propose that this reflects a compensatory reliance on third-person visual features in the simulation of actions in PD. Performance was also impaired in action naming and weight judgement, although this was unrelated to gesture viewpoint. Our findings provide a more comprehensive understanding of how action-language impairments in PD impact on action communication and of the cognitive underpinnings of this impairment, as well as elucidating the role of action simulation in gesture production.
  • Hwang, S.-O., Tomita, N., Morgan, H., Ergin, R., İlkbaşaran, D., Seegers, S., Lepic, R., & Padden, C. (2016). Of the body and the hands: patterned iconicity for semantic categories. Language and Cognition, 9(4), 573-602. doi:10.1017/langcog.2016.28.

    Abstract

    This paper examines how gesturers and signers use their bodies to express concepts such as instrumentality and humanness. Comparing across eight sign languages (American, Japanese, German, Israeli, and Kenyan Sign Languages, Ha Noi Sign Language of Vietnam, Central Taurus Sign Language of Turkey, and Al-Sayyid Bedouin Sign Language of Israel) and the gestures of American non-signers, we find recurring patterns for naming entities in three semantic categories (tools, animals, and fruits & vegetables). These recurring patterns are captured in a classification system that identifies iconic strategies based on how the body is used together with the hands. Across all groups, tools are named with manipulation forms, where the head and torso represent those of a human agent. Animals tend to be identified with personification forms, where the body serves as a map for a comparable non-human body. Fruits & vegetables tend to be identified with object forms, where the hands act independently from the rest of the body to represent static features of the referent. We argue that these iconic patterns are rooted in using the body for communication, and provide a basis for understanding how meaningful communication emerges quickly in gesture and persists in emergent and established sign languages.
  • Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2020). How in-group bias influences the level of detail of speaker-specific information encoded in novel lexical representations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(5), 894-906. doi:10.1037/xlm0000765.

    Abstract

    An important issue in theories of word learning is how abstract or context-specific representations of novel words are. One aspect of this broad issue is how well learners maintain information about the source of novel words. We investigated whether listeners’ source memory was better for words learned from members of their in-group (students of their own university) than it is for words learned from members of an out-group (students from another institution). In the first session, participants saw 6 faces and learned which of the depicted students attended either their own or a different university. In the second session, they learned competing labels (e.g., citrus-peller and citrus-schiller; in English, lemon peeler and lemon stripper) for novel gadgets, produced by the in-group and out-group speakers. Participants were then tested for source memory of these labels and for the strength of their in-group bias, that is, for how much they preferentially process in-group over out-group information. Analyses of source memory accuracy demonstrated an interaction between speaker group membership status and participants’ in-group bias: Stronger in-group bias was associated with less accurate source memory for out-group labels than in-group labels. These results add to the growing body of evidence on the importance of social variables for adult word learning.
  • Iacozza, S. (2020). Exploring social biases in language processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Iliadis, S. I., Sylvén, S., Hellgren, C., Olivier, J. D., Schijven, D., Comasco, E., Chrousos, G. P., Sundström Poromaa, I., & Skalkidou, A. (2016). Mid-pregnancy corticotropin-releasing hormone levels in association with postpartum depressive symptoms. Depression and Anxiety, 33(11), 1023-1030. doi:10.1002/da.22529.

    Abstract

    Background: Peripartum depression is a common cause of pregnancy- and postpartum-related morbidity. The production of corticotropin-releasing hormone (CRH) from the placenta alters the profile of hypothalamus–pituitary–adrenal axis hormones and may be associated with postpartum depression. The purpose of this study was to assess, in nondepressed pregnant women, the possible association between CRH levels in pregnancy and depressive symptoms postpartum. Methods: A questionnaire containing demographic data and the Edinburgh Postnatal Depression Scale (EPDS) was completed in gestational weeks 17 and 32, and at 6 weeks postpartum. Blood samples were collected in week 17 for assessment of CRH. A logistic regression model was constructed, using postpartum EPDS score as the dependent variable and log-transformed CRH levels as the independent variable. Confounding factors were included in the model. Subanalyses after exclusion of study subjects with preterm birth, newborns small for gestational age (SGA), and women on corticosteroids were performed. Results: Five hundred thirty-five women without depressive symptoms during pregnancy were included. Logistic regression showed an association between high CRH levels in gestational week 17 and postpartum depressive symptoms, before and after controlling for several confounders (unadjusted OR = 1.11, 95% CI 1.01–1.22; adjusted OR = 1.13, 95% CI 1.02–1.26; per 0.1 unit increase in log CRH). Exclusion of women with preterm birth and newborns SGA as well as women who used inhalation corticosteroids during pregnancy did not alter the results. Conclusions: This study suggests an association between high CRH levels in gestational week 17 and the development of postpartum depressive symptoms, among women without depressive symptoms during pregnancy.
  • Indefrey, P. (2016). On putative shortcomings and dangerous future avenues: response to Strijkers & Costa. Language, Cognition and Neuroscience, 31(4), 517-520. doi:10.1080/23273798.2015.1128554.
  • Indefrey, P., & Levelt, W. J. M. (2000). The neural correlates of language production. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences; 2nd ed. (pp. 845-865). Cambridge, MA: MIT Press.

    Abstract

    This chapter reviews the findings of 58 word production experiments using different tasks and neuroimaging techniques. The reported cerebral activation sites are coded in a common anatomic reference system. Based on a functional model of language production, the different word production tasks are analyzed in terms of their processing components. This approach allows a distinction between the core process of word production and preceding task-specific processes (lead-in processes) such as visual or auditory stimulus recognition. The core process of word production is subserved by a left-lateralized perisylvian/thalamic language production network. Within this network there seems to be functional specialization for the processing stages of word production. In addition, this chapter includes a discussion of the available evidence on syntactic production, self-monitoring, and the time course of word production.
  • Ingvar, M., & Petersson, K. M. (2000). Functional maps and brain networks. In A. W. Toga (Ed.), Brain mapping: The systems (pp. 111-140). San Diego: Academic Press.
  • Irvine, E., & Roberts, S. G. (2016). Deictic tools can limit the emergence of referential symbol systems. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/99.html.

    Abstract

    Previous experiments and models show that the pressure to communicate can lead to the emergence of symbols in specific tasks. The experiment presented here suggests that the ability to use deictic gestures can reduce the pressure for symbols to emerge in co-operative tasks. In the 'gesture-only' condition, pairs built a structure together in 'Minecraft', and could only communicate using a small range of gestures. In the 'gesture-plus' condition, pairs could also use sound to develop a symbol system if they wished. All pairs were taught a pointing convention. None of the pairs we tested developed a symbol system, and performance was no different across the two conditions. We therefore suggest that deictic gestures, and non-referential means of organising activity sequences, are often sufficient for communication. This suggests that the emergence of linguistic symbols in early hominids may have been late and patchy with symbols only emerging in contexts where they could significantly improve task success or efficiency. Given the communicative power of pointing however, these contexts may be fewer than usually supposed. An approach for identifying these situations is outlined.
  • Irizarri van Suchtelen, P. (2016). Spanish as a heritage language in the Netherlands. A cognitive linguistic exploration. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Isbilen, E. S., McCauley, S. M., Kidd, E., & Christiansen, M. H. (2020). Statistically induced chunking recall: A memory‐based approach to statistical learning. Cognitive Science, 44(7): e12848. doi:10.1111/cogs.12848.

    Abstract

    The computations involved in statistical learning have long been debated. Here, we build on work suggesting that a basic memory process, chunking, may account for the processing of statistical regularities into larger units. Drawing on methods from the memory literature, we developed a novel paradigm to test statistical learning by leveraging a robust phenomenon observed in serial recall tasks: that short‐term memory is fundamentally shaped by long‐term distributional learning. In the statistically induced chunking recall (SICR) task, participants are exposed to an artificial language, using a standard statistical learning exposure phase. Afterward, they recall strings of syllables that either follow the statistics of the artificial language or comprise the same syllables presented in a random order. We hypothesized that if individuals had chunked the artificial language into word‐like units, then the statistically structured items would be more accurately recalled relative to the random controls. Our results demonstrate that SICR effectively captures learning in both the auditory and visual modalities, with participants displaying significantly improved recall of the statistically structured items, and even recalling specific trigram chunks from the input. SICR also exhibits greater test–retest reliability in the auditory modality and greater sensitivity to individual differences in both modalities than the standard two‐alternative forced‐choice task. These results thereby provide key empirical support to the chunking account of statistical learning and contribute a valuable new tool to the literature.
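
    A hedged sketch (not the authors' code) of how recall in a SICR-style task might be scored: syllable-by-syllable serial-position accuracy, aggregated separately for statistically structured items and random-order controls. The trials and syllables below are hypothetical.

        # Hypothetical scoring of statistically induced chunking recall (SICR).
        def recall_accuracy(target, response):
            """Proportion of syllables recalled in the correct serial position."""
            hits = sum(t == r for t, r in zip(target, response))
            return hits / len(target)

        trials = [
            {"type": "structured", "target": ["pa", "bi", "ku", "go", "la", "tu"],
             "response": ["pa", "bi", "ku", "go", "tu", "la"]},
            {"type": "random", "target": ["ku", "pa", "tu", "bi", "go", "la"],
             "response": ["ku", "tu", "pa", "la", "bi", "go"]},
        ]

        by_condition = {}
        for trial in trials:
            by_condition.setdefault(trial["type"], []).append(
                recall_accuracy(trial["target"], trial["response"]))

        for condition, scores in by_condition.items():
            print(condition, sum(scores) / len(scores))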
  • Ito, A., Corley, M., Pickering, M. J., Martin, A. E., & Nieuwland, M. S. (2016). Predicting form and meaning: Evidence from brain potentials. Journal of Memory and Language, 86, 157-171. doi:10.1016/j.jml.2015.10.007.

    Abstract

    We used ERPs to investigate the pre-activation of form and meaning in language comprehension. Participants read high-cloze sentence contexts (e.g., “The student is going to the library to borrow a…”), followed by a word that was predictable (book), form-related (hook) or semantically related (page) to the predictable word, or unrelated (sofa). At a 500 ms SOA (Experiment 1), semantically related words, but not form-related words, elicited a reduced N400 compared to unrelated words. At a 700 ms SOA (Experiment 2), semantically related words and form-related words elicited reduced N400 effects, but the effect for form-related words occurred in very high-cloze sentences only. At both SOAs, form-related words elicited an enhanced, post-N400 posterior positivity (Late Positive Component effect). The N400 effects suggest that readers can pre-activate meaning and form information for highly predictable words, but form pre-activation is more limited than meaning pre-activation. The post-N400 LPC effect suggests that participants detected the form similarity between expected and encountered input. Pre-activation of word forms crucially depends upon the time that readers have to make predictions, in line with production-based accounts of linguistic prediction.
  • Jacoby, N., Margulis, E. H., Clayton, M., Hannon, E., Honing, H., Iversen, J., Klein, T. R., Mehr, S. A., Pearson, L., Peretz, I., Perlman, M., Polak, R., Ravignani, A., Savage, P. E., Steingo, G., Stevens, C. J., Trainor, L., Trehub, S., Veal, M., & Wald-Fuhrmann, M. (2020). Cross-cultural work in music cognition: Challenges, insights, and recommendations. Music Perception, 37(3), 185-195. doi:10.1525/mp.2020.37.3.185.

    Abstract

    Many foundational questions in the psychology of music require cross-cultural approaches, yet the vast majority of work in the field to date has been conducted with Western participants and Western music. For cross-cultural research to thrive, it will require collaboration between people from different disciplinary backgrounds, as well as strategies for overcoming differences in assumptions, methods, and terminology. This position paper surveys the current state of the field and offers a number of concrete recommendations focused on issues involving ethics, empirical methods, and definitions of “music” and “culture.”
  • Janse, E., Sennema, A., & Slis, A. (2000). Fast speech timing in Dutch: The durational correlates of lexical stress and pitch accent. In Proceedings of the VIth International Conference on Spoken Language Processing, Vol. III (pp. 251-254).

    Abstract

    In this study we investigated the durational correlates of lexical stress and pitch accent at normal and fast speech rate in Dutch. Previous literature on English shows that durations of lexically unstressed vowels are reduced more than stressed vowels when speakers increase their speech rate. We found that the same holds for Dutch, irrespective of whether the unstressed vowel is schwa or a "full" vowel. In the same line, we expected that vowels in words without a pitch accent would be shortened relatively more than vowels in words with a pitch accent. This was not the case: if anything, the accented vowels were shortened relatively more than the unaccented vowels. We conclude that duration is an important cue for lexical stress, but not for pitch accent.
  • Janse, E. (2000). Intelligibility of time-compressed speech: Three ways of time-compression. In Proceedings of the VIth International Conference on Spoken Language Processing, vol. III (pp. 786-789).

    Abstract

    Studies on fast speech have shown that word-level timing of fast speech differs from that of normal rate speech in that unstressed syllables are shortened more than stressed syllables as speech rate increases. An earlier experiment showed that the intelligibility of time-compressed speech could not be improved by making its temporal organisation closer to natural fast speech. To test the hypothesis that segmental intelligibility is more important than prosodic timing in listening to time-compressed speech, the intelligibility of bisyllabic words was tested in three time-compression conditions: either stressed and unstressed syllable were compressed to the same degree, or the stressed syllable was compressed more than the unstressed syllable, or the reverse. As was found before, imitating word-level timing of fast speech did not improve intelligibility over linear compression. However, the results did not confirm the hypothesis either: there was no difference in intelligibility between the three compression conditions. We conclude that segmental intelligibility plays an important role, but further research is necessary to decide between the contributions of prosody and segmental intelligibility to the word-level intelligibility of time-compressed speech.
  • Janssen, R., Nolfi, S., Haselager, W. F. G., & Sprinkhuizen-Kuyper, I. G. (2016). Cyclic Incrementality in Competitive Coevolution: Evolvability through Pseudo-Baldwinian Switching-Genes. Artificial Life, 22(3), 319-352. doi:10.1162/ARTL_a_00208.

    Abstract

    Coevolving systems are notoriously difficult to understand. This is largely due to the Red Queen effect that dictates heterospecific fitness interdependence. In simulation studies of coevolving systems, master tournaments are often used to obtain more informed fitness measures by testing evolved individuals against past and future opponents. However, such tournaments still contain certain ambiguities. We introduce the use of a phenotypic cluster analysis to examine the distribution of opponent categories throughout an evolutionary sequence. This analysis, adopted from widespread usage in the bioinformatics community, can be applied to master tournament data. This allows us to construct behavior-based category trees, obtaining a hierarchical classification of phenotypes that are suspected to interleave during cyclic evolution. We use the cluster data to establish the existence of switching-genes that control opponent specialization, suggesting the retention of dormant genetic adaptations, that is, genetic memory. Our overarching goal is to reiterate how computer simulations may have importance to the broader understanding of evolutionary dynamics in general. We emphasize a further shift from a component-driven to an interaction-driven perspective in understanding coevolving systems. As yet, it is unclear how the sudden development of switching-genes relates to the gradual emergence of genetic adaptability. Likely, context genes gradually provide the appropriate genetic environment wherein the switching-gene effect can be exploited.
  • Janssen, R., Winter, B., Dediu, D., Moisik, S. R., & Roberts, S. G. (2016). Nonlinear biases in articulation constrain the design space of language. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/86.html.

    Abstract

    In Iterated Learning (IL) experiments, a participant’s learned output serves as the next participant’s learning input (Kirby et al., 2014). IL can be used to model cultural transmission and has indicated that weak biases can be amplified through repeated cultural transmission (Kirby et al., 2007). So, for example, structural language properties can emerge over time because languages come to reflect the cognitive constraints in the individuals that learn and produce the language. Similarly, we propose that languages may also reflect certain anatomical biases. Do sound systems adapt to the affordances of the articulation space induced by the vocal tract?
    The human vocal tract has inherent nonlinearities which might derive from acoustics and aerodynamics (cf. quantal theory, see Stevens, 1989) or biomechanics (cf. Gick & Moisik, 2015). For instance, moving the tongue anteriorly along the hard palate to produce a fricative does not result in large changes in acoustics in most cases, but for a small range there is an abrupt change from a perceived palato-alveolar [ʃ] to alveolar [s] sound (Perkell, 2012). Nonlinearities such as these might bias all human speakers to converge on a very limited set of phonetic categories, and might even be a basis for combinatoriality or phonemic ‘universals’.
    While IL typically uses discrete symbols, Verhoef et al. (2014) have used slide whistles to produce a continuous signal. We conducted an IL experiment with human subjects who communicated using a digital slide whistle for which the degree of nonlinearity is controlled. A single parameter (α) changes the mapping from slide whistle position (the ‘articulator’) to the acoustics. With α=0, the position of the slide whistle maps Bark-linearly to the acoustics. As α approaches 1, the mapping gets more double-sigmoidal, creating three plateaus where large ranges of positions map to similar frequencies. In more abstract terms, α represents the strength of a nonlinear (anatomical) bias in the vocal tract.
    Six chains (138 participants) of dyads were tested, each chain with a different, fixed α. Participants had to communicate four meanings by producing a continuous signal using the slide-whistle in a ‘director-matcher’ game, alternating roles (cf. Garrod et al., 2007).
    Results show that for high αs, subjects quickly converged on the plateaus. This quick convergence is indicative of a strong bias, repelling subjects away from unstable regions already within-subject. Furthermore, high αs lead to the emergence of signals that oscillate between two (out of three) plateaus. Because the sigmoidal spaces are spatially constrained, participants increasingly used the sequential/temporal dimension. As a result of this, the average duration of signals with high α was ~100ms longer than with low α. These oscillations could be an expression of a basis for phonemic combinatoriality.
    We have shown that it is possible to manipulate the magnitude of an articulator-induced non-linear bias in a slide whistle IL framework. The results suggest that anatomical biases might indeed constrain the design space of language. In particular, the signaling systems in our study quickly converged (within-subject) on the use of stable regions. While these conclusions were drawn from experiments using slide whistles with a relatively strong bias, weaker biases could possibly be amplified over time by repeated cultural transmission, and likely lead to similar outcomes.
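
    One illustrative way (an assumption, not the authors' exact formula) to interpolate between a Bark-linear position-to-frequency mapping (α = 0) and a double-sigmoidal mapping with three plateaus (α approaching 1), as described in the abstract above.

        # Hypothetical slide-whistle mapping with a tunable nonlinearity alpha.
        import numpy as np

        def whistle_mapping(position, alpha, steepness=30.0):
            """position in [0, 1] -> normalised frequency in [0, 1]."""
            linear = position
            # Two sigmoids centred at 1/3 and 2/3 create three quasi-stable plateaus
            sigmoid = (1 / (1 + np.exp(-steepness * (position - 1 / 3)))
                       + 1 / (1 + np.exp(-steepness * (position - 2 / 3)))) / 2
            return (1 - alpha) * linear + alpha * sigmoid

        positions = np.linspace(0, 1, 11)
        print(whistle_mapping(positions, alpha=0.0))   # linear mapping
        print(whistle_mapping(positions, alpha=0.9))   # plateau-like mapping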
  • Janssen, R., Dediu, D., & Moisik, S. R. (2016). Simple agents are able to replicate speech sounds using 3d vocal tract model. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/97.html.

    Abstract

    Many factors have been proposed to explain why groups of people use different speech sounds in their language. These range from cultural, cognitive, environmental (e.g., Everett, et al., 2015) to anatomical (e.g., vocal tract (VT) morphology). How could such anatomical properties have led to the similarities and differences in speech sound distributions between human languages?

    It is known that hard palate profile variation can induce different articulatory strategies in speakers (e.g., Brunner et al., 2009). That is, different hard palate profiles might induce a kind of bias on speech sound production, easing some types of sounds while impeding others. In a population of speakers in which a proportion of individuals share certain anatomical properties, even subtle VT biases might become expressed at the population level (through, e.g., bias amplification; Kirby et al., 2007). However, before we look into population-level effects, we should first look at within-individual anatomical factors. For that, we have developed a computer-simulated analogue for a human speaker: an agent. Our agent is designed to replicate speech sounds using a production and cognition module in a computationally tractable manner.

    Previous agent models have often used more abstract (e.g., symbolic) signals. (e.g., Kirby et al., 2007). We have equipped our agent with a three-dimensional model of the VT (the production module, based on Birkholz, 2005) to which we made numerous adjustments. Specifically, we used a 4th-order Bezier curve that is able to capture hard palate variation on the mid-sagittal plane (XXX, 2015). Using an evolutionary algorithm, we were able to fit the model to human hard palate MRI tracings, yielding high-accuracy fits and using as few as two parameters. Finally, we show that the samples map well-dispersed to the parameter-space, demonstrating that the model cannot generate unrealistic profiles. We can thus use this procedure to import palate measurements into our agent’s production module to investigate the effects on acoustics. We can also exaggerate/introduce novel biases.

    Our agent is able to control the VT model using the cognition module.

    Previous research has focused on detailed neurocomputation (e.g., Kröger et al., 2014) that highlights e.g., neurobiological principles or speech recognition performance. However, the brain is not the focus of our current study. Furthermore, present-day computing throughput likely does not allow for large-scale deployment of these architectures, as required by the population model we are developing. Thus, the question whether a very simple cognition module is able to replicate sounds in a computationally tractable manner, and even generalize over novel stimuli, is one worthy of attention in its own right.

    Our agent’s cognition module is based on running an evolutionary algorithm on a large population of feed-forward neural networks (NNs). As such, (anatomical) bias strength can be thought of as an attractor basin area within the parameter-space the agent has to explore. The NN we used consists of a triple-layered (fully-connected), directed graph. The input layer (three neurons) receives the formants frequencies of a target-sound. The output layer (12 neurons) projects to the articulators in the production module. A hidden layer (seven neurons) enables the network to deal with nonlinear dependencies. The Euclidean distance (first three formants) between target and replication is used as fitness measure. Results show that sound replication is indeed possible, with Euclidean distance quickly approaching a close-to-zero asymptote.

    Statistical analysis should reveal if the agent can also: a) Generalize: Can it replicate sounds not exposed to during learning? b) Replicate consistently: Do different, isolated agents always converge on the same sounds? c) Deal with consolidation: Can it still learn new sounds after an extended learning phase (‘infancy’) has been terminated? Finally, a comparison with more complex models will be used to demonstrate robustness.
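
    A highly simplified sketch of the cognition module described above: a small feed-forward network (3 inputs, 7 hidden units, 12 outputs) whose weights are optimised by an evolutionary algorithm to minimise the Euclidean distance between target and produced formants. The real production module is a 3D vocal-tract model; the toy_production function below is a stand-in placeholder.

        # Hypothetical agent: evolve feed-forward network weights to match formants.
        import numpy as np

        rng = np.random.default_rng(3)
        N_IN, N_HID, N_OUT = 3, 7, 12
        N_W = N_IN * N_HID + N_HID * N_OUT

        def forward(weights, formants):
            """Map three target formants to twelve articulator parameters."""
            w1 = weights[:N_IN * N_HID].reshape(N_IN, N_HID)
            w2 = weights[N_IN * N_HID:].reshape(N_HID, N_OUT)
            return np.tanh(np.tanh(formants @ w1) @ w2)

        MIX = rng.normal(size=(N_OUT, 3))      # placeholder for the 3D vocal tract
        def toy_production(articulators):
            return articulators @ MIX          # articulator parameters -> formants

        def fitness(weights, target):
            produced = toy_production(forward(weights, target))
            return -np.linalg.norm(produced - target)   # higher is better

        target = np.array([0.5, 1.5, 2.5])     # illustrative formant targets (kHz)
        pop = rng.normal(size=(200, N_W))
        for generation in range(100):
            scores = np.array([fitness(w, target) for w in pop])
            parents = pop[np.argsort(scores)[-50:]]     # keep the fittest quarter
            children = parents[rng.integers(0, 50, 150)] \
                + rng.normal(scale=0.05, size=(150, N_W))
            pop = np.vstack([parents, children])

        print("best distance:", -max(fitness(w, target) for w in pop))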
  • Janzen, G., Herrmann, T., Katz, S., & Schweizer, K. (2000). Oblique Angled Intersections and Barriers: Navigating through a Virtual Maze. In Spatial Cognition II (pp. 277-294). Berlin: Springer.

    Abstract

    The configuration of a spatial layout has a substantial effect on the acquisition and the representation of the environment. In four experiments, we investigated navigation difficulties arising at oblique-angled intersections. In the first three studies we investigated specific arrow-fork configurations. Depending on the branch subjects use to enter the intersection, decision latencies and numbers of errors differ. If subjects see the intersection as a fork, it is more difficult to find the correct way than if it is seen as an arrow. In a fourth study we investigated different heuristics people use while making a detour around a barrier. Detour behaviour varies with the perspective. If subjects learn and navigate through the maze in a field perspective, they use a heuristic of preferring right-angled paths. If they have a view from above and acquire their knowledge in an observer perspective, they use oblique-angled paths more often.

  • Jaspers, D., & Seuren, P. A. M. (2016). The Square of opposition in catholic hands: A chapter in the history of 20th-century logic. Logique et Analyse, 59(233), 1-35.

    Abstract

    The present study describes how three now almost forgotten mid-20th-century logicians, the American Paul Jacoby and the Frenchmen Augustin Sesmat and Robert Blanché, all three ardent Catholics, tried to restore traditional predicate logic to a position of respectability by expanding the classic Square of Opposition to a hexagon of logical relations, showing the logical and cognitive advantages of such an expansion. The nature of these advantages is discussed in the context of modern research regarding the relations between logic, language, and cognition. It is desirable to call attention to these attempts, as they are, though almost totally forgotten, highly relevant against the backdrop of the clash between modern and traditional logic. It is argued that this clash was and is unnecessary, as both forms of predicate logic are legitimate, each in its own right. The attempts by Jacoby, Sesmat, and Blanché are, moreover, of interest to the history of logic in a cultural context in that, in their own idiosyncratic ways, they fit into the general pattern of the Catholic cultural revival that took place roughly between the years 1840 and 1960. The Catholic Church had put up stiff resistance to modern mathematical logic, considering it dehumanizing and a threat to Catholic doctrine. Both the wider cultural context and the specific implications for logic are described and analyzed, in conjunction with the more general philosophical and doctrinal issues involved.
  • Jebb, D., Huang, Z., Pippel, M., Hughes, G. M., Lavrichenko, K., Devanna, P., Winkler, S., Jermiin, L. S., Skirmuntt, E. C., Katzourakis, A., Burkitt-Gray, L., Ray, D. A., Sullivan, K. A. M., Roscito, J. G., Kirilenko, B. M., Dávalos, L. M., Corthals, A. P., Power, M. L., Jones, G., Ransome, R. D., Dechmann, D., Locatelli, A. G., Puechmaille, S. J., Fedrigo, O., Jarvis, E. D., Hiller, M., Vernes, S. C., Myers, E. W., & Teeling, E. C. (2020). Six reference-quality genomes reveal evolution of bat adaptations. Nature, 583, 578-584. doi:10.1038/s41586-020-2486-3.

    Abstract

    Bats possess extraordinary adaptations, including flight, echolocation, extreme longevity and unique immunity. High-quality genomes are crucial for understanding the molecular basis and evolution of these traits. Here we incorporated long-read sequencing and state-of-the-art scaffolding protocols to generate, to our knowledge, the first reference-quality genomes of six bat species (Rhinolophus ferrumequinum, Rousettus aegyptiacus, Phyllostomus discolor, Myotis myotis, Pipistrellus kuhlii and Molossus molossus). We integrated gene projections from our ‘Tool to infer Orthologs from Genome Alignments’ (TOGA) software with de novo and homology gene predictions as well as short- and long-read transcriptomics to generate highly complete gene annotations. To resolve the phylogenetic position of bats within Laurasiatheria, we applied several phylogenetic methods to comprehensive sets of orthologous protein-coding and noncoding regions of the genome, and identified a basal origin for bats within Scrotifera. Our genome-wide screens revealed positive selection on hearing-related genes in the ancestral branch of bats, which is indicative of laryngeal echolocation being an ancestral trait in this clade. We found selection and loss of immunity-related genes (including pro-inflammatory NF-κB regulators) and expansions of anti-viral APOBEC3 genes, which highlights molecular mechanisms that may contribute to the exceptional immunity of bats. Genomic integrations of diverse viruses provide a genomic record of historical tolerance to viral infection in bats. Finally, we found and experimentally validated bat-specific variation in microRNAs, which may regulate bat-specific gene-expression programs. Our reference-quality bat genomes provide the resources required to uncover and validate the genomic basis of adaptations of bats, and stimulate new avenues of research that are directly relevant to human health and disease.

    Additional information

    41586_2020_2486_MOESM1_ESM.pdf
  • Jeske, J., Kember, H., & Cutler, A. (2016). Native and non-native English speakers' use of prosody to predict sentence endings. In Proceedings of the 16th Australasian International Conference on Speech Science and Technology (SST2016).
  • Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115. doi:10.1075/intp.5.2.04jes.

    Abstract

    Language processing is influenced by multiple sources of information. We examined whether performance in simultaneous interpreting would improve when two sources of information were provided, the auditory speech as well as the corresponding lip movements, in comparison to presenting the auditory speech alone. Although there was an improvement in sentence recognition when visible speech was presented, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visual speech did not contribute to performance could be the presentation of the auditory signal without noise (Massaro, 1998). This hypothesis should be tested in future work. Furthermore, it should be investigated whether an effect of visible speech can be found in other contexts, in which visual information could provide cues to emotion, prosody, or syntax.
  • Jessop, A., & Chang, F. (2020). Thematic role information is maintained in the visual object-tracking system. Quarterly Journal of Experimental Psychology, 73(1), 146-163. doi:10.1177/1747021819882842.

    Abstract

    Thematic roles characterise the functions of participants in events, but there is no agreement on how these roles are identified in the real world. In three experiments, we examined how role identification in push events is supported by the visual object-tracking system. Participants saw one to three push events in visual scenes with nine identical randomly moving circles. After a period of random movement, two circles from one of the push events and a foil object were given different colours and the participants had to identify their roles in the push with an active sentence, such as red pushed blue. It was found that the participants could track the agent and patient targets and generate descriptions that identified their roles at above chance levels, even under difficult conditions, such as when tracking multiple push events (Experiments 1–3), fixating their gaze (Experiment 1), performing a concurrent speeded-response task (Experiment 2), and when tracking objects that were temporarily invisible (Experiment 3). The results were consistent with previous findings of an average tracking capacity limit of four objects, individual differences in this capacity, and the use of attentional strategies. The studies demonstrated that thematic role information can be maintained when tracking the identity of visually identical objects, then used to map role fillers (e.g., the agent of a push event) into their appropriate sentence positions. This suggests that thematic role features are stored temporarily in the visual object-tracking system.
  • Jiang, T., Zhang, W., Wen, W., Zhu, H., Du, H., Zhu, X., Gao, X., Zhang, H., Dong, Q., & Chen, C. (2016). Reevaluating the two-representation model of numerical magnitude processing. Memory & Cognition, 44, 162-170. doi:10.3758/s13421-015-0542-2.

    Abstract

    One debate in mathematical cognition centers on the single-representation model versus the two-representation model. Using an improved number Stroop paradigm (i.e., systematically manipulating physical size distance), in the present study we tested the predictions of the two models for number magnitude processing. The results supported the single-representation model and, more importantly, explained how a design problem (failure to manipulate physical size distance) and an analytical problem (failure to consider the interaction between congruity and task-irrelevant numerical distance) might have contributed to the evidence used to support the two-representation model. This study, therefore, can help settle the debate between the single-representation and two-representation models.
  • St. John-Saaltink, E. (2016). When the past influences the present: Modulations of the sensory response by prior knowledge and task set. PhD Thesis, Radboud University, Nijmegen.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2000). The development of word recognition: The use of the possible-word constraint by 12-month-olds. In L. Gleitman, & A. Joshi (Eds.), Proceedings of CogSci 2000 (pp. 1034). London: Erlbaum.
  • Jongman, S. R., Roelofs, A., & Lewis, A. G. (2020). Attention for speaking: Prestimulus motor-cortical alpha power predicts picture naming latencies. Journal of Cognitive Neuroscience, 32(5), 747-761. doi:10.1162/jocn_a_01513.

    Abstract

    There is a range of variability in the speed with which a single speaker will produce the same word from one instance to another. Individual differences studies have shown that the speed of production and the ability to maintain attention are related. This study investigated whether fluctuations in production latencies can be explained by spontaneous fluctuations in speakers' attention just prior to initiating speech planning. A relationship between individuals' incidental attentional state and response performance is well attested in visual perception, with lower prestimulus alpha power associated with faster manual responses. Alpha is thought to have an inhibitory function: Low alpha power suggests less inhibition of a specific brain region, whereas high alpha power suggests more inhibition. Does the same relationship hold for cognitively demanding tasks such as word production? In this study, participants named pictures while EEG was recorded, with alpha power taken to index an individual's momentary attentional state. Participants' level of alpha power just prior to picture presentation and just prior to speech onset predicted subsequent naming latencies. Specifically, higher alpha power in the motor system resulted in faster speech initiation. Our results suggest that one index of a lapse of attention during speaking is reduced inhibition of motor-cortical regions: Decreased motor-cortical alpha power indicates reduced inhibition of this area while early stages of production planning unfold, which leads to increased interference from motor-cortical signals and longer naming latencies. This study shows that the language production system is not impermeable to the influence of attention.
  • Jongman, S. R. (2016). Sustained attention in language production. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Jongman, S. R., Piai, V., & Meyer, A. S. (2020). Planning for language production: The electrophysiological signature of attention to the cue to speak. Language, Cognition and Neuroscience, 35(7), 915-932. doi:10.1080/23273798.2019.1690153.

    Abstract

    In conversation, speech planning can overlap with listening to the interlocutor. It has been postulated that once there is enough information to formulate a response, planning is initiated and the response is maintained in working memory. Concurrently, the auditory input is monitored for the turn end such that responses can be launched promptly. In three EEG experiments, we aimed to identify the neural signature of phonological planning and monitoring by comparing delayed responding to not responding (reading aloud, repetition and lexical decision). These comparisons consistently resulted in a sustained positivity and beta power reduction over posterior regions. We argue that these effects reflect attention to the sequence end. Phonological planning and maintenance were not detected in the neural signature even though it is highly likely these were taking place. This suggests that EEG must be used cautiously to identify response planning when the neural signal is overridden by attention effects.
  • Jordanoska, I. (2020). The pragmatics of sentence final and second position particles in Wolof. PhD Thesis, University of Vienna, Vienna.
  • Kartushina, N., Hervais-Adelman, A., Frauenfelder, U. H., & Golestani, N. (2016). Mutual influences between native and non-native vowels in production: Evidence from short-term visual articulatory feedback training. Journal of Phonetics, 57, 21-39. doi:10.1016/j.wocn.2016.05.001.

    Abstract

    We studied mutual influences between native and non-native vowel production during learning, i.e., before and after short-term visual articulatory feedback training with non-native sounds. Monolingual French speakers were trained to produce two non-native vowels: the Danish /ɔ/, which is similar to the French /o/, and the Russian /ɨ/, which is dissimilar from French vowels. We examined relationships between the production of French and non-native vowels before training, and the effects of training with non-native vowels on the production of French ones. We assessed for each participant the acoustic position and compactness of the trained vowels, and of the French /o/, /ø/, /y/ and /i/ vowels, which are acoustically closest to the trained vowels. Before training, the compactness of the French vowels was positively related to the accuracy and compactness in the production of non-native vowels. After training, French speakers’ accuracy and stability in the production of the two trained vowels improved on average by 19% and 37.5%, respectively. Interestingly, the production of native vowels was also affected by this learning process, with a drift towards non-native vowels. The amount of phonetic drift appears to depend on the degree of similarity between the native and non-native sounds.
  • Kastens, K. (2020). The Jerome Bruner Library treasure. In M. E. Poulsen (Ed.), The Jerome Bruner Library: From New York to Nijmegen (pp. 29-34). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Kaufeld, G., Naumann, W., Meyer, A. S., Bosker, H. R., & Martin, A. E. (2020). Contextual speech rate influences morphosyntactic prediction and integration. Language, Cognition and Neuroscience, 35(7), 933-948. doi:10.1080/23273798.2019.1701691.

    Abstract

    Understanding spoken language requires the integration and weighting of multiple cues, and may call on cue integration mechanisms that have been studied in other areas of perception. In the current study, we used eye-tracking (visual-world paradigm) to examine how contextual speech rate (a lower-level, perceptual cue) and morphosyntactic knowledge (a higher-level, linguistic cue) are iteratively combined and integrated. Results indicate that participants used contextual rate information immediately, which we interpret as evidence of perceptual inference and the generation of predictions about upcoming morphosyntactic information. Additionally, we observed that early rate effects remained active in the presence of later conflicting lexical information. This result demonstrates that (1) contextual speech rate functions as a cue to morphosyntactic inferences, even in the presence of subsequent disambiguating information; and (2) listeners iteratively use multiple sources of information to draw inferences and generate predictions during speech comprehension. We discuss the implications of these demonstrations for theories of language processing.
  • Kaufeld, G., Ravenschlag, A., Meyer, A. S., Martin, A. E., & Bosker, H. R. (2020). Knowledge-based and signal-based cues are weighted flexibly during spoken language comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 549-562. doi:10.1037/xlm0000744.

    Abstract

    During spoken language comprehension, listeners make use of both knowledge-based and signal-based sources of information, but little is known about how cues from these distinct levels of representational hierarchy are weighted and integrated online. In an eye-tracking experiment using the visual world paradigm, we investigated the flexible weighting and integration of morphosyntactic gender marking (a knowledge-based cue) and contextual speech rate (a signal-based cue). We observed that participants used the morphosyntactic cue immediately to make predictions about upcoming referents, even in the presence of uncertainty about the cue’s reliability. Moreover, we found speech rate normalization effects in participants’ gaze patterns even in the presence of preceding morphosyntactic information. These results demonstrate that cues are weighted and integrated flexibly online, rather than adhering to a strict hierarchy. We further found rate normalization effects in the looking behavior of participants who showed a strong behavioral preference for the morphosyntactic gender cue. This indicates that rate normalization effects are robust and potentially automatic. We discuss these results in light of theories of cue integration and the two-stage model of acoustic context effects.
  • Kaufeld, G., Bosker, H. R., Ten Oever, S., Alday, P. M., Meyer, A. S., & Martin, A. E. (2020). Linguistic structure and meaning organize neural oscillations into a content-specific hierarchy. The Journal of Neuroscience, 40(49), 9467-9475. doi:10.1523/JNEUROSCI.0302-20.2020.

    Abstract

    Neural oscillations track linguistic information during speech comprehension (e.g., Ding et al., 2016; Keitel et al., 2018), and are known to be modulated by acoustic landmarks and speech intelligibility (e.g., Doelling et al., 2014; Zoefel & VanRullen, 2015). However, studies investigating linguistic tracking have either relied on non-naturalistic isochronous stimuli or failed to fully control for prosody. Therefore, it is still unclear whether low frequency activity tracks linguistic structure during natural speech, where linguistic structure does not follow such a palpable temporal pattern. Here, we measured electroencephalography (EEG) and manipulated the presence of semantic and syntactic information apart from the timescale of their occurrence, while carefully controlling for the acoustic-prosodic and lexical-semantic information in the signal. EEG was recorded while 29 adult native speakers (22 women, 7 men) listened to naturally-spoken Dutch sentences, jabberwocky controls with morphemes and sentential prosody, word lists with lexical content but no phrase structure, and backwards acoustically-matched controls. Mutual information (MI) analysis revealed sensitivity to linguistic content: MI was highest for sentences at the phrasal (0.8-1.1 Hz) and lexical timescale (1.9-2.8 Hz), suggesting that the delta-band is modulated by lexically-driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure and meaning) organizes neural oscillations beyond the timescale and rhythmicity of the stimulus. This pattern is consistent with neurophysiologically inspired models of language comprehension (Martin, 2016, 2020; Martin & Doumas, 2017) where oscillations encode endogenously generated linguistic content over and above exogenous or stimulus-driven timing and rhythm information.
  • Kavaklioglu, T., Ajmal, M., Hameed, A., & Francks, C. (2016). Whole exome sequencing for handedness in a large and highly consanguineous family. Neuropsychologia, 93, part B, 342-349. doi:10.1016/j.neuropsychologia.2015.11.010.

    Abstract

    Pinpointing genes involved in non-right-handedness has the potential to clarify developmental contributions to human brain lateralization. Major-gene models have been considered for human handedness which allow for phenocopy and reduced penetrance, i.e. an imperfect correspondence between genotype and phenotype. However, a recent genome-wide association scan did not detect any common polymorphisms with substantial genetic effects. Previous linkage studies in families have also not yielded significant findings. Genetic heterogeneity and/or polygenicity are therefore indicated, but it remains possible that relatively rare, or even unique, major-genetic effects may be detectable in certain extended families with many non-right-handed members. Here we applied whole exome sequencing to 17 members from a single, large consanguineous family from Pakistan. Multipoint linkage analysis across all autosomes did not yield clear candidate genomic regions for involvement in the trait and single-point analysis of exomic variation did not yield clear candidate mutations/genes. Any genetic contribution to handedness in this unusual family is therefore likely to have a complex etiology, as at the population level.
  • Kember, H., Choi, J., & Cutler, A. (2016). Processing advantages for focused words in Korean. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 702-705).

    Abstract

    In Korean, focus is expressed in accentual phrasing. To ascertain whether words focused in this manner enjoy a processing advantage analogous to that conferred by focus as expressed in, e.g., English and Dutch, we devised sentences with target words in one of four conditions: prosodic focus, syntactic focus, prosodic + syntactic focus, and no focus as a control. Thirty-two native speakers of Korean listened to blocks of 10 sentences, then were presented visually with words and asked whether or not they had heard them. Overall, words with focus were recognised significantly faster and more accurately than unfocused words. In addition, words with syntactic focus or syntactic + prosodic focus were recognised faster than words with prosodic focus alone. As for other languages, Korean focus confers a processing advantage on the words carrying it. While prosodic focus does provide an advantage, however, syntactic focus appears to provide the greater beneficial effect for recognition memory.
  • Kempen, G. (1979). A study of syntactic bookkeeping during sentence production. In H. Ueckert, & D. Rhenius (Eds.), Komplexe menschliche Informationsverarbeitung (pp. 361-368). Bern: Hans Huber.

    Abstract

    It is an important feature of the human sentence production system that semantic and syntactic processes may overlap in time and do not proceed strictly serially. That is, the process of building the syntactic form of an utterance does not always wait until the complete semantic content for that utterance has been decided upon. On the contrary, speakers will often start pronouncing the first words of a sentence while still working on further details of its semantic content. An important advantage is memory economy. Semantic and syntactic fragments do not have to occupy working memory until complete semantic and syntactic structures for an utterance have been computed. Instead, each semantic and syntactic fragment is processed as soon as possible and is kept in working memory for a minimum period of time. This raises the question of how the sentence production system can maintain syntactic coherence across syntactic fragments. Presumably there are processes of "syntactic bookkeeping" which (1) store in working memory those syntactic properties of a fragmentary sentence which are needed to eliminate ungrammatical continuations, and (2) check whether a prospective continuation is indeed compatible with the sentence constructed so far. In reaction time experiments where subjects described, under time pressure, simple static pictures of an action performed by an actor, the second aspect of syntactic bookkeeping could be demonstrated. This evidence is used for modelling bookkeeping processes as part of a computational sentence generator which aims at simulating the syntactic operations people carry out during spontaneous speech.
