Publications

  • Gullberg, M., & Indefrey, P. (Eds.). (2006). The cognitive neuroscience of second language acquisition. Michigan: Blackwell.

    Abstract

    The papers in this volume explore the cognitive neuroscience of second language acquisition from the perspectives of critical/sensitive periods, maturational effects, individual differences, neural regions involved, and processing characteristics. The research methodologies used include functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and event related potentials (ERP). Questions addressed include: Which brain areas are reliably activated in second language processing? Are they the same or different from those activated in first language acquisition and use? What are the behavioral consequences of individual differences among brains? What are the consequences of anatomical and physiological differences, learner proficiency effects, critical/sensitive periods? What role does degeneracy, in which two different neural systems can produce the same behavioral output, play? What does it mean that learners' brains respond to linguistic distinctions that cannot be recognized or produced yet? The studies in this volume provide initial answers to all of these questions.
  • Gullberg, M., & Holmqvist, K. (2006). What speakers do and what addressees look at: Visual attention to gestures in human interaction live and on video. Pragmatics & Cognition, 14(1), 53-82.

    Abstract

    This study investigates whether addressees visually attend to speakers’ gestures in interaction and whether attention is modulated by changes in social setting and display size. We compare a live face-to-face setting to two video conditions. In all conditions, the face dominates as a fixation target and only a minority of gestures draw fixations. The social and size parameters affect gaze mainly when combined and in the opposite direction from that predicted, with fewer gestures fixated on video than live. Gestural holds and speakers’ gaze at their own gestures reliably attract addressees’ fixations in all conditions. The attraction force of holds is unaffected by changes in social and size parameters, suggesting a bottom-up response, whereas speaker-fixated gestures draw significantly less attention in both video conditions, suggesting a social effect for overt gaze-following and visual joint attention. The study provides and validates a video-based paradigm enabling further experimental but ecologically valid explorations of cross-modal information processing.
  • Gullberg, M. (2006). Handling discourse: Gestures, reference tracking, and communication strategies in early L2. Language Learning, 56(1), 155-196. doi:10.1111/j.0023-8333.2006.00344.x.

    Abstract

    The production of cohesive discourse, especially maintained reference, poses problems for early second language (L2) speakers. This paper considers a communicative account of overexplicit L2 discourse by focusing on the interdependence between spoken and gestural cohesion, the latter being expressed by anchoring of referents in gesture space. Specifically, this study investigates whether overexplicit maintained reference in speech (lexical noun phrases [NPs]) and gesture (anaphoric gestures) constitutes an interactional communication strategy. We examine L2 speech and gestures of 16 Dutch learners of French retelling stories to addressees under two visibility conditions. The results indicate that the overexplicit properties of L2 speech are not motivated by interactional strategic concerns. The results for anaphoric gestures are more complex. Although their presence is not interactionally
  • Gussenhoven, C., & Chen, A. (2000). Universal and language-specific effects in the perception of question intonation. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP) (pp. 91-94). Beijing: China Military Friendship Publish.

    Abstract

    Three groups of monolingual listeners, with Standard Chinese, Dutch and Hungarian as their native language, judged pairs of trisyllabic stimuli which differed only in their pitch pattern. The segmental structure of the stimuli was made up by the experimenters and presented to subjects as being taken from a little-known language spoken on a South Pacific island. Pitch patterns consisted of a single rise-fall located on or near the second syllable. By and large, listeners selected the stimulus with the higher peak, the later peak, and the higher end rise as the one that signalled a question, regardless of language group. The result is argued to reflect innate, non-linguistic knowledge of the meaning of pitch variation, notably Ohala’s Frequency Code. A significant difference between groups is explained as due to the influence of the mother tongue.
  • Haan, E. H. F., Seijdel, N., Kentridge, R. W., & Heywood, C. A. (2020). Plasticity versus chronicity: Stable performance on category fluency 40 years post‐onset. Journal of Neuropsychology, 14(1), 20-27. doi:10.1111/jnp.12180.

    Abstract

    What is the long‐term trajectory of semantic memory deficits in patients who have suffered structural brain damage? Memory is, by definition, a changing faculty. The traditional view is that after an initial recovery period, the mature human brain has little capacity to repair or reorganize. More recently, it has been suggested that the central nervous system may be more plastic with the ability to change in neural structure, connectivity, and function. The latter observations are, however, largely based on normal learning in healthy subjects. Here, we report a patient who suffered bilateral ventro‐medial damage after presumed herpes encephalitis in 1971. He was seen regularly in the eighties, and we recently had the opportunity to re‐assess his semantic memory deficits. On semantic category fluency, he showed a very clear category‐specific deficit, performing better than control data on non‐living categories and significantly worse on living items. Recent testing showed that his impairments have remained unchanged for more than 40 years. We suggest caution when extrapolating the concept of brain plasticity, as observed during normal learning, to plasticity in the context of structural brain damage.
  • Hagoort, P. (2006). What we cannot learn from neuroanatomy about language learning and language processing [Commentary on Uylings]. Language Learning, 56(suppl. 1), 91-97. doi:10.1111/j.1467-9922.2006.00356.x.
  • Hagoort, P. (2000). De toekomstige eeuw der cognitieve neurowetenschap [inaugural lecture]. Katholieke Universiteit Nijmegen.

    Abstract

    Rede uitgesproken op 12 mei 2000 bij de aanvaarding van het ambt van hoogleraar in de neuropsychologie aan de Faculteit Sociale Wetenschappen KUN.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments sentences were presented that contained three different types of grammatical violations. In one experiment sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS), 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal.
  • Hagoort, P. (2006). Event-related potentials from the user's perspective [Review of the book An introduction to the event-related potential technique by Steven J. Luck]. Nature Neuroscience, 9(4), 463-463. doi:10.1038/nn0406-463.
  • Hagoort, P. (2014). Nodes and networks in the neural architecture for language: Broca's region and beyond. Current Opinion in Neurobiology, 28, 136-141. doi:10.1016/j.conb.2014.07.013.

    Abstract

    Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke–Lichtheim–Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.
  • Hagoort, P., & Indefrey, P. (2014). The neurobiology of language beyond single words. Annual Review of Neuroscience, 37, 347-362. doi:10.1146/annurev-neuro-071013-013847.

    Abstract

    A hallmark of human language is that we combine lexical building blocks retrieved from memory in endless new ways. This combinatorial aspect of language is referred to as unification. Here we focus on the neurobiological infrastructure for syntactic and semantic unification. Unification is characterized by a high-speed temporal profile including both prediction and integration of retrieved lexical elements. A meta-analysis of numerous neuroimaging studies reveals a clear dorsal/ventral gradient in both left inferior frontal cortex and left posterior temporal cortex, with dorsal foci for syntactic processing and ventral foci for semantic processing. In addition to core areas for unification, further networks need to be recruited to realize language-driven communication to its full extent. One example is the theory of mind network, which allows listeners and readers to infer the intended message (speaker meaning) from the coded meaning of the linguistic utterance. This indicates that sensorimotor simulation cannot handle all of language processing.
  • Hagoort, P. (2000). What we shall know only tomorrow. Brain and Language, 71, 89-92. doi:10.1006/brln.1999.2221.
  • Hahn, L. E., Ten Buuren, M., Snijders, T. M., & Fikkert, P. (2020). Learning words in a second language while cycling and listening to children’s songs: The Noplica Energy Center. International Journal of Music in Early Childhood, 15(1), 95-108. doi:10.1386/ijmec_00014_1.

    Abstract

    Children’s songs are a great source for linguistic learning. Here we explore whether children can acquire novel words in a second language by playing a game featuring children’s songs in a playhouse. The playhouse is designed by the Noplica foundation (www.noplica.nl) to advance language learning through unsupervised play. We present data from three experiments that serve to scientifically prove the functionality of one game of the playhouse: the Energy Center. For this game, children move three hand-bikes mounted on a panel within the playhouse. Once the children cycle, a song starts playing that is accompanied by musical instruments. In our experiments, children executed a picture-selection task to evaluate whether they acquired new vocabulary from the songs presented during the game. Two of our experiments were run in the field, one at a Dutch and one at an Indian pre-school. The third experiment features data from a more controlled laboratory setting. Our results partly confirm that the Energy Center is a successful means to support vocabulary acquisition in a second language. More research with larger sample sizes and longer access to the Energy Center is needed to evaluate the overall functionality of the game. Based on informal observations at our test sites, however, we are certain that children do pick up linguistic content from the songs during play, as many of the children repeat words and phrases from the songs they heard. We will follow up on these promising observations in future studies.
  • Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2020). Six-month-old infants recognize phrases in song and speech. Infancy, 25(5), 699-718. doi:10.1111/infa.12357.

    Abstract

    Infants exploit acoustic boundaries to perceptually organize phrases in speech. This prosodic parsing ability is well‐attested and is a cornerstone to the development of speech perception and grammar. However, infants also receive linguistic input in child songs. This study provides evidence that infants parse songs into meaningful phrasal units and replicates previous research for speech. Six‐month‐old Dutch infants (n = 80) were tested in the song or speech modality in the head‐turn preference procedure. First, infants were familiarized to two versions of the same word sequence: One version represented a well‐formed unit, and the other contained a phrase boundary halfway through. At test, infants were presented two passages, each containing one version of the familiarized sequence. The results for speech replicated the previously observed preference for the passage containing the well‐formed sequence, but only in a more fine‐grained analysis. The preference for well‐formed phrases was also observed in the song modality, indicating that infants recognize phrase structure in song. There were acoustic differences between stimuli of the current and previous studies, suggesting that infants are flexible in their processing of boundary cues while also providing a possible explanation for differences in effect sizes.

    Additional information

    infa12357-sup-0001-supinfo.zip
  • Hald, L. A., Bastiaansen, M. C. M., & Hagoort, P. (2006). EEG theta and gamma responses to semantic violations in online sentence processing. Brain and Language, 96(1), 90-105. doi:10.1016/j.bandl.2005.06.007.

    Abstract

    We explore the nature of the oscillatory dynamics in the EEG of subjects reading sentences that contain a semantic violation. More specifically, we examine whether increases in theta (≈3–7 Hz) and gamma (around 40 Hz) band power occur in response to sentences that were either semantically correct or contained a semantically incongruent word (semantic violation). ERP results indicated a classical N400 effect. A wavelet-based time-frequency analysis revealed a theta band power increase during an interval of 300–800 ms after critical word onset, at temporal electrodes bilaterally for both sentence conditions, and over midfrontal areas for the semantic violations only. In the gamma frequency band, a predominantly frontal power increase was observed during the processing of correct sentences. This effect was absent following semantic violations. These results provide a characterization of the oscillatory brain dynamics, and notably of both theta and gamma oscillations, that occur during language comprehension.
  • Hammarstroem, H., & Güldemann, T. (2014). Quantifying geographical determinants of large-scale distributions of linguistic features. Language Dynamics and Change, 4, 87-115. doi:10.1163/22105832-00401002.

    Abstract

    In the recent past the work on large-scale linguistic distributions across the globe has intensified considerably. Work on macro-areal relationships in Africa (Güldemann, 2010) suggests that the shape of convergence areas may be determined by climatic factors and geophysical features such as mountains, water bodies, coastlines, etc. Worldwide data is now available for geophysical features as well as linguistic features, including numeral systems and basic constituent order. We explore the possibility that the shape of areal aggregations of individual features in these two linguistic domains correlates with Köppen-Geiger climate zones. Furthermore, we test the hypothesis that the shape of such areal feature aggregations is determined by the contour of adjacent geophysical features like mountain ranges or coastlines. In these first basic tests, we do not find clear evidence that either Köppen-Geiger climate zones or the contours of geophysical features are major predictors for the linguistic data at hand.

  • Hammarstroem, H., & Donohue, M. (2014). Some principles on the use of macro-areas in typological comparison. Language Dynamics and Change, 4, 167-187. doi:10.1163/22105832-00401001.

    Abstract

    While the notion of the ‘area’ or ‘Sprachbund’ has a long history in linguistics, with geographically-defined regions frequently cited as a useful means to explain typological distributions, the problem of delimiting areas has not been well addressed. Lists of general-purpose, largely independent ‘macro-areas’ (typically continent size) have been proposed as a step to rule out contact as an explanation for various large-scale linguistic phenomena. This squib points out some problems in some of the currently widely-used predetermined areas, those found in the World Atlas of Language Structures (Haspelmath et al., 2005). Instead, we propose a principled division of the world’s landmasses into six macro-areas that arguably have better geographical independence properties.
  • Hammarström, H. (2014). [Review of the book A grammar of the great Andamanese language: An ethnolinguistic study by Anvita Abbi]. Journal of South Asian Languages and Linguistics, 1, 111-116. doi:10.1515/jsall-2014-0007.
  • Harbusch, K., & Kempen, G. (2000). Complexity of linear order computation in Performance Grammar, TAG and HPSG. In Proceedings of Fifth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+5) (pp. 101-106).

    Abstract

    This paper investigates the time and space complexity of word order computation in the psycholinguistically motivated grammar formalism of Performance Grammar (PG). In PG, the first stage of syntax assembly yields an unordered tree ('mobile') consisting of a hierarchy of lexical frames (lexically anchored elementary trees). Associated with each lexical frame is a linearizer—a Finite-State Automaton that locally computes the left-to-right order of the branches of the frame. Linearization takes place after the promotion component may have raised certain constituents (e.g. Wh- or focused phrases) into the domain of lexical frames higher up in the syntactic mobile. We show that the worst-case time and space complexity of analyzing input strings of length n is O(n⁵) and O(n⁴), respectively. This result compares favorably with the time complexity of word-order computations in Tree Adjoining Grammar (TAG). A comparison with Head-Driven Phrase Structure Grammar (HPSG) reveals that PG yields a more declarative linearization method, provided that the FSA is rewritten as an equivalent regular expression.
  • Harbusch, K., & Kempen, G. (2006). ELLEIPO: A module that computes coordinative ellipsis for language generators that don't. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2006) (pp. 115-118).

    Abstract

    Many current sentence generators lack the ability to compute elliptical versions of coordinated clauses in accordance with the rules for Gapping, Forward and Backward Conjunction Reduction, and SGF (Subject Gap in clauses with Finite/Fronted verb). We describe a module (implemented in JAVA, with German and Dutch as target languages) that takes non-elliptical coordinated clauses as input and returns all reduced versions licensed by coordinative ellipsis. It is loosely based on a new psycholinguistic theory of coordinative ellipsis proposed by Kempen. In this theory, coordinative ellipsis is not supposed to result from the application of declarative grammar rules for clause formation but from a procedural component that interacts with the sentence generator and may block the overt expression of certain constituents.
  • Harbusch, K., Kempen, G., Van Breugel, C., & Koch, U. (2006). A generation-oriented workbench for performance grammar: Capturing linear order variability in German and Dutch. In Proceedings of the 4th International Natural Language Generation Conference (pp. 9-11).

    Abstract

    We describe a generation-oriented workbench for the Performance Grammar (PG) formalism, highlighting the treatment of certain word order and movement constraints in Dutch and German. PG enables a simple and uniform treatment of a heterogeneous collection of linear order phenomena in the domain of verb constructions (variably known as Cross-serial Dependencies, Verb Raising, Clause Union, Extraposition, Third Construction, Particle Hopping, etc.). The central data structures enabling this feature are clausal “topologies”: one-dimensional arrays associated with clauses, whose cells (“slots”) provide landing sites for the constituents of the clause. Movement operations are enabled by unification of lateral slots of topologies at adjacent levels of the clause hierarchy. The PGW generator assists the grammar developer in testing whether the implemented syntactic knowledge allows all and only the well-formed permutations of constituents.
  • Harmon, Z., & Kapatsinski, V. (2020). The best-laid plan of mice and men: Competition between top-down and preceding-item cues in plan execution. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 1674-1680). Montreal, QB: Cognitive Science Society.

    Abstract

    There is evidence that the process of executing a planned utterance involves the use of both preceding-context and top-down cues. Utterance-initial words are cued only by the top-down plan. In contrast, non-initial words are cued both by top-down cues and preceding-context cues. Co-existence of both cue types raises the question of how they interact during learning. We argue that this interaction is competitive: items that tend to be preceded by predictive preceding-context cues are harder to activate from the plan without this predictive context. A novel computational model of this competition is developed. The model is tested on a corpus of repetition disfluencies and shown to account for the influences on patterns of restarts during production. In particular, this model predicts a novel Initiation Effect: following an interruption, speakers re-initiate production from words that tend to occur in utterance-initial position, even when they are not initial in the interrupted utterance.
  • Hashemzadeh, M., Kaufeld, G., White, M., Martin, A. E., & Fyshe, A. (2020). From language to language-ish: How brain-like is an LSTM representation of nonsensical language stimuli? In T. Cohn, Y. He, & Y. Liu (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 645-655). Association for Computational Linguistics.

    Abstract

    The representations generated by many models of language (word embeddings, recurrent neural networks and transformers) correlate to brain activity recorded while people read. However, these decoding results are usually based on the brain’s reaction to syntactically and semantically sound language stimuli. In this study, we asked: how does an LSTM (long short term memory) language model, trained (by and large) on semantically and syntactically intact language, represent a language sample with degraded semantic or syntactic information? Does the LSTM representation still resemble the brain’s reaction? We found that, even for some kinds of nonsensical language, there is a statistically significant relationship between the brain’s activity and the representations of an LSTM. This indicates that, at least in some instances, LSTMs and the human brain handle nonsensical data similarly.
  • Haun, D. B. M., Rekers, Y., & Tomasello, M. (2014). Children conform to the behavior of peers; Other great apes stick with what they know. Psychological Science, 25, 2160-2167. doi:10.1177/0956797614553235.

    Abstract

    All primates learn things from conspecifics socially, but it is not clear whether they conform to the behavior of these conspecifics—if conformity is defined as overriding individually acquired behavioral tendencies in order to copy peers’ behavior. In the current study, chimpanzees, orangutans, and 2-year-old human children individually acquired a problem-solving strategy. They then watched several conspecific peers demonstrate an alternative strategy. The children switched to this new, socially demonstrated strategy in roughly half of all instances, whereas the other two great-ape species almost never adjusted their behavior to the majority’s. In a follow-up study, children switched much more when the peer demonstrators were still present than when they were absent, which suggests that their conformity arose at least in part from social motivations. These results demonstrate an important difference between the social learning of humans and great apes, a difference that might help to account for differences in human and nonhuman cultures.

    Additional information

    Haun_Rekers_Tomasello_2014_supp.pdf
  • Haun, D. B. M., Call, J., Janzen, G., & Levinson, S. C. (2006). Evolutionary psychology of spatial representations in the hominidae. Current Biology, 16(17), 1736-1740. doi:10.1016/j.cub.2006.07.049.

    Abstract

    Comparatively little is known about the inherited primate background underlying human cognition, the human cognitive “wild-type.” Yet it is possible to trace the evolution of human cognitive abilities and tendencies by contrasting the skills of our nearest cousins, not just chimpanzees, but all the extant great apes, thus showing what we are likely to have inherited from the common ancestor [1]. By looking at human infants early in cognitive development, we can also obtain insights into native cognitive biases in our species [2]. Here, we focus on spatial memory, a central cognitive domain. We show, first, that all nonhuman great apes and 1-year-old human infants exhibit a preference for place over feature strategies for spatial memory. This suggests the common ancestor of all great apes had the same preference. We then examine 3-year-old human children and find that this preference reverses. Thus, the continuity between our species and the other great apes is masked early in human ontogeny. These findings, based on both phylogenetic and ontogenetic contrasts, open up the prospect of a systematic evolutionary psychology resting upon the cladistics of cognitive preferences.
  • Haun, D. B. M., Rapold, C. J., Call, J., Janzen, G., & Levinson, S. C. (2006). Cognitive cladistics and cultural override in Hominid spatial cognition. Proceedings of the National Academy of Sciences of the United States of America, 103(46), 17568-17573. doi:10.1073/pnas.0607999103.

    Abstract

    Current approaches to human cognition often take a strong nativist stance based on Western adult performance, backed up where possible by neonate and infant research and almost never by comparative research across the Hominidae. Recent research suggests considerable cross-cultural differences in cognitive strategies, including relational thinking, a domain where infant research is impossible because of lack of cognitive maturation. Here, we apply the same paradigm across children and adults of different cultures and across all nonhuman great ape genera. We find that both child and adult spatial cognition systematically varies with language and culture but that, nevertheless, there is a clear inherited bias for one spatial strategy in the great apes. It is reasonable to conclude, we argue, that language and culture mask the native tendencies in our species. This cladistic approach suggests that the correct perspective on human cognition is neither nativist uniformitarian nor ‘‘blank slate’’ but recognizes the powerful impact that language and culture can have on our shared primate cognitive biases.
  • Havron, N., Bergmann, C., & Tsuji, S. (2020). Preregistration in infant research - A primer. Infancy, 25(5), 734-754. doi:10.1111/infa.12353.

    Abstract

    Preregistration, the act of specifying a research plan in advance, is becoming more common in scientific research. Infant researchers contend with unique problems that might make preregistration particularly challenging. Infants are a hard‐to‐reach population, usually yielding small sample sizes, they can only complete a limited number of trials, and they can be excluded based on hard‐to‐predict complications (e.g., parental interference, fussiness). In addition, as effects themselves potentially change with age and population, it is hard to calculate an a priori effect size. At the same time, these very factors make preregistration in infant studies a valuable tool. A priori examination of the planned study, including the hypotheses, sample size, and resulting statistical power, increases the credibility of single studies and adds value to the field. Preregistration might also improve explicit decision making to create better studies. We present an in‐depth discussion of the issues uniquely relevant to infant researchers, and ways to contend with them in preregistration and study planning. We provide recommendations to researchers interested in following current best practices.

    Additional information

    Preprint version on OSF
  • De Heer Kloots, M., Carlson, D., Garcia, M., Kotz, S., Lowry, A., Poli-Nardi, L., de Reus, K., Rubio-García, A., Sroka, M., Varola, M., & Ravignani, A. (2020). Rhythmic perception, production and interactivity in harbour and grey seals. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 59-62). Nijmegen: The Evolution of Language Conferences.
  • Heidlmayr, K., Kihlstedt, M., & Isel, F. (2020). A review on the electroencephalography markers of Stroop executive control processes. Brain and Cognition, 146: 105637. doi:10.1016/j.bandc.2020.105637.

    Abstract

    The present article on executive control addresses the issue of the locus of the Stroop effect by examining neurophysiological components marking conflict monitoring, interference suppression, and conflict resolution. Our goal was to provide an overview of a series of determining neurophysiological findings including neural source reconstruction data on distinct executive control processes and sub-processes involved in the Stroop task. Consistently, a fronto-central N2 component is found to reflect conflict monitoring processes, with its main neural generator being the anterior cingulate cortex (ACC). Then, for cognitive control tasks that involve a linguistic component like the Stroop task, the N2 is followed by a centro-posterior N400 and subsequently a late sustained potential (LSP). The N400 is mainly generated by the ACC and the prefrontal cortex (PFC) and is thought to reflect interference suppression, whereas the LSP plausibly reflects conflict resolution processes. The present overview shows that ERPs constitute a reliable methodological tool for tracing with precision the time course of different executive processes and sub-processes involved in experimental tasks involving a cognitive conflict. Future research should shed light on the fine-grained mechanisms of control involved in linguistic and non-linguistic tasks, respectively.
  • Heidlmayr, K., Weber, K., Takashima, A., & Hagoort, P. (2020). No title, no theme: The joined neural space between speakers and listeners during production and comprehension of multi-sentence discourse. Cortex, 130, 111-126. doi:10.1016/j.cortex.2020.04.035.

    Abstract

    Speakers and listeners usually interact in larger discourses than single words or even single sentences. The goal of the present study was to identify the neural bases reflecting how the mental representation of the situation denoted in a multi-sentence discourse (situation model) is constructed and shared between speakers and listeners. An fMRI study using a variant of the ambiguous text paradigm was designed. Speakers (n=15) produced ambiguous texts in the scanner and listeners (n=27) subsequently listened to these texts in different states of ambiguity: preceded by a highly informative, intermediately informative or no title at all. Conventional BOLD activation analyses in listeners, as well as inter-subject correlation analyses between the speakers’ and the listeners’ hemodynamic time courses were performed. Critically, only the processing of disambiguated, coherent discourse with an intelligible situation model representation involved (shared) activation in bilateral lateral parietal and medial prefrontal regions. This shared spatiotemporal pattern of brain activation between the speaker and the listener suggests that the process of memory retrieval in medial prefrontal regions and the binding of retrieved information in the lateral parietal cortex constitutes a core mechanism underlying the communication of complex conceptual representations.

    Additional information

    supplementary data
  • Heilbron, M., Richter, D., Ekman, M., Hagoort, P., & De Lange, F. P. (2020). Word contexts enhance the neural representation of individual letters in early visual cortex. Nature Communications, 11: 321. doi:10.1038/s41467-019-13996-4.

    Abstract

    Visual context facilitates perception, but how this is neurally implemented remains unclear. One example of contextual facilitation is found in reading, where letters are more easily identified when embedded in a word. Bottom-up models explain this word advantage as a post-perceptual decision bias, while top-down models propose that word contexts enhance perception itself. Here, we arbitrate between these accounts by presenting words and nonwords and probing the representational fidelity of individual letters using functional magnetic resonance imaging. In line with top-down models, we find that word contexts enhance letter representations in early visual cortex. Moreover, we observe increased coupling between letter information in visual cortex and brain activity in key areas of the reading network, suggesting these areas may be the source of the enhancement. Our results provide evidence for top-down representational enhancement in word recognition, demonstrating that word contexts can modulate perceptual processing already at the earliest visual regions.

    Additional information

    Supplementary information
  • Heinemann, T. (2006). Will you or can't you? Displaying entitlement in interrogative requests. Journal of Pragmatics, 38(7), 1081-1104. doi:10.1016/j.pragma.2005.09.013.

    Abstract

    Interrogative structures such as ‘Could you pass the salt?’ and ‘Couldn’t you pass the salt?’ can be used for making requests. A study of such pairs within a conversation analytic framework suggests that these are not used interchangeably, and that they have different impacts on the interaction. Focusing on Danish interactions between elderly care recipients and their home help assistants, I demonstrate how the care recipient displays different degrees of stance towards whether she is entitled to make a request or not, depending on whether she formats her request as a positive or a negative interrogative. With a positive interrogative request, the care recipient orients to her request as one she is not entitled to make. This is underscored by other features, such as the use of mitigating devices and the choice of verb. When accounting for this type of request, the care recipient ties the request to the specific situation she is in, at the moment in which the request is produced. In turn, the home help assistant orients to the lack of entitlement by resisting the request. With a negative interrogative request, the care recipient, in contrast, orients to her request as one she is entitled to make. This is strengthened by the choice of verb and the lack of mitigating devices. When such requests are accounted for, the requested task is treated as something that should be routinely performed, and hence as something the home help assistant has neglected to do. In turn, the home help assistant orients to the display of entitlement by treating the request as unproblematic, and by complying with it immediately.
  • Heinrich, T., Ravignani, A., & Hanke, F. H. (2020). Visual timing abilities of a harbour seal (Phoca vitulina) and a South African fur seal (Arctocephalus pusillus pusillus) for sub- and supra-second time intervals. Animal Cognition, 23(5), 851-859. doi:10.1007/s10071-020-01390-3.

    Abstract

    Timing is an essential parameter influencing many behaviours. A previous study demonstrated a high sensitivity of a phocid, the harbour seal (Phoca vitulina), in discriminating time intervals. In the present study, we compared the harbour seal’s timing abilities with the timing abilities of an otariid, the South African fur seal (Arctocephalus pusillus pusillus). This comparison seemed essential as phocids and otariids differ in many respects and might, thus, also differ regarding their timing abilities. We determined time difference thresholds for sub- and suprasecond time intervals marked by a white circle on a black background displayed for a specific time interval on a monitor using a staircase method. Contrary to our expectation, the timing abilities of the fur seal and the harbour seal were comparable. Over a broad range of time intervals, 0.8–7 s in the fur seal and 0.8–30 s in the harbour seal, the difference thresholds followed Weber’s law. In this range, both animals could discriminate time intervals differing only by 12% and 14% on average. Timing might thus be a fundamental cue for pinnipeds in general to be used in various contexts, thereby complementing information provided by classical sensory systems. Future studies will help to clarify if timing is indeed involved in foraging decisions or the estimation of travel speed or distance.

    Additional information

    supplementary material
  • Henson, R. N., Suri, S., Knights, E., Rowe, J. B., Kievit, R. A., Lyall, D. M., Chan, D., Eising, E., & Fisher, S. E. (2020). Effect of apolipoprotein E polymorphism on cognition and brain in the Cambridge Centre for Ageing and Neuroscience cohort. Brain and Neuroscience Advances, 4: 2398212820961704. doi:10.1177/2398212820961704.

    Abstract

    Polymorphisms in the apolipoprotein E (APOE) gene have been associated with individual differences in cognition, brain structure and brain function. For example, the ε4 allele has been associated with cognitive and brain impairment in old age and increased risk of dementia, while the ε2 allele has been claimed to be neuroprotective. According to the ‘antagonistic pleiotropy’ hypothesis, these polymorphisms have different effects across the lifespan, with ε4, for example, postulated to confer benefits on cognitive and brain functions earlier in life. In this stage 2 of the Registered Report – https://osf.io/bufc4, we report the results from the cognitive and brain measures in the Cambridge Centre for Ageing and Neuroscience cohort (www.cam-can.org). We investigated the antagonistic pleiotropy hypothesis by testing for allele-by-age interactions in approximately 600 people across the adult lifespan (18–88 years), on six outcome variables related to cognition, brain structure and brain function (namely, fluid intelligence, verbal memory, hippocampal grey-matter volume, mean diffusion within white matter and resting-state connectivity measured by both functional magnetic resonance imaging and magnetoencephalography). We found no evidence to support the antagonistic pleiotropy hypothesis. Indeed, Bayes factors supported the null hypothesis in all cases, except for the (linear) interaction between age and possession of the ε4 allele on fluid intelligence, for which the evidence for faster decline in older ages was ambiguous. Overall, these pre-registered analyses question the antagonistic pleiotropy of APOE polymorphisms, at least in healthy adults.

    Additional information

    supplementary material
  • Herbst, L. E. (2006). The influence of language dominance on bilingual VOT: A case study. In Proceedings of the 4th University of Cambridge Postgraduate Conference on Language Research (CamLing 2006) (pp. 91-98). Cambridge: Cambridge University Press.

    Abstract

    Longitudinally collected VOT data from an early English-Italian bilingual who became increasingly English-dominant were analyzed. Stops in English were always produced with significantly longer VOT than in Italian. However, the speaker did not show any significant change in VOT production in either language over time, despite the clear dominance of English in his everyday language use later in his life. The results indicate that – unlike L2 learners – early bilinguals may remain unaffected by language use with respect to phonetic realization.
  • Hersh, T., King, B., & Lutton, B. V. (2014). Novel bioinformatics tools for analysis of gene expression in the skate, Leucoraja erinacea. The Bulletin, MDI Biological Laboratory, 53, 16-18.
  • Hervais-Adelman, A., Pefkou, M., & Golestani, N. (2014). Bilingual speech-in-noise: Neural bases of semantic context use in the native language. Brain and Language, 132, 1-6. doi:10.1016/j.bandl.2014.01.009.

    Abstract

    Bilingual listeners comprehend speech-in-noise better in their native than non-native language. This native-language benefit is thought to arise from greater use of top-down linguistic information to assist degraded speech comprehension. Using functional magnetic resonance imaging, we recently showed that left angular gyrus activation is modulated when semantic context is used to assist native language speech-in-noise comprehension (Golestani, Hervais-Adelman, Obleser, & Scott, 2013). Here, we extend the previous work, by reanalyzing the previous data alongside the results obtained in the non-native language of the same late bilingual participants. We found a behavioral benefit of semantic context in processing speech-in-noise in the native language only, and the imaging results also revealed a native language context effect in the left angular gyrus. We also find a complementary role of lower-level auditory regions during stimulus-driven processing. Our findings help to elucidate the neural basis of the established native language behavioral benefit of speech-in-noise processing.
  • Hessels, R. S., Hooge, I., Snijders, T. M., & Kemner, C. (2014). Is there a limit to the superiority of individuals with ASD in visual search? Journal of Autism and Developmental Disorders, 44, 443-451. doi:10.1007/s10803-013-1886-8.

    Abstract

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an explanation, formulated in terms of load theory. We suggest that there is a limit to the superiority in visual search for individuals with ASD, related to the perceptual load of the stimuli. When perceptual load becomes so high that no additional task-(ir)relevant information can be processed, performance will be based on single stimulus identification, in which no differences between individuals with ASD and controls have been demonstrated.
  • Hestvik, A., Shinohara, Y., Durvasula, K., Verdonschot, R. G., & Sakai, H. (2020). Abstractness of human speech sound representations. Brain Research, 1732: 146664. doi:10.1016/j.brainres.2020.146664.

    Abstract

    We argue, based on a study of brain responses to speech sound differences in Japanese, that memory encoding of functional speech sounds (phonemes) is highly abstract. As an example, we provide evidence for a theory where the consonants /p t k b d g/ are not only made up of symbolic features but are underspecified with respect to voicing or laryngeal features, and that languages differ with respect to which feature value is underspecified. In a previous study we showed that voiced stops are underspecified in English [Hestvik, A., & Durvasula, K. (2016). Neurobiological evidence for voicing underspecification in English. Brain and Language], as shown by asymmetries in Mismatch Negativity responses to /t/ and /d/. In the current study, we test the prediction that the opposite asymmetry should be observed in Japanese, if voiceless stops are underspecified in that language. Our results confirm this prediction. This matches a linguistic architecture where phonemes are highly abstract and do not encode actual physical characteristics of the corresponding speech sounds, but rather different subsets of abstract distinctive features.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2014). In dialogue with an avatar, syntax production is identical compared to dialogue with a human partner. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2351-2356). Austin, Tx: Cognitive Science Society.

    Abstract

    The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioural research due to its seemingly limitless possibilities. This new method has not been used frequently in the field of psycholinguistics, however, possibly due to the assumption that human-computer interaction does not accurately reflect human-human interaction. In the current study we compare participants’ language behaviour in a syntactic priming task with human versus avatar partners. Our study shows comparable priming effects between human and avatar partners (Human: 12.3%; Avatar: 12.6% for passive sentences), suggesting that VR is a valid platform for conducting language research and studying dialogue interactions.
  • Hildebrand, M. S., Jackson, V. E., Scerri, T. S., Van Reyk, O., Coleman, M., Braden, R., Turner, S., Rigbye, K. A., Boys, A., Barton, S., Webster, R., Fahey, M., Saunders, K., Parry-Fielder, B., Paxton, G., Hayman, M., Coman, D., Goel, H., Baxter, A., Ma, A., Davis, N., Reilly, S., Delatycki, M., Liégeois, F. J., Connelly, A., Gecz, J., Fisher, S. E., Amor, D. J., Scheffer, I. E., Bahlo, M., & Morgan, A. T. (2020). Severe childhood speech disorder: Gene discovery highlights transcriptional dysregulation. Neurology, 94(20), e2148-e2167. doi:10.1212/WNL.0000000000009441.

    Abstract

    Objective
    Determining the genetic basis of speech disorders provides insight into the neurobiology of human communication. Despite intensive investigation over the past 2 decades, the etiology of most speech disorders in children remains unexplained. To test the hypothesis that speech disorders have a genetic etiology, we performed genetic analysis of children with severe speech disorder, specifically childhood apraxia of speech (CAS).
    Methods
    Precise phenotyping together with research genome or exome analysis was performed on children referred with a primary diagnosis of CAS. Gene coexpression and gene set enrichment analyses were conducted on high-confidence gene candidates.
    Results
    Thirty-four probands ascertained for CAS were studied. In 11/34 (32%) probands, we identified highly plausible pathogenic single nucleotide (n = 10; CDK13, EBF3, GNAO1, GNB1, DDX3X, MEIS2, POGZ, SETBP1, UPF2, ZNF142) or copy number (n = 1; 5q14.3q21.1 locus) variants in novel genes or loci for CAS. Testing of parental DNA was available for 9 probands and confirmed that the variants had arisen de novo. Eight genes encode proteins critical for regulation of gene transcription, and analyses of transcriptomic data found CAS-implicated genes were highly coexpressed in the developing human brain.
    Conclusion
    We identify the likely genetic etiology in 11 patients with CAS and implicate 9 genes for the first time. We find that CAS is often a sporadic monogenic disorder, and highly genetically heterogeneous. Highly penetrant variants implicate shared pathways in broad transcriptional regulation, highlighting the key role of transcriptional regulation in normal speech development. CAS is a distinctive, socially debilitating clinical disorder, and understanding its molecular basis is the first step towards identifying precision medicine approaches.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2020). Visual context constrains language-mediated anticipatory eye movements. Quarterly Journal of Experimental Psychology, 73(3), 458-467. doi:10.1177/1747021819881615.

    Abstract

    Contemporary accounts of anticipatory language processing assume that individuals predict upcoming information at multiple levels of representation. Research investigating language-mediated anticipatory eye gaze typically assumes that linguistic input restricts the domain of subsequent reference (visual target objects). Here, we explored the converse case: Can visual input restrict the dynamics of anticipatory language processing? To this end, we recorded participants’ eye movements as they listened to sentences in which an object was predictable based on the verb’s selectional restrictions (“The man peels a banana”). While listening, participants looked at different types of displays: The target object (banana) was either present or it was absent. On target-absent trials, the displays featured objects that had a similar visual shape as the target object (canoe) or objects that were semantically related to the concepts invoked by the target (monkey). Each trial was presented in a long preview version, where participants saw the displays for approximately 1.78 seconds before the verb was heard (pre-verb condition), and a short preview version, where participants saw the display approximately 1 second after the verb had been heard (post-verb condition), 750 ms prior to the spoken target onset. Participants anticipated the target objects in both conditions. Importantly, robust evidence for predictive looks to objects related to the (absent) target objects in visual shape and semantics was found in the post-verb but not in the pre-verb condition. These results suggest that visual information can restrict language-mediated anticipatory gaze and delineate theoretical accounts of predictive processing in the visual world.

    Additional information

    Supplemental Material
  • Hintz, F., Meyer, A. S., & Huettig, F. (2020). Activating words beyond the unfolding sentence: Contributions of event simulation and word associations to discourse reading. Neuropsychologia, 141: 107409. doi:10.1016/j.neuropsychologia.2020.107409.

    Abstract

    Previous studies have shown that during comprehension readers activate words beyond the unfolding sentence. An open question concerns the mechanisms underlying this behavior. One proposal is that readers mentally simulate the described event and activate related words that might be referred to as the discourse further unfolds. Another proposal is that activation between words spreads in an automatic, associative fashion. The empirical support for these proposals is mixed. Therefore, theoretical accounts differ with regard to how much weight they place on the contributions of these sources to sentence comprehension. In the present study, we attempted to assess the contributions of event simulation and lexical associations to discourse reading, using event-related brain potentials (ERPs). Participants read target words, which were preceded by associatively related words either appearing in a coherent discourse event (Experiment 1) or in sentences that did not form a coherent discourse event (Experiment 2). Contextually unexpected target words that were associatively related to the described events elicited a reduced N400 amplitude compared to contextually unexpected target words that were unrelated to the events (Experiment 1). In Experiment 2, a similar but reduced effect was observed. These findings support the notion that during discourse reading event simulation and simple word associations jointly contribute to language comprehension by activating words that are beyond contextually congruent sentence continuations.
  • Hintz*, F., Jongman*, S. R., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2020). Shared lexical access processes in speaking and listening? An individual differences study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(6), 1048-1063. doi:10.1037/xlm0000768.

    Abstract

    - * indicates joint first authorship - Lexical access is a core component of word processing. In order to produce or comprehend a word, language users must access word forms in their mental lexicon. However, despite its involvement in both tasks, previous research has often studied lexical access in either production or comprehension alone. Therefore, it is unknown to which extent lexical access processes are shared across both tasks. Picture naming and auditory lexical decision are considered good tools for studying lexical access. Both of them are speeded tasks. Given these commonalities, another open question concerns the involvement of general cognitive abilities (e.g., processing speed) in both linguistic tasks. In the present study, we addressed these questions. We tested a large group of young adults enrolled in academic and vocational courses. Participants completed picture naming and auditory lexical decision tasks as well as a battery of tests assessing non-verbal processing speed, vocabulary, and non-verbal intelligence. Our results suggest that the lexical access processes involved in picture naming and lexical decision are related but less closely than one might have thought. Moreover, reaction times in picture naming and lexical decision depended at least as much on general processing speed as on domain-specific linguistic processes (i.e., lexical access processes).
  • Hintz, F., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2020). A behavioural dataset for studying individual differences in language skills. Scientific Data, 7: 429. doi:10.1038/s41597-020-00758-x.

    Abstract

    This resource contains data from 112 Dutch adults (18–29 years of age) who completed the Individual Differences in Language Skills test battery that included 33 behavioural tests assessing language skills and domain-general cognitive skills likely involved in language tasks. The battery included tests measuring linguistic experience (e.g. vocabulary size, prescriptive grammar knowledge), general cognitive skills (e.g. working memory, non-verbal intelligence) and linguistic processing skills (word production/comprehension, sentence production/comprehension). Testing was done in a lab-based setting resulting in high quality data due to tight monitoring of the experimental protocol and to the use of software and hardware that were optimized for behavioural testing. Each participant completed the battery twice (i.e., two test days of four hours each). We provide the raw data from all tests on both days as well as pre-processed data that were used to calculate various reliability measures (including internal consistency and test-retest reliability). We encourage other researchers to use this resource for conducting exploratory and/or targeted analyses of individual differences in language and general cognitive skills.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). Embodied language comprehension: Encoding-based and goal-driven processes. Journal of Experimental Psychology: General, 143(2), 914-929. doi:10.1037/a0032348.

    Abstract

    Theories of embodied language comprehension have proposed that language is understood through perceptual simulation of the sensorimotor characteristics of its meaning. Strong support for this claim requires demonstration of encoding-based activation of sensorimotor representations that is distinct from task-related or goal-driven processes. Participants in 3 eye-tracking experiments were presented with triplets of either numbers or object and animal names. In Experiment 1, participants indicated whether the size of the referent of the middle object or animal name was in between the size of the 2 outer items. In Experiment 2, the object and animal names were encoded for an immediate recognition memory task. In Experiment 3, participants completed the same comparison task of Experiment 1 for both words and numbers. During the comparison tasks, word and number decision times showed a symbolic distance effect, such that response time was inversely related to the size difference between the items. A symbolic distance effect was also observed for animal and object encoding times in cases where encoding time likely reflected some goal-driven processes as well. When semantic size was irrelevant to the task (Experiment 2), it had no effect on word encoding times. Number encoding times showed a numerical distance priming effect: Encoding time increased with numerical difference between items. Together these results suggest that while activation of numerical magnitude representations is encoding-based as well as goal-driven, activation of size information associated with words is goal-driven and does not occur automatically during encoding. This conclusion challenges strong theories of embodied cognition which claim that language comprehension consists of activation of analog sensorimotor representations irrespective of higher level processes related to context or task-specific goals.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). It takes time to prime: Semantic priming in the ocular lexical decision task. Journal of Experimental Psychology: Human Perception and Performance, 40(6), 2179-2197. doi:10.1037/a0037677.

    Abstract

    Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDTs) was replaced with an eye-movement response through a sequence of 3 words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative lexical decision (LD) on each word in the triplet. In Experiment 2, LD responses were delayed until all 3 letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, but limited during text reading, as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of tau (τ), meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases in which the LD is difficult, as indicated by longer response times. Compared with the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT.
  • Hoeks, J. C. J., Hendriks, P., Vonk, W., Brown, C. M., & Hagoort, P. (2006). Processing the noun phrase versus sentence coordination ambiguity: Thematic information does not completely eliminate processing difficulty. Quarterly Journal of Experimental Psychology, 59, 1581-1599. doi:10.1080/17470210500268982.

    Abstract

    When faced with the noun phrase (NP) versus sentence (S) coordination ambiguity as in, for example, The thief shot the jeweller and the cop …, readers prefer the reading with NP-coordination (e.g., "The thief shot the jeweller and the cop yesterday") over one with two conjoined sentences (e.g., "The thief shot the jeweller and the cop panicked"). A corpus study is presented showing that NP-coordinations are produced far more often than S-coordinations, which in frequency-based accounts of parsing might be taken to explain the NP-coordination preference. In addition, we describe an eye-tracking experiment investigating S-coordinated sentences such as Jasper sanded the board and the carpenter laughed, where the poor thematic fit between carpenter and sanded argues against NP-coordination. Our results indicate that information regarding poor thematic fit was used rapidly, but not without leaving some residual processing difficulty. This is compatible with claims that thematic information can reduce but not completely eliminate garden-path effects.
  • Hoeksema, N., Villanueva, S., Mengede, J., Salazar-Casals, A., Rubio-García, A., Curcic-Blake, B., Vernes, S. C., & Ravignani, A. (2020). Neuroanatomy of the grey seal brain: Bringing pinnipeds into the neurobiological study of vocal learning. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 162-164). Nijmegen: The Evolution of Language Conferences.
  • Hoeksema, N., Wiesmann, M., Kiliaan, A., Hagoort, P., & Vernes, S. C. (2020). Bats and the comparative neurobiology of vocal learning. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 165-167). Nijmegen: The Evolution of Language Conferences.
  • Hoey, E. (2014). Sighing in interaction: Somatic, semiotic, and social. Research on Language and Social Interaction, 47(2), 175-200. doi:10.1080/08351813.2014.900229.

    Abstract

    Participants in interaction routinely orient to gaze, bodily comportment, and nonlexical vocalizations as salient for developing an analysis of the unfolding course of action. In this article, I address the respiratory phenomenon of sighing, the aim being to describe sighing as a situated practice that contributes to the achievement of particular actions in interaction. I report on the various actions sighs implement or construct and how their positioning and delivery informs participants’ understandings of their significance for interaction. Data are in American English.
  • Hofer, E., Roshchupkin, G. V., Adams, H. H. H., Knol, M. J., Lin, H., Li, S., Zare, H., Ahmad, S., Armstrong, N. J., Satizabal, C. L., Bernard, M., Bis, J. C., Gillespie, N. A., Luciano, M., Mishra, A., Scholz, M., Teumer, A., Xia, R., Jian, X., Mosley, T. H., Saba, Y., Pirpamer, L., Seiler, S., Becker, J. T., Carmichael, O., Rotter, J. I., Psaty, B. M., Lopez, O. L., Amin, N., Van der Lee, S. J., Yang, Q., Himali, J. J., Maillard, P., Beiser, A. S., DeCarli, C., Karama, S., Lewis, L., Harris, M., Bastin, M. E., Deary, I. J., Witte, A. V., Beyer, F., Loeffler, M., Mather, K. A., Schofield, P. R., Thalamuthu, A., Kwok, J. B., Wright, M. J., Ames, D., Trollor, J., Jiang, J., Brodaty, H., Wen, W., Vernooij, M. W., Hofman, A., Uitterlinden, A. G., Niessen, W. J., Wittfeld, K., Bülow, R., Völker, U., Pausova, Z., Pike, G. B., Maingault, S., Crivello, F., Tzourio, C., Amouyel, P., Mazoyer, B., Neale, M. C., Franz, C. E., Lyons, M. J., Panizzon, M. S., Andreassen, O. A., Dale, A. M., Logue, M., Grasby, K. L., Jahanshad, N., Painter, J. N., Colodro-Conde, L., Bralten, J., Hibar, D. P., Lind, P. A., Pizzagalli, F., Stein, J. L., Thompson, P. M., Medland, S. E., ENIGMA-consortium, Sachdev, P. S., Kremen, W. S., Wardlaw, J. M., Villringer, A., Van Duijn, C. M., Grabe, H. J., Longstreth, W. T., Fornage, M., Paus, T., Debette, S., Ikram, M. A., Schmidt, H., Schmidt, R., & Seshadri, S. (2020). Genetic correlations and genome-wide associations of cortical structure in general population samples of 22,824 adults. Nature Communications, 11: 4796. doi:10.1038/s41467-020-18367-y.
  • Hoffmann, C. W. G., Sadakata, M., Chen, A., Desain, P., & McQueen, J. M. (2014). Within-category variance and lexical tone discrimination in native and non-native speakers. In C. Gussenhoven, Y. Chen, & D. Dediu (Eds.), Proceedings of the 4th International Symposium on Tonal Aspects of Language (pp. 45-49). Nijmegen: Radboud University Nijmegen.

    Abstract

    In this paper, we show how acoustic variance within lexical tones in disyllabic Mandarin Chinese pseudowords affects discrimination abilities in both native and non-native speakers of Mandarin Chinese. Within-category acoustic variance did not hinder native speakers in discriminating between lexical tones, whereas it precluded Dutch native speakers from reaching native-level performance. Furthermore, the influence of acoustic variance was not uniform but asymmetric, dependent on the presentation order of the lexical tones to be discriminated. An exploratory analysis using an active adaptive oddball paradigm was used to quantify the extent of the perceptual asymmetry. We discuss two possible mechanisms underlying this asymmetry and propose possible paradigms to investigate these mechanisms.
  • Hogan-Brown, A. L., Hoedemaker, R. S., Gordon, P. C., & Losh, M. (2014). Eye-voice span during rapid automatized naming: Evidence of reduced automaticity in individuals with autism spectrum disorder and their siblings. Journal of Neurodevelopmental Disorders, 6(1): 33. doi:10.1186/1866-1955-6-33.

    Abstract

    Background: Individuals with autism spectrum disorder (ASD) and their parents demonstrate impaired performance in rapid automatized naming (RAN), a task that recruits a variety of linguistic and executive processes. Though the basic processes that contribute to RAN differences remain unclear, eye-voice relationships, as measured through eye tracking, can provide insight into cognitive and perceptual processes contributing to RAN performance. For example, in RAN, eye-voice span (EVS), the distance ahead the eyes are when articulation of a target item's label begins, is an indirect measure of automaticity of the processes underlying RAN. The primary objective of this study was to investigate automaticity in naming processes, as indexed by EVS during RAN. The secondary objective was to characterize RAN difficulties in individuals with ASD and their siblings. Methods: Participants (aged 15–33 years) included 21 individuals with ASD, 23 siblings of individuals with ASD, and 24 control subjects, group-matched on chronological age. Naming time, frequency of errors, and EVS were measured during a RAN task and compared across groups. Results: A stepwise pattern of RAN performance was observed, with individuals with ASD demonstrating the slowest naming across all RAN conditions, controls demonstrating the fastest naming, and siblings demonstrating intermediate performance. Individuals with ASD exhibited smaller EVSs than controls on all RAN conditions, and siblings exhibited smaller EVSs during number naming (the most highly automatized type of naming). EVSs were correlated with naming times in controls only, and only in the more automatized conditions. Conclusions: These results suggest that reduced automaticity in the component processes of RAN may underpin differences in individuals with ASD and their siblings. These findings also provide further support that RAN abilities are impacted by genetic liability to ASD. This study has important implications for understanding the underlying skills contributing to language-related deficits in ASD.
  • Holler, J., & Stevens, R. (2006). How speakers represent size information in referential communication for knowing and unknowing recipients. In D. Schlangen, & R. Fernandez (Eds.), Brandial '06 Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue, Potsdam, Germany, September 11-13.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Ozyurek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.
  • Hoogman, M., Guadalupe, T., Zwiers, M. P., Klarenbeek, P., Francks, C., & Fisher, S. E. (2014). Assessing the effects of common variation in the FOXP2 gene on human brain structure. Frontiers in Human Neuroscience, 8: 473. doi:10.3389/fnhum.2014.00473.

    Abstract

    The FOXP2 transcription factor is one of the most well-known genes to have been implicated in developmental speech and language disorders. Rare mutations disrupting the function of this gene have been described in different families and cases. In a large three-generation family carrying a missense mutation, neuroimaging studies revealed significant effects on brain structure and function, most notably in the inferior frontal gyrus, caudate nucleus and cerebellum. After the identification of rare disruptive FOXP2 variants impacting on brain structure, several reports proposed that common variants at this locus may also have detectable effects on the brain, extending beyond disorder into normal phenotypic variation. These neuroimaging genetics studies used groups of between 14 and 96 participants. The current study assessed effects of common FOXP2 variants on neuroanatomy using voxel-based morphometry and volumetric techniques in a sample of >1300 people from the general population. In a first targeted stage we analyzed single nucleotide polymorphisms (SNPs) claimed to have effects in prior smaller studies (rs2253478, rs12533005, rs2396753, rs6980093, rs7784315, rs17137124, rs10230558, rs7782412, rs1456031), beginning with regions proposed in the relevant papers, then assessing impact across the entire brain. In the second gene-wide stage, we tested all common FOXP2 variation, focusing on volumetry of those regions most strongly implicated from analyses of rare disruptive mutations. Despite using a sample that is more than ten times that used for prior studies of common FOXP2 variation, we found no evidence for effects of SNPs on variability in neuroanatomy in the general population. Thus, the impact of this gene on brain structure may be largely limited to extreme cases of rare disruptive alleles. Alternatively, effects of common variants at this gene exist but are too subtle to be detected with standard volumetric techniques.
  • Hoppenbrouwers, G., Seuren, P. A. M., & Weijters, A. (Eds.). (1985). Meaning and the lexicon. Dordrecht: Foris.
  • Hörpel, S. G., & Firzlaff, U. (2020). Post-natal development of the envelope following response to amplitude modulated sounds in the bat Phyllostomus discolor. Hearing Research, 388: 107904. doi:10.1016/j.heares.2020.107904.

    Abstract

    Bats use a large repertoire of calls for social communication, which are often characterized by temporal amplitude and frequency modulations. As bats are considered to be among the few mammalian species capable of vocal learning, the perception of temporal sound modulations should be crucial for juvenile bats to develop social communication abilities. However, the post-natal development of auditory processing of temporal modulations has not been investigated in bats so far. Here we use the minimally invasive technique of recording auditory brainstem responses to measure the envelope following response (EFR) to sinusoidally amplitude modulated noise (range of modulation frequencies: 11–130 Hz) in three juveniles (p8-p72) of the bat, Phyllostomus discolor. In two out of three animals, we show that although amplitude modulation processing is basically developed at p8, EFRs matured further over a period of about two weeks until p33. Maturation of the EFR generally took longer for higher modulation frequencies (87–130 Hz) than for lower modulation frequencies (11–58 Hz).
  • Hostetter, A. B., Pouw, W., & Wakefield, E. M. (2020). Learning from gesture and action: An investigation of memory for where objects went and how they got there. Cognitive Science, 44(9): e12889. doi:10.1111/cogs.12889.

    Abstract

    Speakers often use gesture to demonstrate how to perform actions—for example, they might show how to open the top of a jar by making a twisting motion above the jar. Yet it is unclear whether listeners learn as much from seeing such gestures as they learn from seeing actions that physically change the position of objects (i.e., actually opening the jar). Here, we examined participants' implicit and explicit understanding about a series of movements that demonstrated how to move a set of objects. The movements were either shown with actions that physically relocated each object or with gestures that represented the relocation without touching the objects. Further, the end location that was indicated for each object covaried with whether the object was grasped with one or two hands. We found that memory for the end location of each object was better after seeing the physical relocation of the objects, that is, after seeing action, than after seeing gesture, regardless of whether speech was absent (Experiment 1) or present (Experiment 2). However, gesture and action built similar implicit understanding of how a particular handgrasp corresponded with a particular end location. Although gestures miss the benefit of showing the end state of objects that have been acted upon, the data show that gestures are as good as action in building knowledge of how to perform an action.

    Additional information

    additional analyses Open Data OSF
  • Houston, D. M., Jusczyk, P. W., Kuijpers, C., Coolen, R., & Cutler, A. (2000). Cross-language word segmentation by 9-month-olds. Psychonomic Bulletin & Review, 7, 504-509.

    Abstract

    Dutch-learning and English-learning 9-month-olds were tested, using the Headturn Preference Procedure, for their ability to segment Dutch words with strong/weak stress patterns from fluent Dutch speech. This prosodic pattern is highly typical for words of both languages. The infants were familiarized with pairs of words and then tested on four passages, two that included the familiarized words and two that did not. Both the Dutch- and the English-learning infants gave evidence of segmenting the targets from the passages, to an equivalent degree. Thus, English-learning infants are able to extract words from fluent speech in a language that is phonetically different from English. We discuss the possibility that this cross-language segmentation ability is aided by the similarity of the typical rhythmic structure of Dutch and English words.
  • Houwing, D. J., Schuttel, K., Struik, E. L., Arling, C., Ramsteijn, A. S., Heinla, I., & Olivier, J. D. (2020). Perinatal fluoxetine treatment and dams’ early life stress history alter affective behavior in rat offspring depending on serotonin transporter genotype and sex. Behavioural Brain Research, 392: 112657. doi:10.1016/j.bbr.2020.112657.

    Abstract

    Many women diagnosed with a major depression continue or initiate antidepressant treatment during pregnancy. Both maternal stress and selective serotonin inhibitor (SSRI) antidepressant treatment during pregnancy have been associated with changes in offspring behavior, including increased anxiety and depressive-like behavior. Our aim was to investigate the effects of the SSRI fluoxetine (FLX), with and without the presence of a maternal depression, on affective behavior in male and female rat offspring. As reduced serotonin transporter (SERT) availability has been associated with altered behavioral outcome, both offspring with normal (SERT+/+) and reduced (SERT+/−) SERT expression were included. For our animal model of maternal depression, SERT+/− dams exposed to early life stress were used. Perinatal FLX treatment and early life stress in dams (ELSD) had sex- and genotype-specific effects on affective behavior in the offspring. In female offspring, perinatal FLX exposure interacted with SERT genotype to increase anxiety and depressive-like behavior in SERT+/+, but not SERT+/−, females. In male offspring, ELSD reduced anxiety and interacted with SERT genotype to decrease depressive-like behavior in SERT+/−, but not SERT+/+, males. Altogether, SERT+/+ female offspring appear to be more sensitive than SERT+/− females to the effects of perinatal FLX exposure, while SERT+/− male offspring appear more sensitive than SERT+/+ males to the effects of ELSD on affective behavior. Our data suggest a role for offspring SERT genotype and sex in FLX and ELSD-induced effects on affective behavior, thereby contributing to our understanding of the effects of perinatal SSRI treatment on offspring behavior later in life.
  • Howe, L. J., Hemani, G., Lesseur, C., Gaborieau, V., Ludwig, K. U., Mangold, E., Brennan, P., Ness, A. R., St Pourcain, B., Smith, G. D., & Lewis, S. J. (2020). Evaluating shared genetic influences on nonsyndromic cleft lip/palate and oropharyngeal neoplasms. Genetic Epidemiology, 44(8), 924-933. doi:10.1002/gepi.22343.

    Abstract

    It has been hypothesised that nonsyndromic cleft lip/palate (nsCL/P) and cancer may share aetiological risk factors. Population studies have found inconsistent evidence for increased incidence of cancer in nsCL/P cases, but several genes (e.g., CDH1, AXIN2) have been implicated in the aetiologies of both phenotypes. We aimed to evaluate shared genetic aetiology between nsCL/P and oral cavity/oropharyngeal cancers (OC/OPC), which affect similar anatomical regions. Using a primary sample of 5,048 OC/OPC cases and 5,450 controls of European ancestry and a replication sample of 750 cases and 336,319 controls from UK Biobank, we estimate genetic overlap using nsCL/P polygenic risk scores (PRS) with Mendelian randomization analyses performed to evaluate potential causal mechanisms. In the primary sample, we found strong evidence for an association between a nsCL/P PRS and increased odds of OC/OPC (per standard deviation increase in score, odds ratio [OR]: 1.09; 95% confidence interval [CI]: 1.04, 1.13; p = .000053). Although confidence intervals overlapped with the primary estimate, we did not find confirmatory evidence of an association between the PRS and OC/OPC in UK Biobank (OR 1.02; 95% CI: 0.95, 1.10; p = .55). Mendelian randomization analyses provided evidence that major nsCL/P risk variants are unlikely to influence OC/OPC. Our findings suggest possible shared genetic influences on nsCL/P and OC/OPC.

    Additional information

    Supporting information
  • Howells, H., Puglisi, G., Leonetti, A., Vigano, L., Fornia, L., Simone, L., Forkel, S. J., Rossi, M., Riva, M., Cerri, G., & Bello, L. (2020). The role of left fronto-parietal tracts in hand selection: Evidence from neurosurgery. Cortex, 128, 297-311. doi:10.1016/j.cortex.2020.03.018.

    Abstract

    Strong right-hand preference on the population level is a uniquely human feature, although its neural basis is still not clearly defined. Recent behavioural and neuroimaging literature suggests that hand preference may be related to the orchestrated function and size of fronto-parietal white matter tracts bilaterally. Lesions to these tracts induced during tumour resection may provide an opportunity to test this hypothesis. In the present study, a cohort of seventeen neurosurgical patients with left hemisphere brain tumours were recruited to investigate whether resection of certain white matter tracts affects the choice of hand selected for the execution of a goal-directed task (assembly of jigsaw puzzles). Patients performed the puzzles, but also tests for basic motor ability, selective attention and visuo-constructional ability, preoperatively and one month after surgery. An atlas-based disconnectome analysis was conducted to evaluate whether resection of tracts was significantly associated with changes in hand selection. Diffusion tractography was also used to dissect fronto-parietal tracts (the superior longitudinal fasciculus) and the corticospinal tract. Results showed a shift in hand selection despite the absence of any motor or cognitive deficits, which was significantly associated with frontal and parietal resections rather than other lobes. In particular, the shift in hand selection was significantly associated with the resection of dorsal rather than ventral fronto-parietal white matter connections. Dorsal white matter pathways contribute bilaterally to control of goal-directed hand movements. We show that unilateral lesions, that may unbalance the cooperation of the two hemispheres, can alter the choice of hand selected to accomplish movements.
  • Hoymann, G. (2014). [Review of the book Bridging the language gap, Approaches to Herero verbal interaction as development practice in Namibia by Rose Marie Beck]. Journal of African languages and linguistics, 35(1), 130-133. doi:10.1515/jall-2014-0004.
  • Hubers, F., Redl, T., De Vos, H., Reinarz, L., & De Hoop, H. (2020). Processing prescriptively incorrect comparative particles: Evidence from sentence-matching and eye-tracking. Frontiers in Psychology, 11: 186. doi:10.3389/fpsyg.2020.00186.

    Abstract

    Speakers of a language sometimes use particular constructions which violate prescriptive grammar rules. Despite their prescriptive ungrammaticality, they can occur rather frequently. One such example is the comparative construction in Dutch and similarly in German, where the equative particle is used in comparative constructions instead of the prescriptively correct comparative particle (Dutch beter als Jan and German besser wie Jan ‘lit. better as John’). From a theoretical linguist’s point of view, these so-called grammatical norm violations are perfectly grammatical, even though they are not part of the language’s prescriptive grammar. In a series of three experiments using sentence-matching and eye-tracking methodology, we investigated whether grammatical norm violations are processed as truly grammatical, as truly ungrammatical, or whether they fall in between these two. We hypothesized that the latter would be the case. We analyzed our data using linear mixed effects models in order to capture possible individual differences. The results of the sentence-matching experiments, which were conducted in both Dutch and German, showed that the grammatical norm violation patterns with ungrammatical sentences in both languages. Our hypothesis was therefore not borne out. However, using the more sensitive eye-tracking method on Dutch speakers only, we found that the ungrammatical alternative leads to higher reading times than the grammatical norm violation. We also found significant individual variation regarding this very effect. We furthermore replicated the processing difference between the grammatical norm violation and the prescriptively correct variant. In summary, we conclude that while the results of the more sensitive eye-tracking experiment suggest that grammatical norm violations are not processed on a par with ungrammatical sentences, the results of all three experiments clearly show that grammatical norm violations cannot be considered grammatical, either.

    Additional information

    Supplementary Material
  • Hubers, F., Trompenaars, T., Collin, S., De Schepper, K., & De Hoop, H. (2020). Hypercorrection as a by-product of education. Applied Linguistics, 41(4), 552-574. doi:10.1093/applin/amz001.

    Abstract

    Prescriptive grammar rules are taught in education, generally to ban the use of certain frequently encountered constructions in everyday language. This may lead to hypercorrection, meaning that the prescribed form in one construction is extended to another one in which it is in fact prohibited by prescriptive grammar. We discuss two such cases in Dutch: the hypercorrect use of the comparative particle dan ‘than’ in equative constructions, and the hypercorrect use of the accusative pronoun hen ‘them’ for a dative object. In two experiments, high school students of three educational levels were tested on their use of these hypercorrect forms (Experiment 1: n = 162; Experiment 2: n = 159). Our results indicate an overall large amount of hypercorrection across all levels of education, including pre-university level students who otherwise perform better in constructions targeted by prescriptive grammar rules. We conclude that while teaching prescriptive grammar rules to high school students seems to increase their use of correct forms in certain constructions, this comes at a cost of hypercorrection in others.
  • Huettig, F., Guerra, E., & Helo, A. (2020). Towards understanding the task dependency of embodied language processing: The influence of colour during language-vision interactions. Journal of Cognition, 3(1): 41. doi:10.5334/joc.135.

    Abstract

    A main challenge for theories of embodied cognition is to understand the task dependency of embodied language processing. One possibility is that perceptual representations (e.g., typical colour of objects mentioned in spoken sentences) are not activated routinely but the influence of perceptual representation emerges only when context strongly supports their involvement in language. To explore this question, we tested the effects of colour representations during language processing in three visual-world eye-tracking experiments. On critical trials, participants listened to sentence-embedded words associated with a prototypical colour (e.g., ‘...spinach...’) while they inspected a visual display with four printed words (Experiment 1), coloured or greyscale line drawings (Experiment 2) and a ‘blank screen’ after a preview of coloured or greyscale line drawings (Experiment 3). Visual context always presented a word/object (e.g., frog) associated with the same prototypical colour (e.g., green) as the spoken target word and three distractors. When hearing spinach, participants did not prefer the written word frog compared to other distractor words (Experiment 1). In Experiment 2, colour competitors attracted more overt attention compared to average distractors, but only for the coloured condition and not for greyscale trials. Finally, when the display was removed at the onset of the sentence, and in contrast to the previous blank-screen experiments with semantic competitors, there was no evidence of colour competition in the eye-tracking record (Experiment 3). These results fit best with the notion that the main role of perceptual representations in language processing is to contextualize language in the immediate environment.

    Additional information

    Data files and script
  • Huettig, F., & Mishra, R. K. (2014). How literacy acquisition affects the illiterate mind - A critical examination of theories and evidence. Language and Linguistics Compass, 8(10), 401-427. doi:10.1111/lnc3.12092.

    Abstract

    At present, more than one-fifth of humanity is unable to read and write. We critically examine experimental evidence and theories of how (il)literacy affects the human mind. In our discussion we show that literacy has significant cognitive consequences that go beyond the processing of written words and sentences. Thus, cultural inventions such as reading shape general cognitive processing in non-trivial ways. We suggest that this has important implications for educational policy and guidance as well as research into cognitive processing and brain functioning.
  • Huettig, F., Quinlan, P. T., McDonald, S. A., & Altmann, G. T. M. (2006). Models of high-dimensional semantic space predict language-mediated eye movements in the visual world. Acta Psychologica, 121(1), 65-80. doi:10.1016/j.actpsy.2005.06.002.

    Abstract

    In the visual world paradigm, participants are more likely to fixate a visual referent that has some semantic relationship with a heard word, than they are to fixate an unrelated referent [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language. A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6, 813–839]. Here, this method is used to examine the psychological validity of models of high-dimensional semantic space. The data strongly suggest that these corpus-based measures of word semantics predict fixation behavior in the visual world and provide further evidence that language-mediated eye movements to objects in the concurrent visual environment are driven by semantic similarity rather than all-or-none categorical knowledge. The data suggest that the visual world paradigm can, together with other methodologies, converge on the evidence that may help adjudicate between different theoretical accounts of the psychological semantics.
  • Huizeling, E., Wang, H., Holland, C., & Kessler, K. (2020). Age-related changes in attentional refocusing during simulated driving. Brain sciences, 10(8): 530. doi:10.3390/brainsci10080530.

    Abstract

    We recently reported that refocusing attention between temporal and spatial tasks becomes more difficult with increasing age, which could impair daily activities such as driving (Callaghan et al., 2017). Here, we investigated the extent to which difficulties in refocusing attention extend to naturalistic settings such as simulated driving. A total of 118 participants in five age groups (18–30; 40–49; 50–59; 60–69; 70–91 years) were compared during continuous simulated driving, where they repeatedly switched from braking due to traffic ahead (a spatially focal yet temporally complex task) to reading a motorway road sign (a spatially more distributed task). Sequential-Task (switching) performance was compared to Single-Task performance (road sign only) to calculate age-related switch-costs. Electroencephalography was recorded in 34 participants (17 in the 18–30 and 17 in the 60+ years groups) to explore age-related changes in the neural oscillatory signatures of refocusing attention while driving. We indeed observed age-related impairments in attentional refocusing, evidenced by increased switch-costs in response times and by deficient modulation of theta and alpha frequencies. Our findings highlight virtual reality (VR) and Neuro-VR as important methodologies for future psychological and gerontological research.

    Additional information

    supplementary file
  • Hulten, A., Karvonen, L., Laine, M., & Salmelin, R. (2014). Producing speech with a newly learned morphosyntax and vocabulary: An MEG study. Journal of Cognitive Neuroscience, 26(8), 1721-1735. doi:10.1162/jocn_a_00558.
  • Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2020). How in-group bias influences the level of detail of speaker-specific information encoded in novel lexical representations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(5), 894-906. doi:10.1037/xlm0000765.

    Abstract

    An important issue in theories of word learning is how abstract or context-specific representations of novel words are. One aspect of this broad issue is how well learners maintain information about the source of novel words. We investigated whether listeners’ source memory was better for words learned from members of their in-group (students of their own university) than it is for words learned from members of an out-group (students from another institution). In the first session, participants saw 6 faces and learned which of the depicted students attended either their own or a different university. In the second session, they learned competing labels (e.g., citrus-peller and citrus-schiller; in English, lemon peeler and lemon stripper) for novel gadgets, produced by the in-group and out-group speakers. Participants were then tested for source memory of these labels and for the strength of their in-group bias, that is, for how much they preferentially process in-group over out-group information. Analyses of source memory accuracy demonstrated an interaction between speaker group membership status and participants’ in-group bias: Stronger in-group bias was associated with less accurate source memory for out-group labels than in-group labels. These results add to the growing body of evidence on the importance of social variables for adult word learning.
  • Indefrey, P. (2006). A meta-analysis of hemodynamic studies on first and second language processing: Which suggested differences can we trust and what do they mean? Language Learning, 56(suppl. 1), 279-304. doi:10.1111/j.1467-9922.2006.00365.x.

    Abstract

    This article presents the results of a meta-analysis of 30 hemodynamic experiments comparing first language (L1) and second language (L2) processing in a range of tasks. The results suggest that reliably stronger activation during L2 processing is found (a) only for task-specific subgroups of L2 speakers and (b) within some, but not all regions that are also typically activated in native language processing. A tentative interpretation based on the functional roles of frontal and temporal regions is suggested.
  • Indefrey, P., & Gullberg, M. (2006). Introduction. Language Learning, 56(suppl. 1), 1-8. doi:10.1111/j.1467-9922.2006.00352.x.

    Abstract

    This volume is a harvest of articles from the first conference in a series on the cognitive neuroscience of language. The first conference focused on the cognitive neuroscience of second language acquisition (henceforth SLA). It brought together experts from fields as diverse as second language acquisition, bilingualism, cognitive neuroscience, and neuroanatomy. The articles and discussion articles presented here illustrate state-of-the-art findings and represent a wide range of theoretical approaches to classic as well as newer SLA issues. The theoretical themes cover age effects in SLA related to the so-called Critical Period Hypothesis and issues of ultimate attainment, and focus on age effects pertaining both to childhood and to aging. Other familiar SLA topics are the effects of proficiency and learning, as well as issues concerning the difference between the end product and the process that yields that product, here discussed in terms of convergence and degeneracy. A topic more related to actual usage of a second language once acquired concerns how multilingual speakers control and regulate their two languages.
  • Indefrey, P. (2006). It is time to work toward explicit processing models for native and second language speakers. Journal of Applied Psycholinguistics, 27(1), 66-69. doi:10.1017/S0142716406060103.
  • Indefrey, P. (2014). Time course of word production does not support a parallel input architecture. Language, Cognition and Neuroscience, 29(1), 33-34. doi:10.1080/01690965.2013.847191.

    Abstract

    Hickok's enterprise to unify psycholinguistic and motor control models is highly stimulating. Nonetheless, the model faces problems with respect to the time course of neural activation in word production, the flexibility for continuous speech, and the need for non-motor feedback.

  • Isbilen, E. S., McCauley, S. M., Kidd, E., & Christiansen, M. H. (2020). Statistically induced chunking recall: A memory‐based approach to statistical learning. Cognitive Science, 44(7): e12848. doi:10.1111/cogs.12848.

    Abstract

    The computations involved in statistical learning have long been debated. Here, we build on work suggesting that a basic memory process, chunking, may account for the processing of statistical regularities into larger units. Drawing on methods from the memory literature, we developed a novel paradigm to test statistical learning by leveraging a robust phenomenon observed in serial recall tasks: that short‐term memory is fundamentally shaped by long‐term distributional learning. In the statistically induced chunking recall (SICR) task, participants are exposed to an artificial language, using a standard statistical learning exposure phase. Afterward, they recall strings of syllables that either follow the statistics of the artificial language or comprise the same syllables presented in a random order. We hypothesized that if individuals had chunked the artificial language into word‐like units, then the statistically structured items would be more accurately recalled relative to the random controls. Our results demonstrate that SICR effectively captures learning in both the auditory and visual modalities, with participants displaying significantly improved recall of the statistically structured items, and even recalling specific trigram chunks from the input. SICR also exhibits greater test–retest reliability in the auditory modality, and greater sensitivity to individual differences in both modalities, than the standard two‐alternative forced‐choice task. These results thereby provide key empirical support to the chunking account of statistical learning and contribute a valuable new tool to the literature.
  • Jacoby, N., Margulis, E. H., Clayton, M., Hannon, E., Honing, H., Iversen, J., Klein, T. R., Mehr, S. A., Pearson, L., Peretz, I., Perlman, M., Polak, R., Ravignani, A., Savage, P. E., Steingo, G., Stevens, C. J., Trainor, L., Trehub, S., Veal, M., & Wald-Fuhrmann, M. (2020). Cross-cultural work in music cognition: Challenges, insights, and recommendations. Music Perception, 37(3), 185-195. doi:10.1525/mp.2020.37.3.185.

    Abstract

    Many foundational questions in the psychology of music require cross-cultural approaches, yet the vast majority of work in the field to date has been conducted with Western participants and Western music. For cross-cultural research to thrive, it will require collaboration between people from different disciplinary backgrounds, as well as strategies for overcoming differences in assumptions, methods, and terminology. This position paper surveys the current state of the field and offers a number of concrete recommendations focused on issues involving ethics, empirical methods, and definitions of “music” and “culture.”
  • Janse, E. (2006). Auditieve woordherkenning bij afasie: Waarneming van mismatch items. Afasiologie, 28(4), 64-67.
  • Janse, E., Sennema, A., & Slis, A. (2000). Fast speech timing in Dutch: The durational correlates of lexical stress and pitch accent. In Proceedings of the VIth International Conference on Spoken Language Processing, Vol. III (pp. 251-254).

    Abstract

    In this study we investigated the durational correlates of lexical stress and pitch accent at normal and fast speech rate in Dutch. Previous literature on English shows that durations of lexically unstressed vowels are reduced more than those of stressed vowels when speakers increase their speech rate. We found that the same holds for Dutch, irrespective of whether the unstressed vowel is schwa or a "full" vowel. In the same line, we expected that vowels in words without a pitch accent would be shortened relatively more than vowels in words with a pitch accent. This was not the case: if anything, the accented vowels were shortened relatively more than the unaccented vowels. We conclude that duration is an important cue for lexical stress, but not for pitch accent.
  • Janse, E. (2000). Intelligibility of time-compressed speech: Three ways of time-compression. In Proceedings of the VIth International Conference on Spoken Language Processing, vol. III (pp. 786-789).

    Abstract

    Studies on fast speech have shown that word-level timing of fast speech differs from that of normal rate speech in that unstressed syllables are shortened more than stressed syllables as speech rate increases. An earlier experiment showed that the intelligibility of time-compressed speech could not be improved by making its temporal organisation closer to natural fast speech. To test the hypothesis that segmental intelligibility is more important than prosodic timing in listening to time-compressed speech, the intelligibility of bisyllabic words was tested in three time-compression conditions: either stressed and unstressed syllable were compressed to the same degree, or the stressed syllable was compressed more than the unstressed syllable, or the reverse. As was found before, imitating word-level timing of fast speech did not improve intelligibility over linear compression. However, the results did not confirm the hypothesis either: there was no difference in intelligibility between the three compression conditions. We conclude that segmental intelligibility plays an important role, but further research is necessary to decide between the contributions of prosody and segmental intelligibility to the word-level intelligibility of time-compressed speech.
  • Janse, E. (2006). Lexical competition effects in aphasia: Deactivation of lexical candidates in spoken word processing. Brain and Language, 97, 1-11. doi:10.1016/j.bandl.2005.06.011.

    Abstract

    Research has shown that Broca’s and Wernicke’s aphasic patients show different impairments in auditory lexical processing. The results of an experiment with form-overlapping primes showed an inhibitory effect of form-overlap for control adults and a weak inhibition trend for Broca’s aphasic patients, but a facilitatory effect of form-overlap was found for Wernicke’s aphasic participants. This suggests that Wernicke’s aphasic patients are mainly impaired in suppression of once-activated word candidates and selection of one winning candidate, which may be related to their problems in auditory language comprehension.
  • Janse, E., & Jesse, A. (2014). Working memory affects older adults’ use of context in spoken-word recognition. Quarterly Journal of Experimental Psychology, 67, 1842-1862. doi:10.1080/17470218.2013.879391.

    Abstract

    Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may, however, modulate older listeners’ ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether working memory predicts older adults’ ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) mainly affected the speed of recognition, with only a marginal effect on detection accuracy. Contextual facilitation was modulated by older listeners’ working memory and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners’ immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.

  • Janzen, G. (2006). Memory for object location and route direction in virtual large-scale space. Quarterly Journal of Experimental Psychology, 59(3), 493-508. doi:10.1080/02724980443000746.

    Abstract

    In everyday life people have to deal with tasks such as finding a novel path to a certain goal location, finding one’s way back, finding a short cut, or making a detour. In all of these tasks people acquire route knowledge. For finding the same way back they have to remember locations of objects like buildings and additionally direction changes. In three experiments using recognition tasks as well as conscious and unconscious spatial priming paradigms memory processes underlying wayfinding behaviour were investigated. Participants learned a route through a virtual environment with objects either placed at intersections (i.e., decision points) where another route could be chosen or placed along the route (non-decision points). Analyses indicate first that objects placed at decision points are recognized faster than other objects. Second, they indicate that the direction in which a route is travelled is represented only at locations that are relevant for wayfinding (e.g., decision points). The results point out the efficient way in which memory for object location and memory for route direction interact.
  • Jebb, D., Huang, Z., Pippel, M., Hughes, G. M., Lavrichenko, K., Devanna, P., Winkler, S., Jermiin, L. S., Skirmuntt, E. C., Katzourakis, A., Burkitt-Gray, L., Ray, D. A., Sullivan, K. A. M., Roscito, J. G., Kirilenko, B. M., Dávalos, L. M., Corthals, A. P., Power, M. L., Jones, G., Ransome, R. D., Dechmann, D., Locatelli, A. G., Puechmaille, S. J., Fedrigo, O., Jarvis, E. D., Hiller, M., Vernes, S. C., Myers, E. W., & Teeling, E. C. (2020). Six reference-quality genomes reveal evolution of bat adaptations. Nature, 583, 578-584. doi:10.1038/s41586-020-2486-3.

    Abstract

    Bats possess extraordinary adaptations, including flight, echolocation, extreme longevity and unique immunity. High-quality genomes are crucial for understanding the molecular basis and evolution of these traits. Here we incorporated long-read sequencing and state-of-the-art scaffolding protocols to generate, to our knowledge, the first reference-quality genomes of six bat species (Rhinolophus ferrumequinum, Rousettus aegyptiacus, Phyllostomus discolor, Myotis myotis, Pipistrellus kuhlii and Molossus molossus). We integrated gene projections from our ‘Tool to infer Orthologs from Genome Alignments’ (TOGA) software with de novo and homology gene predictions as well as short- and long-read transcriptomics to generate highly complete gene annotations. To resolve the phylogenetic position of bats within Laurasiatheria, we applied several phylogenetic methods to comprehensive sets of orthologous protein-coding and noncoding regions of the genome, and identified a basal origin for bats within Scrotifera. Our genome-wide screens revealed positive selection on hearing-related genes in the ancestral branch of bats, which is indicative of laryngeal echolocation being an ancestral trait in this clade. We found selection and loss of immunity-related genes (including pro-inflammatory NF-κB regulators) and expansions of anti-viral APOBEC3 genes, which highlights molecular mechanisms that may contribute to the exceptional immunity of bats. Genomic integrations of diverse viruses provide a genomic record of historical tolerance to viral infection in bats. Finally, we found and experimentally validated bat-specific variation in microRNAs, which may regulate bat-specific gene-expression programs. Our reference-quality bat genomes provide the resources required to uncover and validate the genomic basis of adaptations of bats, and stimulate new avenues of research that are directly relevant to human health and disease.

  • Jesse, A., & McQueen, J. M. (2014). Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition. Quarterly Journal of Experimental Psychology, 67, 793-808. doi:10.1080/17470218.2013.834371.

    Abstract

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.
  • Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115. doi:10.1075/intp.5.2.04jes.

    Abstract

    Language processing is influenced by multiple sources of information. We examined whether the performance in simultaneous interpreting would be improved when providing two sources of information, the auditory speech as well as corresponding lip-movements, in comparison to presenting the auditory speech alone. Although there was an improvement in sentence recognition when presented with visible speech, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visual speech did not contribute to performance could be the presentation of the auditory signal without noise (Massaro, 1998). This hypothesis should be tested in the future. Furthermore, it should be investigated if an effect of visible speech can be found for other contexts, when visual information could provide cues for emotions, prosody, or syntax.
  • Jessop, A., & Chang, F. (2020). Thematic role information is maintained in the visual object-tracking system. Quarterly Journal of Experimental Psychology, 73(1), 146-163. doi:10.1177/1747021819882842.

    Abstract

    Thematic roles characterise the functions of participants in events, but there is no agreement on how these roles are identified in the real world. In three experiments, we examined how role identification in push events is supported by the visual object-tracking system. Participants saw one to three push events in visual scenes with nine identical randomly moving circles. After a period of random movement, two circles from one of the push events and a foil object were given different colours and the participants had to identify their roles in the push with an active sentence, such as red pushed blue. It was found that the participants could track the agent and patient targets and generate descriptions that identified their roles at above chance levels, even under difficult conditions, such as when tracking multiple push events (Experiments 1–3), fixating their gaze (Experiment 1), performing a concurrent speeded-response task (Experiment 2), and when tracking objects that were temporarily invisible (Experiment 3). The results were consistent with previous findings of an average tracking capacity limit of four objects, individual differences in this capacity, and the use of attentional strategies. The studies demonstrated that thematic role information can be maintained when tracking the identity of visually identical objects, then used to map role fillers (e.g., the agent of a push event) into their appropriate sentence positions. This suggests that thematic role features are stored temporarily in the visual object-tracking system.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2000). The development of word recognition: The use of the possible-word constraint by 12-month-olds. In L. Gleitman, & A. Joshi (Eds.), Proceedings of CogSci 2000 (pp. 1034). London: Erlbaum.
  • Jones, S., Nyberg, L., Sandblom, J., Stigsdotter Neely, A., Ingvar, M., Petersson, K. M., & Bäckman, L. (2006). Cognitive and neural plasticity in aging: General and task-specific limitations. Neuroscience and Biobehavioral Reviews, 30(6), 864-871. doi:10.1016/j.neubiorev.2006.06.012.

    Abstract

    There is evidence for cognitive as well as neural plasticity across the adult life span, although aging is associated with certain constraints on plasticity. In the current paper, we argue that the age-related reduction in cognitive plasticity may be due to (a) deficits in general processing resources, and (b) failure to engage in task-relevant cognitive operations. Memory-training research suggests that age-related processing deficits (e.g., executive functions, speed) hinder older adults from utilizing mnemonic techniques as efficiently as the young, and that this age difference is reflected by diminished frontal activity during mnemonic use. Additional constraints on memory plasticity in old age are related to difficulties that are specific to the task, such as creating visual images, as well as in binding together the information to be remembered. These deficiencies are paralleled by reduced activity in occipito-parietal and medial–temporal regions, respectively. Future attempts to optimize intervention-related gains in old age should consider targeting both general processing and task-specific origins of age-associated reductions in cognitive plasticity.
  • Jongman, S. R., Roelofs, A., & Lewis, A. G. (2020). Attention for speaking: Prestimulus motor-cortical alpha power predicts picture naming latencies. Journal of Cognitive Neuroscience, 32(5), 747-761. doi:10.1162/jocn_a_01513.

    Abstract

    There is a range of variability in the speed with which a single speaker will produce the same word from one instance to another. Individual differences studies have shown that the speed of production and the ability to maintain attention are related. This study investigated whether fluctuations in production latencies can be explained by spontaneous fluctuations in speakers' attention just prior to initiating speech planning. A relationship between individuals' incidental attentional state and response performance is well attested in visual perception, with lower prestimulus alpha power associated with faster manual responses. Alpha is thought to have an inhibitory function: Low alpha power suggests less inhibition of a specific brain region, whereas high alpha power suggests more inhibition. Does the same relationship hold for cognitively demanding tasks such as word production? In this study, participants named pictures while EEG was recorded, with alpha power taken to index an individual's momentary attentional state. Participants' level of alpha power just prior to picture presentation and just prior to speech onset predicted subsequent naming latencies. Specifically, higher alpha power in the motor system resulted in faster speech initiation. Our results suggest that one index of a lapse of attention during speaking is reduced inhibition of motor-cortical regions: Decreased motor-cortical alpha power indicates reduced inhibition of this area while early stages of production planning unfold, which leads to increased interference from motor-cortical signals and longer naming latencies. This study shows that the language production system is not impermeable to the influence of attention.
  • Jongman, S. R., Piai, V., & Meyer, A. S. (2020). Planning for language production: The electrophysiological signature of attention to the cue to speak. Language, Cognition and Neuroscience, 35(7), 915-932. doi:10.1080/23273798.2019.1690153.

    Abstract

    In conversation, speech planning can overlap with listening to the interlocutor. It has been postulated that once there is enough information to formulate a response, planning is initiated and the response is maintained in working memory. Concurrently, the auditory input is monitored for the turn end such that responses can be launched promptly. In three EEG experiments, we aimed to identify the neural signature of phonological planning and monitoring by comparing delayed responding to not responding (reading aloud, repetition and lexical decision). These comparisons consistently resulted in a sustained positivity and beta power reduction over posterior regions. We argue that these effects reflect attention to the sequence end. Phonological planning and maintenance were not detected in the neural signature even though it is highly likely these were taking place. This suggests that EEG must be used cautiously to identify response planning when the neural signal is overridden by attention effects.
  • Jung, D., Klessa, K., Duray, Z., Oszkó, B., Sipos, M., Szeverényi, S., Várnai, Z., Trilsbeek, P., & Váradi, T. (2014). Languagesindanger.eu - Including multimedia language resources to disseminate knowledge and create educational material on less-resourced languages. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 530-535).

    Abstract

    The present paper describes the development of the languagesindanger.eu interactive website as an example of including multimedia language resources to disseminate knowledge and create educational material on less-resourced languages. The website is a product of INNET (Innovative networking in infrastructure for endangered languages), a European FP7 project. Its main functions can be summarized as related to the three following areas: (1) raising students' awareness of language endangerment and arousing their interest in linguistic diversity, language maintenance and language documentation; (2) informing both students and teachers about these topics and showing ways in which they can enlarge their knowledge further, with a special emphasis on information about language archives; (3) helping teachers include these topics in their classes. The website has been localized into five language versions with the intention to be accessible to both scientific and non-scientific communities such as (primarily) secondary school teachers and students, beginning university students of linguistics, journalists, the interested public, and also members of speech communities who speak minority languages.
  • Junge, C., & Cutler, A. (2014). Early word recognition and later language skills. Brain sciences, 4(4), 532-559. doi:10.3390/brainsci4040532.

    Abstract

    Recent behavioral and electrophysiological evidence has highlighted the long-term importance for language skills of an early ability to recognize words in continuous speech. We here present further tests of this long-term link in the form of follow-up studies conducted with two (separate) groups of infants who had earlier participated in speech segmentation tasks. Each study extends prior follow-up tests: Study 1 by using a novel follow-up measure that taps into online processing, Study 2 by assessing language performance relationships over a longer time span than previously tested. Results of Study 1 show that brain correlates of speech segmentation ability at 10 months are positively related to 16-month-olds’ target fixations in a looking-while-listening task. Results of Study 2 show that infant speech segmentation ability no longer directly predicts language profiles at the age of five. However, a meta-analysis across our results and those of similar studies (Study 3) reveals that age at follow-up does not moderate effect size. Together, the results suggest that infants’ ability to recognize words in speech certainly benefits early vocabulary development; further observed relationships of later language skills to early word recognition may be consequent upon this vocabulary size effect.
  • Junge, C., Cutler, A., & Hagoort, P. (2014). Successful word recognition by 10-month-olds given continuous speech both at initial exposure and test. Infancy, 19(2), 179-193. doi:10.1111/infa.12040.

    Abstract

    Most words that infants hear occur within fluent speech. To compile a vocabulary, infants therefore need to segment words from speech contexts. This study is the first to investigate whether infants (here: 10-month-olds) can recognize words when both initial exposure and test presentation are in continuous speech. Electrophysiological evidence attests that this indeed occurs: An increased extended negativity (word recognition effect) appears for familiarized target words relative to control words. This response proved constant at the individual level: Only infants who showed this negativity at test had shown such a response, within six repetitions after first occurrence, during familiarization.
  • Kaufeld, G., Naumann, W., Meyer, A. S., Bosker, H. R., & Martin, A. E. (2020). Contextual speech rate influences morphosyntactic prediction and integration. Language, Cognition and Neuroscience, 35(7), 933-948. doi:10.1080/23273798.2019.1701691.

    Abstract

    Understanding spoken language requires the integration and weighting of multiple cues, and may call on cue integration mechanisms that have been studied in other areas of perception. In the current study, we used eye-tracking (visual-world paradigm) to examine how contextual speech rate (a lower-level, perceptual cue) and morphosyntactic knowledge (a higher-level, linguistic cue) are iteratively combined and integrated. Results indicate that participants used contextual rate information immediately, which we interpret as evidence of perceptual inference and the generation of predictions about upcoming morphosyntactic information. Additionally, we observed that early rate effects remained active in the presence of later conflicting lexical information. This result demonstrates that (1) contextual speech rate functions as a cue to morphosyntactic inferences, even in the presence of subsequent disambiguating information; and (2) listeners iteratively use multiple sources of information to draw inferences and generate predictions during speech comprehension. We discuss the implications of these demonstrations for theories of language processing.
  • Kaufeld, G., Ravenschlag, A., Meyer, A. S., Martin, A. E., & Bosker, H. R. (2020). Knowledge-based and signal-based cues are weighted flexibly during spoken language comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 549-562. doi:10.1037/xlm0000744.

    Abstract

    During spoken language comprehension, listeners make use of both knowledge-based and signal-based sources of information, but little is known about how cues from these distinct levels of representational hierarchy are weighted and integrated online. In an eye-tracking experiment using the visual world paradigm, we investigated the flexible weighting and integration of morphosyntactic gender marking (a knowledge-based cue) and contextual speech rate (a signal-based cue). We observed that participants used the morphosyntactic cue immediately to make predictions about upcoming referents, even in the presence of uncertainty about the cue’s reliability. Moreover, we found speech rate normalization effects in participants’ gaze patterns even in the presence of preceding morphosyntactic information. These results demonstrate that cues are weighted and integrated flexibly online, rather than adhering to a strict hierarchy. We further found rate normalization effects in the looking behavior of participants who showed a strong behavioral preference for the morphosyntactic gender cue. This indicates that rate normalization effects are robust and potentially automatic. We discuss these results in light of theories of cue integration and the two-stage model of acoustic context effects.
