Publications

Displaying 101 - 200 of 608
  • Drijvers, L., & Trujillo, J. P. (2018). Commentary: Transcranial magnetic stimulation over left inferior frontal and posterior temporal cortex disrupts gesture-speech integration. Frontiers in Human Neuroscience, 12: 256. doi:10.3389/fnhum.2018.00256.

    Abstract

    A commentary on
    Transcranial Magnetic Stimulation over Left Inferior Frontal and Posterior Temporal Cortex Disrupts Gesture-Speech Integration

    by Zhao, W., Riggs, K., Schindler, I., and Holle, H. (2018). J. Neurosci. 10, 1748–1717. doi: 10.1523/JNEUROSCI.1748-17.2017
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2018). Alpha and beta oscillations index semantic congruency between speech and gestures in clear and degraded speech. Journal of Cognitive Neuroscience, 30(8), 1086-1097. doi:10.1162/jocn_a_01301.

    Abstract

    Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech–gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + “mixing”) or mismatching (drinking gesture + “walking”) gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2018). Hearing and seeing meaning in noise: Alpha, beta and gamma oscillations predict gestural enhancement of degraded speech comprehension. Human Brain Mapping, 39(5), 2075-2087. doi:10.1002/hbm.23987.

    Abstract

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand-area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level.

    Additional information

    hbm23987-sup-0001-suppinfo01.docx
  • Drijvers, L., & Ozyurek, A. (2018). Native language status of the listener modulates the neural integration of speech and iconic gestures in clear and adverse listening conditions. Brain and Language, 177-178, 7-17. doi:10.1016/j.bandl.2018.01.003.

    Abstract

    Native listeners neurally integrate iconic gestures with speech, which can enhance degraded speech comprehension. However, it is unknown how non-native listeners neurally integrate speech and gestures, as they might process visual semantic context differently than natives. We recorded EEG while native and highly-proficient non-native listeners watched videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching ('to drive'+driving gesture) or mismatching gesture ('to drink'+mixing gesture). Degraded speech elicited an enhanced N400 amplitude compared to clear speech in both groups, revealing an increase in neural resources needed to resolve the spoken input. A larger N400 effect was found in clear speech for non-natives compared to natives, but in degraded speech only for natives. Non-native listeners might thus process gesture more strongly than natives when speech is clear, but need more auditory cues to facilitate access to gestural semantic information when speech is degraded.
  • Drolet, M., & Kempen, G. (1985). IPG: A cognitive approach to sentence generation. CCAI: The Journal for the Integrated Study of Artificial Intelligence, Cognitive Science and Applied Epistemology, 2, 37-61.
  • Drozd, K. F. (1995). Child English pre-sentential negation as metalinguistic exclamatory sentence negation. Journal of Child Language, 22(3), 583-610. doi:10.1017/S030500090000996X.

    Abstract

    This paper presents a study of the spontaneous pre-sentential negations of ten English-speaking children between the ages of 1;6 and 3;4 which supports the hypothesis that child English nonanaphoric pre-sentential negation is a form of metalinguistic exclamatory sentence negation. A detailed discourse analysis reveals that children's pre-sentential negatives like No Nathaniel a king (i) are characteristically echoic, and (ii) typically express objection and rectification, two characteristic functions of exclamatory negation in adult discourse, e.g. Don't say 'Nathaniel's a king'! A comparison of children's pre-sentential negations with their internal predicate negations using not and don't reveals that the two negative constructions are formally and functionally distinct. I argue that children's nonanaphoric pre-sentential negatives constitute an independent, well-formed class of discourse negation. They are not 'primitive' constructions derived from the miscategorization of emphatic no in adult speech or children's 'inventions'. Nor are they an early derivational variant of internal sentence negation. Rather, these negatives reflect young children's competence in using grammatical negative constructions appropriately in discourse.
  • Drude, S. (2006). Documentação lingüística: O formato de anotação de textos [Linguistic documentation: The format of text annotation]. Cadernos de Estudos Lingüísticos, 35, 27-51.

    Abstract

    This paper presents the methods of language documentation as applied in the Awetí Language Documentation Project, one of the projects in the Documentation of Endangered Languages Programme (DOBES). It describes the steps of how a large digital corpus of annotated multi-media data is built. Special attention is devoted to the format of annotation of linguistic data. The Advanced Glossing format is presented and justified.
  • Duñabeitia, J. A., Crepaldi, D., Meyer, A. S., New, B., Pliatsikas, C., Smolka, E., & Brysbaert, M. (2018). MultiPic: A standardized set of 750 drawings with norms for six European languages. Quarterly Journal of Experimental Psychology, 71(4), 808-816. doi:10.1080/17470218.2017.1310261.

    Abstract

    Numerous studies in psychology, cognitive neuroscience and psycholinguistics have used pictures of objects as stimulus materials. Currently, authors engaged in cross-linguistic work or wishing to run parallel studies at multiple sites where different languages are spoken must rely on rather small sets of black-and-white or colored line drawings. These sets are increasingly experienced as being too limited. Therefore, we constructed a new set of 750 colored pictures of concrete concepts. This set, MultiPic, constitutes a new valuable tool for cognitive scientists investigating language, visual perception, memory and/or attention in monolingual or multilingual populations. Importantly, the MultiPic databank has been normed in six different European languages (British English, Spanish, French, Dutch, Italian and German). All stimuli and norms are freely available at http://www.bcbl.eu/databases/multipic

    Additional information

    http://www.bcbl.eu/databases/multipic
  • Dunn, M. (2006). [Review of the book Comparative Chukotko-Kamchatkan dictionary by Michael Fortescue]. Anthropological Linguistics, 48(3), 296-298.
  • Edlinger, G., Bastiaansen, M. C. M., Brunia, C., Neuper, C., & Pfurtscheller, G. (1999). Cortical oscillatory activity assessed by combined EEG and MEG recordings and high resolution ERD methods. Biomedizinische Technik, 44(2), 131-134.
  • Eekhof, L. S., Eerland, A., & Willems, R. M. (2018). Readers’ insensitivity to tense revealed: No differences in mental simulation during reading of present and past tense stories. Collabra: Psychology, 4(1): 16. doi:10.1525/collabra.121.

    Abstract

    While the importance of mental simulation during literary reading has long been recognized, we know little about the factors that determine when, what, and how much readers mentally simulate. Here we investigate the influence of a specific text characteristic, namely verb tense (present vs. past), on mental simulation during literary reading. Verbs usually denote the actions and events that take place in narratives and hence it is hypothesized that verb tense will influence the amount of mental simulation elicited in readers. Although the present tense is traditionally considered to be more “vivid”, this study is one of the first to experimentally assess this claim. We recorded eye-movements while subjects read stories in the past or present tense and collected data regarding self-reported levels of mental simulation, transportation and appreciation. We found no influence of tense on any of the offline measures. The eye-tracking data showed a slightly more complex pattern. Although we did not find a main effect of sensorimotor simulation content on reading times, we were able to link the degree to which subjects slowed down when reading simulation eliciting content to offline measures of attention and transportation, but this effect did not interact with the tense of the story. Unexpectedly, we found a main effect of tense on reading times per word, with past tense stories eliciting longer first fixation durations and gaze durations. However, we were unable to link this effect to any of the offline measures. In sum, this study suggests that tense does not play a substantial role in the process of mental simulation elicited by literary stories.

    Additional information

    Data Accessibility
  • Eibl-Eibesfeldt, I., & Senft, G. (1991). Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kaile'una) Tänze zur Einleitung des Erntefeier-Rituals [Trobrianders (Papua New Guinea, Trobriand Islands, Kaile'una): Dances introducing the harvest festival ritual]. Film E 3129. Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kiriwina); Ausschnitte aus einem Erntefesttanz [Trobrianders (Papua New Guinea, Trobriand Islands, Kiriwina): Excerpts from a harvest festival dance]. Film E 3130. Publikationen zu wissenschaftlichen Filmen. Sektion Ethnologie, 17, 1-17.
  • Eichert, N., Peeters, D., & Hagoort, P. (2018). Language-driven anticipatory eye movements in virtual reality. Behavior Research Methods, 50(3), 1102-1115. doi:10.3758/s13428-017-0929-z.

    Abstract

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. The use of this variant of the visual world paradigm has shown that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional (2D) stimuli that are mere abstractions of real world objects. Here we present a visual world paradigm study in a three-dimensional (3D) immersive virtual reality environment. Despite significant changes in the stimulus material and the different mode of stimulus presentation, language-mediated anticipatory eye movements were observed. These findings thus indicate prediction of upcoming words in language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eye-tracking in rich and multimodal 3D virtual environments.

    Additional information

    13428_2017_929_MOESM1_ESM.docx
  • Eisner, F., & McQueen, J. M. (2006). Perceptual learning in speech: Stability over time (L). Journal of the Acoustical Society of America, 119(4), 1950-1953. doi:10.1121/1.2178721.

    Abstract

    Perceptual representations of phonemes are flexible and adapt rapidly to accommodate idiosyncratic articulation in the speech of a particular talker. This letter addresses whether such adjustments remain stable over time and under exposure to other talkers. During exposure to a story, listeners learned to interpret an ambiguous sound as [f] or [s]. Perceptual adjustments measured after 12 h were as robust as those measured immediately after learning. Equivalent effects were found when listeners heard speech from other talkers in the 12 h interval, and when they had the opportunity to consolidate learning during sleep.
  • Enfield, N. J., Majid, A., & Van Staden, M. (2006). Cross-linguistic categorisation of the body: Introduction. Language Sciences, 28(2-3), 137-147. doi:10.1016/j.langsci.2005.11.001.

    Abstract

    The domain of the human body is an ideal focus for semantic typology, since the body is a physical universal and all languages have terms referring to its parts. Previous research on body part terms has depended on secondary sources (e.g. dictionaries), and has lacked sufficient detail or clarity for a thorough understanding of these terms’ semantics. The present special issue is the outcome of a collaborative project aimed at improving approaches to investigating the semantics of body part terms, by developing materials to elicit information that provides for cross-linguistic comparison. The articles in this volume are original fieldwork-based descriptions of terminology for parts of the body in ten languages. Also included are an elicitation guide and experimental protocol used in gathering data. The contributions provide inventories of body part terms in each language, with analysis of both intensional and extensional aspects of meaning, differences in morphological complexity, semantic relations among terms, and discussion of partonomic structure within the domain.
  • Enfield, N. J. (2006). Elicitation guide on parts of the body. Language Sciences, 28(2-3), 148-157. doi:10.1016/j.langsci.2005.11.003.

    Abstract

    This document is intended for use as an elicitation guide for the field linguist consulting with native speakers in collecting terms for parts of the body, and in the exploration of their semantics.
  • Enfield, N. J. (2006). [Review of the book A grammar of Semelai by Nicole Kruspe]. Linguistic Typology, 10(3), 452-455. doi:10.1515/LINGTY.2006.014.
  • Enfield, N. J. (2006). Languages as historical documents: The endangered archive in Laos. South East Asia Research, 14(3), 471-488.

    Abstract

    This paper reviews current discussion of the issue of just what is lost when a language dies. Special reference is made to the current situation in Laos, a country renowned for its considerable cultural and linguistic diversity. It focuses on the historical, anthropological and ecological knowledge that a language can encode, and the social and cultural consequences of the loss of such traditional knowledge when a language is no longer passed on. Finally, the article points out the paucity of studies and obstacles to field research on minority languages in Laos, which seriously hamper their documentation.
  • Enfield, N. J. (2006). Lao body part terms. Language Sciences, 28(2-3), 181-200. doi:10.1016/j.langsci.2005.11.011.

    Abstract

    This article presents a description of nominal expressions for parts of the human body conventionalised in Lao, a Southwestern Tai language spoken in Laos, Northeast Thailand, and Northeast Cambodia. An inventory of around 170 Lao expressions is listed, with commentary where some notability is determined, usually based on explicit comparison to the metalanguage, English. Notes on aspects of the grammatical and semantic structure of the set of body part terms are provided, including a discussion of semantic relations pertaining among members of the set of body part terms. I conclude that the semantic relations which pertain between terms for different parts of the body not only include part/whole relations, but also relations of location, connectedness, and general association. Calling the whole system a ‘partonomy’ attributes greater centrality to the part/whole relation than is warranted.
  • Enfield, N. J. (1999). On the indispensability of semantics: Defining the ‘vacuous’. Rask: internationalt tidsskrift for sprog og kommunikation, 9/10, 285-304.
  • Ergin, R., Meir, I., Ilkbasaran, D., Padden, C., & Jackendoff, R. (2018). The development of argument structure in Central Taurus Sign Language. Sign Language Studies, 18(4), 612-639. doi:10.1353/sls.2018.0018.

    Abstract

    One of the fundamental issues for a language is its capacity to express argument structure unambiguously. This study presents evidence for the emergence and the incremental development of these basic mechanisms in a newly developing language, Central Taurus Sign Language. Our analyses identify universal patterns in both the emergence and development of these mechanisms and in language-specific trajectories.
  • Ernestus, M. (2006). Statistically gradient generalizations for contrastive phonological features. The Linguistic Review, 23(3), 217-233. doi:10.1515/TLR.2006.008.

    Abstract

    In mainstream phonology, contrastive properties, like stem-final voicing, are simply listed in the lexicon. This article reviews experimental evidence that such contrastive properties may be predictable to some degree and that the relevant statistically gradient generalizations form an inherent part of the grammar. The evidence comes from the underlying voice specification of stem-final obstruents in Dutch. Contrary to received wisdom, this voice specification is partly predictable from the obstruent’s manner and place of articulation and from the phonological properties of the preceding segments. The degree of predictability, which depends on the exact contents of the lexicon, directs speakers’ guesses of underlying voice specifications. Moreover, existing words that disobey the generalizations are disadvantaged by being recognized and produced more slowly and less accurately, also under natural conditions. We discuss how these observations can be accounted for in two different types of approaches to grammar, Stochastic Optimality Theory and exemplar-based modeling.
  • Ernestus, M., Lahey, M., Verhees, F., & Baayen, R. H. (2006). Lexical frequency and voice assimilation. Journal of the Acoustical Society of America, 120(2), 1040-1051. doi:10.1121/1.2211548.

    Abstract

    Acoustic duration and degree of vowel reduction are known to correlate with a word’s frequency of occurrence. The present study broadens the research on the role of frequency in speech production to voice assimilation. The test case was regressive voice assimilation in Dutch. Clusters from a corpus of read speech were more often perceived as unassimilated in lower-frequency words and as either completely voiced regressive assimilation or, unexpectedly, as completely voiceless progressive assimilation in higher-frequency words. Frequency did not predict the voice classifications over and above important acoustic cues to voicing, suggesting that the frequency effects on the classifications were carried exclusively by the acoustic signal. The duration of the cluster and the period of glottal vibration during the cluster decreased while the duration of the release noises increased with frequency. This indicates that speakers reduce articulatory effort for higher-frequency words, with some acoustic cues signaling more voicing and others less voicing. A higher frequency leads not only to acoustic reduction but also to more assimilation.
  • Estruch, S. B., Graham, S. A., Quevedo, M., Vino, A., Dekkers, D. H. W., Deriziotis, P., Sollis, E., Demmers, J., Poot, R. A., & Fisher, S. E. (2018). Proteomic analysis of FOXP proteins reveals interactions between cortical transcription factors associated with neurodevelopmental disorders. Human Molecular Genetics, 27(7), 1212-1227. doi:10.1093/hmg/ddy035.

    Abstract

    FOXP transcription factors play important roles in neurodevelopment, but little is known about how their transcriptional activity is regulated. FOXP proteins cooperatively regulate gene expression by forming homo- and hetero-dimers with each other. Physical associations with other transcription factors might also modulate the functions of FOXP proteins. However, few FOXP-interacting transcription factors have been identified so far. Therefore, we sought to discover additional transcription factors that interact with the brain-expressed FOXP proteins, FOXP1, FOXP2 and FOXP4, through affinity-purifications of protein complexes followed by mass spectrometry. We identified seven novel FOXP-interacting transcription factors (NR2F1, NR2F2, SATB1, SATB2, SOX5, YY1 and ZMYM2), five of which have well-established roles in cortical development. Accordingly, we found that these transcription factors are co-expressed with FoxP2 in the deep layers of the cerebral cortex and also in the Purkinje cells of the cerebellum, suggesting that they may cooperate with the FoxPs to regulate neural gene expression in vivo. Moreover, we demonstrated that etiological mutations of FOXP1 and FOXP2, known to cause neurodevelopmental disorders, severely disrupted the interactions with FOXP-interacting transcription factors. Additionally, we pinpointed specific regions within FOXP2 sequence involved in mediating these interactions. Thus, by expanding the FOXP interactome we have uncovered part of a broader neural transcription factor network involved in cortical development, providing novel molecular insights into the transcriptional architecture underlying brain development and neurodevelopmental disorders.
  • Evans, N., Bergqvist, H., & San Roque, L. (2018). The grammar of engagement I: Framework and initial exemplification. Language and Cognition, 10, 110-140. doi:10.1017/langcog.2017.21.

    Abstract

    Human language offers rich ways to track, compare, and engage the attentional and epistemic states of interlocutors. While this task is central to everyday communication, our knowledge of the cross-linguistic grammatical means that target such intersubjective coordination has remained basic. In two serialised papers, we introduce the term ‘engagement’ to refer to grammaticalised means for encoding the relative mental directedness of speaker and addressee towards an entity or state of affairs, and describe examples of engagement systems from around the world. Engagement systems express the speaker’s assumptions about the degree to which their attention or knowledge is shared (or not shared) by the addressee. Engagement categories can operate at the level of entities in the here-and-now (deixis), in the unfolding discourse (definiteness vs indefiniteness), entire event-depicting propositions (through markers with clausal scope), and even metapropositions (potentially scoping over evidential values). In this first paper, we introduce engagement and situate it with respect to existing work on intersubjectivity in language. We then explore the key role of deixis in coordinating attention and expressing engagement, moving through increasingly intercognitive deictic systems from those that focus on the location of the speaker, to those that encode the attentional state of the addressee.
  • Evans, N., Bergqvist, H., & San Roque, L. (2018). The grammar of engagement II: Typology and diachrony. Language and Cognition, 10(1), 141-170. doi:10.1017/langcog.2017.22.

    Abstract

    Engagement systems encode the relative accessibility of an entity or state of affairs to the speaker and addressee, and are thus underpinned by our social cognitive capacities. In our first foray into engagement (Part 1), we focused on specialised semantic contrasts as found in entity-level deictic systems, tailored to the primal scenario for establishing joint attention. This second paper broadens out to an exploration of engagement at the level of events and even metapropositions, and comments on how such systems may evolve. The languages Andoke and Kogi demonstrate what a canonical system of engagement with clausal scope looks like, symmetrically assigning ‘knowing’ and ‘unknowing’ values to speaker and addressee. Engagement is also found cross-cutting other epistemic categories such as evidentiality, for example where a complex assessment of relative speaker and addressee awareness concerns the source of information rather than the proposition itself. Data from the language Abui reveal that one way in which engagement systems can develop is by upscoping demonstratives, which normally denote entities, to apply at the level of events. We conclude by stressing the need for studies that focus on what difference it makes, in terms of communicative behaviour, for intersubjective coordination to be managed by engagement systems as opposed to other, non-grammaticalised means.
  • Fairs, A., Bögels, S., & Meyer, A. S. (2018). Dual-tasking with simple linguistic tasks: Evidence for serial processing. Acta Psychologica, 191, 131-148. doi:10.1016/j.actpsy.2018.09.006.

    Abstract

    In contrast to the large amount of dual-task research investigating the coordination of a linguistic and a nonlinguistic task, little research has investigated how two linguistic tasks are coordinated. However, such research would greatly contribute to our understanding of how interlocutors combine speech planning and listening in conversation. In three dual-task experiments we studied how participants coordinated the processing of an auditory stimulus (S1), which was either a syllable or a tone, with selecting a name for a picture (S2). Two SOAs, of 0 ms and 1000 ms, were used. To vary the time required for lexical selection and to determine when lexical selection took place, the pictures were presented with categorically related or unrelated distractor words. In Experiment 1 participants responded overtly to both stimuli. In Experiments 2 and 3, S1 was not responded to overtly, but determined how to respond to S2, by naming the picture or reading the distractor aloud. Experiment 1 yielded additive effects of SOA and distractor type on the picture naming latencies. The presence of semantic interference at both SOAs indicated that lexical selection occurred after response selection for S1. With respect to the coordination of S1 and S2 processing, Experiments 2 and 3 yielded inconclusive results. In all experiments, syllables interfered more with picture naming than tones. This is likely because the syllables activated phonological representations also implicated in picture naming. The theoretical and methodological implications of the findings are discussed.

    Additional information

    1-s2.0-S0001691817305589-mmc1.pdf
  • Fear, B. D., Cutler, A., & Butterfield, S. (1995). The strong/weak syllable distinction in English. Journal of the Acoustical Society of America, 97, 1893-1904. doi:10.1121/1.412063.

    Abstract

    Strong and weak syllables in English can be distinguished on the basis of vowel quality, of stress, or of both factors. Critical for deciding between these factors are syllables containing unstressed unreduced vowels, such as the first syllable of automata. In this study 12 speakers produced sentences containing matched sets of words with initial vowels ranging from stressed to reduced, at normal and at fast speech rates. Measurements of the duration, intensity, F0, and spectral characteristics of the word-initial vowels showed that unstressed unreduced vowels differed significantly from both stressed and reduced vowels. This result held true across speaker sex and dialect. The vowels produced by one speaker were then cross-spliced across the words within each set, and the resulting words' acceptability was rated by listeners. In general, cross-spliced words were only rated significantly less acceptable than unspliced words when reduced vowels interchanged with any other vowel. Correlations between rated acceptability and acoustic characteristics of the cross-spliced words demonstrated that listeners were attending to duration, intensity, and spectral characteristics. Together these results suggest that unstressed unreduced vowels in English pattern differently from both stressed and reduced vowels, so that no acoustic support for a binary categorical distinction exists; nevertheless, listeners make such a distinction, grouping unstressed unreduced vowels by preference with stressed vowels.
  • Felemban, D., Verdonschot, R. G., Iwamoto, Y., Uchiyama, Y., Kakimoto, N., Kreiborg, S., & Murakami, S. (2018). A quantitative experimental phantom study on MRI image uniformity. Dentomaxillofacial Radiology, 47(6): 20180077. doi:10.1259/dmfr.20180077.

    Abstract

    Objectives: Our goal was to assess MR image uniformity by investigating aspects influencing said uniformity via a method laid out by the National Electrical Manufacturers Association (NEMA).
    Methods: Six metallic materials embedded in a glass phantom were scanned (i.e. Au, Ag, Al, Au-Ag-Pd alloy, Ti and Co-Cr alloy) as well as a reference image. Sequences included spin echo (SE) and gradient echo (GRE) scanned in three planes (i.e. axial, coronal, and sagittal). Moreover, three surface coil types (i.e. head and neck, brain, and temporomandibular joint coils) and two image correction methods (i.e. surface coil intensity correction or SCIC, phased array uniformity enhancement or PURE) were employed to evaluate their effectiveness on image uniformity. Image uniformity was assessed using the NEMA peak-deviation non-uniformity method.
    Results: Results showed that temporomandibular joint coils elicited the least uniform image and brain coils outperformed head and neck coils when metallic materials were present. Additionally, when metallic materials were present, spin echo outperformed gradient echo especially for Co-Cr (particularly in the axial plane). Furthermore, both SCIC and PURE improved image uniformity compared to uncorrected images, and SCIC slightly surpassed PURE when metallic metals were present. Lastly, Co-Cr elicited the least uniform image while other metallic materials generally showed similar patterns (i.e. no significant deviation from images without metallic metals).
    Conclusions: Overall, a quantitative understanding of the factors influencing MR image uniformity (e.g. coil type, imaging method, metal susceptibility, and post-hoc correction method) is advantageous to optimize image quality, assists clinical interpretation, and may result in improved medical and dental care.
  • Felker, E. R., Troncoso Ruiz, A., Ernestus, M., & Broersma, M. (2018). The ventriloquist paradigm: Studying speech processing in conversation with experimental control over phonetic input. The Journal of the Acoustical Society of America, 144(4), EL304-EL309. doi:10.1121/1.5063809.

    Abstract

    This article presents the ventriloquist paradigm, an innovative method for studying speech processing in dialogue whereby participants interact face-to-face with a confederate who, unbeknownst to them, communicates by playing pre-recorded speech. Results show that the paradigm convinces more participants that the speech is live than a setup without the face-to-face element, and it elicits more interactive conversation than a setup in which participants believe their partner is a computer. By reconciling the ecological validity of a conversational context with full experimental control over phonetic exposure, the paradigm offers a wealth of new possibilities for studying speech processing in interaction.
  • Fisher, S. E., Stein, J. F., & Monaco, A. P. (1999). A genome-wide search strategy for identifying quantitative trait loci involved in reading and spelling disability (developmental dyslexia). European Child & Adolescent Psychiatry, 8(suppl. 3), S47-S51. doi:10.1007/PL00010694.

    Abstract

    Family and twin studies of developmental dyslexia have consistently shown that there is a significant heritable component for this disorder. However, any genetic basis for the trait is likely to be complex, involving reduced penetrance, phenocopy, heterogeneity and oligogenic inheritance. This complexity results in reduced power for traditional parametric linkage analysis, where specification of the correct genetic model is important. One strategy is to focus on large multigenerational pedigrees with severe phenotypes and/or apparent simple Mendelian inheritance, as has been successfully demonstrated for speech and language impairment. This approach is limited by the scarcity of such families. An alternative which has recently become feasible due to the development of high-throughput genotyping techniques is the analysis of large numbers of sib-pairs using allele-sharing methodology. This paper outlines our strategy for conducting a systematic genome-wide search for genes involved in dyslexia in a large number of affected sib-pair families from the UK. We use a series of psychometric tests to obtain different quantitative measures of reading deficit, which should correlate with different components of the dyslexia phenotype, such as phonological awareness and orthographic coding ability. This enables us to use QTL (quantitative trait locus) mapping as a powerful tool for localising genes which may contribute to reading and spelling disability.
  • Fisher, S. E., Marlow, A. J., Lamb, J., Maestrini, E., Williams, D. F., Richardson, A. J., Weeks, D. E., Stein, J. F., & Monaco, A. P. (1999). A quantitative-trait locus on chromosome 6p influences different aspects of developmental dyslexia. American Journal of Human Genetics, 64(1), 146-156. doi:10.1086/302190.

    Abstract

    Recent application of nonparametric-linkage analysis to reading disability has implicated a putative quantitative-trait locus (QTL) on the short arm of chromosome 6. In the present study, we use QTL methods to evaluate linkage to the 6p25-21.3 region in a sample of 181 sib pairs from 82 nuclear families that were selected on the basis of a dyslexic proband. We have assessed linkage directly for several quantitative measures that should correlate with different components of the phenotype, rather than using a single composite measure or employing categorical definitions of subtypes. Our measures include the traditional IQ/reading discrepancy score, as well as tests of word recognition, irregular-word reading, and nonword reading. Pointwise analysis by means of sib-pair trait differences suggests the presence, in 6p21.3, of a QTL influencing multiple components of dyslexia, in particular the reading of irregular words (P=.0016) and nonwords (P=.0024). A complementary statistical approach involving estimation of variance components supports these findings (irregular words, P=.007; nonwords, P=.0004). Multipoint analyses place the QTL within the D6S422-D6S291 interval, with a peak around markers D6S276 and D6S105 consistently identified by approaches based on trait differences (irregular words, P=.00035; nonwords, P=.0035) and variance components (irregular words, P=.007; nonwords, P=.0038). Our findings indicate that the QTL affects both phonological and orthographic skills and is not specific to phoneme awareness, as has been previously suggested. Further studies will be necessary to obtain a more precise localization of this QTL, which may lead to the isolation of one of the genes involved in developmental dyslexia.
  • Fisher, S. E., Hatchwell, E., Chand, A., Ockenden, N., Monaco, A. P., & Craig, I. W. (1995). Construction of two YAC contigs in human Xp11.23-p11.22, one encompassing the loci OATL1, GATA, TFE3, and SYP, the other linking DXS255 to DXS146. Genomics, 29(2), 496-502. doi:10.1006/geno.1995.9976.

    Abstract

    We have constructed two YAC contigs in the Xp11.23-p11.22 interval of the human X chromosome, a region that was previously poorly characterized. One contig, of at least 1.4 Mb, links the pseudogene OATL1 to the genes GATA1, TFE3, and SYP and also contains loci implicated in Wiskott-Aldrich syndrome and synovial sarcoma. A second contig, mapping proximal to the first, is estimated to be over 2.1 Mb and links the hypervariable locus DXS255 to DXS146, and also contains a chloride channel gene that is responsible for hereditary nephrolithiasis. We have used plasmid rescue, inverse PCR, and Alu-PCR to generate 20 novel markers from this region, 1 of which is polymorphic, and have positioned these relative to one another on the basis of YAC analysis. The order of previously known markers within our contigs, Xpter-OATL1-GATA-TFE3-SYP-DXS255-DXS146-Xcen, agrees with genomic pulsed-field maps of the region. In addition, we have constructed a rare-cutter restriction map for a 710-kb region of the DXS255-DXS146 contig and have identified three CpG islands. These contigs and new markers will provide a useful resource for more detailed analysis of Xp11.23-p11.22, a region implicated in several genetic diseases.
  • Fisher, S. E., Van Bakel, I., Lloyd, S. E., Pearce, S. H. S., Thakker, R. V., & Craig, I. W. (1995). Cloning and characterization of CLCN5, the human kidney chloride channel gene implicated in Dent disease (an X-linked hereditary nephrolithiasis). Genomics, 29, 598-606. doi:10.1006/geno.1995.9960.

    Abstract

    Dent disease, an X-linked familial renal tubular disorder, is a form of Fanconi syndrome associated with proteinuria, hypercalciuria, nephrocalcinosis, kidney stones, and eventual renal failure. We have previously used positional cloning to identify the 3' part of a novel kidney-specific gene (initially termed hClC-K2, but now referred to as CLCN5), which is deleted in patients from one pedigree segregating Dent disease. Mutations that disrupt this gene have been identified in other patients with this disorder. Here we describe the isolation and characterization of the complete open reading frame of the human CLCN5 gene, which is predicted to encode a protein of 746 amino acids, with significant homology to all known members of the ClC family of voltage-gated chloride channels. CLCN5 belongs to a distinct branch of this family, which also includes the recently identified genes CLCN3 and CLCN4. We have shown that the coding region of CLCN5 is organized into 12 exons, spanning 25-30 kb of genomic DNA, and have determined the sequence of each exon-intron boundary. The elucidation of the coding sequence and exon-intron organization of CLCN5 will both expedite the evaluation of structure/function relationships of these ion channels and facilitate the screening of other patients with renal tubular dysfunction for mutations at this locus.
  • Fisher, S. E., & Francks, C. (2006). Genes, cognition and dyslexia: Learning to read the genome. Trends in Cognitive Sciences, 10, 250-257. doi:10.1016/j.tics.2006.04.003.

    Abstract

    Studies of dyslexia provide vital insights into the cognitive architecture underpinning both disordered and normal reading. It is well established that inherited factors contribute to dyslexia susceptibility, but only very recently has evidence emerged to implicate specific candidate genes. In this article, we provide an accessible overview of four prominent examples--DYX1C1, KIAA0319, DCDC2 and ROBO1--and discuss their relevance for cognition. In each case correlations have been found between genetic variation and reading impairments, but precise risk variants remain elusive. Although none of these genes is specific to reading-related neuronal circuits, or even to the human brain, they have intriguing roles in neuronal migration or connectivity. Dissection of cognitive mechanisms that subserve reading will ultimately depend on an integrated approach, uniting data from genetic investigations, behavioural studies and neuroimaging.
  • Fisher, S. E. (2006). Tangled webs: Tracing the connections between genes and cognition. Cognition, 101, 270-297. doi:10.1016/j.cognition.2006.04.004.

    Abstract

    The rise of molecular genetics is having a pervasive influence in a wide variety of fields, including research into neurodevelopmental disorders like dyslexia, speech and language impairments, and autism. There are many studies underway which are attempting to determine the roles of genetic factors in the aetiology of these disorders. Beyond the obvious implications for diagnosis, treatment and understanding, success in these efforts promises to shed light on the links between genes and aspects of cognition and behaviour. However, the deceptive simplicity of finding correlations between genetic and phenotypic variation has led to a common misconception that there exist straightforward linear relationships between specific genes and particular behavioural and/or cognitive outputs. The problem is exacerbated by the adoption of an abstract view of the nature of the gene, without consideration of molecular, developmental or ontogenetic frameworks. To illustrate the limitations of this perspective, I select two cases from recent research into the genetic underpinnings of neurodevelopmental disorders. First, I discuss the proposal that dyslexia can be dissected into distinct components specified by different genes. Second, I review the story of the FOXP2 gene and its role in human speech and language. In both cases, adoption of an abstract concept of the gene can lead to erroneous conclusions, which are incompatible with current knowledge of molecular and developmental systems. Genes do not specify behaviours or cognitive processes; they make regulatory factors, signalling molecules, receptors, enzymes, and so on, that interact in highly complex networks, modulated by environmental influences, in order to build and maintain the brain. I propose that it is necessary for us to fully embrace the complexity of biological systems, if we are ever to untangle the webs that link genes to cognition.
  • Fisher, S. E., & Marcus, G. (2006). The eloquent ape: Genes, brains and the evolution of language. Nature Reviews Genetics, 7, 9-20. doi:10.1038/nrg1747.

    Abstract

    The human capacity to acquire complex language seems to be without parallel in the natural world. The origins of this remarkable trait have long resisted adequate explanation, but advances in fields that range from molecular genetics to cognitive neuroscience offer new promise. Here we synthesize recent developments in linguistics, psychology and neuroimaging with progress in comparative genomics, gene-expression profiling and studies of developmental disorders. We argue that language should be viewed not as a wholesale innovation, but as a complex reconfiguration of ancestral systems that have been adapted in evolutionarily novel ways.
  • Floyd, S., San Roque, L., & Majid, A. (2018). Smell is coded in grammar and frequent in discourse: Cha'palaa olfactory language in cross-linguistic perspective. Journal of Linguistic Anthropology, 28(2), 175-196. doi:10.1111/jola.12190.

    Abstract

    It has long been claimed that there is no lexical field of smell, and that smell is of too little validity to be expressed in grammar. We demonstrate both claims are false. The Cha'palaa language (Ecuador) has at least 15 abstract smell terms, each of which is formed using a type of classifier previously thought not to exist. Moreover, using conversational corpora we show that Cha'palaa speakers also talk about smell more than Imbabura Quechua and English speakers. Together, this shows how language and social interaction may jointly reflect distinct cultural orientations towards sensory experience in general and olfaction in particular.
  • Floyd, S., Rossi, G., Baranova, J., Blythe, J., Dingemanse, M., Kendrick, K. H., Zinken, J., & Enfield, N. J. (2018). Universals and cultural diversity in the expression of gratitude. Royal Society Open Science, 5: 180391. doi:10.1098/rsos.180391.

    Abstract

    Gratitude is argued to have evolved to motivate and maintain social reciprocity among people, and to be linked to a wide range of positive effects — social, psychological, and even physical. But is socially reciprocal behaviour dependent on the expression of gratitude, for example by saying "thank you" as in English? Current research has not included cross-cultural elements, and has tended to conflate gratitude as an emotion with gratitude as a linguistic practice, as might appear to be the case in English. Here we ask to what extent people actually express gratitude in different societies by focussing on episodes of everyday life where someone obtains a good, service, or support from another, and comparing these episodes across eight languages from five continents. What we find is that expressions of gratitude in these episodes are remarkably rare, suggesting that social reciprocity in everyday life relies on tacit understandings of people’s rights and duties surrounding mutual assistance and collaboration. At the same time, we also find minor cross-cultural variation, with slightly higher rates in the Western European languages English and Italian, showing that universal tendencies of social reciprocity should not be conflated with more culturally variable practices of expressing gratitude. Our study complements previous experimental and culture-specific research on social reciprocity with a systematic comparison of audiovisual corpora of naturally occurring social interaction from different cultures from around the world.
  • Forkel, S. J., & Catani, M. (2018). Lesion mapping in acute stroke aphasia and its implications for recovery. Neuropsychologia, 115, 88-100. doi:10.1016/j.neuropsychologia.2018.03.036.

    Abstract

    Patients with stroke offer a unique window into understanding human brain function. Mapping stroke lesions poses several challenges due to the complexity of the lesion anatomy and the mechanisms causing local and remote disruption on brain networks. In this prospective longitudinal study, we compare standard and advanced approaches to white matter lesion mapping applied to acute stroke patients with aphasia. Eighteen patients with acute left hemisphere stroke were recruited and scanned within two weeks from symptom onset. Aphasia assessment was performed at baseline and six-month follow-up. Structural and diffusion MRI contrasts indicated an area of maximum overlap in the anterior external/extreme capsule with diffusion images showing a larger overlap extending into posterior perisylvian regions. Anatomical predictors of recovery included damage to ipsilesional tracts (as shown by both structural and diffusion images) and contralesional tracts (as shown by diffusion images only). These findings indicate converging results from structural and diffusion lesion mapping methods but also clear differences between the two approaches in their ability to identify predictors of recovery outside the lesioned regions.
  • Forkstam, C., Hagoort, P., Fernandez, G., Ingvar, M., & Petersson, K. M. (2006). Neural correlates of artificial syntactic structure classification. NeuroImage, 32(2), 956-967. doi:10.1016/j.neuroimage.2006.03.057.

    Abstract

    The human brain supports acquisition mechanisms that extract structural regularities implicitly from experience without the induction of an explicit model. It has been argued that the capacity to generalize to new input is based on the acquisition of abstract representations, which reflect underlying structural regularities in the input ensemble. In this study, we explored the outcome of this acquisition mechanism, and to this end, we investigated the neural correlates of artificial syntactic classification using event-related functional magnetic resonance imaging. The participants engaged once a day during an 8-day period in a short-term memory acquisition task in which consonant-strings generated from an artificial grammar were presented in a sequential fashion without performance feedback. They performed reliably above chance on the grammaticality classification tasks on days 1 and 8 which correlated with a corticostriatal processing network, including frontal, cingulate, inferior parietal, and middle occipital/occipitotemporal regions as well as the caudate nucleus. Part of the left inferior frontal region (BA 45) was specifically related to syntactic violations and showed no sensitivity to local substring familiarity. In addition, the head of the caudate nucleus correlated positively with syntactic correctness on day 8 but not day 1, suggesting that this region contributes to an increase in cognitive processing fluency.
  • Frances, C., Costa, A., & Baus, C. (2018). On the effects of regional accents on memory and credibility. Acta Psychologica, 186, 63-70. doi:10.1016/j.actpsy.2018.04.003.

    Abstract

    The information we obtain from how speakers sound—for example their accent—affects how we interpret the messages they convey. A clear example is foreign accented speech, where reduced intelligibility and speaker's social categorization (out-group member) affect memory and the credibility of the message (e.g., less trustworthiness). In the present study, we go one step further and ask whether evaluations of messages are also affected by regional accents—accents from a different region than the listener. In the current study, we report results from three experiments on immediate memory recognition and immediate credibility assessments as well as the illusory truth effect. These revealed no differences between messages conveyed in local—from the same region as the participant—and regional accents—from native speakers of a different country than the participants. Our results suggest that when the accent of a speaker has high intelligibility, social categorization by accent does not seem to negatively affect how we treat the speakers' messages.
  • Francisco, A. A., Takashima, A., McQueen, J. M., Van den Bunt, M., Jesse, A., & Groen, M. A. (2018). Adult dyslexic readers benefit less from visual input during audiovisual speech processing: fMRI evidence. Neuropsychologia, 117, 454-471. doi:10.1016/j.neuropsychologia.2018.07.009.

    Abstract

    The aim of the present fMRI study was to investigate whether typical and dyslexic adult readers differed in the neural correlates of audiovisual speech processing. We tested for Blood Oxygen-Level Dependent (BOLD) activity differences between these two groups in a 1-back task, as they processed written (word, illegal consonant strings) and spoken (auditory, visual and audiovisual) stimuli. When processing written stimuli, dyslexic readers showed reduced activity in the supramarginal gyrus, a region suggested to play an important role in phonological processing, but only when they processed strings of consonants, not when they read words. During the speech perception tasks, dyslexic readers were only slower than typical readers in their behavioral responses in the visual speech condition. Additionally, dyslexic readers presented reduced neural activation in the auditory, the visual, and the audiovisual speech conditions. The groups also differed in terms of superadditivity, with dyslexic readers showing decreased neural activation in the regions of interest. An additional analysis focusing on vision-related processing during the audiovisual condition showed diminished activation for the dyslexic readers in a fusiform gyrus cluster. Our results thus suggest that there are differences in audiovisual speech processing between dyslexic and normal readers. These differences might be explained by difficulties in processing the unisensory components of audiovisual speech, more specifically, dyslexic readers may benefit less from visual information during audiovisual speech processing than typical readers. Given that visual speech processing supports the development of phonological skills fundamental in reading, differences in processing of visual speech could contribute to differences in reading ability between typical and dyslexic readers.
  • Frank, S. L., & Yang, J. (2018). Lexical representation explains cortical entrainment during speech comprehension. PLoS One, 13(5): e0197304. doi:10.1371/journal.pone.0197304.

    Abstract

    Results from a recent neuroimaging study on spoken sentence comprehension have been interpreted as evidence for cortical entrainment to hierarchical syntactic structure. We present a simple computational model that predicts the power spectra from this study, even though the model's linguistic knowledge is restricted to the lexical level, and word-level representations are not combined into higher-level units (phrases or sentences). Hence, the cortical entrainment results can also be explained from the lexical properties of the stimuli, without recourse to hierarchical syntax.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Psychonomic Bulletin & Review, 25(4), 1458-1467. doi:10.3758/s13423-018-1494-x.

    Abstract

    When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. But some speakers follow the perturbation. In the current study, we investigated whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. The results suggest that whether a perturbation-related response is opposing or following depends on ongoing fluctuations of the production system: It initially responds by doing the opposite of what it was doing. This effect and the non-trivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production system's state at the time of perturbation.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2018). Self-monitoring in the cerebral cortex: Neural responses to pitch-perturbed auditory feedback during speech production. NeuroImage, 179, 326-336. doi:10.1016/j.neuroimage.2018.06.061.

    Abstract

    Speaking is a complex motor skill which requires near instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing.
  • Frauenfelder, U. H., Baayen, R. H., Hellwig, F. M., & Schreuder, R. (1993). Neighborhood density and frequency across languages and modalities. Journal of Memory and Language, 32(6), 781-804. doi:10.1006/jmla.1993.1039.

    Abstract

    This research exploits the English and Dutch CELEX lexical database to investigate the form similarity relations between words. Lexical statistics analyses replicate and extend the findings of Landauer and Streeter (1973) concerning the relation between a word's frequency and the density and frequency of its similarity neighborhood. The results for both Dutch and English reveal only a weak tendency for high-frequency written and spoken words to have more neighbors than rare words and for these neighbors to be more frequent than those of rare words. However, the number of neighbors was found to correlate more highly with bigram frequency than with word frequency. To clarify the relations between these properties, a stochastic model is presented which captures the relevant effects of phonotactic structure on neighborhood similarities. The implications of these findings for models of language production and comprehension are considered.
  • Frauenfelder, U. H., & Cutler, A. (1985). Preface. Linguistics, 23(5). doi:10.1515/ling.1985.23.5.657.
  • Gaby, A. R. (2006). The Thaayorre 'true man': Lexicon of the human body in an Australian language. Language Sciences, 28(2-3), 201-220. doi:10.1016/j.langsci.2005.11.006.

    Abstract

    Segmentation (and, indeed, definition) of the human body in Kuuk Thaayorre (a Paman language of Cape York Peninsula, Australia) is in some respects typologically unusual, while at other times it conforms to cross-linguistic patterns. The process of deriving complex body part terms from monolexemic items is revealing of metaphorical associations between parts of the body. Associations between parts of the body and entities and phenomena in the broader environment are evidenced by the ubiquity of body part terms (in their extended uses) throughout Thaayorre speech. Understanding the categorisation of the body is therefore prerequisite to understanding the Thaayorre language and worldview.
  • Ganushchak, L. Y., & Schiller, N. (2006). Effects of time pressure on verbal self-monitoring: An ERP study. Brain Research, 1125, 104-115. doi:10.1016/j.brainres.2006.09.096.

    Abstract

    The Error-Related Negativity (ERN) is a component of the event-related brain potential (ERP) that is associated with action monitoring and error detection. The present study addressed the question whether or not an ERN occurs after verbal error detection, e.g., during phoneme monitoring. We obtained an ERN following verbal errors which showed a typical decrease in amplitude under severe time pressure. This result demonstrates that the functioning of the verbal self-monitoring system is comparable to other performance monitoring, such as action monitoring. Furthermore, we found that participants made more errors in phoneme monitoring under time pressure than in a control condition. This may suggest that time pressure decreases the amount of resources available to a capacity-limited self-monitor thereby leading to more errors.
  • Gao, X., & Jiang, T. (2018). Sensory constraints on perceptual simulation during sentence reading. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 848-855. doi:10.1037/xhp0000475.

    Abstract

    Resource-constrained models of language processing predict that perceptual simulation during language understanding would be compromised by sensory limitations (such as reading text in unfamiliar/difficult font), whereas strong versions of embodied theories of language would predict that simulating perceptual symbols in language would not be impaired even under sensory-constrained situations. In 2 experiments, sensory decoding difficulty was manipulated by using easy and hard fonts to study perceptual simulation during sentence reading (Zwaan, Stanfield, & Yaxley, 2002). Results indicated that simulating perceptual symbols in language was not compromised by surface-form decoding challenges such as difficult font, suggesting relative resilience of embodied language processing in the face of certain sensory constraints. Further implications for learning from text and individual differences in language processing will be discussed.
  • Garcia, R., Dery, J. E., Roeser, J., & Höhle, B. (2018). Word order preferences of Tagalog-speaking adults and children. First Language, 38(6), 617-640. doi:10.1177/0142723718790317.

    Abstract

    This article investigates the word order preferences of Tagalog-speaking adults and five- and seven-year-old children. The participants were asked to complete sentences to describe pictures depicting actions between two animate entities. Adults preferred agent-initial constructions in the patient voice but not in the agent voice, while the children produced mainly agent-initial constructions regardless of voice. This agent-initial preference, despite the lack of a close link between the agent and the subject in Tagalog, shows that this word order preference is not merely syntactically-driven (subject-initial preference). Additionally, the children’s agent-initial preference in the agent voice, contrary to the adults’ lack of preference, shows that children do not respect the subject-last principle of ordering Tagalog full noun phrases. These results suggest that language-specific optional features like a subject-last principle take longer to be acquired.
  • Gerrits, F., Senft, G., & Wisse, D. (2018). Bomiyoyeva and bomduvadova: Two rare structures on the Trobriand Islands exclusively reserved for Tabalu chiefs. Anthropos, 113, 93-113. doi:10.5771/0257-9774-2018-1-93.

    Abstract

    This article presents information about two so far undescribed buildings made by the Trobriand Islanders, the bomiyoyeva and the bomduvadova. These structures are connected to the highest-ranking chiefs living in Labai and Omarakana on Kiriwina Island. They highlight the power and eminence of these chiefs. After a brief report on the history of this project, the structure of the two houses, their function, and their use are described, and information on their construction and their mythical background is provided. Finally, everyday as well as ritual, social, and political functions of both buildings are discussed. [Melanesia, Trobriand Islands, Tabalu chiefs, yam houses, bomiyoyeva, bomduvadova, authoritative capacities]

  • Gisladottir, R. S., Bögels, S., & Levinson, S. C. (2018). Oscillatory brain responses reflect anticipation during comprehension of speech acts in spoken dialogue. Frontiers in Human Neuroscience, 12: 34. doi:10.3389/fnhum.2018.00034.

    Abstract

    Everyday conversation requires listeners to quickly recognize verbal actions, so-called speech acts, from the underspecified linguistic code and prepare a relevant response within the tight time constraints of turn-taking. The goal of this study was to determine the time-course of speech act recognition by investigating oscillatory EEG activity during comprehension of spoken dialogue. Participants listened to short, spoken dialogues with target utterances that delivered three distinct speech acts (Answers, Declinations, Pre-offers). The targets were identical across conditions at lexico-syntactic and phonetic/prosodic levels but differed in the pragmatic interpretation of the speech act performed. Speech act comprehension was associated with reduced power in the alpha/beta bands just prior to Declination speech acts, relative to Answers and Pre-offers. In addition, we observed reduced power in the theta band during the beginning of Declinations, relative to Answers. Based on the role of alpha and beta desynchronization in anticipatory processes, the results are taken to indicate that anticipation plays a role in speech act recognition. Anticipation of speech acts could be critical for efficient turn-taking, allowing interactants to quickly recognize speech acts and respond within the tight time frame characteristic of conversation. The results show that anticipatory processes can be triggered by the characteristics of the interaction, including the speech act type.

    Additional information

    data sheet 1.pdf
  • Goriot, C., Broersma, M., McQueen, J. M., Unsworth, S., & Van Hout, R. (2018). Language balance and switching ability in children acquiring English as a second language. Journal of Experimental Child Psychology, 173, 168-186. doi:10.1016/j.jecp.2018.03.019.

    Abstract

    This study investigated whether relative lexical proficiency in Dutch and English in child second language (L2) learners is related to executive functioning. Participants were Dutch primary school pupils of three different age groups (4–5, 8–9, and 11–12 years) who either were enrolled in an early-English schooling program or were age-matched controls not on that early-English program. Participants performed tasks that measured switching, inhibition, and working memory. Early-English program pupils had greater knowledge of English vocabulary and more balanced Dutch–English lexicons. In both groups, lexical balance, a ratio measure obtained by dividing vocabulary scores in English by those in Dutch, was related to switching but not to inhibition or working memory performance. These results show that for children who are learning an L2 in an instructional setting, and for whom managing two languages is not yet an automatized process, language balance may be more important than L2 proficiency in influencing the relation between childhood bilingualism and switching abilities.
  • Groen, I. I. A., Jahfari, S., Seijdel, N., Ghebreab, S., Lamme, V. A. F., & Scholte, H. S. (2018). Scene complexity modulates degree of feedback activity during object detection in natural scenes. PLoS Computational Biology, 14: e1006690. doi:10.1371/journal.pcbi.1006690.

    Abstract

    Selective brain responses to objects arise within a few hundred milliseconds of neural processing, suggesting that visual object recognition is mediated by rapid feed-forward activations. Yet disruption of neural responses in early visual cortex beyond feed-forward processing stages affects object recognition performance. Here, we unite these discrepant findings by reporting that object recognition involves enhanced feedback activity (recurrent processing within early visual cortex) when target objects are embedded in natural scenes that are characterized by high complexity. Human participants performed an animal target detection task on natural scenes with low, medium or high complexity as determined by a computational model of low-level contrast statistics. Three converging lines of evidence indicate that feedback was selectively enhanced for high complexity scenes. First, functional magnetic resonance imaging (fMRI) activity in early visual cortex (V1) was enhanced for target objects in scenes with high, but not low or medium complexity. Second, event-related potentials (ERPs) evoked by target objects were selectively enhanced at feedback stages of visual processing (from ~220 ms onwards) for high complexity scenes only. Third, behavioral performance for high complexity scenes deteriorated when participants were pressed for time and thus less able to incorporate the feedback activity. Modeling of the reaction time distributions using drift diffusion revealed that object information accumulated more slowly for high complexity scenes, with evidence accumulation being coupled to trial-to-trial variation in the EEG feedback response. Together, these results suggest that while feed-forward activity may suffice to recognize isolated objects, the brain employs recurrent processing more adaptively in naturalistic settings, using minimal feedback for simple scenes and increasing feedback for complex scenes.

    Additional information

    data via OSF
  • Gullberg, M. (2006). Some reasons for studying gesture and second language acquisition (Hommage à Adam Kendon). International Review of Applied Linguistics, 44(2), 103-124. doi:10.1515/IRAL.2006.004.

    Abstract

    This paper outlines some reasons why gestures are relevant to the study of SLA. First, given cross-cultural and cross-linguistic gestural repertoires, gestures can be treated as part of what learners can acquire in a target language. Gestures can therefore be studied as a developing system in their own right both in L2 production and comprehension. Second, because of the close link between gestures, language, and speech, learners' gestures as deployed in L2 usage and interaction can offer valuable insights into the processes of acquisition, such as the handling of expressive difficulties, the influence of the first language, interlanguage phenomena, and possibly even into planning and processing difficulties. As a form of input to learners and to their interlocutors alike, finally, gestures also play a potential role for comprehension and learning.
  • Gullberg, M., & Ozyurek, A. (2006). Report on the Nijmegen Lectures 2004: Susan Goldin-Meadow 'The Many Faces of Gesture'. Gesture, 6(1), 151-164.
  • Gullberg, M., & Indefrey, P. (Eds.). (2006). The cognitive neuroscience of second language acquisition [Special Issue]. Language Learning, 56(suppl. 1).
  • Gullberg, M., & Holmqvist, K. (2006). What speakers do and what addressees look at: Visual attention to gestures in human interaction live and on video. Pragmatics & Cognition, 14(1), 53-82.

    Abstract

    This study investigates whether addressees visually attend to speakers’ gestures in interaction and whether attention is modulated by changes in social setting and display size. We compare a live face-to-face setting to two video conditions. In all conditions, the face dominates as a fixation target and only a minority of gestures draw fixations. The social and size parameters affect gaze mainly when combined, and in the opposite direction from that predicted, with fewer gestures fixated on video than live. Gestural holds and speakers’ gaze at their own gestures reliably attract addressees’ fixations in all conditions. The attraction force of holds is unaffected by changes in social and size parameters, suggesting a bottom-up response, whereas speaker-fixated gestures draw significantly less attention in both video conditions, suggesting a social effect for overt gaze-following and visual joint attention. The study provides and validates a video-based paradigm enabling further experimental but ecologically valid explorations of cross-modal information processing.
  • Gullberg, M. (Ed.). (2006). Gestures and second language acquisition [Special Issue]. International Review of Applied Linguistics, 44(2).
  • Gullberg, M. (2006). Handling discourse: Gestures, reference tracking, and communication strategies in early L2. Language Learning, 56(1), 155-196. doi:10.1111/j.0023-8333.2006.00344.x.

    Abstract

    The production of cohesive discourse, especially maintained reference, poses problems for early second language (L2) speakers. This paper considers a communicative account of overexplicit L2 discourse by focusing on the interdependence between spoken and gestural cohesion, the latter being expressed by anchoring of referents in gesture space. Specifically, this study investigates whether overexplicit maintained reference in speech (lexical noun phrases [NPs]) and gesture (anaphoric gestures) constitutes an interactional communication strategy. We examine L2 speech and gestures of 16 Dutch learners of French retelling stories to addressees under two visibility conditions. The results indicate that the overexplicit properties of L2 speech are not motivated by interactional strategic concerns. The results for anaphoric gestures are more complex. Although their presence is not interactionally
  • Gullberg, M. (1995). Giving language a hand: gesture as a cue based communicative strategy. Working Papers, Lund University, Dept. of Linguistics, 44, 41-60.

    Abstract

    All accounts of communicative behaviour in general, and communicative strategies in particular, mention gesture in relation to language acquisition (cf. Faerch & Kasper 1983 for an overview). However, few attempts have been made to investigate how spoken language and spontaneous gesture combine to determine discourse referents. Referential gesture and referential discourse will be of particular interest, since communicative strategies in second language discourse often involve labelling problems.

    This paper will focus on two issues:

    1) Within a cognitive account of communicative strategies, gesture will be seen to be part of conceptual or analysis-based strategies, in that relational features in the referents are exploited;

    2) It will be argued that communication strategies can be seen in terms of cue manipulation in the same sense as sentence processing has been analysed in terms of competing cues. Strategic behaviour, and indeed the process of referring in general, are seen in terms of cues, combining or competing to determine discourse referents. Gesture can then be regarded as being such a cue at the discourse level, and as a cue-based communicative strategy, in that gesture functions by exploiting physically based cues which can be recognised as being part of the referent. The question of iconicity and motivation vs. the arbitrary qualities of gesture as a strategic cue will be addressed in connection with this.
  • Gullberg, M., & Holmqvist, K. (1999). Keeping an eye on gestures: Visual perception of gestures in face-to-face communication. Pragmatics & Cognition, 7(1), 35-63. doi:10.1075/pc.7.1.04gul.

    Abstract

    Since listeners usually look at the speaker's face, gestural information has to be absorbed through peripheral visual perception. In the literature, it has been suggested that listeners look at gestures under certain circumstances: 1) when the articulation of the gesture is peripheral; 2) when the speech channel is insufficient for comprehension; and 3) when the speaker him- or herself indicates that the gesture is worthy of attention. The research here reported employs eye tracking techniques to study the perception of gestures in face-to-face interaction. The improved control over the listener's visual channel allows us to test the validity of the above claims. We present preliminary findings substantiating claims 1 and 3, and relate them to theoretical proposals in the literature and to the issue of how visual and cognitive attention are related.
  • Gumperz, J. J., & Levinson, S. C. (1991). Rethinking linguistic relativity. Current Anthropology, 32(5), 613-623. Retrieved from http://www.jstor.org/stable/2743696.
  • Hagoort, P. (2006). What we cannot learn from neuroanatomy about language learning and language processing [Commentary on Uylings]. Language Learning, 56(suppl. 1), 91-97. doi:10.1111/j.1467-9922.2006.00356.x.
  • Hagoort, P. (1999). De toekomstige eeuw zonder psychologie. Psychologie Magazine, 18, 35-36.
  • Hagoort, P. (1993). [Review of the book Language: Structure, processing and disorders, by David Caplan]. Trends in Neurosciences, 16, 124. doi:10.1016/0166-2236(93)90138-C.
  • Hagoort, P. (2006). Event-related potentials from the user's perspective [Review of the book An introduction to the event-related potential technique by Steven J. Luck]. Nature Neuroscience, 9(4), 463-463. doi:10.1038/nn0406-463.
  • Hagoort, P., & Brown, C. M. (1999). Gender electrified: ERP evidence on the syntactic nature of gender processing. Journal of Psycholinguistic Research, 28(6), 715-728. doi:10.1023/A:1023277213129.

    Abstract

    The central issue of this study concerns the claim that the processing of gender agreement in online sentence comprehension is a syntactic rather than a conceptual/semantic process. This claim was tested for the grammatical gender agreement in Dutch between the definite article and the noun. Subjects read sentences in which the definite article and the noun had the same gender and sentences in which the gender agreement was violated. While subjects read these sentences, their electrophysiological activity was recorded via electrodes placed on the scalp. Earlier research has shown that semantic and syntactic processing events manifest themselves in different event-related brain potential (ERP) effects. Semantic integration modulates the amplitude of the so-called N400. The P600/SPS is an ERP effect that is more sensitive to syntactic processes. The violation of grammatical gender agreement was found to result in a P600/SPS. For violations in sentence-final position, an additional increase of the N400 amplitude was observed. This N400 effect is interpreted as resulting from the consequence of a syntactic violation for the sentence-final wrap-up. The overall pattern of results supports the claim that the on-line processing of gender agreement information is not a content-driven but a syntactic-form-driven process.
  • Hagoort, P. (1993). Impairments of lexical-semantic processing in aphasia: evidence from the processing of lexical ambiguities. Brain and Language, 45, 189-232. doi:10.1006/brln.1993.1043.

    Abstract

    Broca's and Wernicke's aphasics performed speeded lexical decisions on the third member of auditorily presented triplets consisting of two word primes followed by either a word or a nonword. In three of the four priming conditions, the second prime was a homonym with two unrelated meanings. The relation of the first prime and the target with the two meanings of the homonym was manipulated in the different priming conditions. The two readings of the ambiguous words either shared their grammatical form class (noun-noun ambiguities) or not (noun-verb ambiguities). The silent intervals between the members of the triplets were varied between 100, 500, and 1250 msec. Priming at the shortest interval is mainly attributed to automatic lexical processing, and priming at the longest interval is mainly due to forms of controlled lexical processing. For both Broca's and Wernicke's aphasics overall priming effects were obtained at ISIs of 100 and 500 msec, but not at an ISI of 1250 msec. This pattern of results is consistent with the view that both types of aphasics can automatically access the semantic lexicon, but might be impaired in integrating lexical-semantic information into the context. Broca's aphasics showed a specific impairment in selecting the contextually appropriate reading of noun-verb ambiguities, which is suggested to result from a failure either in the on-line morphological parsing of complex word forms into a stem and an inflection or in the on-line exploitation of the syntactic implications of the inflectional suffix. In a final experiment patients were asked to explicitly judge the semantic relations between a subset of the primes that were used in the lexical decision study. Wernicke's aphasics performed worse than both Broca's aphasics and normal controls, indicating a specific impairment for these patients in consciously operating on automatically accessed lexical-semantic information.
  • Hagoort, P., & Brown, C. M. (1993). Hersenpotentialen als maat voor het menselijk taalvermogen. Stem, Spraak- en Taalpathologie, 2, 213-235.
  • Hagoort, P. (2018). Prerequisites for an evolutionary stance on the neurobiology of language. Current Opinion in Behavioral Sciences, 21, 191-194. doi:10.1016/j.cobeha.2018.05.012.
  • Hagoort, P. (1989). Processing of lexical ambiguities: a comment on Milberg, Blumstein, and Dworetzky (1987). Brain and Language, 36, 335-348. doi:10.1016/0093-934X(89)90070-9.

    Abstract

    In a study by Milberg, Blumstein, and Dworetzky (1987), normal control subjects and Wernicke's and Broca's aphasics performed a lexical decision task on the third element of auditorily presented triplets of words with either a word or a nonword as target. In three of the four types of word triplets, the first and the third words were related to one or both meanings of the second word, which was semantically ambiguous. The fourth type of word triplet consisted of three unrelated, unambiguous words, functioning as baseline. Milberg et al. (1987) claim that the results for their control subjects are similar to those reported by Schvaneveldt, Meyer, and Becker's original study (1976) with the same prime types, and so interpret these as evidence for a selective lexical access of the different meanings of ambiguous words. It is argued here that Milberg et al. only partially replicate the Schvaneveldt et al. results. Moreover, the results of Milberg et al. are not fully in line with the selective access hypothesis adopted. Replication of the Milberg et al. (1987) study with Dutch materials, using both a design without and a design with repetition of the same target words for the same subjects led to the original pattern as reported by Schvaneveldt et al. (1976). In the design with four separate presentations of the same target word, a strong repetition effect was found. It is therefore argued that the discrepancy between the Milberg et al. results on the one hand, and the Schvaneveldt et al. results on the other, might be due to the absence of a control for repetition effects in the within-subject design used by Milberg et al. It is concluded that this makes the results for both normal and aphasic subjects in the latter study difficult to interpret in terms of a selective access model for normal processing.
  • Hagoort, P., & Brown, C. M. (1999). The consequences of the temporal interaction between syntactic and semantic processes for haemodynamic studies of language. NeuroImage, 9, S1024-S1024.
  • Hagoort, P., Brown, C. M., & Swaab, T. Y. (1995). Semantic deficits in right hemisphere patients. Brain and Language, 51, 161-163. doi:10.1006/brln.1995.1058.
  • Hagoort, P., Ramsey, N., Rutten, G.-J., & Van Rijen, P. (1999). The role of the left anterior temporal cortex in language processing. Brain and Language, 69, 322-325. doi:10.1006/brln.1999.2169.
  • Hagoort, P., Brown, C. M., & Groothusen, J. (1993). The syntactic positive shift (SPS) as an ERP measure of syntactic processing. Language and Cognitive Processes, 8, 439-483. doi:10.1080/01690969308407585.

    Abstract

    This paper presents event-related brain potential (ERP) data from an experiment on syntactic processing. Subjects read individual sentences containing one of three different kinds of violations of the syntactic constraints of Dutch. The ERP results provide evidence for an electrophysiological response to syntactic processing that is qualitatively different from established ERP responses to semantic processing. We refer to this electrophysiological manifestation of parsing as the Syntactic Positive Shift (SPS). The SPS was observed in an experiment in which no task demands, other than to read the input, were imposed on the subjects. The pattern of responses to the different kinds of syntactic violations suggests that the SPS indicates the impossibility for the parser to assign the preferred structure to an incoming string of words, irrespective of the specific syntactic nature of this preferred structure. The implications of these findings for further research on parsing are discussed.
  • Hagoort, P., Indefrey, P., Brown, C. M., Herzog, H., Steinmetz, H., & Seitz, R. J. (1999). The neural circuitry involved in the reading of german words and pseudowords: A PET study. Journal of Cognitive Neuroscience, 11(4), 383-398. doi:10.1162/089892999563490.

    Abstract

    Silent reading and reading aloud of German words and pseudowords were used in a PET study using (15O)butanol to examine the neural correlates of reading and of the phonological conversion of legal letter strings, with or without meaning.
    The results of 11 healthy, right-handed volunteers in the age range of 25 to 30 years showed activation of the lingual gyri during silent reading in comparison with viewing a fixation cross. Comparisons between the reading of words and pseudowords suggest the involvement of the middle temporal gyri in retrieving both the phonological and semantic code for words. The reading of pseudowords activates the left inferior frontal gyrus, including the ventral part of Broca’s area, to a larger extent than the reading of words. This suggests that this area might be involved in the sublexical conversion of orthographic input strings into phonological output codes. (Pre)motor areas were found to be activated during both silent reading and reading aloud. On the basis of the obtained activation patterns, it is hypothesized that the articulation of high-frequency syllables requires the retrieval of their concomitant articulatory gestures from the SMA and that the articulation of low-frequency syllables recruits the left medial premotor cortex.
  • Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2018). Infants' sensitivity to rhyme in songs. Infant Behavior and Development, 52, 130-139. doi:10.1016/j.infbeh.2018.07.002.

    Abstract

    Children’s songs often contain rhyming words at phrase endings. In this study, we investigated whether infants can already recognize this phonological pattern in songs. Earlier studies using lists of spoken words were equivocal on infants’ spontaneous processing of rhymes (Hayes, Slater, & Brown, 2000; Jusczyk, Goodman, & Baumann, 1999). Songs, however, constitute an ecologically valid rhyming stimulus, which could allow for spontaneous processing of this phonological pattern in infants. Novel children’s songs with rhyming and non-rhyming lyrics using pseudo-words were presented to 35 9-month-old Dutch infants using the Headturn Preference Procedure. Infants on average listened longer to the non-rhyming songs, although around half of the infants exhibited a preference for the rhyming songs. These results highlight that infants have the processing abilities to benefit from their natural rhyming input for the development of their phonological abilities.
  • Hald, L. A., Bastiaansen, M. C. M., & Hagoort, P. (2006). EEG theta and gamma responses to semantic violations in online sentence processing. Brain and Language, 96(1), 90-105. doi:10.1016/j.bandl.2005.06.007.

    Abstract

    We explore the nature of the oscillatory dynamics in the EEG of subjects reading sentences that contain a semantic violation. More specifically, we examine whether increases in theta (≈3–7 Hz) and gamma (around 40 Hz) band power occur in response to sentences that were either semantically correct or contained a semantically incongruent word (semantic violation). ERP results indicated a classical N400 effect. A wavelet-based time-frequency analysis revealed a theta band power increase during an interval of 300–800 ms after critical word onset, at temporal electrodes bilaterally for both sentence conditions, and over midfrontal areas for the semantic violations only. In the gamma frequency band, a predominantly frontal power increase was observed during the processing of correct sentences. This effect was absent following semantic violations. These results provide a characterization of the oscillatory brain dynamics, and notably of both theta and gamma oscillations, that occur during language comprehension.
  • Hasson, U., Egidi, G., Marelli, M., & Willems, R. M. (2018). Grounding the neurobiology of language in first principles: The necessity of non-language-centric explanations for language comprehension. Cognition, 180(1), 135-157. doi:10.1016/j.cognition.2018.06.018.

    Abstract

    Recent decades have ushered in tremendous progress in understanding the neural basis of language. Most of our current knowledge on language and the brain, however, is derived from lab-based experiments that are far removed from everyday language use, and that are inspired by questions originating in linguistic and psycholinguistic contexts. In this paper we argue that in order to make progress, the field needs to shift its focus to understanding the neurobiology of naturalistic language comprehension. We present here a new conceptual framework for understanding the neurobiological organization of language comprehension. This framework is non-language-centered in the computational/neurobiological constructs it identifies, and focuses strongly on context. Our core arguments address three general issues: (i) the difficulty in extending language-centric explanations to discourse; (ii) the necessity of taking context as a serious topic of study, modeling it formally and acknowledging the limitations on external validity when studying language comprehension outside context; and (iii) the tenuous status of the language network as an explanatory construct. We argue that adopting this framework means that neurobiological studies of language will be less focused on identifying correlations between brain activity patterns and mechanisms postulated by psycholinguistic theories. Instead, they will be less self-referential and increasingly more inclined towards integration of language with other cognitive systems, ultimately doing more justice to the neurobiological organization of language and how it supports language as it is used in everyday life.
  • Haun, D. B. M., Call, J., Janzen, G., & Levinson, S. C. (2006). Evolutionary psychology of spatial representations in the hominidae. Current Biology, 16(17), 1736-1740. doi:10.1016/j.cub.2006.07.049.

    Abstract

    Comparatively little is known about the inherited primate background underlying human cognition, the human cognitive “wild-type.” Yet it is possible to trace the evolution of human cognitive abilities and tendencies by contrasting the skills of our nearest cousins, not just chimpanzees, but all the extant great apes, thus showing what we are likely to have inherited from the common ancestor [1]. By looking at human infants early in cognitive development, we can also obtain insights into native cognitive biases in our species [2]. Here, we focus on spatial memory, a central cognitive domain. We show, first, that all nonhuman great apes and 1-year-old human infants exhibit a preference for place over feature strategies for spatial memory. This suggests the common ancestor of all great apes had the same preference. We then examine 3-year-old human children and find that this preference reverses. Thus, the continuity between our species and the other great apes is masked early in human ontogeny. These findings, based on both phylogenetic and ontogenetic contrasts, open up the prospect of a systematic evolutionary psychology resting upon the cladistics of cognitive preferences.
  • Haun, D. B. M., Rapold, C. J., Call, J., Janzen, G., & Levinson, S. C. (2006). Cognitive cladistics and cultural override in Hominid spatial cognition. Proceedings of the National Academy of Sciences of the United States of America, 103(46), 17568-17573. doi:10.1073/pnas.0607999103.

    Abstract

    Current approaches to human cognition often take a strong nativist stance based on Western adult performance, backed up where possible by neonate and infant research and almost never by comparative research across the Hominidae. Recent research suggests considerable cross-cultural differences in cognitive strategies, including relational thinking, a domain where infant research is impossible because of lack of cognitive maturation. Here, we apply the same paradigm across children and adults of different cultures and across all nonhuman great ape genera. We find that both child and adult spatial cognition systematically varies with language and culture but that, nevertheless, there is a clear inherited bias for one spatial strategy in the great apes. It is reasonable to conclude, we argue, that language and culture mask the native tendencies in our species. This cladistic approach suggests that the correct perspective on human cognition is neither nativist uniformitarian nor ‘‘blank slate’’ but recognizes the powerful impact that language and culture can have on our shared primate cognitive biases.
  • Havron, N., Raviv, L., & Arnon, I. (2018). Literate and preliterate children show different learning patterns in an artificial language learning task. Journal of Cultural Cognitive Science, 2, 21-33. doi:10.1007/s41809-018-0015-9.

    Abstract

    Literacy affects many aspects of cognitive and linguistic processing. Among them, it increases the salience of words as units of linguistic processing. Here, we explored the impact of literacy acquisition on children’s learning of an artificial language. Recent accounts of L1–L2 differences relate adults’ greater difficulty with language learning to their smaller reliance on multiword units. In particular, multiword units are claimed to be beneficial for learning opaque grammatical relations like grammatical gender. Since literacy impacts the reliance on words as units of processing, we ask if and how acquiring literacy may change children’s language-learning results. We looked at children’s success in learning novel noun labels relative to their success in learning article-noun gender agreement, before and after learning to read. We found that preliterate first graders were better at learning agreement (larger units) than at learning nouns (smaller units), and that the difference between the two trial types significantly decreased after these children acquired literacy. In contrast, literate third graders performed equally well on both trial types. These findings suggest that literacy affects not only language processing, but also leads to important differences in language learning. They support the idea that some of children’s advantage in language learning comes from their previous knowledge and experience with language—and specifically, their lack of experience with written texts.
  • Hebebrand, J., Peters, T., Schijven, D., Hebebrand, M., Grasemann, C., Winkler, T. W., Heid, I. M., Antel, J., Föcker, M., Tegeler, L., Brauner, L., Adan, R. A., Luykx, J. J., Correll, C. U., König, I. R., Hinney, A., & Libuda, L. (2018). The role of genetic variation of human metabolism for BMI, mental traits and mental disorders. Molecular Metabolism, 12, 1-11. doi:10.1016/j.molmet.2018.03.015.

    Abstract

    Objective
    The aim was to assess whether loci associated with metabolic traits also have a significant role in BMI and mental traits/disorders.
    Methods
    We first assessed the number of single nucleotide polymorphisms (SNPs) with genome-wide significance for human metabolism (NHGRI-EBI Catalog). These 516 SNPs (216 independent loci) were looked up in genome-wide association studies for association with body mass index (BMI) and the mental traits/disorders educational attainment, neuroticism, schizophrenia, well-being, anxiety, depressive symptoms, major depressive disorder, autism-spectrum disorder, attention-deficit/hyperactivity disorder, Alzheimer's disease, bipolar disorder, aggressive behavior, and internalizing problems. A strict significance threshold of p < 6.92 × 10−6 was based on the correction for 516 SNPs and all 14 phenotypes, a second less conservative threshold (p < 9.69 × 10−5) on the correction for the 516 SNPs only.
    Results
    19 SNPs located in nine independent loci revealed p-values < 6.92 × 10−6; the less strict criterion was met by 41 SNPs in 24 independent loci. BMI and schizophrenia showed the most pronounced genetic overlap with human metabolism with three loci each meeting the strict significance threshold. Overall, genetic variation associated with estimated glomerular filtration rate showed up frequently; single metabolite SNPs were associated with more than one phenotype. Replications in independent samples were obtained for BMI and educational attainment.
    Conclusions
    Approximately 5–10% of the regions involved in the regulation of blood/urine metabolite levels seem to also play a role in BMI and mental traits/disorders and related phenotypes. If validated in metabolomic studies of the respective phenotypes, the associated blood/urine metabolites may enable novel preventive and therapeutic strategies.
  • Heinemann, T. (2006). Will you or can't you? Displaying entitlement in interrogative requests. Journal of Pragmatics, 38(7), 1081-1104. doi:10.1016/j.pragma.2005.09.013.

    Abstract

    Interrogative structures such as ‘Could you pass the salt?’ and ‘Couldn’t you pass the salt?’ can be used for making requests. A study of such pairs within a conversation analytic framework suggests that these are not used interchangeably, and that they have different impacts on the interaction. Focusing on Danish interactions between elderly care recipients and their home help assistants, I demonstrate how the care recipient displays different degrees of stance towards whether she is entitled to make a request or not, depending on whether she formats her request as a positive or a negative interrogative. With a positive interrogative request, the care recipient orients to her request as one she is not entitled to make. This is underscored by other features, such as the use of mitigating devices and the choice of verb. When accounting for this type of request, the care recipient ties the request to the specific situation she is in, at the moment in which the request is produced. In turn, the home help assistant orients to the lack of entitlement by resisting the request. With a negative interrogative request, the care recipient, in contrast, orients to her request as one she is entitled to make. This is strengthened by the choice of verb and the lack of mitigating devices. When such requests are accounted for, the requested task is treated as something that should be routinely performed, and hence as something the home help assistant has neglected to do. In turn, the home help assistant orients to the display of entitlement by treating the request as unproblematic, and by complying with it immediately.
  • Heritage, J., & Stivers, T. (1999). Online commentary in acute medical visits: A method of shaping patient expectations. Social Science and Medicine, 49(11), 1501-1517. doi:10.1016/S0277-9536(99)00219-1.
  • Hersh, T. A., Dimond, A. L., Ruth, B. A., Lupica, N. V., Bruce, J. C., Kelley, J. M., King, B. L., & Lutton, B. V. (2018). A role for the CXCR4-CXCL12 axis in the little skate, Leucoraja erinacea. American Journal of Physiology-Regulatory, Integrative and Comparative Physiology, 315, R218-R229. doi:10.1152/ajpregu.00322.2017.

    Abstract

    The interaction between C-X-C chemokine receptor type 4 (CXCR4) and its cognate ligand C-X-C motif chemokine ligand 12 (CXCL12) plays a critical role in regulating hematopoietic stem cell activation and subsequent cellular mobilization. Extensive studies of these genes have been conducted in mammals, but much less is known about the expression and function of CXCR4 and CXCL12 in non-mammalian vertebrates. In the present study, we identify simultaneous expression of CXCR4 and CXCL12 orthologs in the epigonal organ (the primary hematopoietic tissue) of the little skate, Leucoraja erinacea. Genetic and phylogenetic analyses were functionally supported by significant mobilization of leukocytes following administration of Plerixafor, a CXCR4 antagonist and clinically important drug. Our results provide evidence that, as in humans, Plerixafor disrupts CXCR4/CXCL12 binding in the little skate, facilitating release of leukocytes into the bloodstream. Our study illustrates the value of the little skate as a model organism, particularly in studies of hematopoiesis and potentially for preclinical research on hematological and vascular disorders.

  • Hervais-Adelman, A., Egorova, N., & Golestani, N. (2018). Beyond bilingualism: Multilingual experience correlates with caudate volume. Brain Structure and Function, 223(7), 3495-3502. doi:10.1007/s00429-018-1695-0.

    Abstract

    The multilingual brain implements mechanisms that serve to select the appropriate language as a function of the communicative environment. Engaging these mechanisms on a regular basis appears to have consequences for brain structure and function. Studies have implicated the caudate nuclei as important nodes in polyglot language control processes, and have also shown structural differences in the caudate nuclei in bilingual compared to monolingual populations. However, the majority of published work has focused on the categorical differences between monolingual and bilingual individuals, and little is known about whether these findings extend to multilingual individuals, who have even greater language control demands. In the present paper, we present an analysis of the volume and morphology of the caudate nuclei, putamen, pallidum and thalami in 75 multilingual individuals who speak three or more languages. Volumetric analyses revealed a significant relationship between multilingual experience and right caudate volume, as well as a marginally significant relationship with left caudate volume. Vertex-wise analyses revealed a significant enlargement of dorsal and anterior portions of the left caudate nucleus, known to have connectivity with executive brain regions, as a function of multilingual expertise. These results suggest that multilingual expertise might exercise a continuous impact on brain structure, and that as additional languages beyond a second are acquired, the additional demands for linguistic and cognitive control result in modifications to brain structures associated with language management processes.
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2018). Commentary: Broca pars triangularis constitutes a “hub” of the language-control network during simultaneous language translation. Frontiers in Human Neuroscience, 12: 22. doi:10.3389/fnhum.2018.00022.

    Abstract

    A commentary on
    Broca Pars Triangularis Constitutes a “Hub” of the Language-Control Network during Simultaneous Language Translation

    by Elmer, S. (2016). Front. Hum. Neurosci. 10:491. doi: 10.3389/fnhum.2016.00491

    Elmer (2016) conducted an fMRI investigation of “simultaneous language translation” in five participants. The article presents group and individual analyses of German-to-Italian and Italian-to-German translation, confined to a small set of anatomical regions previously reported to be involved in multilingual control. Here we take the opportunity to discuss concerns regarding certain aspects of the study.
  • Heyne, H. O., Singh, T., Stamberger, H., Jamra, R. A., Caglayan, H., Craiu, D., Guerrini, R., Helbig, K. L., Koeleman, B. P. C., Kosmicki, J. A., Linnankivi, T., May, P., Muhle, H., Møller, R. S., Neubauer, B. A., Palotie, A., Pendziwiat, M., Striano, P., Tang, S., Wu, S., EuroEPINOMICS RES Consortium, De Kovel, C. G. F., Poduri, A., Weber, Y. G., Weckhuysen, S., Sisodiya, S. M., Daly, M. J., Helbig, I., Lal, D., & Lemke, J. R. (2018). De novo variants in neurodevelopmental disorders with epilepsy. Nature Genetics, 50, 1048-1053. doi:10.1038/s41588-018-0143-7.

    Abstract

    Epilepsy is a frequent feature of neurodevelopmental disorders (NDDs), but little is known about genetic differences between NDDs with and without epilepsy. We analyzed de novo variants (DNVs) in 6,753 parent–offspring trios ascertained to have different NDDs. In the subset of 1,942 individuals with NDDs with epilepsy, we identified 33 genes with a significant excess of DNVs, of which SNAP25 and GABRB2 had previously only limited evidence of disease association. Joint analysis of all individuals with NDDs also implicated CACNA1E as a novel disease-associated gene. Comparing NDDs with and without epilepsy, we found missense DNVs, DNVs in specific genes, age of recruitment, and severity of intellectual disability to be associated with epilepsy. We further demonstrate the extent to which our results affect current genetic testing as well as treatment, emphasizing the benefit of accurate genetic diagnosis in NDDs with epilepsy.
  • Heyselaar, E., Mazaheri, A., Hagoort, P., & Segaert, K. (2018). Changes in alpha activity reveal that social opinion modulates attention allocation during face processing. NeuroImage, 174, 432-440. doi:10.1016/j.neuroimage.2018.03.034.

    Abstract

    Participants’ performance differs when conducting a task in the presence of a secondary individual; moreover, the opinion the participant has of this individual also plays a role. Using EEG, we investigated how previous interactions with, and evaluations of, an avatar in virtual reality subsequently influenced attentional allocation to the face of that avatar. We focused on changes in alpha activity as an index of attentional allocation. We found that the onset of the face of an avatar with whom the participant had developed a rapport induced greater alpha suppression. This suggests that greater attentional resources are allocated to interacted-with avatars. The evaluative ratings of the avatar induced a U-shaped change in alpha suppression, such that participants paid most attention when the avatar was rated as average. These results suggest that attentional allocation is an important element of how behaviour is altered in the presence of a secondary individual and is modulated by our opinion of that individual.

  • Hilverman, C., Clough, S., Duff, M. C., & Cook, S. W. (2018). Patients with hippocampal amnesia successfully integrate gesture and speech. Neuropsychologia, 117, 332-338. doi:10.1016/j.neuropsychologia.2018.06.012.

    Abstract

    During conversation, people integrate information from co-speech hand gestures with information in spoken language. For example, after hearing the sentence, "A piece of the log flew up and hit Carl in the face" while viewing a gesture directed at the nose, people tend to later report that the log hit Carl in the nose (information only in gesture) rather than in the face (information in speech). The cognitive and neural mechanisms that support the integration of gesture with speech are unclear. One possibility is that the hippocampus, known for its role in relational memory and information integration, is necessary for integrating gesture and speech. To test this possibility, we examined how patients with hippocampal amnesia and healthy and brain-damaged comparison participants express information from gesture in a narrative retelling task. Participants watched videos of an experimenter telling narratives that included hand gestures that contained supplementary information. Participants were asked to retell the narratives, and their spoken retellings were assessed for the presence of information from gesture. For features that had been accompanied by supplementary gesture, patients with amnesia retold fewer of these features overall and produced fewer retellings that matched the speech from the narrative. Yet their retellings included features containing information that had been present uniquely in gesture, in amounts that were not reliably different from comparison groups. Thus, a functioning hippocampus is not necessary for gesture-speech integration over short timescales. Providing unique information in gesture may enhance communication for individuals with declarative memory impairment, possibly via non-declarative memory mechanisms.
  • Hoeks, J. C. J., Hendriks, P., Vonk, W., Brown, C. M., & Hagoort, P. (2006). Processing the noun phrase versus sentence coordination ambiguity: Thematic information does not completely eliminate processing difficulty. Quarterly Journal of Experimental Psychology, 59, 1581-1599. doi:10.1080/17470210500268982.

    Abstract

    When faced with the noun phrase (NP) versus sentence (S) coordination ambiguity as in, for example, The thief shot the jeweller and the cop …, readers prefer the reading with NP-coordination (e.g., "The thief shot the jeweller and the cop yesterday") over one with two conjoined sentences (e.g., "The thief shot the jeweller and the cop panicked"). A corpus study is presented showing that NP-coordinations are produced far more often than S-coordinations, which in frequency-based accounts of parsing might be taken to explain the NP-coordination preference. In addition, we describe an eye-tracking experiment investigating S-coordinated sentences such as Jasper sanded the board and the carpenter laughed, where the poor thematic fit between carpenter and sanded argues against NP-coordination. Our results indicate that information regarding poor thematic fit was used rapidly, but not without leaving some residual processing difficulty. This is compatible with claims that thematic information can reduce but not completely eliminate garden-path effects.
  • Hoeks, B., & Levelt, W. J. M. (1993). Pupillary dilation as a measure of attention: A quantitative system analysis. Behavior Research Methods, Instruments, & Computers, 25(1), 16-26.
  • Hoey, E. (2018). How speakers continue with talk after a lapse in conversation. Research on Language and Social Interaction, 51(3), 329-346. doi:10.1080/08351813.2018.1485234.

    Abstract

    How do conversational participants continue with turn-by-turn talk after a momentary lapse? If all participants forgo the option to speak at possible sequence completion, an extended silence may emerge that can indicate a lack of anything to talk about next. For the interaction to proceed recognizably as a conversation, the postlapse turn needs to implicate more talk. Using conversation analysis, I examine three practical alternatives regarding sequentially implicative postlapse turns: Participants may move to end the interaction, continue with some prior matter, or start something new. Participants are shown using resources grounded in the interaction’s overall structural organization, the materials from the interaction-so-far, the mentionables they bring to interaction, and the situated environment itself. Comparing these alternatives, there is suggestive quantitative evidence for a preference for continuation. The analysis of lapse resolution shows lapses as places for the management of multiple possible courses of action. Data are in U.S. and UK English.
  • Holler, J., Kendrick, K. H., & Levinson, S. C. (2018). Processing language in face-to-face conversation: Questions with gestures get faster responses. Psychonomic Bulletin & Review, 25(5), 1900-1908. doi:10.3758/s13423-017-1363-z.

    Abstract

    The home of human language use is face-to-face interaction, a context in which communicative exchanges are characterised not only by bodily signals accompanying what is being said but also by a pattern of alternating turns at talk. This transition between turns is astonishingly fast—typically a mere 200 ms elapses between a current and a next speaker’s contribution—meaning that comprehending, producing, and coordinating conversational contributions in time is a significant challenge. This begs the question of whether the additional information carried by bodily signals facilitates or hinders language processing in this time-pressured environment. We present analyses of multimodal conversations revealing that bodily signals appear to profoundly influence language processing in interaction: Questions accompanied by gestures lead to shorter turn transition times—that is, to faster responses—than questions without gestures, and responses come earlier when gestures end before compared to after the question turn has ended. These findings hold even after taking into account prosodic patterns and other visual signals, such as gaze. The empirical findings presented here provide a first glimpse of the role of the body in the psycholinguistic processes underpinning human communication.
  • Hömke, P., Holler, J., & Levinson, S. C. (2018). Eye blinks are perceived as communicative signals in human face-to-face interaction. PLoS One, 13(12): e0208030. doi:10.1371/journal.pone.0208030.

    Abstract

    In face-to-face communication, recurring intervals of mutual gaze allow listeners to provide speakers with visual feedback (e.g. nodding). Here, we investigate the potential feedback function of one of the subtlest of human movements—eye blinking. While blinking tends to be subliminal, the significance of mutual gaze in human interaction raises the question whether the interruption of mutual gaze through blinking may also be communicative. To answer this question, we developed a novel, virtual reality-based experimental paradigm, which enabled us to selectively manipulate blinking in a virtual listener, creating small differences in blink duration resulting in ‘short’ (208 ms) and ‘long’ (607 ms) blinks. We found that speakers unconsciously took into account the subtle differences in listeners’ blink duration, producing substantially shorter answers in response to long listener blinks. Our findings suggest that, in addition to physiological, perceptual and cognitive functions, listener blinks are also perceived as communicative signals, directly influencing speakers’ communicative behavior in face-to-face communication. More generally, these findings may be interpreted as shedding new light on the evolutionary origins of mental-state signaling, which is a crucial ingredient for achieving mutual understanding in everyday social interaction.

