Publications

  • Enfield, N. J. (2008). Language as shaped by social interaction [Commentary on Christiansen and Chater]. Behavioral and Brain Sciences, 31(5), 519-520. doi:10.1017/S0140525X08005104.

    Abstract

    Language is shaped by its environment, which includes not only the brain, but also the public context in which speech acts are effected. To fully account for why language has the shape it has, we need to examine the constraints imposed by language use as a sequentially organized joint activity, and as the very conduit for linguistic diffusion and change.
  • Enfield, N. J. (1997). Review of 'Give: a cognitive linguistic study', by John Newman. Australian Journal of Linguistics, 17(1), 89-92. doi:10.1080/07268609708599546.
  • Enfield, N. J. (1997). Review of 'Plastic glasses and church fathers: semantic extension from the ethnoscience tradition', by David Kronenfeld. Anthropological Linguistics, 39(3), 459-464. Retrieved from http://www.jstor.org/stable/30028999.
  • Enfield, N. J. (2005). Review of the book [The Handbook of Historical Linguistics, edited by Brian D. Joseph and Richard D. Janda]. Linguistics, 43(6), 1191-1197. doi:10.1515/ling.2005.43.6.1191.
  • Erard, M. (2017). Write yourself invisible. New Scientist, 236(3153), 36-39.
  • Ergin, R., Meir, I., Ilkbasaran, D., Padden, C., & Jackendoff, R. (2018). The development of argument structure in Central Taurus Sign Language. Sign Language Studies, 18(4), 612-639. doi:10.1353/sls.2018.0018.

    Abstract

    One of the fundamental issues for a language is its capacity to express argument structure unambiguously. This study presents evidence for the emergence and the incremental development of these basic mechanisms in a newly developing language, Central Taurus Sign Language. Our analyses identify universal patterns in both the emergence and development of these mechanisms and in language-specific trajectories.
  • Ernestus, M., Mak, W. M., & Baayen, R. H. (2005). Waar 't kofschip strandt. Levende Talen Magazine, 92, 9-11.
  • Ernestus, M., & Neijt, A. (2008). Word length and the location of primary word stress in Dutch, German, and English. Linguistics, 46(3), 507-540. doi:10.1515/LING.2008.017.

    Abstract

    This study addresses the extent to which the location of primary stress in Dutch, German, and English monomorphemic words is affected by the syllables preceding the three final syllables. We present analyses of the monomorphemic words in the CELEX lexical database, which showed that penultimate primary stress is less frequent in Dutch and English trisyllabic than quadrisyllabic words. In addition, we discuss paper-and-pencil experiments in which native speakers assigned primary stress to pseudowords. These experiments provided evidence that in all three languages penultimate stress is more likely in quadrisyllabic than in trisyllabic words. We explain this length effect with the preferences in these languages for word-initial stress and for alternating patterns of stressed and unstressed syllables. The experimental data also showed important intra- and interspeaker variation, and they thus form a challenging test case for theories of language variation.
  • Ernestus, M., Dikmans, M., & Giezenaar, G. (2017). Advanced second language learners experience difficulties processing reduced word pronunciation variants. Dutch Journal of Applied Linguistics, 6(1), 1-20. doi:10.1075/dujal.6.1.01ern.

    Abstract

    Words are often pronounced with fewer segments in casual conversations than in formal speech. Previous research has shown that foreign language learners and beginning second language learners experience problems processing reduced speech. We examined whether this also holds for advanced second language learners. We designed a dictation task in Dutch consisting of sentences spliced from casual conversations and an unreduced counterpart of this task, with the same sentences carefully articulated by the same speaker. Advanced second language learners of Dutch produced substantially more transcription errors for the reduced than for the unreduced sentences. These errors made the sentences incomprehensible or led to non-intended meanings. The learners often did not rely on the semantic and syntactic information in the sentence or on the subsegmental cues to overcome the reductions. Hence, advanced second language learners also appear to suffer from the reduced pronunciation variants of words that are abundant in everyday conversations.
  • Ernestus, M., & Mak, W. M. (2005). Analogical effects in reading Dutch verb forms. Memory & Cognition, 33(7), 1160-1173.

    Abstract

    Previous research has shown that the production of morphologically complex words in isolation is affected by the properties of morphologically, phonologically, or semantically similar words stored in the mental lexicon. We report five experiments with Dutch speakers that show that reading an inflectional word form in its linguistic context is also affected by analogical sets of formally similar words. Using the self-paced reading technique, we show in Experiments 1-3 that an incorrectly spelled suffix delays readers less if the incorrect spelling is in line with the spelling of verbal suffixes in other inflectional forms of the same verb. In Experiments 4 and 5, our use of the self-paced reading technique shows that formally similar words with different stems affect the reading of incorrect suffixal allomorphs on a given stem. These intra- and interparadigmatic effects in reading may be due to online processes or to the storage of incorrect forms resulting from analogical effects in production.
  • Ernestus, M., Kouwenhoven, H., & Van Mulken, M. (2017). The direct and indirect effects of the phonotactic constraints in the listener's native language on the comprehension of reduced and unreduced word pronunciation variants in a foreign language. Journal of Phonetics, 62, 50-64. doi:10.1016/j.wocn.2017.02.003.

    Abstract

    This study investigates how the comprehension of casual speech in foreign languages is affected by the phonotactic constraints in the listener’s native language. Non-native listeners of English with different native languages heard short English phrases produced by native speakers of English or Spanish, and indicated whether these phrases included can or can’t. Native Mandarin listeners especially tended to interpret can’t as can. We interpret this result as a direct effect of the ban on word-final /nt/ in Mandarin. Neither the native Mandarin nor the native Spanish listeners took full advantage of the subsegmental information in the speech signal cueing reduced can’t. This finding is probably an indirect effect of the phonotactic constraints in their native languages: these listeners have difficulties interpreting the subsegmental cues because these cues do not occur or have different functions in their native languages. Dutch resembles English in the phonotactic constraints relevant to the comprehension of can’t, and native Dutch listeners showed comprehension patterns for native and non-native English similar to those of native English listeners. This result supports our conclusion that the major patterns in the comprehension results are driven by the phonotactic constraints in the listeners’ native languages.
  • Eryilmaz, K., & Little, H. (2017). Using Leap Motion to investigate the emergence of structure in speech and language. Behavior Research Methods, 49(5), 1748-1768. doi:10.3758/s13428-016-0818-x.

    Abstract

    In evolutionary linguistics, experiments using artificial signal spaces are being used to investigate the emergence of speech structure. These signal spaces need to be continuous, non-discretised spaces from which discrete units and patterns can emerge. They need to be dissimilar from, but comparable with, the vocal tract, in order to minimise interference from pre-existing linguistic knowledge, while informing us about language. This is a hard balance to strike. This article outlines a new approach which uses the Leap Motion, an infra-red controller which can convert manual movement in 3D space into sound. The signal space using this approach is more flexible than signal spaces in previous attempts. Further, output data using this approach are simpler to arrange and analyse. The experimental interface was built using free, and mostly open-source, libraries in Python. We provide our source code for other researchers as open source.
  • Escudero, P., Hayes-Harb, R., & Mitterer, H. (2008). Novel second-language words and asymmetric lexical access. Journal of Phonetics, 36(2), 345-360. doi:10.1016/j.wocn.2007.11.002.

    Abstract

    The lexical and phonetic mapping of auditorily confusable L2 nonwords was examined by teaching L2 learners novel words and by later examining their word recognition using an eye-tracking paradigm. During word learning, two groups of highly proficient Dutch learners of English learned 20 English nonwords, of which 10 contained the English contrast /e/-/æ/ (a confusable contrast for native Dutch speakers). One group of subjects learned the words by matching their auditory forms to pictured meanings, while a second group additionally saw the spelled forms of the words. We found that the group who received only auditory forms confused words containing /æ/ and /e/ symmetrically, i.e., both /æ/ and /e/ auditory tokens triggered looks to pictures containing both /æ/ and /e/. In contrast, the group who also had access to spelled forms showed the same asymmetric word recognition pattern found by previous studies, i.e., they only looked at pictures of words containing /e/ when presented with /e/ target tokens, but looked at pictures of words containing both /æ/ and /e/ when presented with /æ/ target tokens. The results demonstrate that L2 learners can form lexical contrasts for auditorily confusable novel L2 words. However, and most importantly, this study suggests that explicit information about the contrastive nature of two new sounds may be needed to build separate lexical representations for similar-sounding L2 words.
  • Esteve-Gibert, N., Prieto, P., & Liszkowski, U. (2017). Twelve-month-olds understand social intentions based on prosody and gesture shape. Infancy, 22, 108-129. doi:10.1111/infa.12146.

    Abstract

    Infants infer social and pragmatic intentions underlying attention-directing gestures, but the basis on which infants make these inferences is not well understood. Previous studies suggest that infants rely on information from preceding shared action contexts and joint perceptual scenes. Here, we tested whether 12-month-olds use information from act-accompanying cues, in particular prosody and hand shape, to guide their pragmatic understanding. In Experiment 1, caregivers directed infants’ attention to an object to request it, share interest in it, or inform them about a hidden aspect. Caregivers used distinct prosodic and gestural patterns to express each pragmatic intention. Experiment 2 was identical except that experimenters provided identical lexical information across conditions and used three sets of trained prosodic and gestural patterns. In all conditions, the joint perceptual scenes and preceding shared action contexts were identical. In both experiments, infants reacted appropriately to the adults’ intentions by attending to the object mostly in the sharing interest condition, offering the object mostly in the imperative condition, and searching for the referent mostly in the informing condition. Infants’ ability to comprehend pragmatic intentions based on prosody and gesture shape expands infants’ communicative understanding from common activities to novel situations for which shared background knowledge is missing.
  • Estruch, S. B., Graham, S. A., Quevedo, M., Vino, A., Dekkers, D. H. W., Deriziotis, P., Sollis, E., Demmers, J., Poot, R. A., & Fisher, S. E. (2018). Proteomic analysis of FOXP proteins reveals interactions between cortical transcription factors associated with neurodevelopmental disorders. Human Molecular Genetics, 27(7), 1212-1227. doi:10.1093/hmg/ddy035.

    Abstract

    FOXP transcription factors play important roles in neurodevelopment, but little is known about how their transcriptional activity is regulated. FOXP proteins cooperatively regulate gene expression by forming homo- and hetero-dimers with each other. Physical associations with other transcription factors might also modulate the functions of FOXP proteins. However, few FOXP-interacting transcription factors have been identified so far. Therefore, we sought to discover additional transcription factors that interact with the brain-expressed FOXP proteins, FOXP1, FOXP2 and FOXP4, through affinity-purifications of protein complexes followed by mass spectrometry. We identified seven novel FOXP-interacting transcription factors (NR2F1, NR2F2, SATB1, SATB2, SOX5, YY1 and ZMYM2), five of which have well-established roles in cortical development. Accordingly, we found that these transcription factors are co-expressed with FoxP2 in the deep layers of the cerebral cortex and also in the Purkinje cells of the cerebellum, suggesting that they may cooperate with the FoxPs to regulate neural gene expression in vivo. Moreover, we demonstrated that etiological mutations of FOXP1 and FOXP2, known to cause neurodevelopmental disorders, severely disrupted the interactions with FOXP-interacting transcription factors. Additionally, we pinpointed specific regions within the FOXP2 sequence involved in mediating these interactions. Thus, by expanding the FOXP interactome we have uncovered part of a broader neural transcription factor network involved in cortical development, providing novel molecular insights into the transcriptional architecture underlying brain development and neurodevelopmental disorders.
  • Evans, N., Bergqvist, H., & San Roque, L. (2018). The grammar of engagement I: Framework and initial exemplification. Language and Cognition, 10, 110-140. doi:10.1017/langcog.2017.21.

    Abstract

    Human language offers rich ways to track, compare, and engage the attentional and epistemic states of interlocutors. While this task is central to everyday communication, our knowledge of the cross-linguistic grammatical means that target such intersubjective coordination has remained basic. In two serialised papers, we introduce the term ‘engagement’ to refer to grammaticalised means for encoding the relative mental directedness of speaker and addressee towards an entity or state of affairs, and describe examples of engagement systems from around the world. Engagement systems express the speaker’s assumptions about the degree to which their attention or knowledge is shared (or not shared) by the addressee. Engagement categories can operate at the level of entities in the here-and-now (deixis), in the unfolding discourse (definiteness vs indefiniteness), entire event-depicting propositions (through markers with clausal scope), and even metapropositions (potentially scoping over evidential values). In this first paper, we introduce engagement and situate it with respect to existing work on intersubjectivity in language. We then explore the key role of deixis in coordinating attention and expressing engagement, moving through increasingly intercognitive deictic systems from those that focus on the location of the speaker, to those that encode the attentional state of the addressee.
  • Evans, N., Bergqvist, H., & San Roque, L. (2018). The grammar of engagement II: Typology and diachrony. Language and Cognition, 10(1), 141-170. doi:10.1017/langcog.2017.22.

    Abstract

    Engagement systems encode the relative accessibility of an entity or state of affairs to the speaker and addressee, and are thus underpinned by our social cognitive capacities. In our first foray into engagement (Part 1), we focused on specialised semantic contrasts as found in entity-level deictic systems, tailored to the primal scenario for establishing joint attention. This second paper broadens out to an exploration of engagement at the level of events and even metapropositions, and comments on how such systems may evolve. The languages Andoke and Kogi demonstrate what a canonical system of engagement with clausal scope looks like, symmetrically assigning ‘knowing’ and ‘unknowing’ values to speaker and addressee. Engagement is also found cross-cutting other epistemic categories such as evidentiality, for example where a complex assessment of relative speaker and addressee awareness concerns the source of information rather than the proposition itself. Data from the language Abui reveal that one way in which engagement systems can develop is by upscoping demonstratives, which normally denote entities, to apply at the level of events. We conclude by stressing the need for studies that focus on what difference it makes, in terms of communicative behaviour, for intersubjective coordination to be managed by engagement systems as opposed to other, non-grammaticalised means.
  • Fairs, A., Bögels, S., & Meyer, A. S. (2018). Dual-tasking with simple linguistic tasks: Evidence for serial processing. Acta Psychologica, 191, 131-148. doi:10.1016/j.actpsy.2018.09.006.

    Abstract

    In contrast to the large amount of dual-task research investigating the coordination of a linguistic and a nonlinguistic task, little research has investigated how two linguistic tasks are coordinated. However, such research would greatly contribute to our understanding of how interlocutors combine speech planning and listening in conversation. In three dual-task experiments we studied how participants coordinated the processing of an auditory stimulus (S1), which was either a syllable or a tone, with selecting a name for a picture (S2). Two SOAs, of 0 ms and 1000 ms, were used. To vary the time required for lexical selection and to determine when lexical selection took place, the pictures were presented with categorically related or unrelated distractor words. In Experiment 1 participants responded overtly to both stimuli. In Experiments 2 and 3, S1 was not responded to overtly, but determined how to respond to S2, by naming the picture or reading the distractor aloud. Experiment 1 yielded additive effects of SOA and distractor type on the picture naming latencies. The presence of semantic interference at both SOAs indicated that lexical selection occurred after response selection for S1. With respect to the coordination of S1 and S2 processing, Experiments 2 and 3 yielded inconclusive results. In all experiments, syllables interfered more with picture naming than tones. This is likely because the syllables activated phonological representations also implicated in picture naming. The theoretical and methodological implications of the findings are discussed.

    Additional information

    1-s2.0-S0001691817305589-mmc1.pdf
  • Falcaro, M., Pickles, A., Newbury, D. F., Addis, L., Banfield, E., Fisher, S. E., Monaco, A. P., Simkin, Z., Conti-Ramsden, G., & Consortium (2008). Genetic and phenotypic effects of phonological short-term memory and grammatical morphology in specific language impairment. Genes, Brain and Behavior, 7, 393-402. doi:10.1111/j.1601-183X.2007.00364.x.

    Abstract

    Deficits in phonological short-term memory and aspects of verb grammar morphology have been proposed as phenotypic markers of specific language impairment (SLI) with the suggestion that these traits are likely to be under different genetic influences. This investigation in 300 first-degree relatives of 93 probands with SLI examined familial aggregation and genetic linkage of two measures thought to index these two traits, non-word repetition and tense marking. In particular, the involvement of chromosomes 16q and 19q was examined as previous studies found these two regions to be related to SLI. Results showed a strong association between relatives' and probands' scores on non-word repetition. In contrast, no association was found for tense marking when examined as a continuous measure. However, significant familial aggregation was found when tense marking was treated as a binary measure with a cut-off point of -1.5 SD, suggestive of the possibility that qualitative distinctions in the trait may be familial while quantitative variability may be more a consequence of non-familial factors. Linkage analyses supported previous findings of the SLI Consortium of linkage to chromosome 16q for phonological short-term memory and to chromosome 19q for expressive language. In addition, we report new findings that relate to the past tense phenotype. For the continuous measure, linkage was found on both chromosomes, but evidence was stronger on chromosome 19. For the binary measure, linkage was observed on chromosome 19 but not on chromosome 16.
  • Felemban, D., Verdonschot, R. G., Iwamoto, Y., Uchiyama, Y., Kakimoto, N., Kreiborg, S., & Murakami, S. (2018). A quantitative experimental phantom study on MRI image uniformity. Dentomaxillofacial Radiology, 47(6): 20180077. doi:10.1259/dmfr.20180077.

    Abstract

    Objectives: Our goal was to assess MR image uniformity by investigating aspects influencing said uniformity via a method laid out by the National Electrical Manufacturers Association (NEMA).
    Methods: Six metallic materials embedded in a glass phantom were scanned (i.e. Au, Ag, Al, Au-Ag-Pd alloy, Ti and Co-Cr alloy) as well as a reference image. Sequences included spin echo (SE) and gradient echo (GRE) scanned in three planes (i.e. axial, coronal, and sagittal). Moreover, three surface coil types (i.e. head and neck, brain, and temporomandibular joint coils) and two image correction methods (i.e. surface coil intensity correction or SCIC, phased array uniformity enhancement or PURE) were employed to evaluate their effectiveness on image uniformity. Image uniformity was assessed using the National Electrical Manufacturers Association peak-deviation non-uniformity method.
    Results: Results showed that temporomandibular joint coils elicited the least uniform image and brain coils outperformed head and neck coils when metallic materials were present. Additionally, when metallic materials were present, spin echo outperformed gradient echo especially for Co-Cr (particularly in the axial plane). Furthermore, both SCIC and PURE improved image uniformity compared to uncorrected images, and SCIC slightly surpassed PURE when metallic materials were present. Lastly, Co-Cr elicited the least uniform image while other metallic materials generally showed similar patterns (i.e. no significant deviation from images without metallic materials).
    Conclusions: Overall, a quantitative understanding of the factors influencing MR image uniformity (e.g. coil type, imaging method, metal susceptibility, and post-hoc correction method) is advantageous to optimize image quality, assists clinical interpretation, and may result in improved medical and dental care.
  • Felker, E. R., Troncoso Ruiz, A., Ernestus, M., & Broersma, M. (2018). The ventriloquist paradigm: Studying speech processing in conversation with experimental control over phonetic input. The Journal of the Acoustical Society of America, 144(4), EL304-EL309. doi:10.1121/1.5063809.

    Abstract

    This article presents the ventriloquist paradigm, an innovative method for studying speech processing in dialogue whereby participants interact face-to-face with a confederate who, unbeknownst to them, communicates by playing pre-recorded speech. Results show that the paradigm convinces more participants that the speech is live than a setup without the face-to-face element, and it elicits more interactive conversation than a setup in which participants believe their partner is a computer. By reconciling the ecological validity of a conversational context with full experimental control over phonetic exposure, the paradigm offers a wealth of new possibilities for studying speech processing in interaction.
  • Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S. A., Pasukonis, A., Hoeschele, M., Ocklenburg, S., De Boer, B., Sturdy, C. B., Newen, A., & Güntürkün, O. (2017). Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: Evidence for acoustic universals. Proceedings of the Royal Society B: Biological Sciences, 284: 20170990. doi:10.1098/rspb.2017.0990.

    Abstract

    Writing over a century ago, Darwin hypothesized that vocal expression of emotion dates back to our earliest terrestrial ancestors. If this hypothesis is true, we should expect to find cross-species acoustic universals in emotional vocalizations. Studies suggest that acoustic attributes of aroused vocalizations are shared across many mammalian species, and that humans can use these attributes to infer emotional content. But do these acoustic attributes extend to non-mammalian vertebrates? In this study, we asked human participants to judge the emotional content of vocalizations of nine vertebrate species representing three different biological classes—Amphibia, Reptilia (non-aves and aves) and Mammalia. We found that humans are able to identify higher levels of arousal in vocalizations across all species. This result was consistent across different language groups (English, German and Mandarin native speakers), suggesting that this ability is biologically rooted in humans. Our findings indicate that humans use multiple acoustic parameters to infer relative arousal in vocalizations for each species, but mainly rely on fundamental frequency and spectral centre of gravity to identify higher arousal vocalizations across species. These results suggest that fundamental mechanisms of vocal emotional expression are shared among vertebrates and could represent a homologous signalling system.
  • Filippi, P., Gogoleva, S. S., Volodina, E. V., Volodin, I. A., & De Boer, B. (2017). Humans identify negative (but not positive) arousal in silver fox vocalizations: Implications for the adaptive value of interspecific eavesdropping. Current Zoology, 63(4), 445-456. doi:10.1093/cz/zox035.

    Abstract

    The ability to identify emotional arousal in heterospecific vocalizations may facilitate behaviors that increase survival opportunities. Crucially, this ability may orient inter-species interactions, particularly between humans and other species. Research shows that humans identify emotional arousal in vocalizations across multiple species, such as cats, dogs, and piglets. However, no previous study has addressed humans' ability to identify emotional arousal in silver foxes. Here, we adopted low- and high-arousal calls emitted by three strains of silver fox (Tame, Aggressive, and Unselected) in response to human approach. Tame and Aggressive foxes are genetically selected for friendly and attacking behaviors toward humans, respectively. Unselected foxes show aggressive and fearful behaviors toward humans. These three strains show similar levels of emotional arousal, but different levels of emotional valence in relation to humans. This emotional information is reflected in the acoustic features of the calls. Our data suggest that humans can identify high-arousal calls of Aggressive and Unselected foxes, but not of Tame foxes. Further analyses revealed that, although within each strain different acoustic parameters affect human accuracy in identifying high-arousal calls, spectral center of gravity, harmonic-to-noise ratio, and F0 best predict humans' ability to discriminate high-arousal calls across all strains. Furthermore, we identified in spectral center of gravity and F0 the best predictors for humans' absolute ratings of arousal in each call. Implications for research on the adaptive value of inter-specific eavesdropping are discussed.

    Additional information

    zox035_Supp.zip
  • Filippi, P., Ocklenburg, S., Bowling, D. L., Heege, L., Güntürkün, O., Newen, A., & de Boer, B. (2017). More than words (and faces): evidence for a Stroop effect of prosody in emotion word processing. Cognition & Emotion, 31(5), 879-891. doi:10.1080/02699931.2016.1177489.

    Abstract

    Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of “happy” and “sad” were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of “happy” and “sad” were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.
  • Filippi, P., Laaha, S., & Fitch, W. T. (2017). Utterance-final position and pitch marking aid word learning in school-age children. Royal Society Open Science, 4: 161035. doi:10.1098/rsos.161035.

    Abstract

    We investigated the effects of word order and prosody on word learning in school-age children. Third graders viewed photographs belonging to one of three semantic categories while hearing four-word nonsense utterances containing a target word. In the control condition, all words had the same pitch and, across trials, the position of the target word was varied systematically within each utterance. The only cue to word–meaning mapping was the co-occurrence of target words and referents. This cue was present in all conditions. In the Utterance-final condition, the target word always occurred in utterance-final position, and at the same fundamental frequency as all the other words of the utterance. In the Pitch peak condition, the position of the target word was varied systematically within each utterance across trials, and produced with pitch contrasts typical of infant-directed speech (IDS). In the Pitch peak + Utterance-final condition, the target word always occurred in utterance-final position, and was marked with a pitch contrast typical of IDS. Word learning occurred in all conditions except the control condition. Moreover, learning performance was significantly higher than that observed with simple co-occurrence (control condition) only for the Pitch peak + Utterance-final condition. We conclude that, for school-age children, the combination of words' utterance-final alignment and pitch enhancement boosts word learning.
  • Fisher, S. E. (2005). Dissection of molecular mechanisms underlying speech and language disorders. Applied Psycholinguistics, 26, 111-128. doi:10.1017/S0142716405050095.

    Abstract

    Developmental disorders affecting speech and language are highly heritable, but very little is currently understood about the neuromolecular mechanisms that underlie these traits. Integration of data from diverse research areas, including linguistics, neuropsychology, neuroimaging, genetics, molecular neuroscience, developmental biology, and evolutionary anthropology, is becoming essential for unraveling the relevant pathways. Recent studies of the FOXP2 gene provide a case in point. Mutation of FOXP2 causes a rare form of speech and language disorder, and the gene appears to be a crucial regulator of embryonic development for several tissues. Molecular investigations of the central nervous system indicate that the gene may be involved in establishing and maintaining connectivity of corticostriatal and olivocerebellar circuits in mammals. Notably, it has been shown that FOXP2 was subject to positive selection in recent human evolution. Consideration of findings from multiple levels of analysis demonstrates that FOXP2 cannot be characterized as “the gene for speech,” but rather as one critical piece of a complex puzzle. This story gives a flavor of what is to come in this field and indicates that anyone expecting simple explanations of etiology or evolution should be prepared for some intriguing surprises.
  • Fisher, S. E. (2017). Evolution of language: Lessons from the genome. Psychonomic Bulletin & Review, 24(1), 34-40. doi:10.3758/s13423-016-1112-8.

    Abstract

    The post-genomic era is an exciting time for researchers interested in the biology of speech and language. Substantive advances in molecular methodologies have opened up entire vistas of investigation that were not previously possible, or in some cases even imagined. Speculations concerning the origins of human cognitive traits are being transformed into empirically addressable questions, generating specific hypotheses that can be explicitly tested using data collected from both the natural world and experimental settings. In this article, I discuss a number of promising lines of research in this area. For example, the field has begun to identify genes implicated in speech and language skills, including not just disorders but also the normal range of abilities. Such genes provide powerful entry points for gaining insights into neural bases and evolutionary origins, using sophisticated experimental tools from molecular neuroscience and developmental neurobiology. At the same time, sequencing of ancient hominin genomes is giving us an unprecedented view of the molecular genetic changes that have occurred during the evolution of our species. Synthesis of data from these complementary sources offers an opportunity to robustly evaluate alternative accounts of language evolution. Of course, this endeavour remains challenging on many fronts, as I also highlight in the article. Nonetheless, such an integrated approach holds great potential for untangling the complexities of the capacities that make us human.
  • Fisher, S. E. (2005). On genes, speech, and language. The New England Journal of Medicine, 353, 1655-1657. doi:10.1056/NEJMp058207.

    Abstract

    Learning to talk is one of the most important milestones in human development, but we still have only a limited understanding of the way in which the process occurs. It normally takes just a few years to go from babbling newborn to fluent communicator. During this period, the child learns to produce a rich array of speech sounds through intricate control of articulatory muscles, assembles a vocabulary comprising thousands of words, and deduces the complicated structural rules that permit construction of meaningful sentences. All of this (and more) is achieved with little conscious effort.

  • Fisher, S. E., Ciccodicola, A., Tanaka, K., Curci, A., Desicato, S., D'Urso, M., & Craig, I. W. (1997). Sequence-based exon prediction around the synaptophysin locus reveals a gene-rich area containing novel genes in human proximal Xp. Genomics, 45, 340-347. doi:10.1006/geno.1997.4941.

    Abstract

    The human Xp11.23-p11.22 interval has been implicated in several inherited diseases including Wiskott-Aldrich syndrome; three forms of X-linked hypercalciuric nephrolithiasis; and the eye disorders retinitis pigmentosa 2, congenital stationary night blindness, and Aland Island eye disease. In constructing YAC contigs spanning Xp11.23-p11.22, we have previously shown that the region around the synaptophysin (SYP) gene is refractory to cloning in YACs, but highly stable in cosmids. Preliminary analysis of the latter suggested that this might reflect a high density of coding sequences and we therefore undertook the complete sequencing of a SYP-containing cosmid. Sequence data were extensively analyzed using computer programs such as CENSOR (to mask repeats), BLAST (for homology searches), and GRAIL and GENE-ID (to predict exons). This revealed the presence of 29 putative exons, organized into three genes, in addition to the 7 exons of the complete SYP coding region, all mapping within a 44-kb interval. Two genes are novel, one (CACNA1F) showing high homology to alpha1 subunits of calcium channels, the other (LMO6) encoding a product with significant similarity to LIM-domain proteins. RT-PCR and Northern blot studies confirmed that these loci are indeed transcribed. The third locus is the previously described, but not previously localized, A4 differentiation-dependent gene. Given that the intron-exon boundaries predicted by the analysis are consistent with previous information where available, we have been able to suggest the genomic organization of the novel genes with some confidence. The region has an elevated GC content (>53%), and we identified CpG islands associated with the 5' ends of SYP, A4, and LMO6. The order of loci was Xpter-A4-LMO6-SYP-CACNA1F-Xcen, with intergenic distances ranging from approximately 300 bp to approximately 5 kb. The density of transcribed sequences in this area (>80%) is comparable to that found in the highly gene-rich chromosomal band Xq28. Further studies may aid our understanding of the long-range organization surrounding such gene-enriched regions.
  • Fisher, V. J. (2017). Unfurling the wings of flight: Clarifying ‘the what’ and ‘the why’ of mental imagery use in dance. Research in Dance Education, 18(3), 252-272. doi:10.1080/14647893.2017.1369508.

    Abstract

    This article provides clarification regarding ‘the what’ and ‘the why’ of mental imagery use in dance. It proposes that mental images are invoked across sensory modalities and often combine internal and external perspectives. The content of images ranges from ‘direct’ body oriented simulations along a continuum employing analogous mapping through ‘semi-direct’ literal similarities to abstract metaphors. The reasons for employing imagery are diverse and often overlapping, affecting physical, affective (psychological) and cognitive domains. This paper argues that when dance uses imagery, it is mapping aspects of the world to the body via analogy. Such mapping informs and changes our understanding of both our bodies and the world. In this way, mental imagery use in dance is fundamentally a process of embodied cognition.
  • Fitz, H., & Chang, F. (2017). Meaningful questions: The acquisition of auxiliary inversion in a connectionist model of sentence production. Cognition, 166, 225-250. doi:10.1016/j.cognition.2017.05.008.

    Abstract

    Nativist theories have argued that language involves syntactic principles which are unlearnable from the input children receive. A paradigm case of these innate principles is the structure dependence of auxiliary inversion in complex polar questions (Chomsky, 1968, 1975, 1980). Computational approaches have focused on the properties of the input in explaining how children acquire these questions. In contrast, we argue that messages are structured in a way that supports structure dependence in syntax. We demonstrate this approach within a connectionist model of sentence production (Chang, 2009) which learned to generate a range of complex polar questions from a structured message without positive exemplars in the input. The model also generated different types of error in development that were similar in magnitude to those in children (e.g., auxiliary doubling, Ambridge, Rowland, & Pine, 2008; Crain & Nakayama, 1987). Through model comparisons we trace how meaning constraints and linguistic experience interact during the acquisition of auxiliary inversion. Our results suggest that auxiliary inversion rules in English can be acquired without innate syntactic principles, as long as it is assumed that speakers who ask complex questions express messages that are structured into multiple propositions.
  • FitzPatrick, I., & Weber, K. (2008). “Il piccolo principe est allé”: Processing of language switches in auditory sentence comprehension. Journal of Neuroscience, 28(18), 4581-4582. doi:10.1523/JNEUROSCI.0905-08.2008.
  • Floyd, S., San Roque, L., & Majid, A. (2018). Smell is coded in grammar and frequent in discourse: Cha'palaa olfactory language in cross-linguistic perspective. Journal of Linguistic Anthropology, 28(2), 175-196. doi:10.1111/jola.12190.

    Abstract

    It has long been claimed that there is no lexical field of smell, and that smell is of too little validity to be expressed in grammar. We demonstrate both claims are false. The Cha'palaa language (Ecuador) has at least 15 abstract smell terms, each of which is formed using a type of classifier previously thought not to exist. Moreover, using conversational corpora we show that Cha'palaa speakers also talk about smell more than Imbabura Quechua and English speakers. Together, this shows how language and social interaction may jointly reflect distinct cultural orientations towards sensory experience in general and olfaction in particular.
  • Floyd, S. (2008). The Pirate media economy and the emergence of Quichua language media spaces in Ecuador. Anthropology of Work Review, 29(2), 34-41. doi:10.1111/j.1548-1417.2008.00012.x.

    Abstract

    This paper gives an account of the pirate media economy of Ecuador and its role in the emergence of indigenous Quichua-language media spaces, identifying the different parties involved in this economy, discussing their relationship to the parallel ‘‘legitimate’’ media economy, and considering the implications of this informal media market for Quichua linguistic and cultural reproduction. As digital recording and playback technology has become increasingly more affordable and widespread over recent years, black markets have grown up worldwide, based on cheap ‘‘illegal’’ reproduction of commercial media, today sold by informal entrepreneurs in rural markets, shops and street corners around Ecuador. Piggybacking on this pirate infrastructure, Quichua-speaking media producers and consumers have begun to circulate indigenous-language video at an unprecedented rate, helped by small-scale merchants who themselves profit by supplying market demands for positive images of indigenous people. In a context of a national media that has tended to silence indigenous voices rather than amplify them, informal media producers, consumers and vendors are developing relationships that open meaningful media spaces within the particular social, economic and linguistic contexts of Ecuador.
  • Floyd, S., Rossi, G., Baranova, J., Blythe, J., Dingemanse, M., Kendrick, K. H., Zinken, J., & Enfield, N. J. (2018). Universals and cultural diversity in the expression of gratitude. Royal Society Open Science, 5: 180391. doi:10.1098/rsos.180391.

    Abstract

    Gratitude is argued to have evolved to motivate and maintain social reciprocity among people, and to be linked to a wide range of positive effects — social, psychological, and even physical. But is socially reciprocal behaviour dependent on the expression of gratitude, for example by saying "thank you" as in English? Current research has not included cross-cultural elements, and has tended to conflate gratitude as an emotion with gratitude as a linguistic practice, as might appear to be the case in English. Here we ask to what extent people actually express gratitude in different societies by focussing on episodes of everyday life where someone obtains a good, service, or support from another, and comparing these episodes across eight languages from five continents. What we find is that expressions of gratitude in these episodes are remarkably rare, suggesting that social reciprocity in everyday life relies on tacit understandings of people’s rights and duties surrounding mutual assistance and collaboration. At the same time, we also find minor cross-cultural variation, with slightly higher rates in Western European languages English and Italian, showing that universal tendencies of social reciprocity should not be conflated with more culturally variable practices of expressing gratitude. Our study complements previous experimental and culture-specific research on social reciprocity with a systematic comparison of audiovisual corpora of naturally occurring social interaction from different cultures from around the world.
  • Folia, V., Uddén, J., Forkstam, C., Ingvar, M., Hagoort, P., & Petersson, K. M. (2008). Implicit learning and dyslexia. Annals of the New York Academy of Sciences, 1145, 132-150. doi:10.1196/annals.1416.012.

    Abstract

    Several studies have reported an association between dyslexia and implicit learning deficits. It has been suggested that the weakness in implicit learning observed in dyslexic individuals may be related to sequential processing and implicit sequence learning. In the present article, we review the current literature on implicit learning and dyslexia. We describe a novel, forced-choice structural "mere exposure" artificial grammar learning paradigm and characterize this paradigm in normal readers in relation to the standard grammaticality classification paradigm. We argue that preference classification is a more optimal measure of the outcome of implicit acquisition since in the preference version participants are kept completely unaware of the underlying generative mechanism, while in the grammaticality version, the subjects have, at least in principle, been informed about the existence of an underlying complex set of rules at the point of classification (but not during acquisition). On the basis of the "mere exposure effect," we tested the prediction that the development of preference will correlate with the grammaticality status of the classification items. In addition, we examined the effects of grammaticality (grammatical/nongrammatical) and associative chunk strength (ACS; high/low) on the classification tasks (preference/grammaticality). Using a balanced ACS design in which the factors of grammaticality (grammatical/nongrammatical) and ACS (high/low) were independently controlled in a 2 × 2 factorial design, we confirmed our predictions. We discuss the suitability of this task for further investigation of the implicit learning characteristics in dyslexia.
  • Forkel, S. J., & Catani, M. (2018). Lesion mapping in acute stroke aphasia and its implications for recovery. Neuropsychologia, 115, 88-100. doi:10.1016/j.neuropsychologia.2018.03.036.

    Abstract

    Patients with stroke offer a unique window into understanding human brain function. Mapping stroke lesions poses several challenges due to the complexity of the lesion anatomy and the mechanisms causing local and remote disruption on brain networks. In this prospective longitudinal study, we compare standard and advanced approaches to white matter lesion mapping applied to acute stroke patients with aphasia. Eighteen patients with acute left hemisphere stroke were recruited and scanned within two weeks from symptom onset. Aphasia assessment was performed at baseline and six-month follow-up. Structural and diffusion MRI contrasts indicated an area of maximum overlap in the anterior external/extreme capsule with diffusion images showing a larger overlap extending into posterior perisylvian regions. Anatomical predictors of recovery included damage to ipsilesional tracts (as shown by both structural and diffusion images) and contralesional tracts (as shown by diffusion images only). These findings indicate converging results from structural and diffusion lesion mapping methods but also clear differences between the two approaches in their ability to identify predictors of recovery outside the lesioned regions.
  • Forkstam, C., & Petersson, K. M. (2005). Towards an explicit account of implicit learning. Current Opinion in Neurology, 18(4), 435-441.

    Abstract

    Purpose of review: The human brain supports acquisition mechanisms that can extract structural regularities implicitly from experience without the induction of an explicit model. Reber defined the process by which an individual comes to respond appropriately to the statistical structure of the input ensemble as implicit learning. He argued that the capacity to generalize to new input is based on the acquisition of abstract representations that reflect underlying structural regularities in the acquisition input. We focus this review of the implicit learning literature on studies published during 2004 and 2005. We will not review studies of repetition priming ('implicit memory'). Instead we focus on two commonly used experimental paradigms: the serial reaction time task and artificial grammar learning. Previous comprehensive reviews can be found in Seger's 1994 article and the Handbook of Implicit Learning. Recent findings: Emerging themes include the interaction between implicit and explicit processes, the role of the medial temporal lobe, developmental aspects of implicit learning, age-dependence, the role of sleep and consolidation. Summary: The attempts to characterize the interaction between implicit and explicit learning are promising although not well understood. The same can be said about the role of sleep and consolidation. Despite the fact that lesion studies have relatively consistently suggested that the medial temporal lobe memory system is not necessary for implicit learning, a number of functional magnetic resonance studies have reported medial temporal lobe activation in implicit learning. This issue merits further research. Finally, the clinical relevance of implicit learning remains to be determined.
  • Forkstam, C., Elwér, A., Ingvar, M., & Petersson, K. M. (2008). Instruction effects in implicit artificial grammar learning: A preference for grammaticality. Brain Research, 1221, 80-92. doi:10.1016/j.brainres.2008.05.005.

    Abstract

    Human implicit learning can be investigated with implicit artificial grammar learning, a paradigm that has been proposed as a simple model for aspects of natural language acquisition. In the present study we compared the typical yes–no grammaticality classification, with yes–no preference classification. In the case of preference instruction no reference to the underlying generative mechanism (i.e., grammar) is needed and the subjects are therefore completely uninformed about an underlying structure in the acquisition material. In experiment 1, subjects engaged in a short-term memory task using only grammatical strings without performance feedback for 5 days. As a result of the 5 acquisition days, classification performance was independent of instruction type and both the preference and the grammaticality group acquired relevant knowledge of the underlying generative mechanism to a similar degree. Changing the grammatical strings to random strings in the acquisition material (experiment 2) resulted in classification being driven by local substring familiarity. Contrasting repeated vs. non-repeated preference classification (experiment 3) showed that the effect of local substring familiarity decreases with repeated classification. This was not the case for repeated grammaticality classifications. We conclude that classification performance is largely independent of instruction type and that forced-choice preference classification is equivalent to the typical grammaticality classification.
  • Frances, C., Costa, A., & Baus, C. (2018). On the effects of regional accents on memory and credibility. Acta Psychologica, 186, 63-70. doi:10.1016/j.actpsy.2018.04.003.

    Abstract

    The information we obtain from how speakers sound—for example their accent—affects how we interpret the messages they convey. A clear example is foreign accented speech, where reduced intelligibility and speaker's social categorization (out-group member) affect memory and the credibility of the message (e.g., less trustworthiness). In the present study, we go one step further and ask whether evaluations of messages are also affected by regional accents—accents from a different region than the listener. In the current study, we report results from three experiments on immediate memory recognition and immediate credibility assessments as well as the illusory truth effect. These revealed no differences between messages conveyed in local—from the same region as the participant—and regional accents—from native speakers of a different country than the participants. Our results suggest that when the accent of a speaker has high intelligibility, social categorization by accent does not seem to negatively affect how we treat the speakers' messages.
  • Francisco, A. A., Groen, M. A., Jesse, A., & McQueen, J. M. (2017). Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability. Learning and Individual Differences, 54, 60-72. doi:10.1016/j.lindif.2017.01.003.

    Abstract

    The aim of this study was to clarify whether audiovisual processing accounted for variance in reading and reading-related abilities, beyond the effect of a set of measures typically associated with individual differences in both reading and audiovisual processing. Testing adults with and without a diagnosis of dyslexia, we showed that—across all participants, and after accounting for variance in cognitive abilities—audiovisual temporal sensitivity contributed uniquely to variance in reading errors. This is consistent with previous studies demonstrating an audiovisual deficit in dyslexia. Additionally, we showed that speechreading (identification of speech based on visual cues from the talking face alone) was a unique contributor to variance in phonological awareness in dyslexic readers only: those who scored higher on speechreading, scored lower on phonological awareness. This suggests a greater reliance on visual speech as a compensatory mechanism when processing auditory speech is problematic. A secondary aim of this study was to better understand the nature of dyslexia. The finding that a sub-group of dyslexic readers scored low on phonological awareness and high on speechreading is consistent with a hybrid perspective of dyslexia: There are multiple possible pathways to reading impairment, which may translate into multiple profiles of dyslexia.
  • Francisco, A. A., Jesse, A., Groen, M. A., & McQueen, J. M. (2017). A general audiovisual temporal processing deficit in adult readers with dyslexia. Journal of Speech, Language, and Hearing Research, 60, 144-158. doi:10.1044/2016_JSLHR-H-15-0375.

    Abstract

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. Results: Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories. Conclusions: The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. But the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.
  • Francisco, A. A., Takashima, A., McQueen, J. M., Van den Bunt, M., Jesse, A., & Groen, M. A. (2018). Adult dyslexic readers benefit less from visual input during audiovisual speech processing: fMRI evidence. Neuropsychologia, 117, 454-471. doi:10.1016/j.neuropsychologia.2018.07.009.

    Abstract

    The aim of the present fMRI study was to investigate whether typical and dyslexic adult readers differed in the neural correlates of audiovisual speech processing. We tested for Blood Oxygen-Level Dependent (BOLD) activity differences between these two groups in a 1-back task, as they processed written (word, illegal consonant strings) and spoken (auditory, visual and audiovisual) stimuli. When processing written stimuli, dyslexic readers showed reduced activity in the supramarginal gyrus, a region suggested to play an important role in phonological processing, but only when they processed strings of consonants, not when they read words. During the speech perception tasks, dyslexic readers were only slower than typical readers in their behavioral responses in the visual speech condition. Additionally, dyslexic readers presented reduced neural activation in the auditory, the visual, and the audiovisual speech conditions. The groups also differed in terms of superadditivity, with dyslexic readers showing decreased neural activation in the regions of interest. An additional analysis focusing on vision-related processing during the audiovisual condition showed diminished activation for the dyslexic readers in a fusiform gyrus cluster. Our results thus suggest that there are differences in audiovisual speech processing between dyslexic and normal readers. These differences might be explained by difficulties in processing the unisensory components of audiovisual speech, more specifically, dyslexic readers may benefit less from visual information during audiovisual speech processing than typical readers. Given that visual speech processing supports the development of phonological skills fundamental in reading, differences in processing of visual speech could contribute to differences in reading ability between typical and dyslexic readers.
  • Frank, S. L., Koppen, M., Noordman, L. G. M., & Vonk, W. (2008). World knowledge in computational models of discourse comprehension. Discourse Processes, 45(6), 429-463. doi:10.1080/01638530802069926.

    Abstract

    Because higher level cognitive processes generally involve the use of world knowledge, computational models of these processes require the implementation of a knowledge base. This article identifies and discusses 4 strategies for dealing with world knowledge in computational models: disregarding world knowledge, ad hoc selection, extraction from text corpora, and implementation of all knowledge about a simplified microworld. Each of these strategies is illustrated by a detailed discussion of a model of discourse comprehension. It is argued that seemingly successful modeling results are uninformative if knowledge is implemented ad hoc or not at all, that knowledge extracted from large text corpora is not appropriate for discourse comprehension, and that a suitable implementation can be obtained by applying the microworld strategy.
  • Frank, M. C., Bergelson, E., Bergmann, C., Cristia, A., Floccia, C., Gervain, J., Hamlin, J. K., Hannon, E. E., Kline, M., Levelt, C., Lew-Williams, C., Nazzi, T., Panneton, R., Rabagliati, H., Soderstrom, M., Sullivan, J., Waxman, S., & Yurovsky, D. (2017). A collaborative approach to infant research: Promoting reproducibility, best practices, and theory-building. Infancy, 22(4), 421-435. doi:10.1111/infa.12182.

    Abstract

    The ideal of scientific progress is that we accumulate measurements and integrate these into theory, but recent discussion of replicability issues has cast doubt on whether psychological research conforms to this model. Developmental research—especially with infant participants—also has discipline-specific replicability challenges, including small samples and limited measurement methods. Inspired by collaborative replication efforts in cognitive and social psychology, we describe a proposal for assessing and promoting replicability in infancy research: large-scale, multi-laboratory replication efforts aiming for a more precise understanding of key developmental phenomena. The ManyBabies project, our instantiation of this proposal, will not only help us estimate how robust and replicable these phenomena are, but also gain new theoretical insights into how they vary across ages, linguistic communities, and measurement methods. This project has the potential for a variety of positive outcomes, including less-biased estimates of theoretically important effects, estimates of variability that can be used for later study planning, and a series of best-practices blueprints for future infancy research.
  • Frank, S. L., & Yang, J. (2018). Lexical representation explains cortical entrainment during speech comprehension. PLoS One, 13(5): e0197304. doi:10.1371/journal.pone.0197304.

    Abstract

    Results from a recent neuroimaging study on spoken sentence comprehension have been interpreted as evidence for cortical entrainment to hierarchical syntactic structure. We present a simple computational model that predicts the power spectra from this study, even though the model's linguistic knowledge is restricted to the lexical level, and word-level representations are not combined into higher-level units (phrases or sentences). Hence, the cortical entrainment results can also be explained from the lexical properties of the stimuli, without recourse to hierarchical syntax.
  • Frank, S. L., & Willems, R. M. (2017). Word predictability and semantic similarity show distinct patterns of brain activity during language comprehension. Language, Cognition and Neuroscience, 32(9), 1192-1203. doi:10.1080/23273798.2017.1323109.

    Abstract

    We investigate the effects of two types of relationship between the words of a sentence or text – predictability and semantic similarity – by reanalysing electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data from studies in which participants comprehend naturalistic stimuli. Each content word's predictability given previous words is quantified by a probabilistic language model, and semantic similarity to previous words is quantified by a distributional semantics model. Brain activity time-locked to each word is regressed on the two model-derived measures. Results show that predictability and semantic similarity have near identical N400 effects but are dissociated in the fMRI data, with word predictability related to activity in, among others, the visual word-form area, and semantic similarity related to activity in areas associated with the semantic network. This indicates that both predictability and similarity play a role during natural language comprehension and modulate distinct cortical regions.
  • Franke, B., Hoogman, M., Vasquez, A. A., Heister, J., Savelkoul, P., Naber, M., Scheffer, H., Kiemeney, L., Kan, C., Kooij, J., & Buitelaar, J. (2008). Association of the dopamine transporter (SLC6A3/DAT1) gene 9-6 haplotype with adult ADHD. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 147, 1576-1579. doi:10.1002/ajmg.b.30861.

    Abstract

    ADHD is a neuropsychiatric disorder characterized by chronic hyperactivity, inattention and impulsivity, which affects about 5% of school-age children. ADHD persists into adulthood in at least 15% of cases. It is highly heritable and familial influences seem strongest for ADHD persisting into adulthood. However, most of the genetic research in ADHD has been carried out in children with the disorder. The gene that has received most attention in ADHD genetics is SLC6A3/DAT1 encoding the dopamine transporter. In the current study we attempted to replicate in adults with ADHD the reported association of a 10–6 SLC6A3-haplotype, formed by the 10-repeat allele of the variable number of tandem repeat (VNTR) polymorphism in the 3′ untranslated region of the gene and the 6-repeat allele of the VNTR in intron 8 of the gene, with childhood ADHD. In addition, we wished to explore the role of a recently described VNTR in intron 3 of the gene. Two hundred sixteen patients and 528 controls were included in the study. We found a 9–6 SLC6A3-haplotype, rather than the 10–6 haplotype, to be associated with ADHD in adults. The intron 3 VNTR showed no association with adult ADHD. Our findings converge with earlier reports and suggest that age is an important factor to be taken into account when assessing the association of SLC6A3 with ADHD. If confirmed in other studies, the differential association of the gene with ADHD in children and in adults might imply that SLC6A3 plays a role in modulating the ADHD phenotype, rather than causing it.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Eisner, F., & Hagoort, P. (2017). Individual variability as a window on production-perception interactions in speech motor control. The Journal of the Acoustical Society of America, 142(4), 2007-2018. doi:10.1121/1.5006899.

    Abstract

    An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: if speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets (that is, targets with less within-phoneme variability and greater between-phoneme distances), confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Psychonomic Bulletin & Review, 25(4), 1458-1467. doi:10.3758/s13423-018-1494-x.

    Abstract

    When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. But some speakers follow the perturbation. In the current study, we investigated whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. The results suggest that whether a perturbation-related response is opposing or following depends on ongoing fluctuations of the production system: It initially responds by doing the opposite of what it was doing. This effect and the non-trivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production-system’s state at the time of perturbation.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2018). Self-monitoring in the cerebral cortex: Neural responses to pitch-perturbed auditory feedback during speech production. NeuroImage, 179, 326-336. doi:10.1016/j.neuroimage.2018.06.061.

    Abstract

    Speaking is a complex motor skill which requires near instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing.
  • Frega, M., van Gestel, S. H. C., Linda, K., Van der Raadt, J., Keller, J., Van Rhijn, J. R., Schubert, D., Albers, C. A., & Kasri, N. N. (2017). Rapid neuronal differentiation of induced pluripotent stem cells for measuring network activity on micro-electrode arrays. Journal of Visualized Experiments, e45900. doi:10.3791/54900.

    Abstract

    Neurons derived from human induced Pluripotent Stem Cells (hiPSCs) provide a promising new tool for studying neurological disorders. In the past decade, many protocols for differentiating hiPSCs into neurons have been developed. However, these protocols are often slow with high variability, low reproducibility, and low efficiency. In addition, the neurons obtained with these protocols are often immature and lack adequate functional activity both at the single-cell and network levels unless the neurons are cultured for several months. Partially due to these limitations, the functional properties of hiPSC-derived neuronal networks are still not well characterized. Here, we adapt a recently published protocol that describes production of human neurons from hiPSCs by forced expression of the transcription factor neurogenin-2. This protocol is rapid (yielding mature neurons within 3 weeks) and efficient, with nearly 100% conversion efficiency of transduced cells (>95% of DAPI-positive cells are MAP2 positive). Furthermore, the protocol yields a homogeneous population of excitatory neurons that would allow the investigation of cell-type specific contributions to neurological disorders. We modified the original protocol by generating stably transduced hiPSC cells, giving us explicit control over the total number of neurons. These cells are then used to generate hiPSC-derived neuronal networks on micro-electrode arrays. In this way, the spontaneous electrophysiological activity of hiPSC-derived neuronal networks can be measured and characterized, while retaining interexperimental consistency in terms of cell density. The presented protocol is broadly applicable, especially for mechanistic and pharmacological studies on human neuronal networks.

    Additional information

    video component of this article
  • Friederici, A. D., & Levelt, W. J. M. (1986). Cognitive processes of spatial coordinate assignment: On weighting perceptual cues. Naturwissenschaften, 73, 455-458.
  • Frost, R. L. A., Monaghan, P., & Tatsumi, T. (2017). Domain-general mechanisms for speech segmentation: The role of duration information in language learning. Journal of Experimental Psychology: Human Perception and Performance, 43(3), 466-476. doi:10.1037/xhp0000325.

    Abstract

    Speech segmentation is supported by multiple sources of information that may either inform language processing specifically, or serve learning more broadly. The Iambic/Trochaic Law (ITL), where increased duration indicates the end of a group and increased emphasis indicates the beginning of a group, has been proposed as a domain-general mechanism that also applies to language. However, language background has been suggested to modulate use of the ITL, meaning that these perceptual grouping preferences may instead be a consequence of language exposure. To distinguish between these accounts, we exposed native-English and native-Japanese listeners to sequences of speech (Experiment 1) and nonspeech stimuli (Experiment 2), and examined segmentation using a 2AFC task. Duration was manipulated over 3 conditions: sequences contained either an initial-item duration increase, or a final-item duration increase, or items of uniform duration. In Experiment 1, language background did not affect the use of duration as a cue for segmenting speech in a structured artificial language. In Experiment 2, the same results were found for grouping structured sequences of visual shapes. The results are consistent with proposals that duration information draws upon a domain-general mechanism that can apply to the special case of language acquisition.
  • Frost, R. L. A., & Monaghan, P. (2017). Sleep-driven computations in speech processing. PLoS One, 12(1): e0169538. doi:10.1371/journal.pone.0169538.

    Abstract

    Acquiring language requires segmenting speech into individual words, and abstracting over those words to discover grammatical structure. However, these tasks can be conflicting—on the one hand requiring memorisation of precise sequences that occur in speech, and on the other requiring a flexible reconstruction of these sequences to determine the grammar. Here, we examine whether speech segmentation and generalisation of grammar can occur simultaneously—with the conflicting requirements for these tasks being overcome by sleep-related consolidation. After exposure to an artificial language comprising words containing non-adjacent dependencies, participants underwent periods of consolidation involving either sleep or wake. Participants who slept before testing demonstrated a sustained boost to word learning and a short-term improvement to grammatical generalisation of the non-adjacencies, with improvements after sleep outweighing gains seen after an equal period of wake. Thus, we propose that sleep may facilitate processing for these conflicting tasks in language acquisition, but with enhanced benefits for speech segmentation.

    Additional information

    Data available
  • Ganushchak, L. Y., & Schiller, N. O. (2008). Brain error-monitoring activity is affected by semantic relatedness: An event-related brain potentials study. Journal of Cognitive Neuroscience, 20(5), 927-940. doi:10.1162/jocn.2008.20514.

    Abstract

    Speakers continuously monitor what they say. Sometimes, self-monitoring malfunctions and errors pass undetected and uncorrected. In the field of action monitoring, an event-related brain potential, the error-related negativity (ERN), is associated with error processing. The present study relates the ERN to verbal self-monitoring and investigates how the ERN is affected by auditory distractors during verbal monitoring. We found that the ERN was largest following errors that occurred after semantically related distractors had been presented, as compared to semantically unrelated ones. This result demonstrates that the ERN is sensitive not only to response conflict resulting from the incompatibility of motor responses but also to more abstract lexical retrieval conflict resulting from activation of multiple lexical entries. This, in turn, suggests that the functioning of the verbal self-monitoring system during speaking is comparable to other performance monitoring, such as action monitoring.
  • Ganushchak, L. Y., & Schiller, N. O. (2008). Motivation and semantic context affect brain error-monitoring activity: An event-related brain potentials study. NeuroImage, 39, 395-405. doi:10.1016/j.neuroimage.2007.09.001.

    Abstract

    During speech production, we continuously monitor what we say. In situations in which speech errors potentially have more severe consequences, e.g. during a public presentation, our verbal self-monitoring system may pay more attention to preventing errors than in situations in which speech errors are more acceptable, such as a casual conversation. In an event-related potential study, we investigated whether or not motivation affected participants’ performance using a picture naming task in a semantic blocking paradigm. Semantic context of to-be-named pictures was manipulated; blocks were semantically related (e.g., cat, dog, horse, etc.) or semantically unrelated (e.g., cat, table, flute, etc.). Motivation was manipulated independently by monetary reward. The motivation manipulation did not affect error rate during picture naming. However, the high-motivation condition yielded increased amplitude and latency values of the error-related negativity (ERN) compared to the low-motivation condition, presumably indicating higher monitoring activity. Furthermore, participants showed semantic interference effects in reaction times and error rates. The ERN amplitude was also larger during semantically related than unrelated blocks, presumably indicating that semantic relatedness induces more conflict between possible verbal responses.
  • Gao, X., & Jiang, T. (2018). Sensory constraints on perceptual simulation during sentence reading. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 848-855. doi:10.1037/xhp0000475.

    Abstract

    Resource-constrained models of language processing predict that perceptual simulation during language understanding would be compromised by sensory limitations (such as reading text in unfamiliar/difficult font), whereas strong versions of embodied theories of language would predict that simulating perceptual symbols in language would not be impaired even under sensory-constrained situations. In 2 experiments, sensory decoding difficulty was manipulated by using easy and hard fonts to study perceptual simulation during sentence reading (Zwaan, Stanfield, & Yaxley, 2002). Results indicated that simulating perceptual symbols in language was not compromised by surface-form decoding challenges such as difficult font, suggesting relative resilience of embodied language processing in the face of certain sensory constraints. Further implications for learning from text and individual differences in language processing will be discussed.
  • Garcia, R., Dery, J. E., Roeser, J., & Höhle, B. (2018). Word order preferences of Tagalog-speaking adults and children. First Language, 38(6), 617-640. doi:10.1177/0142723718790317.

    Abstract

    This article investigates the word order preferences of Tagalog-speaking adults and five- and seven-year-old children. The participants were asked to complete sentences to describe pictures depicting actions between two animate entities. Adults preferred agent-initial constructions in the patient voice but not in the agent voice, while the children produced mainly agent-initial constructions regardless of voice. This agent-initial preference, despite the lack of a close link between the agent and the subject in Tagalog, shows that this word order preference is not merely syntactically-driven (subject-initial preference). Additionally, the children’s agent-initial preference in the agent voice, contrary to the adults’ lack of preference, shows that children do not respect the subject-last principle of ordering Tagalog full noun phrases. These results suggest that language-specific optional features like a subject-last principle take longer to be acquired.
  • Gaspard III, J. C., Bauer, G. B., Mann, D. A., Boerner, K., Denum, L., Frances, C., & Reep, R. L. (2017). Detection of hydrodynamic stimuli by the postcranial body of Florida manatees (Trichechus manatus latirostris). Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology, 203, 111-120. doi:10.1007/s00359-016-1142-8.

    Abstract

    Manatees live in shallow, frequently turbid waters. The sensory means by which they navigate in these conditions are unknown. Poor visual acuity, lack of echolocation, and modest chemosensation suggest that other modalities play an important role. Rich innervation of sensory hairs that cover the entire body and enlarged somatosensory areas of the brain suggest that tactile senses are good candidates. Previous tests of detection of underwater vibratory stimuli indicated that they use passive movement of the hairs to detect particle displacements in the vicinity of a micron or less for frequencies from 10 to 150 Hz. In the current study, hydrodynamic stimuli were created by a sinusoidally oscillating sphere that generated a dipole field at frequencies from 5 to 150 Hz. Go/no-go tests of manatee postcranial mechanoreception of hydrodynamic stimuli indicated excellent sensitivity but about an order of magnitude less than the facial region. When the vibrissae were trimmed, detection thresholds were elevated, suggesting that the vibrissae were an important means by which detection occurred. Manatees were also highly accurate in two-choice directional discrimination: greater than 90% correct at all frequencies tested. We hypothesize that manatees utilize vibrissae as a three-dimensional array to detect and localize low-frequency hydrodynamic stimuli.
  • Gayán, J., Willcutt, E. G., Fisher, S. E., Francks, C., Cardon, L. R., Olson, R. K., Pennington, B. F., Smith, S., Monaco, A. P., & DeFries, J. C. (2005). Bivariate linkage scan for reading disability and attention-deficit/hyperactivity disorder localizes pleiotropic loci. Journal of Child Psychology and Psychiatry, 46(10), 1045-1056. doi:10.1111/j.1469-7610.2005.01447.x.

    Abstract

    BACKGROUND: There is a growing interest in the study of the genetic origins of comorbidity, a direct consequence of the recent findings of genetic loci that are seemingly linked to more than one disorder. There are several potential causes for these shared regions of linkage, but one possibility is that these loci may harbor genes with manifold effects. The established genetic correlation between reading disability (RD) and attention-deficit/hyperactivity disorder (ADHD) suggests that their comorbidity is due at least in part to genes that have an impact on several phenotypes, a phenomenon known as pleiotropy. METHODS: We employ a bivariate linkage test for selected samples that could help identify these pleiotropic loci. This linkage method was employed to carry out the first bivariate genome-wide analysis for RD and ADHD, in a selected sample of 182 sibling pairs. RESULTS: We found evidence for a novel locus at chromosome 14q32 (multipoint LOD=2.5; singlepoint LOD=3.9) with a pleiotropic effect on RD and ADHD. Another locus at 13q32, which had been implicated in previous univariate scans of RD and ADHD, seems to have a pleiotropic effect on both disorders. 20q11 is also suggested as a pleiotropic locus. Other loci previously implicated in RD or ADHD did not exhibit bivariate linkage. CONCLUSIONS: Some loci are suggested as having pleiotropic effects on RD and ADHD, while others might have unique effects. These results highlight the utility of this bivariate linkage method to study pleiotropy.
  • Gerrits, F., Senft, G., & Wisse, D. (2018). Bomiyoyeva and bomduvadova: Two rare structures on the Trobriand Islands exclusively reserved for Tabalu chiefs. Anthropos, 113, 93-113. doi:10.5771/0257-9774-2018-1-93.

    Abstract

    This article presents information about two so far undescribed buildings made by the Trobriand Islanders, the bomiyoyeva and the bomduvadova. These structures are connected to the highest-ranking chiefs living in Labai and Omarakana on Kiriwina Island. They highlight the power and eminence of these chiefs. After a brief report on the history of this project, the structure of the two houses, their function, and their use are described and information on their construction and their mythical background is provided. Finally, everyday as well as ritual, social, and political functions of both buildings are discussed. [Melanesia, Trobriand Islands, Tabalu chiefs, yams houses, bomiyoyeva, bomduvadova, authoritative capacities]

    Additional information

    link to journal
  • Gialluisi, A., Guadalupe, T., Francks, C., & Fisher, S. E. (2017). Neuroimaging genetic analyses of novel candidate genes associated with reading and language. Brain and Language, 172, 9-15. doi:10.1016/j.bandl.2016.07.002.

    Abstract

    Neuroimaging measures provide useful endophenotypes for tracing genetic effects on reading and language. A recent Genome-Wide Association Scan Meta-Analysis (GWASMA) of reading and language skills (N = 1862) identified strongest associations with the genes CCDC136/FLNC and RBFOX2. Here, we follow up the top findings from this GWASMA, through neuroimaging genetics in an independent sample of 1275 healthy adults. To minimize multiple-testing, we used a multivariate approach, focusing on cortical regions consistently implicated in prior literature on developmental dyslexia and language impairment. Specifically, we investigated grey matter surface area and thickness of five regions selected a priori: middle temporal gyrus (MTG); pars opercularis and pars triangularis in the inferior frontal gyrus (IFG-PO and IFG-PT); postcentral parietal gyrus (PPG) and superior temporal gyrus (STG). First, we analysed the top associated polymorphisms from the reading/language GWASMA: rs59197085 (CCDC136/FLNC) and rs5995177 (RBFOX2). There was significant multivariate association of rs5995177 with cortical thickness, driven by effects on left PPG, right MTG, right IFG (both PO and PT), and STG bilaterally. The minor allele, previously associated with reduced reading-language performance, showed negative effects on grey matter thickness. Next, we performed exploratory gene-wide analysis of CCDC136/FLNC and RBFOX2; no other associations surpassed significance thresholds. RBFOX2 encodes an important neuronal regulator of alternative splicing. Thus, the prior reported association of rs5995177 with reading/language performance could potentially be mediated by reduced thickness in associated cortical regions. In future, this hypothesis could be tested using sufficiently large samples containing both neuroimaging data and quantitative reading/language scores from the same individuals.

    Additional information

    mmc1.docx
  • Gisladottir, R. S., Bögels, S., & Levinson, S. C. (2018). Oscillatory brain responses reflect anticipation during comprehension of speech acts in spoken dialogue. Frontiers in Human Neuroscience, 12: 34. doi:10.3389/fnhum.2018.00034.

    Abstract

    Everyday conversation requires listeners to quickly recognize verbal actions, so-called speech acts, from the underspecified linguistic code and prepare a relevant response within the tight time constraints of turn-taking. The goal of this study was to determine the time-course of speech act recognition by investigating oscillatory EEG activity during comprehension of spoken dialogue. Participants listened to short, spoken dialogues with target utterances that delivered three distinct speech acts (Answers, Declinations, Pre-offers). The targets were identical across conditions at lexico-syntactic and phonetic/prosodic levels but differed in the pragmatic interpretation of the speech act performed. Speech act comprehension was associated with reduced power in the alpha/beta bands just prior to Declination speech acts, relative to Answers and Pre-offers. In addition, we observed reduced power in the theta band during the beginning of Declinations, relative to Answers. Based on the role of alpha and beta desynchronization in anticipatory processes, the results are taken to indicate that anticipation plays a role in speech act recognition. Anticipation of speech acts could be critical for efficient turn-taking, allowing interactants to quickly recognize speech acts and respond within the tight time frame characteristic of conversation. The results show that anticipatory processes can be triggered by the characteristics of the interaction, including the speech act type.

    Additional information

    data sheet 1.pdf
  • Goldin-Meadow, S., Chee So, W., Ozyurek, A., & Mylander, C. (2008). The natural order of events: how speakers of different languages represent events nonverbally. Proceedings of the National Academy of Sciences of the USA, 105(27), 9163-9168. doi:10.1073/pnas.0710060105.

    Abstract

    To test whether the language we speak influences our behavior even when we are not speaking, we asked speakers of four languages differing in their predominant word orders (English, Turkish, Spanish, and Chinese) to perform two nonverbal tasks: a communicative task (describing an event by using gesture without speech) and a noncommunicative task (reconstructing an event with pictures). We found that the word orders speakers used in their everyday speech did not influence their nonverbal behavior. Surprisingly, speakers of all four languages used the same order on both nonverbal tasks. This order, actor–patient–act, is analogous to the subject–object–verb pattern found in many languages of the world and, importantly, in newly developing gestural languages. The findings provide evidence for a natural order that we impose on events when describing and reconstructing them nonverbally and exploit when constructing language anew.

    Additional information

    GoldinMeadow_2008_naturalSuppl.pdf
  • Goodhew, S. C., & Kidd, E. (2017). Language use statistics and prototypical grapheme colours predict synaesthetes' and non-synaesthetes' word-colour associations. Acta Psychologica, 173, 73-86. doi:10.1016/j.actpsy.2016.12.008.

    Abstract

    Synaesthesia is the neuropsychological phenomenon in which individuals experience unusual sensory associations, such as experiencing particular colours in response to particular words. While it was once thought the particular pairings between stimuli were arbitrary and idiosyncratic to particular synaesthetes, there is now growing evidence for a systematic psycholinguistic basis to the associations. Here we sought to assess the explanatory value of quantifiable lexical association measures (via latent semantic analysis; LSA) in the pairings observed between words and colours in synaesthesia. To test this, we had synaesthetes report the particular colours they experienced in response to given concept words, and found that language association between the concept and colour words provided highly reliable predictors of the reported pairings. These results provide convergent evidence for a psycholinguistic basis to synaesthesia, but in a novel way, showing that exposure to particular patterns of associations in language can predict the formation of particular synaesthetic lexical-colour associations. Consistent with previous research, the prototypical synaesthetic colour for the first letter of the word also played a role in shaping the colour for the whole word, and this effect also interacted with language association, such that the effect of the colour for the first letter was stronger as the association between the concept word and the colour word in language increased. Moreover, when a group of non-synaesthetes were asked what colours they associated with the concept words, they produced very similar reports to the synaesthetes that were predicted by both language association and prototypical synaesthetic colour for the first letter of the word. This points to a shared linguistic experience generating the associations for both groups.
  • Goriot, C., Broersma, M., McQueen, J. M., Unsworth, S., & Van Hout, R. (2018). Language balance and switching ability in children acquiring English as a second language. Journal of Experimental Child Psychology, 173, 168-186. doi:10.1016/j.jecp.2018.03.019.

    Abstract

    This study investigated whether relative lexical proficiency in Dutch and English in child second language (L2) learners is related to executive functioning. Participants were Dutch primary school pupils of three different age groups (4–5, 8–9, and 11–12 years) who either were enrolled in an early-English schooling program or were age-matched controls not on that early-English program. Participants performed tasks that measured switching, inhibition, and working memory. Early-English program pupils had greater knowledge of English vocabulary and more balanced Dutch–English lexicons. In both groups, lexical balance, a ratio measure obtained by dividing vocabulary scores in English by those in Dutch, was related to switching but not to inhibition or working memory performance. These results show that for children who are learning an L2 in an instructional setting, and for whom managing two languages is not yet an automatized process, language balance may be more important than L2 proficiency in influencing the relation between childhood bilingualism and switching abilities.
  • Goudbeek, M., Cutler, A., & Smits, R. (2008). Supervised and unsupervised learning of multidimensionally varying nonnative speech categories. Speech Communication, 50(2), 109-125. doi:10.1016/j.specom.2007.07.003.

    Abstract

    The acquisition of novel phonetic categories is hypothesized to be affected by the distributional properties of the input, the relation of the new categories to the native phonology, and the availability of supervision (feedback). These factors were examined in four experiments in which listeners were presented with novel categories based on vowels of Dutch. Distribution was varied such that the categorization depended on the single dimension duration, the single dimension frequency, or both dimensions at once. Listeners were clearly sensitive to the distributional information, but unidimensional contrasts proved easier to learn than multidimensional. The native phonology was varied by comparing Spanish versus American English listeners. Spanish listeners found categorization by frequency easier than categorization by duration, but this was not true of American listeners, whose native vowel system makes more use of duration-based distinctions. Finally, feedback was either available or not; this comparison showed supervised learning to be significantly superior to unsupervised learning.
  • De Graaf, T. A., Duecker, F., Stankevich, Y., Ten Oever, S., & Sack, A. T. (2017). Seeing in the dark: Phosphene thresholds with eyes open versus closed in the absence of visual inputs. Brain Stimulation, 10(4), 828-835. doi:10.1016/j.brs.2017.04.127.

    Abstract

    Background: Voluntarily opening or closing our eyes results in fundamentally different input patterns and expectancies. Yet it remains unclear how our brains and visual systems adapt to these ocular states.
    Objective/Hypothesis: We here used transcranial magnetic stimulation (TMS) to probe the excitability of the human visual system with eyes open or closed, in the complete absence of visual inputs.
    Methods: Combining Bayesian staircase procedures with computer control of TMS pulse intensity allowed interleaved determination of phosphene thresholds (PT) in both conditions. We measured parieto-occipital EEG baseline activity in several stages to track oscillatory power in the alpha (8-12 Hz) frequency-band, which has previously been shown to be inversely related to phosphene perception.
    Results: Since closing the eyes generally increases alpha power, one might have expected a decrease in excitability (higher PT). While we confirmed a rise in alpha power with eyes closed, visual excitability was actually increased (PT was lower) with eyes closed.
    Conclusions: This suggests that, aside from oscillatory alpha power, additional neuronal mechanisms influence the excitability of early visual cortex. One of these may involve a more internally oriented mode of brain operation, engaged by closing the eyes. In this state, visual cortex may be more susceptible to top-down inputs, to facilitate for example multisensory integration or imagery/working memory, although alternative explanations remain possible.

    Additional information

    Supplementary data
  • Grabot, L., Kösem, A., Azizi, L., & Van Wassenhove, V. (2017). Prestimulus Alpha Oscillations and the Temporal Sequencing of Audio-visual Events. Journal of Cognitive Neuroscience, 29(9), 1566-1582. doi:10.1162/jocn_a_01145.

    Abstract

    Perceiving the temporal order of sensory events typically depends on participants' attentional state, thus likely on the endogenous fluctuations of brain activity. Using magnetoencephalography, we sought to determine whether spontaneous brain oscillations could disambiguate the perceived order of auditory and visual events presented in close temporal proximity, that is, at the individual's perceptual order threshold (Point of Subjective Simultaneity [PSS]). Two neural responses were found to index an individual's temporal order perception when contrasting brain activity as a function of perceived order (i.e., perceiving the sound first vs. perceiving the visual event first) given the same physical audiovisual sequence. First, average differences in prestimulus auditory alpha power indicated perceiving the correct ordering of audiovisual events irrespective of which sensory modality came first: a relatively low alpha power indicated perceiving auditory or visual first as a function of the actual sequence order. Additionally, the relative changes in the amplitude of the auditory (but not visual) evoked responses were correlated with participants' correct performance. Crucially, the sign of the magnitude difference in prestimulus alpha power and evoked responses between perceived audiovisual orders correlated with an individual's PSS. Taken together, our results suggest that spontaneous oscillatory activity cannot disambiguate subjective temporal order without prior knowledge of the individual's bias toward perceiving one or the other sensory modality first. Altogether, our results suggest that, under high perceptual uncertainty, the magnitude of prestimulus alpha (de)synchronization indicates the amount of compensation needed to overcome an individual's prior in the serial ordering and temporal sequencing of information.
  • Greenfield, P. M., Slobin, D., Cole, M., Gardner, H., Sylva, K., Levelt, W. J. M., Lucariello, J., Kay, A., Amsterdam, A., & Shore, B. (2017). Remembering Jerome Bruner: A series of tributes to Jerome “Jerry” Bruner, who died in 2016 at the age of 100, reflects the seminal contributions that led him to be known as a co-founder of the cognitive revolution. Observer, 30(2). Retrieved from http://www.psychologicalscience.org/observer/remembering-jerome-bruner.

    Abstract

    Jerome Seymour “Jerry” Bruner was born on October 1, 1915, in New York City. He began his academic career as psychology professor at Harvard University; he ended it as University Professor Emeritus at New York University (NYU) Law School. What happened at both ends and in between is the subject of the richly variegated remembrances that follow. On June 5, 2016, Bruner died in his Greenwich Village loft at age 100. He leaves behind his beloved partner Eleanor Fox, who was also his distinguished colleague at NYU Law School; his son Whitley; his daughter Jenny; and three grandchildren.

    Bruner’s interdisciplinarity and internationalism are seen in the remarkable variety of disciplines and geographical locations represented in the following tributes. The reader will find developmental psychology, anthropology, computer science, psycholinguistics, cognitive psychology, cultural psychology, education, and law represented; geographically speaking, the writers are located in the United States, Canada, the United Kingdom, and the Netherlands. The memories that follow are arranged in roughly chronological order according to when the writers had their first contact with Jerry Bruner.
  • Greenhill, S. J., Wu, C.-H., Hua, X., Dunn, M., Levinson, S. C., & Gray, R. D. (2017). Evolutionary dynamics of language systems. Proceedings of the National Academy of Sciences of the United States of America, 114(42), E8822-E8829. doi:10.1073/pnas.1700388114.

    Abstract

    Understanding how and why language subsystems differ in their evolutionary dynamics is a fundamental question for historical and comparative linguistics. One key dynamic is the rate of language change. While it is commonly thought that the rapid rate of change hampers the reconstruction of deep language relationships beyond 6,000–10,000 y, there are suggestions that grammatical structures might retain more signal over time than other subsystems, such as basic vocabulary. In this study, we use a Dirichlet process mixture model to infer the rates of change in lexical and grammatical data from 81 Austronesian languages. We show that, on average, most grammatical features actually change faster than items of basic vocabulary. The grammatical data show less schismogenesis, higher rates of homoplasy, and more bursts of contact-induced change than the basic vocabulary data. However, there is a core of grammatical and lexical features that are highly stable. These findings suggest that different subsystems of language have differing dynamics and that careful, nuanced models of language change will be needed to extract deeper signal from the noise of parallel evolution, areal readaptation, and contact.
  • Grieco-Calub, T. M., Ward, K. M., & Brehm, L. (2017). Multitasking During Degraded Speech Recognition in School-Age Children. Trends in Hearing, 21, 1-14. doi:10.1177/2331216516686786.

    Abstract

    Multitasking requires individuals to allocate their cognitive resources across different tasks. The purpose of the current study was to assess school-age children’s multitasking abilities during degraded speech recognition. Children (8 to 12 years old) completed a dual-task paradigm including a sentence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded with 8, 6, or 4 spectral channels and a visual monitoring (secondary) task. Children’s accuracy and reaction time on the visual monitoring task were quantified during the dual-task paradigm in each condition of the primary task and compared with single-task performance. Children experienced dual-task costs in the 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the visual monitoring task relative to baseline performance. In all conditions, children’s dual-task performance on the visual monitoring task was strongly predicted by their single-task (baseline) performance on the task. Results suggest that children’s proficiency with the secondary task contributes to the magnitude of dual-task costs while multitasking during degraded speech recognition.
  • Groen, I. I. A., Jahfari, S., Seijdel, N., Ghebreab, S., Lamme, V. A. F., & Scholte, H. S. (2018). Scene complexity modulates degree of feedback activity during object detection in natural scenes. PLoS Computational Biology, 14: e1006690. doi:10.1371/journal.pcbi.1006690.

    Abstract

    Selective brain responses to objects arise within a few hundreds of milliseconds of neural processing, suggesting that visual object recognition is mediated by rapid feed-forward activations. Yet disruption of neural responses in early visual cortex beyond feed-forward processing stages affects object recognition performance. Here, we unite these discrepant findings by reporting that object recognition involves enhanced feedback activity (recurrent processing within early visual cortex) when target objects are embedded in natural scenes that are characterized by high complexity. Human participants performed an animal target detection task on natural scenes with low, medium or high complexity as determined by a computational model of low-level contrast statistics. Three converging lines of evidence indicate that feedback was selectively enhanced for high complexity scenes. First, functional magnetic resonance imaging (fMRI) activity in early visual cortex (V1) was enhanced for target objects in scenes with high, but not low or medium complexity. Second, event-related potentials (ERPs) evoked by target objects were selectively enhanced at feedback stages of visual processing (from ~220 ms onwards) for high complexity scenes only. Third, behavioral performance for high complexity scenes deteriorated when participants were pressed for time and thus less able to incorporate the feedback activity. Modeling of the reaction time distributions using drift diffusion revealed that object information accumulated more slowly for high complexity scenes, with evidence accumulation being coupled to trial-to-trial variation in the EEG feedback response. Together, these results suggest that while feed-forward activity may suffice to recognize isolated objects, the brain employs recurrent processing more adaptively in naturalistic settings, using minimal feedback for simple scenes and increasing feedback for complex scenes.

    Additional information

    data via OSF
  • De Groot, F., Huettig, F., & Olivers, C. N. L. (2017). Language-induced visual and semantic biases in visual search are subject to task requirements. Visual Cognition, 25, 225-240. doi:10.1080/13506285.2017.1324934.

    Abstract

    Visual attention is biased by both visual and semantic representations activated by words. We investigated to what extent language-induced visual and semantic biases are subject to task demands. Participants memorized a spoken word for a verbal recognition task, and performed a visual search task during the retention period. Crucially, while the word had to be remembered in all conditions, it was either relevant for the search (as it also indicated the target) or irrelevant (as it only served the memory test afterwards). On critical trials, displays contained objects that were visually or semantically related to the memorized word. When the word was relevant for the search, eye movement biases towards visually related objects arose earlier and more strongly than biases towards semantically related objects. When the word was irrelevant, there was still evidence for visual and semantic biases, but these biases were substantially weaker, and similar in strength and temporal dynamics, without a visual advantage. We conclude that language-induced attentional biases are subject to task requirements.
  • Groszer, M., Keays, D. A., Deacon, R. M. J., De Bono, J. P., Prasad-Mulcare, S., Gaub, S., Baum, M. G., French, C. A., Nicod, J., Coventry, J. A., Enard, W., Fray, M., Brown, S. D. M., Nolan, P. M., Pääbo, S., Channon, K. M., Costa, R. M., Eilers, J., Ehret, G., Rawlins, J. N. P., & Fisher, S. E. (2008). Impaired synaptic plasticity and motor learning in mice with a point mutation implicated in human speech deficits. Current Biology, 18(5), 354-362. doi:10.1016/j.cub.2008.01.060.

    Abstract

    The most well-described example of an inherited speech and language disorder is that observed in the multigenerational KE family, caused by a heterozygous missense mutation in the FOXP2 gene. Affected individuals are characterized by deficits in the learning and production of complex orofacial motor sequences underlying fluent speech and display impaired linguistic processing for both spoken and written language. The FOXP2 transcription factor is highly similar in many vertebrate species, with conserved expression in neural circuits related to sensorimotor integration and motor learning. In this study, we generated mice carrying an identical point mutation to that of the KE family, yielding the equivalent arginine-to-histidine substitution in the Foxp2 DNA-binding domain. Homozygous R552H mice show severe reductions in cerebellar growth and postnatal weight gain but are able to produce complex innate ultrasonic vocalizations. Heterozygous R552H mice are overtly normal in brain structure and development. Crucially, although their baseline motor abilities appear to be identical to wild-type littermates, R552H heterozygotes display significant deficits in species-typical motor-skill learning, accompanied by abnormal synaptic plasticity in striatal and cerebellar neural circuits.

    Additional information

    mmc1.pdf
  • Guadalupe, T., Mathias, S. R., Van Erp, T. G. M., Whelan, C. D., Zwiers, M. P., Abe, Y., Abramovic, L., Agartz, I., Andreassen, O. A., Arias-Vásquez, A., Aribisala, B. S., Armstrong, N. J., Arolt, V., Artiges, E., Ayesa-Arriola, R., Baboyan, V. G., Banaschewski, T., Barker, G., Bastin, M. E., Baune, B. T., Blangero, J., Bokde, A. L., Boedhoe, P. S., Bose, A., Brem, S., Brodaty, H., Bromberg, U., Brooks, S., Büchel, C., Buitelaar, J., Calhoun, V. D., Cannon, D. M., Cattrell, A., Cheng, Y., Conrod, P. J., Conzelmann, A., Corvin, A., Crespo-Facorro, B., Crivello, F., Dannlowski, U., De Zubicaray, G. I., De Zwarte, S. M., Deary, I. J., Desrivières, S., Doan, N. T., Donohoe, G., Dørum, E. S., Ehrlich, S., Espeseth, T., Fernández, G., Flor, H., Fouche, J.-P., Frouin, V., Fukunaga, M., Gallinat, J., Garavan, H., Gill, M., Suarez, A. G., Gowland, P., Grabe, H. J., Grotegerd, D., Gruber, O., Hagenaars, S., Hashimoto, R., Hauser, T. U., Heinz, A., Hibar, D. P., Hoekstra, P. J., Hoogman, M., Howells, F. M., Hu, H., Hulshoff Pol, H. E., Huyser, C., Ittermann, B., Jahanshad, N., Jönsson, E. G., Jurk, S., Kahn, R. S., Kelly, S., Kraemer, B., Kugel, H., Kwon, J. S., Lemaitre, H., Lesch, K.-P., Lochner, C., Luciano, M., Marquand, A. F., Martin, N. G., Martínez-Zalacaín, I., Martinot, J.-L., Mataix-Cols, D., Mather, K., McDonald, C., McMahon, K. L., Medland, S. E., Menchón, J. M., Morris, D. W., Mothersill, O., Maniega, S. M., Mwangi, B., Nakamae, T., Nakao, T., Narayanaswaamy, J. C., Nees, F., Nordvik, J. E., Onnink, A. M. H., Opel, N., Ophoff, R., Martinot, M.-L.-P., Orfanos, D. P., Pauli, P., Paus, T., Poustka, L., Reddy, J. Y., Renteria, M. E., Roiz-Santiáñez, R., Roos, A., Royle, N. A., Sachdev, P., Sánchez-Juan, P., Schmaal, L., Schumann, G., Shumskaya, E., Smolka, M. N., Soares, J. C., Soriano-Mas, C., Stein, D. J., Strike, L. T., Toro, R., Turner, J. A., Tzourio-Mazoyer, N., Uhlmann, A., Valdés Hernández, M., Van den Heuvel, O. A., Van der Meer, D., Van Haren, N. E., Veltman, D. J., Venkatasubramanian, G., Vetter, N. C., Vuletic, D., Walitza, S., Walter, H., Walton, E., Wang, Z., Wardlaw, J., Wen, W., Westlye, L. T., Whelan, R., Wittfeld, K., Wolfers, T., Wright, M. J., Xu, J., Xu, X., Yun, J.-Y., Zhao, J., Franke, B., Thompson, P. M., Glahn, D. C., Mazoyer, B., Fisher, S. E., & Francks, C. (2017). Human subcortical asymmetries in 15,847 people worldwide reveal effects of age and sex. Brain Imaging and Behavior, 11(5), 1497-1514. doi:10.1007/s11682-016-9629-z.

    Abstract

    The two hemispheres of the human brain differ functionally and structurally. Despite over a century of research, the extent to which brain asymmetry is influenced by sex, handedness, age, and genetic factors is still controversial. Here we present the largest ever analysis of subcortical brain asymmetries, in a harmonized multi-site study using meta-analysis methods. Volumetric asymmetry of seven subcortical structures was assessed in 15,847 MRI scans from 52 datasets worldwide. There were sex differences in the asymmetry of the globus pallidus and putamen. Heritability estimates, derived from 1170 subjects belonging to 71 extended pedigrees, revealed that additive genetic factors influenced the asymmetry of these two structures and that of the hippocampus and thalamus. Handedness had no detectable effect on subcortical asymmetries, even in this unprecedented sample size, but the asymmetry of the putamen varied with age. Genetic drivers of asymmetry in the hippocampus, thalamus and basal ganglia may affect variability in human cognition, including susceptibility to psychiatric disorders.

    Additional information

    11682_2016_9629_MOESM1_ESM.pdf
  • Le Guen, O. (2005). Geografía de lo sagrado entre los Mayas Yucatecos de Quintana Roo: configuración del espacio y su aprendizaje entre los niños. Ketzalcalli, 2005(1), 54-68.
  • Le Guen, O. (2008). Ubèel pixan: El camino de las almas, ancestros familiares y colectivos entre los Mayas Yucatecos. Península, 3(1), 83-120. Retrieved from http://www.revistas.unam.mx/index.php/peninsula/article/viewFile/44354/40086.

    Abstract

    The aim of this article is to analyze the funerary customs and rituals for the souls among contemporary Yucatec Maya in order to better understand their relation to pre-Hispanic burial patterns. It is suggested that the souls of the dead are regarded as ancestors, distinguishable as family or collective ancestors according to several criteria: the place of burial, the place of ritual performance, and the ritual treatment. On this view, funerary practices, as well as the ritual categories of ancestors (family or collective), are reminiscences of ancient practices whose traces can be found throughout the historical sources. Through an analysis of current funerary practices and their variations, this article aims to demonstrate that, over time and despite socio-economic changes, ancient funerary practices (specifically those of the post-Classic period) have retained some homogeneity, preserving essential characteristics that can still be observed today.
  • Guest, O., & Love, B. C. (2017). What the success of brain imaging implies about the neural code. eLife, 6: e21397. doi:10.7554/eLife.21397.

    Abstract

    The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI’s limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI’s successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI.
  • Gullberg, M., & Indefrey, P. (2008). Cognitive and neural prerequisites for time in language: Any answers? Language Learning, 58(suppl. 1), 207-216. doi:10.1111/j.1467-9922.2008.00472.x.
  • Gullberg, M., De Bot, K., & Volterra, V. (2008). Gestures and some key issues in the study of language development. Gesture, 8(2), 149-179. doi:10.1075/gest.8.2.03gul.

    Abstract

    The purpose of the current paper is to outline how gestures can contribute to the study of some key issues in language development. Specifically, we (1) briefly summarise what is already known about gesture in the domains of first and second language development, and development or changes over the life span more generally; (2) highlight theoretical and empirical issues in these domains where gestures can contribute in important ways to further our understanding; and (3) summarise some common themes in all strands of research on language development that could be the target of concentrated research efforts.
  • Gullberg, M., & De Bot, K. (Eds.). (2008). Gestures in language development [Special Issue]. Gesture, 8(2).
  • Gullberg, M., & McCafferty, S. G. (2008). Introduction to gesture and SLA: Toward an integrated approach. Studies in Second Language Acquisition, 30(2), 133-146. doi:10.1017/S0272263108080285.

    Abstract

    The title of this special issue, Gesture and SLA: Toward an Integrated Approach, stems in large part from the idea known as integrationism, principally set forth by Harris (2003, 2005), which posits that it is time to “demythologize” linguistics, moving away from the “orthodox exponents” that have idealized the notion of language. The integrationist approach intends a view that focuses on communication—that is, language in use, language as a “fact of life” (Harris, 2003, p. 50). Although not all gesture studies embrace an integrationist view—indeed, the field applies numerous theories across various disciplines—it is nonetheless true that to study gesture is to study what has traditionally been called paralinguistic modes of interaction, with the paralinguistic label given on the assumption that gesture is not part of the core meaning of what is rendered linguistically. However, arguably, most researchers within gesture studies would maintain just the opposite: The studies presented in this special issue reflect a view whereby gesture is regarded as a central aspect of language in use, integral to how we communicate (make meaning) both with each other and with ourselves.
  • Gullberg, M., Hendriks, H., & Hickmann, M. (2008). Learning to talk and gesture about motion in French. First Language, 28(2), 200-236. doi:10.1177/0142723707088074.

    Abstract

    This study explores how French adults and children aged four and six years talk and gesture about voluntary motion, examining (1) how they encode path and manner in speech; (2) how they encode this information in accompanying gestures; and (3) whether gestures are co-expressive with speech or express other information. When path and manner are equally relevant, children’s and adults’ speech and gestures both focus on path, rather than on manner. Moreover, gestures are predominantly co-expressive with speech at all ages. However, when they are non-redundant, adults tend to gesture about path while talking about manner, whereas children gesture about both path and manner while talking about path. The discussion highlights implications for our understanding of speakers’ representations and their development.
  • Gullberg, M. (2005). L'expression orale et gestuelle de la cohésion dans le discours de locuteurs langue 2 débutants. AILE, 23, 153-172.
  • Gumperz, J. J., & Levinson, S. C. (1991). Rethinking linguistic relativity. Current Anthropology, 32(5), 613-623. Retrieved from http://www.jstor.org/stable/2743696.
  • Hagoort, P. (2008). Should psychology ignore the language of the brain? Current Directions in Psychological Science, 17(2), 96-101. doi:10.1111/j.1467-8721.2008.00556.x.

    Abstract

    Claims that neuroscientific data do not contribute to our understanding of psychological functions have been made recently. Here I argue that these criticisms are solely based on an analysis of functional magnetic resonance imaging (fMRI) studies. However, fMRI is only one of the methods in the toolkit of cognitive neuroscience. I provide examples from research on event-related brain potentials (ERPs) that have contributed to our understanding of the cognitive architecture of human language functions. In addition, I provide evidence of (possible) contributions from fMRI measurements to our understanding of the functional architecture of language processing. Finally, I argue that a neurobiology of human language that integrates information about the necessary genetic and neural infrastructures will allow us to answer certain questions that are not answerable if all we have is evidence from behavior.
  • Hagoort, P. (2005). On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9(9), 416-423. doi:10.1016/j.tics.2005.07.004.

    Abstract

    In speaking and comprehending language, word information is retrieved from memory and combined into larger units (unification). Unification operations take place in parallel at the semantic, syntactic and phonological levels of processing. This article proposes a new framework that connects psycholinguistic models to a neurobiological account of language. According to this proposal the left inferior frontal gyrus (LIFG) plays an important role in unification. Research in other domains of cognition indicates that left prefrontal cortex has the necessary neurobiological characteristics for its involvement in the unification for language. I offer here a psycholinguistic perspective on the nature of language unification and the role of LIFG.
  • Hagoort, P. (2008). The fractionation of spoken language understanding by measuring electrical and magnetic brain signals. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 363, 1055-1069. doi:10.1098/rstb.2007.2159.

    Abstract

    This paper focuses on what electrical and magnetic recordings of human brain activity reveal about spoken language understanding. Based on the high temporal resolution of these recordings, a fine-grained temporal profile of different aspects of spoken language comprehension can be obtained. Crucial aspects of speech comprehension are lexical access, selection and semantic integration. Results show that for words spoken in context, there is no ‘magic moment’ when lexical selection ends and semantic integration begins. Irrespective of whether words have early or late recognition points, semantic integration processing is initiated before words can be identified on the basis of the acoustic information alone. Moreover, for one particular event-related brain potential (ERP) component (the N400), equivalent impact of sentence- and discourse-semantic contexts is observed. This indicates that in comprehension, a spoken word is immediately evaluated relative to the widest interpretive domain available. In addition, this happens very quickly. Findings are discussed that show that often an unfolding word can be mapped onto discourse-level representations well before the end of the word. Overall, the time course of the ERP effects is compatible with the view that the different information types (lexical, syntactic, phonological, pragmatic) are processed in parallel and influence the interpretation process incrementally, that is as soon as the relevant pieces of information are available. This is referred to as the immediacy principle.
  • Hagoort, P. (1997). De rappe prater als gewoontedier [Review of the book Smooth talkers: The linguistic performance of auctioneers and sportscasters, by Koenraad Kuiper]. Psychologie, 16, 22-23.
  • Hagoort, P. (2017). Don't forget neurobiology: An experimental approach to linguistic representation. Commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e292. doi:10.1017/S0140525X17000401.

    Abstract

    Acceptability judgments are no longer acceptable as the holy grail for testing the nature of linguistic representations. Experimental and quantitative methods should be used to test theoretical claims in psycholinguistics. These methods should include not only behavior, but also the more recent possibilities to probe the neural codes for language-relevant representations.
  • Li, X., Hagoort, P., & Yang, Y. (2008). Event-related potential evidence on the influence of accentuation in spoken discourse comprehension in Chinese. Journal of Cognitive Neuroscience, 20(5), 906-915. doi:10.1162/jocn.2008.20512.

    Abstract

    In an event-related potential experiment with Chinese discourses as material, we investigated how and when accentuation influences spoken discourse comprehension in relation to the different information states of the critical words. These words could either provide new or old information. It was shown that variation of accentuation influenced the amplitude of the N400, with a larger amplitude for accented than deaccented words. In addition, there was an interaction between accentuation and information state. The N400 amplitude difference between accented and deaccented new information was smaller than that between accented and deaccented old information. The results demonstrate that, during spoken discourse comprehension, listeners rapidly extract the semantic consequences of accentuation in relation to the previous discourse context. Moreover, our results show that the N400 amplitude can be larger for correct (new, accented words) than incorrect (new, deaccented words) information. This, we argue, proves that the N400 does not react to semantic anomaly per se, but rather to semantic integration load, which is higher for new information.
  • Hagoort, P. (2005). De talige aap. Linguaan, 26-35.
  • Hagoort, P. (2008). Mijn omweg naar de filosofie. Algemeen Nederlands Tijdschrift voor Wijsbegeerte, 100(4), 303-310.
  • Hagoort, P. (2018). Prerequisites for an evolutionary stance on the neurobiology of language. Current Opinion in Behavioral Sciences, 21, 191-194. doi:10.1016/j.cobeha.2018.05.012.
  • Hagoort, P. (2017). The core and beyond in the language-ready brain. Neuroscience and Biobehavioral Reviews, 81, 194-204. doi:10.1016/j.neubiorev.2017.01.048.

    Abstract

    In this paper a general cognitive architecture of spoken language processing is specified. This is followed by an account of how this cognitive architecture is instantiated in the human brain. Both the spatial aspects of the networks for language are discussed, as well as the temporal dynamics and the underlying neurophysiology. A distinction is proposed between networks for coding/decoding linguistic information and additional networks for getting from coded meaning to speaker meaning, i.e. for making the inferences that enable the listener to understand the intentions of the speaker.
  • Hagoort, P. (1997). Semantic priming in Broca's aphasics at a short SOA: No support for an automatic access deficit. Brain and Language, 56, 287-300. doi:10.1006/brln.1997.1849.

    Abstract

    This study tests the recent claim that Broca’s aphasics are impaired in automatic lexical access, including the retrieval of word meaning. Subjects are required to perform a lexical decision on visually presented prime target pairs. Half of the word targets are preceded by a related word, half by an unrelated word. Primes and targets are presented with a long stimulus-onset-asynchrony (SOA) of 1400 msec and with a short SOA of 300 msec. Normal priming effects are observed in Broca’s aphasics for both SOAs. This result is discussed in the context of the claim that Broca’s aphasics suffer from an impairment in the automatic access of lexical–semantic information. It is argued that none of the current priming studies provides evidence supporting this claim, since with short SOAs priming effects have been reliably obtained in Broca’s aphasics. The results are more compatible with the claim that in many Broca’s aphasics the functional locus of their comprehension deficit is at the level of postlexical integration processes.
  • Hagoort, P. (1997). Valt er nog te lachen zonder de rechter hersenhelft? Psychologie, 16, 52-55.
