Publications

  • Gisladottir, R. S., Chwilla, D., & Levinson, S. C. (2015). Conversation electrified: ERP correlates of speech act recognition in underspecified utterances. PLoS One, 10(3): e0120068. doi:10.1371/journal.pone.0120068.

    Abstract

    The ability to recognize speech acts (verbal actions) in conversation is critical for everyday interaction. However, utterances are often underspecified for the speech act they perform, requiring listeners to rely on the context to recognize the action. The goal of this study was to investigate the time-course of auditory speech act recognition in action-underspecified utterances and explore how sequential context (the prior action) impacts this process. We hypothesized that speech acts are recognized early in the utterance to allow for quick transitions between turns in conversation. Event-related potentials (ERPs) were recorded while participants listened to spoken dialogues and performed an action categorization task. The dialogues contained target utterances, each of which could deliver three distinct speech acts depending on the prior turn. The targets were identical across conditions, but differed in the type of speech act performed and how it fit into the larger action sequence. The ERP results show an early effect of action type, reflected by frontal positivities as early as 200 ms after target utterance onset. This indicates that speech act recognition begins early in the turn when the utterance has only been partially processed. Providing further support for early speech act recognition, actions in highly constraining contexts did not elicit an ERP effect to the utterance-final word. We take this to show that listeners can recognize the action before the final word through predictions at the speech act level. However, additional processing based on the complete utterance is required in more complex actions, as reflected by a posterior negativity at the final word when the speech act is in a less constraining context and a new action sequence is initiated. These findings demonstrate that sentence comprehension in conversational contexts crucially involves recognition of verbal action, which begins as soon as it can.
  • Gisladottir, R. S. (2015). Other-initiated repair in Icelandic. Open Linguistics, 1(1), 309-328. doi:10.1515/opli-2015-0004.

    Abstract

    The ability to repair problems with hearing or understanding in conversation is critical for successful communication. This article describes the linguistic practices of other-initiated repair (OIR) in Icelandic through quantitative and qualitative analysis of a corpus of video-recorded conversations. The study draws on the conceptual distinctions developed in the comparative project on repair described in the introduction to this issue. The main aim is to give an overview of the formats for OIR in Icelandic and the type of repair practices engendered by them. The use of repair initiations in social actions not aimed at solving comprehension problems is also briefly discussed. In particular, the interjection ha has a rich usage extending beyond open other-initiation of repair. By describing the linguistic machinery for other-initiated repair in Icelandic, this study contributes to the typology of conversational structure and to the still nascent field of Icelandic social interaction studies.
  • Goldin-Meadow, S., Namboodiripad, S., Mylander, C., Ozyurek, A., & Sancar, B. (2015). The resilience of structure built around the predicate: Homesign gesture systems in Turkish and American deaf children. Journal of Cognition and Development, 16, 55-80. doi:10.1080/15248372.2013.803970.

    Abstract

    Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called homesigns, which have many of the properties of natural language—the so-called resilient properties of language. We explored the resilience of structure built around the predicate—in particular, how manner and path are mapped onto the verb—in homesign systems developed by deaf children in Turkey and the United States. We also asked whether the Turkish homesigners exhibit sentence-level structures previously identified as resilient in American and Chinese homesigners. We found that the Turkish and American deaf children used not only the same production probability and ordering patterns to indicate who does what to whom, but also used the same segmentation and conflation patterns to package manner and path. The gestures that the hearing parents produced did not, for the most part, display the patterns found in the children's gestures. Although cospeech gesture may provide the building blocks for homesign, it does not provide the blueprint for these resilient properties of language.
  • Goncharova, M. V., Klenova, A. V., & Bragina, E. V. (2015). Development of cues to individuality and sex in calls of three crane species: when is it good to be recognizable? Journal of Ethology, 33, 165-175. doi:10.1007/s10164-015-0428-6.

    Abstract

    Vocal individuality provides a means of individual identification in many avian species. However, the expression of individual vocal features depends on the necessity of recognition. Here we focused on chick vocalizations of demoiselle, Siberian and red-crowned cranes, which differ in body size, developmental rates and some ecological traits. Cranes are territorial during summer but gather in large flocks during autumn and winter. Nevertheless, parents keep feeding their chicks, even on wintering grounds, despite the potential for confusing their own and alien chicks. We aimed to compare the expression of individuality and sex in the calls of the three crane species between the solitary and gregarious periods of a chick’s life, and between species. We found significant individual patterns of acoustic variables in the calls of all three species both before and after fledging. However, only red-crowned crane chicks increased the expression of individuality significantly after fledging. We also found that chicks of all three species significantly increased the occurrence of nonlinear phenomena, i.e., irregular oscillations of the sound-producing membranes (biphonations, sidebands, and deterministic chaos), in their calls after fledging. Nonlinear phenomena can be a way of increasing the potential for individual recognition as well as avoiding habituation of parents to their chicks’ calls. The older chicks are, the less their parents feed them, so chicks benefit from keeping their parents’ attention.

  • Gonzalez Gomez, N., Hayashi, A., Tsuji, S., Mazuka, R., & Nazzi, T. (2014). The role of the input on the development of the LC bias: A crosslinguistic comparison. Cognition, 132(3), 301-311. doi:10.1016/j.cognition.2014.04.004.

    Abstract

    Previous studies have described the existence of a phonotactic bias called the Labial–Coronal (LC) bias, corresponding to a tendency to produce more words beginning with a labial consonant followed by a coronal consonant (i.e. “bat”) than the opposite CL pattern (i.e. “tap”). This bias has initially been interpreted in terms of articulatory constraints of the human speech production system. However, more recently, it has been suggested that this presumably language-general LC bias in production might be accompanied by LC and CL biases in perception, acquired in infancy on the basis of the properties of the linguistic input. The present study investigates the origins of these perceptual biases, testing infants learning Japanese, a language that has been claimed to possess more CL than LC sequences, and comparing them with infants learning French, a language showing a clear LC bias in its lexicon. First, a corpus analysis of Japanese IDS and ADS revealed the existence of an overall LC bias, except for plosive sequences in ADS, which show a CL bias across counts. Second, speech preference experiments showed a perceptual preference for CL over LC plosive sequences (all recorded by a Japanese speaker) in 13- but not in 7- and 10-month-old Japanese-learning infants (Experiment 1), while revealing the emergence of an LC preference between 7 and 10 months in French-learning infants, using the exact same stimuli. These crosslinguistic behavioral differences, obtained with the same stimuli, thus reflect differences in processing in two populations of infants, which can be linked to differences in the properties of the lexicons of their respective native languages. These findings establish that the emergence of a CL/LC bias is related to exposure to a linguistic input.
  • Goodhew, S. C., & Kidd, E. (2016). The conceptual cueing database: Rated items for the study of the interaction between language and attention. Behavior Research Methods, 48(3), 1004-1007. doi:10.3758/s13428-015-0625-9.

    Abstract

    Humans appear to rely on spatial mappings to describe and represent concepts. In particular, conceptual cueing refers to the effect whereby after reading or hearing a particular word, the location of observers’ visual attention in space can be systematically shifted in a particular direction. For example, words such as “sun” and “happy” orient attention upwards, whereas words such as “basement” and “bitter” orient attention downwards. This area of research has garnered much interest, particularly within the embodied cognition framework, for its potential to enhance our understanding of the interaction between abstract cognitive processes such as language and basic visual processes such as attention and stimulus processing. To date, however, this area has relied on subjective classification criteria to determine whether words ought to be classified as having a meaning that implies “up” or “down.” The present study, therefore, provides a set of 498 items that have each been systematically rated by over 90 participants, providing refined, continuous measures of the extent to which people associate given words with particular spatial dimensions. The resulting database provides an objective means to aid item-selection for future research in this area.
  • Goodhew, S. C., McGaw, B., & Kidd, E. (2014). Why is the sunny side always up? Explaining the spatial mapping of concepts by language use. Psychonomic Bulletin & Review, 21(5), 1287-1293. doi:10.3758/s13423-014-0593-6.

    Abstract

    Humans appear to rely on spatial mappings to represent and describe concepts. The conceptual cuing effect describes the tendency for participants to orient attention to a spatial location following the presentation of an unrelated cue word (e.g., orienting attention upward after reading the word sky). To date, such effects have predominately been explained within the embodied cognition framework, according to which people’s attention is oriented on the basis of prior experience (e.g., sky → up via perceptual simulation). However, this does not provide a compelling explanation for how abstract words have the same ability to orient attention. Why, for example, does dream also orient attention upward? We report on an experiment that investigated the role of language use (specifically, collocation between concept words and spatial words for up and down dimensions) and found that it predicted the cuing effect. The results suggest that language usage patterns may be instrumental in explaining conceptual cuing.
  • Gordon, P. C., & Hoedemaker, R. S. (2016). Effective scheduling of looking and talking during rapid automatized naming. Journal of Experimental Psychology: Human Perception and Performance, 42(5), 742-760. doi:10.1037/xhp0000171.

    Abstract

    Rapid automatized naming (RAN) is strongly related to literacy gains in developing readers, reading disabilities, and reading ability in children and adults. Because successful RAN performance depends on the close coordination of a number of abilities, it is unclear what specific skills drive this RAN-reading relationship. The current study used concurrent recordings of young adult participants' vocalizations and eye movements during the RAN task to assess how individual variation in RAN performance depends on the coordination of visual and vocal processes. Results showed that fast RAN times are facilitated by having the eyes 1 or more items ahead of the current vocalization, as long as the eyes do not get so far ahead of the voice as to require a regressive eye movement to an earlier item. These data suggest that optimizing RAN performance is a problem of scheduling eye movements and vocalization given memory constraints and the efficiency of encoding and articulatory control. Both RAN completion time (conventionally used to indicate RAN performance) and eye-voice relations predicted some aspects of participants' eye movements on a separate sentence reading task. However, eye-voice relations predicted additional features of first-pass reading that were not predicted by RAN completion time. This shows that measurement of eye-voice patterns can identify important aspects of individual variation in reading that are not identified by the standard measure of RAN performance. We argue that RAN performance predicts reading ability because both tasks entail challenges of scheduling cognitive and linguistic processes that operate simultaneously on multiple linguistic inputs.

  • Gori, M., Vercillo, T., Sandini, G., & Burr, D. (2014). Tactile feedback improves auditory spatial localization. Frontiers in Psychology, 5: 1121. doi:10.3389/fpsyg.2014.01121.

    Abstract

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three-sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
  • Goriot, C., Denessen, E., Bakker, J., & Droop, M. (2016). Benefits of being bilingual? The relationship between pupils’ perceptions of teachers’ appreciation of their home language and executive functioning. International Journal of Bilingualism, 20(6), 700-713. doi:10.1177/1367006915586470.

    Abstract

    Aims: We aimed to investigate whether bilingual pupils’ perceptions of their teachers’ appreciation of their home language (HL) influenced bilingual cognitive advantages.
    Design: We examined whether Dutch bilingual primary school pupils who speak either German or Turkish at home differed in their perceptions of their teacher’s appreciation of their HL, and whether these differences could explain differences between the two groups in executive functioning.
    Data and analysis: Executive functioning was measured through computer tasks, and perceived home language appreciation through orally administered questionnaires. The relationship between the two was assessed with regression analyses.
    Findings: German-Dutch pupils perceived more appreciation of their home language from their teacher than Turkish-Dutch pupils did. This difference partly explained differences in executive functioning. In addition, we replicated bilingual advantages in nonverbal working memory and switching, but not in verbal working memory or inhibition.
    Originality and significance: This study demonstrates that bilingual advantages cannot be dissociated from the influence of the sociolinguistic context of the classroom. It thereby stresses the importance of culturally responsive teaching.
  • Goriot, C., Denessen, E., Bakker, J., & Droop, M. (2016). Zijn de voordelen van tweetaligheid voor alle tweetalige kinderen even groot? Een exploratief onderzoek naar de leerkrachtwaardering van de thuistaal van leerlingen en de invloed daarvan op de ontwikkeling van hun executieve functies [Are the benefits of bilingualism equally large for all bilingual children? An exploratory study of teachers’ appreciation of pupils’ home language and its influence on the development of their executive functions]. Pedagogiek, 16(2), 135-154. doi:10.5117/PED2016.2.GORI.

    Abstract

    Benefits of being bilingual? The relationship between pupils’ perceptions of teachers’ appreciation of their home language and executive functioning
    We aimed to investigate whether bilingual pupils’ perceptions of their teachers’ appreciation of their home language (HL) influenced bilingual cognitive advantages. We examined whether Dutch bilingual primary school pupils who speak either German or Turkish at home differed in perceptions of their teacher’s appreciation of their HL, and whether these differences could explain differences between the two groups in executive functioning. Executive functioning was measured through computer tasks, and perceived HL appreciation through orally administered questionnaires. The relationship between the two was assessed with regression analyses. German-Dutch pupils perceived more appreciation of their home language from their teacher than Turkish-Dutch pupils did. This difference partly explained differences in executive functioning. In addition, we replicated bilingual advantages in nonverbal working memory and switching, but not in verbal working memory or inhibition. This study demonstrates that bilingual advantages cannot be dissociated from the influence of the sociolinguistic context of the classroom. It thereby stresses the importance of culturally responsive teaching.
  • Graham, S. A., Deriziotis, P., & Fisher, S. E. (2015). Insights into the genetic foundations of human communication. Neuropsychology Review, 25(1), 3-26. doi:10.1007/s11065-014-9277-2.

    Abstract

    The human capacity to acquire sophisticated language is unmatched in the animal kingdom. Despite the discontinuity in communicative abilities between humans and other primates, language is built on ancient genetic foundations, which are being illuminated by comparative genomics. The genetic architecture of the language faculty is also being uncovered by research into neurodevelopmental disorders that disrupt the normally effortless process of language acquisition. In this article, we discuss the strategies that researchers are using to reveal genetic factors contributing to communicative abilities, and review progress in identifying the relevant genes and genetic variants. The first gene directly implicated in a speech and language disorder was FOXP2. Using this gene as a case study, we illustrate how evidence from genetics, molecular cell biology, animal models and human neuroimaging has converged to build a picture of the role of FOXP2 in neurodevelopment, providing a framework for future endeavors to bridge the gaps between genes, brains and behavior.
  • Graham, S. A., & Fisher, S. E. (2015). Understanding language from a genomic perspective. Annual Review of Genetics, 49, 131-160. doi:10.1146/annurev-genet-120213-092236.

    Abstract

    Language is a defining characteristic of the human species, but its foundations remain mysterious. Heritable disorders offer a gateway into biological underpinnings, as illustrated by the discovery that FOXP2 disruptions cause a rare form of speech and language impairment. The genetic architecture underlying language-related disorders is complex, and although some progress has been made, it has proved challenging to pinpoint additional relevant genes with confidence. Next-generation sequencing and genome-wide association studies are revolutionizing understanding of the genetic bases of other neurodevelopmental disorders, like autism and schizophrenia, and providing fundamental insights into the molecular networks crucial for typical brain development. We discuss how a similar genomic perspective, brought to the investigation of language-related phenotypes, promises to yield equally informative discoveries. Moreover, we outline how follow-up studies of genetic findings using cellular systems and animal models can help to elucidate the biological mechanisms involved in the development of brain circuits supporting language.

  • De Grauwe, S., Willems, R. M., Rüschemeyer, S.-A., Lemhöfer, K., & Schriefers, H. (2014). Embodied language in first- and second-language speakers: Neural correlates of processing motor verbs. Neuropsychologia, 56, 334-349. doi:10.1016/j.neuropsychologia.2014.02.003.

    Abstract

    The involvement of neural motor and sensory systems in the processing of language has so far mainly been studied in native (L1) speakers. In an fMRI experiment, we investigated whether non-native (L2) semantic representations are rich enough to allow for activation in motor and somatosensory brain areas. German learners of Dutch and a control group of Dutch native speakers made lexical decisions about visually presented Dutch motor and non-motor verbs. Region-of-interest (ROI) and whole-brain analyses indicated that L2 speakers, like L1 speakers, showed significantly increased activation for simple motor compared to non-motor verbs in motor and somatosensory regions. This effect was not restricted to Dutch-German cognate verbs, but was also present for non-cognate verbs. These results indicate that L2 semantic representations are rich enough for motor-related activations to develop in motor and somatosensory areas.
  • De Grauwe, S., Lemhöfer, K., Willems, R. M., & Schriefers, H. (2014). L2 speakers decompose morphologically complex verbs: fMRI evidence from priming of transparent derived verbs. Frontiers in Human Neuroscience, 8: 802. doi:10.3389/fnhum.2014.00802.

    Abstract

    In this functional magnetic resonance imaging (fMRI) long-lag priming study, we investigated the processing of Dutch semantically transparent, derived prefix verbs. In such words, the meaning of the word as a whole can be deduced from the meanings of its parts, e.g., wegleggen “put aside.” Many behavioral and some fMRI studies suggest that native (L1) speakers decompose transparent derived words. The brain region usually implicated in morphological decomposition is the left inferior frontal gyrus (LIFG). In non-native (L2) speakers, the processing of transparent derived words has hardly been investigated, especially in fMRI studies, and results are contradictory: some studies find more reliance on holistic (i.e., non-decompositional) processing by L2 speakers; some find no difference between L1 and L2 speakers. In this study, we wanted to find out whether Dutch transparent derived prefix verbs are decomposed or processed holistically by German L2 speakers of Dutch. Half of the derived verbs (e.g., omvallen “fall down”) were preceded by their stem (e.g., vallen “fall”) with a lag of 4–6 words (“primed”); the other half (e.g., inslapen “fall asleep”) were not (“unprimed”). L1 and L2 speakers of Dutch made lexical decisions on these visually presented verbs. Both region of interest analyses and whole-brain analyses showed that there was a significant repetition suppression effect for primed compared to unprimed derived verbs in the LIFG. This was true both for the analyses over L2 speakers only and for the analyses over the two language groups together. The latter did not reveal any interaction with language group (L1 vs. L2) in the LIFG. Thus, L2 speakers show a clear priming effect in the LIFG, an area that has been associated with morphological decomposition. Our findings are consistent with the idea that L2 speakers engage in decomposition of transparent derived verbs rather than processing them holistically.

    Additional information

    Data Sheet 1.docx
  • Groenman, A. P., Greven, C. U., Van Donkelaar, M. M. J., Schellekens, A., van Hulzen, K. J., Rommelse, N., Hartman, C. A., Hoekstra, P. J., Luman, M., Franke, B., Faraone, S. V., Oosterlaan, J., & Buitelaar, J. K. (2016). Dopamine and serotonin genetic risk scores predicting substance and nicotine use in attention deficit/hyperactivity disorder. Addiction biology, 21(4), 915-923. doi:10.1111/adb.12230.

    Abstract

    Individuals with attention deficit/hyperactivity disorder (ADHD) are at increased risk of developing substance use disorders (SUDs) and nicotine dependence. The co-occurrence of ADHD and SUDs/nicotine dependence may in part be mediated by shared genetic liability. Several neurobiological pathways have been implicated in both ADHD and SUDs, including dopamine and serotonin pathways. We hypothesized that variations in dopamine and serotonin neurotransmission genes were involved in the genetic liability to develop SUDs/nicotine dependence in ADHD. The current study included participants with ADHD (n = 280) who were originally part of the Dutch International Multicenter ADHD Genetics study. Participants were aged 5-15 years and attending outpatient clinics at enrollment in the study. Diagnoses of ADHD, SUDs, nicotine dependence, age of first nicotine and substance use, and alcohol use severity were based on semi-structured interviews and questionnaires. Genetic risk scores were created for both serotonergic and dopaminergic risk genes previously shown to be associated with ADHD and SUDs and/or nicotine dependence. The serotonin genetic risk score significantly predicted alcohol use severity. No significant serotonin × dopamine risk score interaction or effect of stimulant medication was found. The current study adds to the literature by providing insight into genetic underpinnings of the co-morbidity of ADHD and SUDs. While the focus of the literature so far has been mostly on dopamine, our study suggests that serotonin may also play a role in the relationship between these disorders.
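
    Illustration (not from the publication): the genetic risk score approach described in this abstract can be pictured as summing risk alleles over a set of candidate SNPs and regressing a phenotype on that sum. The sketch below uses simulated genotypes, an unweighted score, and scipy's linregress as a stand-in for the fuller regression models used in the study; all names, sizes and effect sizes are hypothetical.

    # Minimal sketch, assuming simulated data: an unweighted genetic risk score
    # (count of risk alleles per subject) regressed against a phenotype.
    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(0)
    n_subjects, n_snps = 280, 8                      # illustrative sizes only
    # genotypes coded as risk-allele counts (0, 1 or 2) per SNP
    genotypes = rng.integers(0, 3, size=(n_subjects, n_snps))

    # unweighted risk score: number of risk alleles a subject carries
    risk_score = genotypes.sum(axis=1)

    # simulated phenotype (e.g., alcohol use severity) with a small score effect
    severity = 0.15 * risk_score + rng.normal(size=n_subjects)

    result = linregress(risk_score, severity)
    print(f"beta = {result.slope:.3f}, p = {result.pvalue:.3g}")

    A weighted score would simply multiply each allele count by a per-SNP weight (for example, a previously published effect size) before summing.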
  • De Groot, F., Huettig, F., & Olivers, C. N. L. (2016). Revisiting the looking at nothing phenomenon: Visual and semantic biases in memory search. Visual Cognition, 24, 226-245. doi:10.1080/13506285.2016.1221013.

    Abstract

    When visual stimuli remain present during search, people spend more time fixating objects that are semantically or visually related to the target instruction than fixating unrelated objects. Are these semantic and visual biases also observable when participants search within memory? We removed the visual display prior to search while continuously measuring eye movements towards locations previously occupied by objects. The target absent trials contained objects that were either visually or semantically related to the target instruction. When the overall mean proportion of fixation time was considered, we found biases towards the location previously occupied by the target, but failed to find biases towards visually or semantically related objects. However, in two experiments, the pattern of biases towards the target over time provided a reliable predictor for biases towards the visually and semantically related objects. We therefore conclude that visual and semantic representations alone can guide eye movements in memory search, but that orienting biases are weak when the stimuli are no longer present.
  • De Groot, F., Huettig, F., & Olivers, C. N. L. (2016). When meaning matters: The temporal dynamics of semantic influences on visual attention. Journal of Experimental Psychology: Human Perception and Performance, 42(2), 180-196. doi:10.1037/xhp0000102.

    Abstract

    An important question is to what extent visual attention is driven by the semantics of individual objects, rather than by their visual appearance. This study investigates the hypothesis that timing is a crucial factor in the occurrence and strength of semantic influences on visual orienting. To assess the dynamics of such influences, the target instruction was presented either before or after visual stimulus onset, while eye movements were continuously recorded throughout the search. The results show a substantial but delayed bias in orienting towards semantically related objects compared to visually related objects when target instruction is presented before visual stimulus onset. However, this delay can be completely undone by presenting the visual information before the target instruction (Experiment 1). Moreover, the absence or presence of visual competition does not change the temporal dynamics of the semantic bias (Experiment 2). Visual orienting is thus driven by priority settings that dynamically shift between visual and semantic representations, with each of these types of bias operating largely independently. The findings bridge the divide between the visual attention and the psycholinguistic literature.
  • De Groot, F., Koelewijn, T., Huettig, F., & Olivers, C. N. L. (2016). A stimulus set of words and pictures matched for visual and semantic similarity. Journal of Cognitive Psychology, 28(1), 1-15. doi:10.1080/20445911.2015.1101119.

    Abstract

    Researchers in different fields of psychology have been interested in how vision and language interact, and what type of representations are involved in such interactions. We introduce a stimulus set that facilitates such research (available online). The set consists of 100 words each of which is paired with four pictures of objects: One semantically similar object (but visually dissimilar), one visually similar object (but semantically dissimilar), and two unrelated objects. Visual and semantic similarity ratings between corresponding items are provided for every picture for Dutch and for English. In addition, visual and linguistic parameters of each picture are reported. We thus present a stimulus set from which researchers can select, on the basis of various parameters, the items most optimal for their research question.

  • Grünloh, T., & Liszkowski, U. (2015). Prelinguistic vocalizations distinguish pointing acts. Journal of Child Language, 42(6), 1312-1336. doi:10.1017/S0305000914000816.

    Abstract

    The current study investigated whether point-accompanying characteristics, like vocalizations and hand shape, differentiate infants' underlying motives of prelinguistic pointing. We elicited imperative (requestive) and declarative (expressive and informative) pointing acts in experimentally controlled situations, and analyzed accompanying characteristics. Experiment 1 revealed that prosodic characteristics of point-accompanying vocalizations distinguished requestive from both expressive and informative pointing acts, with little differences between the latter two. In addition, requestive points were more often realized with the whole hand than the index finger, while this was the opposite for expressive and informative acts. Experiment 2 replicated Experiment 1, revealing distinct prosodic characteristics for requestive pointing also when the referent was distal and when it had an index-finger shape. Findings reveal that beyond the social context, point-accompanying vocalizations give clues to infants' underlying intentions when pointing.
  • Guadalupe, T., Zwiers, M. P., Wittfeld, K., Teumer, A., Vasquez, A. A., Hoogman, M., Hagoort, P., Fernandez, G., Buitelaar, J., van Bokhoven, H., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2015). Asymmetry within and around the human planum temporale is sexually dimorphic and influenced by genes involved in steroid hormone receptor activity. Cortex, 62, 41-55. doi:10.1016/j.cortex.2014.07.015.

    Abstract

    The genetic determinants of cerebral asymmetries are unknown. Sex differences in asymmetry of the planum temporale, which overlaps with Wernicke’s classical language area, have been inconsistently reported. Meta-analysis of previous studies has suggested that publication bias established this sex difference in the literature. Using probabilistic definitions of cortical regions we screened over the cerebral cortex for sexual dimorphisms of asymmetry in 2337 healthy subjects, and found the planum temporale to show the strongest sex-linked asymmetry of all regions, which was supported by two further datasets, and also by analysis with the Freesurfer package that performs automated parcellation of cerebral cortical regions. We performed a genome-wide association scan meta-analysis of planum temporale asymmetry in a pooled sample of 3095 subjects, followed by a candidate-driven approach which measured a significant enrichment of association in genes of the 'steroid hormone receptor activity' and 'steroid metabolic process' pathways. Variants in the genes and pathways identified may affect the role of the planum temporale in language cognition.
  • Guadalupe, T., Willems, R. M., Zwiers, M., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Franke, B., Fisher, S. E., & Francks, C. (2014). Differences in cerebral cortical anatomy of left- and right-handers. Frontiers in Psychology, 5: 261. doi:10.3389/fpsyg.2014.00261.

    Abstract

    The left and right sides of the human brain are specialized for different kinds of information processing, and much of our cognition is lateralized to an extent towards one side or the other. Handedness is a reflection of nervous system lateralization. Roughly ten percent of people are mixed- or left-handed, and they show an elevated rate of reductions or reversals of some cerebral functional asymmetries compared to right-handers. Brain anatomical correlates of left-handedness have also been suggested. However, the relationships of left-handedness to brain structure and function remain far from clear. We carried out a comprehensive analysis of cortical surface area differences between 106 left-handed subjects and 1960 right-handed subjects, measured using an automated method of regional parcellation (FreeSurfer, Destrieux atlas). This is the largest study sample that has so far been used in relation to this issue. No individual cortical region showed an association with left-handedness that survived statistical correction for multiple testing, although there was a nominally significant association with the surface area of a previously implicated region: the left precentral sulcus. Identifying brain structural correlates of handedness may prove useful for genetic studies of cerebral asymmetries, as well as providing new avenues for the study of relations between handedness, cerebral lateralization and cognition.
  • Guadalupe, T., Zwiers, M. P., Teumer, A., Wittfeld, K., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2014). Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets. Human Brain Mapping, 35(7), 3277-3289. doi:10.1002/hbm.22401.

    Abstract

    Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P < 5 × 10⁻⁸). There was no enrichment of genetic association in genes involved in left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries.
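
    Illustration (not from the publication): the repeatability comparison described above can be mimicked by computing a standard (L − R) / (L + R) asymmetry index from simulated left/right volumes measured twice and correlating scan with rescan. All numbers below are simulated and illustrative; the point is only that differencing two noisy volumes tends to lower scan-rescan agreement for the asymmetry index relative to the raw volumes.

    # Minimal sketch, assuming simulated volumes: scan-rescan agreement for a
    # raw volume versus its left-right asymmetry index.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(2)
    n_subjects = 235
    # simulated left/right volumes (mm^3) for one structure, scan 1
    left_scan1 = rng.normal(3600.0, 300.0, n_subjects)
    right_scan1 = left_scan1 + rng.normal(40.0, 60.0, n_subjects)
    # scan 2 = scan 1 plus measurement noise on each side
    left_scan2 = left_scan1 + rng.normal(0.0, 80.0, n_subjects)
    right_scan2 = right_scan1 + rng.normal(0.0, 80.0, n_subjects)

    def asymmetry_index(left, right):
        # a common (L - R) / (L + R) asymmetry index
        return (left - right) / (left + right)

    r_vol, _ = pearsonr(left_scan1, left_scan2)       # agreement for the raw volume
    r_ai, _ = pearsonr(asymmetry_index(left_scan1, right_scan1),
                       asymmetry_index(left_scan2, right_scan2))
    print(f"scan-rescan r: volume = {r_vol:.2f}, asymmetry index = {r_ai:.2f}")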
  • Gubian, M., Torreira, F., & Boves, L. (2015). Using functional data analysis for investigating multidimensional dynamic phonetic contrasts. Journal of Phonetics, 49, 16-40. doi:10.1016/j.wocn.2014.10.001.

    Abstract

    The study of phonetic contrasts and related phenomena, e.g. inter- and intra-speaker variability, often requires analysing data in the form of measured time series, like f0 contours and formant trajectories. As a consequence, the investigator has to find suitable ways to reduce the raw and abundant numerical information contained in a bundle of time series into a small but sufficient set of numerical descriptors of their shape. This approach requires one to decide in advance which dynamic traits to include in the analysis and which to exclude. For example, a rising pitch gesture may be represented by its duration and slope, hence reducing it to a straight segment, or by a richer coding specifying also whether (and how much) the rising contour is concave or convex, the latter being irrelevant in some contexts but crucial in others. Decisions become even more complex when a phenomenon is described by a multidimensional time series, e.g. by the first two formants. In this paper we introduce a methodology based on Functional Data Analysis (FDA) that allows the investigator to delegate most of the decisions involved in the quantitative description of multidimensional time series to the data themselves. FDA produces a data-driven parametrisation of the main shape traits present in the data that is visually interpretable, in the same way as slopes or peak heights are. These output parameters are numbers that are amenable to ordinary statistical analysis, e.g. linear (mixed effects) models. FDA is also able to capture correlations among different dimensions of a time series, e.g. between formants F1 and F2. We present FDA by means of an extended case study on the diphthong–hiatus distinction in Spanish, a contrast that involves duration, formant trajectories and pitch contours.
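
    Illustration (not the paper's implementation): the core FDA idea of letting the data propose their own shape descriptors can be approximated by a functional PCA, here reduced to an ordinary PCA over resampled, mean-centred contours. The curves below are simulated "f0-like" trajectories; every name, size and parameter is illustrative.

    # Minimal sketch, assuming simulated contours: PCA scores as data-driven
    # shape descriptors that can feed ordinary (mixed-effects) regressions.
    import numpy as np

    rng = np.random.default_rng(1)
    n_curves, n_samples = 60, 50
    t = np.linspace(0.0, 1.0, n_samples)

    # simulate rising contours whose slope and curvature vary across speakers
    slopes = rng.normal(1.0, 0.3, n_curves)
    curvatures = rng.normal(0.0, 0.5, n_curves)
    curves = slopes[:, None] * t + curvatures[:, None] * (t - 0.5) ** 2
    curves += rng.normal(0.0, 0.05, size=curves.shape)   # measurement noise

    # PCA via SVD of the mean-centred data matrix
    centred = curves - curves.mean(axis=0)
    U, S, Vt = np.linalg.svd(centred, full_matrices=False)

    scores = U * S                      # one row of shape descriptors per curve
    explained = S**2 / np.sum(S**2)
    print("variance explained by first two components:", np.round(explained[:2], 3))

    The rows of Vt are the corresponding shape components (here roughly "overall slope" and "concave vs. convex"), which is what makes scores of this kind interpretable in the way the abstract describes.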
  • Le Guen, O. (2005). Geografía de lo sagrado entre los Mayas Yucatecos de Quintana Roo: configuración del espacio y su aprendizaje entre los niños [Geography of the sacred among the Yucatec Maya of Quintana Roo: The configuration of space and how children learn it]. Ketzalcalli, 2005(1), 54-68.
  • Le Guen, O., Samland, J., Friedrich, T., Hanus, D., & Brown, P. (2015). Making sense of (exceptional) causal relations. A cross-cultural and cross-linguistic study. Frontiers in Psychology, 6: 1645. doi:10.3389/fpsyg.2015.01645.

    Abstract

    In order to make sense of the world, humans tend to see causation almost everywhere. Although most causal relations may seem straightforward, they are not always construed in the same way cross-culturally. In this study, we investigate concepts of ‘chance’, ‘coincidence’ or ‘randomness’ that refer to assumed relations between intention, action, and outcome in situations, and we ask how people from different cultures make sense of such non-law-like connections. Based on a framework proposed by Alicke (2000), we administered a task that aims to be a neutral tool for investigating causal construals cross-culturally and cross-linguistically. Members of four different cultural groups, rural Mayan Yucatec and Tseltal speakers from Mexico and urban students from Mexico and Germany, were presented with a set of scenarios involving various types of causal and non-causal relations and were asked to explain the described events. Three links varied as to whether they were present or not in the scenarios: Intention to Action, Action to Outcome, and Intention to Outcome. Our results show that causality is recognized in all four cultural groups. However, how causality and especially non-law-like causality are interpreted depends on the type of links, the cultural background and the language used. In all three groups, Action to Outcome is the decisive link for recognizing causality. Despite the fact that the two Mayan groups share similar cultural backgrounds, they display different ideologies regarding concepts of non-law causality. The data suggests that the concept of ‘chance’ is not universal, but seems to be an explanation that only some cultural groups draw on to make sense of specific situations. Of particular importance is the existence of linguistic concepts in each language that trigger ideas of causality in the responses from each cultural group.

    Additional information

    LeGuen_etal_2015sup.docx
  • Guerra, E., & Knoeferle, P. (2014). Spatial distance effects on incremental semantic interpretation of abstract sentences: Evidence from eye tracking. Cognition, 133(3), 535-552. doi:10.1016/j.cognition.2014.07.007.

    Abstract

    A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither referenced by, nor lexically associated with, the sentence. Three eye-tracking reading experiments examined the effects of spatial distance between words (Experiment 1) and objects (Experiment 2 and 3) on participants’ reading times for sentences that convey similarity or difference between two abstract nouns (e.g., ‘Peace and war are certainly different...’). Before reading the sentence, participants inspected a visual context with two playing cards that moved either far apart or close together. In Experiment 1, the cards turned and showed the first two nouns of the sentence (e.g., ‘peace’, ‘war’). In Experiments 2 and 3, they turned but remained blank. Participants’ reading times at the adjective (Experiment 1: first-pass reading time; Experiment 2: total times) and at the second noun phrase (Experiment 3: first-pass times) were faster for sentences that expressed similarity when the preceding words/objects were close together (vs. far apart) and for sentences that expressed dissimilarity when the preceding words/objects were far apart (vs. close together). Thus, spatial distance between words or entirely unrelated objects can rapidly and incrementally modulate the semantic interpretation of abstract sentences.

    Additional information

    mmc1.doc
  • Guest, O., & Rougier, N. P. (2016). "What is computational reproducibility?" and "Diversity in reproducibility". IEEE CIS Newsletter on Cognitive and Developmental Systems, 13(2), 4 and 12.
  • Guggenheim, J. A., St Pourcain, B., McMahon, G., Timpson, N. J., Evans, D. M., & Williams, C. (2015). Assumption-free estimation of the genetic contribution to refractive error across childhood. Molecular Vision, 21, 621-632. Retrieved from http://www.molvis.org/molvis/v21/621.

    Abstract

    Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75–90%, families 15–70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias.
    Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age=3,404).
    The variance in refractive error explained by the SNPs (“SNP heritability”) was stable over childhood: Across age 7–15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001) demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8–9 years old.
    Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects is expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population.
  • Guggenheim, J. A., Williams, C., Northstone, K., Howe, L. D., Tilling, K., St Pourcain, B., McMahon, G., & Lawlor, D. A. (2014). Does Vitamin D Mediate the Protective Effects of Time Outdoors On Myopia? Findings From a Prospective Birth Cohort. Investigative Ophthalmology & Visual Science, 55(12), 8550-8558. doi:10.1167/iovs.14-15839.
  • Gullberg, M. (2005). L'expression orale et gestuelle de la cohésion dans le discours de locuteurs langue 2 débutants [The oral and gestural expression of cohesion in the discourse of beginning L2 speakers]. AILE, 23, 153-172.
  • Gullberg, M. (1995). Giving language a hand: gesture as a cue based communicative strategy. Working Papers, Lund University, Dept. of Linguistics, 44, 41-60.

    Abstract

    All accounts of communicative behaviour in general, and communicative strategies in particular, mention gesture in relation to language acquisition (cf. Faerch & Kasper 1983 for an overview). However, few attempts have been made to investigate how spoken language and spontaneous gesture combine to determine discourse referents. Referential gesture and referential discourse will be of particular interest, since communicative strategies in second language discourse often involve labelling problems.

    This paper will focus on two issues:

    1) Within a cognitive account of communicative strategies, gesture will be seen to be part of conceptual or analysis-based strategies, in that relational features in the referents are exploited;

    2) It will be argued that communication strategies can be seen in terms of cue manipulation in the same sense as sentence processing has been analysed in terms of competing cues. Strategic behaviour, and indeed the process of referring in general, are seen in terms of cues, combining or competing to determine discourse referents. Gesture can then be regarded as being such a cue at the discourse level, and as a cue-based communicative strategy, in that gesture functions by exploiting physically based cues which can be recognised as being part of the referent. The question of iconicity and motivation vs. the arbitrary qualities of gesture as a strategic cue will be addressed in connection with this.
  • Gupta, C. N., Calhoun, V. D., Rachkonda, S., Chen, J., Patel, V., Liu, J., Segall, J., Franke, B., Zwiers, M. P., Arias-Vasquez, A., Buitelaar, J., Fisher, S. E., Fernández, G., van Erp, T. G. M., Potkin, S., Ford, J., Matalon, D., McEwen, S., Lee, H. J., Mueller, B. A., Greve, D. N., Andreassen, O., Agartz, I., Gollub, R. L., Sponheim, S. R., Ehrlich, S., Wang, L., Pearlson, G., Glahn, D. S., Sprooten, E., Mayer, A. R., Stephen, J., Jung, R. E., Canive, J., Bustillo, J., & Turner, J. A. (2015). Patterns of gray matter abnormalities in schizophrenia based on an international mega-analysis. Schizophrenia Bulletin, 41(5), 1133-1142. doi:10.1093/schbul/sbu177.

    Abstract

    Analyses of gray matter concentration (GMC) deficits in patients with schizophrenia (Sz) have identified robust changes throughout the cortex. We assessed the relationships between diagnosis, overall symptom severity, and patterns of gray matter in the largest aggregated structural imaging dataset to date. We performed both source-based morphometry (SBM) and voxel-based morphometry (VBM) analyses on GMC images from 784 Sz and 936 controls (Ct) across 23 scanning sites in Europe and the United States. After correcting for age, gender, site, and diagnosis by site interactions, SBM analyses showed 9 patterns of diagnostic differences. They comprised separate cortical, subcortical, and cerebellar regions. Seven patterns showed greater GMC in Ct than Sz, while 2 (brainstem and cerebellum) showed greater GMC for Sz. The greatest GMC deficit was in a single pattern comprising regions in the superior temporal gyrus, inferior frontal gyrus, and medial frontal cortex, which replicated over analyses of data subsets. VBM analyses identified overall cortical GMC loss and one small cluster of increased GMC in Sz, which overlapped with the SBM brainstem component. We found no significant association between the component loadings and symptom severity in either analysis. This mega-analysis confirms that the commonly found GMC loss in Sz in the anterior temporal lobe, insula, and medial frontal lobe form a single, consistent spatial pattern even in such a diverse dataset. The separation of GMC loss into robust, repeatable spatial patterns across multiple datasets paves the way for the application of these methods to identify subtle genetic and clinical cohort effects.
  • Hagoort, P. (2005). On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9(9), 416-423. doi:10.1016/j.tics.2005.07.004.

    Abstract

    In speaking and comprehending language, word information is retrieved from memory and combined into larger units (unification). Unification operations take place in parallel at the semantic, syntactic and phonological levels of processing. This article proposes a new framework that connects psycholinguistic models to a neurobiological account of language. According to this proposal the left inferior frontal gyrus (LIFG) plays an important role in unification. Research in other domains of cognition indicates that left prefrontal cortex has the necessary neurobiological characteristics for its involvement in the unification for language. I offer here a psycholinguistic perspective on the nature of language unification and the role of LIFG.
  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen [The electrophysiology of language: What brain potentials reveal about the human language faculty]. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter [The speaker as sprinter]. Psychologie, 17, 48-49.
  • Hagoort, P. (2005). De talige aap [The linguistic ape]. Linguaan, 26-35.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk [Brain and language in research and practice]. Neuropraxis, 6, 204-205.
  • Hagoort, P. (2014). Nodes and networks in the neural architecture for language: Broca's region and beyond. Current Opinion in Neurobiology, 28, 136-141. doi:10.1016/j.conb.2014.07.013.

    Abstract

    Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke–Lichtheim–Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.
  • Hagoort, P., Brown, C. M., & Swaab, T. Y. (1995). Semantic deficits in right hemisphere patients. Brain and Language, 51, 161-163. doi:10.1006/brln.1995.1058.
  • Hagoort, P., & Indefrey, P. (2014). The neurobiology of language beyond single words. Annual Review of Neuroscience, 37, 347-362. doi:10.1146/annurev-neuro-071013-013847.

    Abstract

    A hallmark of human language is that we combine lexical building blocks retrieved from memory in endless new ways. This combinatorial aspect of language is referred to as unification. Here we focus on the neurobiological infrastructure for syntactic and semantic unification. Unification is characterized by a high-speed temporal profile including both prediction and integration of retrieved lexical elements. A meta-analysis of numerous neuroimaging studies reveals a clear dorsal/ventral gradient in both left inferior frontal cortex and left posterior temporal cortex, with dorsal foci for syntactic processing and ventral foci for semantic processing. In addition to core areas for unification, further networks need to be recruited to realize language-driven communication to its full extent. One example is the theory of mind network, which allows listeners and readers to infer the intended message (speaker meaning) from the coded meaning of the linguistic utterance. This indicates that sensorimotor simulation cannot handle all of language processing.
  • Hall, M. L., Ahn, D., Mayberry, R. I., & Ferreira, V. S. (2015). Production and comprehension show divergent constituent order preferences: Evidence from elicited pantomime. Journal of Memory and Language, 81, 16-33. doi:10.1016/j.jml.2014.12.003.

    Abstract

    All natural languages develop devices to communicate who did what to whom. Elicited pantomime provides one model for studying this process, by providing a window into how humans (hearing non-signers) behave in a natural communicative modality (silent gesture) without established conventions from a grammar. Most studies in this paradigm focus on production, although they sometimes make assumptions about how comprehenders would likely behave. Here, we directly assess how naïve speakers of English (Experiments 1 & 2), Korean (Experiment 1), and Turkish (Experiment 2) comprehend pantomimed descriptions of transitive events, which are either semantically reversible (Experiments 1 & 2) or not (Experiment 2). Contrary to previous assumptions, we find no evidence that Person-Person-Action sequences are ambiguous to comprehenders, who simply adopt an agent-first parsing heuristic for all constituent orders. We do find that Person-Action-Person sequences yield the most consistent interpretations, even in native speakers of SOV languages. The full range of behavior in both production and comprehension provides counter-evidence to the notion that producers’ utterances are motivated by the needs of comprehenders. Instead, we argue that production and comprehension are subject to different sets of cognitive pressures, and that the dynamic interaction between these competing pressures can help explain synchronic and diachronic constituent order phenomena in natural human languages, both signed and spoken.
  • Hammarström, H., & Güldemann, T. (2014). Quantifying geographical determinants of large-scale distributions of linguistic features. Language Dynamics and Change, 4, 87-115. doi:10.1163/22105832-00401002.

    Abstract

    In the recent past, the work on large-scale linguistic distributions across the globe has intensified considerably. Work on macro-areal relationships in Africa (Güldemann, 2010) suggests that the shape of convergence areas may be determined by climatic factors and geophysical features such as mountains, water bodies, coastlines, etc. Worldwide data is now available for geophysical features as well as linguistic features, including numeral systems and basic constituent order. We explore the possibility that the shape of areal aggregations of individual features in these two linguistic domains correlates with Köppen-Geiger climate zones. Furthermore, we test the hypothesis that the shape of such areal feature aggregations is determined by the contour of adjacent geophysical features like mountain ranges or coastlines. In these first basic tests, we do not find clear evidence that either Köppen-Geiger climate zones or the contours of geophysical features are major predictors for the linguistic data at hand.

  • Hammarström, H., & Donohue, M. (2014). Some principles on the use of macro-areas in typological comparison. Language Dynamics and Change, 4, 167-187. doi:10.1163/22105832-00401001.

    Abstract

    While the notion of the ‘area’ or ‘Sprachbund’ has a long history in linguistics, with geographically defined regions frequently cited as a useful means to explain typological distributions, the problem of delimiting areas has not been well addressed. Lists of general-purpose, largely independent ‘macro-areas’ (typically continent size) have been proposed as a step to rule out contact as an explanation for various large-scale linguistic phenomena. This squib points out some problems in some of the currently widely used predetermined areas, namely those found in the World Atlas of Language Structures (Haspelmath et al., 2005). Instead, we propose a principled division of the world’s landmasses into six macro-areas that arguably have better geographical independence properties.
  • Hammarström, H. (2016). Commentary: There is no demonstrable effect of desiccation [Commentary on "Language evolution and climate: The case of desiccation and tone"]. Journal of Language Evolution, 1, 65-69. doi:10.1093/jole/lzv015.
  • Hammarström, H. (2014). [Review of the book A grammar of the great Andamanese language: An ethnolinguistic study by Anvita Abbi]. Journal of South Asian Languages and Linguistics, 1, 111-116. doi:10.1515/jsall-2014-0007.
  • Hammarström, H. (2016). Linguistic diversity and language evolution. Journal of Language Evolution, 1, 19-29. doi:10.1093/jole/lzw002.

    Abstract

    What would your ideas about language evolution be if there were only one language left on earth? Fortunately, our investigation need not be that impoverished. In the present article, we survey the state of knowledge regarding the kinds of language found among humans, the language inventory, population sizes, time depth, grammatical variation, and other relevant issues that a theory of language evolution should minimally take into account.
  • Hammarström, H. (2015). Ethnologue 16/17/18th editions: A comprehensive review. Language, 91, 723-737. doi:10.1353/lan.2015.0038.

    Abstract

    Ethnologue (http://www.ethnologue.com) is the most widely consulted inventory of the world’s languages used today. The present review article looks carefully at the goals and description of the content of the Ethnologue’s 16th, 17th, and 18th editions, and reports on a comprehensive survey of the accuracy of the inventory itself. While hundreds of spurious and missing languages can be documented for Ethnologue, it is at present still better than any other nonderivative work of the same scope, in all aspects but one: Ethnologue fails to disclose the sources for the information presented, at odds with well-established scientific principles. The classification of languages into families in Ethnologue is also evaluated, and found to be far off from that argued in the specialist literature on the classification of individual languages. Ethnologue is frequently held to be splitting: that is, it tends to recognize more languages than an application of the criterion of mutual intelligibility would yield. By means of a random sample, we find that, indeed, with confidence intervals, the number of mutually unintelligible languages is on average 85% of the number found in Ethnologue.
  • Hammarström, H. (2015). Ethnologue 16/17/18th editions: A comprehensive review: Online appendices. Language, 91(3), s1-s188. doi:10.1353/lan.2015.0049.
  • Hanique, I., Ernestus, M., & Boves, L. (2015). Choice and pronunciation of words: Individual differences within a homogeneous group of speakers. Corpus Linguistics and Linguistic Theory, 11, 161-185. doi:10.1515/cllt-2014-0025.

    Abstract

    This paper investigates whether individual speakers forming a homogeneous group differ in their choice and pronunciation of words when engaged in casual conversation, and if so, how they differ. More specifically, it examines whether the Balanced Winnow classifier is able to distinguish between the twenty speakers of the Ernestus Corpus of Spontaneous Dutch, who all have the same social background. To examine differences in choice and pronunciation of words, instead of characteristics of the speech signal itself, classification was based on lexical and pronunciation features extracted from hand-made orthographic and automatically generated broad phonetic transcriptions. The lexical features consisted of words and two-word combinations. The pronunciation features represented pronunciation variations at the word and phone level that are typical for casual speech. The best classifier achieved a performance of 79.9% and was based on the lexical features and on the pronunciation features representing single phones and triphones. The speakers must thus differ from each other in these features. Inspection of the relevant features indicated that, among other things, the words relevant for classification generally do not contain much semantic content, and that speakers differ not only from each other in the use of these words but also in their pronunciation.
  • Hannerfors, A.-K., Hellgren, C., Schijven, D., Iliadis, S. I., Comasco, E., Skalkidou, A., Olivier, J. D., & Sundström-Poromaa, I. (2015). Treatment with serotonin reuptake inhibitors during pregnancy is associated with elevated corticotropin-releasing hormone levels. Psychoneuroendocrinology, 58, 104-113. doi:10.1016/j.psyneuen.2015.04.009.

    Abstract

    Treatment with serotonin reuptake inhibitors (SSRI) has been associated with an increased risk of preterm birth, but causality remains unclear. While placental CRH production is correlated with gestational length and preterm birth, it has been difficult to establish if psychological stress or mental health problems are associated with increased CRH levels. This study compared second trimester CRH serum concentrations in pregnant women on SSRI treatment (n=207) with untreated depressed women (n=56) and controls (n=609). A secondary aim was to investigate the combined effect of SSRI treatment and CRH levels on gestational length and risk for preterm birth. Women on SSRI treatment had significantly higher second trimester CRH levels than controls and untreated depressed women. CRH levels and SSRI treatment were independently associated with shorter gestational length. The combined effect of SSRI treatment and high CRH levels yielded the highest risk estimate for preterm birth. SSRI treatment during pregnancy is associated with increased CRH levels. However, the elevated risk for preterm birth in SSRI users appears not to be mediated by increased placental CRH production; instead, CRH appears to be an independent risk factor for shorter gestational length and preterm birth.
  • Hao, X., Huang, Y., Li, X., Song, Y., Kong, X., Wang, X., Yang, Z., Zhen, Z., & Liu, J. (2016). Structural and functional neural correlates of spatial navigation: A combined voxel‐based morphometry and functional connectivity study. Brain and Behavior, 6(12): e00572. doi:10.1002/brb3.572.

    Abstract

    Introduction: Navigation is a fundamental and multidimensional cognitive function that individuals rely on to move around the environment. In this study, we investigated the neural basis of human spatial navigation ability. Methods: A large cohort of participants (N > 200) was examined on their navigation ability behaviorally and structural and functional magnetic resonance imaging (MRI) were then used to explore the corresponding neural basis of spatial navigation. Results: The gray matter volume (GMV) of the bilateral parahippocampus (PHG), retrosplenial complex (RSC), entorhinal cortex (EC), hippocampus (HPC), and thalamus (THAL) was correlated with the participants’ self-reported navigational ability in general, and their sense of direction in particular. Further fMRI studies showed that the PHG, RSC, and EC selectively responded to visually presented scenes, whereas the HPC and THAL showed no selectivity, suggesting a functional division of labor among these regions in spatial navigation. The resting-state functional connectivity analysis further revealed a hierarchical neural network for navigation constituted by these regions, which can be further categorized into three relatively independent components (i.e., scene recognition component, cognitive map component, and the component of heading direction for locomotion, respectively). Conclusions: Our study combined multi-modality imaging data to illustrate that multiple brain regions may work collaboratively to extract, integrate, store, and orientate spatial information to guide navigation behaviors.

    Additional information

    brb3572-sup-0001-FigS1-S4.docx
  • Hardies, K., De Kovel, C. G. F., Weckhuysen, S., Asselbergh, B., Geuens, T., Deconinck, T., Azmi, A., May, P., Brilstra, E., Becker, F., Barisic, N., Craiu, D., Braun, K. P. J., Lal, D., Thiele, H., Schubert, J., Weber, Y., van't Slot, R., Nurnberg, P., Balling, R., Timmerman, V., Lerche, H., Maudsley, S., Helbig, I., Suls, A., Koeleman, B. P. C., De Jonghe, P., & Euro Res Consortium (2015). Recessive mutations in SLC13A5 result in a loss of citrate transport and cause neonatal epilepsy, developmental delay and teeth hypoplasia. Brain, 138(11), 3238-3250. doi:10.1093/brain/awv263.

    Abstract

    The epileptic encephalopathies are a clinically and aetiologically heterogeneous subgroup of epilepsy syndromes. Most epileptic encephalopathies have a genetic cause and patients are often found to carry a heterozygous de novo mutation in one of the genes associated with the disease entity. Occasionally recessive mutations are identified: a recent publication described a distinct neonatal epileptic encephalopathy (MIM 615905) caused by autosomal recessive mutations in the SLC13A5 gene. Here, we report eight additional patients belonging to four different families with autosomal recessive mutations in SLC13A5. SLC13A5 encodes a high affinity sodium-dependent citrate transporter, which is expressed in the brain. Neurons are considered incapable of de novo synthesis of tricarboxylic acid cycle intermediates; therefore they rely on the uptake of intermediates, such as citrate, to maintain their energy status and neurotransmitter production. The effect of all seven identified mutations (two premature stops and five amino acid substitutions) was studied in vitro, using immunocytochemistry, selective western blot and mass spectrometry. We hereby demonstrate that cells expressing mutant sodium-dependent citrate transporter have a complete loss of citrate uptake due to various cellular loss-of-function mechanisms. In addition, we provide independent proof of the involvement of autosomal recessive SLC13A5 mutations in the development of neonatal epileptic encephalopathies, and highlight teeth hypoplasia as a possible indicator for SLC13A5 screening. All three patients who tried the ketogenic diet responded well to this treatment, and future studies will allow us to ascertain whether this is a recurrent feature in this severe disorder.
  • Hartung, F., Burke, M., Hagoort, P., & Willems, R. M. (2016). Taking perspective: Personal pronouns affect experiential aspects of literary reading. PLoS One, 11(5): e0154732. doi:10.1371/journal.pone.0154732.

    Abstract

    Personal pronouns have been shown to influence cognitive perspective taking during comprehension. Studies using single sentences found that 3rd person pronouns facilitate the construction of a mental model from an observer’s perspective, whereas 2nd person pronouns support an actor’s perspective. The direction of the effect for 1st person pronouns seems to depend on the situational context. In the present study, we investigated how personal pronouns influence discourse comprehension when people read fiction stories and if this has consequences for affective components like emotion during reading or appreciation of the story. We wanted to find out if personal pronouns affect immersion and arousal, as well as appreciation of fiction. In a natural reading paradigm, we measured electrodermal activity and story immersion, while participants read literary stories with 1st and 3rd person pronouns referring to the protagonist. In addition, participants rated and ranked the stories for appreciation. Our results show that stories with 1st person pronouns lead to higher immersion. Two factors—transportation into the story world and mental imagery during reading—in particular showed higher scores for 1st person as compared to 3rd person pronoun stories. In contrast, arousal as measured by electrodermal activity seemed tentatively higher for 3rd person pronoun stories. The two measures of appreciation were not affected by the pronoun manipulation. Our findings underscore the importance of perspective for language processing, and additionally show which aspects of the narrative experience are influenced by a change in perspective.
  • Haun, D. B. M., Rekers, Y., & Tomasello, M. (2014). Children conform to the behavior of peers; Other great apes stick with what they know. Psychological Science, 25, 2160-2167. doi:10.1177/0956797614553235.

    Abstract

    All primates learn things from conspecifics socially, but it is not clear whether they conform to the behavior of these conspecifics—if conformity is defined as overriding individually acquired behavioral tendencies in order to copy peers’ behavior. In the current study, chimpanzees, orangutans, and 2-year-old human children individually acquired a problem-solving strategy. They then watched several conspecific peers demonstrate an alternative strategy. The children switched to this new, socially demonstrated strategy in roughly half of all instances, whereas the other two great-ape species almost never adjusted their behavior to the majority’s. In a follow-up study, children switched much more when the peer demonstrators were still present than when they were absent, which suggests that their conformity arose at least in part from social motivations. These results demonstrate an important difference between the social learning of humans and great apes, a difference that might help to account for differences in human and nonhuman cultures.

    Additional information

    Haun_Rekers_Tomasello_2014_supp.pdf
  • Haun, D. B. M., Allen, G. L., & Wedell, D. H. (2005). Bias in spatial memory: A categorical endorsement. Acta Psychologica, 118(1-2), 149-170. doi:10.1016/j.actpsy.2004.10.011.
  • Hay, J. B., & Baayen, R. H. (2005). Shifting paradigms: Gradient structure in morphology. Trends in Cognitive Sciences, 9(7), 342-348. doi:10.1016/j.tics.2005.04.002.

    Abstract

    Morphology is the study of the internal structure of words. A vigorous ongoing debate surrounds the question of how such internal structure is best accounted for: by means of lexical entries and deterministic symbolic rules, or by means of probabilistic subsymbolic networks implicitly encoding structural similarities in connection weights. In this review, we separate the question of subsymbolic versus symbolic implementation from the question of deterministic versus probabilistic structure. We outline a growing body of evidence, mostly external to the above debate, indicating that morphological structure is indeed intrinsically graded. By allowing probability into the grammar, progress can be made towards solving some long-standing puzzles in morphological theory.
  • Heeschen, C., Ryalls, J., & Hagoort, P. (1988). Psychological stress in Broca's versus Wernicke's aphasia. Clinical Linguistics & Phonetics, 2, 309-316. doi:10.3109/02699208808985262.

    Abstract

    We advance the hypothesis here that the higher-than-average vocal pitch (F0) found for speech of Broca's aphasics in experimental settings is due, in part, to increased psychological stress. Two experiments were conducted which manipulated conversational constraints and the sentence forms to be produced by aphasic patients. Our study revealed significant differences between changes in vocal pitch of agrammatic Broca's aphasics versus those of Wernicke's aphasics and normal controls. It is suggested that the greater psychological stress experienced by the Broca's aphasics, but not by the Wernicke's aphasics, accounts for these observed differences.
  • Heidlmayr, K., Doré-Mazars, K., Aparicio, X., & Isel, F. (2016). Multiple language use influences oculomotor task performance: Neurophysiological evidence of a shared substrate between language and motor control. PLoS One, 11(11): e0165029. doi:10.1371/journal.pone.0165029.

    Abstract

    In the present electroencephalographical study, we asked to what extent executive control processes are shared by both the language and motor domain. The rationale was to examine whether executive control processes whose efficiency is reinforced by the frequent use of a second language can lead to a benefit in the control of eye movements, i.e. a non-linguistic activity. For this purpose, we administered to 19 highly proficient late French-German bilingual participants and to a control group of 20 French monolingual participants an antisaccade task, i.e. a specific motor task involving control. In this task, an automatic saccade has to be suppressed while a voluntary eye movement in the opposite direction has to be carried out. Here, our main hypothesis is that an advantage in the antisaccade task should be observed in the bilinguals if some properties of the control processes are shared between linguistic and motor domains. ERP data revealed clear differences between bilinguals and monolinguals. Critically, we showed an increased N2 effect size in bilinguals, thought to reflect better efficiency to monitor conflict, combined with reduced effect sizes on markers reflecting inhibitory control, i.e. cue-locked positivity, the target-locked P3 and the saccade-locked presaccadic positivity (PSP). Moreover, effective connectivity analyses (dynamic causal modelling; DCM) on the neuronal source level indicated that bilinguals rely more strongly on ACC-driven control while monolinguals rely on PFC-driven control. Taken together, our combined ERP and effective connectivity findings may reflect a dynamic interplay between strengthened conflict monitoring and subsequently more efficient inhibition in bilinguals. Finally, L2 proficiency and immersion experience constitute relevant factors of the language background that predict efficiency of inhibition. To conclude, the present study provided ERP and effective connectivity evidence for domain-general executive control involvement in handling multiple language use, leading to a control advantage in bilingualism.
  • Heidlmayr, K., Hemforth, B., Moutier, S., & Isel, F. (2015). Neurodynamics of executive control processes in bilinguals: Evidence from ERP and source reconstruction analyses. Frontiers in Psychology, 6: 821. doi:10.3389/fpsyg.2015.00821.

    Abstract

    The present study was designed to examine the impact of bilingualism on the neuronal activity in different executive control processes, namely conflict monitoring, control implementation (i.e., interference suppression and conflict resolution) and overcoming of inhibition. Twenty-two highly proficient but non-balanced successive French–German bilingual adults and 22 monolingual adults performed a combined Stroop/Negative priming task while event-related potentials (ERPs) were recorded online. The data revealed that the ERP effects were reduced in bilinguals in comparison to monolinguals but only in the Stroop task and limited to the N400 and the sustained fronto-central negative-going potential time windows. This result suggests that bilingualism may impact the process of control implementation rather than the process of conflict monitoring (N200). Critically, our study revealed a differential time course of the involvement of the anterior cingulate cortex (ACC) and the prefrontal cortex (PFC) in conflict processing. While the ACC showed major activation in the early time windows (N200 and N400) but not in the latest time window (late sustained negative-going potential), the PFC became unilaterally active in the left hemisphere in the N400 and the late sustained negative-going potential time windows. Taken together, the present electroencephalography data lend support to a cascading neurophysiological model of executive control processes, in which ACC and PFC may play a determining role.
  • Henderson, L., Coltheart, M., Cutler, A., & Vincent, N. (1988). Preface. Linguistics, 26(4), 519-520. doi:10.1515/ling.1988.26.4.519.
  • Hersh, T., King, B., & Lutton, B. V. (2014). Novel bioinformatics tools for analysis of gene expression in the skate, Leucoraja erinacea. The Bulletin, MDI Biological Laboratory, 53, 16-18.
  • Hervais-Adelman, A., Pefkou, M., & Golestani, N. (2014). Bilingual speech-in-noise: Neural bases of semantic context use in the native language. Brain and Language, 132, 1-6. doi:10.1016/j.bandl.2014.01.009.

    Abstract

    Bilingual listeners comprehend speech-in-noise better in their native than non-native language. This native-language benefit is thought to arise from greater use of top-down linguistic information to assist degraded speech comprehension. Using functional magnetic resonance imaging, we recently showed that left angular gyrus activation is modulated when semantic context is used to assist native language speech-in-noise comprehension (Golestani, Hervais-Adelman, Obleser, & Scott, 2013). Here, we extend the previous work, by reanalyzing the previous data alongside the results obtained in the non-native language of the same late bilingual participants. We found a behavioral benefit of semantic context in processing speech-in-noise in the native language only, and the imaging results also revealed a native language context effect in the left angular gyrus. We also find a complementary role of lower-level auditory regions during stimulus-driven processing. Our findings help to elucidate the neural basis of the established native language behavioral benefit of speech-in-noise processing.
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2015). Brain functional plasticity associated with the emergence of expertise in extreme language control. NeuroImage, 114, 264-274. doi:10.1016/j.neuroimage.2015.03.072.

    Abstract

    We used functional magnetic resonance imaging (fMRI) to longitudinally examine brain plasticity arising from long-term, intensive simultaneous interpretation training. Simultaneous interpretation is a bilingual task with heavy executive control demands. We compared brain responses observed during simultaneous interpretation with those observed during simultaneous speech repetition (shadowing) in a group of trainee simultaneous interpreters, at the beginning and at the end of their professional training program. Age, sex and language-proficiency matched controls were scanned at similar intervals. Using multivariate pattern classification, we found distributed patterns of changes in functional responses from the first to second scan that distinguished the interpreters from the controls. We also found reduced recruitment of the right caudate nucleus during simultaneous interpretation as a result of training. Such practice-related change is consistent with decreased demands on multilingual language control as the task becomes more automatized with practice. These results demonstrate the impact of simultaneous interpretation training on the brain functional response in a cerebral structure that is not specifically linguistic, but that is known to be involved in learning, in motor control, and in a variety of domain-general executive functions. Along with results of recent studies showing functional and structural adaptations in the caudate nuclei of experts in a broad range of domains, our results underline the importance of this structure as a central node in expertise-related networks.
  • Hervais-Adelman, A., Moser-Mercer, B., Michel, C. M., & Golestani, N. (2015). fMRI of simultaneous interpretation reveals the neural basis of extreme language control. Cerebral Cortex, 25(12), 4727-4739. doi:10.1093/cercor/bhu158.

    Abstract

    We used functional magnetic resonance imaging (fMRI) to examine the neural basis of extreme multilingual language control in a group of 50 multilingual participants. Comparing brain responses arising during simultaneous interpretation (SI) with those arising during simultaneous repetition revealed activation of regions known to be involved in speech perception and production, alongside a network incorporating the caudate nucleus that is known to be implicated in domain-general cognitive control. The similarity between the networks underlying bilingual language control and general executive control supports the notion that the frequently reported bilingual advantage on executive tasks stems from the day-to-day demands of language control in the multilingual brain. We examined neural correlates of the management of simultaneity by correlating brain activity during interpretation with the duration of simultaneous speaking and hearing. This analysis showed significant modulation of the putamen by the duration of simultaneity. Our findings suggest that, during SI, the caudate nucleus is implicated in the overarching selection and control of the lexico-semantic system, while the putamen is implicated in ongoing control of language output. These findings provide the first clear dissociation of specific dorsal striatum structures in polyglot language control, roles that are consistent with previously described involvement of these regions in nonlinguistic executive control.
  • Hervais-Adelman, A., Legrand, L. B., Zhan, M. Y., Tamietto, M., de Gelder, B., & Pegna, A. J. (2015). Looming sensitive cortical regions without V1 input: Evidence from a patient with bilateral cortical blindness. Frontiers in Integrative Neuroscience, 9: 51. doi:10.3389/fnint.2015.00051.

    Abstract

    Fast and automatic behavioral responses are required to avoid collision with an approaching stimulus. Accordingly, looming stimuli have been found to be highly salient and efficient attractors of attention due to the implication of potential collision and potential threat. Here, we address the question of whether looming motion is processed in the absence of any functional primary visual cortex and consequently without awareness. For this, we investigated a patient (TN) suffering from complete, bilateral damage to his primary visual cortex. Using an fMRI paradigm, we measured TN's brain activation during the presentation of looming, receding, rotating, and static point lights, of which he was unaware. When contrasted with other conditions, looming was found to produce bilateral activation of the middle temporal areas, as well as the superior temporal sulcus and inferior parietal lobe (IPL). The latter are generally thought to be involved in multisensory processing of motion in extrapersonal space, as well as attentional capture and saliency. No activity was found close to the lesioned V1 area. This demonstrates that looming motion is processed in the absence of awareness through direct subcortical projections to areas involved in multisensory processing of motion and saliency that bypass V1.
  • Hessels, R. S., Hooge, I., Snijders, T. M., & Kemner, C. (2014). Is there a limit to the superiority of individuals with ASD in visual search? Journal of Autism and Developmental Disorders, 44, 443-451. doi:10.1007/s10803-013-1886-8.

    Abstract

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an explanation, formulated in terms of load theory. We suggest that there is a limit to the superiority in visual search for individuals with ASD, related to the perceptual load of the stimuli. When perceptual load becomes so high that no additional task-(ir)relevant information can be processed, performance will be based on single stimulus identification, in which no differences between individuals with ASD and controls have been demonstrated
  • Hibar, D. P., Stein, J. L., Renteria, M. E., Arias-Vasquez, A., Desrivières, S., Jahanshad, N., Toro, R., Wittfeld, K., Abramovic, L., Andersson, M., Aribisala, B. S., Armstrong, N. J., Bernard, M., Bohlken, M. M., Boks, M. P., Bralten, J., Brown, A. A., Chakravarty, M. M., Chen, Q., Ching, C. R. K., and 267 more (2015). Common genetic variants influence human subcortical brain structures. Nature, 520, 224-229. doi:10.1038/nature14101.

    Abstract

    The highly complex structure of the human brain is strongly shaped by genetic influences. Subcortical brain regions form circuits with cortical areas to coordinate movement, learning, memory and motivation, and altered circuits can lead to abnormal behaviour and disease. To investigate how common genetic variants affect the structure of these brain regions, here we conduct genome-wide association studies of the volumes of seven subcortical regions and the intracranial volume derived from magnetic resonance images of 30,717 individuals from 50 cohorts. We identify five novel genetic variants influencing the volumes of the putamen and caudate nucleus. We also find stronger evidence for three loci with previously established influences on hippocampal volume and intracranial volume. These variants show specific volumetric effects on brain structures rather than global effects across structures. The strongest effects were found for the putamen, where a novel intergenic locus with replicable influence on volume (rs945270; P = 1.08 × 10^-33; 0.52% variance explained) showed evidence of altering the expression of the KTN1 gene in both brain and blood tissue. Variants influencing putamen volume clustered near developmental genes that regulate apoptosis, axon guidance and vesicle transport. Identification of these genetic variants provides insight into the causes of variability in human brain development, and may help to determine mechanisms of neuropsychiatric dysfunction.

  • Hilbrink, E., Gattis, M., & Levinson, S. C. (2015). Early developmental changes in the timing of turn-taking: A longitudinal study of mother-infant interaction. Frontiers in Psychology, 6: 1492. doi:10.3389/fpsyg.2015.01492.

    Abstract

    To accomplish a smooth transition in conversation from one speaker to the next, a tight coordination of interaction between speakers is required. Recent studies of adult conversation suggest that this close timing of interaction may well be a universal feature of conversation. In the present paper, we set out to assess the development of this close timing of turns in infancy in vocal exchanges between mothers and infants. Previous research has demonstrated an early sensitivity to timing in interactions (e.g. Murray & Trevarthen, 1985). In contrast, less is known about infants’ abilities to produce turns in a timely manner and existing findings are rather patchy. We conducted a longitudinal study of twelve mother-infant dyads in free-play interactions at the ages of 3, 4, 5, 9, 12 and 18 months. Based on existing work and the predictions made by the Interaction Engine Hypothesis (Levinson, 2006), we expected that infants would begin to develop the temporal properties of turn-taking early in infancy but that their timing of turns would slow down at 12 months, which is around the time when infants start to produce their first words. Findings were consistent with our predictions: Infants were relatively fast at timing their turn early in infancy but slowed down towards the end of the first year. Furthermore, the changes observed in infants’ turn-timing skills were not caused by changes in maternal timing, which remained stable across the 3-18 month period. However, the slowing down of turn-timing started somewhat earlier than predicted: at 9 months.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2016). Encouraging prediction during production facilitates subsequent comprehension: Evidence from interleaved object naming in sentence context and sentence reading. Quarterly Journal of Experimental Psychology, 69(6), 1056-1063. doi:10.1080/17470218.2015.1131309.

    Abstract

    Many studies have shown that a supportive context facilitates language comprehension. A currently influential view is that language production may support prediction in language comprehension. Experimental evidence for this, however, is relatively sparse. Here we explored whether encouraging prediction in a language production task encourages the use of predictive contexts in an interleaved comprehension task. In Experiment 1a, participants listened to the first part of a sentence and provided the final word by naming aloud a picture. The picture name was predictable or not predictable from the sentence context. Pictures were named faster when they could be predicted than when this was not the case. In Experiment 1b the same sentences, augmented by a final spill-over region, were presented in a self-paced reading task. No difference in reading times for predictive vs. non-predictive sentences was found. In Experiment 2, reading and naming trials were intermixed. In the naming task, the advantage for predictable picture names was replicated. More importantly, now reading times for the spill-over region were considerably faster for predictive vs. non-predictive sentences. We conjecture that these findings fit best with the notion that prediction in the service of language production encourages the use of predictive contexts in comprehension. Further research is required to identify the exact mechanisms by which production exerts its influence on comprehension.
  • Hintz, F., & Meyer, A. S. (2015). Prediction and production of simple mathematical equations: Evidence from anticipatory eye movements. PLoS One, 10(7): e0130766. doi:10.1371/journal.pone.0130766.

    Abstract

    The relationship between the production and the comprehension systems has recently become a topic of interest for many psycholinguists. It has been argued that these systems are tightly linked and in particular that listeners use the production system to predict upcoming content. In this study, we tested how similar production and prediction processes are in a novel version of the visual world paradigm. Dutch speaking participants (native speakers in Experiment 1; German-Dutch bilinguals in Experiment 2) listened to mathematical equations while looking at a clock face featuring the numbers 1 to 12. On alternating trials, they either heard a complete equation ("three plus eight is eleven") or they heard the first part ("three plus eight is") and had to produce the result ("eleven") themselves. Participants were encouraged to look at the relevant numbers throughout the trial. Their eye movements were recorded and analyzed. We found that the participants' eye movements in the two tasks were overall very similar. They fixated the first and second number of the equations shortly after they were mentioned, and fixated the result number well before they named it on production trials and well before the recorded speaker named it on comprehension trials. However, all fixation latencies were shorter on production than on comprehension trials. These findings suggest that the processes involved in planning to say a word and anticipating hearing a word are quite similar, but that people are more aroused or engaged when they intend to respond than when they merely listen to another person.

    Additional information

    Data availability
  • Hoedemaker, R. S., & Gordon, P. C. (2014). Embodied language comprehension: Encoding-based and goal-driven processes. Journal of Experimental Psychology: General, 143(2), 914-929. doi:10.1037/a0032348.

    Abstract

    Theories of embodied language comprehension have proposed that language is understood through perceptual simulation of the sensorimotor characteristics of its meaning. Strong support for this claim requires demonstration of encoding-based activation of sensorimotor representations that is distinct from task-related or goal-driven processes. Participants in 3 eye-tracking experiments were presented with triplets of either numbers or object and animal names. In Experiment 1, participants indicated whether the size of the referent of the middle object or animal name was in between the size of the 2 outer items. In Experiment 2, the object and animal names were encoded for an immediate recognition memory task. In Experiment 3, participants completed the same comparison task of Experiment 1 for both words and numbers. During the comparison tasks, word and number decision times showed a symbolic distance effect, such that response time was inversely related to the size difference between the items. A symbolic distance effect was also observed for animal and object encoding times in cases where encoding time likely reflected some goal-driven processes as well. When semantic size was irrelevant to the task (Experiment 2), it had no effect on word encoding times. Number encoding times showed a numerical distance priming effect: Encoding time increased with numerical difference between items. Together these results suggest that while activation of numerical magnitude representations is encoding-based as well as goal-driven, activation of size information associated with words is goal-driven and does not occur automatically during encoding. This conclusion challenges strong theories of embodied cognition which claim that language comprehension consists of activation of analog sensorimotor representations irrespective of higher level processes related to context or task-specific goals
  • Hoedemaker, R. S., & Gordon, P. C. (2014). It takes time to prime: Semantic priming in the ocular lexical decision task. Journal of Experimental Psychology: Human Perception and Performance, 40(6), 2179-2197. doi:10.1037/a0037677.

    Abstract

    Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDTs) was replaced with an eye-movement response through a sequence of 3 words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative lexical decision (LD) on each word in the triplet. In Experiment 2, LD responses were delayed until all 3 letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, but limited during text reading, as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of tau (τ), meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases in which the LD is difficult, as indicated by longer response times. Compared with the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT.
  • Hoey, E. (2015). Lapses: How people arrive at, and deal with, discontinuities in talk. Research on Language and Social Interaction, 48(4), 430-453. doi:10.1080/08351813.2015.1090116.

    Abstract

    Interaction includes moments of silence. When all participants forgo the option to speak, the silence can be called a “lapse.” This article builds on existing work on lapses and other kinds of silences (gaps, pauses, and so on) to examine how participants reach a point where lapsing is a possibility and how they orient to the lapse that subsequently develops. Drawing from a wide range of activities and settings, I will show that participants may treat lapses as (a) the relevant cessation of talk, (b) the allowable development of silence, or (c) the conspicuous absence of talk. Data are in American and British English.
  • Hoey, E. (2014). Sighing in interaction: Somatic, semiotic, and social. Research on Language and Social Interaction, 47(2), 175-200. doi:10.1080/08351813.2014.900229.

    Abstract

    Participants in interaction routinely orient to gaze, bodily comportment, and nonlexical vocalizations as salient for developing an analysis of the unfolding course of action. In this article, I address the respiratory phenomenon of sighing, the aim being to describe sighing as a situated practice that contributes to the achievement of particular actions in interaction. I report on the various actions sighs implement or construct and how their positioning and delivery informs participants’ understandings of their significance for interaction. Data are in American English.
  • Hogan-Brown, A. L., Hoedemaker, R. S., Gordon, P. C., & Losh, M. (2014). Eye-voice span during rapid automatized naming: Evidence of reduced automaticity in individuals with autism spectrum disorder and their siblings. Journal of Neurodevelopmental Disorders, 6(1): 33. doi:10.1186/1866-1955-6-33.

    Abstract

    Background: Individuals with autism spectrum disorder (ASD) and their parents demonstrate impaired performance in rapid automatized naming (RAN), a task that recruits a variety of linguistic and executive processes. Though the basic processes that contribute to RAN differences remain unclear, eye-voice relationships, as measured through eye tracking, can provide insight into cognitive and perceptual processes contributing to RAN performance. For example, in RAN, eye-voice span (EVS), the distance ahead the eyes are when articulation of a target item's label begins, is an indirect measure of automaticity of the processes underlying RAN. The primary objective of this study was to investigate automaticity in naming processes, as indexed by EVS during RAN. The secondary objective was to characterize RAN difficulties in individuals with ASD and their siblings. Methods: Participants (aged 15 – 33 years) included 21 individuals with ASD, 23 siblings of individuals with ASD, and 24 control subjects, group-matched on chronological age. Naming time, frequency of errors, and EVS were measured during a RAN task and compared across groups. Results: A stepwise pattern of RAN performance was observed, with individuals with ASD demonstrating the slowest naming across all RAN conditions, controls demonstrating the fastest naming, and siblings demonstrating intermediate performance. Individuals with ASD exhibited smaller EVSs than controls on all RAN conditions, and siblings exhibited smaller EVSs during number naming (the most highly automatized type of naming). EVSs were correlated with naming times in controls only, and only in the more automatized conditions. Conclusions: These results suggest that reduced automaticity in the component processes of RAN may underpin differences in individuals with ASD and their siblings. These findings also provide further support that RAN abilities are impacted by genetic liability to ASD. This study has important implications for understanding the underlying skills contributing to language-related deficits in ASD.
  • Hogekamp, Z., Blomster, J. B., Bursalioglu, A., Calin, M. C., Çetinçelik, M., Haastrup, L., & Van den Berg, Y. H. M. (2016). Examining the Importance of the Teachers' Emotional Support for Students' Social Inclusion Using the One-with-Many Design. Frontiers in Psychology, 7: 1014. doi:10.3389/fpsyg.2016.01014.

    Abstract

    The importance of high quality teacher–student relationships for students' well-being has long been documented. Nonetheless, most studies focus either on teachers' perceptions of provided support or on students' perceptions of support. The degree to which teachers and students agree is often neither measured nor taken into account. In the current study, we will therefore use a dyadic analysis strategy called the one-with-many design. This design takes into account the nestedness of the data and looks at the importance of reciprocity when examining the influence of teacher support for students' academic and social functioning. Two samples of teachers and their students in Grade 4 (age 9–10 years) have been recruited from primary schools in Turkey and Romania. By using the one-with-many design, we can first measure to what degree teachers' perceptions of support are in line with students' experiences. Second, this level of consensus is taken into account when examining the influence of teacher support for students' social well-being and academic functioning.
  • Holler, J., Kendrick, K. H., Casillas, M., & Levinson, S. C. (2015). Editorial: Turn-taking in human communicative interaction. Frontiers in Psychology, 6: 1919. doi:10.3389/fpsyg.2015.01919.
  • Holler, J., Kokal, I., Toni, I., Hagoort, P., Kelly, S. D., & Ozyurek, A. (2015). Eye’m talking to you: Speakers’ gaze direction modulates co-speech gesture processing in the right MTG. Social Cognitive & Affective Neuroscience, 10, 255-261. doi:10.1093/scan/nsu047.

    Abstract

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture. Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Ozyurek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.
  • Holler, J., & Kendrick, K. H. (2015). Unaddressed participants’ gaze in multi-person interaction: Optimizing recipiency. Frontiers in Psychology, 6: 98. doi:10.3389/fpsyg.2015.00098.

    Abstract

    One of the most intriguing aspects of human communication is its turn-taking system. It requires the ability to process on-going turns at talk while planning the next, and to launch this next turn without considerable overlap or delay. Recent research has investigated the eye movements of observers of dialogues to gain insight into how we process turns at talk. More specifically, this research has focused on the extent to which we are able to anticipate the end of current and the beginning of next turns. At the same time, there has been a call for shifting experimental paradigms exploring social-cognitive processes away from passive observation towards online processing. Here, we present research that responds to this call by situating state-of-the-art technology for tracking interlocutors’ eye movements within spontaneous, face-to-face conversation. Each conversation involved three native speakers of English. The analysis focused on question-response sequences involving just two of those participants, thus rendering the third momentarily unaddressed. Temporal analyses of the unaddressed participants’ gaze shifts from current to next speaker revealed that unaddressed participants are able to anticipate next turns, and moreover, that they often shift their gaze towards the next speaker before the current turn ends. However, an analysis of the complex structure of turns at talk revealed that the planning of these gaze shifts virtually coincides with the points at which the turns first become recognizable as possibly complete. We argue that the timing of these eye movements is governed by an organizational principle whereby unaddressed participants shift their gaze at a point that appears interactionally most optimal: It provides unaddressed participants with access to much of the visual, bodily behavior that accompanies both the current speaker’s and the next speaker’s turn, and it allows them to display recipiency with regard to both speakers’ turns.
  • Hoogman, M., Guadalupe, T., Zwiers, M. P., Klarenbeek, P., Francks, C., & Fisher, S. E. (2014). Assessing the effects of common variation in the FOXP2 gene on human brain structure. Frontiers in Human Neuroscience, 8: 473. doi:10.3389/fnhum.2014.00473.

    Abstract

    The FOXP2 transcription factor is one of the most well-known genes to have been implicated in developmental speech and language disorders. Rare mutations disrupting the function of this gene have been described in different families and cases. In a large three-generation family carrying a missense mutation, neuroimaging studies revealed significant effects on brain structure and function, most notably in the inferior frontal gyrus, caudate nucleus and cerebellum. After the identification of rare disruptive FOXP2 variants impacting on brain structure, several reports proposed that common variants at this locus may also have detectable effects on the brain, extending beyond disorder into normal phenotypic variation. These neuroimaging genetics studies used groups of between 14 and 96 participants. The current study assessed effects of common FOXP2 variants on neuroanatomy using voxel-based morphometry and volumetric techniques in a sample of >1300 people from the general population. In a first targeted stage we analyzed single nucleotide polymorphisms (SNPs) claimed to have effects in prior smaller studies (rs2253478, rs12533005, rs2396753, rs6980093, rs7784315, rs17137124, rs10230558, rs7782412, rs1456031), beginning with regions proposed in the relevant papers, then assessing impact across the entire brain. In the second gene-wide stage, we tested all common FOXP2 variation, focusing on volumetry of those regions most strongly implicated from analyses of rare disruptive mutations. Despite using a sample that is more than ten times that used for prior studies of common FOXP2 variation, we found no evidence for effects of SNPs on variability in neuroanatomy in the general population. Thus, the impact of this gene on brain structure may be largely limited to extreme cases of rare disruptive alleles. Alternatively, effects of common variants at this gene exist but are too subtle to be detected with standard volumetric techniques.
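    As a rough illustration of the kind of targeted analysis described in this abstract (a common SNP tested against a regional gray matter volume while adjusting for covariates), a minimal Python sketch might look as follows; the file and column names are hypothetical assumptions, not the authors' actual pipeline.

    ```python
    # Minimal sketch, assuming a per-participant table with genotype and volume columns.
    # 'rs6980093' is one of the candidate SNPs named in the abstract; everything else
    # (file name, covariate names) is illustrative.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("brain_and_genotypes.csv")  # hypothetical data file

    # Additive genetic model: genotype coded as 0/1/2 minor-allele counts.
    model = smf.ols(
        "caudate_volume ~ rs6980093 + age + sex + total_brain_volume",
        data=df,
    ).fit()
    print(model.summary())
    ```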
  • Horschig, J. M., Smolders, R., Bonnefond, M., Schoffelen, J.-M., Van den Munckhof, P., Schuurman, P. R., Cools, R., Denys, D., & Jensen, O. (2015). Directed communication between nucleus accumbens and neocortex in humans is differentially supported by synchronization in the theta and alpha band. PLoS One, 10(9): e0138685. doi:10.1371/journal.pone.0138685.

    Abstract

    Here, we report evidence for oscillatory bi-directional interactions between the nucleus accumbens and the neocortex in humans. Six patients performed a demanding covert visual attention task while we simultaneously recorded brain activity from deep-brain electrodes implanted in the nucleus accumbens and the surface electroencephalogram (EEG). Both theta and alpha oscillations were strongly coherent with the frontal and parietal EEG during the task. Theta-band coherence increased during processing of the visual stimuli. Granger causality analysis revealed that the nucleus accumbens was communicating with the neocortex primarily in the theta-band, while the cortex was communicating with the nucleus accumbens in the alpha-band. These data are consistent with a model in which theta- and alpha-band oscillations serve dissociable roles: Prior to stimulus processing, the cortex might suppress ongoing processing in the nucleus accumbens by modulating alpha-band activity. Subsequently, upon stimulus presentation, theta oscillations might facilitate the active exchange of stimulus information from the nucleus accumbens to the cortex.
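    The coherence part of such an analysis can be sketched with standard tools; the snippet below is a simplified illustration with placeholder signals and an assumed sampling rate, not the authors' frequency-resolved Granger-causality pipeline.

    ```python
    # Band-limited coherence between a (placeholder) nucleus accumbens trace and a
    # (placeholder) frontal EEG channel, averaged over the theta and alpha bands.
    import numpy as np
    from scipy.signal import coherence

    fs = 1000.0                                   # sampling rate in Hz (assumed)
    rng = np.random.default_rng(0)
    nacc = rng.standard_normal(60 * int(fs))      # stand-in for the deep-brain recording
    eeg = rng.standard_normal(60 * int(fs))       # stand-in for the cortical EEG

    f, coh = coherence(nacc, eeg, fs=fs, nperseg=2048)
    theta = coh[(f >= 4) & (f < 8)].mean()        # theta band: 4-8 Hz
    alpha = coh[(f >= 8) & (f <= 12)].mean()      # alpha band: 8-12 Hz
    print(f"theta coherence: {theta:.3f}, alpha coherence: {alpha:.3f}")
    ```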
  • Hoymann, G. (2014). [Review of the book Bridging the language gap: Approaches to Herero verbal interaction as development practice in Namibia by Rose Marie Beck]. Journal of African Languages and Linguistics, 35(1), 130-133. doi:10.1515/jall-2014-0004.
  • Li, W., Li, X., Huang, L., Kong, X., Yang, W., Wei, D., Li, J., Cheng, H., Zhang, Q., Qiu, J., & Liu, J. (2015). Brain structure links trait creativity to openness to experience. Social Cognitive and Affective Neuroscience, 10(2), 191-198. doi:10.1093/scan/nsu041.

    Abstract

    Creativity is crucial to the progression of human civilization and has led to important scientific discoveries. In particular, individuals are more likely to make scientific discoveries if they possess certain personality traits of creativity (trait creativity), including imagination, curiosity, challenge and risk-taking. This study used voxel-based morphometry to identify the brain regions underlying individual differences in trait creativity, as measured by the Williams creativity aptitude test, in a large sample (n = 246). We found that creative individuals had higher gray matter volume in the right posterior middle temporal gyrus (pMTG), which might be related to semantic processing during novelty seeking (e.g. novel association, conceptual integration and metaphor understanding). More importantly, although basic personality factors such as openness to experience, extroversion, conscientiousness and agreeableness (as measured by the NEO Personality Inventory) all contributed to trait creativity, only openness to experience mediated the association between the right pMTG volume and trait creativity. Taken together, our results suggest that the basic personality trait of openness might play an important role in shaping an individual’s trait creativity.
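    The mediation result (openness carrying part of the association between right pMTG volume and trait creativity) can be illustrated with a simple Baron-and-Kenny-style sketch; the variable names below are hypothetical and this is not the authors' exact procedure.

    ```python
    # Toy mediation sketch: total, direct, and indirect effects estimated from three
    # ordinary least-squares regressions. Column names are illustrative assumptions.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("creativity_sample.csv")

    total = smf.ols("creativity ~ pmtg_volume", data=df).fit()                 # path c
    a_path = smf.ols("openness ~ pmtg_volume", data=df).fit()                  # path a
    b_cprime = smf.ols("creativity ~ pmtg_volume + openness", data=df).fit()   # paths b, c'

    print("total effect c:    ", total.params["pmtg_volume"])
    print("direct effect c':  ", b_cprime.params["pmtg_volume"])
    print("indirect effect ab:", a_path.params["pmtg_volume"] * b_cprime.params["openness"])
    ```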
  • Huang, L., Zhou, G., Liu, Z., Dang, X., Yang, Z., Kong, X., Wang, X., Song, Y., Zhen, Z., & Liu, J. (2016). A Multi-Atlas Labeling Approach for Identifying Subject-Specific Functional Regions of Interest. PLoS One, 11(1): e0146868. doi:10.1371/journal.pone.0146868.

    Abstract

    The functional region of interest (fROI) approach has increasingly become a favored methodology in functional magnetic resonance imaging (fMRI) because it can circumvent inter-subject anatomical and functional variability, and thus increase the sensitivity and functional resolution of fMRI analyses. The standard fROI method requires human experts to meticulously examine and identify subject-specific fROIs within activation clusters. This process is time-consuming and heavily dependent on experts’ knowledge. Several algorithmic approaches have been proposed for identifying subject-specific fROIs; however, these approaches cannot easily incorporate prior knowledge of inter-subject variability. In the present study, we improved the multi-atlas labeling approach for defining subject-specific fROIs. In particular, we used a classifier-based atlas-encoding scheme and an atlas selection procedure to account for the large spatial variability across subjects. Using a functional atlas database for face recognition, we showed that with these two features, our approach efficiently circumvented inter-subject anatomical and functional variability and thus improved labeling accuracy. Moreover, in comparison with a single-atlas approach, our multi-atlas labeling approach showed better performance in identifying subject-specific fROIs.

    Additional information

    S1_Fig.tif S2_Fig.tif
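    The label-fusion idea behind multi-atlas approaches can be sketched in a few lines; the toy function below shows only atlas selection by similarity plus majority voting, and stands in for, rather than reproduces, the classifier-based atlas-encoding scheme described in the abstract.

    ```python
    # Toy multi-atlas labeling: pick the atlases most similar to the subject's own
    # activation map, then fuse their labels by majority vote.
    import numpy as np

    def label_froi(subject_map, atlases, n_select=5):
        """subject_map: 1D activation map; atlases: list of (activation_map, label_mask)."""
        # Atlas selection: keep the atlases whose maps correlate best with the subject's.
        sims = [np.corrcoef(subject_map, a_map)[0, 1] for a_map, _ in atlases]
        best = np.argsort(sims)[-n_select:]
        # Label fusion: majority vote over the selected atlases' binary label masks.
        votes = np.mean([atlases[i][1] for i in best], axis=0)
        return votes > 0.5
    ```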
  • Hubers, F., Snijders, T. M., & De Hoop, H. (2016). How the brain processes violations of the grammatical norm: An fMRI study. Brain and Language, 163, 22-31. doi:10.1016/j.bandl.2016.08.006.

    Abstract

    Native speakers of Dutch do not always adhere to prescriptive grammar rules in their daily speech. These grammatical norm violations can elicit emotional reactions in language purists, mostly highly educated people, who claim that for them these constructions are truly ungrammatical. However, linguists generally assume that grammatical norm violations are in fact truly grammatical, especially when they occur frequently in a language. In an fMRI study we investigated the processing of grammatical norm violations in the brains of language purists, and compared them with truly grammatical and truly ungrammatical sentences. Grammatical norm violations were found to be unique in that their processing resembled not only the processing of truly grammatical sentences (in left medial Superior Frontal Gyrus and Angular Gyrus), but also that of truly ungrammatical sentences (in Inferior Frontal Gyrus), despite what theories of grammar would usually lead us to believe.
  • Huettig, F., & Altmann, G. T. M. (2005). Word meaning and the control of eye fixation: Semantic competitor effects and the visual world paradigm. Cognition, 96(1), B23-B32. doi:10.1016/j.cognition.2004.10.003.

    Abstract

    When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84–107; Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632–1634]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word ‘piano’ when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c) both the piano and the trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word ‘piano’ unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.
  • Huettig, F., & Brouwer, S. (2015). Delayed anticipatory spoken language processing in adults with dyslexia - Evidence from eye-tracking. Dyslexia, 21(2), 97-122. doi:10.1002/dys.1497.

    Abstract

    It is now well-established that anticipation of up-coming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here we investigated whether anticipatory spoken language processing is related to individuals’ word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., "Kijk naar de(COM) afgebeelde piano(COM)", look at the displayed piano) while viewing four objects. Articles (Dutch “het” or “de”) were gender-marked such that the article agreed in gender only with the target and thus participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing.
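    The anticipation measure and its correlation with word reading scores could in principle be computed along the following lines; the data files and column names are purely illustrative assumptions, not the authors' materials.

    ```python
    # Proportion of target fixations between article onset and noun onset, per
    # participant, correlated with a word reading score.
    import pandas as pd
    from scipy.stats import pearsonr

    samples = pd.read_csv("eyetracking_samples.csv")          # one row per gaze sample
    window = samples[(samples.time_ms >= samples.article_onset_ms)
                     & (samples.time_ms < samples.noun_onset_ms)]
    anticipation = window.groupby("participant")["on_target"].mean()

    scores = pd.read_csv("reading_scores.csv", index_col="participant")["word_reading"]
    r, p = pearsonr(anticipation, scores.loc[anticipation.index])
    print(f"r = {r:.2f}, p = {p:.3f}")
    ```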
  • Huettig, F., & Mishra, R. K. (2014). How literacy acquisition affects the illiterate mind - A critical examination of theories and evidence. Language and Linguistics Compass, 8(10), 401-427. doi:10.1111/lnc3.12092.

    Abstract

    At present, more than one-fifth of humanity is unable to read and write. We critically examine experimental evidence and theories of how (il)literacy affects the human mind. In our discussion we show that literacy has significant cognitive consequences that go beyond the processing of written words and sentences. Thus, cultural inventions such as reading shape general cognitive processing in non-trivial ways. We suggest that this has important implications for educational policy and guidance as well as research into cognitive processing and brain functioning.
  • Huettig, F., & Janse, E. (2016). Individual differences in working memory and processing speed predict anticipatory spoken language processing in the visual world. Language, Cognition and Neuroscience, 31(1), 80-93. doi:10.1080/23273798.2015.1047459.

    Abstract

    It is now well established that anticipation of up-coming input is a key characteristic of spoken language comprehension. Several mechanisms of predictive language processing have been proposed. The possible influence of mediating factors such as working memory and processing speed, however, has hardly been explored. We sought to find evidence for such an influence using an individual differences approach. 105 participants from 32 to 77 years of age received spoken instructions (e.g., "Kijk naar de(COM) afgebeelde piano(COM)", look at the displayed piano) while viewing four objects. Articles (Dutch “het” or “de”) were gender-marked such that the article agreed in gender only with the target. Participants could thus use gender information from the article to predict the upcoming target object. The average participant anticipated the target objects well in advance of the critical noun. Multiple regression analyses showed that working memory and processing speed had the largest mediating effects: Enhanced working memory abilities and faster processing speed supported anticipatory spoken language processing. These findings suggest that models of predictive language processing must take mediating factors such as working memory and processing speed into account. More generally, our results are consistent with the notion that working memory grounds language in space and time, linking linguistic and visual-spatial representations.
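    A minimal sketch of such an individual-differences regression, assuming hypothetical variable names rather than the authors' dataset, is shown below.

    ```python
    # Do working memory and processing speed predict the size of each participant's
    # anticipation effect? Column names are illustrative assumptions.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("individual_differences.csv")
    fit = smf.ols("anticipation_effect ~ working_memory + processing_speed + age",
                  data=df).fit()
    print(fit.summary())
    ```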
  • Huettig, F. (2015). Four central questions about prediction in language processing. Brain Research, 1626, 118-135. doi:10.1016/j.brainres.2015.02.014.

    Abstract

    The notion that prediction is a fundamental principle of human information processing has been en vogue over recent years. The investigation of language processing may be particularly illuminating for testing this claim. Linguists traditionally have argued prediction plays only a minor role during language understanding because of the vast possibilities available to the language user as each word is encountered. In the present review I consider four central questions of anticipatory language processing: Why (i.e. what is the function of prediction in language processing)? What (i.e. what are the cues used to predict up-coming linguistic information and what type of representations are predicted)? How (what mechanisms are involved in predictive language processing and what is the role of possible mediating factors such as working memory)? When (i.e. do individuals always predict up-coming input during language processing)? I propose that prediction occurs via a set of diverse PACS (production-, association-, combinatorial-, and simulation-based prediction) mechanisms which are minimally required for a comprehensive account of predictive language processing. Models of anticipatory language processing must be revised to take multiple mechanisms, mediating factors, and situational context into account. Finally, I conjecture that the evidence considered here is consistent with the notion that prediction is an important aspect but not a fundamental principle of language processing.
  • Huettig, F., & Mani, N. (2016). Is prediction necessary to understand language? Probably not. Language, Cognition and Neuroscience, 31(1), 19-31. doi:10.1080/23273798.2015.1072223.

    Abstract

    Many psycholinguistic experiments suggest that prediction is an important characteristic of language processing. Some recent theoretical accounts in the cognitive sciences (e.g., Clark, 2013; Friston, 2010) and psycholinguistics (e.g., Dell & Chang, 2014) appear to suggest that prediction is even necessary to understand language. In the present opinion paper we evaluate this proposal. We first critically discuss several arguments that may appear to be in line with the notion that prediction is necessary for language processing. These arguments include that prediction provides a unified theoretical principle of the human mind and that it pervades cortical function. We discuss whether evidence of human abilities to detect statistical regularities is necessarily evidence for predictive processing and evaluate suggestions that prediction is necessary for language learning. Five arguments are then presented that question the claim that all language processing is predictive in nature. We point out that not all language users appear to predict language and that suboptimal input often makes prediction very challenging. Prediction, moreover, is strongly context-dependent and impeded by resource limitations. We also argue that it may be problematic that most experimental evidence for predictive language processing comes from 'prediction-encouraging' experimental set-ups. Finally, we discuss possible ways that may lead to a further resolution of this debate. We conclude that languages can be learned and understood in the absence of prediction. Claims that all language processing is predictive in nature are premature.
  • Hugh-Jones, D., Verweij, K. J. H., St Pourcain, B., & Abdellaoui, A. (2016). Assortative mating on educational attainment leads to genetic spousal resemblance for causal alleles. Intelligence, 59, 103-108. doi:10.1016/j.intell.2016.08.005.

    Abstract

    We examined whether assortative mating for educational attainment (“like marries like”) can be detected in the genomes of ~ 1600 UK spouse pairs of European descent. Assortative mating on heritable traits like educational attainment increases the genetic variance and heritability of the trait in the population, which may increase social inequalities. We test for genetic assortative mating in the UK on educational attainment, a phenotype that is indicative of socio-economic status and has shown substantial levels of assortative mating. We use genome-wide allelic effect sizes from a large genome-wide association study on educational attainment (N ~ 300 k) to create polygenic scores that are predictive of educational attainment in our independent sample (r = 0.23, p < 2 × 10⁻¹⁶). The polygenic scores significantly predict partners' educational outcome (r = 0.14, p = 4 × 10⁻⁸ and r = 0.19, p = 2 × 10⁻¹⁴, for prediction from males to females and vice versa, respectively), and are themselves significantly correlated between spouses (r = 0.11, p = 7 × 10⁻⁶). Our findings provide molecular genetic evidence for genetic assortative mating on education in the UK.
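    The underlying polygenic-score logic (a weighted sum of allele counts per person, then a spousal correlation) can be sketched as follows; the genotypes and GWAS weights below are random placeholders, not real data.

    ```python
    # Polygenic scores as genotype-weighted sums, then the correlation between spouses.
    import numpy as np
    from scipy.stats import pearsonr

    def polygenic_score(genotypes, weights):
        """genotypes: (n_people, n_snps) minor-allele counts; weights: GWAS effect sizes."""
        return genotypes @ weights

    rng = np.random.default_rng(1)
    weights = rng.normal(0.0, 0.01, size=500)          # placeholder effect sizes
    partners_a = rng.integers(0, 3, size=(1600, 500))  # placeholder genotypes, one spouse
    partners_b = rng.integers(0, 3, size=(1600, 500))  # placeholder genotypes, the other

    r, p = pearsonr(polygenic_score(partners_a, weights), polygenic_score(partners_b, weights))
    print(f"spousal polygenic-score correlation: r = {r:.2f} (p = {p:.2g})")
    ```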
  • Hulten, A., Karvonen, L., Laine, M., & Salmelin, R. (2014). Producing speech with a newly learned morphosyntax and vocabulary: An MEG study. Journal of Cognitive Neuroscience, 26(8), 1721-1735. doi:10.1162/jocn_a_00558.
  • Humphries, S., Holler, J., Crawford, T. J., Herrera, E., & Poliakoff, E. (2016). A third-person perspective on co-speech action gestures in Parkinson’s disease. Cortex, 78, 44-54. doi:10.1016/j.cortex.2016.02.009.

    Abstract

    A combination of impaired motor and cognitive function in Parkinson’s disease (PD) can impact on language and communication, with patients exhibiting a particular difficulty processing action verbs. Co-speech gestures embody a link between action and language and contribute significantly to communication in healthy people. Here, we investigated how co-speech gestures depicting actions are affected in PD, in particular with respect to the visual perspective, or viewpoint, they depict. Gestures are closely related to mental imagery and motor simulations, but people with PD may be impaired in the way they simulate actions from a first-person perspective and may compensate for this by relying more on third-person visual features. We analysed the action-depicting gestures produced by mild-moderate PD patients and age-matched controls on an action description task and examined the relationship between gesture viewpoint, action naming, and performance on an action observation task (weight judgement). Healthy controls produced the majority of their action gestures from a first-person perspective, whereas PD patients produced a greater proportion of gestures from a third-person perspective. We propose that this reflects a compensatory reliance on third-person visual features in the simulation of actions in PD. Performance was also impaired in action naming and weight judgement, although this was unrelated to gesture viewpoint. Our findings provide a more comprehensive understanding of how action-language impairments in PD affect action communication and of the cognitive underpinnings of this impairment, and they elucidate the role of action simulation in gesture production.
  • Hwang, S.-O., Tomita, N., Morgan, H., Ergin, R., İlkbaşaran, D., Seegers, S., Lepic, R., & Padden, C. (2016). Of the body and the hands: patterned iconicity for semantic categories. Language and Cognition, 9(4), 573-602. doi:10.1017/langcog.2016.28.

    Abstract

    This paper examines how gesturers and signers use their bodies to express concepts such as instrumentality and humanness. Comparing across eight sign languages (American, Japanese, German, Israeli, and Kenyan Sign Languages, Ha Noi Sign Language of Vietnam, Central Taurus Sign Language of Turkey, and Al-Sayyid Bedouin Sign Language of Israel) and the gestures of American non-signers, we find recurring patterns for naming entities in three semantic categories (tools, animals, and fruits & vegetables). These recurring patterns are captured in a classification system that identifies iconic strategies based on how the body is used together with the hands. Across all groups, tools are named with manipulation forms, where the head and torso represent those of a human agent. Animals tend to be identified with personification forms, where the body serves as a map for a comparable non-human body. Fruits & vegetables tend to be identified with object forms, where the hands act independently from the rest of the body to represent static features of the referent. We argue that these iconic patterns are rooted in using the body for communication, and provide a basis for understanding how meaningful communication emerges quickly in gesture and persists in emergent and established sign languages.
  • Iliadis, S. I., Sylvén, S., Hellgren, C., Olivier, J. D., Schijven, D., Comasco, E., Chrousos, G. P., Sundström Poromaa, I., & Skalkidou, A. (2016). Mid-pregnancy corticotropin-releasing hormone levels in association with postpartum depressive symptoms. Depression and Anxiety, 33(11), 1023-1030. doi:10.1002/da.22529.

    Abstract

    Background: Peripartum depression is a common cause of pregnancy- and postpartum-related morbidity. The production of corticotropin-releasing hormone (CRH) from the placenta alters the profile of hypothalamus–pituitary–adrenal axis hormones and may be associated with postpartum depression. The purpose of this study was to assess, in nondepressed pregnant women, the possible association between CRH levels in pregnancy and depressive symptoms postpartum. Methods: A questionnaire containing demographic data and the Edinburgh Postnatal Depression Scale (EPDS) was filled in at gestational weeks 17 and 32, and at 6 weeks postpartum. Blood samples were collected in week 17 for assessment of CRH. A logistic regression model was constructed, using postpartum EPDS score as the dependent variable and log-transformed CRH levels as the independent variable. Confounding factors were included in the model. Subanalyses after exclusion of study subjects with preterm birth, newborns small for gestational age (SGA), and women on corticosteroids were performed. Results: Five hundred thirty-five women without depressive symptoms during pregnancy were included. Logistic regression showed an association between high CRH levels in gestational week 17 and postpartum depressive symptoms, before and after controlling for several confounders (unadjusted OR = 1.11, 95% CI 1.01–1.22; adjusted OR = 1.13, 95% CI 1.02–1.26; per 0.1 unit increase in log CRH). Exclusion of women with preterm birth and newborns SGA as well as women who used inhalation corticosteroids during pregnancy did not alter the results. Conclusions: This study suggests an association between high CRH levels in gestational week 17 and the development of postpartum depressive symptoms, among women without depressive symptoms during pregnancy.
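    The model described in the Methods corresponds to a logistic regression of postpartum depressive symptoms on log-transformed CRH plus confounders; a hedged Python sketch with illustrative column names follows.

    ```python
    # Logistic regression sketch: binary postpartum EPDS outcome regressed on
    # log-transformed mid-pregnancy CRH and assumed confounders.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("crh_cohort.csv")                 # hypothetical cohort table
    df["log_crh"] = np.log(df["crh_week17"])

    fit = smf.logit("epds_high_pp6w ~ log_crh + age + bmi + parity", data=df).fit()
    print(np.exp(fit.params))                          # odds ratios per unit increase in log CRH
    ```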
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken [The neural architecture of language: Which brain regions are involved in speaking]. Neuropraxis, 2(6), 230-237.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
