Publications

  • Francisco, A. A., Groen, M. A., Jesse, A., & McQueen, J. M. (2017). Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability. Learning and Individual Differences, 54, 60-72. doi:10.1016/j.lindif.2017.01.003.

    Abstract

    The aim of this study was to clarify whether audiovisual processing accounted for variance in reading and reading-related abilities, beyond the effect of a set of measures typically associated with individual differences in both reading and audiovisual processing. Testing adults with and without a diagnosis of dyslexia, we showed that—across all participants, and after accounting for variance in cognitive abilities—audiovisual temporal sensitivity contributed uniquely to variance in reading errors. This is consistent with previous studies demonstrating an audiovisual deficit in dyslexia. Additionally, we showed that speechreading (identification of speech based on visual cues from the talking face alone) was a unique contributor to variance in phonological awareness in dyslexic readers only: those who scored higher on speechreading scored lower on phonological awareness. This suggests a greater reliance on visual speech as a compensatory mechanism when processing auditory speech is problematic. A secondary aim of this study was to better understand the nature of dyslexia. The finding that a sub-group of dyslexic readers scored low on phonological awareness and high on speechreading is consistent with a hybrid perspective of dyslexia: There are multiple possible pathways to reading impairment, which may translate into multiple profiles of dyslexia.
  • Francisco, A. A., Jesse, A., Groen, M. A., & McQueen, J. M. (2017). A general audiovisual temporal processing deficit in adult readers with dyslexia. Journal of Speech, Language, and Hearing Research, 60, 144-158. doi:10.1044/2016_JSLHR-H-15-0375.

    Abstract

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. Results: Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories. Conclusions: The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. But the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.
  • Francisco, A. A., Takashima, A., McQueen, J. M., Van den Bunt, M., Jesse, A., & Groen, M. A. (2018). Adult dyslexic readers benefit less from visual input during audiovisual speech processing: fMRI evidence. Neuropsychologia, 117, 454-471. doi:10.1016/j.neuropsychologia.2018.07.009.

    Abstract

    The aim of the present fMRI study was to investigate whether typical and dyslexic adult readers differed in the neural correlates of audiovisual speech processing. We tested for Blood Oxygen-Level Dependent (BOLD) activity differences between these two groups in a 1-back task, as they processed written (word, illegal consonant strings) and spoken (auditory, visual and audiovisual) stimuli. When processing written stimuli, dyslexic readers showed reduced activity in the supramarginal gyrus, a region suggested to play an important role in phonological processing, but only when they processed strings of consonants, not when they read words. During the speech perception tasks, dyslexic readers were only slower than typical readers in their behavioral responses in the visual speech condition. Additionally, dyslexic readers presented reduced neural activation in the auditory, the visual, and the audiovisual speech conditions. The groups also differed in terms of superadditivity, with dyslexic readers showing decreased neural activation in the regions of interest. An additional analysis focusing on vision-related processing during the audiovisual condition showed diminished activation for the dyslexic readers in a fusiform gyrus cluster. Our results thus suggest that there are differences in audiovisual speech processing between dyslexic and normal readers. These differences might be explained by difficulties in processing the unisensory components of audiovisual speech, more specifically, dyslexic readers may benefit less from visual information during audiovisual speech processing than typical readers. Given that visual speech processing supports the development of phonological skills fundamental in reading, differences in processing of visual speech could contribute to differences in reading ability between typical and dyslexic readers.
  • Frank, M. C., Bergelson, E., Bergmann, C., Cristia, A., Floccia, C., Gervain, J., Hamlin, J. K., Hannon, E. E., Kline, M., Levelt, C., Lew-Williams, C., Nazzi, T., Panneton, R., Rabagliati, H., Soderstrom, M., Sullivan, J., Waxman, S., & Yurovsky, D. (2017). A collaborative approach to infant research: Promoting reproducibility, best practices, and theory-building. Infancy, 22(4), 421-435. doi:10.1111/infa.12182.

    Abstract

    The ideal of scientific progress is that we accumulate measurements and integrate these into theory, but recent discussion of replicability issues has cast doubt on whether psychological research conforms to this model. Developmental research—especially with infant participants—also has discipline-specific replicability challenges, including small samples and limited measurement methods. Inspired by collaborative replication efforts in cognitive and social psychology, we describe a proposal for assessing and promoting replicability in infancy research: large-scale, multi-laboratory replication efforts aiming for a more precise understanding of key developmental phenomena. The ManyBabies project, our instantiation of this proposal, will not only help us estimate how robust and replicable these phenomena are, but also gain new theoretical insights into how they vary across ages, linguistic communities, and measurement methods. This project has the potential for a variety of positive outcomes, including less-biased estimates of theoretically important effects, estimates of variability that can be used for later study planning, and a series of best-practices blueprints for future infancy research.
  • Frank, S. L., & Yang, J. (2018). Lexical representation explains cortical entrainment during speech comprehension. PLoS One, 13(5): e0197304. doi:10.1371/journal.pone.0197304.

    Abstract

    Results from a recent neuroimaging study on spoken sentence comprehension have been interpreted as evidence for cortical entrainment to hierarchical syntactic structure. We present a simple computational model that predicts the power spectra from this study, even though the model's linguistic knowledge is restricted to the lexical level, and word-level representations are not combined into higher-level units (phrases or sentences). Hence, the cortical entrainment results can also be explained from the lexical properties of the stimuli, without recourse to hierarchical syntax.
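    The core claim, that spectral peaks can arise from word-level properties alone, can be illustrated with a minimal sketch (illustrative only, not the authors' model; the feature values and rates below are hypothetical): a purely lexical feature that repeats across same-length sentences already yields a peak at the sentence rate, with no phrase or sentence structure ever being built.

```python
# Minimal illustration (not the paper's model): a word-by-word lexical
# feature that happens to repeat every four words -- e.g. because all
# stimuli are four-word sentences with similar word classes in each
# position -- already produces a spectral peak at the sentence rate.
import numpy as np

word_rate = 4.0                                   # words per second (hypothetical)
n_sentences = 50
lexical_feature = np.array([0.2, 1.0, 0.4, 0.9])  # one value per word position
signal = np.tile(lexical_feature, n_sentences)    # one sample per word

freqs = np.fft.rfftfreq(signal.size, d=1.0 / word_rate)
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2

for f, p in zip(freqs, power):
    if p > 1e-9:
        # peaks appear at 1 Hz (the sentence rate) and its harmonic at 2 Hz,
        # even though no hierarchical structure was involved
        print(f"peak at {f:.2f} Hz, power {p:.1f}")
```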
  • Frank, S. L., & Willems, R. M. (2017). Word predictability and semantic similarity show distinct patterns of brain activity during language comprehension. Language, Cognition and Neuroscience, 32(9), 1192-1203. doi:10.1080/23273798.2017.1323109.

    Abstract

    We investigate the effects of two types of relationship between the words of a sentence or text – predictability and semantic similarity – by reanalysing electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data from studies in which participants comprehend naturalistic stimuli. Each content word's predictability given previous words is quantified by a probabilistic language model, and semantic similarity to previous words is quantified by a distributional semantics model. Brain activity time-locked to each word is regressed on the two model-derived measures. Results show that predictability and semantic similarity have near identical N400 effects but are dissociated in the fMRI data, with word predictability related to activity in, among others, the visual word-form area, and semantic similarity related to activity in areas associated with the semantic network. This indicates that both predictability and similarity play a role during natural language comprehension and modulate distinct cortical regions.
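    As a rough sketch of the regression logic described above (the numbers below are simulated, not the study's data, and no EEG/fMRI preprocessing is shown): each content word contributes one row with a surprisal value and a context-similarity value, and the word-locked response is regressed on both predictors simultaneously.

```python
# Minimal sketch of the analysis logic (hypothetical numbers, not the
# authors' data): each word gets two model-derived predictors --
# surprisal from a language model and cosine similarity to preceding
# words from a distributional model -- and the word-locked brain
# response is regressed on both at once.
import numpy as np

rng = np.random.default_rng(0)
n_words = 200
surprisal  = rng.gamma(shape=2.0, scale=1.5, size=n_words)      # -log P(word | context)
similarity = rng.uniform(0.0, 1.0, size=n_words)                # cosine to prior context
# simulated single-channel N400-like amplitude with both effects present
amplitude = -0.8 * surprisal - 0.5 * similarity + rng.normal(0, 1, n_words)

X = np.column_stack([np.ones(n_words), surprisal, similarity])  # intercept + 2 predictors
beta, *_ = np.linalg.lstsq(X, amplitude, rcond=None)
print(f"intercept={beta[0]:.2f}, b_surprisal={beta[1]:.2f}, b_similarity={beta[2]:.2f}")
```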
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Eisner, F., & Hagoort, P. (2017). Individual variability as a window on production-perception interactions in speech motor control. The Journal of the Acoustical Society of America, 142(4), 2007-2018. doi:10.1121/1.5006899.

    Abstract

    An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: if speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets (that is, targets with less within-phoneme variability and greater between-phoneme distances), confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
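    The two production measures named in this abstract can be sketched as follows (the formant values are invented for illustration and are not the study's data): within-phoneme variability as the spread of repeated productions of a vowel around its centroid in F1/F2 space, and between-phoneme distance as the separation between vowel centroids.

```python
# Illustrative sketch (made-up formant values, not the study's data) of
# within-phoneme variability and between-phoneme distance in F1/F2 space.
import numpy as np

# hypothetical F1/F2 measurements (Hz) for repeated productions of two vowels
productions = {
    "i": np.array([[300, 2300], [310, 2250], [295, 2320], [305, 2280]], float),
    "a": np.array([[750, 1250], [780, 1200], [760, 1280], [770, 1230]], float),
}

def within_phoneme_variability(tokens):
    # mean Euclidean distance of each token to the vowel's centroid
    centroid = tokens.mean(axis=0)
    return np.linalg.norm(tokens - centroid, axis=1).mean()

centroids = {v: t.mean(axis=0) for v, t in productions.items()}
between = np.linalg.norm(centroids["i"] - centroids["a"])

for vowel, tokens in productions.items():
    print(f"within-phoneme variability /{vowel}/: {within_phoneme_variability(tokens):.1f} Hz")
print(f"between-phoneme distance /i/-/a/: {between:.1f} Hz")
```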
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Psychonomic Bulletin & Review, 25(4), 1458-1467. doi:10.3758/s13423-018-1494-x.

    Abstract

    When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. But some speakers follow the perturbation. In the current study, we investigated whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. The results suggest that whether a perturbation-related response is opposing or following depends on ongoing fluctuations of the production system: It initially responds by doing the opposite of what it was doing. This effect and the non-trivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production-system’s state at the time of perturbation.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2018). Self-monitoring in the cerebral cortex: Neural responses to pitch-perturbed auditory feedback during speech production. NeuroImage, 179, 326-336. doi:10.1016/j.neuroimage.2018.06.061.

    Abstract

    Speaking is a complex motor skill which requires near instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing.
  • Frega, M., van Gestel, S. H. C., Linda, K., Van der Raadt, J., Keller, J., Van Rhijn, J. R., Schubert, D., Albers, C. A., & Kasri, N. N. (2017). Rapid neuronal differentiation of induced pluripotent stem cells for measuring network activity on micro-electrode arrays. Journal of Visualized Experiments, e54900. doi:10.3791/54900.

    Abstract

    Neurons derived from human induced Pluripotent Stem Cells (hiPSCs) provide a promising new tool for studying neurological disorders. In the past decade, many protocols for differentiating hiPSCs into neurons have been developed. However, these protocols are often slow with high variability, low reproducibility, and low efficiency. In addition, the neurons obtained with these protocols are often immature and lack adequate functional activity both at the single-cell and network levels unless the neurons are cultured for several months. Partially due to these limitations, the functional properties of hiPSC-derived neuronal networks are still not well characterized. Here, we adapt a recently published protocol that describes production of human neurons from hiPSCs by forced expression of the transcription factor neurogenin-2. This protocol is rapid (yielding mature neurons within 3 weeks) and efficient, with nearly 100% conversion efficiency of transduced cells (>95% of DAPI-positive cells are MAP2 positive). Furthermore, the protocol yields a homogeneous population of excitatory neurons that would allow the investigation of cell-type specific contributions to neurological disorders. We modified the original protocol by generating stably transduced hiPSC cells, giving us explicit control over the total number of neurons. These cells are then used to generate hiPSC-derived neuronal networks on micro-electrode arrays. In this way, the spontaneous electrophysiological activity of hiPSC-derived neuronal networks can be measured and characterized, while retaining interexperimental consistency in terms of cell density. The presented protocol is broadly applicable, especially for mechanistic and pharmacological studies on human neuronal networks.

    Additional information

    video component of this article
  • French, C. A., & Fisher, S. E. (2014). What can mice tell us about Foxp2 function? Current Opinion in Neurobiology, 28, 72-79. doi:10.1016/j.conb.2014.07.003.

    Abstract

    Disruptions of the FOXP2 gene cause a rare speech and language disorder, a discovery that has opened up novel avenues for investigating the relevant neural pathways. FOXP2 shows remarkably high conservation of sequence and neural expression in diverse vertebrates, suggesting that studies in other species are useful in elucidating its functions. Here we describe how investigations of mice that carry disruptions of Foxp2 provide insights at multiple levels: molecules, cells, circuits and behaviour. Work thus far has implicated the gene in key processes including neurite outgrowth, synaptic plasticity, sensorimotor integration and motor-skill learning.
  • Frost, R. L. A., Monaghan, P., & Tatsumi, T. (2017). Domain-general mechanisms for speech segmentation: The role of duration information in language learning. Journal of Experimental Psychology: Human Perception and Performance, 43(3), 466-476. doi:10.1037/xhp0000325.

    Abstract

    Speech segmentation is supported by multiple sources of information that may either inform language processing specifically, or serve learning more broadly. The Iambic/Trochaic Law (ITL), where increased duration indicates the end of a group and increased emphasis indicates the beginning of a group, has been proposed as a domain-general mechanism that also applies to language. However, language background has been suggested to modulate use of the ITL, meaning that these perceptual grouping preferences may instead be a consequence of language exposure. To distinguish between these accounts, we exposed native-English and native-Japanese listeners to sequences of speech (Experiment 1) and nonspeech stimuli (Experiment 2), and examined segmentation using a 2AFC task. Duration was manipulated over 3 conditions: sequences contained either an initial-item duration increase, or a final-item duration increase, or items of uniform duration. In Experiment 1, language background did not affect the use of duration as a cue for segmenting speech in a structured artificial language. In Experiment 2, the same results were found for grouping structured sequences of visual shapes. The results are consistent with proposals that duration information draws upon a domain-general mechanism that can apply to the special case of language acquisition.
  • Frost, R. L. A., & Monaghan, P. (2017). Sleep-driven computations in speech processing. PLoS One, 12(1): e0169538. doi:10.1371/journal.pone.0169538.

    Abstract

    Acquiring language requires segmenting speech into individual words, and abstracting over those words to discover grammatical structure. However, these tasks can be conflicting—on the one hand requiring memorisation of precise sequences that occur in speech, and on the other requiring a flexible reconstruction of these sequences to determine the grammar. Here, we examine whether speech segmentation and generalisation of grammar can occur simultaneously—with the conflicting requirements for these tasks being overcome by sleep-related consolidation. After exposure to an artificial language comprising words containing non-adjacent dependencies, participants underwent periods of consolidation involving either sleep or wake. Participants who slept before testing demonstrated a sustained boost to word learning and a short-term improvement to grammatical generalisation of the non-adjacencies, with improvements after sleep outweighing gains seen after an equal period of wake. Thus, we propose that sleep may facilitate processing for these conflicting tasks in language acquisition, but with enhanced benefits for speech segmentation.

    Additional information

    Data available
  • Fuhrmann, D., Ravignani, A., Marshall-Pescini, S., & Whiten, A. (2014). Synchrony and motor mimicking in chimpanzee observational learning. Scientific Reports, 4: 5283. doi:10.1038/srep05283.

    Abstract

    Cumulative tool-based culture underwrote our species' evolutionary success and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However, the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function.

    Additional information

    Supplementary Information
  • Furman, R., Kuntay, A., & Ozyurek, A. (2014). Early language-specificity of children's event encoding in speech and gesture: Evidence from caused motion in Turkish. Language, Cognition and Neuroscience, 29, 620-634. doi:10.1080/01690965.2013.824993.

    Abstract

    Previous research on language development shows that children are tuned early on to the language-specific semantic and syntactic encoding of events in their native language. Here we ask whether language-specificity is also evident in children's early representations in gesture accompanying speech. In a longitudinal study, we examined the spontaneous speech and cospeech gestures of eight Turkish-speaking children aged one to three and focused on their caused motion event expressions. In Turkish, unlike in English, the main semantic elements of caused motion such as Action and Path can be encoded in the verb (e.g. sok- ‘put in’) and the arguments of a verb can be easily omitted. We found that Turkish-speaking children's speech indeed displayed these language-specific features and focused on verbs to encode caused motion. More interestingly, we found that their early gestures also manifested specificity. Children used iconic cospeech gestures (from 19 months onwards) as often as pointing gestures and represented semantic elements such as Action with Figure and/or Path that reinforced or supplemented speech in language-specific ways until the age of three. In the light of previous reports on the scarcity of iconic gestures in English-speaking children's early productions, we argue that the language children learn shapes gestures and how they get integrated with speech in the first three years of life.
  • Ganushchak, L., Konopka, A. E., & Chen, Y. (2014). What the eyes say about planning of focused referents during sentence formulation: a cross-linguistic investigation. Frontiers in Psychology, 5: 1124. doi:10.3389/fpsyg.2014.01124.

    Abstract

    This study investigated how sentence formulation is influenced by a preceding discourse context. In two eye-tracking experiments, participants described pictures of two-character transitive events in Dutch (Experiment 1) and Chinese (Experiment 2). Focus was manipulated by presenting questions before each picture. In the Neutral condition, participants first heard ‘What is happening here?’ In the Object or Subject Focus conditions, the questions asked about the Object or Subject character (What is the policeman stopping? Who is stopping the truck?). The target response was the same in all conditions (The policeman is stopping the truck). In both experiments, sentence formulation in the Neutral condition showed the expected pattern of speakers fixating the subject character (policeman) before the object character (truck). In contrast, in the focus conditions speakers rapidly directed their gaze preferentially only to the character they needed to encode to answer the question (the new, or focused, character). The timing of gaze shifts to the new character varied by language group (Dutch vs. Chinese): shifts to the new character occurred earlier when information in the question can be repeated in the response with the same syntactic structure (in Chinese but not in Dutch). The results show that discourse affects the timecourse of linguistic formulation in simple sentences and that these effects can be modulated by language-specific linguistic structures such as parallels in the syntax of questions and declarative sentences.
  • Ganushchak, L. Y., & Acheson, D. J. (Eds.). (2014). What's to be learned from speaking aloud? - Advances in the neurophysiological measurement of overt language production. [Research topic] [Special Issue]. Frontiers in Language Sciences. Retrieved from http://www.frontiersin.org/Language_Sciences/researchtopics/What_s_to_be_Learned_from_Spea/1671.

    Abstract

    Researchers have long avoided neurophysiological experiments of overt speech production due to the suspicion that artifacts caused by muscle activity may lead to a bad signal-to-noise ratio in the measurements. However, the need to actually produce speech may influence earlier processing and qualitatively change speech production processes and what we can infer from neurophysiological measures thereof. Recently, however, overt speech has been successfully investigated using EEG, MEG, and fMRI. The aim of this Research Topic is to draw together recent research on the neurophysiological basis of language production, with the aim of developing and extending theoretical accounts of the language production process. In this Research Topic of Frontiers in Language Sciences, we invite both experimental and review papers, as well as those about the latest methods in acquisition and analysis of overt language production data. All aspects of language production are welcome: i.e., from conceptualization to articulation during native as well as multilingual language production. Focus should be placed on using the neurophysiological data to inform questions about the processing stages of language production. In addition, emphasis should be placed on the extent to which the identified components of the electrophysiological signal (e.g., ERP/ERF, neuronal oscillations, etc.), brain areas or networks are related to language comprehension and other cognitive domains. By bringing together electrophysiological and neuroimaging evidence on language production mechanisms, a more complete picture of the locus of language production processes and their temporal and neurophysiological signatures will emerge.
  • Gao, X., & Jiang, T. (2018). Sensory constraints on perceptual simulation during sentence reading. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 848-855. doi:10.1037/xhp0000475.

    Abstract

    Resource-constrained models of language processing predict that perceptual simulation during language understanding would be compromised by sensory limitations (such as reading text in unfamiliar/difficult font), whereas strong versions of embodied theories of language would predict that simulating perceptual symbols in language would not be impaired even under sensory-constrained situations. In 2 experiments, sensory decoding difficulty was manipulated by using easy and hard fonts to study perceptual simulation during sentence reading (Zwaan, Stanfield, & Yaxley, 2002). Results indicated that simulating perceptual symbols in language was not compromised by surface-form decoding challenges such as difficult font, suggesting relative resilience of embodied language processing in the face of certain sensory constraints. Further implications for learning from text and individual differences in language processing will be discussed.
  • Garcia, R., Dery, J. E., Roeser, J., & Höhle, B. (2018). Word order preferences of Tagalog-speaking adults and children. First Language, 38(6), 617-640. doi:10.1177/0142723718790317.

    Abstract

    This article investigates the word order preferences of Tagalog-speaking adults and five- and seven-year-old children. The participants were asked to complete sentences to describe pictures depicting actions between two animate entities. Adults preferred agent-initial constructions in the patient voice but not in the agent voice, while the children produced mainly agent-initial constructions regardless of voice. This agent-initial preference, despite the lack of a close link between the agent and the subject in Tagalog, shows that this word order preference is not merely syntactically-driven (subject-initial preference). Additionally, the children’s agent-initial preference in the agent voice, contrary to the adults’ lack of preference, shows that children do not respect the subject-last principle of ordering Tagalog full noun phrases. These results suggest that language-specific optional features like a subject-last principle take longer to be acquired.
  • Gaskell, M. G., Warker, J., Lindsay, S., Frost, R. L. A., Guest, J., Snowdon, R., & Stackhouse, A. (2014). Sleep Underpins the Plasticity of Language Production. Psychological Science, 25(7), 1457-1465. doi:10.1177/0956797614535937.

    Abstract

    The constraints that govern acceptable phoneme combinations in speech perception and production have considerable plasticity. We addressed whether sleep influences the acquisition of new constraints and their integration into the speech-production system. Participants repeated sequences of syllables in which two phonemes were artificially restricted to syllable onset or syllable coda, depending on the vowel in that sequence. After 48 sequences, participants either had a 90-min nap or remained awake. Participants then repeated 96 sequences so implicit constraint learning could be examined, and then were tested for constraint generalization in a forced-choice task. The sleep group, but not the wake group, produced speech errors at test that were consistent with restrictions on the placement of phonemes in training. Furthermore, only the sleep group generalized their learning to new materials. Polysomnography data showed that implicit constraint learning was associated with slow-wave sleep. These results show that sleep facilitates the integration of new linguistic knowledge with existing production constraints. These data have relevance for systems-consolidation models of sleep.

    Additional information

    https://osf.io/zqg9y/
  • Gaspard III, J. C., Bauer, G. B., Mann, D. A., Boerner, K., Denum, L., Frances, C., & Reep, R. L. (2017). Detection of hydrodynamic stimuli by the postcranial body of Florida manatees (Trichechus manatus latirostris). Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology, 203, 111-120. doi:10.1007/s00359-016-1142-8.

    Abstract

    Manatees live in shallow, frequently turbid waters. The sensory means by which they navigate in these conditions are unknown. Poor visual acuity, lack of echolocation, and modest chemosensation suggest that other modalities play an important role. Rich innervation of sensory hairs that cover the entire body and enlarged somatosensory areas of the brain suggest that tactile senses are good candidates. Previous tests of detection of underwater vibratory stimuli indicated that they use passive movement of the hairs to detect particle displacements in the vicinity of a micron or less for frequencies from 10 to 150 Hz. In the current study, hydrodynamic stimuli were created by a sinusoidally oscillating sphere that generated a dipole field at frequencies from 5 to 150 Hz. Go/no-go tests of manatee postcranial mechanoreception of hydrodynamic stimuli indicated excellent sensitivity but about an order of magnitude less than the facial region. When the vibrissae were trimmed, detection thresholds were elevated, suggesting that the vibrissae were an important means by which detection occurred. Manatees were also highly accurate in two-choice directional discrimination: greater than 90% correct at all frequencies tested. We hypothesize that manatees utilize vibrissae as a three-dimensional array to detect and localize low-frequency hydrodynamic stimuli.
  • Gerrits, F., Senft, G., & Wisse, D. (2018). Bomiyoyeva and bomduvadoya: Two rare structures on the Trobriand Islands exclusively reserved for Tabalu chiefs. Anthropos, 113, 93-113. doi:10.5771/0257-9774-2018-1-93.

    Abstract

    This article presents information about two so far undescribed buildings made by the Trobriand Islanders, the bomiyoyeva and the bomduvadova. These structures are connected to the highest-ranking chiefs living in Labai and Omarakana on Kiriwina Island. They highlight the power and eminence of these chiefs. After a brief report on the history of this project, the structure of the two houses, their function, and their use is described and information on their construction and their mythical background is provided. Finally, everyday as well as ritual, social, and political functions of both buildings are discussed. [Melanesia, Trobriand Islands, Tabalu chiefs, yams houses, bomiyoyeva, bomduvadova, authoritative capacities]

    Additional information

    link to journal
  • Ghatan, P. H., Hsieh, J. C., Petersson, K. M., Stone-Elander, S., & Ingvar, M. (1998). Coexistence of attention-based facilitation and inhibition in the human cortex. NeuroImage, 7, 23-29.

    Abstract

    A key function of attention is to select an appropriate subset of available information by facilitation of attended processes and/or inhibition of irrelevant processing. Functional imaging studies, using positron emission tomography, have during different experimental tasks revealed decreased neuronal activity in areas that process input from unattended sensory modalities. It has been hypothesized that these decreases reflect a selective inhibitory modulation of nonrelevant cortical processing. In this study we addressed this question using a continuous arithmetical task with and without concomitant disturbing auditory input (task-irrelevant speech). During the arithmetical task, irrelevant speech did not affect task-performance but yielded decreased activity in the auditory and midcingulate cortices and increased activity in the left posterior parietal cortex. This pattern of modulation is consistent with a top-down inhibitory modulation of a nonattended input to the auditory cortex and a coexisting, attention-based facilitation of task-relevant processing in higher order cortices. These findings suggest that task-related decreases in cortical activity may be of functional importance in the understanding of both attentional mechanisms and task-related information processing.
  • Gialluisi, A., Newbury, D. F., Wilcutt, E. G., Olson, R. K., DeFries, J. C., Brandler, W. M., Pennington, B. F., Smith, S. D., Scerri, T. S., Simpson, N. H., The SLI Consortium, Luciano, M., Evans, D. M., Bates, T. C., Stein, J. F., Talcott, J. B., Monaco, A. P., Paracchini, S., Francks, C., & Fisher, S. E. (2014). Genome-wide screening for DNA variants associated with reading and language traits. Genes, Brain and Behavior, 13, 686-701. doi:10.1111/gbb.12158.

    Abstract

    Reading and language abilities are heritable traits that are likely to share some genetic influences with each other. To identify pleiotropic genetic variants affecting these traits, we first performed a Genome-wide Association Scan (GWAS) meta-analysis using three richly characterised datasets comprising individuals with histories of reading or language problems, and their siblings. GWAS was performed in a total of 1862 participants using the first principal component computed from several quantitative measures of reading- and language-related abilities, both before and after adjustment for performance IQ. We identified novel suggestive associations at the SNPs rs59197085 and rs5995177 (uncorrected p ≈ 10⁻⁷ for each SNP), located respectively at the CCDC136/FLNC and RBFOX2 genes. Each of these SNPs then showed evidence for effects across multiple reading and language traits in univariate association testing against the individual traits. FLNC encodes a structural protein involved in cytoskeleton remodelling, while RBFOX2 is an important regulator of alternative splicing in neurons. The CCDC136/FLNC locus showed association with a comparable reading/language measure in an independent sample of 6434 participants from the general population, although involving distinct alleles of the associated SNP. Our datasets will form an important part of on-going international efforts to identify genes contributing to reading and language skills.
  • Gialluisi, A., Pippucci, T., & Romeo, G. (2014). Reply to ten Kate et al. European Journal of Human Genetics, 2, 157-158. doi:10.1038/ejhg.2013.153.
  • Gialluisi, A., Guadalupe, T., Francks, C., & Fisher, S. E. (2017). Neuroimaging genetic analyses of novel candidate genes associated with reading and language. Brain and Language, 172, 9-15. doi:10.1016/j.bandl.2016.07.002.

    Abstract

    Neuroimaging measures provide useful endophenotypes for tracing genetic effects on reading and language. A recent Genome-Wide Association Scan Meta-Analysis (GWASMA) of reading and language skills (N = 1862) identified strongest associations with the genes CCDC136/FLNC and RBFOX2. Here, we follow up the top findings from this GWASMA, through neuroimaging genetics in an independent sample of 1275 healthy adults. To minimize multiple-testing, we used a multivariate approach, focusing on cortical regions consistently implicated in prior literature on developmental dyslexia and language impairment. Specifically, we investigated grey matter surface area and thickness of five regions selected a priori: middle temporal gyrus (MTG); pars opercularis and pars triangularis in the inferior frontal gyrus (IFG-PO and IFG-PT); postcentral parietal gyrus (PPG) and superior temporal gyrus (STG). First, we analysed the top associated polymorphisms from the reading/language GWASMA: rs59197085 (CCDC136/FLNC) and rs5995177 (RBFOX2). There was significant multivariate association of rs5995177 with cortical thickness, driven by effects on left PPG, right MTG, right IFG (both PO and PT), and STG bilaterally. The minor allele, previously associated with reduced reading-language performance, showed negative effects on grey matter thickness. Next, we performed exploratory gene-wide analysis of CCDC136/FLNC and RBFOX2; no other associations surpassed significance thresholds. RBFOX2 encodes an important neuronal regulator of alternative splicing. Thus, the prior reported association of rs5995177 with reading/language performance could potentially be mediated by reduced thickness in associated cortical regions. In future, this hypothesis could be tested using sufficiently large samples containing both neuroimaging data and quantitative reading/language scores from the same individuals.

    Additional information

    mmc1.docx
  • Gisladottir, R. S., Bögels, S., & Levinson, S. C. (2018). Oscillatory brain responses reflect anticipation during comprehension of speech acts in spoken dialogue. Frontiers in Human Neuroscience, 12: 34. doi:10.3389/fnhum.2018.00034.

    Abstract

    Everyday conversation requires listeners to quickly recognize verbal actions, so-called speech acts, from the underspecified linguistic code and prepare a relevant response within the tight time constraints of turn-taking. The goal of this study was to determine the time-course of speech act recognition by investigating oscillatory EEG activity during comprehension of spoken dialogue. Participants listened to short, spoken dialogues with target utterances that delivered three distinct speech acts (Answers, Declinations, Pre-offers). The targets were identical across conditions at lexico-syntactic and phonetic/prosodic levels but differed in the pragmatic interpretation of the speech act performed. Speech act comprehension was associated with reduced power in the alpha/beta bands just prior to Declination speech acts, relative to Answers and Pre-offers. In addition, we observed reduced power in the theta band during the beginning of Declinations, relative to Answers. Based on the role of alpha and beta desynchronization in anticipatory processes, the results are taken to indicate that anticipation plays a role in speech act recognition. Anticipation of speech acts could be critical for efficient turn-taking, allowing interactants to quickly recognize speech acts and respond within the tight time frame characteristic of conversation. The results show that anticipatory processes can be triggered by the characteristics of the interaction, including the speech act type.

    Additional information

    data sheet 1.pdf
  • Gonzalez Gomez, N., Hayashi, A., Tsuji, S., Mazuka, R., & Nazzi, T. (2014). The role of the input on the development of the LC bias: A crosslinguistic comparison. Cognition, 132(3), 301-311. doi:10.1016/j.cognition.2014.04.004.

    Abstract

    Previous studies have described the existence of a phonotactic bias called the Labial–Coronal (LC) bias, corresponding to a tendency to produce more words beginning with a labial consonant followed by a coronal consonant (i.e. “bat”) than the opposite CL pattern (i.e. “tap”). This bias has initially been interpreted in terms of articulatory constraints of the human speech production system. However, more recently, it has been suggested that this presumably language-general LC bias in production might be accompanied by LC and CL biases in perception, acquired in infancy on the basis of the properties of the linguistic input. The present study investigates the origins of these perceptual biases, testing infants learning Japanese, a language that has been claimed to possess more CL than LC sequences, and comparing them with infants learning French, a language showing a clear LC bias in its lexicon. First, a corpus analysis of Japanese IDS and ADS revealed the existence of an overall LC bias, except for plosive sequences in ADS, which show a CL bias across counts. Second, speech preference experiments showed a perceptual preference for CL over LC plosive sequences (all recorded by a Japanese speaker) in 13- but not in 7- and 10-month-old Japanese-learning infants (Experiment 1), while revealing the emergence of an LC preference between 7 and 10 months in French-learning infants, using the exact same stimuli. These crosslinguistic behavioral differences, obtained with the same stimuli, thus reflect differences in processing in two populations of infants, which can be linked to differences in the properties of the lexicons of their respective native languages. These findings establish that the emergence of a CL/LC bias is related to exposure to a linguistic input.
  • Goodhew, S. C., & Kidd, E. (2017). Language use statistics and prototypical grapheme colours predict synaesthetes' and non-synaesthetes' word-colour associations. Acta Psychologica, 173, 73-86. doi:10.1016/j.actpsy.2016.12.008.

    Abstract

    Synaesthesia is the neuropsychological phenomenon in which individuals experience unusual sensory associations, such as experiencing particular colours in response to particular words. While it was once thought the particular pairings between stimuli were arbitrary and idiosyncratic to particular synaesthetes, there is now growing evidence for a systematic psycholinguistic basis to the associations. Here we sought to assess the explanatory value of quantifiable lexical association measures (via latent semantic analysis; LSA) in the pairings observed between words and colours in synaesthesia. To test this, we had synaesthetes report the particular colours they experienced in response to given concept words, and found that language association between the concept and colour words provided highly reliable predictors of the reported pairings. These results provide convergent evidence for a psycholinguistic basis to synaesthesia, but in a novel way, showing that exposure to particular patterns of associations in language can predict the formation of particular synaesthetic lexical-colour associations. Consistent with previous research, the prototypical synaesthetic colour for the first letter of the word also played a role in shaping the colour for the whole word, and this effect also interacted with language association, such that the effect of the colour for the first letter was stronger as the association between the concept word and the colour word in language increased. Moreover, when a group of non-synaesthetes were asked what colours they associated with the concept words, they produced very similar reports to the synaesthetes that were predicted by both language association and prototypical synaesthetic colour for the first letter of the word. This points to a shared linguistic experience generating the associations for both groups.
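    A toy sketch of how such a language-association measure can predict pairings is given below (the vectors are invented and low-dimensional; the study used latent semantic analysis over large corpora): for each concept word, the predicted colour is the colour term with the highest cosine similarity in a distributional space.

```python
# Toy illustration of the language-association idea (vectors are invented,
# not LSA weights from a corpus): the predicted colour for a concept word
# is the colour term with the highest cosine similarity to it.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

vectors = {                       # hypothetical low-dimensional semantic vectors
    "grass": np.array([0.1, 0.9, 0.2]),
    "sky":   np.array([0.8, 0.1, 0.3]),
    "red":   np.array([0.2, 0.2, 0.9]),
    "green": np.array([0.1, 0.8, 0.3]),
    "blue":  np.array([0.9, 0.1, 0.2]),
}
colour_terms = ["red", "green", "blue"]

for concept in ["grass", "sky"]:
    best = max(colour_terms, key=lambda c: cosine(vectors[concept], vectors[c]))
    print(f"{concept} -> predicted colour association: {best}")
```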
  • Goodhew, S. C., McGaw, B., & Kidd, E. (2014). Why is the sunny side always up? Explaining the spatial mapping of concepts by language use. Psychonomic Bulletin & Review, 21(5), 1287-1293. doi:10.3758/s13423-014-0593-6.

    Abstract

    Humans appear to rely on spatial mappings to represent and describe concepts. The conceptual cuing effect describes the tendency for participants to orient attention to a spatial location following the presentation of an unrelated cue word (e.g., orienting attention upward after reading the word sky). To date, such effects have predominately been explained within the embodied cognition framework, according to which people’s attention is oriented on the basis of prior experience (e.g., sky → up via perceptual simulation). However, this does not provide a compelling explanation for how abstract words have the same ability to orient attention. Why, for example, does dream also orient attention upward? We report on an experiment that investigated the role of language use (specifically, collocation between concept words and spatial words for up and down dimensions) and found that it predicted the cuing effect. The results suggest that language usage patterns may be instrumental in explaining conceptual cuing.
  • Gori, M., Vercillo, T., Sandini, G., & Burr, D. (2014). Tactile feedback improves auditory spatial localization. Frontiers in Psychology, 5: 1121. doi:10.3389/fpsyg.2014.01121.

    Abstract

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
  • Goriot, C., Broersma, M., McQueen, J. M., Unsworth, S., & Van Hout, R. (2018). Language balance and switching ability in children acquiring English as a second language. Journal of Experimental Child Psychology, 173, 168-186. doi:10.1016/j.jecp.2018.03.019.

    Abstract

    This study investigated whether relative lexical proficiency in Dutch and English in child second language (L2) learners is related to executive functioning. Participants were Dutch primary school pupils of three different age groups (4–5, 8–9, and 11–12 years) who either were enrolled in an early-English schooling program or were age-matched controls not on that early-English program. Participants performed tasks that measured switching, inhibition, and working memory. Early-English program pupils had greater knowledge of English vocabulary and more balanced Dutch–English lexicons. In both groups, lexical balance, a ratio measure obtained by dividing vocabulary scores in English by those in Dutch, was related to switching but not to inhibition or working memory performance. These results show that for children who are learning an L2 in an instructional setting, and for whom managing two languages is not yet an automatized process, language balance may be more important than L2 proficiency in influencing the relation between childhood bilingualism and switching abilities.
  • De Graaf, T. A., Duecker, F., Stankevich, Y., Ten Oever, S., & Sack, A. T. (2017). Seeing in the dark: Phosphene thresholds with eyes open versus closed in the absence of visual inputs. Brain Stimulation, 10(4), 828-835. doi:10.1016/j.brs.2017.04.127.

    Abstract

    Background: Voluntarily opening or closing our eyes results in fundamentally different input patterns and expectancies. Yet it remains unclear how our brains and visual systems adapt to these ocular states.
    Objective/Hypothesis: We here used transcranial magnetic stimulation (TMS) to probe the excitability of the human visual system with eyes open or closed, in the complete absence of visual inputs.
    Methods: Combining Bayesian staircase procedures with computer control of TMS pulse intensity allowed interleaved determination of phosphene thresholds (PT) in both conditions. We measured parieto-occipital EEG baseline activity in several stages to track oscillatory power in the alpha (8-12 Hz) frequency-band, which has previously been shown to be inversely related to phosphene perception.
    Results: Since closing the eyes generally increases alpha power, one might have expected a decrease in excitability (higher PT). While we confirmed a rise in alpha power with eyes closed, visual excitability was actually increased (PT was lower) with eyes closed.
    Conclusions: This suggests that, aside from oscillatory alpha power, additional neuronal mechanisms influence the excitability of early visual cortex. One of these may involve a more internally oriented mode of brain operation, engaged by closing the eyes. In this state, visual cortex may be more susceptible to top-down inputs, to facilitate for example multisensory integration or imagery/working memory, although alternative explanations remain possible.

    Additional information

    Supplementary data
  • Grabot, L., Kösem, A., Azizi, L., & Van Wassenhove, V. (2017). Prestimulus Alpha Oscillations and the Temporal Sequencing of Audio-visual Events. Journal of Cognitive Neuroscience, 29(9), 1566-1582. doi:10.1162/jocn_a_01145.

    Abstract

    Perceiving the temporal order of sensory events typically depends on participants' attentional state, thus likely on the endogenous fluctuations of brain activity. Using magnetoencephalography, we sought to determine whether spontaneous brain oscillations could disambiguate the perceived order of auditory and visual events presented in close temporal proximity, that is, at the individual's perceptual order threshold (Point of Subjective Simultaneity [PSS]). Two neural responses were found to index an individual's temporal order perception when contrasting brain activity as a function of perceived order (i.e., perceiving the sound first vs. perceiving the visual event first) given the same physical audiovisual sequence. First, average differences in prestimulus auditory alpha power indicated perceiving the correct ordering of audiovisual events irrespective of which sensory modality came first: a relatively low alpha power indicated perceiving auditory or visual first as a function of the actual sequence order. Additionally, the relative changes in the amplitude of the auditory (but not visual) evoked responses were correlated with participants' correct performance. Crucially, the sign of the magnitude difference in prestimulus alpha power and evoked responses between perceived audiovisual orders correlated with an individual's PSS. Taken together, our results suggest that spontaneous oscillatory activity cannot disambiguate subjective temporal order without prior knowledge of the individual's bias toward perceiving one or the other sensory modality first. Altogether, our results suggest that, under high perceptual uncertainty, the magnitude of prestimulus alpha (de)synchronization indicates the amount of compensation needed to overcome an individual's prior in the serial ordering and temporal sequencing of information.
  • De Grauwe, S., Willems, R. M., Rüschemeyer, S.-A., Lemhöfer, K., & Schriefers, H. (2014). Embodied language in first- and second-language speakers: Neural correlates of processing motor verbs. Neuropsychologia, 56, 334-349. doi:10.1016/j.neuropsychologia.2014.02.003.

    Abstract

    The involvement of neural motor and sensory systems in the processing of language has so far mainly been studied in native (L1) speakers. In an fMRI experiment, we investigated whether non-native (L2) semantic representations are rich enough to allow for activation in motor and somatosensory brain areas. German learners of Dutch and a control group of Dutch native speakers made lexical decisions about visually presented Dutch motor and non-motor verbs. Region-of-interest (ROI) and whole-brain analyses indicated that L2 speakers, like L1 speakers, showed significantly increased activation for simple motor compared to non-motor verbs in motor and somatosensory regions. This effect was not restricted to Dutch-German cognate verbs, but was also present for non-cognate verbs. These results indicate that L2 semantic representations are rich enough for motor-related activations to develop in motor and somatosensory areas.
  • De Grauwe, S., Lemhöfer, K., Willems, R. M., & Schriefers, H. (2014). L2 speakers decompose morphologically complex verbs: fMRI evidence from priming of transparent derived verbs. Frontiers in Human Neuroscience, 8: 802. doi:10.3389/fnhum.2014.00802.

    Abstract

    In this functional magnetic resonance imaging (fMRI) long-lag priming study, we investigated the processing of Dutch semantically transparent, derived prefix verbs. In such words, the meaning of the word as a whole can be deduced from the meanings of its parts, e.g., wegleggen “put aside.” Many behavioral and some fMRI studies suggest that native (L1) speakers decompose transparent derived words. The brain region usually implicated in morphological decomposition is the left inferior frontal gyrus (LIFG). In non-native (L2) speakers, the processing of transparent derived words has hardly been investigated, especially in fMRI studies, and results are contradictory: some studies find more reliance on holistic (i.e., non-decompositional) processing by L2 speakers; some find no difference between L1 and L2 speakers. In this study, we wanted to find out whether Dutch transparent derived prefix verbs are decomposed or processed holistically by German L2 speakers of Dutch. Half of the derived verbs (e.g., omvallen “fall down”) were preceded by their stem (e.g., vallen “fall”) with a lag of 4–6 words (“primed”); the other half (e.g., inslapen “fall asleep”) were not (“unprimed”). L1 and L2 speakers of Dutch made lexical decisions on these visually presented verbs. Both region of interest analyses and whole-brain analyses showed that there was a significant repetition suppression effect for primed compared to unprimed derived verbs in the LIFG. This was true both for the analyses over L2 speakers only and for the analyses over the two language groups together. The latter did not reveal any interaction with language group (L1 vs. L2) in the LIFG. Thus, L2 speakers show a clear priming effect in the LIFG, an area that has been associated with morphological decomposition. Our findings are consistent with the idea that L2 speakers engage in decomposition of transparent derived verbs rather than processing them holistically.

    Additional information

    Data Sheet 1.docx
  • Greenfield, P. M., Slobin, D., Cole, M., Gardner, H., Sylva, K., Levelt, W. J. M., Lucariello, J., Kay, A., Amsterdam, A., & Shore, B. (2017). Remembering Jerome Bruner: A series of tributes to Jerome “Jerry” Bruner, who died in 2016 at the age of 100, reflects the seminal contributions that led him to be known as a co-founder of the cognitive revolution. Observer, 30(2). Retrieved from http://www.psychologicalscience.org/observer/remembering-jerome-bruner.

    Abstract

    Jerome Seymour “Jerry” Bruner was born on October 1, 1915, in New York City. He began his academic career as psychology professor at Harvard University; he ended it as University Professor Emeritus at New York University (NYU) Law School. What happened at both ends and in between is the subject of the richly variegated remembrances that follow. On June 5, 2016, Bruner died in his Greenwich Village loft at age 100. He leaves behind his beloved partner Eleanor Fox, who was also his distinguished colleague at NYU Law School; his son Whitley; his daughter Jenny; and three grandchildren.

    Bruner’s interdisciplinarity and internationalism are seen in the remarkable variety of disciplines and geographical locations represented in the following tributes. The reader will find developmental psychology, anthropology, computer science, psycholinguistics, cognitive psychology, cultural psychology, education, and law represented; geographically speaking, the writers are located in the United States, Canada, the United Kingdom, and the Netherlands. The memories that follow are arranged in roughly chronological order according to when the writers had their first contact with Jerry Bruner.
  • Greenhill, S. J., Wu, C.-H., Hua, X., Dunn, M., Levinson, S. C., & Gray, R. D. (2017). Evolutionary dynamics of language systems. Proceedings of the National Academy of Sciences of the United States of America, 114(42), E8822-E8829. doi:10.1073/pnas.1700388114.

    Abstract

    Understanding how and why language subsystems differ in their evolutionary dynamics is a fundamental question for historical and comparative linguistics. One key dynamic is the rate of language change. While it is commonly thought that the rapid rate of change hampers the reconstruction of deep language relationships beyond 6,000–10,000 y, there are suggestions that grammatical structures might retain more signal over time than other subsystems, such as basic vocabulary. In this study, we use a Dirichlet process mixture model to infer the rates of change in lexical and grammatical data from 81 Austronesian languages. We show that, on average, most grammatical features actually change faster than items of basic vocabulary. The grammatical data show less schismogenesis, higher rates of homoplasy, and more bursts of contact-induced change than the basic vocabulary data. However, there is a core of grammatical and lexical features that are highly stable. These findings suggest that different subsystems of language have differing dynamics and that careful, nuanced models of language change will be needed to extract deeper signal from the noise of parallel evolution, areal readaptation, and contact.
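
    The rate-clustering idea behind the Dirichlet process mixture model mentioned in the abstract above can be sketched in Python (an illustration only, using invented per-feature rates; this is not the authors' phylogenetic implementation):

        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        rng = np.random.default_rng(0)
        # Hypothetical per-feature rate estimates: a slowly evolving core plus faster-changing features.
        rates = np.concatenate([rng.gamma(2.0, 0.05, 60),   # stable features
                                rng.gamma(2.0, 0.50, 40)])  # fast-changing features
        log_rates = np.log(rates).reshape(-1, 1)

        # Truncated Dirichlet process mixture: the data decide how many rate classes are actually used.
        dpmm = BayesianGaussianMixture(
            n_components=10,
            weight_concentration_prior_type="dirichlet_process",
            random_state=0,
        ).fit(log_rates)

        print("rate classes used:", len(np.unique(dpmm.predict(log_rates))))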
  • Grieco-Calub, T. M., Ward, K. M., & Brehm, L. (2017). Multitasking During Degraded Speech Recognition in School-Age Children. Trends in Hearing, 21, 1-14. doi:10.1177/2331216516686786.

    Abstract

    Multitasking requires individuals to allocate their cognitive resources across different tasks. The purpose of the current study was to assess school-age children’s multitasking abilities during degraded speech recognition. Children (8 to 12 years old) completed a dual-task paradigm including a sentence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded with 8, 6, or 4 spectral channels and a visual monitoring (secondary) task. Children’s accuracy and reaction time on the visual monitoring task were quantified during the dual-task paradigm in each condition of the primary task and compared with single-task performance. Children experienced dual-task costs in the 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the visual monitoring task relative to baseline performance. In all conditions, children’s dual-task performance on the visual monitoring task was strongly predicted by their single-task (baseline) performance on the task. Results suggest that children’s proficiency with the secondary task contributes to the magnitude of dual-task costs while multitasking during degraded speech recognition.
  • Groen, I. I. A., Jahfari, S., Seijdel, N., Ghebreab, S., Lamme, V. A. F., & Scholte, H. S. (2018). Scene complexity modulates degree of feedback activity during object detection in natural scenes. PLoS Computational Biology, 14: e1006690. doi:10.1371/journal.pcbi.1006690.

    Abstract

    Selective brain responses to objects arise within a few hundred milliseconds of neural processing, suggesting that visual object recognition is mediated by rapid feed-forward activations. Yet disruption of neural responses in early visual cortex beyond feed-forward processing stages affects object recognition performance. Here, we unite these discrepant findings by reporting that object recognition involves enhanced feedback activity (recurrent processing within early visual cortex) when target objects are embedded in natural scenes that are characterized by high complexity. Human participants performed an animal target detection task on natural scenes with low, medium or high complexity as determined by a computational model of low-level contrast statistics. Three converging lines of evidence indicate that feedback was selectively enhanced for high complexity scenes. First, functional magnetic resonance imaging (fMRI) activity in early visual cortex (V1) was enhanced for target objects in scenes with high, but not low or medium complexity. Second, event-related potentials (ERPs) evoked by target objects were selectively enhanced at feedback stages of visual processing (from ~220 ms onwards) for high complexity scenes only. Third, behavioral performance for high complexity scenes deteriorated when participants were pressed for time and thus less able to incorporate the feedback activity. Modeling of the reaction time distributions using drift diffusion revealed that object information accumulated more slowly for high complexity scenes, with evidence accumulation being coupled to trial-to-trial variation in the EEG feedback response. Together, these results suggest that while feed-forward activity may suffice to recognize isolated objects, the brain employs recurrent processing more adaptively in naturalistic settings, using minimal feedback for simple scenes and increasing feedback for complex scenes.

    Additional information

    data via OSF
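
    The drift-diffusion analysis referred to in the abstract above can be illustrated with a minimal one-boundary simulation (a sketch with arbitrary parameter values, not the authors' fitting procedure): a lower drift rate, as reported for high-complexity scenes, produces slower simulated response times.

        import numpy as np

        def first_passage_times(drift, n_trials=2000, boundary=1.0, noise=1.0,
                                dt=0.001, max_t=3.0, seed=0):
            """First-passage times of a one-boundary drift-diffusion process."""
            rng = np.random.default_rng(seed)
            n_steps = int(max_t / dt)
            steps = drift * dt + noise * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
            evidence = np.cumsum(steps, axis=1)          # accumulate noisy evidence over time
            crossed = evidence >= boundary
            rt = (np.argmax(crossed, axis=1) + 1) * dt   # time of first boundary crossing
            rt[~crossed.any(axis=1)] = np.nan            # trials that never reach the boundary
            return rt

        fast = first_passage_times(drift=2.0)  # stands in for low-complexity scenes
        slow = first_passage_times(drift=1.0)  # stands in for high-complexity scenes
        print(f"median simulated RT, higher drift: {np.nanmedian(fast):.3f} s")
        print(f"median simulated RT, lower drift:  {np.nanmedian(slow):.3f} s")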
  • De Groot, F., Huettig, F., & Olivers, C. N. L. (2017). Language-induced visual and semantic biases in visual search are subject to task requirements. Visual Cognition, 25, 225-240. doi:10.1080/13506285.2017.1324934.

    Abstract

    Visual attention is biased by both visual and semantic representations activated by words. We investigated to what extent language-induced visual and semantic biases are subject to task demands. Participants memorized a spoken word for a verbal recognition task, and performed a visual search task during the retention period. Crucially, while the word had to be remembered in all conditions, it was either relevant for the search (as it also indicated the target) or irrelevant (as it only served the memory test afterwards). On critical trials, displays contained objects that were visually or semantically related to the memorized word. When the word was relevant for the search, eye movement biases towards visually related objects arose earlier and more strongly than biases towards semantically related objects. When the word was irrelevant, there was still evidence for visual and semantic biases, but these biases were substantially weaker, and similar in strength and temporal dynamics, without a visual advantage. We conclude that language-induced attentional biases are subject to task requirements.
  • Guadalupe, T., Willems, R. M., Zwiers, M., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Franke, B., Fisher, S. E., & Francks, C. (2014). Differences in cerebral cortical anatomy of left- and right-handers. Frontiers in Psychology, 5: 261. doi:10.3389/fpsyg.2014.00261.

    Abstract

    The left and right sides of the human brain are specialized for different kinds of information processing, and much of our cognition is lateralized to an extent towards one side or the other. Handedness is a reflection of nervous system lateralization. Roughly ten percent of people are mixed- or left-handed, and they show an elevated rate of reductions or reversals of some cerebral functional asymmetries compared to right-handers. Brain anatomical correlates of left-handedness have also been suggested. However, the relationships of left-handedness to brain structure and function remain far from clear. We carried out a comprehensive analysis of cortical surface area differences between 106 left-handed subjects and 1960 right-handed subjects, measured using an automated method of regional parcellation (FreeSurfer, Destrieux atlas). This is the largest study sample that has so far been used in relation to this issue. No individual cortical region showed an association with left-handedness that survived statistical correction for multiple testing, although there was a nominally significant association with the surface area of a previously implicated region: the left precentral sulcus. Identifying brain structural correlates of handedness may prove useful for genetic studies of cerebral asymmetries, as well as providing new avenues for the study of relations between handedness, cerebral lateralization and cognition.
  • Guadalupe, T., Mathias, S. R., Van Erp, T. G. M., Whelan, C. D., Zwiers, M. P., Abe, Y., Abramovic, L., Agartz, I., Andreassen, O. A., Arias-Vásquez, A., Aribisala, B. S., Armstrong, N. J., Arolt, V., Artiges, E., Ayesa-Arriola, R., Baboyan, V. G., Banaschewski, T., Barker, G., Bastin, M. E., Baune, B. T., Blangero, J., Bokde, A. L., Boedhoe, P. S., Bose, A., Brem, S., Brodaty, H., Bromberg, U., Brooks, S., Büchel, C., Buitelaar, J., Calhoun, V. D., Cannon, D. M., Cattrell, A., Cheng, Y., Conrod, P. J., Conzelmann, A., Corvin, A., Crespo-Facorro, B., Crivello, F., Dannlowski, U., De Zubicaray, G. I., De Zwarte, S. M., Deary, I. J., Desrivières, S., Doan, N. T., Donohoe, G., Dørum, E. S., Ehrlich, S., Espeseth, T., Fernández, G., Flor, H., Fouche, J.-P., Frouin, V., Fukunaga, M., Gallinat, J., Garavan, H., Gill, M., Suarez, A. G., Gowland, P., Grabe, H. J., Grotegerd, D., Gruber, O., Hagenaars, S., Hashimoto, R., Hauser, T. U., Heinz, A., Hibar, D. P., Hoekstra, P. J., Hoogman, M., Howells, F. M., Hu, H., Hulshoff Pol, H. E., Huyser, C., Ittermann, B., Jahanshad, N., Jönsson, E. G., Jurk, S., Kahn, R. S., Kelly, S., Kraemer, B., Kugel, H., Kwon, J. S., Lemaitre, H., Lesch, K.-P., Lochner, C., Luciano, M., Marquand, A. F., Martin, N. G., Martínez-Zalacaín, I., Martinot, J.-L., Mataix-Cols, D., Mather, K., McDonald, C., McMahon, K. L., Medland, S. E., Menchón, J. M., Morris, D. W., Mothersill, O., Maniega, S. M., Mwangi, B., Nakamae, T., Nakao, T., Narayanaswaamy, J. C., Nees, F., Nordvik, J. E., Onnink, A. M. H., Opel, N., Ophoff, R., Martinot, M.-L.-P., Orfanos, D. P., Pauli, P., Paus, T., Poustka, L., Reddy, J. Y., Renteria, M. E., Roiz-Santiáñez, R., Roos, A., Royle, N. A., Sachdev, P., Sánchez-Juan, P., Schmaal, L., Schumann, G., Shumskaya, E., Smolka, M. N., Soares, J. C., Soriano-Mas, C., Stein, D. J., Strike, L. T., Toro, R., Turner, J. A., Tzourio-Mazoyer, N., Uhlmann, A., Valdés Hernández, M., Van den Heuvel, O. A., Van der Meer, D., Van Haren, N. E., Veltman, D. J., Venkatasubramanian, G., Vetter, N. C., Vuletic, D., Walitza, S., Walter, H., Walton, E., Wang, Z., Wardlaw, J., Wen, W., Westlye, L. T., Whelan, R., Wittfeld, K., Wolfers, T., Wright, M. J., Xu, J., Xu, X., Yun, J.-Y., Zhao, J., Franke, B., Thompson, P. M., Glahn, D. C., Mazoyer, B., Fisher, S. E., & Francks, C. (2017). Human subcortical asymmetries in 15,847 people worldwide reveal effects of age and sex. Brain Imaging and Behavior, 11(5), 1497-1514. doi:10.1007/s11682-016-9629-z.

    Abstract

    The two hemispheres of the human brain differ functionally and structurally. Despite over a century of research, the extent to which brain asymmetry is influenced by sex, handedness, age, and genetic factors is still controversial. Here we present the largest ever analysis of subcortical brain asymmetries, in a harmonized multi-site study using meta-analysis methods. Volumetric asymmetry of seven subcortical structures was assessed in 15,847 MRI scans from 52 datasets worldwide. There were sex differences in the asymmetry of the globus pallidus and putamen. Heritability estimates, derived from 1170 subjects belonging to 71 extended pedigrees, revealed that additive genetic factors influenced the asymmetry of these two structures and that of the hippocampus and thalamus. Handedness had no detectable effect on subcortical asymmetries, even in this unprecedented sample size, but the asymmetry of the putamen varied with age. Genetic drivers of asymmetry in the hippocampus, thalamus and basal ganglia may affect variability in human cognition, including susceptibility to psychiatric disorders.

    Additional information

    11682_2016_9629_MOESM1_ESM.pdf
  • Guadalupe, T., Zwiers, M. P., Teumer, A., Wittfeld, K., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2014). Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets. Human Brain Mapping, 35(7), 3277-3289. doi:10.1002/hbm.22401.

    Abstract

    Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P < 5 × 10^-8). There was no enrichment of genetic association in genes involved in left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries.
  • Guerra, E., & Knoeferle, P. (2014). Spatial distance effects on incremental semantic interpretation of abstract sentences: Evidence from eye tracking. Cognition, 133(3), 535-552. doi:10.1016/j.cognition.2014.07.007.

    Abstract

    A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither referenced by, nor lexically associated with, the sentence. Three eye-tracking reading experiments examined the effects of spatial distance between words (Experiment 1) and objects (Experiments 2 and 3) on participants’ reading times for sentences that convey similarity or difference between two abstract nouns (e.g., ‘Peace and war are certainly different...’). Before reading the sentence, participants inspected a visual context with two playing cards that moved either far apart or close together. In Experiment 1, the cards turned and showed the first two nouns of the sentence (e.g., ‘peace’, ‘war’). In Experiments 2 and 3, they turned but remained blank. Participants’ reading times at the adjective (Experiment 1: first-pass reading time; Experiment 2: total times) and at the second noun phrase (Experiment 3: first-pass times) were faster for sentences that expressed similarity when the preceding words/objects were close together (vs. far apart) and for sentences that expressed dissimilarity when the preceding words/objects were far apart (vs. close together). Thus, spatial distance between words or entirely unrelated objects can rapidly and incrementally modulate the semantic interpretation of abstract sentences.

    Additional information

    mmc1.doc
  • Guest, O., & Love, B. C. (2017). What the success of brain imaging implies about the neural code. eLife, 6: e21397. doi:10.7554/eLife.21397.

    Abstract

    The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI’s limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI’s successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI.
  • Guggenheim, J. A., Williams, C., Northstone, K., Howe, L. D., Tilling, K., St Pourcain, B., McMahon, G., & Lawlor, D. A. (2014). Does Vitamin D Mediate the Protective Effects of Time Outdoors On Myopia? Findings From a Prospective Birth Cohort. Investigative Ophthalmology & Visual Science, 55(12), 8550-8558. doi:10.1167/iovs.14-15839.
  • Gullberg, M. (1995). Giving language a hand: gesture as a cue based communicative strategy. Working Papers, Lund University, Dept. of Linguistics, 44, 41-60.

    Abstract

    All accounts of communicative behaviour in general, and communicative strategies in particular, mention gesture in relation to language acquisition (cf. Faerch & Kasper 1983 for an overview). However, few attempts have been made to investigate how spoken language and spontaneous gesture combine to determine discourse referents. Referential gesture and referential discourse will be of particular interest, since communicative strategies in second language discourse often involve labelling problems.

    This paper will focus on two issues:

    1) Within a cognitive account of communicative strategies, gesture will be seen to be part of conceptual or analysis-based strategies, in that relational features in the referents are exploited;

    2) It will be argued that communication strategies can be seen in terms of cue manipulation in the same sense as sentence processing has been analysed in terms of competing cues. Strategic behaviour, and indeed the process of referring in general, are seen in terms of cues, combining or competing to determine discourse referents. Gesture can then be regarded as being such a cue at the discourse level, and as a cue-based communicative strategy, in that gesture functions by exploiting physically based cues which can be recognised as being part of the referent. The question of iconicity and motivation vs. the arbitrary qualities of gesture as a strategic cue will be addressed in connection with this.
  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter. Psychologie, 17, 48-49.
  • Hagoort, P. (2017). Don't forget neurobiology: An experimental approach to linguistic representation. Commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e292. doi:10.1017/S0140525X17000401.

    Abstract

    Acceptability judgments are no longer acceptable as the holy grail for testing the nature of linguistic representations. Experimental and quantitative methods should be used to test theoretical claims in psycholinguistics. These methods should include not only behavior, but also the more recent possibilities to probe the neural codes for language-relevant representation.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk. Neuropraxis, 6, 204-205.
  • Hagoort, P. (2014). Nodes and networks in the neural architecture for language: Broca's region and beyond. Current Opinion in Neurobiology, 28, 136-141. doi:10.1016/j.conb.2014.07.013.

    Abstract

    Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke–Lichtheim–Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.
  • Hagoort, P. (2018). Prerequisites for an evolutionary stance on the neurobiology of language. Current Opinion in Behavioral Sciences, 21, 191-194. doi:10.1016/j.cobeha.2018.05.012.
  • Hagoort, P. (2017). The core and beyond in the language-ready brain. Neuroscience and Biobehavioral Reviews, 81, 194-204. doi:10.1016/j.neubiorev.2017.01.048.

    Abstract

    In this paper a general cognitive architecture of spoken language processing is specified. This is followed by an account of how this cognitive architecture is instantiated in the human brain. The spatial aspects of the networks for language are discussed, as well as the temporal dynamics and the underlying neurophysiology. A distinction is proposed between networks for coding/decoding linguistic information and additional networks for getting from coded meaning to speaker meaning, i.e. for making the inferences that enable the listener to understand the intentions of the speaker.
  • Hagoort, P., Brown, C. M., & Swaab, T. Y. (1995). Semantic deficits in right hemisphere patients. Brain and Language, 51, 161-163. doi:10.1006/brln.1995.1058.
  • Hagoort, P., & Indefrey, P. (2014). The neurobiology of language beyond single words. Annual Review of Neuroscience, 37, 347-362. doi:10.1146/annurev-neuro-071013-013847.

    Abstract

    A hallmark of human language is that we combine lexical building blocks retrieved from memory in endless new ways. This combinatorial aspect of language is referred to as unification. Here we focus on the neurobiological infrastructure for syntactic and semantic unification. Unification is characterized by a high-speed temporal profile including both prediction and integration of retrieved lexical elements. A meta-analysis of numerous neuroimaging studies reveals a clear dorsal/ventral gradient in both left inferior frontal cortex and left posterior temporal cortex, with dorsal foci for syntactic processing and ventral foci for semantic processing. In addition to core areas for unification, further networks need to be recruited to realize language-driven communication to its full extent. One example is the theory of mind network, which allows listeners and readers to infer the intended message (speaker meaning) from the coded meaning of the linguistic utterance. This indicates that sensorimotor simulation cannot handle all of language processing.
  • Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2018). Infants' sensitivity to rhyme in songs. Infant Behavior and Development, 52, 130-139. doi:10.1016/j.infbeh.2018.07.002.

    Abstract

    Children’s songs often contain rhyming words at phrase endings. In this study, we investigated whether infants can already recognize this phonological pattern in songs. Earlier studies using lists of spoken words were equivocal on infants’ spontaneous processing of rhymes (Hayes, Slater, & Brown, 2000; Jusczyk, Goodman, & Baumann, 1999). Songs, however, constitute an ecologically valid rhyming stimulus, which could allow for spontaneous processing of this phonological pattern in infants. Novel children’s songs with rhyming and non-rhyming lyrics using pseudo-words were presented to 35 9-month-old Dutch infants using the Headturn Preference Procedure. Infants on average listened longer to the non-rhyming songs, although around half of the infants exhibited a preference for the rhyming songs. These results highlight that infants have the processing abilities to benefit from their natural rhyming input for the development of their phonological abilities.
  • Hammarström, H., & Güldemann, T. (2014). Quantifying geographical determinants of large-scale distributions of linguistic features. Language Dynamics and Change, 4, 87-115. doi:10.1163/22105832-00401002.

    Abstract

    In the recent past the work on large-scale linguistic distributions across the globe has intensified considerably. Work on macro-areal relationships in Africa (Güldemann, 2010) suggests that the shape of convergence areas may be determined by climatic factors and geophysical features such as mountains, water bodies, coastlines, etc. Worldwide data is now available for geophysical features as well as linguistic features, including numeral systems and basic constituent order. We explore the possibility that the shape of areal aggregations of individual features in these two linguistic domains correlates with Köppen-Geiger climate zones. Furthermore, we test the hypothesis that the shape of such areal feature aggregations is determined by the contour of adjacent geophysical features like mountain ranges or coastlines. In these first basic tests, we do not find clear evidence that either Köppen-Geiger climate zones or the contours of geophysical features are major predictors for the linguistic data at hand.

  • Hammarström, H., & Donohue, M. (2014). Some principles on the use of macro-areas in typological comparison. Language Dynamics and Change, 4, 167-187. doi:10.1163/22105832-00401001.

    Abstract

    While the notion of the ‘area’ or ‘Sprachbund’ has a long history in linguistics, with geographically-defined regions frequently cited as a useful means to explain typological distributions, the problem of delimiting areas has not been well addressed. Lists of general-purpose, largely independent ‘macro-areas’ (typically continent size) have been proposed as a step to rule out contact as an explanation for various large-scale linguistic phenomena. This squib points out some problems in some of the currently widely-used predetermined areas, those found in the World Atlas of Language Structures (Haspelmath et al., 2005). Instead, we propose a principled division of the world’s landmasses into six macro-areas that arguably have better geographical independence properties.
  • Hammarström, H. (2014). [Review of the book A grammar of the great Andamanese language: An ethnolinguistic study by Anvita Abbi]. Journal of South Asian Languages and Linguistics, 1, 111-116. doi:10.1515/jsall-2014-0007.
  • Hao, X., Huang, Y., Song, Y., Kong, X., & Liu, J. (2017). Experience with the Cardinal Coordinate System Contributes to the Precision of Cognitive Maps. Frontiers in Psychology, 8: 1166. doi:10.3389/fpsyg.2017.01166.

    Abstract

    The coordinate system has been proposed as a fundamental and cross-culturally used spatial representation, through which people code location and direction information in the environment. Here we provided direct evidence demonstrating that daily experience with the cardinal coordinate system (i.e., east, west, north, and south) contributed to the representation of cognitive maps. Behaviorally, we found that individuals who relied more on the cardinal coordinate system for daily navigation made smaller errors in an indoor pointing task, suggesting that the cardinal coordinate system is an important element of cognitive maps. Neurally, the extent to which individuals relied on the cardinal coordinate system was positively correlated with the gray matter volume of the entorhinal cortex, suggesting that the entorhinal cortex may serve as the neuroanatomical basis of coordinate-based navigation (the entorhinal coordinate area, ECA). Further analyses on the resting-state functional connectivity revealed that the intrinsic interaction between the ECA and two hippocampal sub-regions, the subiculum and cornu ammonis, might be linked with the representation precision of cognitive maps. In sum, our study reveals an association between daily experience with the cardinal coordinate system and cognitive maps, and suggests that the ECA works in collaboration with hippocampal sub-regions to represent cognitive maps.
  • Harmon, Z., & Kapatsinski, V. (2017). Putting old tools to novel uses: The role of form accessibility in semantic extension. Cognitive Psychology, 98, 22-44. doi:10.1016/j.cogpsych.2017.08.002.

    Abstract

    An increase in frequency of a form has been argued to result in semantic extension (Bybee, 2003; Zipf, 1949). Yet, research on the acquisition of lexical semantics suggests that a form that frequently co-occurs with a meaning gets restricted to that meaning (Xu & Tenenbaum, 2007). The current work reconciles these positions by showing that – through its effect on form accessibility – frequency causes semantic extension in production, while at the same time causing entrenchment in comprehension. Repeatedly experiencing a form paired with a specific meaning makes one more likely to re-use the form to express related meanings, while also increasing one’s confidence that the form is never used to express those meanings. Recurrent pathways of semantic change are argued to result from a tug of war between the production-side pressure to reuse easily accessible forms and the comprehension-side confidence that one has seen all possible uses of a frequent form.
  • Hartung, F., Hagoort, P., & Willems, R. M. (2017). Readers select a comprehension mode independent of pronoun: Evidence from fMRI during narrative comprehension. Brain and Language, 170, 29-38. doi:10.1016/j.bandl.2017.03.007.

    Abstract

    Perspective is a crucial feature for communicating about events. Yet it is unclear how linguistically encoded perspective relates to cognitive perspective taking. Here, we tested the effect of perspective taking with short literary stories. Participants listened to stories with 1st or 3rd person pronouns referring to the protagonist, while undergoing fMRI. When comparing action events with 1st and 3rd person pronouns, we found no evidence for a neural dissociation depending on the pronoun. A split sample approach based on the self-reported experience of perspective taking revealed three comprehension preferences. One group showed a strong 1st person preference, another a strong 3rd person preference, while a third group engaged in 1st and 3rd person perspective taking simultaneously. Comparing brain activations of the groups revealed different neural networks. Our results suggest that comprehension is perspective dependent, not on the perspective suggested by the text but on the reader’s (situational) preference.
  • Hartung, F., Withers, P., Hagoort, P., & Willems, R. M. (2017). When fiction is just as real as fact: No differences in reading behavior between stories believed to be based on true or fictional events. Frontiers in Psychology, 8: 1618. doi:10.3389/fpsyg.2017.01618.

    Abstract

    Experiments have shown that compared to fictional texts, readers read factual texts faster and have better memory for described situations. Reading fictional texts on the other hand seems to improve memory for exact wordings and expressions. Most of these studies used a ‘newspaper’ versus ‘literature’ comparison. In the present study, we investigated the effect of the reader’s expectation as to whether information is true or fictional with a subtler manipulation by labelling short stories as either based on true or fictional events. In addition, we tested whether narrative perspective or individual preference in perspective taking affects reading true or fictional stories differently. In an online experiment, participants (final N=1742) read one story which was introduced as based on true events or as fictional (factor fictionality). The story could be narrated in either 1st or 3rd person perspective (factor perspective). We measured immersion in and appreciation of the story, perspective taking, as well as memory for events. We found no evidence that knowing a story is fictional or based on true events influences reading behavior or experiential aspects of reading. We suggest that it is not whether a story is true or fictional, but rather expectations towards certain reading situations (e.g. reading a newspaper or literature) which affect behavior by activating appropriate reading goals. Results further confirm that narrative perspective partially influences perspective taking and experiential aspects of reading.
  • Hasson, U., Egidi, G., Marelli, M., & Willems, R. M. (2018). Grounding the neurobiology of language in first principles: The necessity of non-language-centric explanations for language comprehension. Cognition, 180(1), 135-157. doi:10.1016/j.cognition.2018.06.018.

    Abstract

    Recent decades have ushered in tremendous progress in understanding the neural basis of language. Most of our current knowledge on language and the brain, however, is derived from lab-based experiments that are far removed from everyday language use, and that are inspired by questions originating in linguistic and psycholinguistic contexts. In this paper we argue that in order to make progress, the field needs to shift its focus to understanding the neurobiology of naturalistic language comprehension. We present here a new conceptual framework for understanding the neurobiological organization of language comprehension. This framework is non-language-centered in the computational/neurobiological constructs it identifies, and focuses strongly on context. Our core arguments address three general issues: (i) the difficulty in extending language-centric explanations to discourse; (ii) the necessity of taking context as a serious topic of study, modeling it formally and acknowledging the limitations on external validity when studying language comprehension outside context; and (iii) the tenuous status of the language network as an explanatory construct. We argue that adopting this framework means that neurobiological studies of language will be less focused on identifying correlations between brain activity patterns and mechanisms postulated by psycholinguistic theories. Instead, they will be less self-referential and increasingly more inclined towards integration of language with other cognitive systems, ultimately doing more justice to the neurobiological organization of language and how it supports language as it is used in everyday life.
  • Haun, D. B. M., Rekers, Y., & Tomasello, M. (2014). Children conform to the behavior of peers; Other great apes stick with what they know. Psychological Science, 25, 2160-2167. doi:10.1177/0956797614553235.

    Abstract

    All primates learn things from conspecifics socially, but it is not clear whether they conform to the behavior of these conspecifics—if conformity is defined as overriding individually acquired behavioral tendencies in order to copy peers’ behavior. In the current study, chimpanzees, orangutans, and 2-year-old human children individually acquired a problem-solving strategy. They then watched several conspecific peers demonstrate an alternative strategy. The children switched to this new, socially demonstrated strategy in roughly half of all instances, whereas the other two great-ape species almost never adjusted their behavior to the majority’s. In a follow-up study, children switched much more when the peer demonstrators were still present than when they were absent, which suggests that their conformity arose at least in part from social motivations. These results demonstrate an important difference between the social learning of humans and great apes, a difference that might help to account for differences in human and nonhuman cultures.

    Additional information

    Haun_Rekers_Tomasello_2014_supp.pdf
  • Havron, N., Raviv, L., & Arnon, I. (2018). Literate and preliterate children show different learning patterns in an artificial language learning task. Journal of Cultural Cognitive Science, 2, 21-33. doi:10.1007/s41809-018-0015-9.

    Abstract

    Literacy affects many aspects of cognitive and linguistic processing. Among them, it increases the salience of words as units of linguistic processing. Here, we explored the impact of literacy acquisition on children’s learning of an artificial language. Recent accounts of L1–L2 differences relate adults’ greater difficulty with language learning to their smaller reliance on multiword units. In particular, multiword units are claimed to be beneficial for learning opaque grammatical relations like grammatical gender. Since literacy impacts the reliance on words as units of processing, we ask if and how acquiring literacy may change children’s language-learning results. We looked at children’s success in learning novel noun labels relative to their success in learning article-noun gender agreement, before and after learning to read. We found that preliterate first graders were better at learning agreement (larger units) than at learning nouns (smaller units), and that the difference between the two trial types significantly decreased after these children acquired literacy. In contrast, literate third graders were as good in both trial types. These findings suggest that literacy affects not only language processing, but also leads to important differences in language learning. They support the idea that some of children’s advantage in language learning comes from their previous knowledge and experience with language—and specifically, their lack of experience with written texts.
  • Hebebrand, J., Peters, T., Schijven, D., Hebebrand, M., Grasemann, C., Winkler, T. W., Heid, I. M., Antel, J., Föcker, M., Tegeler, L., Brauner, L., Adan, R. A., Luykx, J. J., Correll, C. U., König, I. R., Hinney, A., & Libuda, L. (2018). The role of genetic variation of human metabolism for BMI, mental traits and mental disorders. Molecular Metabolism, 12, 1-11. doi:10.1016/j.molmet.2018.03.015.

    Abstract

    Objective
    The aim was to assess whether loci associated with metabolic traits also have a significant role in BMI and mental traits/disorders.
    Methods
    We first assessed the number of single nucleotide polymorphisms (SNPs) with genome-wide significance for human metabolism (NHGRI-EBI Catalog). These 516 SNPs (216 independent loci) were looked up in genome-wide association studies for association with body mass index (BMI) and the mental traits/disorders educational attainment, neuroticism, schizophrenia, well-being, anxiety, depressive symptoms, major depressive disorder, autism-spectrum disorder, attention-deficit/hyperactivity disorder, Alzheimer's disease, bipolar disorder, aggressive behavior, and internalizing problems. A strict significance threshold of p < 6.92 × 10^−6 was based on the correction for 516 SNPs and all 14 phenotypes; a second, less conservative threshold (p < 9.69 × 10^−5) was based on the correction for the 516 SNPs only.
    Results
    19 SNPs located in nine independent loci revealed p-values < 6.92 × 10^−6; the less strict criterion was met by 41 SNPs in 24 independent loci. BMI and schizophrenia showed the most pronounced genetic overlap with human metabolism with three loci each meeting the strict significance threshold. Overall, genetic variation associated with estimated glomerular filtration rate showed up frequently; single metabolite SNPs were associated with more than one phenotype. Replications in independent samples were obtained for BMI and educational attainment.
    Conclusions
    Approximately 5–10% of the regions involved in the regulation of blood/urine metabolite levels seem to also play a role in BMI and mental traits/disorders and related phenotypes. If validated in metabolomic studies of the respective phenotypes, the associated blood/urine metabolites may enable novel preventive and therapeutic strategies.
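
    The two significance thresholds quoted in the Methods above follow from simple Bonferroni arithmetic; assuming a family-wise alpha of 0.05 (an assumption, but one that reproduces the quoted values), the calculation is:

        # Bonferroni-style thresholds (assumption: family-wise alpha = 0.05).
        alpha = 0.05
        n_snps, n_phenotypes = 516, 14

        strict = alpha / (n_snps * n_phenotypes)    # corrects for all SNP x phenotype tests
        lenient = alpha / n_snps                    # corrects for the 516 SNPs only

        print(f"strict threshold:  {strict:.2e}")   # 6.92e-06
        print(f"lenient threshold: {lenient:.2e}")  # 9.69e-05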
  • Hendricks, A. E., Bochukova, E. G., Marenne, G., Keogh, J. M., Atanassova, N., Bounds, R., Wheeler, E., Mistry, V., Henning, E., Körner, A., Muddyman, D., McCarthy, S., Hinney, A., Hebebrand, J., Scott, R. A., Langenberg, C., Wareham, N. J., Surendran, P., Howson, J. M., Butterworth, A. S., Danesh, J., Børge G, N., Nielse, S. F., Afzal, S., Papadia, S., Ashford, S., Garg, S., Palomino, R. I., Kwasniewska, A., Tachmazidou, I., O’Rahilly, S., Zeggini, E., Barroso, I., & Farooqi, I. S. (2017). Rare Variant Analysis of Human and Rodent Obesity Genes in Individuals with Severe Childhood Obesity. Scientific Reports, 7: 4394. doi:10.1038/s41598-017-03054-8.

    Abstract

    Obesity is a genetically heterogeneous disorder. Using targeted and whole-exome sequencing, we studied 32 human and 87 rodent obesity genes in 2,548 severely obese children and 1,117 controls. We identified 52 variants contributing to obesity in 2% of cases including multiple novel variants in GNAS, which were sometimes found with accelerated growth rather than short stature as described previously. Nominally significant associations were found for rare functional variants in BBS1, BBS9, GNAS, MKKS, CLOCK and ANGPTL6. The p.S284X variant in ANGPTL6 drives the association signal (rs201622589, MAF~0.1%, odds ratio = 10.13, p-value = 0.042) and results in complete loss of secretion in cells. Further analysis including additional case-control studies and population controls (N = 260,642) did not support association of this variant with obesity (odds ratio = 2.34, p-value = 2.59 × 10^−3), highlighting the challenges of testing rare variant associations and the need for very large sample sizes. Further validation in cohorts with severe obesity and engineering the variants in model organisms will be needed to explore whether human variants in ANGPTL6 and other genes that lead to obesity when deleted in mice do contribute to obesity. Such studies may yield druggable targets for weight loss therapies.
  • Hersh, T. A., Dimond, A. L., Ruth, B. A., Lupica, N. V., Bruce, J. C., Kelley, J. M., King, B. L., & Lutton, B. V. (2018). A role for the CXCR4-CXCL12 axis in the little skate, Leucoraja erinacea. American Journal of Physiology-Regulatory, Integrative and Comparative Physiology, 315, R218-R229. doi:10.1152/ajpregu.00322.2017.

    Abstract

    The interaction between C-X-C chemokine receptor type 4 (CXCR4) and its cognate ligand C-X-C motif chemokine ligand 12 (CXCL12) plays a critical role in regulating hematopoietic stem cell activation and subsequent cellular mobilization. Extensive studies of these genes have been conducted in mammals, but much less is known about the expression and function of CXCR4 and CXCL12 in non-mammalian vertebrates. In the present study, we identify simultaneous expression of CXCR4 and CXCL12 orthologs in the epigonal organ (the primary hematopoietic tissue) of the little skate, Leucoraja erinacea. Genetic and phylogenetic analyses were functionally supported by significant mobilization of leukocytes following administration of Plerixafor, a CXCR4 antagonist and clinically important drug. Our results provide evidence that, as in humans, Plerixafor disrupts CXCR4/CXCL12 binding in the little skate, facilitating release of leukocytes into the bloodstream. Our study illustrates the value of the little skate as a model organism, particularly in studies of hematopoiesis and potentially for preclinical research on hematological and vascular disorders.

  • Hersh, T., King, B., & Lutton, B. V. (2014). Novel bioinformatics tools for analysis of gene expression in the skate, Leucoraja erinacea. The Bulletin, MDI Biological Laboratory, 53, 16-18.
  • Hervais-Adelman, A., Egorova, N., & Golestani, N. (2018). Beyond bilingualism: Multilingual experience correlates with caudate volume. Brain Structure and Function, 223(7), 3495-3502. doi:10.1007/s00429-018-1695-0.

    Abstract

    The multilingual brain implements mechanisms that serve to select the appropriate language as a function of the communicative environment. Engaging these mechanisms on a regular basis appears to have consequences for brain structure and function. Studies have implicated the caudate nuclei as important nodes in polyglot language control processes, and have also shown structural differences in the caudate nuclei in bilingual compared to monolingual populations. However, the majority of published work has focused on the categorical differences between monolingual and bilingual individuals, and little is known about whether these findings extend to multilingual individuals, who have even greater language control demands. In the present paper, we present an analysis of the volume and morphology of the caudate nuclei, putamen, pallidum and thalami in 75 multilingual individuals who speak three or more languages. Volumetric analyses revealed a significant relationship between multilingual experience and right caudate volume, as well as a marginally significant relationship with left caudate volume. Vertex-wise analyses revealed a significant enlargement of dorsal and anterior portions of the left caudate nucleus, known to have connectivity with executive brain regions, as a function of multilingual expertise. These results suggest that multilingual expertise might exercise a continuous impact on brain structure, and that as additional languages beyond a second are acquired, the additional demands for linguistic and cognitive control result in modifications to brain structures associated with language management processes.
  • Hervais-Adelman, A., Pefkou, M., & Golestani, N. (2014). Bilingual speech-in-noise: Neural bases of semantic context use in the native language. Brain and Language, 132, 1-6. doi:10.1016/j.bandl.2014.01.009.

    Abstract

    Bilingual listeners comprehend speech-in-noise better in their native than non-native language. This native-language benefit is thought to arise from greater use of top-down linguistic information to assist degraded speech comprehension. Using functional magnetic resonance imaging, we recently showed that left angular gyrus activation is modulated when semantic context is used to assist native language speech-in-noise comprehension (Golestani, Hervais-Adelman, Obleser, & Scott, 2013). Here, we extend the previous work, by reanalyzing the previous data alongside the results obtained in the non-native language of the same late bilingual participants. We found a behavioral benefit of semantic context in processing speech-in-noise in the native language only, and the imaging results also revealed a native language context effect in the left angular gyrus. We also find a complementary role of lower-level auditory regions during stimulus-driven processing. Our findings help to elucidate the neural basis of the established native language behavioral benefit of speech-in-noise processing.
  • Hervais-Adelman, A., Moser-Mercer, B., Murray, M. M., & Golestani, N. (2017). Cortical thickness increases after simultaneous interpretation training. Neuropsychologia, 98, 212-219. doi:10.1016/j.neuropsychologia.2017.01.008.

    Abstract

    Simultaneous interpretation is a complex cognitive task that not only demands multilingual language processing, but also requires application of extreme levels of domain-general cognitive control. We used MRI to longitudinally measure cortical thickness in simultaneous interpretation trainees before and after a Master's program in conference interpreting. We compared them to multilingual control participants scanned at the same interval of time. Increases in cortical thickness were specific to trainee interpreters. Increases were observed in regions involved in lower-level, phonetic processing (left posterior superior temporal gyrus, anterior supramarginal gyrus and planum temporale), in the higher-level formulation of propositional speech (right angular gyrus) and in the conversion of items from working memory into a sequence (right dorsal premotor cortex), and finally, in domain-general executive control and attention (right parietal lobule). Findings are consistent with the linguistic requirements of simultaneous interpretation and also with the more general cognitive demands on attentional control for expert performance in simultaneous interpreting. Our findings may also reflect beneficial, potentially protective effects of simultaneous interpretation training, which has previously been shown to confer enhanced skills in certain executive and attentional domains over and above those conferred by bilingualism.
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2018). Commentary: Broca pars triangularis constitutes a “hub” of the language-control network during simultaneous language translation. Frontiers in Human Neuroscience, 12: 22. doi:10.3389/fnhum.2018.00022.

    Abstract

    A commentary on
    Broca Pars Triangularis Constitutes a “Hub” of the Language-Control Network during Simultaneous Language Translation

    by Elmer, S. (2016). Front. Hum. Neurosci. 10:491. doi: 10.3389/fnhum.2016.00491

    Elmer (2016) conducted an fMRI investigation of “simultaneous language translation” in five participants. The article presents group and individual analyses of German-to-Italian and Italian-to-German translation, confined to a small set of anatomical regions previously reported to be involved in multilingual control. Here we take the opportunity to discuss concerns regarding certain aspects of the study.
  • Hessels, R. S., Hooge, I., Snijders, T. M., & Kemner, C. (2014). Is there a limit to the superiority of individuals with ASD in visual search? Journal of Autism and Developmental Disorders, 44, 443-451. doi:10.1007/s10803-013-1886-8.

    Abstract

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an explanation, formulated in terms of load theory. We suggest that there is a limit to the superiority in visual search for individuals with ASD, related to the perceptual load of the stimuli. When perceptual load becomes so high that no additional task-(ir)relevant information can be processed, performance will be based on single stimulus identification, in which no differences between individuals with ASD and controls have been demonstrated.
  • Heyne, H. O., Singh, T., Stamberger, H., Jamra, R. A., Caglayan, H., Craiu, D., Guerrini, R., Helbig, K. L., Koeleman, B. P. C., Kosmicki, J. A., Linnankivi, T., May, P., Muhle, H., Møller, R. S., Neubauer, B. A., Palotie, A., Pendziwiat, M., Striano, P., Tang, S., Wu, S., EuroEPINOMICS RES Consortium, De Kovel, C. G. F., Poduri, A., Weber, Y. G., Weckhuysen, S., Sisodiya, S. M., Daly, M. J., Helbig, I., Lal, D., & Lemke, J. R. (2018). De novo variants in neurodevelopmental disorders with epilepsy. Nature Genetics, 50, 1048-1053. doi:10.1038/s41588-018-0143-7.

    Abstract

    Epilepsy is a frequent feature of neurodevelopmental disorders (NDDs), but little is known about genetic differences between NDDs with and without epilepsy. We analyzed de novo variants (DNVs) in 6,753 parent–offspring trios ascertained to have different NDDs. In the subset of 1,942 individuals with NDDs with epilepsy, we identified 33 genes with a significant excess of DNVs, of which SNAP25 and GABRB2 had previously only limited evidence of disease association. Joint analysis of all individuals with NDDs also implicated CACNA1E as a novel disease-associated gene. Comparing NDDs with and without epilepsy, we found missense DNVs, DNVs in specific genes, age of recruitment, and severity of intellectual disability to be associated with epilepsy. We further demonstrate the extent to which our results affect current genetic testing as well as treatment, emphasizing the benefit of accurate genetic diagnosis in NDDs with epilepsy.
  • Heyselaar, E., Mazaheri, A., Hagoort, P., & Segaert, K. (2018). Changes in alpha activity reveal that social opinion modulates attention allocation during face processing. NeuroImage, 174, 432-440. doi:10.1016/j.neuroimage.2018.03.034.

    Abstract

    Participants’ performance differs when conducting a task in the presence of a secondary individual; moreover, the opinion the participant has of this individual also plays a role. Using EEG, we investigated how previous interactions with, and evaluations of, an avatar in virtual reality subsequently influenced attentional allocation to the face of that avatar. We focused on changes in the alpha activity as an index of attentional allocation. We found that the onset of the face of an avatar with whom the participant had developed a rapport induced greater alpha suppression. This suggests that greater attentional resources are allocated to interacted-with avatars. The evaluative ratings of the avatar induced a U-shaped change in alpha suppression, such that participants paid most attention when the avatar was rated as average. These results suggest that attentional allocation is an important element of how behaviour is altered in the presence of a secondary individual and is modulated by our opinion of that individual.

    Additional information

    mmc1.docx
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2017). How social opinion influences syntactic processing – An investigation using virtual reality. PLoS One, 12(4): e0174405. doi:10.1371/journal.pone.0174405.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2017). In dialogue with an avatar, language behavior is identical to dialogue with a human partner. Behavior Research Methods, 49(1), 46-60. doi:10.3758/s13428-015-0688-7.

    Abstract

    The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioral research, as its flexibility allows for a wide range of applications. This method has not been as widely accepted in the field of psycholinguistics, however, possibly due to the assumption that language processing during human-computer interactions does not accurately reflect human-human interactions. At the same time, there is a growing need to study human-human language interactions in a tightly controlled context, which has not been possible using existing methods. VR offers experimental control over parameters that cannot be (as finely) controlled in the real world. In this study we therefore aimed to show that human-computer language interaction in virtual reality is comparable to human-human language interaction. We compared participants’ language behavior in a syntactic priming task with human versus computer partners: a human partner, a human-like avatar with human-like facial expressions and verbal behavior, and a computer-like avatar with this humanness removed. As predicted, we found comparable priming effects for the human and the human-like avatar, suggesting that participants attributed human-like agency to the human-like avatar. When interacting with the computer-like avatar, by contrast, the priming effect was significantly decreased. This suggests that when interacting with a human-like avatar, sentence processing is comparable to interacting with a human partner. Our study therefore shows that VR is a valid platform for conducting language research and studying dialogue interactions in an ecologically valid manner.
  • Heyselaar, E., Segaert, K., Walvoort, S. J., Kessels, R. P., & Hagoort, P. (2017). The role of nondeclarative memory in the skill for language: Evidence from syntactic priming in patients with amnesia. Neuropsychologia, 101, 97-105. doi:10.1016/j.neuropsychologia.2017.04.033.

    Abstract

    Syntactic priming, the phenomenon in which participants adopt the linguistic behaviour of their partner, is widely used in psycholinguistics to investigate syntactic operations. Although the phenomenon of syntactic priming is well documented, the memory system that supports the retention of this syntactic information long enough to influence future utterances is not as widely investigated. We aim to shed light on this issue by assessing patients with Korsakoff's amnesia on an active-passive syntactic priming task and comparing their performance to controls matched in age, education, and premorbid intelligence. Patients with Korsakoff's syndrome display deficits in all subdomains of declarative memory, yet their nondeclarative memory remains intact, making them an ideal patient group for determining which memory system supports syntactic priming. In line with the hypothesis that syntactic priming relies on nondeclarative memory, the patient group showed strong priming tendencies (12.6% passive structure repetition). Our healthy control group did not show a priming tendency, presumably due to cognitive interference between declarative and nondeclarative memory. We discuss the results in relation to amnesia, aging, and compensatory mechanisms.
  • Hibar, D. P., Adams, H. H. H., Jahanshad, N., Chauhan, G., Stein, J. L., Hofer, E., Rentería, M. E., Bis, J. C., Arias-Vasquez, A., Ikram, M. K., Desrivieres, S., Vernooij, M. W., Abramovic, L., Alhusaini, S., Amin, N., Andersson, M., Arfanakis, K., Aribisala, B. S., Armstrong, N. J., Athanasiu, L., Axelsson, T., Beecham, A. H., Beiser, A., Bernard, M., Blanton, S. H., Bohlken, M. M., Boks, M. P., Bralten, J., Brickman, A. M., Carmichael, O., Chakravarty, M. M., Chen, Q., Ching, C. R. K., Chouraki, V., Cuellar-Partida, G., Crivello, F., den Brabander, A., Doan, N. T., Ehrlich, S., Giddaluru, S., Goldman, A. L., Gottesman, R. F., Grimm, O., Griswold, M. E., Guadalupe, T., Gutman, B. A., Hass, J., Haukvik, U. K., Hoehn, D., Holmes, A. J., Hoogman, M., Janowitz, D., Jia, T., Jørgensen, K. N., Mirza-Schreiber, N., Kasperaviciute, D., Kim, S., Klein, M., Krämer, B., Lee, P. H., Liewald, D. C. M., Lopez, L. M., Luciano, M., Macare, C., Marquand, A. F., Matarin, M., Mather, K. A., Mattheisen, M., McKay, D. R., Milaneschi, Y., Maniega, S. M., Nho, K., Nugent, A. C., Nyquist, P., Olde Loohuis, L. M., Oosterlaan, J., Papmeyer, M., Pirpamer, L., Pütz, B., Ramasamy, A., Richards, J. S., Risacher, S., Roiz-Santiañez, R., Rommelse, N., Ropele, S., Rose, E., Royle, N. A., Rundek, T., Sämann, P. G., Saremi, A., Satizabal, C. L., Schmaal, L., Schork, A. J., Shen, L., Shin, J., Shumskaya, E., Smith, A. V., Sprooten, E., Strike, L. T., Teumer, A., Tordesillas-Gutierrez, D., Toro, R., Trabzuni, D., Trompet, S., Vaidya, D., Van der Grond, J., Van der Lee, S. J., Van der Meer, D., Van Donkelaar, M. M. J., Van Eijk, K. R., van Erp, T. G. M., Van Rooij, D., Walton, E., Westlye, L. T., Whelan, C. D., Windham, B. G., Winkler, A. M., Wittfeld, K. M., Woldehawariat, G., Wolf, C., Wolfers, T., Yanek, L. R., Yang, J., Zijdenbos, A., Zwiers, M. P., Agartz, I., Almasy, L., Ames, D., Amouyel, P., Andreassen, O. A., Arepalli, S., Assareh, A. A., Barral, S., Bastin, M. E., Becker, D. M., Becker, J. T., Bennett, D. A., Blangero, J., Van Bokhoven, H., Boomsma, D. I., Brodaty, H., Brouwer, R. M., Brunner, H. G., Buckner, R. L., Buitelaar, J. K., Bulayeva, K. B., Cahn, W., Calhoun, V. D., Cannon, D. M., Cavalleri, G. L., Cheng, C.-Y., Cichon, S., Cookson, M. R., Corvin, A., Crespo-Facorro, B., Curran, J. E., Czisch, M., Dale, A. M., Davies, G. E., De Craen, A. J. M., De Geus, E. J. C., De Jager, P. L., De Zubicaray, G. i., Deary, I. J., Debette, S., DeCarli, C., Delanty, N., Depondt, C., DeStefano, A., Dillman, A., Djurovic, S., Donohoe, G., Drevets, W. C., Duggirala, R., Dyer, T. D., Enzinger, C., Erk, S., Espeseth, T., Fedko, I. O., Fernández, G., Ferrucci, L., Fisher, S. E., Fleischman, D. A., Ford, I., Fornage, M., Foroud, T. M., Fox, P. T., Francks, C., Fukunaga, M., Gibbs, J. R., Glahn, D. C., Gollub, R. L., Göring, H. H. H., Green, R. C., Gruber, O., Gudnason, V., Guelfi, S., Haberg, A. K., Hansell, N. K., Hardy, J., Hartman, C. A., Hashimoto, R., Hegenscheid, K., Heinz, A., Le Hellard, S., Hernandez, D. G., Heslenfeld, D. J., Ho, B.-C., Hoekstra, P. J., Hoffmann, W., Hofman, A., Holsboer, F., Homuth, G., Hosten, N., Hottenga, J.-J., Huentelman, M., Pol, H. E. H., Ikeda, M., Jack Jr., C.
R., Jenkinson, M., Johnson, R., Jonsson, E. G., Jukema, J. W., Kahn, R. S., Kanai, R., Kloszewska, I., Knopman, D. S., Kochunov, P., Kwok, J. B., Lawrie, S. M., Lemaître, H., Liu, X., Longo, D. L., Lopez, O. L., Lovestone, S., Martinez, O., Martinot, J.-L., Mattay, V. S., McDonald, C., Mcintosh, A. M., McMahon, F., McMahon, K. L., Mecocci, P., Melle, I., Meyer-Lindenberg, A., Mohnke, S., Montgomery, G. W., Morris, D. W., Mosley, T. H., Mühleisen, T. W., Müller-Myhsok, B., Nalls, M. A., Nauck, M., Nichols, T. E., Niessen, W. J., Nöthen, M. M., Nyberg, L., Ohi, K., Olvera, R. L., Ophoff, R. A., Pandolfo, M., Paus, T., Pausova, Z., Penninx, B. W. J. H., Pike, G. B., Potkin, S. G., Psaty, B. M., Reppermund, S., Rietschel, M., Roffman, J. L., Romanczuk-Seiferth, N., Rotter, J. I., Ryten, M., Sacco, R. L., Sachdev, P. S., Saykin, A. J., Schmidt, R., Schmidt, H., Schofield, P. R., Sigursson, S., Simmons, A., Singleton, A., Sisodiya, S. M., Smith, C., Smoller, J. W., Soininen, H., Steen, V. M., Stott, D. J., Sussmann, J. E., Thalamuthu, A., Toga, A. W., Traynor, B. J., Troncoso, J., Tsolaki, M., Tzourio, C., Uitterlinden, A. G., Hernández, M. C. V., Van der Brug, M., Van der Lugt, A., Van der Wee, N. J. A., Van Haren, N. E. M., Van Tol, M.-J., Vardarajan, B. N., Vellas, B., Veltman, D. J., Völzke, H., Walter, H., Wardlaw, J. M., Wassink, T. H., Weale, M. e., Weinberger, D. R., Weiner, M., Wen, W., Westman, E., White, T., Wong, T. Y., Wright, C. B., Zielke, R. H., Zonderman, A. B., Martin, N. G., Van Duijn, C. M., Wright, M. J., Longstreth, W. W. T., Schumann, G., Grabe, H. J., Franke, B., Launer, L. J., Medland, S. E., Seshadri, S., Thompson, P. M., & Ikram, A. (2017). Novel genetic loci associated with hippocampal volume. Nature Communications, 8: 13624. doi:10.1038/ncomms13624.

    Abstract

    The hippocampal formation is a brain structure integrally involved in episodic memory, spatial navigation, cognition and stress responsiveness. Structural abnormalities in hippocampal volume and shape are found in several common neuropsychiatric disorders. To identify the genetic underpinnings of hippocampal structure, here we perform a genome-wide association study (GWAS) of 33,536 individuals and discover six independent loci significantly associated with hippocampal volume, four of them novel. Of the novel loci, three lie within genes (ASTN2, DPP4 and MAST4) and one is found 200 kb upstream of SHH. A hippocampal subfield analysis shows that a locus within the MSRB3 gene shows evidence of a localized effect along the dentate gyrus, subiculum, CA1 and fissure. Further, we show that genetic variants associated with decreased hippocampal volume are also associated with increased risk for Alzheimer’s disease (rg=−0.155). Our findings suggest novel biological pathways through which human genetic variation influences hippocampal volume and risk for neuropsychiatric illness.

    Additional information

    ncomms13624-s1.pdf ncomms13624-s2.xlsx
  • Hilverman, C., Clough, S., Duff, M. C., & Cook, S. W. (2018). Patients with hippocampal amnesia successfully integrate gesture and speech. Neuropsychologia, 117, 332-338. doi:10.1016/j.neuropsychologia.2018.06.012.

    Abstract

    During conversation, people integrate information from co-speech hand gestures with information in spoken language. For example, after hearing the sentence, "A piece of the log flew up and hit Carl in the face" while viewing a gesture directed at the nose, people tend to later report that the log hit Carl in the nose (information only in gesture) rather than in the face (information in speech). The cognitive and neural mechanisms that support the integration of gesture with speech are unclear. One possibility is that the hippocampus, known for its role in relational memory and information integration, is necessary for integrating gesture and speech. To test this possibility, we examined how patients with hippocampal amnesia and healthy and brain-damaged comparison participants express information from gesture in a narrative retelling task. Participants watched videos of an experimenter telling narratives that included hand gestures containing supplementary information. Participants were asked to retell the narratives, and their spoken retellings were assessed for the presence of information from gesture. For features that had been accompanied by supplementary gesture, patients with amnesia retold fewer of these features overall and produced fewer retellings that matched the speech from the narrative. Yet their retellings included features containing information that had been present uniquely in gesture, in amounts that were not reliably different from those of the comparison groups. Thus, a functioning hippocampus is not necessary for gesture-speech integration over short timescales. Providing unique information in gesture may enhance communication for individuals with declarative memory impairment, possibly via non-declarative memory mechanisms.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2017). Predictors of verb-mediated anticipatory eye movements in the visual world. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(9), 1352-1374. doi:10.1037/xlm0000388.

    Abstract

    Many studies have demonstrated that listeners use information extracted from verbs to guide anticipatory eye movements to objects in the visual context that satisfy the selection restrictions of the verb. An important question is what underlies such verb-mediated anticipatory eye gaze. Based on empirical and theoretical suggestions, we investigated the influence of five potential predictors of this behavior: functional associations and general associations between verb and target object, as well as the listeners’ production fluency, receptive vocabulary knowledge, and non-verbal intelligence. In three eye-tracking experiments, participants looked at sets of four objects and listened to sentences where the final word was predictable or not predictable (e.g., “The man peels/draws an apple”). On predictable trials, only the target object, but not the distractors, was functionally and associatively related to the verb. In Experiments 1 and 2, objects were presented before the verb was heard. In Experiment 3, participants were given a short preview of the display after the verb was heard. Functional associations and receptive vocabulary were found to be important predictors of verb-mediated anticipatory eye gaze, independent of the amount of contextual visual input. General word associations did not predict anticipatory eye movements, and non-verbal intelligence was only a very weak predictor. Participants’ production fluency correlated positively with the likelihood of anticipatory eye movements when participants were given the long but not the short visual display preview. These findings fit best with a pluralistic approach to predictive language processing in which multiple mechanisms, mediating factors, and situational context dynamically interact.
  • Hirschmann, J., Schoffelen, J.-M., Schnitzler, A., & Van Gerven, M. A. J. (2017). Parkinsonian rest tremor can be detected accurately based on neuronal oscillations recorded from the subthalamic nucleus. Clinical Neurophysiology, 128, 2029-2036. doi:10.1016/j.clinph.2017.07.419.

    Abstract

    Objective: To investigate the possibility of tremor detection based on deep brain activity.
    Methods: We re-analyzed recordings of local field potentials (LFPs) from the subthalamic nucleus in 10 PD patients (12 body sides) with spontaneously fluctuating rest tremor. Power in several frequency bands was estimated and used as input to Hidden Markov Models (HMMs) which classified short data segments as either tremor-free rest or rest tremor. HMMs were compared to direct threshold application to individual power features.
    Results: Applying a threshold directly to band-limited power was insufficient for tremor detection (mean area under the curve [AUC] of receiver operating characteristic: 0.64, STD: 0.19). Multi-feature HMMs, in contrast, allowed for accurate detection (mean AUC: 0.82, STD: 0.15), using four power features obtained from a single contact pair. Within-patient training yielded better accuracy than across-patient training (0.84 vs. 0.78, p = 0.03), yet tremor could often be detected accurately with either approach. High frequency oscillations (>200 Hz) were the best performing individual feature.
    Conclusions: LFP-based markers of tremor are robust enough to allow for accurate tremor detection in short data segments, provided that appropriate statistical models are used.
    Significance: LFP-based markers of tremor could be useful control signals for closed-loop deep brain stimulation.
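
    The detection pipeline summarised in the Methods (band-limited LFP power as features, HMM-based classification of short segments, evaluation via ROC/AUC) can be illustrated with a small sketch. This is not the authors' code: it assumes the hmmlearn and scikit-learn packages, hypothetical inputs lfp (a single-channel recording) and tremor_labels (one label per segment), an assumed sampling rate and band set, and a simplified per-segment likelihood-ratio decision between a tremor HMM and a rest HMM rather than full sequence decoding.

```python
import numpy as np
from scipy.signal import welch
from hmmlearn.hmm import GaussianHMM
from sklearn.metrics import roc_auc_score

FS = 2000  # assumed sampling rate in Hz
BANDS = [(4, 8), (8, 13), (13, 30), (200, 350)]  # example frequency bands

def band_power_features(segment):
    """Log power in a few frequency bands for one LFP segment."""
    freqs, psd = welch(segment, fs=FS, nperseg=min(len(segment), 1024))
    return np.log([psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS])

def segment_signal(lfp, seg_len):
    """Cut a 1-D LFP trace into consecutive non-overlapping segments."""
    n = len(lfp) // seg_len
    return [lfp[i * seg_len:(i + 1) * seg_len] for i in range(n)]

def tremor_detection_auc(lfp, tremor_labels, seg_len=2 * FS):
    """AUC of an HMM likelihood-ratio tremor detector on labelled segments."""
    X = np.array([band_power_features(s) for s in segment_signal(lfp, seg_len)])
    y = np.asarray(tremor_labels[:len(X)])

    # Fit one HMM per class on the labelled feature vectors
    hmm_tremor = GaussianHMM(n_components=2, covariance_type="diag").fit(X[y == 1])
    hmm_rest = GaussianHMM(n_components=2, covariance_type="diag").fit(X[y == 0])

    # Score each segment by the log-likelihood ratio of the two models
    scores = np.array([hmm_tremor.score(x[None, :]) - hmm_rest.score(x[None, :])
                       for x in X])
    return roc_auc_score(y, scores)
```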
  • Hoedemaker, R. S., & Gordon, P. C. (2017). The onset and time course of semantic priming during rapid recognition of visual words. Journal of Experimental Psychology: Human Perception and Performance, 43(5), 881-902. doi:10.1037/xhp0000377.

    Abstract

    In 2 experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (ocular lexical decision task), participants performed a lexical decision task using eye movement responses on a sequence of 4 words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a metalinguistic judgment. For both tasks, survival analyses showed that the earliest observable effect (divergence point [DP]) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective, rather than a prospective, priming mechanism and are consistent with compound-cue models of semantic priming.
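
    The divergence point (DP) estimate mentioned above comes from survival analyses of reading-time distributions: the DP is the earliest point at which the survival curves for primed and unprimed targets separate. Below is a minimal sketch of that idea; the fixed threshold criterion and the simulated reading times are placeholders (published DP analyses typically use a bootstrap-based criterion rather than a fixed cutoff).

```python
import numpy as np

def survival_curve(rts, timeline):
    """Proportion of reading times still 'surviving' (i.e., > t) at each t."""
    rts = np.asarray(rts)
    return np.array([(rts > t).mean() for t in timeline])

def divergence_point(rt_related, rt_unrelated, threshold=0.015,
                     timeline=np.arange(0, 800, 1)):
    """Earliest time point at which the unrelated survival curve exceeds the
    related curve by more than `threshold` (arbitrary criterion here)."""
    diff = survival_curve(rt_unrelated, timeline) - survival_curve(rt_related, timeline)
    idx = np.argmax(diff > threshold)          # first index where the gap opens
    return timeline[idx] if diff[idx] > threshold else None

# Toy data: related primes yield slightly faster reading times
rng = np.random.default_rng(1)
related = rng.gamma(8, 30, 2000) + 120
unrelated = rng.gamma(8, 30, 2000) + 150
print(divergence_point(related, unrelated))
```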
  • Hoedemaker, R. S., & Gordon, P. C. (2014). Embodied language comprehension: Encoding-based and goal-driven processes. Journal of Experimental Psychology: General, 143(2), 914-929. doi:10.1037/a0032348.

    Abstract

    Theories of embodied language comprehension have proposed that language is understood through perceptual simulation of the sensorimotor characteristics of its meaning. Strong support for this claim requires demonstration of encoding-based activation of sensorimotor representations that is distinct from task-related or goal-driven processes. Participants in 3 eye-tracking experiments were presented with triplets of either numbers or object and animal names. In Experiment 1, participants indicated whether the size of the referent of the middle object or animal name was in between the size of the 2 outer items. In Experiment 2, the object and animal names were encoded for an immediate recognition memory task. In Experiment 3, participants completed the same comparison task of Experiment 1 for both words and numbers. During the comparison tasks, word and number decision times showed a symbolic distance effect, such that response time was inversely related to the size difference between the items. A symbolic distance effect was also observed for animal and object encoding times in cases where encoding time likely reflected some goal-driven processes as well. When semantic size was irrelevant to the task (Experiment 2), it had no effect on word encoding times. Number encoding times showed a numerical distance priming effect: Encoding time increased with numerical difference between items. Together these results suggest that while activation of numerical magnitude representations is encoding-based as well as goal-driven, activation of size information associated with words is goal-driven and does not occur automatically during encoding. This conclusion challenges strong theories of embodied cognition, which claim that language comprehension consists of activation of analog sensorimotor representations irrespective of higher-level processes related to context or task-specific goals.
  • Hoedemaker, R. S., Ernst, J., Meyer, A. S., & Belke, E. (2017). Language production in a shared task: Cumulative semantic interference from self- and other-produced context words. Acta Psychologica, 172, 55-63. doi:10.1016/j.actpsy.2016.11.007.

    Abstract

    This study assessed the effects of semantic context in the form of self-produced and other-produced words on subsequent language production. Pairs of participants performed a joint picture naming task, taking turns while naming a continuous series of pictures. In the single-speaker version of this paradigm, naming latencies have been found to increase for successive presentations of exemplars from the same category, a phenomenon known as Cumulative Semantic Interference (CSI). As expected, the joint-naming task showed a within-speaker CSI effect, such that naming latencies increased as a function of the number of category exemplars named previously by the participant (self-produced items). Crucially, we also observed an across-speaker CSI effect, such that naming latencies slowed as a function of the number of category members named by the participant's task partner (other-produced items). The magnitude of the across-speaker CSI effect did not vary as a function of whether or not the listening participant could see the pictures their partner was naming. The observation of across-speaker CSI suggests that the effect originates at the conceptual level of the language system, as proposed by Belke's (2013) Conceptual Accumulation account. Whereas self-produced and other-produced words both resulted in a CSI effect on naming latencies, post-experiment free recall rates were higher for self-produced than other-produced items. Together, these results suggest that both speaking and listening result in implicit learning at the conceptual level of the language system but that these effects are independent of explicit learning as indicated by item recall.
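
    Analytically, the within- and across-speaker CSI effects described above come down to regressing naming latency on how many same-category exemplars the participant and their partner have already named. The sketch below builds those two cumulative predictors from an ordered trial list; the column names and data layout are hypothetical, not the authors' analysis code.

```python
from collections import defaultdict

def add_csi_predictors(trials):
    """Add cumulative counts of previously named same-category exemplars.

    `trials` is an ordered list of dicts with (hypothetical) keys
    'category', 'speaker' ('self' or 'partner'), and 'rt'.
    """
    self_count = defaultdict(int)
    other_count = defaultdict(int)
    for trial in trials:
        cat = trial["category"]
        trial["n_self_prior"] = self_count[cat]    # within-speaker ordinal position
        trial["n_other_prior"] = other_count[cat]  # across-speaker ordinal position
        if trial["speaker"] == "self":
            self_count[cat] += 1
        else:
            other_count[cat] += 1
    return trials

# Toy example: the third fruit trial has one self-produced and one
# other-produced category member preceding it.
trials = [
    {"category": "fruit", "speaker": "self", "rt": 812},
    {"category": "fruit", "speaker": "partner", "rt": 794},
    {"category": "fruit", "speaker": "self", "rt": 866},
]
print(add_csi_predictors(trials))
```

    The resulting n_self_prior and n_other_prior columns could then enter a (mixed-effects) regression on naming latency.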
  • Hoedemaker, R. S., & Gordon, P. C. (2014). It takes time to prime: Semantic priming in the ocular lexical decision task. Journal of Experimental Psychology: Human Perception and Performance, 40(6), 2179-2197. doi:10.1037/a0037677.

    Abstract

    Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDTs) was replaced with an eye-movement response through a sequence of 3 words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative lexical decision (LD) on each word in the triplet. In Experiment 2, LD responses were delayed until all 3 letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, but limited during text reading, as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of tau (τ), meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases in which the LD is difficult, as indicated by longer response times. Compared with the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT.
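
    The ex-Gaussian analysis reported above decomposes an RT distribution into a Gaussian stage (mu, sigma) and an exponential tail (tau), so a priming effect "concentrated in tau" appears as a between-condition difference in the fitted tail parameter. A minimal sketch using scipy's exponnorm parameterisation (shape K = tau/sigma); the simulated data and parameter values are illustrative only, not taken from the study.

```python
import numpy as np
from scipy.stats import exponnorm

def fit_ex_gaussian(rts):
    """Fit an ex-Gaussian to reaction times and return (mu, sigma, tau)."""
    K, loc, scale = exponnorm.fit(rts)   # scipy shape parameter: K = tau / sigma
    mu, sigma, tau = loc, scale, K * scale
    return mu, sigma, tau

# Toy check: recover parameters from simulated ex-Gaussian RTs
rng = np.random.default_rng(0)
rts = rng.normal(450, 40, 5000) + rng.exponential(120, 5000)
print(fit_ex_gaussian(rts))   # roughly (450, 40, 120)
```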
  • Hoey, E. (2017). [Review of the book Temporality in Interaction]. Studies in Language, 41(1), 232-238. doi:10.1075/sl.41.1.08hoe.
  • Hoey, E. (2018). How speakers continue with talk after a lapse in conversation. Research on Language and Social Interaction, 51(3), 329-346. doi:10.1080/08351813.2018.1485234.

    Abstract

    How do conversational participants continue with turn-by-turn talk after a momentary lapse? If all participants forgo the option to speak at possible sequence completion, an extended silence may emerge that can indicate a lack of anything to talk about next. For the interaction to proceed recognizably as a conversation, the postlapse turn needs to implicate more talk. Using conversation analysis, I examine three practical alternatives regarding sequentially implicative postlapse turns: Participants may move to end the interaction, continue with some prior matter, or start something new. Participants are shown using resources grounded in the interaction’s overall structural organization, the materials from the interaction-so-far, the mentionables they bring to interaction, and the situated environment itself. Comparing these alternatives, there’s suggestive quantitative evidence for a preference for continuation. The analysis of lapse resolution shows lapses as places for the management of multiple possible courses of action. Data are in U.S. and UK English.
  • Hoey, E. (2017). Sequence recompletion: A practice for managing lapses in conversation. Journal of Pragmatics, 109, 47-63. doi:10.1016/j.pragma.2016.12.008.

    Abstract

    Conversational interaction occasionally lapses as topics become exhausted or as participants are left with no obvious thing to talk about next. In this article I look at episodes of ordinary conversation to examine how participants resolve issues of speakership and sequentiality in lapse environments. In particular, I examine one recurrent phenomenon—sequence recompletion—whereby participants bring to completion a sequence of talk that was already treated as complete. Using conversation analysis, I describe four methods for sequence recompletion: turn-exiting, action redoings, delayed replies, and post-sequence transitions. With this practice, participants use verbal and vocal resources to locally manage their participation framework when ending one course of action and potentially starting up a new one.
  • Hoey, E. (2014). Sighing in interaction: Somatic, semiotic, and social. Research on Language and Social Interaction, 47(2), 175-200. doi:10.1080/08351813.2014.900229.

    Abstract

    Participants in interaction routinely orient to gaze, bodily comportment, and nonlexical vocalizations as salient for developing an analysis of the unfolding course of action. In this article, I address the respiratory phenomenon of sighing, the aim being to describe sighing as a situated practice that contributes to the achievement of particular actions in interaction. I report on the various actions sighs implement or construct and how their positioning and delivery informs participants’ understandings of their significance for interaction. Data are in American English.
  • Hogan-Brown, A. L., Hoedemaker, R. S., Gordon, P. C., & Losh, M. (2014). Eye-voice span during rapid automatized naming: Evidence of reduced automaticity in individuals with autism spectrum disorder and their siblings. Journal of Neurodevelopmental Disorders, 6(1): 33. doi:10.1186/1866-1955-6-33.

    Abstract

    Background: Individuals with autism spectrum disorder (ASD) and their parents demonstrate impaired performance in rapid automatized naming (RAN), a task that recruits a variety of linguistic and executive processes. Though the basic processes that contribute to RAN differences remain unclear, eye-voice relationships, as measured through eye tracking, can provide insight into cognitive and perceptual processes contributing to RAN performance. For example, in RAN, eye-voice span (EVS), the distance ahead the eyes are when articulation of a target item's label begins, is an indirect measure of automaticity of the processes underlying RAN. The primary objective of this study was to investigate automaticity in naming processes, as indexed by EVS during RAN. The secondary objective was to characterize RAN difficulties in individuals with ASD and their siblings. Methods: Participants (aged 15 – 33 years) included 21 individuals with ASD, 23 siblings of individuals with ASD, and 24 control subjects, group-matched on chronological age. Naming time, frequency of errors, and EVS were measured during a RAN task and compared across groups. Results: A stepwise pattern of RAN performance was observed, with individuals with ASD demonstrating the slowest naming across all RAN conditions, controls demonstrating the fastest naming, and siblings demonstrating intermediate performance. Individuals with ASD exhibited smaller EVSs than controls on all RAN conditions, and siblings exhibited smaller EVSs during number naming (the most highly automatized type of naming). EVSs were correlated with naming times in controls only, and only in the more automatized conditions. Conclusions: These results suggest that reduced automaticity in the component processes of RAN may underpin differences in individuals with ASD and their siblings. These findings also provide further support that RAN abilities are impacted by genetic liability to ASD. This study has important implications for understanding the underlying skills contributing to language-related deficits in ASD.
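
    Eye-voice span (EVS), as defined above, is how far ahead of the currently articulated item the eyes are at the moment articulation of that item's label begins. Below is a minimal sketch of computing a per-item EVS in units of items, assuming hypothetical fixation and voice-onset records rather than the authors' pipeline.

```python
def eye_voice_span(fixations, voice_onsets):
    """Eye-voice span per named item, in number of items.

    fixations:    list of (onset_ms, offset_ms, item_index) for each fixation
    voice_onsets: list of (onset_ms, item_index), one per articulated item
    Returns, for each named item, how many items ahead the eyes were when
    articulation of that item's label began (None if no fixation covers it).
    """
    spans = []
    for speech_onset, named_item in voice_onsets:
        fixated_item = next(
            (item for onset, offset, item in fixations
             if onset <= speech_onset < offset),
            None,
        )
        spans.append(None if fixated_item is None else fixated_item - named_item)
    return spans

# Example: the eyes are on item 2 when item 0's label is spoken -> EVS of 2
fixations = [(0, 180, 0), (180, 340, 1), (340, 520, 2)]
voice_onsets = [(400, 0), (650, 1)]
print(eye_voice_span(fixations, voice_onsets))   # [2, None]
```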
  • Holler, J., Kendrick, K. H., & Levinson, S. C. (2018). Processing language in face-to-face conversation: Questions with gestures get faster responses. Psychonomic Bulletin & Review, 25(5), 1900-1908. doi:10.3758/s13423-017-1363-z.

    Abstract

    The home of human language use is face-to-face interaction, a context in which communicative exchanges are characterised not only by bodily signals accompanying what is being said but also by a pattern of alternating turns at talk. This transition between turns is astonishingly fast—typically a mere 200 ms elapses between a current and a next speaker’s contribution—meaning that comprehending, producing, and coordinating conversational contributions in time is a significant challenge. This begs the question of whether the additional information carried by bodily signals facilitates or hinders language processing in this time-pressured environment. We present analyses of multimodal conversations revealing that bodily signals appear to profoundly influence language processing in interaction: Questions accompanied by gestures lead to shorter turn transition times—that is, to faster responses—than questions without gestures, and responses come earlier when gestures end before compared to after the question turn has ended. These findings hold even after taking into account prosodic patterns and other visual signals, such as gaze. The empirical findings presented here provide a first glimpse of the role of the body in the psycholinguistic processes underpinning human communication.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Ozyurek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.
  • Hömke, P., Holler, J., & Levinson, S. C. (2017). Eye blinking as addressee feedback in face-to-face conversation. Research on Language and Social Interaction, 50, 54-70. doi:10.1080/08351813.2017.1262143.

    Abstract

    Does blinking function as a type of feedback in conversation? To address this question, we built a corpus of Dutch conversations, identified short and long addressee blinks during extended turns, and measured their occurrence relative to the end of turn constructional units (TCUs), the location where feedback typically occurs. Addressee blinks were indeed timed to the end of TCUs. Also, long blinks were more likely than short blinks to occur during mutual gaze, with nods or continuers, and their occurrence was restricted to sequential contexts in which signaling understanding was particularly relevant, suggesting a special signaling capacity of long blinks.
  • Hömke, P., Holler, J., & Levinson, S. C. (2018). Eye blinks are perceived as communicative signals in human face-to-face interaction. PLoS One, 13(12): e0208030. doi:10.1371/journal.pone.0208030.

    Abstract

    In face-to-face communication, recurring intervals of mutual gaze allow listeners to provide speakers with visual feedback (e.g. nodding). Here, we investigate the potential feedback function of one of the subtlest of human movements—eye blinking. While blinking tends to be subliminal, the significance of mutual gaze in human interaction raises the question whether the interruption of mutual gaze through blinking may also be communicative. To answer this question, we developed a novel, virtual reality-based experimental paradigm, which enabled us to selectively manipulate blinking in a virtual listener, creating small differences in blink duration resulting in ‘short’ (208 ms) and ‘long’ (607 ms) blinks. We found that speakers unconsciously took into account the subtle differences in listeners’ blink duration, producing substantially shorter answers in response to long listener blinks. Our findings suggest that, in addition to physiological, perceptual and cognitive functions, listener blinks are also perceived as communicative signals, directly influencing speakers’ communicative behavior in face-to-face communication. More generally, these findings may be interpreted as shedding new light on the evolutionary origins of mental-state signaling, which is a crucial ingredient for achieving mutual understanding in everyday social interaction.

    Additional information

    Supporting information
  • Hoogman, M., Guadalupe, T., Zwiers, M. P., Klarenbeek, P., Francks, C., & Fisher, S. E. (2014). Assessing the effects of common variation in the FOXP2 gene on human brain structure. Frontiers in Human Neuroscience, 8: 473. doi:10.3389/fnhum.2014.00473.

    Abstract

    The FOXP2 transcription factor is one of the most well-known genes to have been implicated in developmental speech and language disorders. Rare mutations disrupting the function of this gene have been described in different families and cases. In a large three-generation family carrying a missense mutation, neuroimaging studies revealed significant effects on brain structure and function, most notably in the inferior frontal gyrus, caudate nucleus and cerebellum. After the identification of rare disruptive FOXP2 variants impacting on brain structure, several reports proposed that common variants at this locus may also have detectable effects on the brain, extending beyond disorder into normal phenotypic variation. These neuroimaging genetics studies used groups of between 14 and 96 participants. The current study assessed effects of common FOXP2 variants on neuroanatomy using voxel-based morphometry and volumetric techniques in a sample of >1300 people from the general population. In a first, targeted stage, we analyzed single nucleotide polymorphisms (SNPs) claimed to have effects in prior smaller studies (rs2253478, rs12533005, rs2396753, rs6980093, rs7784315, rs17137124, rs10230558, rs7782412, rs1456031), beginning with regions proposed in the relevant papers, then assessing impact across the entire brain. In the second, gene-wide stage, we tested all common FOXP2 variation, focusing on volumetry of those regions most strongly implicated from analyses of rare disruptive mutations. Despite using a sample more than ten times the size of those used in prior studies of common FOXP2 variation, we found no evidence for effects of SNPs on variability in neuroanatomy in the general population. Thus, the impact of this gene on brain structure may be largely limited to extreme cases of rare disruptive alleles. Alternatively, effects of common variants at this gene exist but are too subtle to be detected with standard volumetric techniques.
