Publications

  • Friederici, A., & Levelt, W. J. M. (1987). Sprache. In K. Immelmann, K. Scherer, & C. Vogel (Eds.), Funkkolleg Psychobiologie (pp. 58-87). Weinheim: Beltz.
  • Friedrich, P., Thiebaut de Schotten, M., Forkel, S. J., Stacho, M., & Howells, H. (2020). An ancestral anatomical and spatial bias for visually guided behavior. PNAS, 117(5), 2251-2252. doi:10.1073/pnas.1918402117.

    Abstract

    Human behavioral asymmetries are commonly studied in the context of structural cortical and connectional asymmetries. Within this framework, Sreenivasan and Sridharan (1) provide intriguing evidence of a relationship between visual asymmetries and the lateralization of superior colliculi connections—a phylogenetically older mesencephalic structure. Specifically, response facilitation for cued locations (i.e., choice bias) in the contralateral hemifield was associated with differences in the connectivity of the superior colliculus. Given that the superior colliculus has a structural homolog—the optic tectum—which can be traced across all Vertebrata, these results may have meaningful evolutionary ramifications.
  • Friedrich, P., Forkel, S. J., & Thiebaut de Schotten, M. (2020). Mapping the principal gradient onto the corpus callosum. NeuroImage, 223: 117317. doi:10.1016/j.neuroimage.2020.117317.

    Abstract

    Gradients capture some of the variance of the resting-state functional magnetic resonance imaging (rsfMRI) signal. Amongst these, the principal gradient depicts a functional processing hierarchy that spans from sensory-motor cortices to regions of the default-mode network. While the cortex has been well characterised in terms of gradients, little is known about its underlying white matter. For instance, comprehensive mapping of the principal gradient on the largest white matter tract, the corpus callosum, is still missing. Here, we mapped the principal gradient onto the midsection of the corpus callosum using the 7T human connectome project dataset. We further explored how quantitative measures and variability in callosal midsection connectivity relate to the principal gradient values. In so doing, we demonstrated that the extreme values of the principal gradient are located within the callosal genu and the posterior body, and have lower connectivity variability but a larger spatial extent along the midsection of the corpus callosum than mid-range values. Our results shed light on the relationship between the brain's functional hierarchy and the corpus callosum. We further speculate about how these results may bridge the gap between functional hierarchy, brain asymmetries, and evolution.

  • Frost, R. (2014). Learning grammatical structures with and without sleep. PhD Thesis, Lancaster University, Lancaster.
  • Frost, R. L. A., Dunn, K., Christiansen, M. H., Gómez, R. L., & Monaghan, P. (2020). Exploring the "anchor word" effect in infants: Segmentation and categorisation of speech with and without high frequency words. PLoS One, 15(12): e0243436. doi:10.1371/journal.pone.0243436.

    Abstract

    High frequency words play a key role in language acquisition, with recent work suggesting they may serve both speech segmentation and lexical categorisation. However, it is not yet known whether infants can detect novel high frequency words in continuous speech, nor whether they can use them to help learning for segmentation and categorisation at the same time. For instance, when hearing “you eat the biscuit”, can children use the high-frequency words “you” and “the” to segment out “eat” and “biscuit”, and determine their respective lexical categories? We tested this in two experiments. In Experiment 1, we familiarised 12-month-old infants with continuous artificial speech comprising repetitions of target words, which were preceded by high-frequency marker words that distinguished the targets into two distributional categories. In Experiment 2, we repeated the task using the same language but with additional phonological cues to word and category structure. In both studies, we measured learning with head-turn preference tests of segmentation and categorisation, and compared performance against a control group that heard the artificial speech without the marker words (i.e., just the targets). There was no evidence that high frequency words helped either speech segmentation or grammatical categorisation. However, segmentation was seen to improve when the distributional information was supplemented with phonological cues (Experiment 2). In both experiments, exploratory analysis indicated that infants’ looking behaviour was related to their linguistic maturity (indexed by infants’ vocabulary scores) with infants with high versus low vocabulary scores displaying novelty and familiarity preferences, respectively. We propose that high-frequency words must reach a critical threshold of familiarity before they can be of significant benefit to learning.

  • Frost, R. L. A., Jessop, A., Durrant, S., Peter, M. S., Bidgood, A., Pine, J. M., Rowland, C. F., & Monaghan, P. (2020). Non-adjacent dependency learning in infancy, and its link to language development. Cognitive Psychology, 120: 101291. doi:10.1016/j.cogpsych.2020.101291.

    Abstract

    To acquire language, infants must learn how to identify words and linguistic structure in speech. Statistical learning has been suggested to assist both of these tasks. However, infants’ capacity to use statistics to discover words and structure together remains unclear. Further, it is not yet known how infants’ statistical learning ability relates to their language development. We trained 17-month-old infants on an artificial language comprising non-adjacent dependencies, and examined their looking times on tasks assessing sensitivity to words and structure using an eye-tracked head-turn-preference paradigm. We measured infants’ vocabulary size using a Communicative Development Inventory (CDI) concurrently and at 19, 21, 24, 25, 27, and 30 months to relate performance to language development. Infants could segment the words from speech, demonstrated by a significant difference in looking times to words versus part-words. Infants’ segmentation performance was significantly related to their vocabulary size (receptive and expressive) both currently, and over time (receptive until 24 months, expressive until 30 months), but was not related to the rate of vocabulary growth. The data also suggest infants may have developed sensitivity to generalised structure, indicating similar statistical learning mechanisms may contribute to the discovery of words and structure in speech, but this was not related to vocabulary size.

  • Frost, R. L. A., & Monaghan, P. (2020). Insights from studying statistical learning. In C. F. Rowland, A. L. Theakston, B. Ambridge, & K. E. Twomey (Eds.), Current Perspectives on Child Language Acquisition: How children use their environment to learn (pp. 65-89). Amsterdam: John Benjamins. doi:10.1075/tilar.27.03fro.

    Abstract

    Acquiring language is notoriously complex, yet for the majority of children this feat is accomplished with remarkable ease. Usage-based accounts of language acquisition suggest that this success can be largely attributed to the wealth of experience with language that children accumulate over the course of language acquisition. One field of research that is heavily underpinned by this principle of experience is statistical learning, which posits that learners can perform powerful computations over the distribution of information in a given input, which can help them to discern precisely how that input is structured, and how it operates. A growing body of work brings this notion to bear in the field of language acquisition, due to a developing understanding of the richness of the statistical information contained in speech. In this chapter we discuss the role that statistical learning plays in language acquisition, emphasising the importance of both the distribution of information within language, and the situation in which language is being learnt. First, we address the types of statistical learning that apply to a range of language learning tasks, asking whether the statistical processes purported to support language learning are the same or distinct across different tasks in language acquisition. Second, we expand the perspective on what counts as environmental input, by determining how statistical learning operates over the situated learning environment, and not just sequences of sounds in utterances. Finally, we address the role of variability in children’s input, and examine how statistical learning can accommodate (and perhaps even exploit) this during language acquisition.
  • Fuhrmann, D., Ravignani, A., Marshall-Pescini, S., & Whiten, A. (2014). Synchrony and motor mimicking in chimpanzee observational learning. Scientific Reports, 4: 5283. doi:10.1038/srep05283.

    Abstract

    Cumulative tool-based culture underwrote our species' evolutionary success, and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However, the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level, and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function.

  • Furman, R., Kuntay, A., & Ozyurek, A. (2014). Early language-specificity of children's event encoding in speech and gesture: Evidence from caused motion in Turkish. Language, Cognition and Neuroscience, 29, 620-634. doi:10.1080/01690965.2013.824993.

    Abstract

    Previous research on language development shows that children are tuned early on to the language-specific semantic and syntactic encoding of events in their native language. Here we ask whether language-specificity is also evident in children's early representations in gesture accompanying speech. In a longitudinal study, we examined the spontaneous speech and cospeech gestures of eight Turkish-speaking children aged one to three and focused on their caused motion event expressions. In Turkish, unlike in English, the main semantic elements of caused motion such as Action and Path can be encoded in the verb (e.g. sok- ‘put in’) and the arguments of a verb can be easily omitted. We found that Turkish-speaking children's speech indeed displayed these language-specific features and focused on verbs to encode caused motion. More interestingly, we found that their early gestures also manifested specificity. Children used iconic cospeech gestures (from 19 months onwards) as often as pointing gestures and represented semantic elements such as Action with Figure and/or Path that reinforced or supplemented speech in language-specific ways until the age of three. In the light of previous reports on the scarcity of iconic gestures in English-speaking children's early productions, we argue that the language children learn shapes gestures and how they get integrated with speech in the first three years of life.
  • Galbiati, A., Sforza, M., Poletti, M., Verga, L., Zucconi, M., Ferini-Strambi, L., & Castronovo, V. (2020). Insomnia patients with subjective short total sleep time have a boosted response to cognitive behavioral therapy for insomnia despite residual symptoms. Behavioral Sleep Medicine, 18(1), 58-67. doi:10.1080/15402002.2018.1545650.

    Abstract

    Background: Two distinct insomnia disorder (ID) phenotypes have been proposed, distinguished on the basis of an objective total sleep time less or more than 6 hr. In particular, it has been recently reported that patients with objective short sleep duration have a blunted response to cognitive behavioral therapy for insomnia (CBT-I). The aim of this study was to investigate the differences of CBT-I response in two groups of ID patients subdivided according to total sleep time. Methods: Two hundred forty-six ID patients were subdivided into two groups, depending on their reported total sleep time (TST) assessed by sleep diaries. Patients with a TST greater than 6 hr were classified as “normal sleepers” (NS), while those with a total sleep time less than 6 hr were classified as “short sleepers” (SS). Results: The delta between Insomnia Severity Index scores and sleep efficiency at the beginning as compared to the end of the treatment was significantly higher for SS in comparison to NS, even if they still exhibit more insomnia symptoms. No difference was found between groups in terms of remitters; however, more responders were observed in the SS group in comparison to the NS group. Conclusions: Our results demonstrate that ID patients with reported short total sleep time had a beneficial response to CBT-I of greater magnitude in comparison to NS. However, these patients may still experience the presence of residual insomnia symptoms after treatment.
  • Gallotto, S., Duecker, F., Ten Oever, S., Schuhmann, T., De Graaf, T. A., & Sack, A. T. (2020). Relating alpha power modulations to competing visuospatial attention theories. NeuroImage, 207: 116429. doi:10.1016/j.neuroimage.2019.116429.

    Abstract

    Visuospatial attention theories often propose hemispheric asymmetries underlying the control of attention. In general support of these theories, previous EEG/MEG studies have shown that spatial attention is associated with hemispheric modulation of posterior alpha power (gating by inhibition). However, since measures of alpha power are typically expressed as lateralization scores, or collapsed across left and right attention shifts, the individual hemispheric contribution to the attentional control mechanism remains unclear. This is, however, the most crucial and decisive aspect in which the currently competing attention theories continue to disagree. To resolve this long-standing conflict, we derived predictions regarding alpha power modulations from Heilman's hemispatial theory and Kinsbourne's interhemispheric competition theory and tested them empirically in an EEG experiment. We used an attention paradigm capable of isolating alpha power modulation in two attentional states, namely attentional bias in a neutral cue condition and spatial orienting following directional cues. Differential alpha modulations were found for both hemispheres across conditions. When anticipating peripheral visual targets without preceding directional cues (neutral condition), posterior alpha power in the left hemisphere was generally lower and more strongly modulated than in the right hemisphere, in line with the interhemispheric competition theory. Intriguingly, however, while alpha power in the right hemisphere was modulated by both cue-directed leftward and rightward attention shifts, the left hemisphere only showed modulations by rightward shifts of spatial attention, in line with the hemispatial theory. This suggests that the two theories may not be mutually exclusive, but rather apply to different attentional states.
  • Ganushchak, L., Konopka, A. E., & Chen, Y. (2014). What the eyes say about planning of focused referents during sentence formulation: a cross-linguistic investigation. Frontiers in Psychology, 5: 1124. doi:10.3389/fpsyg.2014.01124.

    Abstract

    This study investigated how sentence formulation is influenced by a preceding discourse context. In two eye-tracking experiments, participants described pictures of two-character transitive events in Dutch (Experiment 1) and Chinese (Experiment 2). Focus was manipulated by presenting questions before each picture. In the Neutral condition, participants first heard ‘What is happening here?’ In the Object or Subject Focus conditions, the questions asked about the Object or Subject character (What is the policeman stopping? Who is stopping the truck?). The target response was the same in all conditions (The policeman is stopping the truck). In both experiments, sentence formulation in the Neutral condition showed the expected pattern of speakers fixating the subject character (policeman) before the object character (truck). In contrast, in the focus conditions speakers rapidly directed their gaze preferentially only to the character they needed to encode to answer the question (the new, or focused, character). The timing of gaze shifts to the new character varied by language group (Dutch vs. Chinese): shifts to the new character occurred earlier when information in the question can be repeated in the response with the same syntactic structure (in Chinese but not in Dutch). The results show that discourse affects the timecourse of linguistic formulation in simple sentences and that these effects can be modulated by language-specific linguistic structures such as parallels in the syntax of questions and declarative sentences.
  • Ganushchak, L. Y., & Acheson, D. J. (Eds.). (2014). What's to be learned from speaking aloud? - Advances in the neurophysiological measurement of overt language production. [Research topic] [Special Issue]. Frontiers in Language Sciences. Retrieved from http://www.frontiersin.org/Language_Sciences/researchtopics/What_s_to_be_Learned_from_Spea/1671.

    Abstract

    Researchers have long avoided neurophysiological experiments of overt speech production due to the suspicion that artifacts caused by muscle activity may lead to a bad signal-to-noise ratio in the measurements. However, the need to actually produce speech may influence earlier processing and qualitatively change speech production processes and what we can infer from neurophysiological measures thereof. Recently, however, overt speech has been successfully investigated using EEG, MEG, and fMRI. The aim of this Research Topic is to draw together recent research on the neurophysiological basis of language production, with the aim of developing and extending theoretical accounts of the language production process. In this Research Topic of Frontiers in Language Sciences, we invite both experimental and review papers, as well as those about the latest methods in acquisition and analysis of overt language production data. All aspects of language production are welcome: i.e., from conceptualization to articulation during native as well as multilingual language production. Focus should be placed on using the neurophysiological data to inform questions about the processing stages of language production. In addition, emphasis should be placed on the extent to which the identified components of the electrophysiological signal (e.g., ERP/ERF, neuronal oscillations, etc.), brain areas or networks are related to language comprehension and other cognitive domains. By bringing together electrophysiological and neuroimaging evidence on language production mechanisms, a more complete picture of the locus of language production processes and their temporal and neurophysiological signatures will emerge.
  • Garcia, R., Roeser, J., & Höhle, B. (2020). Children’s online use of word order and morphosyntactic markers in Tagalog thematic role assignment: An eye-tracking study. Journal of Child Language, 47(3), 533-555. doi:10.1017/S0305000919000618.

    Abstract

    We investigated whether Tagalog-speaking children incrementally interpret the first noun as the agent, even if verbal and nominal markers for assigning thematic roles are given early in Tagalog sentences. We asked five- and seven-year-old children and adult controls to select which of two pictures of reversible actions matched the sentence they heard, while their looks to the pictures were tracked. Accuracy and eye-tracking data showed that agent-initial sentences were easier to comprehend than patient-initial sentences, but the effect of word order was modulated by voice. Moreover, our eye-tracking data provided evidence that, by the first noun phrase, seven-year-old children looked more to the target in the agent-initial compared to the patient-initial conditions, but this word order advantage was no longer observed by the second noun phrase. The findings support language processing and acquisition models which emphasize the role of frequency in developing heuristic strategies (e.g., Chang, Dell, & Bock, 2006).
  • Garcia, R., & Kidd, E. (2020). The acquisition of the Tagalog symmetrical voice system: Evidence from structural priming. Language Learning and Development, 16(4), 399-425. doi:10.1080/15475441.2020.1814780.

    Abstract

    We report on two experiments that investigated the acquisition of the Tagalog symmetrical voice system, a typologically rare feature of Western Austronesian languages in which there is more than one basic transitive construction and no preference for agents to be syntactic subjects. In the experiments, 3-, 5-, and 7-year-old Tagalog-speaking children and adults completed a structural priming task that manipulated voice and word order, with the uniqueness of Tagalog allowing us to tease apart priming of thematic role order from that of syntactic roles. Participants heard a description of a picture showing a transitive action, and were then asked to complete a sentence of an unrelated picture using a voice-marked verb provided by the experimenter. Our results show that children gradually acquire an agent-before-patient preference, instead of having a default mapping of the agent to the first noun position. We also found an earlier mastery of the patient voice verbal and nominal marker configuration (patient is the subject), suggesting that children do not initially map the agent to the subject. Children were primed by thematic role but not syntactic role order, suggesting that they prioritize mapping of the thematic roles to sentence positions.
  • Garcia, M., & Ravignani, A. (2020). Acoustic allometry and vocal learning in mammals. Biology Letters, 16: 20200081. doi:10.1098/rsbl.2020.0081.

    Abstract

    Acoustic allometry is the study of how animal vocalisations reflect their body size. A key aim of this research is to identify outliers to acoustic allometry principles and pinpoint the evolutionary origins of such outliers. A parallel strand of research investigates species capable of vocal learning, the experience-driven ability to produce novel vocal signals through imitation or modification of existing vocalisations. Modification of vocalizations is a common feature found when studying both acoustic allometry and vocal learning. Yet, these two fields have only been investigated separately to date. Here, we review and connect acoustic allometry and vocal learning across mammalian clades, combining perspectives from bioacoustics, anatomy and evolutionary biology. Based on this, we hypothesize that, as a precursor to vocal learning, some species might have evolved the capacity for volitional vocal modulation via sexual selection for ‘dishonest’ signalling. We provide preliminary support for our hypothesis by showing significant associations between allometric deviation and vocal learning in a dataset of 164 mammals. Our work offers a testable framework for future empirical research linking allometric principles with the evolution of vocal learning.
  • Garcia, M., Theunissen, F., Sèbe, F., Clavel, J., Ravignani, A., Marin-Cudraz, T., Fuchs, J., & Mathevon, N. (2020). Evolution of communication signals and information during species radiation. Nature Communications, 11: 4970. doi:10.1038/s41467-020-18772-3.

    Abstract

    Communicating species identity is a key component of many animal signals. However, whether selection for species recognition systematically increases signal diversity during clade radiation remains debated. Here we show that in woodpecker drumming, a rhythmic signal used during mating and territorial defense, the amount of species identity information encoded remained stable during woodpeckers’ radiation. Acoustic analyses and evolutionary reconstructions show interchange among six main drumming types despite strong phylogenetic contingencies, suggesting evolutionary tinkering of drumming structure within a constrained acoustic space. Playback experiments and quantification of species discriminability demonstrate sufficient signal differentiation to support species recognition in local communities. Finally, we only find character displacement in the rare cases where sympatric species are also closely related. Overall, our results illustrate how historical contingencies and ecological interactions can promote conservatism in signals during a clade radiation without impairing the effectiveness of information transfer relevant to inter-specific discrimination.
  • Gaskell, M. G., Warker, J., Lindsay, S., Frost, R. L. A., Guest, J., Snowdon, R., & Stackhouse, A. (2014). Sleep underpins the plasticity of language production. Psychological Science, 25(7), 1457-1465. doi:10.1177/0956797614535937.

    Abstract

    The constraints that govern acceptable phoneme combinations in speech perception and production have considerable plasticity. We addressed whether sleep influences the acquisition of new constraints and their integration into the speech-production system. Participants repeated sequences of syllables in which two phonemes were artificially restricted to syllable onset or syllable coda, depending on the vowel in that sequence. After 48 sequences, participants either had a 90-min nap or remained awake. Participants then repeated 96 sequences so implicit constraint learning could be examined, and then were tested for constraint generalization in a forced-choice task. The sleep group, but not the wake group, produced speech errors at test that were consistent with restrictions on the placement of phonemes in training. Furthermore, only the sleep group generalized their learning to new materials. Polysomnography data showed that implicit constraint learning was associated with slow-wave sleep. These results show that sleep facilitates the integration of new linguistic knowledge with existing production constraints. These data have relevance for systems-consolidation models of sleep.

    Additional information

    https://osf.io/zqg9y/
  • Gast, V., & Levshina, N. (2014). Motivating w(h)-Clefts in English and German: A hypothesis-driven parallel corpus study. In A.-M. De Cesare (Ed.), Frequency, Forms and Functions of Cleft Constructions in Romance and Germanic: Contrastive, Corpus-Based Studies (pp. 377-414). Berlin: De Gruyter.
  • Geambasu, A., Toron, L., Ravignani, A., & Levelt, C. C. (2020). Rhythmic recursion? Human sensitivity to a Lindenmayer grammar with self-similar structure in a musical task. Music & Science. doi:10.1177/2059204320946615.

    Abstract

    Processing of recursion has been proposed as the foundation of human linguistic ability. Yet this ability may be shared with other domains, such as the musical or rhythmic domain. Lindenmayer grammars (L-systems) have been proposed as a recursive grammar for use in artificial grammar experiments to test recursive processing abilities, and previous work had shown that participants are able to learn such a grammar using linguistic stimuli (syllables). In the present work, we used two experimental paradigms (a yes/no task and a two-alternative forced choice) to test whether adult participants are able to learn a recursive Lindenmayer grammar composed of drum sounds. After a brief exposure phase, we found that participants at the group level were sensitive to the exposure grammar and capable of distinguishing the grammatical and ungrammatical test strings above chance level in both tasks. While we found evidence of participants’ sensitivity to a very complex L-system grammar in a non-linguistic, potentially musical domain, the results were not robust. We discuss the discrepancy within our results and with the previous literature using L-systems in the linguistic domain. Furthermore, we propose directions for future music cognition research using L-system grammars.
  • Gebre, B. G., Wittenburg, P., Heskes, T., & Drude, S. (2014). Motion history images for online speaker/signer diarization. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (pp. 1537-1541). Piscataway, NJ: IEEE.

    Abstract

    We present a solution to the problem of online speaker/signer diarization - the task of determining "who spoke/signed when?". Our solution is based on the idea that gestural activity (hands and body movement) is highly correlated with uttering activity. This correlation is necessarily true for sign languages and mostly true for spoken languages. The novel part of our solution is the use of motion history images (MHI) as a likelihood measure for probabilistically detecting uttering activities. MHI is an efficient representation of where and how motion occurred for a fixed period of time. We conducted experiments on 4.9 hours of a publicly available dataset (the AMI meeting data) and 1.4 hours of sign language dataset (Kata Kolok data). The best performance obtained is 15.70% for sign language and 31.90% for spoken language (measurements are in DER). These results show that our solution is applicable in real-world applications like video conferences.

  • Gebre, B. G., Wittenburg, P., Drude, S., Huijbregts, M., & Heskes, T. (2014). Speaker diarization using gesture and speech. In H. Li, & P. Ching (Eds.), Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 582-586).

    Abstract

    We demonstrate how the problem of speaker diarization can be solved using both gesture and speaker parametric models. The novelty of our solution is that we approach the speaker diarization problem as a speaker recognition problem after learning speaker models from speech samples corresponding to gestures (the occurrence of gestures indicates the presence of speech and the location of gestures indicates the identity of the speaker). This new approach offers many advantages: comparable state-of-the-art performance, faster computation and more adaptability. In our implementation, parametric models are used to model speakers' voice and their gestures: more specifically, Gaussian mixture models are used to model the voice characteristics of each person and all persons, and gamma distributions are used to model gestural activity based on features extracted from Motion History Images. Tests on 4.24 hours of the AMI meeting data show that our solution makes DER score improvements of 19% on speech-only segments and 4% on all segments including silence (the comparison is with the AMI system).
  • Gebre, B. G., Crasborn, O., Wittenburg, P., Drude, S., & Heskes, T. (2014). Unsupervised feature learning for visual sign language identification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: Vol 2 (pp. 370-376). Redhook, NY: Curran Proceedings.

    Abstract

    Prior research on language identification focused primarily on text and speech. In this paper, we focus on the visual modality and present a method for identifying sign languages solely from short video samples. The method is trained on unlabelled video data (unsupervised feature learning) and using these features, it is trained to discriminate between six sign languages (supervised learning). We ran experiments on video samples involving 30 signers (running for a total of 6 hours). Using leave-one-signer-out cross-validation, our evaluation on short video samples shows an average best accuracy of 84%. Given that sign languages are under-resourced, unsupervised feature learning techniques are the right tools and our results indicate that this is realistic for sign language identification.
  • Gentzsch, W., Lecarpentier, D., & Wittenburg, P. (2014). Big data in science and the EUDAT project. In Proceeding of the 2014 Annual SRII Global Conference.
  • Gerakaki, S. (2020). The moment in between: Planning speech while listening. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ghatan, P. H., Hsieh, J. C., Petersson, K. M., Stone-Elander, S., & Ingvar, M. (1998). Coexistence of attention-based facilitation and inhibition in the human cortex. NeuroImage, 7, 23-29.

    Abstract

    A key function of attention is to select an appropriate subset of available information by facilitation of attended processes and/or inhibition of irrelevant processing. Functional imaging studies, using positron emission tomography, have revealed, during different experimental tasks, decreased neuronal activity in areas that process input from unattended sensory modalities. It has been hypothesized that these decreases reflect a selective inhibitory modulation of nonrelevant cortical processing. In this study we addressed this question using a continuous arithmetical task with and without concomitant disturbing auditory input (task-irrelevant speech). During the arithmetical task, irrelevant speech did not affect task performance but yielded decreased activity in the auditory and midcingulate cortices and increased activity in the left posterior parietal cortex. This pattern of modulation is consistent with a top-down inhibitory modulation of a nonattended input to the auditory cortex and a coexisting, attention-based facilitation of task-relevant processing in higher-order cortices. These findings suggest that task-related decreases in cortical activity may be of functional importance in the understanding of both attentional mechanisms and task-related information processing.
  • Gialluisi, A., Newbury, D. F., Wilcutt, E. G., Olson, R. K., DeFries, J. C., Brandler, W. M., Pennington, B. F., Smith, S. D., Scerri, T. S., Simpson, N. H., The SLI Consortium, Luciano, M., Evans, D. M., Bates, T. C., Stein, J. F., Talcott, J. B., Monaco, A. P., Paracchini, S., Francks, C., & Fisher, S. E. (2014). Genome-wide screening for DNA variants associated with reading and language traits. Genes, Brain and Behavior, 13, 686-701. doi:10.1111/gbb.12158.

    Abstract

    Reading and language abilities are heritable traits that are likely to share some genetic influences with each other. To identify pleiotropic genetic variants affecting these traits, we first performed a Genome-wide Association Scan (GWAS) meta-analysis using three richly characterised datasets comprising individuals with histories of reading or language problems, and their siblings. GWAS was performed in a total of 1862 participants using the first principal component computed from several quantitative measures of reading- and language-related abilities, both before and after adjustment for performance IQ. We identified novel suggestive associations at the SNPs rs59197085 and rs5995177 (uncorrected p ≈ 10⁻⁷ for each SNP), located respectively at the CCDC136/FLNC and RBFOX2 genes. Each of these SNPs then showed evidence for effects across multiple reading and language traits in univariate association testing against the individual traits. FLNC encodes a structural protein involved in cytoskeleton remodelling, while RBFOX2 is an important regulator of alternative splicing in neurons. The CCDC136/FLNC locus showed association with a comparable reading/language measure in an independent sample of 6434 participants from the general population, although involving distinct alleles of the associated SNP. Our datasets will form an important part of on-going international efforts to identify genes contributing to reading and language skills.
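    The composite phenotype used here, the first principal component of several standardized test scores, can be illustrated with a small sketch. The synthetic scores below are generated from one shared factor purely for demonstration; nothing is taken from the study's data.

```python
import numpy as np

def first_principal_component(scores):
    """PC1 of standardized test scores (n_subjects, n_tests) via SVD."""
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    pc1 = u[:, 0] * s[0]   # one composite score per subject
    return pc1, vt[0]      # subject scores and per-test loadings

# Synthetic data: four correlated "reading/language" measures sharing one factor g.
rng = np.random.default_rng(2)
g = rng.normal(size=300)
tests = np.stack([0.8 * g + rng.normal(scale=0.6, size=300) for _ in range(4)], axis=1)

pc1, loadings = first_principal_component(tests)
r = abs(np.corrcoef(pc1, g)[0, 1])  # PC1 should largely recover the shared factor
```

    Using a single PC1 score per participant as the GWAS phenotype pools the shared variance of the individual measures, which is what makes a search for pleiotropic variants possible with one association scan.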
  • Gialluisi, A., Pippucci, T., & Romeo, G. (2014). Reply to ten Kate et al. European Journal of Human Genetics, 2, 157-158. doi:10.1038/ejhg.2013.153.
  • Gilbers, S., Hoeksema, N., De Bot, K., & Lowie, W. (2020). Regional variation in West and East Coast African-American English prosody and rap flows. Language and Speech, 63(4), 713-745. doi:10.1177/0023830919881479.

    Abstract

    Regional variation in African-American English (AAE) is especially salient to its speakers involved with hip-hop culture, as hip-hop assigns great importance to regional identity and regional accents are a key means of expressing regional identity. However, little is known about AAE regional variation regarding prosodic rhythm and melody. In hip-hop music, regional variation can also be observed, with different regions’ rap performances being characterized by distinct “flows” (i.e., rhythmic and melodic delivery), an observation which has not been quantitatively investigated yet. This study concerns regional variation in AAE speech and rap, specifically regarding the United States’ East and West Coasts. It investigates how East Coast and West Coast AAE prosody are distinct, how East Coast and West Coast rap flows differ, and whether the two domains follow a similar pattern: more rhythmic and melodic variation on the West Coast compared to the East Coast for both speech and rap. To this end, free speech and rap recordings of 16 prominent African-American members of the East Coast and West Coast hip-hop communities were phonetically analyzed regarding rhythm (e.g., syllable isochrony and musical timing) and melody (i.e., pitch fluctuation) using a combination of existing and novel methodological approaches. The results mostly confirm the hypotheses that East Coast AAE speech and rap are less rhythmically diverse and more monotone than West Coast AAE speech and rap, respectively. They also show that regional variation in AAE prosody and rap flows pattern in similar ways, suggesting a connection between rhythm and melody in language and music.
  • Goldsborough, Z., Van Leeuwen, E. J. C., Kolff, K. W. T., De Waal, F. B. M., & Webb, C. E. (2020). Do chimpanzees (Pan troglodytes) console a bereaved mother? Primates, 61: 20190695, pp. 93-102. doi:10.1007/s10329-019-00752-x.

    Abstract

    Comparative thanatology encompasses the study of death-related responses in non-human animals and aspires to elucidate the evolutionary origins of human behavior in the context of death. Many reports have revealed that humans are not the only species affected by the death of group members. Non-human primates in particular show behaviors such as congregating around the deceased, carrying the corpse for prolonged periods of time (predominantly mothers carrying dead infants), and inspecting the corpse for signs of life. Here, we extend the focus on death-related responses in non-human animals by exploring whether chimpanzees are inclined to console the bereaved: the individual(s) most closely associated with the deceased. We report a case in which a chimpanzee (Pan troglodytes) mother experienced the loss of her fully developed infant (presumed stillborn). Using observational data to compare the group members’ behavior before and after the death, we found that a substantial number of group members selectively increased their affiliative expressions toward the bereaved mother. Moreover, on the day of the death, we observed heightened expressions of species-typical reassurance behaviors toward the bereaved mother. After ruling out several alternative explanations, we propose that many of the chimpanzees consoled the bereaved mother by means of affiliative and selective empathetic expressions.
  • González Alonso, J., Alemán Bañón, J., DeLuca, V., Miller, D., Pereira Soares, S. M., Puig-Mayenco, E., Slaats, S., & Rothman, J. (2020). Event related potentials at initial exposure in third language acquisition: Implications from an artificial mini-grammar study. Journal of Neurolinguistics, 56: 100939. doi:10.1016/j.jneuroling.2020.100939.

    Abstract

    The present article examines the proposal that typology is a major factor guiding transfer selectivity in L3/Ln acquisition. We tested first exposure in L3/Ln using two artificial languages (ALs) lexically based in English and Spanish, focusing on gender agreement between determiners and nouns, and between nouns and adjectives. 50 L1 Spanish-L2 English speakers took part in the experiment. After receiving implicit training in one of the ALs (Mini-Spanish, N = 26; Mini-English, N = 24), gender violations elicited a fronto-lateral negativity in Mini-English in the earliest time window (200–500 ms), although this was not followed by any other differences in subsequent periods. This effect was highly localized, surfacing only in electrodes of the right-anterior region. In contrast, gender violations in Mini-Spanish elicited a broadly distributed positivity in the 300–600 ms time window. While we do not find typical indices of grammatical processing such as the P600 component, we believe that the between-groups differential appearance of the positivity for gender violations in the 300–600 ms time window reflects differential allocation of attentional resources as a function of the ALs’ lexical similarity to English or Spanish. We take these differences in attention to be precursors of the processes involved in transfer source selection in L3/Ln.
  • Gonzalez Gomez, N., Hayashi, A., Tsuji, S., Mazuka, R., & Nazzi, T. (2014). The role of the input on the development of the LC bias: A crosslinguistic comparison. Cognition, 132(3), 301-311. doi:10.1016/j.cognition.2014.04.004.

    Abstract

    Previous studies have described the existence of a phonotactic bias called the Labial–Coronal (LC) bias, corresponding to a tendency to produce more words beginning with a labial consonant followed by a coronal consonant (i.e. “bat”) than the opposite CL pattern (i.e. “tap”). This bias has initially been interpreted in terms of articulatory constraints of the human speech production system. However, more recently, it has been suggested that this presumably language-general LC bias in production might be accompanied by LC and CL biases in perception, acquired in infancy on the basis of the properties of the linguistic input. The present study investigates the origins of these perceptual biases, testing infants learning Japanese, a language that has been claimed to possess more CL than LC sequences, and comparing them with infants learning French, a language showing a clear LC bias in its lexicon. First, a corpus analysis of Japanese IDS and ADS revealed the existence of an overall LC bias, except for plosive sequences in ADS, which show a CL bias across counts. Second, speech preference experiments showed a perceptual preference for CL over LC plosive sequences (all recorded by a Japanese speaker) in 13- but not in 7- and 10-month-old Japanese-learning infants (Experiment 1), while revealing the emergence of an LC preference between 7 and 10 months in French-learning infants, using the exact same stimuli. These crosslinguistic behavioral differences, obtained with the same stimuli, thus reflect differences in processing in two populations of infants, which can be linked to differences in the properties of the lexicons of their respective native languages. These findings establish that the emergence of a CL/LC bias is related to exposure to a linguistic input.
  • Goodhew, S. C., & Kidd, E. (2020). Bliss is blue and bleak is grey: Abstract word-colour associations influence objective performance even when not task relevant. Acta Psychologica, 206: 103067. doi:10.1016/j.actpsy.2020.103067.

    Abstract

    Humans associate abstract words with physical stimulus dimensions, such as linking upward locations with positive concepts (e.g., happy = up). These associations manifest both via subjective reports of associations and on objective performance metrics. Humans also report subjective associations between colours and abstract words (e.g., joy is linked to yellow). Here we tested whether such associations manifest on objective task performance, even when not task-relevant. Across three experiments, participants were presented with abstract words in physical colours that were either congruent with previously-reported subjective word-colour associations (e.g., victory in red and unhappy in blue), or were incongruent (e.g., victory in blue and unhappy in red). In Experiment 1, participants' task was to identify the valence of words. This congruency manipulation systematically affected objective task performance. In Experiment 2, participants completed two blocks, a valence-identification and a colour-identification task block. Both tasks produced congruency effects on performance, however, the results of the colour identification block could have reflected learning effects (i.e., associating the more common congruent colour with the word). This issue was rectified in Experiment 3, whereby participants completed the same two tasks as Experiment 2, but now matched congruent and incongruent pairs were used for both tasks. Again, both tasks produced reliable congruency effects. Item analyses in each experiment revealed that these effects demonstrated a degree of item specificity. Overall, there was clear evidence that at least some abstract word-colour pairings can systematically affect behaviour.
  • Goodhew, S. C., McGaw, B., & Kidd, E. (2014). Why is the sunny side always up? Explaining the spatial mapping of concepts by language use. Psychonomic Bulletin & Review, 21(5), 1287-1293. doi:10.3758/s13423-014-0593-6.

    Abstract

    Humans appear to rely on spatial mappings to represent and describe concepts. The conceptual cuing effect describes the tendency for participants to orient attention to a spatial location following the presentation of an unrelated cue word (e.g., orienting attention upward after reading the word sky). To date, such effects have predominately been explained within the embodied cognition framework, according to which people’s attention is oriented on the basis of prior experience (e.g., sky → up via perceptual simulation). However, this does not provide a compelling explanation for how abstract words have the same ability to orient attention. Why, for example, does dream also orient attention upward? We report on an experiment that investigated the role of language use (specifically, collocation between concept words and spatial words for up and down dimensions) and found that it predicted the cuing effect. The results suggest that language usage patterns may be instrumental in explaining conceptual cuing.
  • Gordon, J. K., & Clough, S. (2020). How fluent? Part B. Underlying contributors to continuous measures of fluency in aphasia. Aphasiology, 34(5), 643-663. doi:10.1080/02687038.2020.1712586.

    Abstract

    Background: While persons with aphasia (PwA) are often dichotomised as fluent or nonfluent, agreement that fluency is not an all-or-nothing construct has led to the use of continuous variables as a way to quantify fluency, such as multi-dimensional rating scales, speech rate, and utterance length. Though these measures are often used in research, they provide little information about the underlying fluency deficit.
    Aim: The aim of the study was to identify how well commonly used continuous measures of fluency capture variability in spontaneous speech variables at lexical, grammatical, and speech production levels.
    Methods & Procedures: Speech samples of 254 English-speaking PwA from the AphasiaBank database were analyzed to examine the distributions of four continuous measures of fluency: the WAB-R fluency scale, utterance length, retracing, and speech rate. Linear regression was used to identify spontaneous speech predictors contributing to each fluency outcome measure.
    Outcomes & Results: All the outcome measures reflected the influence of multiple underlying dimensions, although the predictors varied. The WAB-R fluency scale, speech rate, and retracing were influenced by measures of grammatical competence, lexical retrieval, and speech production, whereas utterance length was influenced only by measures of grammatical competence and lexical retrieval. The strongest predictor of WAB-R fluency was aphasia severity, whereas the strongest predictor for all other fluency proxy measures was grammatical complexity.
    Conclusions: Continuous measures allow a variety of ways to objectively quantify speech fluency; however, they reflect superficial manifestations of fluency that may be affected by multiple underlying deficits. Furthermore, the deficits underlying different measures vary, which may reduce the reliability of fluency diagnoses. Capturing these differences at the individual level is critical to accurate diagnosis and appropriately targeted therapy.
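    The regression step described above, predicting each continuous fluency measure from spontaneous-speech variables, amounts to ordinary least squares. The sketch below uses synthetic stand-in predictors (not AphasiaBank measures); the sample size of 254 matches the study, but the data and effect sizes are invented for illustration.

```python
import numpy as np

def regress(y, predictors, names):
    """OLS: regress a fluency outcome on named spontaneous-speech predictors."""
    X = np.column_stack([np.ones(len(y))] + [predictors[n] for n in names])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - resid.var() / y.var()
    return dict(zip(["intercept"] + names, beta)), r2

rng = np.random.default_rng(3)
n = 254  # sample size matching the study; data entirely synthetic
predictors = {
    "grammatical_complexity": rng.normal(size=n),
    "lexical_retrieval": rng.normal(size=n),
}
# Synthetic "speech rate" outcome driven mostly by grammatical complexity.
speech_rate = (1.5 * predictors["grammatical_complexity"]
               + 0.4 * predictors["lexical_retrieval"]
               + rng.normal(scale=0.5, size=n))

coefs, r2 = regress(speech_rate, predictors,
                    ["grammatical_complexity", "lexical_retrieval"])
```

    Comparing the fitted coefficients across outcome measures is what lets one say, as the study does, which underlying dimension each fluency proxy chiefly reflects.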
  • Goregliad Fjaellingsdal, T., Schwenke, D., Scherbaum, S., Kuhlen, A. K., Bögels, S., Meekes, J., & Bleichner, M. G. (2020). Expectancy effects in the EEG during joint and spontaneous word-by-word sentence production in German. Scientific Reports, 10: 5460. doi:10.1038/s41598-020-62155-z.

    Abstract

    Our aim in the present study is to measure neural correlates during spontaneous interactive sentence production. We present a novel approach using the word-by-word technique from improvisational theatre, in which two speakers jointly produce one sentence. This paradigm allows the assessment of behavioural aspects, such as turn-times, and electrophysiological responses, such as event-related potentials (ERPs). Twenty-five participants constructed a cued but spontaneous four-word German sentence together with a confederate, taking turns for each word of the sentence. In 30% of the trials, the confederate uttered an unexpected gender-marked article. To complete the sentence in a meaningful way, the participant had to detect the violation and retrieve and utter a new fitting response. We found significant increases in response times after unexpected words and – despite allowing unscripted language production and naturally varying speech material – successfully detected significant N400 and P600 ERP effects for the unexpected word. The N400 EEG activity further significantly predicted the response time of the subsequent turn. Our results show that combining behavioural and neuroscientific measures of verbal interactions while retaining sufficient experimental control is possible, and that this combination provides promising insights into the mechanisms of spontaneous spoken dialogue.
  • Gori, M., Vercillo, T., Sandini, G., & Burr, D. (2014). Tactile feedback improves auditory spatial localization. Frontiers in Psychology, 5: 1121. doi:10.3389/fpsyg.2014.01121.

    Abstract

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
  • Goriot, C., McQueen, J. M., Unsworth, S., & Van Hout, R. (2020). Perception of English phonetic contrasts by Dutch children: How bilingual are early-English learners? PLoS One, 15(3): e0229902. doi:10.1371/journal.pone.0229902.

    Abstract

    The aim of this study was to investigate whether early-English education benefits the perception of English phonetic contrasts that are known to be perceptually confusable for Dutch native speakers, comparing Dutch pupils who were enrolled in an early-English programme at school from the age of four with pupils in a mainstream programme with English instruction from the age of 11, and English-Dutch early bilingual children. Children were 4-5-year-olds (start of primary school), 8-9-year-olds, or 11-12-year-olds (end of primary school). Children were tested on four contrasts that varied in difficulty: /b/-/s/ (easy), /k/-/ɡ/ (intermediate), /f/-/θ/ (difficult), /ε/-/æ/ (very difficult). Bilingual children outperformed the two other groups on all contrasts except /b/-/s/. Early-English pupils did not outperform mainstream pupils on any of the contrasts. This shows that early-English education as it is currently implemented is not beneficial for pupils’ perception of non-native contrasts.

  • De Graaf, T. A., Thomson, A., Janssens, S. E. W., Van Bree, S., Ten Oever, S., & Sack, A. T. (2020). Does alpha phase modulate visual target detection? Three experiments with tACS-phase-based stimulus presentation. European Journal of Neuroscience, 51(11), 2299-2313. doi:10.1111/ejn.14677.

    Abstract

    In recent years, the influence of alpha (7–13 Hz) phase on visual processing has received a lot of attention. Magneto-/electroencephalography (M/EEG) studies showed that alpha phase indexes visual excitability and task performance. Studies with transcranial alternating current stimulation (tACS) aim to modulate oscillations and causally impact task performance. Here, we applied right occipital tACS (O2 location) to assess the functional role of alpha phase in a series of experiments. We presented visual stimuli at different pre-determined, experimentally controlled, phases of the entraining tACS signal, hypothesizing that this should result in an oscillatory pattern of visual performance in specifically left hemifield detection tasks. In experiment 1, we applied 10 Hz tACS and used separate psychophysical staircases for six equidistant tACS-phase conditions, obtaining contrast thresholds for detection of visual gratings in left or right hemifield. In experiments 2 and 3, tACS was at EEG-based individual peak alpha frequency. In experiment 2, we measured detection rates for gratings with (pseudo-)fixed contrast. In experiment 3, participants detected brief luminance changes in a custom-built LED device, at eight equidistant alpha phases. In none of the experiments did the primary outcome measure over phase conditions consistently reflect a one-cycle sinusoid. However, post hoc analyses of reaction times (RT) suggested that tACS alpha phase did modulate RT for specifically left hemifield targets in both experiments 1 and 2 (not measured in experiment 3). This observation requires future confirmation, but is in line with the idea that alpha phase causally gates visual inputs through cortical excitability modulation.
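    Testing whether performance over equidistant phase bins follows a one-cycle sinusoid is commonly done by least-squares fitting cosine and sine regressors to the binned values; the fitted amplitude and preferred phase then summarize any phasic modulation. The sketch below uses six bins as in experiment 1, with synthetic values; it is a generic illustration of this analysis, not the authors' code.

```python
import numpy as np

def fit_one_cycle_sinusoid(phases, values):
    """Least-squares fit of a + b*cos(phase) + c*sin(phase) to binned performance."""
    X = np.column_stack([np.ones_like(phases), np.cos(phases), np.sin(phases)])
    (a, b, c), *_ = np.linalg.lstsq(X, values, rcond=None)
    amplitude = float(np.hypot(b, c))       # depth of the phasic modulation
    phase_shift = float(np.arctan2(c, b))   # preferred (peak) phase
    return a, amplitude, phase_shift

# Six equidistant phase bins; synthetic detection performance with a known
# one-cycle modulation (offset 0.5, amplitude 0.1, peak at phase 1.0 rad).
phases = np.linspace(0, 2 * np.pi, 6, endpoint=False)
values = 0.5 + 0.1 * np.cos(phases - 1.0)

offset, amp, shift = fit_one_cycle_sinusoid(phases, values)
```

    A flat outcome over phase bins, as the primary measures showed here, corresponds to a fitted amplitude near zero.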

  • Grabe, E. (1998). Comparative intonational phonology: English and German. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057683.
  • Grasby, K. L., Jahanshad, N., Painter, J. N., Colodro-Conde, L., Bralten, J., Hibar, D. P., Lind, P. A., Pizzagalli, F., Ching, C. R. K., McMahon, M. A. B., Shatokhina, N., Zsembik, L. C. P., Thomopoulos, S. I., Zhu, A. H., Strike, L. T., Agartz, I., Alhusaini, S., Almeida, M. A. A., Alnæs, D., Amlien, I. K., Andersson, M., Ard, T., Armstrong, N. J., Ashley-Koch, A., Atkins, J. R., Bernard, M., Brouwer, R. M., Buimer, E. E. L., Bülow, R., Bürger, C., Cannon, D. M., Chakravarty, M., Chen, Q., Cheung, J. W., Couvy-Duchesne, B., Dale, A. M., Dalvie, S., De Araujo, T. K., De Zubicaray, G. I., De Zwarte, S. M. C., Den Braber, A., Doan, N. T., Dohm, K., Ehrlich, S., Engelbrecht, H.-R., Erk, S., Fan, C. C., Fedko, I. O., Foley, S. F., Ford, J. M., Fukunaga, M., Garrett, M. E., Ge, T., Giddaluru, S., Goldman, A. L., Green, M. J., Groenewold, N. A., Grotegerd, D., Gurholt, T. P., Gutman, B. A., Hansell, N. K., Harris, M. A., Harrison, M. B., Haswell, C. C., Hauser, M., Herms, S., Heslenfeld, D. J., Ho, N. F., Hoehn, D., Hoffmann, P., Holleran, L., Hoogman, M., Hottenga, J.-J., Ikeda, M., Janowitz, D., Jansen, I. E., Jia, T., Jockwitz, C., Kanai, R., Karama, S., Kasperaviciute, D., Kaufmann, T., Kelly, S., Kikuchi, M., Klein, M., Knapp, M., Knodt, A. R., Krämer, B., Lam, M., Lancaster, T. M., Lee, P. H., Lett, T. A., Lewis, L. B., Lopes-Cendes, I., Luciano, M., Macciardi, F., Marquand, A. F., Mathias, S. R., Melzer, T. R., Milaneschi, Y., Mirza-Schreiber, N., Moreira, J. C. V., Mühleisen, T. W., Müller-Myhsok, B., Najt, P., Nakahara, S., Nho, K., Olde Loohuis, L. M., Orfanos, D. P., Pearson, J. F., Pitcher, T. 
L., Pütz, B., Quidé, Y., Ragothaman, A., Rashid, F. M., Reay, W. R., Redlich, R., Reinbold, C. S., Repple, J., Richard, G., Riedel, B. C., Risacher, S. L., Rocha, C. S., Mota, N. R., Salminen, L., Saremi, A., Saykin, A. J., Schlag, F., Schmaal, L., Schofield, P. R., Secolin, R., Shapland, C. Y., Shen, L., Shin, J., Shumskaya, E., Sønderby, I. E., Sprooten, E., Tansey, K. E., Teumer, A., Thalamuthu, A., Tordesillas-Gutiérrez, D., Turner, J. A., Uhlmann, A., Vallerga, C. L., Van der Meer, D., Van Donkelaar, M. M. J., Van Eijk, L., Van Erp, T. G. M., Van Haren, N. E. M., Van Rooij, D., Van Tol, M.-J., Veldink, J. H., Verhoef, E., Walton, E., Wang, M., Wang, Y., Wardlaw, J. M., Wen, W., Westlye, L. T., Whelan, C. D., Witt, S. H., Wittfeld, K., Wolf, C., Wolfers, T., Wu, J. Q., Yasuda, C. L., Zaremba, D., Zhang, Z., Zwiers, M. P., Artiges, E., Assareh, A. A., Ayesa-Arriola, R., Belger, A., Brandt, C. L., Brown, G. G., Cichon, S., Curran, J. E., Davies, G. E., Degenhardt, F., Dennis, M. F., Dietsche, B., Djurovic, S., Doherty, C. P., Espiritu, R., Garijo, D., Gil, Y., Gowland, P. A., Green, R. C., Häusler, A. N., Heindel, W., Ho, B.-C., Hoffmann, W. U., Holsboer, F., Homuth, G., Hosten, N., Jack Jr., C. R., Jang, M., Jansen, A., Kimbrel, N. A., Kolskår, K., Koops, S., Krug, A., Lim, K. O., Luykx, J. J., Mathalon, D. H., Mather, K. A., Mattay, V. S., Matthews, S., Mayoral Van Son, J., McEwen, S. C., Melle, I., Morris, D. W., Mueller, B. A., Nauck, M., Nordvik, J. E., Nöthen, M. M., O’Leary, D. S., Opel, N., Paillère Martinot, M.-L., Pike, G. B., Preda, A., Quinlan, E. B., Rasser, P. E., Ratnakar, V., Reppermund, S., Steen, V. M., Tooney, P. A., Torres, F. R., Veltman, D. J., Voyvodic, J. T., Whelan, R., White, T., Yamamori, H., Adams, H. H. H., Bis, J. C., Debette, S., Decarli, C., Fornage, M., Gudnason, V., Hofer, E., Ikram, M. A., Launer, L., Longstreth, W. T., Lopez, O. L., Mazoyer, B., Mosley, T. H., Roshchupkin, G. V., Satizabal, C. 
L., Schmidt, R., Seshadri, S., Yang, Q., Alzheimer’s Disease Neuroimaging Initiative, CHARGE Consortium, EPIGEN Consortium, IMAGEN Consortium, SYS Consortium, Parkinson’s Progression Markers Initiative, Alvim, M. K. M., Ames, D., Anderson, T. J., Andreassen, O. A., Arias-Vasquez, A., Bastin, M. E., Baune, B. T., Beckham, J. C., Blangero, J., Boomsma, D. I., Brodaty, H., Brunner, H. G., Buckner, R. L., Buitelaar, J. K., Bustillo, J. R., Cahn, W., Cairns, M. J., Calhoun, V., Carr, V. J., Caseras, X., Caspers, S., Cavalleri, G. L., Cendes, F., Corvin, A., Crespo-Facorro, B., Dalrymple-Alford, J. C., Dannlowski, U., De Geus, E. J. C., Deary, I. J., Delanty, N., Depondt, C., Desrivières, S., Donohoe, G., Espeseth, T., Fernández, G., Fisher, S. E., Flor, H., Forstner, A. J., Francks, C., Franke, B., Glahn, D. C., Gollub, R. L., Grabe, H. J., Gruber, O., Håberg, A. K., Hariri, A. R., Hartman, C. A., Hashimoto, R., Heinz, A., Henskens, F. A., Hillegers, M. H. J., Hoekstra, P. J., Holmes, A. J., Hong, L. E., Hopkins, W. D., Hulshoff Pol, H. E., Jernigan, T. L., Jönsson, E. G., Kahn, R. S., Kennedy, M. A., Kircher, T. T. J., Kochunov, P., Kwok, J. B. J., Le Hellard, S., Loughland, C. M., Martin, N. G., Martinot, J.-L., McDonald, C., McMahon, K. L., Meyer-Lindenberg, A., Michie, P. T., Morey, R. A., Mowry, B., Nyberg, L., Oosterlaan, J., Ophoff, R. A., Pantelis, C., Paus, T., Pausova, Z., Penninx, B. W. J. H., Polderman, T. J. C., Posthuma, D., Rietschel, M., Roffman, J. L., Rowland, L. M., Sachdev, P. S., Sämann, P. G., Schall, U., Schumann, G., Scott, R. J., Sim, K., Sisodiya, S. M., Smoller, J. W., Sommer, I. E., St Pourcain, B., Stein, D. J., Toga, A. W., Trollor, J. N., Van der Wee, N. J. A., van 't Ent, D., Völzke, H., Walter, H., Weber, B., Weinberger, D. R., Wright, M. J., Zhou, J., Stein, J. L., Thompson, P. M., & Medland, S. E. (2020). The genetic architecture of the human cerebral cortex. Science, 367(6484): eaay6690. doi:10.1126/science.aay6690.

    Abstract

    The cerebral cortex underlies our complex cognitive capabilities, yet little is known about the specific genetic loci that influence human cortical structure. To identify genetic variants that affect cortical structure, we conducted a genome-wide association meta-analysis of brain magnetic resonance imaging data from 51,665 individuals. We analyzed the surface area and average thickness of the whole cortex and 34 regions with known functional specializations. We identified 199 significant loci and found significant enrichment for loci influencing total surface area within regulatory elements that are active during prenatal cortical development, supporting the radial unit hypothesis. Loci that affect regional surface area cluster near genes in Wnt signaling pathways, which influence progenitor expansion and areal identity. Variation in cortical structure is genetically correlated with cognitive function, Parkinson’s disease, insomnia, depression, neuroticism, and attention deficit hyperactivity disorder.
  • De Grauwe, S., Willems, R. M., Rüschemeyer, S.-A., Lemhöfer, K., & Schriefers, H. (2014). Embodied language in first- and second-language speakers: Neural correlates of processing motor verbs. Neuropsychologia, 56, 334-349. doi:10.1016/j.neuropsychologia.2014.02.003.

    Abstract

    The involvement of neural motor and sensory systems in the processing of language has so far mainly been studied in native (L1) speakers. In an fMRI experiment, we investigated whether non-native (L2) semantic representations are rich enough to allow for activation in motor and somatosensory brain areas. German learners of Dutch and a control group of Dutch native speakers made lexical decisions about visually presented Dutch motor and non-motor verbs. Region-of-interest (ROI) and whole-brain analyses indicated that L2 speakers, like L1 speakers, showed significantly increased activation for simple motor compared to non-motor verbs in motor and somatosensory regions. This effect was not restricted to Dutch-German cognate verbs, but was also present for non-cognate verbs. These results indicate that L2 semantic representations are rich enough for motor-related activations to develop in motor and somatosensory areas.
  • De Grauwe, S., Lemhöfer, K., Willems, R. M., & Schriefers, H. (2014). L2 speakers decompose morphologically complex verbs: fMRI evidence from priming of transparent derived verbs. Frontiers in Human Neuroscience, 8: 802. doi:10.3389/fnhum.2014.00802.

    Abstract

    In this functional magnetic resonance imaging (fMRI) long-lag priming study, we investigated the processing of Dutch semantically transparent, derived prefix verbs. In such words, the meaning of the word as a whole can be deduced from the meanings of its parts, e.g., wegleggen “put aside.” Many behavioral and some fMRI studies suggest that native (L1) speakers decompose transparent derived words. The brain region usually implicated in morphological decomposition is the left inferior frontal gyrus (LIFG). In non-native (L2) speakers, the processing of transparent derived words has hardly been investigated, especially in fMRI studies, and results are contradictory: some studies find more reliance on holistic (i.e., non-decompositional) processing by L2 speakers; some find no difference between L1 and L2 speakers. In this study, we wanted to find out whether Dutch transparent derived prefix verbs are decomposed or processed holistically by German L2 speakers of Dutch. Half of the derived verbs (e.g., omvallen “fall down”) were preceded by their stem (e.g., vallen “fall”) with a lag of 4–6 words (“primed”); the other half (e.g., inslapen “fall asleep”) were not (“unprimed”). L1 and L2 speakers of Dutch made lexical decisions on these visually presented verbs. Both region of interest analyses and whole-brain analyses showed that there was a significant repetition suppression effect for primed compared to unprimed derived verbs in the LIFG. This was true both for the analyses over L2 speakers only and for the analyses over the two language groups together. The latter did not reveal any interaction with language group (L1 vs. L2) in the LIFG. Thus, L2 speakers show a clear priming effect in the LIFG, an area that has been associated with morphological decomposition. Our findings are consistent with the idea that L2 speakers engage in decomposition of transparent derived verbs rather than processing them holistically.

    Additional information

    Data Sheet 1.docx
  • Guadalupe, T., Willems, R. M., Zwiers, M., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Franke, B., Fisher, S. E., & Francks, C. (2014). Differences in cerebral cortical anatomy of left- and right-handers. Frontiers in Psychology, 5: 261. doi:10.3389/fpsyg.2014.00261.

    Abstract

    The left and right sides of the human brain are specialized for different kinds of information processing, and much of our cognition is lateralized to an extent towards one side or the other. Handedness is a reflection of nervous system lateralization. Roughly ten percent of people are mixed- or left-handed, and they show an elevated rate of reductions or reversals of some cerebral functional asymmetries compared to right-handers. Brain anatomical correlates of left-handedness have also been suggested. However, the relationships of left-handedness to brain structure and function remain far from clear. We carried out a comprehensive analysis of cortical surface area differences between 106 left-handed subjects and 1960 right-handed subjects, measured using an automated method of regional parcellation (FreeSurfer, Destrieux atlas). This is the largest study sample that has so far been used in relation to this issue. No individual cortical region showed an association with left-handedness that survived statistical correction for multiple testing, although there was a nominally significant association with the surface area of a previously implicated region: the left precentral sulcus. Identifying brain structural correlates of handedness may prove useful for genetic studies of cerebral asymmetries, as well as providing new avenues for the study of relations between handedness, cerebral lateralization and cognition.
  • Guadalupe, T., Zwiers, M. P., Teumer, A., Wittfeld, K., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2014). Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets. Human Brain Mapping, 35(7), 3277-3289. doi:10.1002/hbm.22401.

    Abstract

    Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P < 5 × 10-8). There was no enrichment of genetic association in genes involved in left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries.
  • Guerra, E., & Knoeferle, P. (2014). Spatial distance effects on incremental semantic interpretation of abstract sentences: Evidence from eye tracking. Cognition, 133(3), 535-552. doi:10.1016/j.cognition.2014.07.007.

    Abstract

    A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither referenced by, nor lexically associated with, the sentence. Three eye-tracking reading experiments examined the effects of spatial distance between words (Experiment 1) and objects (Experiments 2 and 3) on participants’ reading times for sentences that convey similarity or difference between two abstract nouns (e.g., ‘Peace and war are certainly different...’). Before reading the sentence, participants inspected a visual context with two playing cards that moved either far apart or close together. In Experiment 1, the cards turned and showed the first two nouns of the sentence (e.g., ‘peace’, ‘war’). In Experiments 2 and 3, they turned but remained blank. Participants’ reading times at the adjective (Experiment 1: first-pass reading time; Experiment 2: total times) and at the second noun phrase (Experiment 3: first-pass times) were faster for sentences that expressed similarity when the preceding words/objects were close together (vs. far apart) and for sentences that expressed dissimilarity when the preceding words/objects were far apart (vs. close together). Thus, spatial distance between words or entirely unrelated objects can rapidly and incrementally modulate the semantic interpretation of abstract sentences.

    Additional information

    mmc1.doc
  • Guerra, E., Huettig, F., & Knoeferle, P. (2014). Assessing the time course of the influence of featural, distributional and spatial representations during reading. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2309-2314). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2014/papers/402/.

    Abstract

    What does semantic similarity between two concepts mean? How could we measure it? The way in which semantic similarity is calculated might differ depending on the theoretical notion of semantic representation. In an eye-tracking reading experiment, we investigated whether two widely used semantic similarity measures (based on featural or distributional representations) have distinctive effects on sentence reading times. In other words, we explored whether these measures of semantic similarity differ qualitatively. In addition, we examined whether visually perceived spatial distance interacts with either or both of these measures. Our results showed that the effect of featural and distributional representations on reading times can differ both in direction and in its time course. Moreover, both featural and distributional information interacted with spatial distance, yet in different sentence regions and reading measures. We conclude that featural and distributional representations are distinct components of semantic representation.
  • Guerra, E., & Knoeferle, P. (2014). Spatial distance modulates reading times for sentences about social relations: evidence from eye tracking. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2315-2320). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2014/papers/403/.

    Abstract

    Recent evidence from eye tracking during reading showed that non-referential spatial distance presented in a visual context can modulate semantic interpretation of similarity relations rapidly and incrementally. In two eye-tracking reading experiments we extended these findings in two important ways; first, we examined whether other semantic domains (social relations) could also be rapidly influenced by spatial distance during sentence comprehension. Second, we aimed to further specify how abstract language is co-indexed with spatial information by varying the syntactic structure of sentences between experiments. Spatial distance rapidly modulated reading times as a function of the social relation expressed by a sentence. Moreover, our findings suggest that abstract language can be co-indexed as soon as critical information becomes available for the reader.
  • Guest, O., Caso, A., & Cooper, R. P. (2020). On simulating neural damage in connectionist networks. Computational Brain & Behavior, 3, 289-321. doi:10.1007/s42113-020-00081-z.

    Abstract

    A key strength of connectionist modelling is its ability to simulate both intact cognition and the behavioural effects of neural damage. We survey the literature, showing that models have been damaged in a variety of ways, e.g. by removing connections, by adding noise to connection weights, by scaling weights, by removing units and by adding noise to unit activations. While these different implementations of damage have often been assumed to be behaviourally equivalent, some theorists have made aetiological claims that rest on nonequivalence. They suggest that related deficits with different aetiologies might be accounted for by different forms of damage within a single model. We present two case studies that explore the effects of different forms of damage in two influential connectionist models, each of which has been applied to explain neuropsychological deficits. Our results indicate that the effect of simulated damage can indeed be sensitive to the way in which damage is implemented, particularly when the environment comprises subsets of items that differ in their statistical properties, but such effects are sensitive to relatively subtle aspects of the model’s training environment. We argue that, as a consequence, substantial methodological care is required if aetiological claims about simulated neural damage are to be justified, and conclude more generally that implementation assumptions, including those concerning simulated damage, must be fully explored when evaluating models of neurological deficits, both to avoid over-extending the explanatory power of specific implementations and to ensure that reported results are replicable.
  • Guggenheim, J. A., Williams, C., Northstone, K., Howe, L. D., Tilling, K., St Pourcain, B., McMahon, G., & Lawlor, D. A. (2014). Does Vitamin D Mediate the Protective Effects of Time Outdoors On Myopia? Findings From a Prospective Birth Cohort. Investigative Ophthalmology & Visual Science, 55(12), 8550-8558. doi:10.1167/iovs.14-15839.
  • Güldemann, T., & Hammarström, H. (2020). Geographical axis effects in large-scale linguistic distributions. In M. Crevels, & P. Muysken (Eds.), Language Dispersal, Diversification, and Contact. Oxford: Oxford University Press.
  • Gullberg, M. (1998). Gesture as a communication strategy in second language discourse: A study of learners of French and Swedish. Lund: Lund University Press.

    Abstract

    Gestures are often regarded as the most typical compensatory device used by language learners in communicative trouble. Yet gestural solutions to communicative problems have rarely been studied within any theory of second language use. The work presented in this volume aims to account for second language learners’ strategic use of speech-associated gestures by combining a process-oriented framework for communication strategies with a cognitive theory of gesture. Two empirical studies are presented. The production study investigates Swedish learners of French and French learners of Swedish and their use of strategic gestures. The results, which are based on analyses of both individual and group behaviour, contradict popular opinion as well as theoretical assumptions from both fields. Gestures are not primarily used to replace speech, nor are they chiefly mimetic. Instead, learners use gestures with speech, and although they do exploit mimetic gestures to solve lexical problems, they also use more abstract gestures to handle discourse-related difficulties and metalinguistic commentary. The influence of factors such as proficiency, task, culture, and strategic competence on gesture use is discussed, and the oral and gestural strategic modes are compared. In the evaluation study, native speakers’ assessments of learners’ gestures, and the potential effect of gestures on evaluations of proficiency are analysed and discussed in terms of individual communicative style. Compensatory gestures function at multiple communicative levels. This has implications for theories of communication strategies, and an expansion of the existing frameworks is discussed taking both cognitive and interactive aspects into account.
  • Gumperz, J. J., & Levinson, S. C. (1996). Introduction to part I. In J. J. Gumperz, & S. C. Levinson (Eds.), Rethinking linguistic relativity (pp. 21-36). Cambridge: Cambridge University Press.
  • Gumperz, J. J., & Levinson, S. C. (1996). Introduction to part III. In J. J. Gumperz, & S. C. Levinson (Eds.), Rethinking linguistic relativity (pp. 225-231). Cambridge: Cambridge University Press.
  • Gumperz, J. J., & Levinson, S. C. (1996). Introduction: Linguistic relativity re-examined. In J. J. Gumperz, & S. C. Levinson (Eds.), Rethinking linguistic relativity (pp. 1-20). Cambridge: Cambridge University Press.
  • Gumperz, J. J., & Levinson, S. C. (Eds.). (1996). Rethinking linguistic relativity. Cambridge: Cambridge University Press.
  • Gussenhoven, C., Chen, Y., & Dediu, D. (Eds.). (2014). 4th International Symposium on Tonal Aspects of Language, Nijmegen, The Netherlands, May 13-16, 2014. ISCA Archive.
  • Haan, E. H. F., Seijdel, N., Kentridge, R. W., & Heywood, C. A. (2020). Plasticity versus chronicity: Stable performance on category fluency 40 years post‐onset. Journal of Neuropsychology, 14(1), 20-27. doi:10.1111/jnp.12180.

    Abstract

    What is the long‐term trajectory of semantic memory deficits in patients who have suffered structural brain damage? Memory is, per definition, a changing faculty. The traditional view is that after an initial recovery period, the mature human brain has little capacity to repair or reorganize. More recently, it has been suggested that the central nervous system may be more plastic with the ability to change in neural structure, connectivity, and function. The latter observations are, however, largely based on normal learning in healthy subjects. Here, we report a patient who suffered bilateral ventro‐medial damage after presumed herpes encephalitis in 1971. He was seen regularly in the eighties, and we recently had the opportunity to re‐assess his semantic memory deficits. On semantic category fluency, he showed a very clear category‐specific deficit, performing better than controls on non‐living categories and significantly worse on living items. Recent testing showed that his impairments have remained unchanged for more than 40 years. We suggest caution when extrapolating the concept of brain plasticity, as observed during normal learning, to plasticity in the context of structural brain damage.
  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter. Psychologie, 17, 48-49.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk. Neuropraxis, 6, 204-205.
  • Hagoort, P., Brown, C. M., & Swaab, T. Y. (1996). Lexical-semantic event-related potential effects in patients with left hemisphere lesions and aphasia, and patients with right hemisphere lesions without aphasia. Brain, 119, 627-649. doi:10.1093/brain/119.2.627.

    Abstract

    Lexical-semantic processing impairments in aphasic patients with left hemisphere lesions and non-aphasic patients with right hemisphere lesions were investigated by recording event-related brain potentials (ERPs) while subjects listened to auditorily presented word pairs. The word pairs consisted of unrelated words, or words that were related in meaning. The related words were either associatively related, e.g. 'bread-butter', or were members of the same semantic category without being associatively related, e.g. 'church-villa'. The latter relationships are assumed to be more distant than the former ones. The most relevant ERP component in this study is the N400. In elderly control subjects, the N400 amplitude to associatively and semantically related word targets is reduced relative to the N400 elicited by unrelated targets. Compared with this normal N400 effect, the different patient groups showed the following pattern of results: aphasic patients with only minor comprehension deficits (high comprehenders) showed N400 effects of a similar size as the control subjects. In aphasic patients with more severe comprehension deficits (low comprehenders) a clear reduction in the N400 effects was obtained, both for the associative and the semantic word pairs. The patients with right hemisphere lesions showed a normal N400 effect for the associatively related targets, but a trend towards a reduced N400 effect for the semantically related word pairs. A dissociation between the N400 results in the word pair paradigm and P300 results in a classical tone oddball task indicated that the N400 effects were not an aspecific consequence of brain lesion, but were related to the nature of the language comprehension impairment. The conclusions drawn from the ERP results are that comprehension deficits in the aphasic patients are due to an impairment in integrating individual word meanings into an overall meaning representation. Right hemisphere patients are more specifically impaired in the processing of semantically more distant relationships, suggesting the involvement of the right hemisphere in semantically coarse coding.
  • Hagoort, P. (2014). Introduction to section on language and abstract thought. In M. S. Gazzaniga, & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 615-618). Cambridge, Mass: MIT Press.
  • Hagoort, P., & Levinson, S. C. (2014). Neuropragmatics. In M. S. Gazzaniga, & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 667-674). Cambridge, Mass: MIT Press.
  • Hagoort, P. (2014). Nodes and networks in the neural architecture for language: Broca's region and beyond. Current Opinion in Neurobiology, 28, 136-141. doi:10.1016/j.conb.2014.07.013.

    Abstract

    Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke–Lichtheim–Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.
  • Hagoort, P. (2020). Taal. In O. Van den Heuvel, Y. Van der Werf, B. Schmand, & B. Sabbe (Eds.), Leerboek neurowetenschappen voor de klinische psychiatrie (pp. 234-239). Amsterdam: Boom Uitgevers.
  • Hagoort, P. (1998). The shadows of lexical meaning in patients with semantic impairments. In B. Stemmer, & H. Whitaker (Eds.), Handbook of neurolinguistics (pp. 235-248). New York: Academic Press.
  • Hagoort, P., & Indefrey, P. (2014). The neurobiology of language beyond single words. Annual Review of Neuroscience, 37, 347-362. doi:10.1146/annurev-neuro-071013-013847.

    Abstract

    A hallmark of human language is that we combine lexical building blocks retrieved from memory in endless new ways. This combinatorial aspect of language is referred to as unification. Here we focus on the neurobiological infrastructure for syntactic and semantic unification. Unification is characterized by a high-speed temporal profile including both prediction and integration of retrieved lexical elements. A meta-analysis of numerous neuroimaging studies reveals a clear dorsal/ventral gradient in both left inferior frontal cortex and left posterior temporal cortex, with dorsal foci for syntactic processing and ventral foci for semantic processing. In addition to core areas for unification, further networks need to be recruited to realize language-driven communication to its full extent. One example is the theory of mind network, which allows listeners and readers to infer the intended message (speaker meaning) from the coded meaning of the linguistic utterance. This indicates that sensorimotor simulation cannot handle all of language processing.
  • Hahn, L. E., Ten Buuren, M., Snijders, T. M., & Fikkert, P. (2020). Learning words in a second language while cycling and listening to children’s songs: The Noplica Energy Center. International Journal of Music in Early Childhood, 15(1), 95-108. doi:10.1386/ijmec_00014_1.

    Abstract

    Children’s songs are a great source for linguistic learning. Here we explore whether children can acquire novel words in a second language by playing a game featuring children’s songs in a playhouse. The playhouse is designed by the Noplica foundation (www.noplica.nl) to advance language learning through unsupervised play. We present data from three experiments that serve to scientifically prove the functionality of one game of the playhouse: the Energy Center. For this game, children move three hand-bikes mounted on a panel within the playhouse. Once the children cycle, a song starts playing that is accompanied by musical instruments. In our experiments, children executed a picture-selection task to evaluate whether they acquired new vocabulary from the songs presented during the game. Two of our experiments were run in the field, one at a Dutch and one at an Indian pre-school. The third experiment features data from a more controlled laboratory setting. Our results partly confirm that the Energy Center is a successful means to support vocabulary acquisition in a second language. More research with larger sample sizes and longer access to the Energy Center is needed to evaluate the overall functionality of the game. Based on informal observations at our test sites, however, we are certain that children do pick up linguistic content from the songs during play, as many of the children repeat words and phrases from the songs they heard. We will follow up on these promising observations in future studies.
  • Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2020). Six-month-old infants recognize phrases in song and speech. Infancy, 25(5), 699-718. doi:10.1111/infa.12357.

    Abstract

    Infants exploit acoustic boundaries to perceptually organize phrases in speech. This prosodic parsing ability is well‐attested and is a cornerstone to the development of speech perception and grammar. However, infants also receive linguistic input in child songs. This study provides evidence that infants parse songs into meaningful phrasal units and replicates previous research for speech. Six‐month‐old Dutch infants (n = 80) were tested in the song or speech modality in the head‐turn preference procedure. First, infants were familiarized to two versions of the same word sequence: One version represented a well‐formed unit, and the other contained a phrase boundary halfway through. At test, infants were presented two passages, each containing one version of the familiarized sequence. The results for speech replicated the previously observed preference for the passage containing the well‐formed sequence, but only in a more fine‐grained analysis. The preference for well‐formed phrases was also observed in the song modality, indicating that infants recognize phrase structure in song. There were acoustic differences between stimuli of the current and previous studies, suggesting that infants are flexible in their processing of boundary cues while also providing a possible explanation for differences in effect sizes.

    Additional information

    infa12357-sup-0001-supinfo.zip
  • Hammarstroem, H., & Güldemann, T. (2014). Quantifying geographical determinants of large-scale distributions of linguistic features. Language Dynamics and Change, 4, 87-115. doi:10.1163/22105832-00401002.

    Abstract

    In the recent past the work on large-scale linguistic distributions across the globe has intensified considerably. Work on macro-areal relationships in Africa (Güldemann, 2010) suggests that the shape of convergence areas may be determined by climatic factors and geophysical features such as mountains, water bodies, coastlines, etc. Worldwide data is now available for geophysical features as well as linguistic features, including numeral systems and basic constituent order. We explore the possibility that the shape of areal aggregations of individual features in these two linguistic domains correlates with Köppen-Geiger climate zones. Furthermore, we test the hypothesis that the shape of such areal feature aggregations is determined by the contour of adjacent geophysical features like mountain ranges or coastlines. In these first basic tests, we do not find clear evidence that either Köppen-Geiger climate zones or the contours of geophysical features are major predictors for the linguistic data at hand.

  • Hammarstroem, H., & Donohue, M. (2014). Some principles on the use of macro-areas in typological comparison. Language Dynamics and Change, 4, 167-187. doi:10.1163/22105832-00401001.

    Abstract

    While the notion of the ‘area’ or ‘Sprachbund’ has a long history in linguistics, with geographically-defined regions frequently cited as a useful means to explain typological distributions, the problem of delimiting areas has not been well addressed. Lists of general-purpose, largely independent ‘macro-areas’ (typically continent size) have been proposed as a step to rule out contact as an explanation for various large-scale linguistic phenomena. This squib points out some problems in some of the currently widely-used predetermined areas, those found in the World Atlas of Language Structures (Haspelmath et al., 2005). Instead, we propose a principled division of the world’s landmasses into six macro-areas that arguably have better geographical independence properties.
  • Hammarström, H. (2014). Basic vocabulary comparison in South American languages. In P. Muysken, & L. O'Connor (Eds.), Language contact in South America (pp. 56-72). Cambridge: Cambridge University Press.
  • Hammarström, H. (2014). [Review of the book A grammar of the great Andamanese language: An ethnolinguistic study by Anvita Abbi]. Journal of South Asian Languages and Linguistics, 1, 111-116. doi:10.1515/jsall-2014-0007.
  • Hammarström, H. (2014). Papuan languages. In M. Aronoff (Ed.), Oxford bibliographies in linguistics. New York: Oxford University Press. doi:10.1093/OBO/9780199772810-0165.
  • Hammond, J. (2014). Switch-reference antecedence and subordination in Whitesands (Oceanic). In R. van Gijn, J. Hammond, D. Matić, S. van Putten, & A. V. Galucio (Eds.), Information structure and reference tracking in complex sentences. (pp. 263-290). Amsterdam: Benjamins.

    Abstract

    Whitesands is an Oceanic language of the southern Vanuatu subgroup. Like the related languages of southern Vanuatu, Whitesands has developed a clause-linkage system which monitors referent continuity on new clauses – typically contrasting with the previous clause. In this chapter I address how the construction interacts with topic continuity in discourse. I outline the morphosyntactic form of this anaphoric co-reference device. From a functionalist perspective, I show how the system is used in natural discourse and discuss its restrictions with respect to relative and complement clauses. I conclude with a discussion on its interactions with theoretical notions of information structure – in particular the nature of presupposed versus asserted clauses, information back- and foregrounding and how these affect the use of the switch-reference system.
  • Harmon, Z., & Kapatsinski, V. (2020). The best-laid plan of mice and men: Competition between top-down and preceding-item cues in plan execution. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 1674-1680). Montreal, QB: Cognitive Science Society.

    Abstract

    There is evidence that the process of executing a planned utterance involves the use of both preceding-context and top-down cues. Utterance-initial words are cued only by the top-down plan. In contrast, non-initial words are cued both by top-down cues and preceding-context cues. Co-existence of both cue types raises the question of how they interact during learning. We argue that this interaction is competitive: items that tend to be preceded by predictive preceding-context cues are harder to activate from the plan without this predictive context. A novel computational model of this competition is developed. The model is tested on a corpus of repetition disfluencies and shown to account for the influences on patterns of restarts during production. In particular, this model predicts a novel Initiation Effect: following an interruption, speakers re-initiate production from words that tend to occur in utterance-initial position, even when they are not initial in the interrupted utterance.
  • Hashemzadeh, M., Kaufeld, G., White, M., Martin, A. E., & Fyshe, A. (2020). From language to language-ish: How brain-like is an LSTM representation of nonsensical language stimuli? In T. Cohn, Y. He, & Y. Liu (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 645-655). Association for Computational Linguistics.

    Abstract

    The representations generated by many models of language (word embeddings, recurrent neural networks and transformers) correlate to brain activity recorded while people read. However, these decoding results are usually based on the brain’s reaction to syntactically and semantically sound language stimuli. In this study, we asked: how does an LSTM (long short term memory) language model, trained (by and large) on semantically and syntactically intact language, represent a language sample with degraded semantic or syntactic information? Does the LSTM representation still resemble the brain’s reaction? We found that, even for some kinds of nonsensical language, there is a statistically significant relationship between the brain’s activity and the representations of an LSTM. This indicates that, at least in some instances, LSTMs and the human brain handle nonsensical data similarly.
  • Haun, D. B. M., Rekers, Y., & Tomasello, M. (2014). Children conform to the behavior of peers; Other great apes stick with what they know. Psychological Science, 25, 2160-2167. doi:10.1177/0956797614553235.

    Abstract

    All primates learn things from conspecifics socially, but it is not clear whether they conform to the behavior of these conspecifics—if conformity is defined as overriding individually acquired behavioral tendencies in order to copy peers’ behavior. In the current study, chimpanzees, orangutans, and 2-year-old human children individually acquired a problem-solving strategy. They then watched several conspecific peers demonstrate an alternative strategy. The children switched to this new, socially demonstrated strategy in roughly half of all instances, whereas the other two great-ape species almost never adjusted their behavior to the majority’s. In a follow-up study, children switched much more when the peer demonstrators were still present than when they were absent, which suggests that their conformity arose at least in part from social motivations. These results demonstrate an important difference between the social learning of humans and great apes, a difference that might help to account for differences in human and nonhuman cultures.

    Additional information

    Haun_Rekers_Tomasello_2014_supp.pdf
  • Havron, N., Bergmann, C., & Tsuji, S. (2020). Preregistration in infant research - A primer. Infancy, 25(5), 734-754. doi:10.1111/infa.12353.

    Abstract

    Preregistration, the act of specifying a research plan in advance, is becoming more common in scientific research. Infant researchers contend with unique problems that might make preregistration particularly challenging. Infants are a hard‐to‐reach population, usually yielding small sample sizes, they can only complete a limited number of trials, and they can be excluded based on hard‐to‐predict complications (e.g., parental interference, fussiness). In addition, as effects themselves potentially change with age and population, it is hard to calculate an a priori effect size. At the same time, these very factors make preregistration in infant studies a valuable tool. A priori examination of the planned study, including the hypotheses, sample size, and resulting statistical power, increases the credibility of single studies and adds value to the field. Preregistration might also improve explicit decision making to create better studies. We present an in‐depth discussion of the issues uniquely relevant to infant researchers, and ways to contend with them in preregistration and study planning. We provide recommendations to researchers interested in following current best practices.

    Additional information

    Preprint version on OSF
  • De Heer Kloots, M., Carlson, D., Garcia, M., Kotz, S., Lowry, A., Poli-Nardi, L., de Reus, K., Rubio-García, A., Sroka, M., Varola, M., & Ravignani, A. (2020). Rhythmic perception, production and interactivity in harbour and grey seals. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 59-62). Nijmegen: The Evolution of Language Conferences.
  • Heidlmayr, K., Kihlstedt, M., & Isel, F. (2020). A review on the electroencephalography markers of Stroop executive control processes. Brain and Cognition, 146: 105637. doi:10.1016/j.bandc.2020.105637.

    Abstract

    The present article on executive control addresses the issue of the locus of the Stroop effect by examining neurophysiological components marking conflict monitoring, interference suppression, and conflict resolution. Our goal was to provide an overview of a series of determining neurophysiological findings including neural source reconstruction data on distinct executive control processes and sub-processes involved in the Stroop task. Consistently, a fronto-central N2 component is found to reflect conflict monitoring processes, with its main neural generator being the anterior cingulate cortex (ACC). Then, for cognitive control tasks that involve a linguistic component like the Stroop task, the N2 is followed by a centro-posterior N400 and subsequently a late sustained potential (LSP). The N400 is mainly generated by the ACC and the prefrontal cortex (PFC) and is thought to reflect interference suppression, whereas the LSP plausibly reflects conflict resolution processes. The present overview shows that ERPs constitute a reliable methodological tool for tracing with precision the time course of different executive processes and sub-processes involved in experimental tasks involving a cognitive conflict. Future research should shed light on the fine-grained mechanisms of control respectively involved in linguistic and non-linguistic tasks.
  • Heidlmayr, K., Weber, K., Takashima, A., & Hagoort, P. (2020). No title, no theme: The joined neural space between speakers and listeners during production and comprehension of multi-sentence discourse. Cortex, 130, 111-126. doi:10.1016/j.cortex.2020.04.035.

    Abstract

    Speakers and listeners usually interact in larger discourses than single words or even single sentences. The goal of the present study was to identify the neural bases reflecting how the mental representation of the situation denoted in a multi-sentence discourse (situation model) is constructed and shared between speakers and listeners. An fMRI study using a variant of the ambiguous text paradigm was designed. Speakers (n=15) produced ambiguous texts in the scanner and listeners (n=27) subsequently listened to these texts in different states of ambiguity: preceded by a highly informative, intermediately informative or no title at all. Conventional BOLD activation analyses in listeners, as well as inter-subject correlation analyses between the speakers’ and the listeners’ hemodynamic time courses were performed. Critically, only the processing of disambiguated, coherent discourse with an intelligible situation model representation involved (shared) activation in bilateral lateral parietal and medial prefrontal regions. This shared spatiotemporal pattern of brain activation between the speaker and the listener suggests that the process of memory retrieval in medial prefrontal regions and the binding of retrieved information in the lateral parietal cortex constitutes a core mechanism underlying the communication of complex conceptual representations.

    Additional information

    supplementary data
  • Heilbron, M., Richter, D., Ekman, M., Hagoort, P., & De Lange, F. P. (2020). Word contexts enhance the neural representation of individual letters in early visual cortex. Nature Communications, 11: 321. doi:10.1038/s41467-019-13996-4.

    Abstract

    Visual context facilitates perception, but how this is neurally implemented remains unclear. One example of contextual facilitation is found in reading, where letters are more easily identified when embedded in a word. Bottom-up models explain this word advantage as a post-perceptual decision bias, while top-down models propose that word contexts enhance perception itself. Here, we arbitrate between these accounts by presenting words and nonwords and probing the representational fidelity of individual letters using functional magnetic resonance imaging. In line with top-down models, we find that word contexts enhance letter representations in early visual cortex. Moreover, we observe increased coupling between letter information in visual cortex and brain activity in key areas of the reading network, suggesting these areas may be the source of the enhancement. Our results provide evidence for top-down representational enhancement in word recognition, demonstrating that word contexts can modulate perceptual processing already at the earliest visual regions.

    Additional information

    Supplementary information
  • Heinrich, T., Ravignani, A., & Hanke, F. H. (2020). Visual timing abilities of a harbour seal (Phoca vitulina) and a South African fur seal (Arctocephalus pusillus pusillus) for sub- and supra-second time intervals. Animal Cognition, 23(5), 851-859. doi:10.1007/s10071-020-01390-3.

    Abstract

    Timing is an essential parameter influencing many behaviours. A previous study demonstrated a high sensitivity of a phocid, the harbour seal (Phoca vitulina), in discriminating time intervals. In the present study, we compared the harbour seal’s timing abilities with the timing abilities of an otariid, the South African fur seal (Arctocephalus pusillus pusillus). This comparison seemed essential as phocids and otariids differ in many respects and might, thus, also differ regarding their timing abilities. We determined time difference thresholds for sub- and suprasecond time intervals marked by a white circle on a black background displayed for a specific time interval on a monitor using a staircase method. Contrary to our expectation, the timing abilities of the fur seal and the harbour seal were comparable. Over a broad range of time intervals, 0.8–7 s in the fur seal and 0.8–30 s in the harbour seal, the difference thresholds followed Weber’s law. In this range, both animals could discriminate time intervals differing only by 12% and 14% on average. Timing might thus be a fundamental cue for pinnipeds in general to be used in various contexts, thereby complementing information provided by classical sensory systems. Future studies will help to clarify if timing is indeed involved in foraging decisions or the estimation of travel speed or distance.

    Additional information

    supplementary material
  • Henson, R. N., Suri, S., Knights, E., Rowe, J. B., Kievit, R. A., Lyall, D. M., Chan, D., Eising, E., & Fisher, S. E. (2020). Effect of apolipoprotein E polymorphism on cognition and brain in the Cambridge Centre for Ageing and Neuroscience cohort. Brain and Neuroscience Advances, 4: 2398212820961704. doi:10.1177/2398212820961704.

    Abstract

    Polymorphisms in the apolipoprotein E (APOE) gene have been associated with individual differences in cognition, brain structure and brain function. For example, the ε4 allele has been associated with cognitive and brain impairment in old age and increased risk of dementia, while the ε2 allele has been claimed to be neuroprotective. According to the ‘antagonistic pleiotropy’ hypothesis, these polymorphisms have different effects across the lifespan, with ε4, for example, postulated to confer benefits on cognitive and brain functions earlier in life. In this stage 2 of the Registered Report – https://osf.io/bufc4, we report the results from the cognitive and brain measures in the Cambridge Centre for Ageing and Neuroscience cohort (www.cam-can.org). We investigated the antagonistic pleiotropy hypothesis by testing for allele-by-age interactions in approximately 600 people across the adult lifespan (18–88 years), on six outcome variables related to cognition, brain structure and brain function (namely, fluid intelligence, verbal memory, hippocampal grey-matter volume, mean diffusion within white matter and resting-state connectivity measured by both functional magnetic resonance imaging and magnetoencephalography). We found no evidence to support the antagonistic pleiotropy hypothesis. Indeed, Bayes factors supported the null hypothesis in all cases, except for the (linear) interaction between age and possession of the ε4 allele on fluid intelligence, for which the evidence for faster decline in older ages was ambiguous. Overall, these pre-registered analyses question the antagonistic pleiotropy of APOE polymorphisms, at least in healthy adults.

    Additional information

    supplementary material
  • Hersh, T., King, B., & Lutton, B. V. (2014). Novel bioinformatics tools for analysis of gene expression in the skate, Leucoraja erinacea. The Bulletin, MDI Biological Laboratory, 53, 16-18.
  • Hervais-Adelman, A., Pefkou, M., & Golestani, N. (2014). Bilingual speech-in-noise: Neural bases of semantic context use in the native language. Brain and Language, 132, 1-6. doi:10.1016/j.bandl.2014.01.009.

    Abstract

    Bilingual listeners comprehend speech-in-noise better in their native than non-native language. This native-language benefit is thought to arise from greater use of top-down linguistic information to assist degraded speech comprehension. Using functional magnetic resonance imaging, we recently showed that left angular gyrus activation is modulated when semantic context is used to assist native language speech-in-noise comprehension (Golestani, Hervais-Adelman, Obleser, & Scott, 2013). Here, we extend the previous work, by reanalyzing the previous data alongside the results obtained in the non-native language of the same late bilingual participants. We found a behavioral benefit of semantic context in processing speech-in-noise in the native language only, and the imaging results also revealed a native language context effect in the left angular gyrus. We also find a complementary role of lower-level auditory regions during stimulus-driven processing. Our findings help to elucidate the neural basis of the established native language behavioral benefit of speech-in-noise processing.
  • Hessels, R. S., Hooge, I., Snijders, T. M., & Kemner, C. (2014). Is there a limit to the superiority of individuals with ASD in visual search? Journal of Autism and Developmental Disorders, 44, 443-451. doi:10.1007/s10803-013-1886-8.

    Abstract

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an explanation, formulated in terms of load theory. We suggest that there is a limit to the superiority in visual search for individuals with ASD, related to the perceptual load of the stimuli. When perceptual load becomes so high that no additional task-(ir)relevant information can be processed, performance will be based on single stimulus identification, in which no differences between individuals with ASD and controls have been demonstrated.
  • Hestvik, A., Shinohara, Y., Durvasula, K., Verdonschot, R. G., & Sakai, H. (2020). Abstractness of human speech sound representations. Brain Research, 1732: 146664. doi:10.1016/j.brainres.2020.146664.

    Abstract

    We argue, based on a study of brain responses to speech sound differences in Japanese, that memory encoding of functional speech sounds—phonemes—is highly abstract. As an example, we provide evidence for a theory where the consonants /p t k b d g/ are not only made up of symbolic features but are underspecified with respect to voicing or laryngeal features, and that languages differ with respect to which feature value is underspecified. In a previous study we showed that voiced stops are underspecified in English [Hestvik, A., & Durvasula, K. (2016). Neurobiological evidence for voicing underspecification in English. Brain and Language], as shown by asymmetries in Mismatch Negativity responses to /t/ and /d/. In the current study, we test the prediction that the opposite asymmetry should be observed in Japanese, if voiceless stops are underspecified in that language. Our results confirm this prediction. This matches a linguistic architecture where phonemes are highly abstract and do not encode actual physical characteristics of the corresponding speech sounds, but rather different subsets of abstract distinctive features.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2014). In dialogue with an avatar, syntax production is identical compared to dialogue with a human partner. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2351-2356). Austin, Tx: Cognitive Science Society.

    Abstract

    The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioural research due to its seemingly limitless possibilities. This new method has not been used frequently in the field of psycholinguistics, however, possibly due to the assumption that human-computer interaction does not accurately reflect human-human interaction. In the current study we compare participants’ language behaviour in a syntactic priming task with human versus avatar partners. Our study shows comparable priming effects between human and avatar partners (Human: 12.3%; Avatar: 12.6% for passive sentences), suggesting that VR is a valid platform for conducting language research and studying dialogue interactions.
  • Hildebrand, M. S., Jackson, V. E., Scerri, T. S., Van Reyk, O., Coleman, M., Braden, R., Turner, S., Rigbye, K. A., Boys, A., Barton, S., Webster, R., Fahey, M., Saunders, K., Parry-Fielder, B., Paxton, G., Hayman, M., Coman, D., Goel, H., Baxter, A., Ma, A., Davis, N., Reilly, S., Delatycki, M., Liégeois, F. J., Connelly, A., Gecz, J., Fisher, S. E., Amor, D. J., Scheffer, I. E., Bahlo, M., & Morgan, A. T. (2020). Severe childhood speech disorder: Gene discovery highlights transcriptional dysregulation. Neurology, 94(20), e2148-e2167. doi:10.1212/WNL.0000000000009441.

    Abstract

    Objective: Determining the genetic basis of speech disorders provides insight into the neurobiology of human communication. Despite intensive investigation over the past 2 decades, the etiology of most speech disorders in children remains unexplained. To test the hypothesis that speech disorders have a genetic etiology, we performed genetic analysis of children with severe speech disorder, specifically childhood apraxia of speech (CAS).

    Methods: Precise phenotyping together with research genome or exome analysis were performed on children referred with a primary diagnosis of CAS. Gene coexpression and gene set enrichment analyses were conducted on high-confidence gene candidates.

    Results: Thirty-four probands ascertained for CAS were studied. In 11/34 (32%) probands, we identified highly plausible pathogenic single nucleotide (n = 10; CDK13, EBF3, GNAO1, GNB1, DDX3X, MEIS2, POGZ, SETBP1, UPF2, ZNF142) or copy number (n = 1; 5q14.3q21.1 locus) variants in novel genes or loci for CAS. Testing of parental DNA was available for 9 probands and confirmed that the variants had arisen de novo. Eight genes encode proteins critical for regulation of gene transcription, and analyses of transcriptomic data found CAS-implicated genes were highly coexpressed in the developing human brain.

    Conclusion: We identify the likely genetic etiology in 11 patients with CAS and implicate 9 genes for the first time. We find that CAS is often a sporadic monogenic disorder, and highly genetically heterogeneous. Highly penetrant variants implicate shared pathways in broad transcriptional regulation, highlighting the key role of transcriptional regulation in normal speech development. CAS is a distinctive, socially debilitating clinical disorder, and understanding its molecular basis is the first step towards identifying precision medicine approaches.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2020). Visual context constrains language-mediated anticipatory eye movements. Quarterly Journal of Experimental Psychology, 73(3), 458-467. doi:10.1177/1747021819881615.

    Abstract

    Contemporary accounts of anticipatory language processing assume that individuals predict upcoming information at multiple levels of representation. Research investigating language-mediated anticipatory eye gaze typically assumes that linguistic input restricts the domain of subsequent reference (visual target objects). Here, we explored the converse case: Can visual input restrict the dynamics of anticipatory language processing? To this end, we recorded participants’ eye movements as they listened to sentences in which an object was predictable based on the verb’s selectional restrictions (“The man peels a banana”). While listening, participants looked at different types of displays: The target object (banana) was either present or it was absent. On target-absent trials, the displays featured objects that had a similar visual shape as the target object (canoe) or objects that were semantically related to the concepts invoked by the target (monkey). Each trial was presented in a long preview version, where participants saw the displays for approximately 1.78 seconds before the verb was heard (pre-verb condition), and a short preview version, where participants saw the display approximately 1 second after the verb had been heard (post-verb condition), 750 ms prior to the spoken target onset. Participants anticipated the target objects in both conditions. Importantly, robust evidence for predictive looks to objects related to the (absent) target objects in visual shape and semantics was found in the post-verb but not in the pre-verb condition. These results suggest that visual information can restrict language-mediated anticipatory gaze and delineate theoretical accounts of predictive processing in the visual world.

    Additional information

    Supplemental Material
  • Hintz, F., Meyer, A. S., & Huettig, F. (2020). Activating words beyond the unfolding sentence: Contributions of event simulation and word associations to discourse reading. Neuropsychologia, 141: 107409. doi:10.1016/j.neuropsychologia.2020.107409.

    Abstract

    Previous studies have shown that during comprehension readers activate words beyond the unfolding sentence. An open question concerns the mechanisms underlying this behavior. One proposal is that readers mentally simulate the described event and activate related words that might be referred to as the discourse further unfolds. Another proposal is that activation between words spreads in an automatic, associative fashion. The empirical support for these proposals is mixed. Therefore, theoretical accounts differ with regard to how much weight they place on the contributions of these sources to sentence comprehension. In the present study, we attempted to assess the contributions of event simulation and lexical associations to discourse reading, using event-related brain potentials (ERPs). Participants read target words, which were preceded by associatively related words either appearing in a coherent discourse event (Experiment 1) or in sentences that did not form a coherent discourse event (Experiment 2). Contextually unexpected target words that were associatively related to the described events elicited a reduced N400 amplitude compared to contextually unexpected target words that were unrelated to the events (Experiment 1). In Experiment 2, a similar but reduced effect was observed. These findings support the notion that during discourse reading event simulation and simple word associations jointly contribute to language comprehension by activating words that are beyond contextually congruent sentence continuations.
  • Hintz*, F., Jongman*, S. R., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2020). Shared lexical access processes in speaking and listening? An individual differences study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(6), 1048-1063. doi:10.1037/xlm0000768.

    Abstract

    (* indicates joint first authorship.) Lexical access is a core component of word processing. In order to produce or comprehend a word, language users must access word forms in their mental lexicon. However, despite its involvement in both tasks, previous research has often studied lexical access in either production or comprehension alone. Therefore, it is unknown to which extent lexical access processes are shared across both tasks. Picture naming and auditory lexical decision are considered good tools for studying lexical access. Both of them are speeded tasks. Given these commonalities, another open question concerns the involvement of general cognitive abilities (e.g., processing speed) in both linguistic tasks. In the present study, we addressed these questions. We tested a large group of young adults enrolled in academic and vocational courses. Participants completed picture naming and auditory lexical decision tasks as well as a battery of tests assessing non-verbal processing speed, vocabulary, and non-verbal intelligence. Our results suggest that the lexical access processes involved in picture naming and lexical decision are related but less closely than one might have thought. Moreover, reaction times in picture naming and lexical decision depended at least as much on general processing speed as on domain-specific linguistic processes (i.e., lexical access processes).
  • Hintz, F., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2020). A behavioural dataset for studying individual differences in language skills. Scientific Data, 7: 429. doi:10.1038/s41597-020-00758-x.

    Abstract

    This resource contains data from 112 Dutch adults (18–29 years of age) who completed the Individual Differences in Language Skills test battery that included 33 behavioural tests assessing language skills and domain-general cognitive skills likely involved in language tasks. The battery included tests measuring linguistic experience (e.g. vocabulary size, prescriptive grammar knowledge), general cognitive skills (e.g. working memory, non-verbal intelligence) and linguistic processing skills (word production/comprehension, sentence production/comprehension). Testing was done in a lab-based setting resulting in high quality data due to tight monitoring of the experimental protocol and to the use of software and hardware that were optimized for behavioural testing. Each participant completed the battery twice (i.e., two test days of four hours each). We provide the raw data from all tests on both days as well as pre-processed data that were used to calculate various reliability measures (including internal consistency and test-retest reliability). We encourage other researchers to use this resource for conducting exploratory and/or targeted analyses of individual differences in language and general cognitive skills.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). Embodied language comprehension: Encoding-based and goal-driven processes. Journal of Experimental Psychology: General, 143(2), 914-929. doi:10.1037/a0032348.

    Abstract

    Theories of embodied language comprehension have proposed that language is understood through perceptual simulation of the sensorimotor characteristics of its meaning. Strong support for this claim requires demonstration of encoding-based activation of sensorimotor representations that is distinct from task-related or goal-driven processes. Participants in 3 eye-tracking experiments were presented with triplets of either numbers or object and animal names. In Experiment 1, participants indicated whether the size of the referent of the middle object or animal name was in between the size of the 2 outer items. In Experiment 2, the object and animal names were encoded for an immediate recognition memory task. In Experiment 3, participants completed the same comparison task of Experiment 1 for both words and numbers. During the comparison tasks, word and number decision times showed a symbolic distance effect, such that response time was inversely related to the size difference between the items. A symbolic distance effect was also observed for animal and object encoding times in cases where encoding time likely reflected some goal-driven processes as well. When semantic size was irrelevant to the task (Experiment 2), it had no effect on word encoding times. Number encoding times showed a numerical distance priming effect: Encoding time increased with numerical difference between items. Together these results suggest that while activation of numerical magnitude representations is encoding-based as well as goal-driven, activation of size information associated with words is goal-driven and does not occur automatically during encoding. This conclusion challenges strong theories of embodied cognition which claim that language comprehension consists of activation of analog sensorimotor representations irrespective of higher level processes related to context or task-specific goals.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). It takes time to prime: Semantic priming in the ocular lexical decision task. Journal of Experimental Psychology: Human Perception and Performance, 40(6), 2179-2197. doi:10.1037/a0037677.

    Abstract

    Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDTs) was replaced with an eye-movement response through a sequence of 3 words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative lexical decision (LD) on each word in the triplet. In Experiment 2, LD responses were delayed until all 3 letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, but limited during text reading, as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of tau (τ), meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases in which the LD is difficult, as indicated by longer response times. Compared with the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT.
  • Hoeksema, N., Villanueva, S., Mengede, J., Salazar-Casals, A., Rubio-García, A., Curcic-Blake, B., Vernes, S. C., & Ravignani, A. (2020). Neuroanatomy of the grey seal brain: Bringing pinnipeds into the neurobiological study of vocal learning. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 162-164). Nijmegen: The Evolution of Language Conferences.
  • Hoeksema, N., Wiesmann, M., Kiliaan, A., Hagoort, P., & Vernes, S. C. (2020). Bats and the comparative neurobiology of vocal learning. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 165-167). Nijmegen: The Evolution of Language Conferences.

Share this page