Publications

  • Nuthmann, A., De Groot, F., Huettig, F., & Olivers, C. L. N. (2019). Extrafoveal attentional capture by object semantics. PLoS One, 14(5): e0217051. doi:10.1371/journal.pone.0217051.

    Abstract

    There is ongoing debate on whether object meaning can be processed outside foveal vision, making semantics available for attentional guidance. Much of the debate has centred on whether objects that do not fit within an overall scene draw attention, in complex displays that are often difficult to control. Here, we revisited the question by reanalysing data from three experiments that used displays consisting of standalone objects from a carefully controlled stimulus set. Observers searched for a target object, as per auditory instruction. On the critical trials, the displays contained no target but objects that were semantically related to the target, visually related, or unrelated. Analyses using (generalized) linear mixed-effects models showed that, although visually related objects attracted most attention, semantically related objects were also fixated earlier in time than unrelated objects. Moreover, semantic matches affected the very first saccade in the display. The amplitudes of saccades that first entered semantically related objects were larger than 5° on average, confirming that object semantics is available outside foveal vision. Finally, there was no semantic capture of attention for the same objects when observers did not actively look for the target, confirming that it was not stimulus-driven. We discuss the implications for existing models of visual cognition.
  • Nyberg, L., Marklund, P., Persson, J., Cabeza, R., Forkstam, C., Petersson, K. M., & Ingvar, M. (2003). Common prefrontal activations during working memory, episodic memory, and semantic memory. Neuropsychologia, 41(3), 371-377. doi:10.1016/S0028-3932(02)00168-9.

    Abstract

    Regions of the prefrontal cortex (PFC) are typically activated in many different cognitive functions. In most studies, the focus has been on the role of specific PFC regions in specific cognitive domains, but more recently similarities in PFC activations across cognitive domains have been stressed. Such similarities may suggest that a region mediates a common function across a variety of cognitive tasks. In this study, we compared the activation patterns associated with tests of working memory, semantic memory and episodic memory. The results converged on a general involvement of four regions across memory tests. These were located in left frontopolar cortex, left mid-ventrolateral PFC, left mid-dorsolateral PFC and dorsal anterior cingulate cortex. These findings provide evidence that some PFC regions are engaged during many different memory tests. The findings are discussed in relation to theories about the functional contribution of the PFC regions and the architecture of memory.
  • Nyberg, L., Sandblom, J., Jones, S., Stigsdotter Neely, A., Petersson, K. M., Ingvar, M., & Bäckman, L. (2003). Neural correlates of training-related memory improvement in adulthood and aging. Proceedings of the National Academy of Sciences of the United States of America, 100(23), 13728-13733. doi:10.1073/pnas.1735487100.

    Abstract

    Cognitive studies show that both younger and older adults can increase their memory performance after training in using a visuospatial mnemonic, although age-related memory deficits tend to be magnified rather than reduced after training. Little is known about the changes in functional brain activity that accompany training-induced memory enhancement, and whether age-related activity changes are associated with the size of training-related gains. Here, we demonstrate that younger adults show increased activity during memory encoding in occipito-parietal and frontal brain regions after learning the mnemonic. Older adults did not show increased frontal activity, and only those elderly persons who benefited from the mnemonic showed increased occipitoparietal activity. These findings suggest that age-related differences in cognitive reserve capacity may reflect both a frontal processing deficiency and a posterior production deficiency.
  • Ogdie, M. N., MacPhie, I. L., Minassian, S. L., Yang, M., Fisher, S. E., Francks, C., Cantor, R. M., McCracken, J. T., McGough, J. J., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2003). A genomewide scan for Attention-Deficit/Hyperactivity Disorder in an extended sample: Suggestive linkage on 17p11. American Journal of Human Genetics, 72(5), 1268-1279. doi:10.1086/375139.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD [MIM 143465]) is a common, highly heritable neurobehavioral disorder of childhood onset, characterized by hyperactivity, impulsivity, and/or inattention. As part of an ongoing study of the genetic etiology of ADHD, we have performed a genomewide linkage scan in 204 nuclear families comprising 853 individuals and 270 affected sibling pairs (ASPs). Previously, we reported genomewide linkage analysis of a “first wave” of these families composed of 126 ASPs. A follow-up investigation of one region on 16p yielded significant linkage in an extended sample. The current study extends the original sample of 126 ASPs to 270 ASPs and provides linkage analyses of the entire sample, using polymorphic microsatellite markers that define an ∼10-cM map across the genome. Maximum LOD score (MLS) analysis identified suggestive linkage for 17p11 (MLS=2.98) and four nominal regions with MLS values >1.0, including 5p13, 6q14, 11q25, and 20q13. These data, taken together with the fine mapping on 16p13, suggest two regions as highly likely to harbor risk genes for ADHD: 16p13 and 17p11. Interestingly, both regions, as well as 5p13, have been highlighted in genomewide scans for autism.
  • O’Meara, C., Kung, S. S., & Majid, A. (2019). The challenge of olfactory ideophones: Reconsidering ineffability from the Totonac-Tepehua perspective. International Journal of American Linguistics, 85(2), 173-212. doi:10.1086/701801.

    Abstract

    Olfactory impressions are said to be ineffable, but little systematic exploration has been done to substantiate this. We explored olfactory language in Huehuetla Tepehua—a Totonac-Tepehua language spoken in Hidalgo, Mexico—which has a large inventory of ideophones, words with sound-symbolic properties used to describe perceptuomotor experiences. A multi-method study found Huehuetla Tepehua has 45 olfactory ideophones, illustrating intriguing sound-symbolic alternation patterns. Elaboration in the olfactory domain is not unique to this language; related Totonac-Tepehua languages also have impressive smell lexicons. Comparison across these languages shows olfactory and gustatory terms overlap in interesting ways, mirroring the physiology of smelling and tasting. However, although cognate taste terms are formally similar, olfactory terms are less so. We suggest that the relative instability of smell vocabulary in comparison with that of taste likely results from the more varied olfactory experiences caused by the mutability of smells in different environments.
  • Ortega, G., Schiefner, A., & Ozyurek, A. (2019). Hearing non-signers use their gestures to predict iconic form-meaning mappings at first exposure to sign. Cognition, 191: 103996. doi:10.1016/j.cognition.2019.06.008.

    Abstract

    The sign languages of deaf communities and the gestures produced by hearing people are communicative systems that exploit the manual-visual modality as a means of expression. Despite their striking differences they share the property of iconicity, understood as the direct relationship between a symbol and its referent. Here we investigate whether non-signing hearing adults exploit their implicit knowledge of gestures to bootstrap accurate understanding of the meaning of iconic signs they have never seen before. In Study 1 we show that for some concepts gestures exhibit systematic forms across participants, and share different degrees of form overlap with the signs for the same concepts (full, partial, and no overlap). In Study 2 we found that signs with stronger resemblance to these gestures are more accurately guessed and are assigned higher iconicity ratings by non-signers than signs with low overlap. In addition, when more people produced a systematic gesture resembling a sign, they assigned higher iconicity ratings to that sign. Furthermore, participants had a bias to assume that signs represent actions and not objects. The similarities between some signs and gestures could be explained by deaf signers and hearing gesturers sharing a conceptual substrate that is rooted in our embodied experiences with the world. The finding that gestural knowledge can ease the interpretation of the meaning of novel signs and predicts iconicity ratings is in line with embodied accounts of cognition and the role of prior knowledge in acquiring new schemas. Through these mechanisms we propose that iconic gestures that overlap in form with signs may serve as some type of ‘manual cognates’ that help non-signing adults to break into a new language at first exposure.

    Additional information

    Supplementary Materials
  • Ostarek, M., Joosen, D., Ishag, A., De Nijs, M., & Huettig, F. (2019). Are visual processes causally involved in “perceptual simulation” effects in the sentence-picture verification task? Cognition, 182, 84-94. doi:10.1016/j.cognition.2018.08.017.

    Abstract

    Many studies have shown that sentences implying an object to have a certain shape produce a robust reaction time advantage for shape-matching pictures in the sentence-picture verification task. Typically, this finding has been interpreted as evidence for perceptual simulation, i.e., that access to implicit shape information involves the activation of modality-specific visual processes. It follows from this proposal that disrupting visual processing during sentence comprehension should interfere with perceptual simulation and obliterate the match effect. Here we directly test this hypothesis. Participants listened to sentences while seeing either visual noise that was previously shown to strongly interfere with basic visual processing or a blank screen. Experiments 1 and 2 replicated the match effect but, crucially, visual noise did not modulate it. When an interference technique was used that targeted high-level semantic processing (Experiment 3), however, the match effect vanished. Visual noise specifically targeting high-level visual processes (Experiment 4) only had a minimal effect on the match effect. We conclude that the shape match effect in the sentence-picture verification paradigm is unlikely to rely on perceptual simulation.
  • Ostarek, M., Van Paridon, J., & Montero-Melis, G. (2019). Sighted people’s language is not helpful for blind individuals’ acquisition of typical animal colors. Proceedings of the National Academy of Sciences of the United States of America, 116(44), 21972-21973. doi:10.1073/pnas.1912302116.
  • Ostarek, M., & Huettig, F. (2019). Six challenges for embodiment research. Current Directions in Psychological Science, 28(6), 593-599. doi:10.1177/0963721419866441.

    Abstract

    Twenty years after Barsalou's seminal perceptual symbols paper (Barsalou, 1999), embodied cognition, the notion that cognition involves simulations of sensory, motor, or affective states, has moved in status from an outlandish proposal advanced by a fringe movement in psychology to a mainstream position adopted by large numbers of researchers in the psychological and cognitive (neuro)sciences. While it has generated highly productive work in the cognitive sciences as a whole, it has had a particularly strong impact on research into language comprehension. The view of a mental lexicon based on symbolic word representations, which are arbitrarily linked to sensory aspects of their referents, for example, had been generally accepted since the cognitive revolution in the 1950s. This has radically changed. Given the current status of embodiment as a main theory of cognition, it is somewhat surprising that a close look at the state of affairs in the literature reveals that the debate about the nature of the processes involved in language comprehension is far from settled and key questions remain unanswered. We present several suggestions for a productive way forward.
  • Otake, T., & Cutler, A. (2013). Lexical selection in action: Evidence from spontaneous punning. Language and Speech, 56(4), 555-573. doi:10.1177/0023830913478933.

    Abstract

    Analysis of a corpus of spontaneously produced Japanese puns from a single speaker over a two-year period provides a view of how a punster selects a source word for a pun and transforms it into another word for humorous effect. The pun-making process is driven by a principle of similarity: the source word should as far as possible be preserved (in terms of segmental sequence) in the pun. This renders homophones (English example: band–banned) the pun type of choice, with part–whole relationships of embedding (cap–capture), and mutations of the source word (peas–bees) rather less favored. Similarity also governs mutations in that single-phoneme substitutions outnumber larger changes, and in phoneme substitutions, subphonemic features tend to be preserved. The process of spontaneous punning thus applies, on line, the same similarity criteria as govern explicit similarity judgments and offline decisions about pun success (e.g., for inclusion in published collections). Finally, the process of spoken-word recognition is word-play-friendly in that it involves multiple word-form activation and competition, which, coupled with known techniques in use in difficult listening conditions, enables listeners to generate most pun types as offshoots of normal listening procedures.
  • Ozturk, O., Shayan, S., Liszkowski, U., & Majid, A. (2013). Language is not necessary for color categories. Developmental Science, 16, 111-115. doi:10.1111/desc.12008.

    Abstract

    The origin of color categories is under debate. Some researchers argue that color categories are linguistically constructed, while others claim they have a pre-linguistic, and possibly even innate, basis. Although there is some evidence that 4–6-month-old infants respond categorically to color, these empirical results have been challenged in recent years. First, it has been claimed that previous demonstrations of color categories in infants may reflect color preferences instead. Second, and more seriously, other labs have reported failing to replicate the basic findings at all. In the current study we used eye-tracking to test 8-month-old infants’ categorical perception of a previously attested color boundary (green–blue) and an additional color boundary (blue–purple). Our results show that infants are faster and more accurate at fixating targets when they come from a different color category than when from the same category (even though the chromatic separation sizes were equated). This is the case for both blue–green and blue–purple. Our findings provide independent evidence for the existence of color categories in pre-linguistic infants, and suggest that categorical perception of color can occur without color language.
  • Paterson, K. B., Liversedge, S. P., Rowland, C. F., & Filik, R. (2003). Children's comprehension of sentences with focus particles. Cognition, 89(3), 263-294. doi:10.1016/S0010-0277(03)00126-4.

    Abstract

    We report three studies investigating children's and adults' comprehension of sentences containing the focus particle only. In Experiments 1 and 2, four groups of participants (6–7 years, 8–10 years, 11–12 years and adult) compared sentences with only in different syntactic positions against pictures that matched or mismatched events described by the sentence. Contrary to previous findings (Crain, S., Ni, W., & Conway, L. (1994). Learning, parsing and modularity. In C. Clifton, L. Frazier, & K. Rayner (Eds.), Perspectives on sentence processing. Hillsdale, NJ: Lawrence Erlbaum; Philip, W., & Lynch, E. (1999). Felicity, relevance, and acquisition of the grammar of every and only. In S. C. Howell, S. A. Fish, & T. Keith-Lucas (Eds.), Proceedings of the 24th annual Boston University conference on language development. Somerville, MA: Cascadilla Press) we found that young children predominantly made errors by failing to process contrast information rather than errors in which they failed to use syntactic information to restrict the scope of the particle. Experiment 3 replicated these findings with pre-schoolers.
  • Peeters, D., Vanlangendonck, F., Rüschemeyer, S.-A., & Dijkstra, T. (2019). Activation of the language control network in bilingual visual word recognition. Cortex, 111, 63-73. doi:10.1016/j.cortex.2018.10.012.

    Abstract

    Research into bilingual language production has identified a language control network that subserves control operations when bilinguals produce speech. Here we explore which brain areas are recruited for control purposes in bilingual language comprehension. In two experimental fMRI sessions, Dutch-English unbalanced bilinguals read words that differed in cross-linguistic form and meaning overlap across their two languages. The need for control operations was further manipulated by varying stimulus list composition across the two experimental sessions. We observed activation of the language control network in bilingual language comprehension as a function of both cross-linguistic form and meaning overlap and stimulus list composition. These findings suggest that the language control network is shared across bilingual language production and comprehension. We argue that activation of the language control network in language comprehension allows bilinguals to quickly and efficiently grasp the context-relevant meaning of words.

    Additional information

    1-s2.0-S0010945218303459-mmc1.docx
  • Peeters, D., Dijkstra, T., & Grainger, J. (2013). The representation and processing of identical cognates by late bilinguals: RT and ERP effects. Journal of Memory and Language, 68, 315-332. doi:10.1016/j.jml.2012.12.003.

    Abstract

    Across the languages of a bilingual, translation equivalents can have the same orthographic form and shared meaning (e.g., TABLE in French and English). How such words, called orthographically identical cognates, are processed and represented in the bilingual brain is not well understood. In the present study, late French–English bilinguals processed such identical cognates and control words in an English lexical decision task. Both behavioral and electrophysiological data were collected. Reaction times to identical cognates were shorter than for non-cognate controls and depended on both English and French frequency. Cognates with a low English frequency showed a larger cognate advantage than those with a high English frequency. In addition, N400 amplitude was found to be sensitive to cognate status and both the English and French frequency of the cognate words. Theoretical consequences for the processing and representation of identical cognates are discussed.
  • Peeters, D. (2019). Virtual reality: A game-changing method for the language sciences. Psychonomic Bulletin & Review, 26(3), 894-900. doi:10.3758/s13423-019-01571-3.

    Abstract

    This paper introduces virtual reality as an experimental method for the language sciences and provides a review of recent studies using the method to answer fundamental psycholinguistic research questions. It is argued that virtual reality demonstrates that ecological validity and experimental control should not be conceived of as two extremes on a continuum, but rather as two orthogonal factors. Benefits of using virtual reality as an experimental method include that in a virtual environment, as in the real world, there is no artificial spatial divide between participant and stimulus. Moreover, virtual reality experiments do not necessarily have to include a repetitive trial structure or an unnatural experimental task. Virtual agents outperform experimental confederates in terms of the consistency and replicability of their behaviour, allowing for reproducible science across participants and research labs. The main promise of virtual reality as a tool for the experimental language sciences, however, is that it shifts theoretical focus towards the interplay between different modalities (e.g., speech, gesture, eye gaze, facial expressions) in dynamic and communicative real-world environments, complementing studies that focus on one modality (e.g. speech) in isolation.
  • Perdue, C., & Klein, W. (1992). Why does the production of some learners not grammaticalize? Studies in Second Language Acquisition, 14, 259-272. doi:10.1017/S0272263100011116.

    Abstract

    In this paper we follow two beginning learners of English, Andrea and Santo, over a period of 2 years as they develop means to structure the declarative utterances they produce in various production tasks, and then we look at the following problem: In the early stages of acquisition, both learners develop a common learner variety; during these stages, we see a picture of two learner varieties developing similar regularities determined by the minimal requirements of the tasks we examine. Andrea subsequently develops further morphosyntactic means to achieve greater cohesion in his discourse. But Santo does not. Although we can identify contexts where the grammaticalization of Andrea's production allows him to go beyond the initial constraints of his variety, it is much more difficult to ascertain why Santo, faced with the same constraints in the same contexts, does not follow this path. Some lines of investigation into this problem are then suggested.
  • Perlman, M., & Gibbs, R. W. (2013). Pantomimic gestures reveal the sensorimotor imagery of a human-fostered gorilla. Journal of Mental Imagery, 37(3/4), 73-96.

    Abstract

    This article describes the use of pantomimic gestures by the human-fostered gorilla, Koko, as evidence of her sensorimotor imagery. We present five video-recorded instances of Koko's spontaneously created pantomimes during her interactions with human caregivers. The precise movements and context of each gesture are described in detail to examine how it functions to communicate Koko's requests for various objects and actions to be performed. The analysis assesses the active "iconicity" of each targeted gesture and examines the underlying elements of sensorimotor imagery that are incorporated by the gesture. We suggest that Koko's pantomimes reflect an imaginative understanding of different actions, objects, and events that is similar in important respects to humans' embodied imagery capabilities.
  • Peter, M. S., & Rowland, C. F. (2019). Aligning developmental and processing accounts of implicit and statistical learning. Topics in Cognitive Science, 11, 555-572. doi:10.1111/tops.12396.

    Abstract

    A long-standing question in child language research concerns how children achieve mature syntactic knowledge in the face of a complex linguistic environment. A widely accepted view is that this process involves extracting distributional regularities from the environment in a manner that is incidental and happens, for the most part, without the learner's awareness. In this way, the debate speaks to two associated but separate literatures in language acquisition: statistical learning and implicit learning. Both fields have explored this issue in some depth but, at present, neither the results from the infant studies used by the statistical learning literature nor the artificial grammar learning studies from the implicit learning literature can be used to fully explain how children's syntax becomes adult-like. In this work, we consider an alternative explanation—that children use error-based learning to become mature syntax users. We discuss this proposal in the light of the behavioral findings from structural priming studies and the computational findings from Chang, Dell, and Bock's (2006) dual-path model, which incorporates properties from both statistical and implicit learning, and offers an explanation for syntax learning and structural priming using a common error-based learning mechanism. We then turn our attention to future directions for the field, here suggesting how structural priming might inform the statistical learning and implicit learning literature on the nature of the learning mechanism.
  • Peter, M. S., Durrant, S., Jessop, A., Bidgood, A., Pine, J. M., & Rowland, C. F. (2019). Does speed of processing or vocabulary size predict later language growth in toddlers? Cognitive Psychology, 115: 101238. doi:10.1016/j.cogpsych.2019.101238.

    Abstract

    It is becoming increasingly clear that the way that children acquire cognitive representations depends critically on how their processing system is developing. In particular, recent studies suggest that individual differences in language processing speed play an important role in explaining the speed with which children acquire language. Inconsistencies across studies, however, mean that it is not clear whether this relationship is causal or correlational, whether it is present right across development, or whether it extends beyond word learning to affect other aspects of language learning, like syntax acquisition. To address these issues, the current study used the looking-while-listening paradigm devised by Fernald, Swingley, and Pinto (2001) to test the speed with which a large longitudinal cohort of children (the Language 0–5 Project) processed language at 19, 25, and 31 months of age, and took multiple measures of vocabulary (UKCDI, Lincoln CDI, CDI-III) and syntax (Lincoln CDI) between 8 and 37 months of age. Processing speed correlated with vocabulary size, though this relationship changed over time and was observed only when there was variation in how well the items used in the looking-while-listening task were known. Fast processing speed was a positive predictor of subsequent vocabulary growth, but only for children with smaller vocabularies. Faster processing speed did, however, predict faster syntactic growth across the whole sample, even when controlling for concurrent vocabulary. The results indicate a relatively direct relationship between processing speed and syntactic development, but point to a more complex interaction between processing speed, vocabulary size and subsequent vocabulary growth.
  • Petersson, K. M., Sandblom, J., Elfgren, C., & Ingvar, M. (2003). Instruction-specific brain activations during episodic encoding: A generalized level of processing effect. Neuroimage, 20, 1795-1810. doi:10.1016/S1053-8119(03)00414-2.

    Abstract

    In a within-subject design we investigated the levels-of-processing (LOP) effect using visual material in a behavioral and a corresponding PET study. In the behavioral study we characterize a generalized LOP effect, using pleasantness and graphical quality judgments in the encoding situation, with two types of visual material, figurative and nonfigurative line drawings. In the PET study we investigate the related pattern of brain activations along these two dimensions. The behavioral results indicate that instruction and material contribute independently to the level of recognition performance. Therefore the LOP effect appears to stem both from the relative relevance of the stimuli (encoding opportunity) and an altered processing of stimuli brought about by the explicit instruction (encoding mode). In the PET study, encoding of visual material under the pleasantness (deep) instruction yielded left-lateralized frontoparietal and anterior temporal activations, while surface-based, perceptually oriented processing (shallow instruction) yielded right-lateralized frontoparietal, posterior temporal, and occipitotemporal activations. The result that deep encoding was related to the left prefrontal cortex while shallow encoding was related to the right prefrontal cortex, holding the material constant, is not consistent with the HERA model. In addition, we suggest that the anterior medial superior frontal region is related to aspects of self-referential semantic processing and that the inferior parts of the anterior cingulate as well as the medial orbitofrontal cortex are related to affective processing, in this case pleasantness evaluation of the stimuli regardless of explicit semantic content. Finally, the left medial temporal lobe appears more actively engaged by elaborate meaning-based processing, and the complex response pattern observed in different subregions of the MTL lends support to the suggestion that this region is functionally segregated.
  • Petras, K., Ten Oever, S., Jacobs, C., & Goffaux, V. (2019). Coarse-to-fine information integration in human vision. NeuroImage, 186, 103-112. doi:10.1016/j.neuroimage.2018.10.086.

    Abstract

    Coarse-to-fine theories of vision propose that the coarse information carried by the low spatial frequencies (LSF) of visual input guides the integration of finer, high spatial frequency (HSF) detail. Whether and how LSF modulates HSF processing in naturalistic broad-band stimuli is still unclear. Here we used multivariate decoding of EEG signals to separate the respective contribution of LSF and HSF to the neural response evoked by broad-band images. Participants viewed images of human faces, monkey faces and phase-scrambled versions that were either broad-band or filtered to contain LSF or HSF. We trained classifiers on EEG scalp-patterns evoked by filtered scrambled stimuli and evaluated the derived models on broad-band scrambled and intact trials. We found reduced HSF contribution when LSF was informative towards image content, indicating that coarse information does guide the processing of fine detail, in line with coarse-to-fine theories. We discuss the potential cortical mechanisms underlying such coarse-to-fine feedback.

    Additional information

    Supplementary figures
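
    To make the decoding logic in the abstract above concrete, the sketch below simulates the cross-condition scheme it describes: a classifier is trained on (simulated) multichannel EEG patterns from one stimulus condition and then evaluated on patterns from another condition. All names, dimensions and data are hypothetical placeholders, not the authors' pipeline or data.

    # Illustrative sketch (simulated data, hypothetical dimensions) of cross-condition
    # decoding: train on scalp patterns evoked by filtered stimuli, then test whether
    # the learned model generalizes to patterns evoked by broad-band stimuli.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n_trials, n_channels = 200, 64                      # hypothetical trial/electrode counts

    # Simulated two-class scalp patterns for the 'filtered' training condition
    y_filtered = rng.integers(0, 2, n_trials)
    X_filtered = rng.normal(size=(n_trials, n_channels)) + y_filtered[:, None] * 0.5

    # Simulated 'broad-band' test condition sharing part of the class-related pattern
    y_broadband = rng.integers(0, 2, n_trials)
    X_broadband = rng.normal(size=(n_trials, n_channels)) + y_broadband[:, None] * 0.3

    # Train on the filtered condition, evaluate generalization to broad-band trials
    clf = LogisticRegression(max_iter=1000).fit(X_filtered, y_filtered)
    print("cross-condition accuracy:", accuracy_score(y_broadband, clf.predict(X_broadband)))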
  • Petzell, M., & Hammarström, H. (2013). Grammatical and lexical subclassification of the Morogoro region, Tanzania. Nordic Journal of African Studies, 22(3), 129-157.

    Abstract

    This article discusses lexical and grammatical comparison and sub-grouping in a set of closely related Bantu language varieties in the Morogoro region, Tanzania. The Greater Ruvu Bantu language varieties include Kagulu [G12], Zigua [G31], Kwere [G32], Zalamo [G33], Nguu [G34], Luguru [G35], Kami [G36] and Kutu [G37]. The comparison is based on 27 morphophonological and morphosyntactic parameters, supplemented by a lexicon of 500 items. In order to determine the relationships and boundaries between the varieties, grammatical phenomena constitute a valuable complement to counting the number of identical words or cognates. We have used automated cognate judgment methods, as well as manual cognate judgments based on older sources, in order to compare lexical data. Finally, we have included speaker attitudes (i.e. self-assessment of linguistic similarity) in an attempt to map whether the languages that are perceived by speakers as being linguistically similar really are closely related.
  • Piai, V., Roelofs, A., Acheson, D. J., & Takashima, A. (2013). Attention for speaking: Neural substrates of general and specific mechanisms for monitoring and control. Frontiers in Human Neuroscience, 7: 832. doi:10.3389/fnhum.2013.00832.

    Abstract

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal color naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex (ACC) that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus (STG). Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the ACC, a region that is likely implementing domain-general attentional control.
  • Piai, V., Meyer, L., Schreuder, R., & Bastiaansen, M. C. M. (2013). Sit down and read on: Working memory and long-term memory in particle-verb processing. Brain and Language, 127(2), 296-306. doi:10.1016/j.bandl.2013.09.015.

    Abstract

    Particle verbs (e.g., look up) are lexical items for which particle and verb share a single lexical entry. Using event-related brain potentials, we examined working memory and long-term memory involvement in particle-verb processing. Dutch participants read sentences with head verbs that allow zero, two, or more than five particles to occur downstream. Additionally, sentences were presented for which the encountered particle was semantically plausible, semantically implausible, or forming a non-existing particle verb. An anterior negativity was observed at the verbs that potentially allow for a particle downstream relative to verbs that do not, possibly indexing storage of the verb until the dependency with its particle can be closed. Moreover, a graded N400 was found at the particle (smallest amplitude for plausible particles and largest for particles forming non-existing particle verbs), suggesting that lexical access to a shared lexical entry occurred at two separate time points.
  • Piai, V., & Roelofs, A. (2013). Working memory capacity and dual-task interference in picture naming. Acta Psychologica, 142, 332-342. doi:10.1016/j.actpsy.2013.01.006.
  • Poort, E. D., & Rodd, J. M. (2019). A database of Dutch–English cognates, interlingual homographs and translation equivalents. Journal of Cognition, 2(1): 15. doi:10.5334/joc.67.

    Abstract

    To investigate the structure of the bilingual mental lexicon, researchers in the field of bilingualism often use words that exist in multiple languages: cognates (which have the same meaning) and interlingual homographs (which have a different meaning). A high proportion of these studies have investigated language processing in Dutch–English bilinguals. Despite the abundance of research using such materials, few studies have validated them. We conducted two rating experiments in which Dutch–English bilinguals rated the meaning, spelling and pronunciation similarity of pairs of Dutch and English words. On the basis of these results, we present a new database of Dutch–English identical cognates (e.g. “wolf”–“wolf”; n = 58), non-identical cognates (e.g. “kat”–“cat”; n = 74), interlingual homographs (e.g. “angel”–“angel”; n = 72) and translation equivalents (e.g. “wortel”–“carrot”; n = 78). The database can be accessed at http://osf.io/tcdxb/.

    Additional information

    database
  • Poort, E. D., & Rodd, J. M. (2019). Towards a distributed connectionist account of cognates and interlingual homographs: Evidence from semantic relatedness tasks. PeerJ, 7: e6725. doi:10.7717/peerj.6725.

    Abstract

    Background

    Current models of how bilinguals process cognates (e.g., “wolf”, which has the same meaning in Dutch and English) and interlingual homographs (e.g., “angel”, meaning “insect’s sting” in Dutch) are based primarily on data from lexical decision tasks. A major drawback of such tasks is that it is difficult—if not impossible—to separate processes that occur during decision making (e.g., response competition) from processes that take place in the lexicon (e.g., lateral inhibition). Instead, we conducted two English semantic relatedness judgement experiments.
    Methods

    In Experiment 1, highly proficient Dutch–English bilinguals (N = 29) and English monolinguals (N = 30) judged the semantic relatedness of word pairs that included a cognate (e.g., “wolf”–“howl”; n = 50), an interlingual homograph (e.g., “angel”–“heaven”; n = 50) or an English control word (e.g., “carrot”–“vegetable”; n = 50). In Experiment 2, another group of highly proficient Dutch–English bilinguals (N = 101) read sentences in Dutch that contained one of those cognates, interlingual homographs or the Dutch translation of one of the English control words (e.g., “wortel” for “carrot”) approximately 15 minutes prior to completing the English semantic relatedness task.
    Results

    In Experiment 1, there was an interlingual homograph inhibition effect of 39 ms only for the bilinguals, but no evidence for a cognate facilitation effect. Experiment 2 replicated these findings and also revealed that cross-lingual long-term priming had an opposite effect on the cognates and interlingual homographs: recent experience with a cognate in Dutch speeded processing of those items 15 minutes later in English but slowed processing of interlingual homographs. However, these priming effects were smaller than previously observed using a lexical decision task.
    Conclusion

    After comparing our results to studies in both the bilingual and monolingual domain, we argue that bilinguals appear to process cognates and interlingual homographs as monolinguals process polysemes and homonyms, respectively. In the monolingual domain, processing of such words is best modelled using distributed connectionist frameworks. We conclude that it is necessary to explore the viability of such a model for the bilingual case.
  • Postema, M., De Marco, M., Colato, E., & Venneri, A. (2019). A study of within-subject reliability of the brain’s default-mode network. Magnetic Resonance Materials in Physics, Biology and Medicine, 32(3), 391-405. doi:10.1007/s10334-018-00732-0.

    Abstract

    Objective

    Resting-state functional magnetic resonance imaging (fMRI) is a promising tool for the study of Alzheimer’s disease (AD). This study aimed to examine the short-term reliability of the default-mode network (DMN), one of the main haemodynamic patterns of the brain.
    Materials and methods

    Using a 1.5 T Philips Achieva scanner, two consecutive resting-state fMRI runs were acquired on 69 healthy adults, 62 patients with mild cognitive impairment (MCI) due to AD, and 28 patients with AD dementia. The anterior and posterior DMN and, as a control, the visual-processing network (VPN) were computed using two different methodologies: connectivity of predetermined seeds (theory-driven) and dual regression (data-driven). Divergence and convergence in network strength and topography were calculated with paired t tests, global correlation coefficients, voxel-based correlation maps, and indices of reliability.
    Results

    No topographical differences were found in any of the networks. High correlations and reliability were found in the posterior DMN of healthy adults and MCI patients. Lower reliability was found in the anterior DMN and in the VPN, and in the posterior DMN of dementia patients.
    Discussion

    Strength and topography of the posterior DMN appear relatively stable and reliable over a short-term period of acquisition but with some degree of variability across clinical samples.
  • Postema, M., Van Rooij, D., Anagnostou, E., Arango, C., Auzias, G., Behrmann, M., Busatto Filho, G., Calderoni, S., Calvo, R., Daly, E., Deruelle, C., Di Martino, A., Dinstein, I., Duran, F. L. S., Durston, S., Ecker, C., Ehrlich, S., Fair, D., Fedor, J., Feng, X., Fitzgerald, J., Floris, D. L., Freitag, C. M., Gallagher, L., Glahn, D. C., Gori, I., Haar, S., Hoekstra, L., Jahanshad, N., Jalbrzikowski, M., Janssen, J., King, J. A., Kong, X., Lazaro, L., Lerch, J. P., Luna, B., Martinho, M. M., McGrath, J., Medland, S. E., Muratori, F., Murphy, C. M., Murphy, D. G. M., O'Hearn, K., Oranje, B., Parellada, M., Puig, O., Retico, A., Rosa, P., Rubia, K., Shook, D., Taylor, M., Tosetti, M., Wallace, G. L., Zhou, F., Thompson, P., Fisher, S. E., Buitelaar, J. K., & Francks, C. (2019). Altered structural brain asymmetry in autism spectrum disorder in a study of 54 datasets. Nature Communications, 10: 4958. doi:10.1038/s41467-019-13005-8.
  • St Pourcain, B., Whitehouse, A. J. O., Ang, W. Q., Warrington, N. M., Glessner, J. T., Wang, K., Timpson, N. J., Evans, D. M., Kemp, J. P., Ring, S. M., McArdle, W. L., Golding, J., Hakonarson, H., Pennell, C. E., & Smith, G. (2013). Common variation contributes to the genetic architecture of social communication traits. Molecular Autism, 4: 34. doi:10.1186/2040-2392-4-34.

    Abstract

    Background: Social communication difficulties represent an autistic trait that is highly heritable and persistent during the course of development. However, little is known about the underlying genetic architecture of this phenotype. Methods: We performed a genome-wide association study on parent-reported social communication problems using items of the children’s communication checklist (age 10 to 11 years), studying single and/or joint marker effects. Analyses were conducted in a large UK population-based birth cohort (Avon Longitudinal Study of Parents and their Children, ALSPAC, N = 5,584) and followed up within a sample of children with comparable measures from Western Australia (RAINE, N = 1,364). Results: Two of our seven independent top signals (P-discovery < 1.0E-05) were replicated (0.009 < P-replication ≤ 0.02) within RAINE and suggested evidence for association at 6p22.1 (rs9257616, meta-P = 2.5E-07) and 14q22.1 (rs2352908, meta-P = 1.1E-06). The signal at 6p22.1 was identified within the olfactory receptor gene cluster within the broader major histocompatibility complex (MHC) region. The strongest candidate locus within this genomic area was TRIM27. This gene encodes a ubiquitin E3 ligase, which is an interaction partner of methyl-CpG-binding domain (MBD) proteins, such as MBD3 and MBD4, and rare protein-coding mutations within MBD3 and MBD4 have been linked to autism. The signal at 14q22.1 was found within a gene-poor region. Single-variant findings were complemented by estimations of the narrow-sense heritability in ALSPAC, suggesting that approximately a fifth of the phenotypic variance in social communication traits is accounted for by joint additive effects of genotyped single nucleotide polymorphisms throughout the genome (h2 (SE) = 0.18 (0.066), P = 0.0027). Conclusion: Overall, our study provides both joint and single-SNP-based evidence for the contribution of common polymorphisms to variation in social communication phenotypes.
  • Pouw, W., & Dixon, J. A. (2019). Entrainment and modulation of gesture-speech synchrony under delayed auditory feedback. Cognitive Science, 43(3): e12721. doi:10.1111/cogs.12721.

    Abstract

    Gesture–speech synchrony re-stabilizes when hand movement or speech is disrupted by a delayed feedback manipulation, suggesting strong bidirectional coupling between gesture and speech. Yet it has also been argued from case studies in perceptual–motor pathology that hand gestures are a special kind of action that does not require closed-loop re-afferent feedback to maintain synchrony with speech. In the current pre-registered within-subject study, we used motion tracking to conceptually replicate McNeill’s (1992) classic study on gesture–speech synchrony under normal and 150 ms delayed auditory feedback of speech conditions (NO DAF vs. DAF). Consistent with, and extending, McNeill’s original results, we obtain evidence that (a) gesture–speech synchrony is more stable under DAF versus NO DAF (i.e., increased coupling effect), (b) gesture and speech variably entrain to the external auditory delay as indicated by a consistent shift in gesture–speech synchrony offsets (i.e., entrainment effect), and (c) the coupling effect and the entrainment effect are codependent. We suggest, therefore, that gesture–speech synchrony provides a way for the cognitive system to stabilize rhythmic activity under interfering conditions.

    Additional information

    https://osf.io/pcde3/
  • Pouw, W., Rop, G., De Koning, B., & Paas, F. (2019). The cognitive basis for the split-attention effect. Journal of Experimental Psychology: General, 148(11), 2058-2075. doi:10.1037/xge0000578.

    Abstract

    The split-attention effect entails that learning from spatially separated, but mutually referring information sources (e.g., text and picture), is less effective than learning from the equivalent spatially integrated sources. According to cognitive load theory, impaired learning is caused by the working memory load imposed by the need to distribute attention between the information sources and mentally integrate them. In this study, we directly tested whether the split-attention effect is caused by spatial separation per se. Spatial distance was varied in basic cognitive tasks involving pictures (Experiment 1) and text–picture combinations (Experiment 2; preregistered study), and in more ecologically valid learning materials (Experiment 3). Experiment 1 showed that having to integrate two pictorial stimuli at greater distances diminished performance on a secondary visual working memory task, but did not lead to slower integration. When participants had to integrate a picture and written text in Experiment 2, a greater distance led to slower integration of the stimuli, but not to diminished performance on the secondary task. Experiment 3 showed that presenting spatially separated (compared with integrated) textual and pictorial information yielded fewer integrative eye movements, but this effect was not exacerbated by increasing spatial distance even further. This effect on learning processes did not lead to differences in learning outcomes between conditions. In conclusion, we provide evidence that larger distances between spatially separated information sources influence learning processes, but that spatial separation on its own is not likely to be the only, nor a sufficient, condition for impacting learning outcomes.

  • Preisig, B., Sjerps, M. J., Kösem, A., & Riecke, L. (2019). Dual-site high-density 4Hz transcranial alternating current stimulation applied over auditory and motor cortical speech areas does not influence auditory-motor mapping. Brain Stimulation, 12(3), 775-777. doi:10.1016/j.brs.2019.01.007.
  • Preisig, B., & Sjerps, M. J. (2019). Hemispheric specializations affect interhemispheric speech sound integration during duplex perception. The Journal of the Acoustical Society of America, 145, EL190-EL196. doi:10.1121/1.5092829.

    Abstract

    The present study investigated whether speech-related spectral information benefits from initially predominant right- or left-hemisphere processing. Normal-hearing individuals categorized speech sounds composed of an ambiguous base (perceptually intermediate between /ga/ and /da/), presented to one ear, and a disambiguating low or high F3 chirp presented to the other ear. Shorter response times were found when the chirp was presented to the left ear than to the right ear (inducing initially right-hemisphere chirp processing), but there were no between-ear differences in the strength of overall integration. The results are in line with the assumption of a right-hemispheric dominance for spectral processing.

    Additional information

    Supplementary material
  • Prystauka, Y., & Lewis, A. G. (2019). The power of neural oscillations to inform sentence comprehension: A linguistic perspective. Language and Linguistics Compass, 13(9): e12347. doi:10.1111/lnc3.12347.

    Abstract

    The field of psycholinguistics is currently experiencing an explosion of interest in the analysis of neural oscillations—rhythmic brain activity synchronized at different temporal and spatial levels. Given that language comprehension relies on a myriad of processes, which are carried out in parallel in distributed brain networks, there is hope that this methodology might bring the field closer to understanding some of the more basic (spatially and temporally distributed, yet at the same time often overlapping) neural computations that support language function. In this review, we discuss existing proposals linking oscillatory dynamics in different frequency bands to basic neural computations and review relevant theories suggesting associations between band-specific oscillations and higher-level cognitive processes. More or less consistent patterns of oscillatory activity related to certain types of linguistic processing can already be derived from the evidence that has accumulated over the past few decades. The centerpiece of the current review is a synthesis of such patterns grouped by linguistic phenomenon. We restrict our review to evidence linking measures of oscillatory power to the comprehension of sentences, as well as linguistically (and/or pragmatically) more complex structures. For each grouping, we provide a brief summary and a table of associated oscillatory signatures that a psycholinguist might expect to find when employing a particular linguistic task. Summarizing across different paradigms, we conclude that a handful of basic neural oscillatory mechanisms are likely recruited in different ways and at different times for carrying out a variety of linguistic computations.
  • Quinn, S., & Kidd, E. (2019). Symbolic play promotes non‐verbal communicative exchange in infant–caregiver dyads. British Journal of Developmental Psychology, 37(1), 33-50. doi:10.1111/bjdp.12251.

    Abstract

    Symbolic play has long been considered a fertile context for communicative development (Bruner, 1983, Child's talk: Learning to use language, Oxford University Press, Oxford; Vygotsky, 1962, Thought and language, MIT Press, Cambridge, MA; Vygotsky, 1978, Mind in society: The development of higher psychological processes. Harvard University Press, Cambridge, MA). In the current study, we examined caregiver–infant interaction during symbolic play and compared it to interaction in a comparable but non‐symbolic context (i.e., ‘functional’ play). Fifty‐four (N = 54) caregivers and their 18‐month‐old infants were observed engaging in 20 min of play (symbolic, functional). Play interactions were coded and compared across play conditions for joint attention (JA) and gesture use. Compared with functional play, symbolic play was characterized by greater frequency and duration of JA and greater gesture use, particularly the use of iconic gestures with an object in hand. The results suggest that symbolic play provides a rich context for the exchange and negotiation of meaning, and thus may contribute to the development of important skills underlying communicative development.
  • Radenkovic, S., Bird, M. J., Emmerzaal, T. L., Wong, S. Y., Felgueira, C., Stiers, K. M., Sabbagh, L., Himmelreich, N., Poschet, G., Windmolders, P., Verheijen, J., Witters, P., Altassan, R., Honzik, T., Eminoglu, T. F., James, P. M., Edmondson, A. C., Hertecant, J., Kozicz, T., Thiel, C., Vermeersch, P., Cassiman, D., Beamer, L., Morava, E., & Ghesquiere, B. (2019). The metabolic map into the pathomechanism and treatment of PGM1-CDG. American Journal of Human Genetics, 104(5), 835-846. doi:10.1016/j.ajhg.2019.03.003.

    Abstract

    Phosphoglucomutase 1 (PGM1) encodes the metabolic enzyme that interconverts glucose-6-P and glucose-1-P. Mutations in PGM1 cause impairment in glycogen metabolism and glycosylation, the latter manifesting as a congenital disorder of glycosylation (CDG). This unique metabolic defect leads to abnormal N-glycan synthesis in the endoplasmic reticulum (ER) and the Golgi apparatus (GA). On the basis of the decreased galactosylation in glycan chains, galactose was administered to individuals with PGM1-CDG and was shown to markedly reverse most disease-related laboratory abnormalities. The disease and treatment mechanisms, however, have remained largely elusive. Here, we confirm the clinical benefit of galactose supplementation in PGM1-CDG-affected individuals and obtain significant insights into the functional and biochemical regulation of glycosylation. We report here that, by using tracer-based metabolomics, we found that galactose treatment of PGM1-CDG fibroblasts metabolically re-wires their sugar metabolism, and as such replenishes the depleted levels of galactose-1-P, as well as the levels of UDP-glucose and UDP-galactose, the nucleotide sugars that are required for ER- and GA-linked glycosylation, respectively. To this end, we further show that the galactose in UDP-galactose is incorporated into mature, de novo glycans. Our results also allude to the potential of monosaccharide therapy for several other CDG.
  • Räsänen, O., Seshadri, S., Karadayi, J., Riebling, E., Bunce, J., Cristia, A., Metze, F., Casillas, M., Rosemberg, C., Bergelson, E., & Soderstrom, M. (2019). Automatic word count estimation from daylong child-centered recordings in various language environments using language-independent syllabification of speech. Speech Communication, 113, 63-80. doi:10.1016/j.specom.2019.08.005.

    Abstract

    Automatic word count estimation (WCE) from audio recordings can be used to quantify the amount of verbal communication in a recording environment. One key application of WCE is to measure language input heard by infants and toddlers in their natural environments, as captured by daylong recordings from microphones worn by the infants. Although WCE is nearly trivial for high-quality signals in high-resource languages, daylong recordings are substantially more challenging due to the unconstrained acoustic environments and the presence of near- and far-field speech. Moreover, many use cases of interest involve languages for which reliable ASR systems or even well-defined lexicons are not available. A good WCE system should also perform similarly for low- and high-resource languages in order to enable unbiased comparisons across different cultures and environments. Unfortunately, the current state-of-the-art solution, the LENA system, is based on proprietary software and has only been optimized for American English, limiting its applicability. In this paper, we build on existing work on WCE and present the steps we have taken towards a freely available system for WCE that can be adapted to different languages or dialects with a limited amount of orthographically transcribed speech data. Our system is based on language-independent syllabification of speech, followed by a language-dependent mapping from syllable counts (and a number of other acoustic features) to the corresponding word count estimates. We evaluate our system on samples from daylong infant recordings from six different corpora consisting of several languages and socioeconomic environments, all manually annotated with the same protocol to allow direct comparison. We compare a number of alternative techniques for the two key components in our system: speech activity detection and automatic syllabification of speech. As a result, we show that our system can reach relatively consistent WCE accuracy across multiple corpora and languages (with some limitations). In addition, the system outperforms LENA on three of the four corpora consisting of different varieties of English. We also demonstrate how an automatic neural network-based syllabifier, when trained on multiple languages, generalizes well to novel languages beyond the training data, outperforming two previously proposed unsupervised syllabifiers as a feature extractor for WCE.
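
    To illustrate only the final, language-dependent mapping stage described above (syllable counts plus a few acoustic features mapped to word count estimates), here is a minimal sketch with entirely hypothetical features, coefficients and data; it is not the authors' system, nor LENA.

    # Minimal illustrative sketch of a WCE mapping stage: fit a linear mapping from
    # per-segment syllable counts (and other features) to reference word counts
    # obtained from a small amount of orthographically transcribed speech, then
    # apply it to untranscribed daylong-recording segments. All values are made up.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical per-segment features: [syllable_count, speech_duration_s, mean_energy]
    X_train = np.array([[12, 3.1, 0.42],
                        [ 7, 1.9, 0.35],
                        [20, 5.4, 0.51],
                        [ 4, 1.2, 0.30]])
    y_train = np.array([9, 5, 15, 3])                   # reference word counts from transcripts

    mapper = LinearRegression().fit(X_train, y_train)   # language-dependent mapping

    X_new = np.array([[10, 2.8, 0.40],                  # new, untranscribed segments
                      [16, 4.6, 0.48]])
    print("estimated word counts:", mapper.predict(X_new).round(1))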
  • Ravignani, A., Sonnweber, R.-S., Stobbe, N., & Fitch, W. T. (2013). Action at a distance: Dependency sensitivity in a New World primate. Biology Letters, 9(6): 0130852. doi:10.1098/rsbl.2013.0852.

    Abstract

    Sensitivity to dependencies (correspondences between distant items) in sensory stimuli plays a crucial role in human music and language. Here, we show that squirrel monkeys (Saimiri sciureus) can detect abstract, non-adjacent dependencies in auditory stimuli. Monkeys discriminated between tone sequences containing a dependency and those lacking it, and generalized to previously unheard pitch classes and novel dependency distances. This constitutes the first pattern learning study where artificial stimuli were designed with the species' communication system in mind. These results suggest that the ability to recognize dependencies represents a capability that had already evolved in humans’ last common ancestor with squirrel monkeys, and perhaps before.
  • Ravignani, A. (2019). [Review of the book Animal beauty: On the evolution of biological aesthetics by C. Nüsslein-Volhard]. Animal Behaviour, 155, 171-172. doi:10.1016/j.anbehav.2019.07.005.
  • Ravignani, A. (2019). [Review of the book The origins of musicality ed. by H. Honing]. Perception, 48(1), 102-105. doi:10.1177/0301006618817430.
  • Ravignani, A. (2019). Humans and other musical animals [Review of the book The evolving animal orchestra: In search of what makes us musical by Henkjan Honing]. Current Biology, 29(8), R271-R273. doi:10.1016/j.cub.2019.03.013.
  • Ravignani, A., & de Reus, K. (2019). Modelling animal interactive rhythms in communication. Evolutionary Bioinformatics, 15, 1-14. doi:10.1177/1176934318823558.

    Abstract

    Time is one crucial dimension conveying information in animal communication. Evolution has shaped animals’ nervous systems to produce signals with temporal properties fitting their socio-ecological niches. Many quantitative models of mechanisms underlying rhythmic behaviour exist, spanning insects, crustaceans, birds, amphibians, and mammals. However, these computational and mathematical models are often presented in isolation. Here, we provide an overview of the main mathematical models employed in the study of animal rhythmic communication among conspecifics. After presenting basic definitions and mathematical formalisms, we discuss each individual model. These computational models are then compared using simulated data to uncover similarities and key differences in the underlying mechanisms found across species. Our review of the empirical literature is admittedly limited. We stress the need to use comparative computer simulations – both before and after animal experiments – to better understand animal timing in interaction. We hope this article will serve as a potential first step towards a common computational framework to describe temporal interactions in animals, including humans.

    Additional information

    Supplemental material files
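
    Illustrative code sketch

    A widely used class of model in this literature treats each caller as a phase oscillator that adjusts its timing in response to the other's signals. The sketch below simulates two mutually coupled oscillators with a simple sine-based (Kuramoto-style) coupling rule; the parameter values and the coupling form are illustrative assumptions and are not taken from the models or supplementary material of the paper.

      # Minimal sketch of two interacting rhythmic callers modelled as coupled
      # phase oscillators (Kuramoto-style coupling). Parameter values are
      # illustrative assumptions, not fitted to any species.
      import numpy as np

      def simulate_duet(f1=2.0, f2=2.3, coupling=1.5, dt=0.001, seconds=10.0):
          """Return phase trajectories (radians) of two mutually coupled callers."""
          n = int(seconds / dt)
          phi = np.zeros((n, 2))
          omega = 2 * np.pi * np.array([f1, f2])  # intrinsic call rates (Hz)
          for t in range(1, n):
              p1, p2 = phi[t - 1]
              # each caller nudges its phase towards the other's (sine coupling)
              dphi1 = omega[0] + coupling * np.sin(p2 - p1)
              dphi2 = omega[1] + coupling * np.sin(p1 - p2)
              phi[t] = phi[t - 1] + dt * np.array([dphi1, dphi2])
          return phi

      phi = simulate_duet()
      # wrapped phase difference between the two callers over time
      rel_phase = np.angle(np.exp(1j * (phi[:, 0] - phi[:, 1])))
      print("final phase difference (rad):", round(float(rel_phase[-1]), 3))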
  • Ravignani, A., Verga, L., & Greenfield, M. D. (2019). Interactive rhythms across species: The evolutionary biology of animal chorusing and turn-taking. Annals of the New York Academy of Sciences, 1453(1), 12-21. doi:10.1111/nyas.14230.

    Abstract

    The study of human language is progressively moving toward comparative and interactive frameworks, extending the concept of turn‐taking to animal communication. While such an endeavor will help us understand the interactive origins of language, any theoretical account for cross‐species turn‐taking should consider three key points. First, animal turn‐taking must incorporate biological studies on animal chorusing, namely how different species coordinate their signals over time. Second, while concepts employed in human communication and turn‐taking, such as intentionality, are still debated in animal behavior, lower level mechanisms with clear neurobiological bases can explain much of animal interactive behavior. Third, social behavior, interactivity, and cooperation can be orthogonal, and the alternation of animal signals need not be cooperative. Considering turn‐taking a subset of chorusing in the rhythmic dimension may avoid overinterpretation and enhance the comparability of future empirical work.
  • Ravignani, A. (2019). Everything you always wanted to know about sexual selection in 129 pages [Review of the book Sexual selection: A very short introduction by M. Zuk and L. W. Simmons]. Journal of Mammalogy, 100(6), 2004-2005. doi:10.1093/jmammal/gyz168.
  • Ravignani, A., & Gamba, M. (2019). Evolving musicality [Review of the book The evolving animal orchestra: In search of what makes us musical by Henkjan Honing]. Trends in Ecology and Evolution, 34(7), 583-584. doi:10.1016/j.tree.2019.04.016.
  • Ravignani, A., Kello, C. T., de Reus, K., Kotz, S. A., Dalla Bella, S., Mendez-Arostegui, M., Rapado-Tamarit, B., Rubio-Garcia, A., & de Boer, B. (2019). Ontogeny of vocal rhythms in harbor seal pups: An exploratory study. Current Zoology, 65(1), 107-120. doi:10.1093/cz/zoy055.

    Abstract

    Puppyhood is a very active social and vocal period in a harbor seal's (Phoca vitulina) life. An important feature of vocalizations is their temporal and rhythmic structure, and understanding vocal timing and rhythms in harbor seals is critical to a cross-species hypothesis in evolutionary neuroscience that links vocal learning, rhythm perception, and synchronization. This study utilized analytical techniques that may best capture rhythmic structure in pup vocalizations with the goal of examining whether (1) harbor seal pups show rhythmic structure in their calls and (2) these rhythms change over time. Calls of 3 wild-born seal pups were recorded daily over the course of 1-3 weeks; 3 temporal features were analyzed using 3 complementary techniques. We identified temporal and rhythmic structure in pup calls across different time windows. The calls of harbor seal pups exhibit some degree of temporal and rhythmic organization, which evolves over puppyhood and resembles that of other species' interactive communication. We suggest next steps for investigating call structure in harbor seal pups and propose comparative hypotheses to test in other pinniped species.
  • Ravignani, A., Filippi, P., & Fitch, W. T. (2019). Perceptual tuning influences rule generalization: Testing humans with monkey-tailored stimuli. i-Perception, 10(2), 1-5. doi:10.1177/2041669519846135.

    Abstract

    Comparative research investigating how nonhuman animals generalize patterns of auditory stimuli often uses sequences of human speech syllables and reports limited generalization abilities in animals. Here, we reverse this logic, testing humans with stimulus sequences tailored to squirrel monkeys. When test stimuli are familiar (human voices), humans succeed in two types of generalization. However, when the same structural rule is instantiated over unfamiliar but perceivable sounds within squirrel monkeys’ optimal hearing frequency range, human participants master only one type of generalization. These findings have methodological implications for the design of comparative experiments, which should be fair towards all tested species’ proclivities and limitations.

    Additional information

    Supplemental material files
  • Ravignani, A., Olivera, M. V., Gingras, B., Hofer, R., Hernandez, R. C., Sonnweber, R. S., & Fitch, T. W. (2013). Primate drum kit: A system for studying acoustic pattern production by non-human primates using acceleration and strain sensors. Sensors, 13(8), 9790-9820. doi:10.3390/s130809790.

    Abstract

    The possibility of achieving experimentally controlled, non-vocal acoustic production in non-human primates is a key step towards testing a number of hypotheses on primate behavior and cognition. However, no device or solution is currently available; the use of sensors in non-human animals is almost exclusively devoted to applications in the food industry and animal surveillance. Specifically, no device exists which simultaneously allows: (i) spontaneous production of sound or music by non-human animals via object manipulation, (ii) systematic recording of data sensed from these movements, and (iii) the possibility to alter the acoustic feedback properties of the object by remote control. We present two prototypes developed for use with chimpanzees (Pan troglodytes) which, while fulfilling the aforementioned requirements, allow sounds to be arbitrarily associated with physical object movements. The prototypes differ in sensing technology, cost, intended use, and construction requirements. One prototype uses four piezoelectric elements embedded between layers of Plexiglas and foam; strain data are sent to a computer running Python through an Arduino board. A second prototype consists of a modified Wii Remote contained in a gum toy; acceleration data are sent via Bluetooth to a computer running Max/MSP. We successfully pilot-tested the first device with a group of chimpanzees and foresee using these devices for a range of cognitive experiments.
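
    Illustrative code sketch

    The first prototype described above streams strain readings from an Arduino to a computer running Python. The sketch below shows one plausible way to log such a stream with pyserial; the serial port, baud rate, and the one-value-per-line message format are assumptions for illustration and are not taken from the published device.

      # Minimal sketch of logging strain-sensor readings streamed from an Arduino
      # over a serial connection, in the spirit of the first prototype described
      # above. Port name, baud rate, and the "one value per line" message format
      # are assumptions for illustration, not the published device's protocol.
      import csv
      import time

      import serial  # provided by the pyserial package

      PORT = "/dev/ttyACM0"   # assumed port; adjust for your machine
      BAUD = 115200

      def log_strain(outfile="strain_log.csv", seconds=60.0):
          """Read one numeric strain value per line and store it with a timestamp."""
          with serial.Serial(PORT, BAUD, timeout=1) as link, \
                  open(outfile, "w", newline="") as f:
              writer = csv.writer(f)
              writer.writerow(["time_s", "strain"])
              start = time.time()
              while time.time() - start < seconds:
                  line = link.readline().decode("ascii", errors="ignore").strip()
                  if not line:
                      continue
                  try:
                      writer.writerow([time.time() - start, float(line)])
                  except ValueError:
                      pass  # skip malformed lines

      if __name__ == "__main__":
          log_strain()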
  • Ravignani, A. (2019). Singing seals imitate human speech. Journal of Experimental Biology, 222: jeb208447. doi:10.1242/jeb.208447.
  • Ravignani, A. (2019). Rhythm and synchrony in animal movement and communication. Current Zoology, 65(1), 77-81. doi:10.1093/cz/zoy087.

    Abstract

    Animal communication and motoric behavior develop over time. Often, this temporal dimension has communicative relevance and is organized according to structural patterns. In other words, time is a crucial dimension for rhythm and synchrony in animal movement and communication. Rhythm is defined as temporal structure at a second-millisecond time scale (Kotz et al. 2018). Synchrony is defined as precise co-occurrence of 2 behaviors in time (Ravignani 2017).

    Rhythm, synchrony, and other forms of temporal interaction are taking center stage in animal behavior and communication. Several critical questions include, among others: what species show which rhythmic predispositions? How does a species’ sensitivity for, or proclivity towards, rhythm arise? What are the species-specific functions of rhythm and synchrony, and are there functional trends across species? How did similar or different rhythmic behaviors evolve in different species? This Special Column aims at collecting and contrasting research from different species, perceptual modalities, and empirical methods. The focus is on timing, rhythm and synchrony in the second-millisecond range.

    Three main approaches are commonly adopted to study animal rhythms, with a focus on: 1) spontaneous individual rhythm production, 2) group rhythms, or 3) synchronization experiments. I concisely introduce them below (see also Kotz et al. 2018; Ravignani et al. 2018).
  • Ravignani, A., Dalla Bella, S., Falk, S., Kello, C. T., Noriega, F., & Kotz, S. A. (2019). Rhythm in speech and animal vocalizations: A cross‐species perspective. Annals of the New York Academy of Sciences, 1453(1), 79-98. doi:10.1111/nyas.14166.

    Abstract

    Why does human speech have rhythm? As we cannot travel back in time to witness how speech developed its rhythmic properties and why humans have the cognitive skills to process them, we rely on alternative methods to find out. One powerful tool is the comparative approach: studying the presence or absence of cognitive/behavioral traits in other species to determine which traits are shared between species and which are recent human inventions. Vocalizations of many species exhibit temporal structure, but little is known about how these rhythmic structures evolved, are perceived and produced, their biological and developmental bases, and communicative functions. We review the literature on rhythm in speech and animal vocalizations as a first step toward understanding similarities and differences across species. We extend this review to quantitative techniques that are useful for computing rhythmic structure in acoustic sequences and hence facilitate cross‐species research. We report links between vocal perception and motor coordination and the differentiation of rhythm based on hierarchical temporal structure. While still far from a complete cross‐species perspective of speech rhythm, our review puts some pieces of the puzzle together.
  • Ravignani, A. (2019). Seeking shared ground in space. Science, 366(6466), 696. doi:10.1126/science.aay6955.
  • Ravignani, A. (2019). Timing of antisynchronous calling: A case study in a harbor seal pup (Phoca vitulina). Journal of Comparative Psychology, 133(2), 272-277. doi:10.1037/com0000160.

    Abstract

    Alternative mathematical models predict differences in how animals adjust the timing of their calls. Differences can be measured as the effect of the timing of a conspecific call on the rate and period of calling of a focal animal, and the lag between the two. Here, I test these alternative hypotheses by tapping into harbor seals’ (Phoca vitulina) mechanisms for spontaneous timing. Both socioecology and vocal behavior of harbor seals make them an interesting model species to study call rhythm and timing. Here, a wild-born seal pup was tested in controlled laboratory conditions. Based on previous recordings of her vocalizations and those of others, I designed playback experiments adapted to that specific animal. The call onsets of the animal were measured as a function of tempo, rhythmic regularity, and spectral properties of the playbacks. The pup adapted the timing of her calls in response to conspecifics’ calls. Rather than responding at a fixed time delay, the pup adjusted her calls’ onset to occur at a fraction of the playback tempo, showing a relative-phase antisynchrony. Experimental results were confirmed via computational modeling. This case study lends preliminary support to a classic mathematical model of animal behavior—Hamilton’s selfish herd—in the acoustic domain.
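
    Illustrative code sketch

    The relative-phase analysis described above can be illustrated with a short, generic sketch: each call onset is expressed as a phase within the playback cycle it falls into, so values clustering around 0.5 would indicate antisynchrony rather than a fixed delay after each playback. This is a generic illustration, not the analysis code used in the study.

      # Minimal sketch of a relative-phase analysis of call timing: each call
      # onset is expressed as a phase (0-1) within the enclosing playback cycle.
      import numpy as np

      def relative_phases(call_onsets, playback_onsets):
          """Phase of each call onset within the playback inter-onset interval it falls into."""
          playback_onsets = np.sort(np.asarray(playback_onsets, dtype=float))
          phases = []
          for call in np.asarray(call_onsets, dtype=float):
              i = np.searchsorted(playback_onsets, call, side="right") - 1
              if 0 <= i < len(playback_onsets) - 1:
                  cycle = playback_onsets[i + 1] - playback_onsets[i]
                  phases.append((call - playback_onsets[i]) / cycle)
          return np.array(phases)

      # Toy example: playbacks every 2 s; calls landing near phase 0.5 would
      # suggest antisynchrony rather than a fixed response delay.
      playbacks = np.arange(0, 20, 2.0)
      calls = [1.1, 3.0, 4.9, 7.2, 9.1]
      print(np.round(relative_phases(calls, playbacks), 2))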
  • Ravignani, A. (2019). Understanding mammals, hands-on [Review of the book Mammalogy techniques lab manual by J. M. Ryan]. Journal of Mammalogy, 100(5), 1695-1696. doi:10.1093/jmammal/gyz132.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2019). Larger communities create more systematic languages. Proceedings of the Royal Society B: Biological Sciences, 286(1907): 20191262. doi:10.1098/rspb.2019.1262.

    Abstract

    Understanding worldwide patterns of language diversity has long been a goal for evolutionary scientists, linguists and philosophers. Research over the past decade has suggested that linguistic diversity may result from differences in the social environments in which languages evolve. Specifically, recent work found that languages spoken in larger communities typically have more systematic grammatical structures. However, in the real world, community size is confounded with other social factors such as network structure and the number of second-language learners in the community, and it is often assumed that linguistic simplification is driven by these factors instead. Here, we show that in contrast to previous assumptions, community size has a unique and important influence on linguistic structure. We experimentally examine the live formation of new languages created in the laboratory by small and larger groups, and find that larger groups of interacting participants develop more systematic languages over time, and do so faster and more consistently than small groups. Small groups also vary more in their linguistic behaviours, suggesting that small communities are more vulnerable to drift. These results show that community size predicts patterns of language diversity, and suggest that an increase in community size might have contributed to language evolution.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2019). Compositional structure can emerge without generational transmission. Cognition, 182, 151-164. doi:10.1016/j.cognition.2018.09.010.

    Abstract

    Experimental work in the field of language evolution has shown that novel signal systems become more structured over time. In a recent paper, Kirby, Tamariz, Cornish, and Smith (2015) argued that compositional languages can emerge only when languages are transmitted across multiple generations. In the current paper, we show that compositional languages can emerge in a closed community within a single generation. We conducted a communication experiment in which we tested the emergence of linguistic structure in different micro-societies of four participants, who interacted in alternating dyads using an artificial language to refer to novel meanings. Importantly, the communication included two real-world aspects of language acquisition and use, which introduce compressibility pressures: (a) multiple interaction partners and (b) an expanding meaning space. Our results show that languages become significantly more structured over time, with participants converging on shared, stable, and compositional lexicons. These findings indicate that new learners are not necessary for the formation of linguistic structure within a community, and have implications for related fields such as developing sign languages and creoles.
  • Reber, S. A., Šlipogor, V., Oh, J., Ravignani, A., Hoeschele, M., Bugnyar, T., & Fitch, W. T. (2019). Common marmosets are sensitive to simple dependencies at variable distances in an artificial grammar. Evolution and Human Behavior, 40(2), 214-221. doi:10.1016/j.evolhumbehav.2018.11.006.

    Abstract

    Recognizing that two elements within a sequence of variable length depend on each other is a key ability in understanding the structure of language and music. Perception of such interdependencies has previously been documented in chimpanzees in the visual domain and in human infants and common squirrel monkeys with auditory playback experiments, but it remains unclear whether it typifies primates in general. Here, we investigated the ability of common marmosets (Callithrix jacchus) to recognize and respond to such dependencies. We tested subjects in a familiarization-discrimination playback experiment using stimuli composed of pure tones that either conformed or did not conform to a grammatical rule. After familiarization to sequences with dependencies, marmosets spontaneously discriminated between sequences containing and lacking dependencies (‘consistent’ and ‘inconsistent’, respectively), independent of stimulus length. Marmosets looked more often to the sound source when hearing sequences consistent with the familiarization stimuli, as previously found in human infants. Crucially, looks were coded automatically by computer software, avoiding human bias. Our results support the hypothesis that the ability to perceive dependencies at variable distances was already present in the common ancestor of all anthropoid primates (Simiiformes).
  • Redmann, A., FitzPatrick, I., & Indefrey, P. (2019). The time course of colour congruency effects in picture naming. Acta Psychologica, 196, 96-108. doi:10.1016/j.actpsy.2019.04.005.

    Abstract

    In our interactions with people and objects in the world around us, as well as in communicating our thoughts, we rely on the use of conceptual knowledge stored in long-term memory. From a frame-theoretic point of view, a concept is represented by a central node and recursive attribute-value structures further specifying the concept. The present study explores whether and how the activation of an attribute within a frame might influence access to the concept's name in language production, focussing on the colour attribute. Colour has been shown to contribute to object recognition, naming, and memory retrieval, and there is evidence that colour plays a different role in naming objects that have a typical colour (high colour-diagnostic objects such as tomatoes) than in naming objects without a typical colour (low colour-diagnostic objects such as bicycles). We report two behavioural experiments designed to reveal potential effects of the activation of an object's typical colour on naming the object in a picture-word interference paradigm. This paradigm was used to investigate whether naming is facilitated when typical colours are presented alongside the to-be-named picture (e.g., the word “red” superimposed on the picture of a tomato), compared to atypical colours (such as “brown”), unrelated adjectives (such as “fast”), or random letter strings. To further explore the time course of these potential effects, the words were presented at different time points relative to the to-be-named picture (Exp. 1: −400 ms, Exp. 2: −200 ms, 0 ms, and +200 ms). By including both high and low colour-diagnostic objects, it was possible to explore whether the activation of a colour differentially affects naming of objects that have a strong association with a typical colour. The results showed that (pre-)activation of the appropriate colour attribute facilitated naming compared to an inappropriate colour. This was only the case for objects closely connected with a typical colour. Consequences of these findings for frame-theoretic accounts of conceptual representation are discussed.
  • Reesink, G. (2013). Expressing the GIVE event in Papuan languages: A preliminary survey. Linguistic Typology, 17(2), 217-266. doi:10.1515/lity-2013-0010.

    Abstract

    The linguistic expression of the GIVE event is investigated in a sample of 72 Papuan languages, 33 belonging to the Trans New Guinea family, 39 of various non-TNG lineages. Irrespective of the verbal template (prefix, suffix, or no indexation of undergoer), in the majority of languages the recipient is marked as the direct object of a monotransitive verb, which sometimes involves stem suppletion for the recipient. While a few languages allow verbal affixation for all three arguments, a number of languages challenge the universal claim that the 'give' verb always has three arguments.
  • Regier, T., Khetarpal, N., & Majid, A. (2013). Inferring semantic maps. Linguistic Typology, 17, 89-105. doi:10.1515/lity-2013-0003.

    Abstract

    Semantic maps are a means of representing universal structure underlying cross-language semantic variation. However, no algorithm has existed for inferring a graph-based semantic map from data. Here, we note that this open problem is formally identical to the known problem of inferring a social network from disease outbreaks. From this identity it follows that semantic map inference is computationally intractable, but that an efficient approximation algorithm for it exists. We demonstrate that this algorithm produces sensible semantic maps from two existing bodies of data. We conclude that universal semantic graph structure can be automatically approximated from cross-language semantic data.
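
    Illustrative code sketch

    The inference problem described above asks for a graph over meanings in which every term's extension (the set of meanings it covers in some language) forms a connected subgraph, using as few edges as possible. The sketch below is one simple greedy approximation in this spirit; it is not necessarily the specific approximation algorithm the authors use.

      # Minimal sketch of greedy semantic-map inference: build a graph over
      # meanings such that every term's extension is connected, adding few edges.
      # A simple greedy approximation in this spirit, not necessarily the
      # authors' specific algorithm.
      from itertools import combinations

      def n_components(nodes, edges):
          """Number of connected components of the subgraph induced by `nodes`."""
          parent = {n: n for n in nodes}
          def find(x):
              while parent[x] != x:
                  parent[x] = parent[parent[x]]
                  x = parent[x]
              return x
          for a, b in edges:
              if a in parent and b in parent:
                  parent[find(a)] = find(b)
          return len({find(n) for n in nodes})

      def infer_semantic_map(terms):
          """terms: list of sets of meanings. Returns a set of undirected edges."""
          edges = set()
          # only edges between meanings that co-occur in some term can help
          candidates = {tuple(sorted(p)) for t in terms for p in combinations(t, 2)}
          def cost():
              return sum(n_components(t, edges) for t in terms)
          while cost() > len(terms):  # some term's extension is still disconnected
              best = min(candidates - edges,
                         key=lambda e: sum(n_components(t, edges | {e}) for t in terms))
              edges.add(best)
          return edges

      # Toy cross-language data: each set is the extension of one term
      terms = [{"on", "over"}, {"over", "above"}, {"on", "attached"}]
      print(sorted(infer_semantic_map(terms)))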
  • Reinisch, E., Weber, A., & Mitterer, H. (2013). Listeners retune phoneme categories across languages. Journal of Experimental Psychology: Human Perception and Performance, 39, 75-86. doi:10.1037/a0027979.

    Abstract

    Native listeners adapt to noncanonically produced speech by retuning phoneme boundaries by means of lexical knowledge. We asked whether a second language lexicon can also guide category retuning and whether perceptual learning transfers from a second language (L2) to the native language (L1). During a Dutch lexical-decision task, German and Dutch listeners were exposed to unusual pronunciation variants in which word-final /f/ or /s/ was replaced by an ambiguous sound. At test, listeners categorized Dutch minimal word pairs ending in sounds along an /f/–/s/ continuum. Dutch L1 and German L2 listeners showed boundary shifts of a similar magnitude. Moreover, following exposure to Dutch-accented English, Dutch listeners also showed comparable effects of category retuning when they heard the same speaker speak her native language (Dutch) during the test. The former result suggests that lexical representations in a second language are specific enough to support lexically guided retuning, and the latter implies that production patterns in a second language are deemed a stable speaker characteristic likely to transfer to the native language; thus retuning of phoneme categories applies across languages.
  • Reinisch, E., & Sjerps, M. J. (2013). The uptake of spectral and temporal cues in vowel perception is rapidly influenced by context. Journal of Phonetics, 41, 101-116. doi:10.1016/j.wocn.2013.01.002.

    Abstract

    Speech perception is dependent on auditory information within phonemes such as spectral or temporal cues. The perception of those cues, however, is affected by auditory information in surrounding context (e.g., a fast context sentence can make a target vowel sound subjectively longer). In a two-by-two design the current experiments investigated when these different factors influence vowel perception. Dutch listeners categorized minimal word pairs such as /tɑk/–/taːk/ (“branch”–“task”) embedded in a context sentence. Critically, the Dutch /ɑ/–/aː/ contrast is cued by spectral and temporal information. We varied the second formant (F2) frequencies and durations of the target vowels. Independently, we also varied the F2 and duration of all segments in the context sentence. The timecourse of cue uptake on the targets was measured in a printed-word eye-tracking paradigm. Results show that the uptake of spectral cues slightly precedes the uptake of temporal cues. Furthermore, acoustic manipulations of the context sentences influenced the uptake of cues in the target vowel immediately. That is, listeners did not need additional time to integrate spectral or temporal cues of a target sound with auditory information in the context. These findings argue for an early locus of contextual influences in speech perception.
  • Reinisch, E., Jesse, A., & Nygaard, L. C. (2013). Tone of voice guides word learning in informative referential contexts. Quarterly Journal of Experimental Psychology, 66, 1227-1240. doi:10.1080/17470218.2012.736525.

    Abstract

    Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., “daxen”) spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant-ant for big-small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalise them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.
  • Reis, A., Guerreiro, M., & Petersson, K. M. (2003). A sociodemographic and neuropsychological characterization of an illiterate population. Applied Neuropsychology, 10, 191-204. doi:10.1207/s15324826an1004_1.

    Abstract

    The objectives of this article are to characterize the performance and to discuss the performance differences between literate and illiterate participants in a well-defined study population. We describe the participant-selection procedure used to investigate this population. Three groups with similar sociocultural backgrounds living in a relatively homogeneous fishing community in southern Portugal were characterized in terms of socioeconomic and sociocultural background variables and compared on a simple neuropsychological test battery; specifically, a literate group with more than 4 years of education (n = 9), a literate group with 4 years of education (n = 26), and an illiterate group (n = 31) were included in this study. We compare and discuss our results with other similar studies on the effects of literacy and illiteracy. The results indicate that naming and identification of real objects, verbal fluency using ecologically relevant semantic criteria, verbal memory, and orientation are not affected by literacy or level of formal education. In contrast, verbal working memory assessed with digit span, verbal abstraction, long-term semantic memory, and calculation (i.e., multiplication) are significantly affected by the level of literacy. We indicate that it is possible, with proper participant-selection procedures, to exclude general cognitive impairment and to control important sociocultural factors that potentially could introduce bias when studying the specific effects of literacy and level of formal education on cognitive brain function.
  • Reis, A., & Petersson, K. M. (2003). Educational level, socioeconomic status and aphasia research: A comment on Connor et al. (2001)- Effect of socioeconomic status on aphasia severity and recovery. Brain and Language, 87, 449-452. doi:10.1016/S0093-934X(03)00140-8.

    Abstract

    Is there a relation between socioeconomic factors and aphasia severity and recovery? Connor, Obler, Tocco, Fitzpatrick, and Albert (2001) describe correlations between the educational level and socioeconomic status of aphasic subjects with aphasia severity and subsequent recovery. As stated in the introduction by Connor et al. (2001), studies of the influence of educational level and literacy (or illiteracy) on aphasia severity have yielded conflicting results, while no significant link between socioeconomic status and aphasia severity and recovery has been established. In this brief note, we will comment on their findings and conclusions, beginning first with a brief review of literacy and aphasia research, and complexities encountered in these fields of investigation. This serves as a general background to our specific comments on Connor et al. (2001), which will be focusing on methodological issues and the importance of taking normative values in consideration when subjects with different socio-cultural or socio-economic backgrounds are assessed.
  • De Resende, N. C. A., Mota, M. B., & Seuren, P. A. M. (2019). The processing of grammatical gender agreement in Brazilian Portuguese: ERP evidence in favor of a single route. Journal of Psycholinguistic Research, 48(1), 181-198. doi:10.1007/s10936-018-9598-z.

    Abstract

    The present study used event-related potentials to investigate whether the processing of grammatical gender agreement involving gender-regular and irregular forms recruits the same or distinct neurocognitive mechanisms, and whether different grammatical gender agreement conditions elicit the same or diverse ERP signals. Native speakers of Brazilian Portuguese read sentences containing congruent and incongruent grammatical gender agreement between a determiner and a regular or an irregular form (condition 1) and between a regular or an irregular form and an adjective (condition 2). We found a biphasic LAN/P600 effect for gender agreement violations involving regular and irregular forms in both conditions. However, in condition 2, trials with incongruent regular forms elicited more positive ongoing waveforms than trials with incongruent irregular forms. Our findings suggest that gender agreement between determiners and nouns recruits the same neurocognitive mechanisms regardless of the nouns’ form and that, depending on the grammatical class of the words involved in gender agreement, differences in ERP signals can emerge.
  • Riedel, M., Wittenburg, P., Reetz, J., van de Sanden, M., Rybicki, J., von Vieth, B. S., Fiameni, G., Mariani, G., Michelini, A., Cacciari, C., Elbers, W., Broeder, D., Verkerk, R., Erastova, E., Lautenschlaeger, M., Budich, R. G., Thielmann, H., Coveney, P., Zasada, S., Haidar, A., Buechner, O., Manzano, C., Memon, S., Memon, S., Helin, H., Suhonen, J., Lecarpentier, D., Koski, K., & Lippert, T. (2013). A data infrastructure reference model with applications: Towards realization of a ScienceTube vision with a data replication service. Journal of Internet Services and Applications, 4, 1-17. doi:10.1186/1869-0238-4-1.

    Abstract

    Scientific user communities have worked with data for many years and thus already have a wide variety of data infrastructures in production today. The aim of this paper is therefore not to create yet another general data architecture that would fail to be adopted by individual user communities. Instead, this contribution designs a reference model with abstract entities that can federate existing concrete infrastructures under one umbrella. A reference model is an abstract framework for understanding significant entities and the relationships between them, and thus helps in understanding existing data infrastructures when comparing them in terms of functionality, services, and boundary conditions. An architecture derived from such a reference model can then be used to create a federated architecture that builds on existing infrastructures and aligns them to a major common vision. This common vision is named 'ScienceTube' in this contribution and determines the high-level goal that the reference model aims to support. The paper describes how a well-focused use case around data replication and its related activities in the EUDAT project provides a first step towards this vision. Concrete stakeholder requirements arising from scientific end users, such as those of the European Strategy Forum on Research Infrastructures (ESFRI) projects, underpin this contribution with clear evidence that the EUDAT activities are bottom-up, providing real solutions towards the often only vaguely described 'high-level big data challenges'. The federated approach, which takes advantage of community and data centers with large computational resources, further shows how data replication services enable data-intensive computing on terabytes or even petabytes of data emerging from ESFRI projects.
  • Rietveld, C. A., Medland, S. E., Derringer, J., Yang, J., Esko, T., Martin, N. W., Westra, H.-J., Shakhbazov, K., Abdellaoui, A., Agrawal, A., Albrecht, E., Alizadeh, B. Z., Amin, N., Barnard, J., Baumeister, S. E., Benke, K. S., Bielak, L. F., Boatman, J. A., Boyle, P. A., Davies, G., and 184 more, & Study LifeLines Cohort (2013). GWAS of 126,559 individuals identifies genetic variants associated with educational attainment. Science, 340(6139), 1467-1471. doi:10.1126/science.1235488.

    Abstract

    A genome-wide association study (GWAS) of educational attainment was conducted in a discovery sample of 101,069 individuals and a replication sample of 25,490. Three independent single-nucleotide polymorphisms (SNPs) are genome-wide significant (rs9320913, rs11584700, rs4851266), and all three replicate. Estimated effect sizes are small (coefficient of determination R(2) ≈ 0.02%), approximately 1 month of schooling per allele. A linear polygenic score from all measured SNPs accounts for ≈2% of the variance in both educational attainment and cognitive function. Genes in the region of the loci have previously been associated with health, cognitive, and central nervous system phenotypes, and bioinformatics analyses suggest the involvement of the anterior caudate nucleus. These findings provide promising candidate SNPs for follow-up work, and our effect size estimates can anchor power analyses in social-science genetics.

    Additional information

    Rietveld.SM.revision.2.pdf
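
    Illustrative code sketch

    The linear polygenic score mentioned in the abstract is, generically, a weighted sum of effect-allele counts with each SNP's estimated effect size as its weight. The sketch below shows that generic computation only; the genotypes and effect sizes are made up for illustration, and the actual score uses all measured SNPs rather than three.

      # Minimal sketch of a linear polygenic score: for each individual, sum the
      # effect-allele counts at each SNP weighted by that SNP's estimated effect
      # size. Genotypes and effect sizes here are made up for illustration.
      import numpy as np

      def polygenic_score(genotypes, effect_sizes):
          """genotypes: (n_individuals, n_snps) allele counts in {0, 1, 2};
          effect_sizes: (n_snps,) per-allele GWAS estimates."""
          return np.asarray(genotypes, dtype=float) @ np.asarray(effect_sizes, dtype=float)

      # Toy data: 3 individuals, 3 SNPs (e.g., rs9320913, rs11584700, rs4851266)
      genotypes = [[0, 1, 2],
                   [2, 2, 1],
                   [1, 0, 0]]
      effects = [0.08, 0.07, 0.09]   # illustrative per-allele effects, not the published estimates
      print(polygenic_score(genotypes, effects))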
  • Rissman, L., & Majid, A. (2019). Thematic roles: Core knowledge or linguistic construct? Psychonomic Bulletin & Review, 26(6), 1850-1869. doi:10.3758/s13423-019-01634-5.

    Abstract

    The status of thematic roles such as Agent and Patient in cognitive science is highly controversial: To some they are universal components of core knowledge, to others they are scholarly fictions without psychological reality. We address this debate by posing two critical questions: to what extent do humans represent events in terms of abstract role categories, and to what extent are these categories shaped by universal cognitive biases? We review a range of literature that contributes answers to these questions: psycholinguistic and event cognition experiments with adults, children, and infants; typological studies grounded in cross-linguistic data; and studies of emerging sign languages. We pose these questions for a variety of roles and find that the answers depend on the role. For Agents and Patients, there is strong evidence for abstract role categories and a universal bias to distinguish the two roles. For Goals and Recipients, we find clear evidence for abstraction but mixed evidence as to whether there is a bias to encode Goals and Recipients as part of one or two distinct categories. Finally, we discuss the Instrumental role and do not find clear evidence for either abstraction or universal biases to structure instrumental categories.
  • Roberts, S. G. (2013). [Review of the book The Language of Gaming by A. Ensslin]. Discourse & Society, 24(5), 651-653. doi:10.1177/0957926513487819a.
  • Roberts, S. G., & Winters, J. (2013). Linguistic diversity and traffic accidents: Lessons from statistical studies of cultural traits. PLoS One, 8(8): e70902. doi:10.1371/journal.pone.0070902.

    Abstract

    The recent proliferation of digital databases of cultural and linguistic data, together with new statistical techniques becoming available, has led to a rise in so-called nomothetic studies [1]–[8]. These seek relationships between demographic variables and cultural traits from large, cross-cultural datasets. The insights from these studies are important for understanding how cultural traits evolve. While these studies are fascinating and are good at generating testable hypotheses, they may underestimate the probability of finding spurious correlations between cultural traits. Here we show that this kind of approach can find links between such unlikely cultural traits as traffic accidents, levels of extra-marital sex, political collectivism and linguistic diversity. This suggests that spurious correlations, due to historical descent, geographic diffusion or increased noise-to-signal ratios in large datasets, are much more likely than some studies admit. We suggest some criteria for the evaluation of nomothetic studies and some practical solutions to the problems. Since some of these studies are receiving media attention without a widespread understanding of the complexities of the issue, there is a risk that poorly controlled studies could affect policy. We hope to contribute towards a general skepticism for correlational studies by demonstrating the ease of finding apparently rigorous correlations between cultural traits. Despite this, we see well-controlled nomothetic studies as useful tools for the development of theories.
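
    Illustrative code sketch

    The core warning above, that non-independence due to shared history or diffusion inflates apparent correlations, can be illustrated with a small generic simulation: two traits that drift independently along the same ordering of societies (modelled here as random walks) correlate far more strongly, on average, than two sets of independent draws. This is a textbook-style illustration, not a reanalysis of the paper's data.

      # Minimal illustration of how non-independence inflates correlations between
      # cultural traits: traits drifting along the same lineage (random walks)
      # correlate much more often than independent draws. A generic illustration,
      # not a reanalysis of the paper's data.
      import numpy as np

      rng = np.random.default_rng(1)

      def mean_abs_r(simulate, n_societies=50, n_runs=1000):
          """Average absolute Pearson correlation between two simulated traits."""
          rs = []
          for _ in range(n_runs):
              x, y = simulate(n_societies)
              rs.append(abs(np.corrcoef(x, y)[0, 1]))
          return float(np.mean(rs))

      # traits drawn independently for each society (no shared history)
      independent = lambda n: (rng.normal(size=n), rng.normal(size=n))
      # traits drifting along the same lineage: each is a random walk, so
      # neighbouring societies are non-independent (descent / diffusion)
      related = lambda n: (np.cumsum(rng.normal(size=n)), np.cumsum(rng.normal(size=n)))

      print("mean |r|, independent societies :", round(mean_abs_r(independent), 2))  # small
      print("mean |r|, historically related  :", round(mean_abs_r(related), 2))      # much larger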
  • Roberts, L. (2013). Processing of gender and number agreement in late Spanish bilinguals: A commentary on Sagarra and Herschensohn. International Journal of Bilingualism, 17(5), 628-633. doi:10.1177/1367006911435693.

    Abstract

    Sagarra and Herschensohn’s article examines English L2 learners’ knowledge of Spanish gender and number agreement and their sensitivity to gender and number agreement violations (e.g. *El ingeniero presenta el prototipo *famosa/*famosos en la conferencia) during real-time sentence processing. It raises some interesting questions that are central to both acquisition and processing research. In the following paper, I discuss a selection of these topics, for instance, what types of knowledge may or may not be available/accessible during real-time L2 processing at different proficiency levels, what the differences may be between the processing of number versus gender concord, and perhaps most importantly, the problem of how to characterize the relationship between the grammar and the parser, both in general terms and in the context of language acquisition.
  • Roberts, L., Matsuo, A., & Duffield, N. (2013). Processing VP-ellipsis and VP-anaphora with structurally parallel and nonparallel antecedents: An eyetracking study. Language and Cognitive Processes, 28, 29-47. doi:10.1080/01690965.2012.676190.

    Abstract

    In this paper, we report on an eye-tracking study investigating the processing of English VP-ellipsis (John took the rubbish out. Fred did [] too) (VPE) and VP-anaphora (John took the rubbish out. Fred did it too) (VPA) constructions, with syntactically parallel versus nonparallel antecedent clauses (e.g., The rubbish was taken out by John. Fred did [] too/Fred did it too). The results show first that VPE involves greater processing costs than VPA overall. Second, although the structural nonparallelism of the antecedent clause elicited a processing cost for both anaphor types, there was a difference in the timing and the strength of this parallelism effect: it was earlier and more fleeting for VPA, as evidenced by regression path times, whereas the effect occurred later with VPE completions, showing up in second and total fixation times measures, and continuing on into the reading of the adjacent text. Taking the observed differences between the processing of the two anaphor types together with other research findings in the literature, we argue that our data support the idea that in the case of VPE, the VP from the antecedent clause necessitates more computation at the elision site before it is linked to its antecedent than is the case for VPA.

  • Rodd, J., Bosker, H. R., Ten Bosch, L., & Ernestus, M. (2019). Deriving the onset and offset times of planning units from acoustic and articulatory measurements. The Journal of the Acoustical Society of America, 145(2), EL161-EL167. doi:10.1121/1.5089456.

    Abstract

    Many psycholinguistic models of speech sequence planning make claims about the onset and offset times of planning units, such as words, syllables, and phonemes. These predictions typically go untested, however, since psycholinguists have assumed that the temporal dynamics of the speech signal is a poor index of the temporal dynamics of the underlying speech planning process. This article argues that this problem is tractable, and presents and validates two simple metrics that derive planning unit onset and offset times from the acoustic signal and articulatographic data.
  • Roelofs, A. (2003). Shared phonological encoding processes and representations of languages in bilingual speakers. Language and Cognitive Processes, 18(2), 175-204. doi:10.1080/01690960143000515.

    Abstract

    Four form-preparation experiments investigated whether aspects of phonological encoding processes and representations are shared between languages in bilingual speakers. The participants were Dutch–English bilinguals. Experiment 1 showed that the basic rightward incrementality revealed in studies for the first language is also observed for second-language words. In Experiments 2 and 3, speakers were given words to produce that did or did not share onset segments, and that came or did not come from different languages. It was found that when onsets were shared among the response words, those onsets were prepared, even when the words came from different languages. Experiment 4 showed that preparation requires prior knowledge of the segments and that knowledge about their phonological features yields no effect. These results suggest that both first- and second-language words are phonologically planned through the same serial order mechanism and that the representations of segments common to the languages are shared.
  • Roelofs, A., & Piai, V. (2013). Associative facilitation in the Stroop task: Comment on Mahon et al. Cortex, 49, 1767-1769. doi:10.1016/j.cortex.2013.03.001.

    Abstract

    First paragraph: A fundamental issue in psycholinguistics concerns how speakers retrieve intended words from long-term memory. According to a selection by competition account (e.g., Levelt et al., 1999), conceptually driven word retrieval involves the activation of a set of candidate words and a competitive selection of the intended word from this set.
  • Roelofs, A., Piai, V., & Schriefers, H. (2013). Context effects and selective attention in picture naming and word reading: Competition versus response exclusion. Language and Cognitive Processes, 28, 655-671. doi:10.1080/01690965.2011.615663.

    Abstract

    For several decades, context effects in picture naming and word reading have been extensively investigated. However, researchers have found no agreement on the explanation of the effects. Whereas it has long been assumed that several types of effect reflect competition in word selection, recently it has been argued that these effects reflect the exclusion of articulatory responses from an output buffer. Here, we first critically evaluate the findings on context effects in picture naming that have been taken as evidence against the competition account, and we argue that the findings are, in fact, compatible with the competition account. Moreover, some of the findings appear to challenge rather than support the response exclusion account. Next, we compare the response exclusion and competition accounts with respect to their ability to explain data on word reading. It appears that response exclusion does not account well for context effects on word reading times, whereas computer simulations reveal that a competition model like WEAVER++ accounts for the findings.

  • Roelofs, A. (2003). Goal-referenced selection of verbal action: Modeling attentional control in the Stroop task. Psychological Review, 110(1), 88-125.

    Abstract

    This article presents a new account of the color-word Stroop phenomenon (J. R. Stroop, 1935) based on an implemented model of word production, WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b; A. Roelofs, 1992, 1997c). Stroop effects are claimed to arise from processing interactions within the language-production architecture and explicit goal-referenced control. WEAVER++ successfully simulates 16 classic data sets, mostly taken from the review by C. M. MacLeod (1991), including incongruency, congruency, reverse-Stroop, response-set, semantic-gradient, time-course, stimulus, spatial, multiple-task, manual, bilingual, training, age, and pathological effects. Three new experiments tested the account against alternative explanations. It is shown that WEAVER++ offers a more satisfactory account of the data than other models.
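
    Illustrative code sketch

    The competitive selection mechanism at the heart of WEAVER++-style models is often formalized with a Luce choice ratio: the moment-to-moment probability of selecting the target word depends on its activation relative to the summed activation of all response candidates, so a strongly activated distractor delays selection. The sketch below is a generic illustration of that idea under a constant-activation assumption, not the published WEAVER++ implementation.

      # Generic illustration of competitive word selection via a Luce choice ratio:
      # the per-step hazard of selecting the target is its activation divided by
      # the summed activation of all candidates, so a highly active distractor
      # (e.g., an incongruent colour word) delays selection. This is a sketch of
      # the general idea, not the published WEAVER++ model.
      def expected_selection_time(target_act, competitor_acts, dt=1.0, max_steps=10000):
          """Expected number of time steps until the target is selected,
          treating activations as constant over time for simplicity."""
          hazard = target_act / (target_act + sum(competitor_acts))
          expected, not_selected_yet = 0.0, 1.0
          for step in range(1, max_steps + 1):
              expected += step * dt * not_selected_yet * hazard
              not_selected_yet *= (1.0 - hazard)
          return expected

      # Naming the ink colour "red": a congruent word boosts the target, while an
      # incongruent word acts as a strong competitor (toy activation values).
      print("congruent  :", round(expected_selection_time(2.0, [0.2]), 1))
      print("incongruent:", round(expected_selection_time(1.0, [1.5]), 1))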
  • Roelofs, A., Dijkstra, T., & Gerakaki, S. (2013). Modeling of word translation: Activation flow from concepts to lexical items. Bilingualism: Language and Cognition, 16, 343-353. doi:10.1017/S1366728912000612.

    Abstract

    Whereas most theoretical and computational models assume a continuous flow of activation from concepts to lexical items in spoken word production, one prominent model assumes that the mapping of concepts onto words happens in a discrete fashion (Bloem & La Heij, 2003). Semantic facilitation of context pictures on word translation has been taken to support the discrete-flow model. Here, we report results of computer simulations with the continuous-flow WEAVER++ model (Roelofs, 1992, 2006) demonstrating that the empirical observation taken to be in favor of discrete models is, in fact, only consistent with those models and equally compatible with more continuous models of word production by monolingual and bilingual speakers. Continuous models are specifically and independently supported by other empirical evidence on the effect of context pictures on native word production.
  • Roelofs, A., Piai, V., & Schriefers, H. (2013). Selection by competition in word production: Rejoinder to Janssen (2012). Language and Cognitive Processes, 28, 679-683. doi:10.1080/01690965.2013.770890.

    Abstract

    Roelofs, Piai, and Schriefers argue that several findings on the effect of distractor words and pictures in producing words support a selection-by-competition account and challenge a non-competitive response-exclusion account. Janssen argues that the findings do not challenge response exclusion, and he conjectures that both competitive and non-competitive mechanisms underlie word selection. Here, we maintain that the findings do challenge the response-exclusion account and support the assumption of a single competitive mechanism underlying word selection.

  • Rommers, J., Meyer, A. S., & Huettig, F. (2013). Object shape and orientation do not routinely influence performance during language processing. Psychological Science, 24, 2218-2225. doi:10.1177/0956797613490746.

    Abstract

    The role of visual representations during language processing remains unclear: They could be activated as a necessary part of the comprehension process, or they could be less crucial and influence performance in a task-dependent manner. In the present experiments, participants read sentences about an object. The sentences implied that the object had a specific shape or orientation. They then either named a picture of that object (Experiments 1 and 3) or decided whether the object had been mentioned in the sentence (Experiment 2). Orientation information did not reliably influence performance in any of the experiments. Shape representations influenced performance most strongly when participants were asked to compare a sentence with a picture or when they were explicitly asked to use mental imagery while reading the sentences. Thus, in contrast to previous claims, implied visual information often does not contribute substantially to the comprehension process during normal reading.

    Additional information

    DS_10.1177_0956797613490746.pdf
  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2013). The contents of predictions in sentence comprehension: Activation of the shape of objects before they are referred to. Neuropsychologia, 51(3), 437-447. doi:10.1016/j.neuropsychologia.2012.12.002.

    Abstract

    When comprehending concrete words, listeners and readers can activate specific visual information such as the shape of the words’ referents. In two experiments we examined whether such information can be activated in an anticipatory fashion. In Experiment 1, listeners’ eye movements were tracked while they were listening to sentences that were predictive of a specific critical word (e.g., “moon” in “In 1969 Neil Armstrong was the first man to set foot on the moon”). 500 ms before the acoustic onset of the critical word, participants were shown four-object displays featuring three unrelated distractor objects and a critical object, which was either the target object (e.g., moon), an object with a similar shape (e.g., tomato), or an unrelated control object (e.g., rice). In a time window before shape information from the spoken target word could be retrieved, participants already tended to fixate both the target and the shape competitors more often than they fixated the control objects, indicating that they had anticipatorily activated the shape of the upcoming word's referent. This was confirmed in Experiment 2, which was an ERP experiment without picture displays. Participants listened to the same lead-in sentences as in Experiment 1. The sentence-final words corresponded to the predictable target, the shape competitor, or the unrelated control object (yielding, for instance, “In 1969 Neil Armstrong was the first man to set foot on the moon/tomato/rice”). N400 amplitude in response to the final words was significantly attenuated in the shape-related compared to the unrelated condition. Taken together, these results suggest that listeners can activate perceptual attributes of objects before they are referred to in an utterance.
  • Rommers, J., Dijkstra, T., & Bastiaansen, M. C. M. (2013). Context-dependent semantic processing in the human brain: Evidence from idiom comprehension. Journal of Cognitive Neuroscience, 25(5), 762-776. doi:10.1162/jocn_a_00337.

    Abstract

    Language comprehension involves activating word meanings and integrating them with the sentence context. This study examined whether these routines are carried out even when they are theoretically unnecessary, namely in the case of opaque idiomatic expressions, for which the literal word meanings are unrelated to the overall meaning of the expression. Predictable words in sentences were replaced by a semantically related or unrelated word. In literal sentences, this yielded previously established behavioral and electrophysiological signatures of semantic processing: semantic facilitation in lexical decision, a reduced N400 for semantically related relative to unrelated words, and a power increase in the gamma frequency band that was disrupted by semantic violations. However, the same manipulations in idioms yielded none of these effects. Instead, semantic violations elicited a late positivity in idioms. Moreover, gamma band power was lower in correct idioms than in correct literal sentences. It is argued that the brain's semantic expectancy and literal word meaning integration operations can, to some extent, be “switched off” when the context renders them unnecessary. Furthermore, the results lend support to models of idiom comprehension that involve unitary idiom representations.
  • Rossano, F., Carpenter, M., & Tomasello, M. (2012). One-year-old infants follow others’ voice direction. Psychological Science, 23, 1298-1302. doi:10.1177/0956797612450032.

    Abstract

    We investigated 1-year-old infants’ ability to infer an adult’s focus of attention solely on the basis of her voice direction. In Studies 1 and 2, 12- and 16-month-olds watched an adult go behind a barrier and then heard her verbally express excitement about a toy hidden in one of two boxes at either end of the barrier. Even though they could not see the adult, infants of both ages followed her voice direction to the box containing the toy. Study 2 showed that infants could do this even when the adult was positioned closer to the incorrect box while she vocalized toward the correct one (and thus ruled out the possibility that infants were merely approaching the source of the sound). In Study 3, using the same methods as in Study 2, we found that chimpanzees performed the task at chance level. Our results show that infants can determine the focus of another person’s attention through auditory information alone—a useful skill for establishing joint attention.

    Additional information

    Rossano_Suppl_Mat.pdf
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2003). Determinants of acquisition order in wh-questions: Re-evaluating the role of caregiver speech. Journal of Child Language, 30(3), 609-635. doi:10.1017/S0305000903005695.

    Abstract

    Accounts that specify semantic and/or syntactic complexity as the primary determinant of the order in which children acquire particular words or grammatical constructions have been highly influential in the literature on question acquisition. One explanation of wh-question acquisition in particular suggests that the order in which English-speaking children acquire wh-questions is determined by two interlocking linguistic factors: the syntactic function of the wh-word that heads the question and the semantic generality (or ‘lightness’) of the main verb (Bloom, Merkin & Wootten, 1982; Bloom, 1991). Another more recent view, however, is that acquisition is influenced by the relative frequency with which children hear particular wh-words and verbs in their input (e.g. Rowland & Pine, 2000). In the present study, over 300 hours of naturalistic data from twelve two- to three-year-old children and their mothers were analysed in order to assess the relative contribution of complexity and input frequency to wh-question acquisition. The analyses revealed, first, that the acquisition order of wh-questions could be predicted successfully from the frequency with which particular wh-words and verbs occurred in the children's input and, second, that syntactic and semantic complexity did not reliably predict acquisition once input frequency was taken into account. These results suggest that the relationship between acquisition and complexity may be a by-product of the high correlation between complexity and the frequency with which mothers use particular wh-words and verbs. We interpret the results in terms of a constructivist view of language acquisition.
  • Rowland, C. F., & Pine, J. M. (2003). The development of inversion in wh-questions: a reply to Van Valin. Journal of Child Language, 30(1), 197-212. doi:10.1017/S0305000902005445.

    Abstract

    Van Valin (Journal of Child Language 29, 2002, 161–75) presents a critique of Rowland & Pine (Journal of Child Language 27, 2000, 157–81) and argues that the wh-question data from Adam (in Brown, A first language, Cambridge, MA, 1973) cannot be explained in terms of input frequencies as we suggest. Instead, he suggests that the data can be more successfully accounted for in terms of Role and Reference Grammar (RRG). In this note we re-examine the pattern of inversion and uninversion in Adam's wh-questions and argue that the RRG explanation cannot account for some of the developmental facts it was designed to explain.
  • Rubio-Fernández, P. (2019). Memory and inferential processes in false-belief tasks: An investigation of the unexpected-contents paradigm. Journal of Experimental Child Psychology, 177, 297-312. doi:10.1016/j.jecp.2018.08.011.

    Abstract

    This study investigated the extent to which 3- and 4-year-old children may rely on associative memory representations to pass an unexpected-contents false-belief task. In Experiment 1, 4-year-olds performed at chance in both a standard Smarties task and a modified version highlighting the secrecy of the contents of the tube. These results were interpreted as evidence that having to infer the answer to a false-belief question (without relying on memory representations) is generally difficult for preschool children. In Experiments 2a, 2b, and 2c, 3-year-olds were tested at 3-month intervals during their first year of preschool and showed better performance in a narrative version of the Smarties task (chance level) than in the standard version (below-chance level). These children performed even better in an associative version of the narrative task (above-chance level) where they could form a memory representation associating the protagonist with the expected contents of a box. The results of a true-belief control suggest that some of these children may have relied on their memory of the protagonist’s preference for the original contents of the box (rather than their understanding of what the protagonist was expecting to find inside). This suggests that when 3-year-olds passed the associative unexpected-contents task, some may have been keeping track of the protagonist’s initial preference and not only (or not necessarily) of the protagonist’s false belief. These results are interpreted in the light of current accounts of Theory of Mind development and failed replications of verbal false-belief tasks.
  • Rubio-Fernández, P. (2019). Publication standards in infancy research: Three ways to make Violation-of-Expectation studies more reliable. Infant Behavior and Development, 54, 177-188. doi:10.1016/j.infbeh.2018.09.009.

    Abstract

    The Violation-of-Expectation (VoE) paradigm is widely used in infancy research and relies on looking time as an index of surprise. This methodological review aims to increase the reliability of future VoE studies by proposing to standardize reporting practices in this literature. I review 15 VoE studies on false-belief reasoning, which used a variety of experimental parameters. An analysis of the distribution of p-values across experiments suggests an absence of p-hacking. However, there are potential concerns with the accuracy of their measures of infants’ attention, as well as with the lack of a consensus on the parameters that should be used to set up VoE studies. I propose that (i) future VoE studies ought to report not only looking times (as a measure of attention) but also looking-away times (as an equally important measure of distraction); (ii) VoE studies must offer theoretical justification for the parameters they use; and (iii) when parameters are selected through piloting, pilot data must be reported in order to understand how parameters were selected. Future VoE studies ought to maximize the accuracy of their measures of infants’ attention since the reliability of their results and the validity of their conclusions both depend on the accuracy of their measures.
  • Rubio-Fernández, P., Mollica, F., Oraa Ali, M., & Gibson, E. (2019). How do you know that? Automatic belief inferences in passing conversation. Cognition, 193: 104011. doi:10.1016/j.cognition.2019.104011.

    Abstract

    There is an ongoing debate, both in philosophy and psychology, as to whether people are able to automatically infer what others may know, or whether they can only derive belief inferences by deploying cognitive resources. Evidence from laboratory tasks, often involving false beliefs or visual-perspective taking, has suggested that belief inferences are cognitively costly, controlled processes. Here we suggest that in everyday conversation, belief reasoning is pervasive and therefore potentially automatic in some cases. To test this hypothesis, we conducted two pre-registered self-paced reading experiments (N1 = 91, N2 = 89). The results of these experiments showed that participants slowed down when a stranger commented ‘That greasy food is bad for your ulcer’ relative to conditions where a stranger commented on their own ulcer or a friend made either comment – none of which violated participants’ common-ground expectations. We conclude that Theory of Mind models need to account for belief reasoning in conversation, as it is at the center of everyday social interaction.
  • Rubio-Fernández, P. (2019). Overinformative speakers are cooperative: Revisiting the Gricean Maxim of Quantity. Cognitive Science, 43: e12797. doi:10.1111/cogs.12797.

    Abstract

    A pragmatic account of referential communication is developed which presents an alternative to traditional Gricean accounts by focusing on cooperativeness and efficiency, rather than informativity. The results of four language-production experiments support the view that speakers can be cooperative when producing redundant adjectives, doing so more often when color modification could facilitate the listener's search for the referent in the visual display (Experiment 1a). By contrast, when the listener knew which shape was the target, speakers did not produce redundant color adjectives (Experiment 1b). English speakers used redundant color adjectives more often than Spanish speakers, suggesting that speakers are sensitive to the differential efficiency of prenominal and postnominal modification (Experiment 2). Speakers were also cooperative when using redundant size adjectives (Experiment 3). Overall, these results show how discriminability affects a speaker's choice of referential expression above and beyond considerations of informativity, supporting the view that redundant speakers can be cooperative.
  • Rubio-Fernández, P. (2013). Associative and inferential processes in pragmatic enrichment: The case of emergent properties. Language and Cognitive Processes, 28(6), 723-745. doi:10.1080/01690965.2012.659264.

    Abstract

    Experimental research on word processing has generally focused on properties that are associated with a concept in long-term memory (e.g., basketball—round). The present study addresses a related issue: the accessibility of “emergent properties”, or conceptual properties that have to be inferred in a given context (e.g., basketball—floats). This investigation sheds light on a current debate in cognitive pragmatics about how many pragmatic systems there are (Carston, 2002a, 2007; Recanati, 2004, 2007). Two experiments using a self-paced reading task suggest that inferential processes are fully integrated in the processing system. Emergent properties are accessed early on in processing, without delaying later discourse integration processes. I conclude that the theoretical distinction between explicit and implicit meaning is not paralleled by that between associative and inferential processes.
  • Rubio-Fernández, P. (2013). Perspective tracking in progress: Do not disturb. Cognition, 129(2), 264-272. doi:10.1016/j.cognition.2013.07.005.

    Abstract

    Two experiments tested the hypothesis that indirect false-belief tests allow participants to track a protagonist’s perspective uninterruptedly, whereas direct false-belief tests disrupt the process of perspective tracking in various ways. For this purpose, adults’ performance was compared on indirect and direct false-belief tests by means of continuous eye-tracking. Experiment 1 confirmed that the false-belief question used in direct tests disrupts perspective tracking relative to what is observed in an indirect test. Experiment 2 confirmed that perspective tracking is a continuous process that can be easily disrupted in adults by a subtle visual manipulation in both indirect and direct tests. These results call for a closer analysis of the demands of the false-belief tasks that have been used in developmental research.
  • Rubio-Fernández, P., & Geurts, B. (2013). How to pass the false-belief task before your fourth birthday. Psychological Science, 24(1), 27-33. doi:10.1177/0956797612447819.

    Abstract

    The experimental record of the last three decades shows that children under 4 years old fail all sorts of variations on the standard false-belief task, whereas more recent studies have revealed that infants are able to pass nonverbal versions of the task. We argue that these paradoxical results are an artifact of the type of false-belief tasks that have been used to test infants and children: Nonverbal designs allow infants to keep track of a protagonist’s perspective over a course of events, whereas verbal designs tend to disrupt the perspective-tracking process in various ways, which makes it too hard for younger children to demonstrate their capacity for perspective tracking. We report three experiments that confirm this hypothesis by showing that 3-year-olds can pass a suitably streamlined version of the verbal false-belief task. We conclude that young children can pass the verbal false-belief task provided that they are allowed to keep track of the protagonist’s perspective without too much disruption.
  • De Ruiter, J. P., Rossignol, S., Vuurpijl, L., Cunningham, D. W., & Levelt, W. J. M. (2003). SLOT: A research platform for investigating multimodal communication. Behavior Research Methods, Instruments, & Computers, 35(3), 408-419.

    Abstract

    In this article, we present the spatial logistics task (SLOT) platform for investigating multimodal communication between 2 human participants. Presented are the SLOT communication task and the software and hardware that have been developed to run SLOT experiments and record the participants’ multimodal behavior. SLOT offers a high level of flexibility in varying the context of the communication and is particularly useful in studies of the relationship between pen gestures and speech. We illustrate the use of the SLOT platform by discussing the results of some early experiments. The first is an experiment on negotiation with a one-way mirror between the participants, and the second is an exploratory study of automatic recognition of spontaneous pen gestures. The results of these studies demonstrate the usefulness of the SLOT platform for conducting multimodal communication research in both human–human and human–computer interactions.
  • Sadakata, M., & McQueen, J. M. (2013). High stimulus variability in nonnative speech learning supports formation of abstract categories: Evidence from Japanese geminates. Journal of the Acoustical Society of America, 134(2), 1324-1335. doi:10.1121/1.4812767.

    Abstract

    This study reports effects of a high-variability training procedure on nonnative learning of a Japanese geminate-singleton fricative contrast. Thirty native speakers of Dutch took part in a 5-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. Participants were trained with either many repetitions of a limited set of words recorded by a single speaker (low-variability training) or with fewer repetitions of a more variable set of words recorded by multiple speakers (high-variability training). Both types of training enhanced identification of speech but not of nonspeech materials, indicating that learning was domain specific. High-variability training led to superior performance in identification but not in discrimination tests, and supported better generalization of learning as shown by transfer from the trained fricatives to the identification of untrained stops and affricates. Variability thus helps nonnative listeners to form abstract categories rather than to enhance early acoustic analysis.
  • Sakarias, M., & Flecken, M. (2019). Keeping the result in sight and mind: General cognitive principles and language-specific influences in the perception and memory of resultative events. Cognitive Science, 43(1), 1-30. doi:10.1111/cogs.12708.

    Abstract

    We study how people attend to and memorize endings of events that differ in the degree to which objects in them are affected by an action: Resultative events show objects that undergo a visually salient change in state during the course of the event (peeling a potato), and non‐resultative events involve objects that undergo no, or only partial state change (stirring in a pan). We investigate general cognitive principles, and potential language‐specific influences, in verbal and nonverbal event encoding and memory, across two experiments with Dutch and Estonian participants. Estonian marks a viewer's perspective on an event's result obligatorily via grammatical case on direct object nouns: Objects undergoing a partial/full change in state in an event are marked with partitive/accusative case, respectively. Therefore, we hypothesized increased saliency of object states and event results in Estonian speakers, as compared to speakers of Dutch. Findings show (a) a general cognitive principle of attending carefully to endings of resultative events, implying cognitive saliency of object states in event processing; (b) a language‐specific boost on attention and memory of event results under verbal task demands in Estonian speakers. Results are discussed in relation to theories of event cognition, linguistic relativity, and thinking for speaking.
  • Sakkalou, E., Ellis-Davies, K., Fowler, N., Hilbrink, E., & Gattis, M. (2013). Infants show stability of goal-directed imitation. Journal of Experimental Child Psychology, 114, 1-9. doi:10.1016/j.jecp.2012.09.005.

    Abstract

    Previous studies have reported that infants selectively reproduce observed actions and have argued that this selectivity reflects understanding of intentions and goals, or goal-directed imitation. We reasoned that if selective imitation of goal-directed actions reflects understanding of intentions, infants should demonstrate stability across perceptually and causally dissimilar imitation tasks. To this end, we employed a longitudinal within-participants design to compare the performance of 37 infants on two imitation tasks, with one administered at 13 months and one administered at 14 months. Infants who selectively imitated goal-directed actions in an object-cued task at 13 months also selectively imitated goal-directed actions in a vocal-cued task at 14 months. We conclude that goal-directed imitation reflects a general ability to interpret behavior in terms of mental states.
  • Salomo, D., & Liszkowski, U. (2013). Sociocultural settings influence the emergence of prelinguistic deictic gestures. Child Development, 84(4), 1296-1307. doi:10.1111/cdev.12026.

    Abstract

    Daily activities of forty-eight 8- to 15-month-olds and their interlocutors were observed to test for the presence and frequency of triadic joint actions and deictic gestures across three different cultures: Yucatec-Mayans (Mexico), Dutch (Netherlands), and Shanghai-Chinese (China). The amount of joint action and deictic gestures to which infants were exposed differed systematically across settings, allowing testing for the role of social–interactional input in the ontogeny of prelinguistic gestures. Infants gestured more and at an earlier age depending on the amount of joint action and gestures infants were exposed to, revealing early prelinguistic sociocultural differences. The study shows that the emergence of basic prelinguistic gestures is socially mediated, suggesting that others' actions structure the ontogeny of human communication from early on.
  • Salverda, A. P., Dahan, D., & McQueen, J. M. (2003). The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Cognition, 90(1), 51-89. doi:10.1016/S0010-0277(03)00139-2.

    Abstract

    Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.
