Publications

  • Meyer, A. S., Alday, P. M., Decuyper, C., & Knudsen, B. (2018). Working together: Contributions of corpus analyses and experimental psycholinguistics to understanding conversation. Frontiers in Psychology, 9: 525. doi:10.3389/fpsyg.2018.00525.

    Abstract

    As conversation is the most important way of using language, linguists and psychologists should combine forces to investigate how interlocutors deal with the cognitive demands arising during conversation. Linguistic analyses of corpora of conversation are needed to understand the structure of conversations, and experimental work is indispensable for understanding the underlying cognitive processes. We argue that joint consideration of corpus and experimental data is most informative when the utterances elicited in a lab experiment match those extracted from a corpus in relevant ways. This requirement to compare like with like seems obvious but is not trivial to achieve. To illustrate this approach, we report two experiments where responses to polar (yes/no) questions were elicited in the lab and the response latencies were compared to gaps between polar questions and answers in a corpus of conversational speech. We found, as expected, that responses were given faster when they were easy to plan and planning could be initiated earlier than when they were harder to plan and planning was initiated later. Overall, in all but one condition, the latencies were longer than one would expect based on the analyses of corpus data. We discuss the implication of this partial match between the data sets and more generally how corpus and experimental data can best be combined in studies of conversation.

    Additional information

    Data_Sheet_1.pdf
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high or low frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Mitterer, H., & De Ruiter, J. P. (2008). Recalibrating color categories using world knowledge. Psychological Science, 19(7), 629-634. doi:10.1111/j.1467-9280.2008.02133.x.

    Abstract

    When the perceptual system uses color to facilitate object recognition, it must solve the color-constancy problem: The light an object reflects to an observer's eyes confounds properties of the source of the illumination with the surface reflectance of the object. Information from the visual scene (bottom-up information) is insufficient to solve this problem. We show that observers use world knowledge about objects and their prototypical colors as a source of top-down information to improve color constancy. Specifically, observers use world knowledge to recalibrate their color categories. Our results also suggest that similar effects previously observed in language perception are the consequence of a general perceptual process.
  • Mitterer, H., & Ernestus, M. (2008). The link between speech perception and production is phonological and abstract: Evidence from the shadowing task. Cognition, 109(1), 168-173. doi:10.1016/j.cognition.2008.08.002.

    Abstract

    This study reports a shadowing experiment, in which one has to repeat a speech stimulus as fast as possible. We tested claims about a direct link between perception and production based on speech gestures, and obtained two types of counterevidence. First, shadowing is not slowed down by a gestural mismatch between stimulus and response. Second, phonetic detail is more likely to be imitated in a shadowing task if it is phonologically relevant. This is consistent with the idea that speech perception and speech production are only loosely coupled, on an abstract phonological level.
  • Mitterer, H., Reinisch, E., & McQueen, J. M. (2018). Allophones, not phonemes in spoken-word recognition. Journal of Memory and Language, 98, 77-92. doi:10.1016/j.jml.2017.09.005.

    Abstract

    What are the phonological representations that listeners use to map information about the segmental content of speech onto the mental lexicon during spoken-word recognition? Recent evidence from perceptual-learning paradigms seems to support (context-dependent) allophones as the basic representational units in spoken-word recognition. But recent evidence from a selective-adaptation paradigm seems to suggest that context-independent phonemes also play a role. We present three experiments using selective adaptation that constitute strong tests of these representational hypotheses. In Experiment 1, we tested generalization of selective adaptation using different allophones of Dutch /r/ and /l/ – a case where generalization has not been found with perceptual learning. In Experiments 2 and 3, we tested generalization of selective adaptation using German back fricatives in which allophonic and phonemic identity were varied orthogonally. In all three experiments, selective adaptation was observed only if adaptors and test stimuli shared allophones. Phonemic identity, in contrast, was neither necessary nor sufficient for generalization of selective adaptation to occur. These findings and other recent data using the perceptual-learning paradigm suggest that pre-lexical processing during spoken-word recognition is based on allophones, and not on context-independent phonemes.
  • Mitterer, H., Brouwer, S., & Huettig, F. (2018). How important is prediction for understanding spontaneous speech? In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 26-40). Chennai: Macmillan Publishers India.
  • Mitterer, H., Yoneyama, K., & Ernestus, M. (2008). How we hear what is hardly there: Mechanisms underlying compensation for /t/-reduction in speech comprehension. Journal of Memory and Language, 59, 133-152. doi:10.1016/j.jml.2008.02.004.

    Abstract

    In four experiments, we investigated how listeners compensate for reduced /t/ in Dutch. Mitterer and Ernestus [Mitterer, H., & Ernestus, M. (2006). Listeners recover /t/s that speakers lenite: evidence from /t/-lenition in Dutch. Journal of Phonetics, 34, 73–103] showed that listeners are biased to perceive a /t/ more easily after /s/ than after /n/, compensating for the tendency of speakers to reduce word-final /t/ after /s/ in spontaneous conversations. We tested the robustness of this phonological context effect in perception with three very different experimental tasks: an identification task, a discrimination task with native listeners and with non-native listeners who do not have any experience with /t/-reduction, and a passive listening task (using electrophysiological dependent measures). The context effect was generally robust against these experimental manipulations, although we also observed some deviations from the overall pattern. Our combined results show that the context effect in compensation for reduced /t/ results from a complex process involving auditory constraints, phonological learning, and lexical constraints.
  • Monster, I., & Lev-Ari, S. (2018). The effect of social network size on hashtag adoption on Twitter. Cognitive Science, 42(8), 3149-3158. doi:10.1111/cogs.12675.

    Abstract

    Propagation of novel linguistic terms is an important aspect of language use and language change. Here, we test how social network size influences people’s likelihood of adopting novel labels by examining hashtag use on Twitter. Specifically, we test whether following fewer Twitter users leads to more varied and malleable hashtag use on Twitter, because each followed user is ascribed greater weight and thus exerts greater influence on the following user. Focusing on Dutch users tweeting about the terrorist attack in Brussels in 2016, we show that people who follow fewer other users use a larger number of unique hashtags to refer to the event, reflecting greater malleability and variability in use. These results have implications for theories of language learning, language use, and language change.
  • Morgan, A. T., van Haaften, L., van Hulst, K., Edley, C., Mei, C., Tan, T. Y., Amor, D., Fisher, S. E., & Koolen, D. A. (2018). Early speech development in Koolen de Vries syndrome limited by oral praxis and hypotonia. European journal of human genetics, 26, 75-84. doi:10.1038/s41431-017-0035-9.

    Abstract

    Communication disorder is common in Koolen de Vries syndrome (KdVS), yet its specific symptomatology has not been examined, limiting prognostic counselling and application of targeted therapies. Here we examine the communication phenotype associated with KdVS. Twenty-nine participants (12 males, 4 with KANSL1 variants, 25 with 17q21.31 microdeletion), aged 1.0–27.0 years, were assessed for oral-motor, speech, language, literacy, and social functioning. Early history included hypotonia and feeding difficulties. Speech and language development was delayed and atypical from onset of first words (2;5–3;5 years of age on average). Speech was characterised by apraxia (100%) and dysarthria (93%), with stuttering in some (17%). Speech therapy and multi-modal communication (e.g., sign-language) was critical in preschool. Receptive and expressive language abilities were typically commensurate (79%), both being severely affected relative to peers. Children were sociable with a desire to communicate, although some (36%) had pragmatic impairments in domains where higher-level language was required. A common phenotype was identified, including an overriding ‘double hit’ of oral hypotonia and apraxia in infancy and preschool, associated with severely delayed speech development. Remarkably, however, speech prognosis was positive; apraxia resolved, and although dysarthria persisted, children were intelligible by mid-to-late childhood. In contrast, language and literacy deficits persisted, and pragmatic deficits were apparent. Children with KdVS require early, intensive, speech motor and language therapy, with targeted literacy and social language interventions as developmentally appropriate. Greater understanding of the linguistic phenotype may help unravel the relevance of KANSL1 to child speech and language development.

    Additional information

    41431_2017_35_MOESM1_ESM.docx
  • Morgan, J. L., Van Elswijk, G., & Meyer, A. S. (2008). Extrafoveal processing of objects in a naming task: Evidence from word probe experiments. Psychonomic Bulletin & Review, 15, 561-565. doi:10.3758/PBR.15.3.561.

    Abstract

    In two experiments, we investigated the processing of extrafoveal objects in a double-object naming task. On most trials, participants named two objects; but on some trials, the objects were replaced shortly after trial onset by a written word probe, which participants had to name instead of the objects. In Experiment 1, the word was presented in the same location as the left object either 150 or 350 msec after trial onset and was either phonologically related or unrelated to that object name. Phonological facilitation was observed at the later but not at the earlier SOA. In Experiment 2, the word was either phonologically related or unrelated to the right object and was presented 150 msec after the speaker had begun to inspect that object. In contrast with Experiment 1, phonological facilitation was found at this early SOA, demonstrating that the speakers had begun to process the right object prior to fixation.
  • Morgan, J., & Meyer, A. S. (2005). Processing of extrafoveal objects during multiple-object naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 428-442. doi:10.1037/0278-7393.31.3.428.

    Abstract

    In 3 experiments, the authors investigated the extent to which objects that are about to be named are processed prior to fixation. Participants named pairs or triplets of objects. One of the objects, initially seen extrafoveally (the interloper), was replaced by a different object (the target) during the saccade toward it. The interloper-target pairs were identical or unrelated objects or visually and conceptually unrelated objects with homophonous names (e.g., animal-baseball bat). The mean latencies and gaze durations for the targets were shorter in the identity and homophone conditions than in the unrelated condition. This was true when participants viewed a fixation mark until the interloper appeared and when they fixated on another object and prepared to name it while viewing the interloper. These results imply that objects that are about to be named may undergo far-reaching processing, including access to their names, prior to fixation.
  • Mortensen, L., Meyer, A. S., & Humphreys, G. W. (2008). Speech planning during multiple-object naming: Effects of ageing. Quarterly Journal of Experimental Psychology, 61, 1217-1238. doi:10.1080/17470210701467912.

    Abstract

    Two experiments were conducted with younger and older speakers. In Experiment 1, participants named single objects that were intact or visually degraded, while hearing distractor words that were phonologically related or unrelated to the object name. In both younger and older participants naming latencies were shorter for intact than for degraded objects and shorter when related than when unrelated distractors were presented. In Experiment 2, the single objects were replaced by object triplets, with the distractors being phonologically related to the first object's name. Naming latencies and gaze durations for the first object showed degradation and relatedness effects that were similar to those in single-object naming. Older participants were slower than younger participants when naming single objects and slower and less fluent on the second but not the first object when naming object triplets. The results of these experiments indicate that both younger and older speakers plan object names sequentially, but that older speakers use this planning strategy less efficiently.
  • Moscoso del Prado Martín, F., Deutsch, A., Frost, R., Schreuder, R., De Jong, N. H., & Baayen, R. H. (2005). Changing places: A cross-language perspective on frequency and family size in Dutch and Hebrew. Journal of Memory and Language, 53(4), 496-512. doi:10.1016/j.jml.2005.07.003.

    Abstract

    This study uses the morphological family size effect as a tool for exploring the degree of isomorphism in the networks of morphologically related words in the Hebrew and Dutch mental lexicon. Hebrew and Dutch are genetically unrelated, and they structure their morphologically complex words in very different ways. Two visual lexical decision experiments document substantial cross-language predictivity for the family size measure after partialing out the effect of word frequency and word length. Our data show that the morphological family size effect is not restricted to Indo-European languages but extends to languages with non-concatenative morphology. In Hebrew, a new inhibitory component of the family size effect emerged that arises when a Hebrew root participates in different semantic fields.
  • Mostert, P., Albers, A. M., Brinkman, L., Todorova, L., Kok, P., & De Lange, F. P. (2018). Eye movement-related confounds in neural decoding of visual working memory representations. eNeuro, 5(4): ENEURO.0401-17.2018. doi:10.1523/ENEURO.0401-17.2018.

    Abstract

    A relatively new analysis technique, known as neural decoding or multivariate pattern analysis (MVPA), has become increasingly popular for cognitive neuroimaging studies over recent years. These techniques promise to uncover the representational contents of neural signals, as well as the underlying code and the dynamic profile thereof. A field in which these techniques have led to novel insights in particular is that of visual working memory (VWM). In the present study, we subjected human volunteers to a combined VWM/imagery task while recording their neural signals using magnetoencephalography (MEG). We applied multivariate decoding analyses to uncover the temporal profile underlying the neural representations of the memorized item. Analysis of gaze position however revealed that our results were contaminated by systematic eye movements, suggesting that the MEG decoding results from our originally planned analyses were confounded. In addition to the eye movement analyses, we also present the original analyses to highlight how these might have readily led to invalid conclusions. Finally, we demonstrate a potential remedy, whereby we train the decoders on a functional localizer that was specifically designed to target bottom-up sensory signals and as such avoids eye movements. We conclude by arguing for more awareness of the potentially pervasive and ubiquitous effects of eye movement-related confounds.
  • Mulder, K., Van Heuven, W. J., & Dijkstra, T. (2018). Revisiting the neighborhood: How L2 proficiency and neighborhood manipulation affect bilingual processing. Frontiers in Psychology, 9: 1860. doi:10.3389/fpsyg.2018.01860.

    Abstract

    We conducted three neighborhood experiments with Dutch-English bilinguals to test effects of L2 proficiency and neighborhood characteristics within and between languages. In the past 20 years, the English (L2) proficiency of this population has considerably increased. To consider the impact of this development on neighborhood effects, we conducted a strict replication of the English lexical decision task by van Heuven, Dijkstra, & Grainger (1998, Exp. 4). In line with our prediction, English characteristics (neighborhood size, word and bigram frequency) dominated the word and nonword responses, while the nonwords also revealed an interaction of English and Dutch neighborhood size.
    The prominence of English was tested again in two experiments introducing a stronger neighborhood manipulation. In English lexical decision and progressive demasking, English items with no orthographic neighbors at all were contrasted with items having neighbors in English or Dutch (‘hermits’) only, or in both languages. In both tasks, target processing was affected strongly by the presence of English neighbors, but only weakly by Dutch neighbors. Effects are interpreted in terms of two underlying processing mechanisms: language-specific global lexical activation and lexical competition.
  • Mulhern, M. S., Stumpel, C., Stong, N., Brunner, H. G., Bier, L., Lippa, N., Riviello, J., Rouhl, R. P. W., Kempers, M., Pfundt, R., Stegmann, A. P. A., Kukolich, M. K., Telegrafi, A., Lehman, A., Lopez-Rangel, E., Houcinat, N., Barth, M., Den Hollander, N., Hoffer, M. J. V., Weckhuysen, S., Roovers, J., Djemie, T., Barca, D., Ceulemans, B., Craiu, D., Lemke, J. R., Korff, C., Mefford, H. C., Meyers, C. T., Siegler, Z., Hiatt, S. M., Cooper, G. M., Bebin, E. M., Snijders Blok, L., Veenstra-Knol, H. E., Baugh, E. H., Brilstra, E. H., Volker-Touw, C. M. L., Van Binsbergen, E., Revah-Politi, A., Pereira, E., McBrian, D., Pacault, M., Isidor, B., Le Caignec, C., Gilbert-Dussardier, B., Bilan, F., Heinzen, E. L., Goldstein, D. B., Stevens, S. J. C., & Sands, T. T. (2018). NBEA: Developmental disease gene with early generalized epilepsy phenotypes. Annals of Neurology, 84(5), 788-795. doi:10.1002/ana.25350.

    Abstract

    NBEA is a candidate gene for autism, and de novo variants have been reported in neurodevelopmental disease (NDD) cohorts. However, NBEA has not been rigorously evaluated as a disease gene, and associated phenotypes have not been delineated. We identified 24 de novo NBEA variants in patients with NDD, establishing NBEA as an NDD gene. Most patients had epilepsy with onset in the first few years of life, often characterized by generalized seizure types, including myoclonic and atonic seizures. Our data show a broader phenotypic spectrum than previously described, including a myoclonic-astatic epilepsy–like phenotype in a subset of patients.

  • Narasimhan, B. (2005). Splitting the notion of 'agent': Case-marking in early child Hindi. Journal of Child Language, 32(4), 787-803. doi:10.1017/S0305000905007117.

    Abstract

    Two construals of agency are evaluated as possible innate biases guiding case-marking in children. A BROAD construal treats agentive arguments of multi-participant and single-participant events as being similar. A NARROWER construal is restricted to agents of multi-participant events. In Hindi, ergative case-marking is associated with agentive participants of multi-participant, perfective actions. Children relying on a broad or narrow construal of agent are predicted to overextend ergative case-marking to agentive participants of transitive imperfective actions and/or intransitive actions. Longitudinal data from three children acquiring Hindi (1;7 to 3;9) reveal no overextension errors, suggesting early sensitivity to distributional patterns in the input.
  • Narasimhan, B., & Dimroth, C. (2008). Word order and information status in child language. Cognition, 107, 317-329. doi:10.1016/j.cognition.2007.07.010.

    Abstract

    In expressing rich, multi-dimensional thought in language, speakers are influenced by a range of factors that influence the ordering of utterance constituents. A fundamental principle that guides constituent ordering in adults has to do with information status, the accessibility of referents in discourse. Typically, adults order previously mentioned referents (“old” or accessible information) first, before they introduce referents that have not yet been mentioned in the discourse (“new” or inaccessible information) at both sentential and phrasal levels. Here we ask whether a similar principle influences ordering patterns at the phrasal level in children who are in the early stages of combining words productively. Prior research shows that when conveying semantic relations, children reproduce language-specific ordering patterns in the input, suggesting that they do not have a bias for any particular order to describe “who did what to whom”. But our findings show that when they label “old” versus “new” referents, 3- to 5-year-old children prefer an ordering pattern opposite to that of adults (Study 1). Children’s ordering preference is not derived from input patterns, as “old-before-new” is also the preferred order in caregivers’ speech directed to young children (Study 2). Our findings demonstrate that a key principle governing ordering preferences in adults does not originate in early childhood, but develops: from new-to-old to old-to-new.
  • Narasimhan, B., Budwig, N., & Murty, L. (2005). Argument realization in Hindi caregiver-child discourse. Journal of Pragmatics, 37(4), 461-495. doi:10.1016/j.pragma.2004.01.005.

    Abstract

    An influential claim in the child language literature posits that children use structural cues in the input language to acquire verb meaning (Gleitman, 1990). One such cue is the number of arguments co-occurring with the verb, which provides an indication as to the event type associated with the verb (Fisher, 1995). In some languages however (e.g. Hindi), verb arguments are ellipted relatively freely, subject to certain discourse-pragmatic constraints. In this paper, we address three questions: Is the pervasive argument ellipsis characteristic of adult Hindi also found in Hindi-speaking caregivers’ input? If so, do children consequently make errors in verb transitivity? How early do children learning a split-ergative language, such as Hindi, exhibit sensitivity to discourse-pragmatic influences on argument realization? We show that there is massive argument ellipsis in caregivers’ input to 3–4 year-olds. However, children acquiring Hindi do not make transitivity errors in their own speech. Nor do they elide arguments randomly. Rather, even at this early age, children appear to be sensitive to discourse-pragmatics in their own spontaneous speech production. These findings in a split-ergative language parallel patterns of argument realization found in children acquiring both nominative-accusative languages (e.g. Korean) and ergative-absolutive languages (e.g. Tzeltal, Inuktitut).
  • Need, A. C., Attix, D. K., McEvoy, J. M., Cirulli, E. T., Linney, K. N., Wagoner, A. P., Gumbs, C. E., Giegling, I., Möller, H.-J., Francks, C., Muglia, P., Roses, A., Gibson, G., Weale, M. E., Rujescu, D., & Goldstein, D. B. (2008). Failure to replicate effect of Kibra on human memory in two large cohorts of European origin. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 147B, 667-668. doi:10.1002/ajmg.b.30658.

    Abstract

    It was recently suggested that the Kibra polymorphism rs17070145 has a strong effect on multiple episodic memory tasks in humans. We attempted to replicate this using two cohorts of European genetic origin (n = 319 and n = 365). We found no association with either the original SNP or a set of tagging SNPs in the Kibra gene with multiple verbal memory tasks, including one that was an exact replication (Auditory Verbal Learning Task, AVLT). These results suggest that Kibra does not have a strong and general effect on human memory.

    Additional information

    SupplementaryMethodsIAmJMedGen.doc
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The neurocognition of referential ambiguity in language comprehension. Language and Linguistics Compass, 2(4), 603-630. doi:10.1111/j.1749-818x.2008.00070.x.

    Abstract

    Referential ambiguity arises whenever readers or listeners are unable to select a unique referent for a linguistic expression out of multiple candidates. In the current article, we review a series of neurocognitive experiments from our laboratory that examine the neural correlates of referential ambiguity, and that employ the brain signature of referential ambiguity to derive functional properties of the language comprehension system. The results of our experiments converge to show that referential ambiguity resolution involves making an inference to evaluate the referential candidates. These inferences only take place when both referential candidates are, at least initially, equally plausible antecedents. Whether comprehenders make these anaphoric inferences is strongly context dependent and co-determined by characteristics of the reader. In addition, readers appear to disregard referential ambiguity when the competing candidates are each semantically incoherent, suggesting that, under certain circumstances, semantic analysis can proceed even when referential analysis has not yielded a unique antecedent. Finally, results from a functional neuroimaging study suggest that whereas the neural systems that deal with referential ambiguity partially overlap with those that deal with referential failure, they show an inverse coupling with the neural systems associated with semantic processing, possibly reflecting the relative contributions of semantic and episodic processing to re-establish semantic and referential coherence, respectively.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The interplay between semantic and referential aspects of anaphoric noun phrase resolution: Evidence from ERPs. Brain & Language, 106, 119-131. doi:10.1016/j.bandl.2008.05.001.

    Abstract

    In this event-related brain potential (ERP) study, we examined how semantic and referential aspects of anaphoric noun phrase resolution interact during discourse comprehension. We used a full factorial design that crossed referential ambiguity with semantic incoherence. Ambiguous anaphors elicited a sustained negative shift (Nref effect), and incoherent anaphors elicited an N400 effect. Simultaneously ambiguous and incoherent anaphors elicited an ERP pattern resembling that of the incoherent anaphors. These results suggest that semantic incoherence can preclude readers from engaging in anaphoric inferencing. Furthermore, approximately half of our participants unexpectedly showed common late positive effects to the three types of problematic anaphors. We relate the latter finding to recent accounts of what the P600 might reflect, and to the role of individual differences therein.
  • Nieuwland, M. S., Politzer-Ahles, S., Heyselaar, E., Segaert, K., Darley, E., Kazanina, N., Von Grebmer Zu Wolfsthurn, S., Bartolozzi, F., Kogan, V., Ito, A., Mézière, D., Barr, D. J., Rousselet, G., Ferguson, H. J., Busch-Moreno, S., Fu, X., Tuomainen, J., Kulakova, E., Husband, E. M., Donaldson, D. I., Kohút, Z., Rueschemeyer, S.-A., & Huettig, F. (2018). Large-scale replication study reveals a limit on probabilistic prediction in language comprehension. eLife, 7: e33468. doi:10.7554/eLife.33468.

    Abstract

    Do people routinely pre-activate the meaning and even the phonological form of upcoming words? The most acclaimed evidence for phonological prediction comes from a 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of electrical brain potentials (N400) to nouns and preceding articles by the probability that people use a word to continue the sentence fragment (‘cloze’). In our direct replication study spanning 9 laboratories (N=334), pre-registered replication-analyses and exploratory Bayes factor analyses successfully replicated the noun-results but, crucially, not the article-results. Pre-registered single-trial analyses also yielded a statistically significant effect for the nouns but not the articles. Exploratory Bayesian single-trial analyses showed that the article-effect may be non-zero but is likely far smaller than originally reported and too small to observe without very large sample sizes. Our results do not support the view that readers routinely pre-activate the phonological form of predictable words.

    Additional information

    Data sets
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2005). Testing the limits of the semantic illusion phenomenon: ERPs reveal temporary semantic change deafness in discourse comprehension. Cognitive Brain Research, 24(3), 691-701. doi:10.1016/j.cogbrainres.2005.04.003.

    Abstract

    In general, language comprehension is surprisingly reliable. Listeners very rapidly extract meaning from the unfolding speech signal, on a word-by-word basis, and usually successfully. Research on ‘semantic illusions’ however suggests that under certain conditions, people fail to notice that the linguistic input simply doesn't make sense. In the current event-related brain potentials (ERP) study, we examined whether listeners would, under such conditions, spontaneously detect an anomaly in which a human character central to the story at hand (e.g., “a tourist”) was suddenly replaced by an inanimate object (e.g., “a suitcase”). Because this replacement introduced a very powerful coherence break, we expected listeners to immediately notice the anomaly and generate the standard ERP effect associated with incoherent language, the N400 effect. However, instead of the standard N400 effect, anomalous words elicited a positive ERP effect from about 500–600 ms onwards. The absence of an N400 effect suggests that subjects did not immediately notice the anomaly, and that for a few hundred milliseconds the comprehension system has converged on an apparently coherent but factually incorrect interpretation. The presence of the later ERP effect indicates that subjects were processing for comprehension and did ultimately detect the anomaly. Therefore, we take the absence of a regular N400 effect as the online manifestation of a temporary semantic illusion. Our results also show that even attentive listeners sometimes fail to notice a radical change in the nature of a story character, and therefore suggest a case of short-lived ‘semantic change deafness’ in language comprehension.
  • Nieuwland, M. S., & Kuperberg, G. R. (2008). When the truth is not too hard to handle: An event-related potential study on the pragmatics of negation. Psychological Science, 19(12), 1213-1218. doi:10.1111/j.1467-9280.2008.02226.x.

    Abstract

    Our brains rapidly map incoming language onto what we hold to be true. Yet there are claims that such integration and verification processes are delayed in sentences containing negation words like not. However, studies have often confounded whether a statement is true and whether it is a natural thing to say during normal communication. In an event-related potential (ERP) experiment, we aimed to disentangle effects of truth value and pragmatic licensing on the comprehension of affirmative and negated real-world statements. As in affirmative sentences, false words elicited a larger N400 ERP than did true words in pragmatically licensed negated sentences (e.g., “In moderation, drinking red wine isn't bad/good…”), whereas true and false words elicited similar responses in unlicensed negated sentences (e.g., “A baby bunny's fur isn't very hard/soft…”). These results suggest that negation poses no principled obstacle for readers to immediately relate incoming words to what they hold to be true.
  • Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J. T., Oostenveld, R., Schoffelen, J.-M., Tadel, F., Wexler, J., & Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5: 180110. doi:10.1038/sdata.2018.110.

    Abstract

    We present a significant extension of the Brain Imaging Data Structure (BIDS) to support the specific aspects of magnetoencephalography (MEG) data. MEG measures brain activity with millisecond temporal resolution and unique source imaging capabilities. So far, BIDS was a solution to organise magnetic resonance imaging (MRI) data. The nature and acquisition parameters of MRI and MEG data are strongly dissimilar. Although there is no standard data format for MEG, we propose MEG-BIDS as a principled solution to store, organise, process and share the multidimensional data volumes produced by the modality. The standard also includes well-defined metadata, to facilitate future data harmonisation and sharing efforts. This responds to unmet needs from the multimodal neuroimaging community and paves the way to further integration of other techniques in electrophysiology. MEG-BIDS builds on MRI-BIDS, extending BIDS to a multimodal data structure. We feature several data-analytics software that have adopted MEG-BIDS, and a diverse sample of open MEG-BIDS data resources available to everyone.
  • Nobe, S., Furuyama, N., Someya, Y., Sekine, K., Suzuki, M., & Hayashi, K. (2008). A longitudinal study on gesture of simultaneous interpreter. The Japanese Journal of Speech Sciences, 8, 63-83.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Noordman, L. G. M., & Vonk, W. (1998). Memory-based processing in understanding causal information. Discourse Processes, 191-212. doi:10.1080/01638539809545044.

    Abstract

    The reading process depends both on the text and on the reader. When we read a text, propositions in the current input are matched to propositions in the memory representation of the previous discourse but also to knowledge structures in long‐term memory. Therefore, memory‐based text processing refers both to the bottom‐up processing of the text and to the top‐down activation of the reader's knowledge. In this article, we focus on the role of cognitive structures in the reader's knowledge. We argue that causality is an important category in structuring human knowledge and that this property has consequences for text processing. Some research is discussed that illustrates that the more the information in the text reflects causal categories, the more easily the information is processed.
  • Noppeney, U., Jones, S. A., Rohe, T., & Ferrari, A. (2018). See what you hear – How the brain forms representations across the senses. Neuroforum, 24(4), 257-271. doi:10.1515/nf-2017-A066.

    Abstract

    Our senses are constantly bombarded with a myriad of signals. To make sense of this cacophony, the brain needs to integrate signals emanating from a common source, but segregate signals originating from the different sources. Thus, multisensory perception relies critically on inferring the world’s causal structure (i.e. one common vs. multiple independent sources). Behavioural research has shown that the brain arbitrates between sensory integration and segregation consistent with the principles of Bayesian Causal Inference. At the neural level, recent functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) studies have shown that the brain accomplishes Bayesian Causal Inference by dynamically encoding multiple perceptual estimates across the sensory processing hierarchies. Only at the top of the hierarchy in anterior parietal cortices did the brain form perceptual estimates that take into account the observer’s uncertainty about the world’s causal structure consistent with Bayesian Causal Inference.
  • Norcliffe, E. (2018). Egophoricity and evidentiality in Guambiano (Nam Trik). In S. Floyd, E. Norcliffe, & L. San Roque (Eds.), Egophoricity (pp. 305-345). Amsterdam: Benjamins.

    Abstract

    Egophoric verbal marking is a typological feature common to Barbacoan languages, but otherwise unknown in the Andean sphere. The verbal systems of three out of the four living Barbacoan languages, Cha’palaa, Tsafiki and Awa Pit, have previously been shown to express egophoric contrasts. The status of Guambiano has, however, remained uncertain. In this chapter, I show that there are in fact two layers of egophoric or egophoric-like marking visible in Guambiano’s grammar. Guambiano patterns with certain other (non-Barbacoan) languages in having ego-categories which function within a broader evidential system. It is additionally possible to detect what is possibly a more archaic layer of egophoric marking in Guambiano’s verbal system. This marking may be inherited from a common Barbacoan system, thus pointing to a potential genealogical basis for the egophoric patterning common to these languages. The multiple formal expressions of egophoricity apparent both within and across the four languages reveal how egophoric contrasts are susceptible to structural renewal, suggesting a pan-Barbacoan preoccupation with the linguistic encoding of self-knowledge.
  • Norman, D. A., & Levelt, W. J. M. (1988). Life at the center. In W. Hirst (Ed.), The making of cognitive science: essays in honor of George A. Miller (pp. 100-109). Cambridge University Press.
  • Norris, D., & McQueen, J. M. (2008). Shortlist B: A Bayesian model of continuous speech recognition. Psychological Review, 115(2), 357-395. doi:10.1037/0033-295X.115.2.357.

    Abstract

    A Bayesian model of continuous speech recognition is presented. It is based on Shortlist ( D. Norris, 1994; D. Norris, J. M. McQueen, A. Cutler, & S. Butterfield, 1997) and shares many of its key assumptions: parallel competitive evaluation of multiple lexical hypotheses, phonologically abstract prelexical and lexical representations, a feedforward architecture with no online feedback, and a lexical segmentation algorithm based on the viability of chunks of the input as possible words. Shortlist B is radically different from its predecessor in two respects. First, whereas Shortlist was a connectionist model based on interactive-activation principles, Shortlist B is based on Bayesian principles. Second, the input to Shortlist B is no longer a sequence of discrete phonemes; it is a sequence of multiple phoneme probabilities over 3 time slices per segment, derived from the performance of listeners in a large-scale gating study. Simulations are presented showing that the model can account for key findings: data on the segmentation of continuous speech, word frequency effects, the effects of mispronunciations on word recognition, and evidence on lexical involvement in phonemic decision making. The success of Shortlist B suggests that listeners make optimal Bayesian decisions during spoken-word recognition.
  • Norris, D., McQueen, J. M., & Cutler, A. (2018). Commentary on “Interaction in spoken word recognition models”. Frontiers in Psychology, 9: 1568. doi:10.3389/fpsyg.2018.01568.
  • Norris, D., & Cutler, A. (1988). Speech recognition in French and English. MRC News, 39, 30-31.
  • Norris, D., & Cutler, A. (1988). The relative accessibility of phonemes and syllables. Perception and Psychophysics, 43, 541-550. Retrieved from http://www.psychonomic.org/search/view.cgi?id=8530.

    Abstract

    Previous research comparing detection times for syllables and for phonemes has consistently found that syllables are responded to faster than phonemes. This finding poses theoretical problems for strictly hierarchical models of speech recognition, in which smaller units should be able to be identified faster than larger units. However, inspection of the characteristics of previous experiments’ stimuli reveals that subjects have been able to respond to syllables on the basis of only a partial analysis of the stimulus. In the present experiment, five groups of subjects listened to identical stimulus material. Phoneme and syllable monitoring under standard conditions was compared with monitoring under conditions in which near matches of target and stimulus occurred on no-response trials. In the latter case, when subjects were forced to analyze each stimulus fully, phonemes were detected faster than syllables.
  • Obleser, J., Eisner, F., & Kotz, S. A. (2008). Bilateral speech comprehension reflects differential sensitivity to spectral and temporal features. Journal of Neuroscience, 28(32), 8116-8124. doi:10.1523/JNEUROSCI.1290-08.2008.

    Abstract

    Speech comprehension has been shown to be a strikingly bilateral process, but the differential contributions of the subfields of left and right auditory cortices have remained elusive. The hypothesis that left auditory areas engage predominantly in decoding fast temporal perturbations of a signal whereas the right areas are relatively more driven by changes of the frequency spectrum has not been directly tested in speech or music. This brain-imaging study independently manipulated the speech signal itself along the spectral and the temporal domain using noise-band vocoding. In a parametric design with five temporal and five spectral degradation levels in word comprehension, a functional distinction of the left and right auditory association cortices emerged: increases in the temporal detail of the signal were most effective in driving brain activation of the left anterolateral superior temporal sulcus (STS), whereas the right homolog areas exhibited stronger sensitivity to the variations in spectral detail. In accordance with behavioral measures of speech comprehension acquired in parallel, change of spectral detail exhibited a stronger coupling with the STS BOLD signal. The relative pattern of lateralization (quantified using lateralization quotients) proved reliable in a jack-knifed iterative reanalysis of the group functional magnetic resonance imaging model. This study supplies direct evidence to the often implied functional distinction of the two cerebral hemispheres in speech processing. Applying direct manipulations to the speech signal rather than to low-level surrogates, the results lend plausibility to the notion of complementary roles for the left and right superior temporal sulci in comprehending the speech signal.
  • O'Brien, D. P., & Bowerman, M. (1998). Martin D. S. Braine (1926–1996): Obituary. American Psychologist, 53, 563. doi:10.1037/0003-066X.53.5.563.

    Abstract

    Memorializes Martin D. S. Braine, whose research on child language acquisition and on both child and adult thinking and reasoning had a major influence on modern cognitive psychology. Addressing meaning as well as position, Braine argued that children start acquiring language by learning narrow-scope positional formulas that map components of meaning to positions in the utterance. These proposals were critical in starting discussions of the possible universality of the pivot-grammar stage and of the role of syntax, semantics, and pragmatics in children's early grammar and were pivotal to the rise of approaches in which cognitive development in language acquisition is stressed.
  • O'Shannessy, C. (2005). Light Warlpiri: A new language. Australian Journal of Linguistics, 25(1), 31-57. doi:10.1080/07268600500110472.
  • Ostarek, M., Ishag, I., Joosen, D., & Huettig, F. (2018). Saccade trajectories reveal dynamic interactions of semantic and spatial information during the processing of implicitly spatial words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(10), 1658-1670. doi:10.1037/xlm0000536.

    Abstract

    Implicit up/down words, such as bird and foot, systematically influence performance on visual tasks involving immediately following targets in compatible vs. incompatible locations. Recent studies have observed that the semantic relation between prime words and target pictures can strongly influence the size and even the direction of the effect: Semantically related targets are processed faster in congruent vs. incongruent locations (location-specific priming), whereas unrelated targets are processed slower in congruent locations. Here, we used eye-tracking to investigate the moment-to-moment processes underlying this pattern. Our reaction time results for related targets replicated the location-specific priming effect and showed a trend towards interference for unrelated targets. We then used growth curve analysis to test how up/down words and their match vs. mismatch with immediately following targets in terms of semantics and vertical location influences concurrent saccadic eye movements. There was a strong main effect of spatial association on linear growth with up words biasing changes in y-coordinates over time upwards relative to down words (and vice versa). Similar to the RT data, this effect was strongest for semantically related targets and reversed for unrelated targets. Intriguingly, all conditions showed a bias in the congruent direction in the initial stage of the saccade. Then, at around halfway into the saccade the effect kept increasing in the semantically related condition, and reversed in the unrelated condition. These results suggest that online processing of up/down words triggers direction-specific oculomotor processes that are dynamically modulated by the semantic relation between prime words and targets.
  • Otten, M., & Van Berkum, J. J. A. (2008). Discourse-based word anticipation during language processing: Prediction or priming? Discourse Processes, 45, 464-496. doi:10.1080/01638530802356463.

    Abstract

    Language is an intrinsically open-ended system. This fact has led to the widely shared assumption that readers and listeners do not predict upcoming words, at least not in a way that goes beyond simple priming between words. Recent evidence, however, suggests that readers and listeners do anticipate upcoming words “on the fly” as a text unfolds. In 2 event-related potentials experiments, this study examined whether these predictions are based on the exact message conveyed by the prior discourse or on simpler word-based priming mechanisms. Participants read texts that strongly supported the prediction of a specific word, mixed with non-predictive control texts that contained the same prime words. In Experiment 1A, anomalous words that replaced a highly predictable (as opposed to a non-predictable but coherent) word elicited a long-lasting positive shift, suggesting that the prior discourse had indeed led people to predict specific words. In Experiment 1B, adjectives whose suffix mismatched the predictable noun's syntactic gender elicited a short-lived late negativity in predictive stories but not in prime control stories. Taken together, these findings reveal that the conceptual basis for predicting specific upcoming words during reading is the exact message conveyed by the discourse and not the mere presence of prime words.
  • Ozker, M., Yoshor, D., & Beauchamp, M. (2018). Converging evidence from electrocorticography and BOLD fMRI for a sharp functional boundary in superior temporal gyrus related to multisensory speech processing. Frontiers in Human Neuroscience, 12: 141. doi:10.3389/fnhum.2018.00141.

    Abstract

    Although humans can understand speech using the auditory modality alone, in noisy environments visual speech information from the talker’s mouth can rescue otherwise unintelligible auditory speech. To investigate the neural substrates of multisensory speech perception, we compared neural activity from the human superior temporal gyrus (STG) in two datasets. One dataset consisted of direct neural recordings (electrocorticography, ECoG) from surface electrodes implanted in epilepsy patients (this dataset has been previously published). The second dataset consisted of indirect measures of neural activity using blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI). Both ECoG and fMRI participants viewed the same clear and noisy audiovisual speech stimuli and performed the same speech recognition task. Both techniques demonstrated a sharp functional boundary in the STG, spatially coincident with an anatomical boundary defined by the posterior edge of Heschl’s gyrus. Cortex on the anterior side of the boundary responded more strongly to clear audiovisual speech than to noisy audiovisual speech while cortex on the posterior side of the boundary did not. For both ECoG and fMRI measurements, the transition between the functionally distinct regions happened within 10 mm of anterior-to-posterior distance along the STG. We relate this boundary to the multisensory neural code underlying speech perception and propose that it represents an important functional division within the human speech perception network.
  • Ozker, M., Yoshor, D., & Beauchamp, M. (2018). Frontal cortex selects representations of the talker’s mouth to aid in speech perception. eLife, 7: e30387. doi:10.7554/eLife.30387.
  • Ozyurek, A. (2018). Cross-linguistic variation in children’s multimodal utterances. In M. Hickmann, E. Veneziano, & H. Jisa (Eds.), Sources of variation in first language acquisition: Languages, contexts, and learners (pp. 123-138). Amsterdam: Benjamins.

    Abstract

    Our ability to use language is multimodal and requires tight coordination between what is expressed in speech and in gesture, such as pointing or iconic gestures that convey semantic, syntactic and pragmatic information related to speakers’ messages. Interestingly, what is expressed in gesture and how it is coordinated with speech differs in speakers of different languages. This paper discusses recent findings on the development of children’s multimodal expressions taking cross-linguistic variation into account. Although some aspects of speech-gesture development show language-specificity from an early age, it might still take children until nine years of age to exhibit fully adult patterns of cross-linguistic variation. These findings reveal insights about how children coordinate different levels of representations given that their development is constrained by patterns that are specific to their languages.
  • Ozyurek, A., Kita, S., Allen, S., Brown, A., Furman, R., & Ishizuka, T. (2008). Development of cross-linguistic variation in speech and gesture: motion events in English and Turkish. Developmental Psychology, 44(4), 1040-1054. doi:10.1037/0012-1649.44.4.1040.

    Abstract

    The way adults express manner and path components of a motion event varies across typologically different languages both in speech and cospeech gestures, showing that language specificity in event encoding influences gesture. The authors tracked when and how this multimodal cross-linguistic variation develops in children learning Turkish and English, 2 typologically distinct languages. They found that children learn to speak in language-specific ways from age 3 onward (i.e., English speakers used 1 clause and Turkish speakers used 2 clauses to express manner and path). In contrast, English- and Turkish-speaking children’s gestures looked similar at ages 3 and 5 (i.e., separate gestures for manner and path), differing from each other only at age 9 and in adulthood (i.e., English speakers used 1 gesture, but Turkish speakers used separate gestures for manner and path). The authors argue that this pattern of the development of cospeech gestures reflects a gradual shift to language-specific representations during speaking and shows that looking at speech alone may not be sufficient to understand the full process of language acquisition.
  • Ozyurek, A., Kita, S., Allen, S., Furman, R., & Brown, A. (2005). How does linguistic framing of events influence co-speech gestures? Insights from crosslinguistic variations and similarities. Gesture, 5(1/2), 219-240.

    Abstract

    What are the relations between linguistic encoding and gestural representations of events during online speaking? The few studies that have been conducted on this topic have yielded somewhat incompatible results with regard to whether and how gestural representations of events change with differences in the preferred semantic and syntactic encoding possibilities of languages. Here we provide large scale semantic, syntactic and temporal analyses of speech-gesture pairs that depict 10 different motion events from 20 Turkish and 20 English speakers. We find that the gestural representations of the same events differ across languages when they are encoded by different syntactic frames (i.e., verb-framed or satellite-framed). However, where there are similarities across languages, such as omission of a certain element of the event in the linguistic encoding, gestural representations also look similar and omit the same content. The results are discussed in terms of what gestures reveal about the influence of language specific encoding on on-line thinking patterns and the underlying interactions between speech and gesture during the speaking process.
  • Ozyurek, A. (2018). Role of gesture in language processing: Toward a unified account for production and comprehension. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), Oxford Handbook of Psycholinguistics (2nd ed., pp. 592-607). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198786825.013.25.

    Abstract

    Use of language in face-to-face context is multimodal. Production and perception of speech take place in the context of visual articulators such as lips, face, or hand gestures which convey relevant information to what is expressed in speech at different levels of language. While lips convey information at the phonological level, gestures contribute to semantic, pragmatic, and syntactic information, as well as to discourse cohesion. This chapter overviews recent findings showing that speech and gesture (e.g. a drinking gesture as someone says, “Would you like a drink?”) interact during production and comprehension of language at the behavioral, cognitive, and neural levels. Implications of these findings for current psycholinguistic theories and how they can be expanded to consider the multimodal context of language processing are discussed.
  • Palva, J. M., Wang, S. H., Palva, S., Zhigalov, A., Monto, S., Brookes, M. J., & Schoffelen, J.-M. (2018). Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures. NeuroImage, 173, 632-643. doi:10.1016/j.neuroimage.2018.02.032.

    Abstract

    When combined with source modeling, magneto- (MEG) and electroencephalography (EEG) can be used to study long-range interactions among cortical processes non-invasively. Estimation of such inter-areal connectivity is nevertheless hindered by instantaneous field spread and volume conduction, which artificially introduce linear correlations and impair source separability in cortical current estimates. To overcome the inflating effects of linear source mixing inherent to standard interaction measures, alternative phase- and amplitude-correlation based connectivity measures, such as imaginary coherence and orthogonalized amplitude correlation, have been proposed. Being by definition insensitive to zero-lag correlations, these techniques have become increasingly popular in the identification of correlations that cannot be attributed to field spread or volume conduction. We show here, however, that while these measures are immune to the direct effects of linear mixing, they may still reveal large numbers of spurious false positive connections through field spread in the vicinity of true interactions. This fundamental problem affects both region-of-interest-based analyses and all-to-all connectome mappings. Most importantly, beyond defining and illustrating the problem of spurious, or “ghost” interactions, we provide a rigorous quantification of this effect through extensive simulations. Additionally, we further show that signal mixing also significantly limits the separability of neuronal phase and amplitude correlations. We conclude that spurious correlations must be carefully considered in connectivity analyses in MEG/EEG source space even when using measures that are immune to zero-lag correlations.
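    The imaginary-coherence measure discussed above can be computed directly from cross- and auto-spectra. The following Python sketch is a generic illustration of that measure for two simulated source time courses; it is not the authors' simulation code, and the sampling rate, signal content, and Welch parameters are assumptions chosen for the example.

    ```python
    # Generic illustration (not the authors' simulation code): imaginary coherence
    # between two source time courses, a measure insensitive to zero-lag mixing.
    import numpy as np
    from scipy.signal import welch, csd

    def imaginary_coherence(x, y, fs, nperseg=1024):
        """|Im(coherency)| as a function of frequency for signals x and y."""
        f, pxy = csd(x, y, fs=fs, nperseg=nperseg)
        _, pxx = welch(x, fs=fs, nperseg=nperseg)
        _, pyy = welch(y, fs=fs, nperseg=nperseg)
        coherency = pxy / np.sqrt(pxx * pyy)
        return f, np.abs(coherency.imag)

    # Invented example: a shared 10 Hz rhythm, phase-lagged in the second signal
    fs = 250
    t = np.arange(0, 120, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
    y = np.sin(2 * np.pi * 10 * t - np.pi / 4) + np.random.randn(t.size)
    f, icoh = imaginary_coherence(x, y, fs)
    print(f"imaginary coherence near 10 Hz: {icoh[np.argmin(np.abs(f - 10))]:.2f}")
    ```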
  • Pascucci, D., Hervais-Adelman, A., & Plomp, G. (2018). Gating by induced A-Gamma asynchrony in selective attention. Human Brain Mapping, 39(10), 3854-3870. doi:10.1002/hbm.24216.

    Abstract

    Visual selective attention operates through top–down mechanisms of signal enhancement and suppression, mediated by alpha-band oscillations. The effects of such top–down signals on local processing in primary visual cortex (V1) remain poorly understood. In this work, we characterize the interplay between large-scale interactions and local activity changes in V1 that orchestrates selective attention, using Granger causality and phase-amplitude coupling (PAC) analysis of EEG source signals. The task required participants to either attend to or ignore oriented gratings. Results from time-varying, directed connectivity analysis revealed frequency-specific effects of attentional selection: bottom–up gamma-band influences from visual areas increased rapidly in response to attended stimuli while distributed top–down alpha-band influences originated from parietal cortex in response to ignored stimuli. Importantly, the results revealed a critical interplay between top–down parietal signals and alpha–gamma PAC in visual areas. Parietal alpha-band influences disrupted the alpha–gamma coupling in visual cortex, which in turn reduced the amount of gamma-band outflow from visual areas. Our results are a first demonstration of how directed interactions affect cross-frequency coupling in downstream areas depending on task demands. These findings suggest that parietal cortex realizes selective attention by disrupting cross-frequency coupling at target regions, which prevents them from propagating task-irrelevant information.
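    As a companion to the abstract above, the following Python sketch shows one standard way to quantify phase-amplitude coupling, the mean-vector-length estimate; it is not the pipeline used in the study, and the band limits and synthetic test signal are assumptions for illustration.

    ```python
    # Minimal sketch of one common phase-amplitude coupling estimate (mean vector
    # length); band limits and the test signal are assumptions, and this is not
    # the pipeline used in the study above.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def bandpass(x, fs, lo, hi, order=4):
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    def pac_mvl(x, fs, phase_band=(8, 12), amp_band=(40, 90)):
        """Mean-vector-length coupling between low-frequency phase and high-frequency amplitude."""
        phase = np.angle(hilbert(bandpass(x, fs, *phase_band)))
        amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))
        return np.abs(np.mean(amp * np.exp(1j * phase)))

    # Synthetic signal whose gamma amplitude waxes and wanes with alpha phase
    fs = 500
    t = np.arange(0, 30, 1 / fs)
    alpha = np.sin(2 * np.pi * 10 * t)
    gamma = (1 + alpha) * np.sin(2 * np.pi * 60 * t)
    signal = alpha + 0.5 * gamma + 0.5 * np.random.randn(t.size)
    print(f"PAC (mean vector length): {pac_mvl(signal, fs):.3f}")
    ```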
  • Patel, A. D., Iversen, J. R., Wassenaar, M., & Hagoort, P. (2008). Musical syntactic processing in agrammatic Broca's aphasia. Aphasiology, 22(7/8), 776-789. doi:10.1080/02687030701803804.

    Abstract

    Background: Growing evidence for overlap in the syntactic processing of language and music in non-brain-damaged individuals leads to the question of whether aphasic individuals with grammatical comprehension problems in language also have problems processing structural relations in music.

    Aims: The current study sought to test musical syntactic processing in individuals with Broca's aphasia and grammatical comprehension deficits, using both explicit and implicit tasks.

    Methods & Procedures: Two experiments were conducted. In the first experiment 12 individuals with Broca's aphasia (and 14 matched controls) were tested for their sensitivity to grammatical and semantic relations in sentences, and for their sensitivity to musical syntactic (harmonic) relations in chord sequences. An explicit task (acceptability judgement of novel sequences) was used. The second experiment, with 9 individuals with Broca's aphasia (and 12 matched controls), probed musical syntactic processing using an implicit task (harmonic priming).

    Outcomes & Results: In both experiments the aphasic group showed impaired processing of musical syntactic relations. Control experiments indicated that this could not be attributed to low-level problems with the perception of pitch patterns or with auditory short-term memory for tones.

    Conclusions: The results suggest that musical syntactic processing in agrammatic aphasia deserves systematic investigation, and that such studies could help probe the nature of the processing deficits underlying linguistic agrammatism. Methodological suggestions are offered for future work in this little-explored area.
  • Pawley, A., & Hammarström, H. (2018). The Trans New Guinea family. In B. Palmer (Ed.), Papuan Languages and Linguistics (pp. 21-196). Berlin: De Gruyter Mouton.
  • Pederson, E., Danziger, E., Wilkins, D. G., Levinson, S. C., Kita, S., & Senft, G. (1998). Semantic typology and spatial conceptualization. Language, 74(3), 557-589. doi:10.2307/417793.
  • Peeters, D. (2018). A standardized set of 3D-objects for virtual reality research and applications. Behavior Research Methods, 50(3), 1047-1054. doi:10.3758/s13428-017-0925-3.

    Abstract

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theory in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3D-objects for virtual reality research is important, as reaching valid theoretical conclusions critically hinges on the use of well controlled experimental stimuli. Sharing standardized 3D-objects across different virtual reality labs will allow for science to move forward more quickly.
  • Peeters, D., & Dijkstra, T. (2018). Sustained inhibition of the native language in bilingual language production: A virtual reality approach. Bilingualism: Language and Cognition, 21(5), 1035-1061. doi:10.1017/S1366728917000396.

    Abstract

    Bilinguals often switch languages as a function of the language background of their addressee. The control mechanisms supporting bilinguals' ability to select the contextually appropriate language are heavily debated. Here we present four experiments in which unbalanced bilinguals named pictures in their first language Dutch and their second language English in mixed and blocked contexts. Immersive virtual reality technology was used to increase the ecological validity of the cued language-switching paradigm. Behaviorally, we consistently observed symmetrical switch costs, reversed language dominance, and asymmetrical mixing costs. These findings indicate that unbalanced bilinguals apply sustained inhibition to their dominant L1 in mixed language settings. Consequent enhanced processing costs for the L1 in a mixed versus a blocked context were reflected by a sustained positive component in event-related potentials. Methodologically, the use of virtual reality opens up a wide range of possibilities to study language and communication in bilingual and other communicative settings.
  • Penke, M., Janssen, U., Indefrey, P., & Seitz, R. (2005). No evidence for a rule/procedural deficit in German patients with Parkinson's disease. Brain and Language, 95(1), 139-140. doi:10.1016/j.bandl.2005.07.078.
  • Perlman, M., Little, H., Thompson, B., & Thompson, R. L. (2018). Iconicity in signed and spoken vocabulary: A comparison between American Sign Language, British Sign Language, English, and Spanish. Frontiers in Psychology, 9: 1433. doi:10.3389/fpsyg.2018.01433.

    Abstract

    Considerable evidence now shows that all languages, signed and spoken, exhibit a significant amount of iconicity. We examined how the visual-gestural modality of signed languages facilitates iconicity for different kinds of lexical meanings compared to the auditory-vocal modality of spoken languages. We used iconicity ratings of hundreds of signs and words to compare iconicity across the vocabularies of two signed languages – American Sign Language and British Sign Language, and two spoken languages – English and Spanish. We examined (1) the correlation in iconicity ratings between the languages; (2) the relationship between iconicity and an array of semantic variables (ratings of concreteness, sensory experience, imageability, perceptual strength of vision, audition, touch, smell and taste); (3) how iconicity varies between broad lexical classes (nouns, verbs, adjectives, grammatical words and adverbs); and (4) between more specific semantic categories (e.g., manual actions, clothes, colors). The results show several notable patterns that characterize how iconicity is spread across the four vocabularies. There were significant correlations in the iconicity ratings between the four languages, including English with ASL, BSL, and Spanish. The highest correlation was between ASL and BSL, suggesting iconicity may be more transparent in signs than words. In each language, iconicity was distributed according to the semantic variables in ways that reflect the semiotic affordances of the modality (e.g., more concrete meanings more iconic in signs, not words; more auditory meanings more iconic in words, not signs; more tactile meanings more iconic in both signs and words). Analysis of the 220 meanings with ratings in all four languages further showed characteristic patterns of iconicity across broad and specific semantic domains, including those that distinguished between signed and spoken languages (e.g., verbs more iconic in ASL, BSL, and English, but not Spanish; manual actions especially iconic in ASL and BSL; adjectives more iconic in English and Spanish; color words especially low in iconicity in ASL and BSL). These findings provide the first quantitative account of how iconicity is spread across the lexicons of signed languages in comparison to spoken languages
  • Perniss, P. M., & Ozyurek, A. (2008). Representations of action, motion and location in sign space: A comparison of German (DGS) and Turkish (TID) sign language narratives. In J. Quer (Ed.), Signs of the time: Selected papers from TISLR 8 (pp. 353-376). Seedorf: Signum Press.
  • Perniss, P. M., & Zeshan, U. (2008). Possessive and existential constructions in Kata Kolok (Bali). In Possessive and existential constructions in sign languages. Nijmegen: Ishara Press.
  • Perniss, P. M., & Zeshan, U. (2008). Possessive and existential constructions: Introduction and overview. In Possessive and existential constructions in sign languages (pp. 1-31). Nijmegen: Ishara Press.
  • Perry, L. K., Perlman, M., Winter, B., Massaro, D. W., & Lupyan, G. (2018). Iconicity in the speech of children and adults. Developmental Science, 21: e12572. doi:10.1111/desc.12572.

    Abstract

    Iconicity – the correspondence between form and meaning – may help young children learn to use new words. Early-learned words are higher in iconicity than later learned words. However, it remains unclear what role iconicity may play in actual language use. Here, we ask whether iconicity relates not just to the age at which words are acquired, but also to how frequently children and adults use the words in their speech. If iconicity serves to bootstrap word learning, then we would expect that children should say highly iconic words more frequently than less iconic words, especially early in development. We would also expect adults to use iconic words more often when speaking to children than to other adults. We examined the relationship between frequency and iconicity for approximately 2000 English words. Replicating previous findings, we found that more iconic words are learned earlier. Moreover, we found that more iconic words tend to be used more by younger children, and adults use more iconic words when speaking to children than to other adults. Together, our results show that young children not only learn words rated high in iconicity earlier than words low in iconicity, but they also produce these words more frequently in conversation – a pattern that is reciprocated by adults when speaking with children. Thus, the earliest conversations of children are relatively higher in iconicity, suggesting that this iconicity scaffolds the production and comprehension of spoken language during early development.
  • Petersson, K. M. (1998). Comments on a Monte Carlo approach to the analysis of functional neuroimaging data. NeuroImage, 8, 108-112.
  • Petersson, K. M. (2005). On the relevance of the neurobiological analogue of the finite-state architecture. Neurocomputing, 65(66), 825-832. doi:10.1016/j.neucom.2004.10.108.

    Abstract

    We present two simple arguments for the potential relevance of a neurobiological analogue of the finite-state architecture. The first assumes the classical cognitive framework, is well-known, and is based on the assumption that the brain is finite with respect to its memory organization. The second is formulated within a general dynamical systems framework and is based on the assumption that the brain sustains some level of noise and/or does not utilize infinite precision processing. We briefly review the classical cognitive framework based on Church–Turing computability and non-classical approaches based on analog processing in dynamical systems. We conclude that the dynamical neurobiological analogue of the finite-state architecture appears to be relevant, at least at an implementational level, for cognitive brain systems.
  • Piai, V., Rommers, J., & Knight, R. T. (2018). Lesion evidence for a critical role of left posterior but not frontal areas in alpha–beta power decreases during context-driven word production. European Journal of Neuroscience, 48(7), 2622-2629. doi:10.1111/ejn.13695.

    Abstract

    Different frequency bands in the electroencephalogram are postulated to support distinct language functions. Studies have suggested that alpha–beta power decreases may index word-retrieval processes. In context-driven word retrieval, participants hear lead-in sentences that either constrain the final word (‘He locked the door with the’) or not (‘She walked in here with the’). The last word is shown as a picture to be named. Previous studies have consistently found alpha–beta power decreases prior to picture onset for constrained relative to unconstrained sentences, localised to the left lateral-temporal and lateral-frontal lobes. However, the relative contribution of temporal versus frontal areas to alpha–beta power decreases is unknown. We recorded the electroencephalogram from patients with stroke lesions encompassing the left lateral-temporal and inferior-parietal regions or left-lateral frontal lobe and from matched controls. Individual participant analyses revealed a behavioural sentence context facilitation effect in all participants, except for in the two patients with extensive lesions to temporal and inferior parietal lobes. We replicated the alpha–beta power decreases prior to picture onset in all participants, except for in the two same patients with extensive posterior lesions. Thus, whereas posterior lesions eliminated the behavioural and oscillatory context effect, frontal lesions did not. Hierarchical clustering analyses of all patients’ lesion profiles, and behavioural and electrophysiological effects identified those two patients as having a unique combination of lesion distribution and context effects. These results indicate a critical role for the left lateral-temporal and inferior parietal lobes, but not frontal cortex, in generating the alpha–beta power decreases underlying context-driven word production.
  • Piepers, J., & Redl, T. (2018). Gender-mismatching pronouns in context: The interpretation of possessive pronouns in Dutch and Limburgian. In B. Le Bruyn, & J. Berns (Eds.), Linguistics in the Netherlands 2018 (pp. 97-110). Amsterdam: Benjamins.

    Abstract

    Gender-(mis)matching pronouns have been studied extensively in experiments. However, a phenomenon common to various languages has thus far been overlooked: the systemic use of non-feminine pronouns when referring to female individuals. The present study is the first to provide experimental insights into the interpretation of such a pronoun: Limburgian zien ‘his/its’ and Dutch zijn ‘his/its’ are grammatically ambiguous between masculine and neuter, but while Limburgian zien can refer to women, the Dutch equivalent zijn cannot. Employing an acceptability judgment task, we presented speakers of Limburgian (N = 51) with recordings of sentences in Limburgian featuring zien, and speakers of Dutch (N = 52) with Dutch translations of these sentences featuring zijn. All sentences featured a potential male or female antecedent embedded in a stereotypically male or female context. We found that ratings were higher for sentences in which the pronoun could refer back to the antecedent. For Limburgians, this extended to sentences mentioning female individuals. Context further modulated sentence appreciation. Possible mechanisms regarding the interpretation of zien as coreferential with a female individual will be discussed.
  • Pika, S., Wilkinson, R., Kendrick, K. H., & Vernes, S. C. (2018). Taking turns: Bridging the gap between human and animal communication. Proceedings of the Royal Society B: Biological Sciences, 285(1880): 20180598. doi:10.1098/rspb.2018.0598.

    Abstract

    Language, humans’ most distinctive trait, still remains a ‘mystery’ for evolutionary theory. It is underpinned by a universal infrastructure—cooperative turn-taking—which has been suggested as an ancient mechanism bridging the existing gap between the articulate human species and their inarticulate primate cousins. However, we know remarkably little about turn-taking systems of non-human animals, and methodological confounds have often prevented meaningful cross-species comparisons. Thus, the extent to which cooperative turn-taking is uniquely human or represents a homologous and/or analogous trait is currently unknown. The present paper draws attention to this promising research avenue by providing an overview of the state of the art of turn-taking in four animal taxa—birds, mammals, insects and anurans. It concludes with a new comparative framework to spur more research into this research domain and to test which elements of the human turn-taking system are shared across species and taxa.
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1998). Comparing different models of the development of the English verb category. Linguistics, 36(4), 807-830. doi:10.1515/ling.1998.36.4.807.

    Abstract

    In this study data from the first six months of 12 children's multiword speech were used to test the validity of Valian's (1991) syntactic performance-limitation account and Tomasello's (1992) verb-island account of early multiword speech with particular reference to the development of the English verb category. The results provide evidence for appropriate use of verb morphology, auxiliary verb structures, pronoun case marking, and SVO word order from quite early in development. However, they also demonstrate a great deal of lexical specificity in the children's use of these systems, evidenced by a lack of overlap in the verbs to which different morphological markers were applied, a lack of overlap in the verbs with which different auxiliary verbs were used, a disproportionate use of the first person singular nominative pronoun I, and a lack of overlap in the lexical items that served as the subjects and direct objects of transitive verbs. These findings raise problems for both a syntactic performance-limitation account and a strong verb-island account of the data and suggest the need to develop a more general lexicalist account of early multiword speech that explains why some words come to function as "islands" of organization in the child's grammar and others do not.
  • Pine, J. M., Rowland, C. F., Lieven, E. V., & Theakston, A. L. (2005). Testing the Agreement/Tense Omission Model: Why the data on children's use of non-nominative 3psg subjects count against the ATOM. Journal of Child Language, 32(2), 269-289. doi:10.1017/S0305000905006860.

    Abstract

    One of the most influential recent accounts of pronoun case-marking errors in young children's speech is Schütze & Wexler's (1996) Agreement/Tense Omission Model (ATOM). The ATOM predicts that the rate of agreeing verbs with non-nominative subjects will be so low that such errors can be reasonably disregarded as noise in the data. The present study tests this prediction on data from 12 children between the ages of 1;8.22 and 3;0.10. This is done, first, by identifying children who produced a reasonably large number of non-nominative 3psg subjects; second, by estimating the expected rate of agreeing verbs with masculine and feminine non-nominative subjects in these children's speech; and, third, by examining the actual rate at which agreeing verb forms occurred with non-nominative subjects in those areas of the data in which the expected error rate was significantly greater than 10%. The results show, first, that only three of the children produced enough non-nominative subjects to allow a reasonable test of the ATOM to be made; second, that for all three of these children, the only area of the data in which the expected frequency of agreeing verbs with non-nominative subjects was significantly greater than 10% was their use of feminine case-marked subjects; and third, that for all three of these children, the rate of agreeing verbs with non-nominative feminine subjects was over 30%. These results raise serious doubts about the claim that children's use of non-nominative subjects can be explained in terms of AGR optionality, and suggest the need for a model of pronoun case-marking error that can explain why some children produce agreeing verb forms with non-nominative subjects as often as they do.
  • Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2005). Articulatory planning is continuous and sensitive to informational redundancy. Phonetica, 62(2-4), 146-159. doi:10.1159/000090095.

    Abstract

    This study investigates the relationship between word repetition, predictability from neighbouring words, and articulatory reduction in Dutch. For the seven most frequent words ending in the adjectival suffix -lijk, 40 occurrences were randomly selected from a large database of face-to-face conversations. Analysis of the selected tokens showed that the degree of articulatory reduction (as measured by duration and number of realized segments) was affected by repetition, predictability from the previous word and predictability from the following word. Interestingly, not all of these effects were significant across morphemes and target words. Repetition effects were limited to suffixes, while effects of predictability from the previous word were restricted to the stems of two of the seven target words. Predictability from the following word affected the stems of all target words equally, but not all suffixes. The implications of these findings for models of speech production are discussed.
  • Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2005). Lexical frequency and acoustic reduction in spoken Dutch. Journal of the Acoustical Society of America, 118(4), 2561-2569. doi:10.1121/1.2011150.

    Abstract

    This study investigates the effects of lexical frequency on the durational reduction of morphologically complex words in spoken Dutch. The hypothesis that high-frequency words are more reduced than low-frequency words was tested by comparing the durations of affixes occurring in different carrier words. Four Dutch affixes were investigated, each occurring in a large number of words with different frequencies. The materials came from a large database of face-to-face conversations. For each word containing a target affix, one token was randomly selected for acoustic analysis. Measurements were made of the duration of the affix as a whole and the durations of the individual segments in the affix. For three of the four affixes, a higher frequency of the carrier word led to shorter realizations of the affix as a whole, individual segments in the affix, or both. Other relevant factors were the sex and age of the speaker, segmental context, and speech rate. To accommodate for these findings, models of speech production should allow word frequency to affect the acoustic realizations of lower-level units, such as individual speech sounds occurring in affixes.
  • Poletiek, F. H. (1998). De geest van de jury. Psychologie en Maatschappij, 4, 376-378.
  • Poletiek, F. H., & Van den Bos, E. J. (2005). Het onbewuste is een dader met een motief. De Psycholoog, 40(1), 11-17.
  • Poletiek, F. H. (2008). Het probleem van escalerende beschuldigingen [Boekbespreking van Kindermishandeling door H. Crombag en den Hartog]. Maandblad voor Geestelijke Volksgezondheid, (2), 163-166.
  • Poletiek, F. H. (2005). The proof of the pudding is in the eating: Translating Popper's philosophy into a model for testing behaviour. In K. I. Manktelow, & M. C. Chung (Eds.), Psychology of reasoning: Theoretical and historical perspectives (pp. 333-347). Hove: Psychology Press.
  • Poletiek, F. H., Conway, C. M., Ellefson, M. R., Lai, J., Bocanegra, B. R., & Christiansen, M. H. (2018). Under what conditions can recursion be learned? Effects of starting small in artificial grammar learning of recursive structure. Cognitive Science, 42(8), 2855-2889. doi:10.1111/cogs.12685.

    Abstract

    It has been suggested that external and/or internal limitations paradoxically may lead to superior learning, that is, the concepts of starting small and less is more (Elman, 1993; Newport, 1990). In this paper, we explore the type of incremental ordering during training that might help learning, and what mechanism explains this facilitation. We report four artificial grammar learning experiments with human participants. In Experiments 1a and 1b we found a beneficial effect of starting small using two types of simple recursive grammars: right‐branching and center‐embedding, with recursive embedded clauses in fixed positions and fixed length. This effect was replicated in Experiment 2 (N = 100). In Experiment 3 and 4, we used a more complex center‐embedded grammar with recursive loops in variable positions, producing strings of variable length. When participants were presented an incremental ordering of training stimuli, as in natural language, they were better able to generalize their knowledge of simple units to more complex units when the training input “grew” according to structural complexity, compared to when it “grew” according to string length. Overall, the results suggest that starting small confers an advantage for learning complex center‐embedded structures when the input is organized according to structural complexity.
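    To make the two grammar types named in the abstract concrete, the sketch below generates strings with adjacent (right-branching) versus nested (centre-embedded) dependencies. The element inventories and pairing scheme are invented for illustration and are not the study's stimulus materials.

    ```python
    # Illustrative sketch (not the materials of the study): strings from a toy
    # right-branching vs. centre-embedded artificial grammar, where each A-element
    # must be paired with a matching B-element.
    import random

    A = ["a1", "a2", "a3"]
    B = ["b1", "b2", "b3"]

    def right_branching(depth):
        """Adjacent dependencies: a_i b_i a_j b_j ..."""
        idx = [random.randrange(len(A)) for _ in range(depth)]
        return " ".join(f"{A[i]} {B[i]}" for i in idx)

    def center_embedded(depth):
        """Nested dependencies: a_i a_j ... b_j b_i"""
        idx = [random.randrange(len(A)) for _ in range(depth)]
        return " ".join(A[i] for i in idx) + " " + " ".join(B[i] for i in reversed(idx))

    print(right_branching(3))   # e.g. 'a2 b2 a1 b1 a3 b3'
    print(center_embedded(3))   # e.g. 'a2 a1 a3 b3 b1 b2'
    ```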
  • Popov, T., Jensen, O., & Schoffelen, J.-M. (2018). Dorsal and ventral cortices are coupled by cross-frequency interactions during working memory. NeuroImage, 178, 277-286. doi:10.1016/j.neuroimage.2018.05.054.

    Abstract

    Oscillatory activity in the alpha and gamma bands is considered key in shaping functional brain architecture. Power increases in the high-frequency gamma band are typically reported in parallel to decreases in the low-frequency alpha band. However, their functional significance and in particular their interactions are not well understood. The present study shows that, in the context of an N-back working memory task, alpha power decreases in the dorsal visual stream are related to gamma power increases in early visual areas. Granger causality analysis revealed directed interregional interactions from dorsal to ventral stream areas, in accordance with task demands. Present results reveal a robust, behaviorally relevant, and architectonically decisive power-to-power relationship between alpha and gamma activity. This relationship suggests that anatomically distant power fluctuations in oscillatory activity can link cerebral network dynamics on a trial-by-trial basis during cognitive operations such as working memory.
  • Popov, T., Oostenveld, R., & Schoffelen, J.-M. (2018). FieldTrip made easy: An analysis protocol for group analysis of the auditory steady state brain response in time, frequency, and space. Frontiers in Neuroscience, 12: 711. doi:10.3389/fnins.2018.00711.

    Abstract

    The auditory steady state evoked response (ASSR) is a robust and frequently utilized phenomenon in psychophysiological research. It reflects the auditory cortical response to an amplitude-modulated constant carrier frequency signal. The present report provides a concrete example of a group analysis of the EEG data from 29 healthy human participants, recorded during an ASSR paradigm, using the FieldTrip toolbox. First, we demonstrate sensor-level analysis in the time domain, allowing for a description of the event-related potentials (ERPs), as well as their statistical evaluation. Second, frequency analysis is applied to describe the spectral characteristics of the ASSR, followed by group level statistical analysis in the frequency domain. Third, we show how time- and frequency-domain analysis approaches can be combined in order to describe the temporal and spectral development of the ASSR. Finally, we demonstrate source reconstruction techniques to characterize the primary neural generators of the ASSR. Throughout, we pay special attention to explaining the design of the analysis pipeline for single subjects and for the group level analysis. The pipeline presented here can be adjusted to accommodate other experimental paradigms and may serve as a template for similar analyses.
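    FieldTrip itself is a MATLAB toolbox, so the protocol cannot be reproduced here verbatim; as a language-neutral illustration of the spectral step it describes, the Python sketch below estimates ASSR strength as power at an assumed 40 Hz modulation frequency relative to neighbouring frequency bins. The sampling rate, recording length, and modulation frequency are assumptions.

    ```python
    # Minimal sketch (not the FieldTrip pipeline itself): estimate ASSR strength
    # as spectral power at an assumed 40 Hz modulation frequency relative to
    # neighbouring frequency bins, for one EEG channel stored as a NumPy array.
    import numpy as np
    from scipy.signal import welch

    def assr_snr(eeg, fs, mod_freq=40.0, n_neighbours=5):
        """Return power at mod_freq divided by the mean power of nearby bins."""
        freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))  # 0.25 Hz resolution
        target = np.argmin(np.abs(freqs - mod_freq))
        # Neighbouring bins on both sides, excluding the target bin itself
        idx = np.r_[target - n_neighbours:target, target + 1:target + 1 + n_neighbours]
        return psd[target] / psd[idx].mean()

    # Example with synthetic data: noise plus a weak 40 Hz response
    fs = 500
    t = np.arange(0, 60, 1 / fs)
    eeg = np.random.randn(t.size) + 0.2 * np.sin(2 * np.pi * 40 * t)
    print(f"ASSR signal-to-noise ratio: {assr_snr(eeg, fs):.2f}")
    ```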
  • Popov, V., Ostarek, M., & Tenison, C. (2018). Practices and pitfalls in inferring neural representations. NeuroImage, 174, 340-351. doi:10.1016/j.neuroimage.2018.03.041.

    Abstract

    A key challenge for cognitive neuroscience is deciphering the representational schemes of the brain. Stimulus-feature-based encoding models are becoming increasingly popular for inferring the dimensions of neural representational spaces from stimulus-feature spaces. We argue that such inferences are not always valid because successful prediction can occur even if the two representational spaces use different, but correlated, representational schemes. We support this claim with three simulations in which we achieved high prediction accuracy despite systematic differences in the geometries and dimensions of the underlying representations. Detailed analysis of the encoding models' predictions showed systematic deviations from ground-truth, indicating that high prediction accuracy is insufficient for making representational inferences. This fallacy applies to the prediction of actual neural patterns from stimulus-feature spaces and we urge caution in inferring the nature of the neural code from such methods. We discuss ways to overcome these inferential limitations, including model comparison, absolute model performance, visualization techniques and attentional modulation.
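    The inferential point of the abstract can be illustrated with a toy simulation (not the authors' own simulations): an encoding model fitted on one stimulus-feature space predicts simulated responses well even though those responses were generated from a different, merely linearly related feature space. All dimensions, noise levels, and variable names below are invented for the example.

    ```python
    # Toy illustration (not the authors' simulations): an encoding model trained on
    # stimulus features X can predict "neural" responses well even though the
    # responses were generated from a different, correlated feature space Z.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_stimuli, n_features, n_voxels = 200, 10, 50

    X = rng.standard_normal((n_stimuli, n_features))       # hypothesised feature space
    Z = X @ rng.standard_normal((n_features, n_features))  # different but related space
    Z += 0.3 * rng.standard_normal(Z.shape)

    # "Neural" data are a linear readout of Z plus noise, not of X
    W = rng.standard_normal((n_features, n_voxels))
    Y = Z @ W + 0.5 * rng.standard_normal((n_stimuli, n_voxels))

    # Yet an encoding model based on X still predicts Y well out of sample
    scores = cross_val_score(Ridge(alpha=1.0), X, Y, cv=5, scoring="r2")
    print(f"mean cross-validated R^2 using the 'wrong' feature space: {scores.mean():.2f}")
    ```

    Here Z is deliberately a simple linear remix of X, which is enough to make the point: high out-of-sample prediction accuracy by itself does not show that the measured responses use the hypothesised representational scheme.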
  • St Pourcain, B., Eaves, L. J., Ring, S. M., Fisher, S. E., Medland, S., Evans, D. M., & Smith, G. D. (2018). Developmental changes within the genetic architecture of social communication behaviour: A multivariate study of genetic variance in unrelated individuals. Biological Psychiatry, 83(7), 598-606. doi:10.1016/j.biopsych.2017.09.020.

    Abstract

    Background: Recent analyses of trait-disorder overlap suggest that psychiatric dimensions may relate to distinct sets of genes that exert their maximum influence during different periods of development. This includes analyses of social-communication difficulties that share, depending on their developmental stage, stronger genetic links with either Autism Spectrum Disorder or schizophrenia. Here we developed a multivariate analysis framework in unrelated individuals to model directly the developmental profile of genetic influences contributing to complex traits, such as social-communication difficulties, during a ~10-year period spanning childhood and adolescence. Methods: Longitudinally assessed quantitative social-communication problems (N ≤ 5,551) were studied in participants from a UK birth cohort (ALSPAC, 8 to 17 years). Using standardised measures, genetic architectures were investigated with novel multivariate genetic-relationship-matrix structural equation models (GSEM) incorporating whole-genome genotyping information. Analogous to twin research, GSEM included Cholesky decomposition, common pathway and independent pathway models. Results: A 2-factor Cholesky decomposition model described the data best. One genetic factor was common to SCDC measures across development, the other accounted for independent variation at 11 years and later, consistent with distinct developmental profiles in trait-disorder overlap. Importantly, genetic factors operating at 8 years explained only ~50% of the genetic variation at 17 years. Conclusion: Using latent factor models, we identified developmental changes in the genetic architecture of social-communication difficulties that enhance the understanding of ASD and schizophrenia-related dimensions. More generally, GSEM present a framework for modelling shared genetic aetiologies between phenotypes and can provide prior information with respect to patterns and continuity of trait-disorder overlap.
  • St Pourcain, B., Robinson, E. B., Anttila, V., Sullivan, B. B., Maller, J., Golding, J., Skuse, D., Ring, S., Evans, D. M., Zammit, S., Fisher, S. E., Neale, B. M., Anney, R., Ripke, S., Hollegaard, M. V., Werge, T., iPSYCH-SSI-Broad Autism Group, Ronald, A., Grove, J., Hougaard, D. M., Børglum, A. D., Mortensen, P. B., Daly, M., & Davey Smith, G. (2018). ASD and schizophrenia show distinct developmental profiles in common genetic overlap with population-based social-communication difficulties. Molecular Psychiatry, 23, 263-270. doi:10.1038/mp.2016.198.

    Abstract

    Difficulties in social communication are part of the phenotypic overlap between autism spectrum disorders (ASD) and schizophrenia. Both conditions follow, however, distinct developmental patterns. Symptoms of ASD typically occur during early childhood, whereas most symptoms characteristic of schizophrenia do not appear before early adulthood. We investigated whether overlap in common genetic influences between these clinical conditions and impairments in social communication depends on the developmental stage of the assessed trait. Social communication difficulties were measured in typically-developing youth (Avon Longitudinal Study of Parents and Children, N ⩽ 5553, longitudinal assessments at 8, 11, 14 and 17 years) using the Social Communication Disorder Checklist. Data on clinical ASD (PGC-ASD: 5305 cases, 5305 pseudo-controls; iPSYCH-ASD: 7783 cases, 11 359 controls) and schizophrenia (PGC-SCZ2: 34 241 cases, 45 604 controls, 1235 trios) were either obtained through the Psychiatric Genomics Consortium (PGC) or the Danish iPSYCH project. Overlap in genetic influences between ASD and social communication difficulties during development decreased with age, both in the PGC-ASD and the iPSYCH-ASD sample. Genetic overlap between schizophrenia and social communication difficulties, by contrast, persisted across age, as observed within two independent PGC-SCZ2 subsamples, and showed an increase in magnitude for traits assessed during later adolescence. ASD- and schizophrenia-related polygenic effects were unrelated to each other and changes in trait-disorder links reflect the heterogeneity of genetic factors influencing social communication difficulties during childhood versus later adolescence. Thus, both clinical ASD and schizophrenia share some genetic influences with impairments in social communication, but reveal distinct developmental profiles in their genetic links, consistent with the onset of clinical symptoms.

    Additional information

    mp2016198x1.docx
  • Pouw, W., Van Gog, T., Zwaan, R. A., Agostinho, S., & Paas, F. (2018). Co-thought gestures in children's mental problem solving: Prevalence and effects on subsequent performance. Applied Cognitive Psychology, 32(1), 66-80. doi:10.1002/acp.3380.

    Abstract

    Co-thought gestures are understudied compared to co-speech gestures, yet they may provide insight into cognitive functions of gestures that are independent of speech processes. A recent study with adults showed that co-thought gesticulation occurred spontaneously during mental preparation of problem solving. Moreover, co-thought gesturing (either spontaneous or instructed) during mental preparation was effective for subsequent solving of the Tower of Hanoi under conditions of high cognitive load (i.e., when visual working memory capacity was limited and when the task was more difficult). In this preregistered study (), we investigated whether co-thought gestures would also spontaneously occur and would aid problem-solving processes in children (N = 74; 8-12 years old) under high load conditions. Although children also spontaneously used co-thought gestures during mental problem solving, this did not aid their subsequent performance when physically solving the problem. If these null results are on track, co-thought gesture effects may be different in adults and children.

  • Praamstra, P., Stegeman, D. F., Cools, A. R., Meyer, A. S., & Horstink, M. W. I. M. (1998). Evidence for lateral premotor and parietal overactivity in Parkinson's disease during sequential and bimanual movements: A PET study. Brain, 121, 769-772. doi:10.1093/brain/121.4.769.
  • Proios, H., Asaridou, S. S., & Brugger, P. (2008). Random number generation in patients with aphasia: A test of executive functions. Acta Neuropsychologica, 6(2), 157-168.

    Abstract

    Randomization performance was studied using the "Mental Dice Task" in 20 patients with aphasia (APH) and 101 elderly normal control subjects (NC). The produced sequences were compared to 100 computer-generated pseudorandom sequences with respect to 7 measures of sequential bias. The performance of APH differed significantly from NC participants, according to all but one measure, i.e. Turning Point Index (points of change between ascending and descending sequences). NC participants differed significantly from the computer generated sequences, according to all measures of randomness. Finally, APH differed significantly from the computer simulator, according to all measures but mean Repetition Gap score (gap between a digit and its reoccurrence). Despite the heterogeneity of our APH group, there were no significant differences in randomization performance between patients with different language impairments. All the APH displayed a distinct performance profile, with more response stereotypy, counting tendencies, and inhibition problems, as hypothesised, while at the same time responding more randomly than NC by showing less of a cycling strategy and more number repetitions.
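    The two sequential-bias measures defined parenthetically in the abstract, the Turning Point Index and the mean Repetition Gap, can be computed from a produced digit sequence as in the sketch below; this follows the parenthetical definitions only and is not the scoring code used in the study.

    ```python
    # Minimal sketch of two sequential-bias measures named in the abstract,
    # implemented from their parenthetical definitions (not the study's exact code):
    # turning points = changes between ascending and descending runs,
    # repetition gap = distance between a digit and its next reoccurrence.
    def turning_points(seq):
        """Count positions where the sequence switches between rising and falling."""
        count = 0
        for prev, cur, nxt in zip(seq, seq[1:], seq[2:]):
            if (cur > prev and cur > nxt) or (cur < prev and cur < nxt):
                count += 1
        return count

    def mean_repetition_gap(seq):
        """Average number of responses between a digit and its next occurrence."""
        last_seen, gaps = {}, []
        for i, digit in enumerate(seq):
            if digit in last_seen:
                gaps.append(i - last_seen[digit])
            last_seen[digit] = i
        return sum(gaps) / len(gaps) if gaps else float("nan")

    produced = [3, 1, 4, 1, 5, 2, 6, 5, 3, 5]  # a hypothetical "mental dice" sequence
    print(turning_points(produced), mean_repetition_gap(produced))
    ```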
  • Quinn, S., Donnelly, S., & Kidd, E. (2018). The relationship between symbolic play and language acquisition: A meta-analytic review. Developmental Review, 49, 121-135. doi:10.1016/j.dr.2018.05.005.

    Abstract

    A developmental relationship between symbolic play and language has been long proposed, going as far back as the writings of Piaget and Vygotsky. In the current paper we build on recent qualitative reviews of the literature by reporting the first quantitative analysis of the relationship. We conducted a three-level meta-analysis of past studies that have investigated the relationship between symbolic play and language acquisition. Thirty-five studies (N = 6848) met the criteria for inclusion. Overall, we observed a significant small-to-medium association between the two domains (r = .35). Several moderating variables were included in the analyses, including: (i) study design (longitudinal, concurrent), (ii) the manner in which language was measured (comprehension, production), and (iii) the age at which this relationship is measured. The effect was weakly moderated by these three variables, but overall the association was robust, suggesting that symbolic play and language are closely related in development.

    Additional information

    Quinn_Donnelly_Kidd_2018sup.docx
  • Rapold, C. J., & Widlok, T. (2008). Dimensions of variability in Northern Khoekhoe language and culture. Southern African Humanities, 20, 133-161. Retrieved from http://www.sahumanities.org.za/RapoldWidlok_203.aspx.

    Abstract

    This article takes an interdisciplinary route towards explaining the complex history of Hai//om culture and language. We begin this article with a short review of ideas relating to 'origins' and historical reconstructions as they are currently played out among Khoekhoe groups in Namibia, in particular with regard to the Hai//om. We then take a comparative look at parts of the kinship system and the tonology of ≠Âkhoe Hai//om and other variants of Khoekhoe. With regard to the kinship and naming system, we see patterns that show similarities with Nama and Damara on the one hand but also with 'San' groups on the other hand. With regard to tonology, new data from three northern Khoekoe varieties shows similarities as well as differences with Standard Namibian Khoekhoe and Ju and Tuu varieties. The historical scenarios that might explain these facts suggest different centres of innovations and opposite directions of diffusion. The anthropological and linguistic data demonstrates that only a fine-grained and multi-layered approach that goes far beyond any simplistic dichotomies can do justice to the Hai//om riddle.
  • Ravignani, A. (2018). Darwin, sexual selection, and the origins of music. Trends in Ecology and Evolution, 33(10), 716-719. doi:10.1016/j.tree.2018.07.006.

    Abstract

    Humans devote ample time to produce and perceive music. How and why this behavioral propensity originated in our species is unknown. For centuries, speculation dominated the study of the evolutionary origins of musicality. Following Darwin’s early intuitions, recent empirical research is opening a new chapter to tackle this mystery.
  • Ravignani, A. (2018). Comment on “Temporal and spatial variation in harbor seal (Phoca vitulina L.) roar calls from southern Scandinavia” [J. Acoust. Soc. Am. 141, 1824-1834 (2017)]. The Journal of the Acoustical Society of America, 143, 504-508. doi:10.1121/1.5021770.

    Abstract

    In their recent article, Sabinsky and colleagues investigated heterogeneity in harbor seals' vocalizations. The authors found seasonal and geographical variation in acoustic parameters, warning readers that recording conditions might account for some of their results. This paper expands on the temporal aspect of the encountered heterogeneity in harbor seals' vocalizations. Temporal information is the least susceptible to variable recording conditions. Hence geographical and seasonal variability in roar timing constitutes the most robust finding in the target article. In pinnipeds, evidence of timing and rhythm in the millisecond range—as opposed to circadian and seasonal rhythms—has theoretical and interdisciplinary relevance. In fact, the study of rhythm and timing in harbor seals is particularly decisive to support or confute a cross-species hypothesis, causally linking the evolution of vocal production learning and rhythm. The results by Sabinsky and colleagues can shed light on current scientific questions beyond pinniped bioacoustics, and help formulate empirically testable predictions.
  • Ravignani, A., Chiandetti, C., & Gamba, M. (2018). L'evoluzione del ritmo. Le Scienze, (04 maggio 2018).
  • Ravignani, A., Thompson, B., Grossi, T., Delgado, T., & Kirby, S. (2018). Evolving building blocks of rhythm: How human cognition creates music via cultural transmission. Annals of the New York Academy of Sciences, 1423(1), 176-187. doi:10.1111/nyas.13610.

    Abstract

    Why does musical rhythm have the structure it does? Musical rhythm, in all its cross-cultural diversity, exhibits commonalities across world cultures. Traditionally, music research has been split into two fields. Some scientists focused on musicality, namely the human biocognitive predispositions for music, with an emphasis on cross-cultural similarities. Other scholars investigated music, seen as a cultural product, focusing on the variation in world musical cultures. Recent experiments found deep connections between music and musicality, reconciling these opposing views. Here, we address the question of how individual cognitive biases affect the process of cultural evolution of music. Data from two experiments are analyzed using two complementary techniques. In the experiments, participants hear drumming patterns and imitate them. These patterns are then given to the same or another participant to imitate. The structure of these initially random patterns is tracked along experimental “generations.” Frequentist statistics show how participants’ biases are amplified by cultural transmission, making drumming patterns more structured. Structure is achieved faster in transmission within rather than between participants. A Bayesian model approximates the motif structures participants learned and created. Our data and models suggest that individual biases for musicality may shape the cultural transmission of musical rhythm.

    Additional information

    nyas13610-sup-0001-suppmat.pdf
  • Ravignani, A., Thompson, B., & Filippi, P. (2018). The evolution of musicality: What can be learned from language evolution research? Frontiers in Neuroscience, 12: 20. doi:10.3389/fnins.2018.00020.

    Abstract

    Language and music share many commonalities, both as natural phenomena and as subjects of intellectual inquiry. Rather than exhaustively reviewing these connections, we focus on potential cross-pollination of methodological inquiries and attitudes. We highlight areas in which scholarship on the evolution of language may inform the evolution of music. We focus on the value of coupled empirical and formal methodologies, and on the futility of mysterianism, the declining view that the nature, origins and evolution of language cannot be addressed empirically. We identify key areas in which the evolution of language as a discipline has flourished historically, and suggest ways in which these advances can be integrated into the study of the evolution of music.
  • Ravignani, A. (2018). Spontaneous rhythms in a harbor seal pup calls. BMC Research Notes, 11: 3. doi:10.1186/s13104-017-3107-6.

    Abstract

    Objectives: Timing and rhythm (i.e. temporal structure) are crucial, though historically neglected, dimensions of animal communication. When investigating these in non-human animals, it is often difficult to balance experimental control and ecological validity. Here I present the first step of an attempt to balance the two, focusing on the timing of vocal rhythms in a harbor seal pup (Phoca vitulina). Collection of this data had a clear aim: To find spontaneous vocal rhythms in this individual in order to design individually-adapted and ecologically-relevant stimuli for a later playback experiment. Data description: The calls of one seal pup were recorded. The audio recordings were annotated using Praat, a free software to analyze vocalizations in humans and other animals. The annotated onsets and offsets of vocalizations were then imported in a Python script. The script extracted three types of timing information: the duration of calls, the intervals between calls’ onsets, and the intervals between calls’ maximum-intensity peaks. Based on the annotated data, available to download, I provide simple descriptive statistics for these temporal measures, and compare their distributions.
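    The abstract describes a Python script that derives three temporal measures from Praat annotations. The sketch below illustrates those measures under the assumption that call onsets, offsets, and intensity-peak times have already been exported as arrays of seconds; it is not the author's actual script, and the example values are invented.

    ```python
    # Minimal sketch of the three timing measures described in the abstract,
    # assuming Praat onsets, offsets and intensity-peak times have already been
    # exported as lists of seconds (this is not the author's actual script).
    import numpy as np

    onsets  = np.array([0.00, 1.35, 2.80, 4.60])   # hypothetical call onsets (s)
    offsets = np.array([0.42, 1.90, 3.25, 5.10])   # hypothetical call offsets (s)
    peaks   = np.array([0.20, 1.60, 3.00, 4.85])   # hypothetical intensity peaks (s)

    durations       = offsets - onsets      # duration of each call
    onset_intervals = np.diff(onsets)       # intervals between successive onsets
    peak_intervals  = np.diff(peaks)        # intervals between successive peaks

    for name, values in [("durations", durations),
                         ("inter-onset intervals", onset_intervals),
                         ("inter-peak intervals", peak_intervals)]:
        print(f"{name}: mean = {values.mean():.2f} s, SD = {values.std(ddof=1):.2f} s")
    ```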
  • Ravignani, A., & Verhoef, T. (2018). Which melodic universals emerge from repeated signaling games?: A Note on Lumaca and Baggio (2017). Artificial Life, 24(2), 149-153. doi:10.1162/ARTL_a_00259.

    Abstract

    Music is a peculiar human behavior, yet we still know little as to why and how music emerged. For centuries, the study of music has been the sole prerogative of the humanities. Lately, however, music is being increasingly investigated by psychologists, neuroscientists, biologists, and computer scientists. One approach to studying the origins of music is to empirically test hypotheses about the mechanisms behind this structured behavior. Recent lab experiments show how musical rhythm and melody can emerge via the process of cultural transmission. In particular, Lumaca and Baggio (2017) tested the emergence of a sound system at the boundary between music and language. In this study, participants were given random pairs of signal-meanings; when participants negotiated their meaning and played a “game of telephone” with them, these pairs became more structured and systematic. Over time, the small biases introduced in each artificial transmission step accumulated, displaying quantitative trends, including the emergence, over the course of artificial human generations, of features resembling properties of language and music. In this Note, we highlight the importance of Lumaca and Baggio’s experiment, place it in the broader literature on the evolution of language and music, and suggest refinements for future experiments. We conclude that, while psychological evidence for the emergence of proto-musical features is accumulating, complementary work is needed: Mathematical modeling and computer simulations should be used to test the internal consistency of experimentally generated hypotheses and to make new predictions.
  • Ravignani, A., Thompson, B., Lumaca, M., & Grube, M. (2018). Why do durations in musical rhythms conform to small integer ratios? Frontiers in Computational Neuroscience, 12: 86. doi:10.3389/fncom.2018.00086.

    Abstract

    One curious aspect of human timing is the organization of rhythmic patterns in small integer ratios. Behavioral and neural research has shown that adjacent time intervals in rhythms tend to be perceived and reproduced as approximate fractions of small numbers (e.g., 3/2). Recent work on iterated learning and reproduction further supports this: given a randomly timed drum pattern to reproduce, participants subconsciously transform it toward small integer ratios. The mechanisms accounting for this “attractor” phenomenon are little understood, but might be explained by combining two theoretical frameworks from psychophysics. The scalar expectancy theory describes time interval perception and reproduction in terms of Weber's law: just detectable durational differences equal a constant fraction of the reference duration. The notion of categorical perception emphasizes the tendency to perceive time intervals in categories, i.e., “short” vs. “long.” In this piece, we put forward the hypothesis that the integer-ratio bias in rhythm perception and production might arise from the interaction of the scalar property of timing with the categorical perception of time intervals, and that neurally it can plausibly be related to oscillatory activity. We support our integrative approach with mathematical derivations to formalize assumptions and provide testable predictions. We present equations to calculate durational ratios by: (i) parameterizing the relationship between durational categories, (ii) assuming a scalar timing constant, and (iii) specifying one (of K) category of ratios. Our derivations provide the basis for future computational, behavioral, and neurophysiological work to test our model.
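    The quantity at issue in the abstract, the ratio formed by adjacent time intervals in a rhythm, can be illustrated as follows. The sketch does not reproduce the paper's derivations; it only computes normalised adjacent-interval ratios for an invented drum pattern and reports the nearest small-integer category from an assumed category set.

    ```python
    # Illustrative sketch only (not the paper's derivations): compute the ratio of
    # each inter-onset interval to its successor in a drum pattern and report the
    # nearest small-integer ratio category. Pattern and category set are invented.
    import numpy as np

    onsets = np.array([0.00, 0.48, 1.02, 1.27, 2.01])   # hypothetical drum onsets (s)
    intervals = np.diff(onsets)

    # Small-integer ratio categories expressed as interval_k / (interval_k + interval_k+1)
    categories = {"1:2": 1/3, "2:3": 2/5, "1:1": 1/2, "3:2": 3/5, "2:1": 2/3}

    for d1, d2 in zip(intervals, intervals[1:]):
        r = d1 / (d1 + d2)                               # normalised adjacent-interval ratio
        nearest = min(categories, key=lambda k: abs(categories[k] - r))
        print(f"ratio {r:.2f} -> nearest small-integer category {nearest}")
    ```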
  • Raviv, L., & Arnon, I. (2018). Systematicity, but not compositionality: Examining the emergence of linguistic structure in children and adults using iterated learning. Cognition, 181, 160-173. doi:10.1016/j.cognition.2018.08.011.

    Abstract

    Recent work suggests that cultural transmission can lead to the emergence of linguistic structure as speakers’ weak individual biases become amplified through iterated learning. However, to date no published study has demonstrated a similar emergence of linguistic structure in children. The lack of evidence from child learners constitutes a problematic gap in the literature: if such learning biases impact the emergence of linguistic structure, they should also be found in children, who are the primary learners in real-life language transmission. However, children may differ from adults in their biases given age-related differences in general cognitive skills. Moreover, adults’ performance on iterated learning tasks may reflect existing (and explicit) linguistic biases, partially undermining the generality of the results. Examining children’s performance can also help evaluate contrasting predictions about their role in emerging languages: do children play a larger or smaller role than adults in the creation of structure? Here, we report a series of four iterated artificial language learning studies (based on Kirby, Cornish & Smith, 2008) with both children and adults, using a novel child-friendly paradigm. Our results show that linguistic structure does not emerge more readily in children compared to adults, and that adults are overall better in both language learning and in creating linguistic structure. When languages could become underspecified (by allowing homonyms), children and adults were similar in developing consistent mappings between meanings and signals in the form of structured ambiguities. However, when homonymy was not allowed, only adults created compositional structure. This study is a first step in using iterated language learning paradigms to explore child-adult differences. It provides the first demonstration that cultural transmission has a different effect on the languages produced by children and adults: While children were able to develop systematicity, their languages did not show compositionality. We focus on the relation between learning and structure creation as a possible explanation for our findings and discuss implications for children’s role in the emergence of linguistic structure.

    Additional information

    results A results B results D stimuli
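    As noted above, the bias-amplification mechanism invoked in the abstract (weak individual learning biases becoming exaggerated through repeated transmission) can be illustrated with a toy Bayesian transmission chain. This is a generic sketch in the spirit of iterated-learning models, not the child-friendly paradigm used in the study; the prior, sample size, and production probabilities are arbitrary illustrative values.

```python
# Toy transmission chain (illustrative; not the paradigm used in the study).
# Each learner hears a small sample from the previous speaker, picks the
# maximum-a-posteriori hypothesis ("mostly A" vs. "mostly B" language) under
# a weak prior favoring A, and then produces from that hypothesis.
import random

PRIOR_A = 0.55      # weak individual bias toward "mostly A" (assumption)
P_PRODUCE = 0.7     # a "mostly A" speaker produces variant A 70% of the time
SAMPLE_SIZE = 4     # utterances each learner hears (assumption)
GENERATIONS = 10
CHAINS = 1000

def likelihood(count_a, p_a):
    """Unnormalised binomial likelihood of seeing count_a A-variants."""
    return (p_a ** count_a) * ((1.0 - p_a) ** (SAMPLE_SIZE - count_a))

def transmit(speaker_is_mostly_a):
    """One generation: sample data from the speaker, return the learner's MAP hypothesis."""
    p = P_PRODUCE if speaker_is_mostly_a else 1.0 - P_PRODUCE
    count_a = sum(random.random() < p for _ in range(SAMPLE_SIZE))
    post_a = PRIOR_A * likelihood(count_a, P_PRODUCE)
    post_b = (1.0 - PRIOR_A) * likelihood(count_a, 1.0 - P_PRODUCE)
    return post_a >= post_b

ended_mostly_a = 0
for _ in range(CHAINS):
    state = random.random() < 0.5          # first speaker is unbiased
    for _ in range(GENERATIONS):
        state = transmit(state)
    ended_mostly_a += state
print(f"chains ending 'mostly A': {ended_mostly_a / CHAINS:.0%} "
      f"(individual prior was only {PRIOR_A:.0%})")
```

    With these settings roughly four out of five chains end up "mostly A" after ten generations, well above the 55% bias of any single learner, illustrating how transmission can amplify a weak individual bias.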
  • Raviv, L., & Arnon, I. (2018). The developmental trajectory of children’s auditory and visual statistical learning abilities: Modality-based differences in the effect of age. Developmental Science, 21(4): e12593. doi:10.1111/desc.12593.

    Abstract

    Infants, children and adults are capable of extracting recurring patterns from their environment through statistical learning (SL), an implicit learning mechanism that is considered to have an important role in language acquisition. Research over the past 20 years has shown that SL is present from very early infancy and is found in a variety of tasks and across modalities (e.g., auditory, visual), raising questions about the domain generality of SL. However, while SL is well established for infants and adults, little is known about its developmental trajectory during childhood, leaving two important questions unanswered: (1) Is SL an early-maturing capacity that is fully developed in infancy, or does it improve with age like other cognitive capacities (e.g., memory)? and (2) Will SL have similar developmental trajectories across modalities? Only a few studies have looked at SL across development, with conflicting results: some find age-related improvements while others do not. Importantly, no study to date has examined auditory SL across childhood, nor compared it to visual SL to see if there are modality-based differences in the developmental trajectory of SL abilities. We addressed these issues by conducting a large-scale study of children's performance on matching auditory and visual SL tasks across a wide age range (5–12y). Results show modality-based differences in the development of SL abilities: while children's learning in the visual domain improved with age, learning in the auditory domain did not change in the tested age range. We examine these findings in light of previous studies and discuss their implications for modality-based differences in SL and for the role of auditory SL in language acquisition. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=3kg35hoF0pw.

    Additional information

    Video abstract of the article
  • Razafindrazaka, H., & Brucato, N. (2008). Esclavage et diaspora Africaine [Slavery and the African diaspora]. In É. Crubézy, J. Braga, & G. Larrouy (Eds.), Anthropobiologie: Évolution humaine (pp. 326-328). Issy-les-Moulineaux: Elsevier Masson.
  • Razafindrazaka, H., Brucato, N., & Mazières, S. (2008). Les Noirs marrons [The Maroons]. In É. Crubézy, J. Braga, & G. Larrouy (Eds.), Anthropobiologie: Évolution humaine (pp. 319-320). Issy-les-Moulineaux: Elsevier Masson.
  • Redl, T., Eerland, A., & Sanders, T. J. M. (2018). The processing of the Dutch masculine generic zijn ‘his’ across stereotype contexts: An eye-tracking study. PLoS One, 13(10): e0205903. doi:10.1371/journal.pone.0205903.

    Abstract

    Language users often infer a person’s gender when it is not explicitly mentioned. This information is included in the mental model of the described situation, giving rise to expectations regarding the continuation of the discourse. Such gender inferences can be based on two types of information: gender stereotypes (e.g., nurses are female) and masculine generics, which are grammatically masculine word forms that are used to refer to all genders in certain contexts (e.g., To each his own). In this eye-tracking experiment (N = 82), which is the first to systematically investigate the online processing of masculine generic pronouns, we tested whether the frequently used Dutch masculine generic zijn ‘his’ leads to a male bias. In addition, we tested the effect of context by introducing male, female, and neutral stereotypes. We found no evidence for the hypothesis that the generically-intended masculine pronoun zijn ‘his’ results in a male bias. However, we found an effect of stereotype context. After introducing a female stereotype, reading about a man led to an increase in processing time. However, the reverse did not hold, which parallels the finding in social psychology that men are penalized more for gender-nonconforming behavior. This suggests that language processing is not only affected by the strength of stereotype contexts; the associated disapproval of violating these gender stereotypes affects language processing, too.

    Additional information

    pone.0205903.s001.pdf data files
  • Rey, A., & Schiller, N. O. (2005). Graphemic complexity and multiple print-to-sound associations in visual word recognition. Memory & Cognition, 33(1), 76-85.

    Abstract

    It has recently been reported that words containing a multiletter grapheme are processed more slowly than words composed of single-letter graphemes (Rastle & Coltheart, 1998; Rey, Jacobs, Schmidt-Weigand, & Ziegler, 1998). In the present study, using a perceptual identification task, we found in Experiment 1 that this graphemic complexity effect can be observed while controlling for multiple print-to-sound associations, indexed by regularity or consistency. In Experiment 2, we obtained cumulative effects of graphemic complexity and regularity. These effects were replicated in Experiment 3 in a naming task. Overall, these results indicate that the effects of graphemic complexity and of multiple print-to-sound associations are independent and should be accounted for in different ways by models of written word processing.
  • Rietbergen, M., Roelofs, A., Den Ouden, H., & Cools, R. (2018). Disentangling cognitive from motor control: Influence of response modality on updating, inhibiting, and shifting. Acta Psychologica, 191, 124-130. doi:10.1016/j.actpsy.2018.09.008.

    Abstract

    It is unclear whether cognitive and motor control are parallel and interactive or serial and independent processes. According to one view, cognitive control refers to a set of modality-nonspecific processes that act on supramodal representations and precede response modality-specific motor processes. An alternative view is that cognitive control represents a set of modality-specific operations that act directly on motor-related representations, implying dependence of cognitive control on motor control. Here, we examined the influence of response modality (vocal vs. manual) on three well-established subcomponent processes of cognitive control: shifting, inhibiting, and updating. We observed effects of all subcomponent processes in reaction times. The magnitude of these effects did not differ between response modalities for shifting and inhibiting, in line with a serial, supramodal view. However, the magnitude of the updating effect differed between modalities, in line with an interactive, modality-specific view. These results suggest that updating represents a modality-specific operation that depends on motor control, whereas shifting and inhibiting represent supramodal operations that act independently of motor control.
  • Roberts, L., Gullberg, M., & Indefrey, P. (2008). Online pronoun resolution in L2 discourse: L1 influence and general learner effects. Studies in Second Language Acquisition, 30(3), 333-357. doi:10.1017/S0272263108080480.

    Abstract

    This study investigates whether advanced second language (L2) learners of a nonnull subject language (Dutch) are influenced by their null subject first language (L1) (Turkish) in their offline and online resolution of subject pronouns in L2 discourse. To tease apart potential L1 effects from possible general L2 processing effects, we also tested a group of German L2 learners of Dutch who were predicted to perform like the native Dutch speakers. The two L2 groups differed in their offline interpretations of subject pronouns. The Turkish L2 learners exhibited an L1 influence, because approximately half the time they interpreted Dutch subject pronouns as they would overt pronouns in Turkish, whereas the German L2 learners performed like the Dutch controls, interpreting pronouns as coreferential with the current discourse topic. This L1 effect was not in evidence in eye-tracking data, however. Instead, the L2 learners patterned together, showing an online processing disadvantage when two potential antecedents for the pronoun were grammatically available in the discourse. This processing disadvantage was in evidence irrespective of the properties of the learners' L1 or their final interpretation of the pronoun. Therefore, the results of this study indicate both an effect of the L1 on the L2 in offline resolution and a general L2 processing effect in online subject pronoun resolution.
