Publications

  • Meulenbroek, O., Petersson, K. M., Voermans, N., Weber, B., & Fernández, G. (2004). Age differences in neural correlates of route encoding and route recognition. Neuroimage, 22, 1503-1514. doi:10.1016/j.neuroimage.2004.04.007.

    Abstract

    Spatial memory deficits are core features of aging-related changes in cognitive abilities. The neural correlates of these deficits are largely unknown. In the present study, we investigated the neural underpinnings of age-related differences in spatial memory by functional MRI using a navigational memory task with route encoding and route recognition conditions. We investigated 20 healthy young (18-29 years old) and 20 healthy old adults (53-78 years old) in a random effects analysis. Old subjects showed slightly poorer performance than young subjects. Compared to the control condition, route encoding and route recognition showed activation of the dorsal and ventral visual processing streams and the frontal eye fields in both groups of subjects. Compared to old adults, young subjects showed stronger activations during route encoding in the dorsal and ventral visual processing streams (supramarginal gyrus and posterior fusiform/parahippocampal areas). In addition, young subjects showed weaker anterior parahippocampal activity during route recognition compared to the old group. In contrast, old compared to young subjects showed less suppressed activity in the left perisylvian region and the anterior cingulate cortex during route encoding. Our findings suggest that age-related navigational memory deficits might be caused by less effective route encoding based on reduced posterior fusiform/parahippocampal and parietal functionality, combined with diminished inhibition of perisylvian and anterior cingulate cortices correlated with less effective suppression of task-irrelevant information. In contrast, age differences in neural correlates of route recognition seem to be rather subtle. Old subjects might show a diminished familiarity signal during route recognition in the anterior parahippocampal region.
  • Meyer, A. S., Van der Meulen, F. F., & Brooks, A. (2004). Eye movements during speech planning: Talking about present and remembered objects. Visual Cognition, 11, 553-576. doi:10.1080/13506280344000248.

    Abstract

    Earlier work has shown that speakers naming several objects usually look at each of them before naming them (e.g., Meyer, Sleiderink, & Levelt, 1998). In the present study, participants saw pictures and described them in utterances such as "The chair next to the cross is brown", where the colour of the first object was mentioned after another object had been mentioned. In Experiment 1, we examined whether the speakers would look at the first object (the chair) only once, before naming the object, or twice (before naming the object and before naming its colour). In Experiment 2, we examined whether speakers about to name the colour of the object would look at the object region again when the colour or the entire object had been removed while they were looking elsewhere. We found that speakers usually looked at the target object again before naming its colour, even when the colour was not displayed any more. Speakers were much less likely to fixate upon the target region when the object had been removed from view. We propose that the object contours may serve as a memory cue supporting the retrieval of the associated colour information. The results show that a speaker's eye movements in a picture description task, far from being random, depend on the available visual information and the content and structure of the planned utterance.
  • Meyer, A. S. (1992). Investigation of phonological encoding through speech error analyses: Achievements, limitations, and alternatives. Cognition, 42, 181-211. doi:10.1016/0010-0277(92)90043-H.

    Abstract

    Phonological encoding in language production can be defined as a set of processes generating utterance forms on the basis of semantic and syntactic information. Most evidence about these processes stems from analyses of sound errors. In section 1 of this paper, certain important results of these analyses are reviewed. Two prominent models of phonological encoding, which are mainly based on speech error evidence, are discussed in section 2. In section 3, limitations of speech error analyses are discussed, and it is argued that detailed and comprehensive models of phonological encoding cannot be derived solely on the basis of error analyses. As is argued in section 4, a new research strategy is required. Instead of using the properties of errors to draw inferences about the generation of correct word forms, future research should directly investigate the normal process of phonological encoding.
  • Meyer, A. S., & Gerakaki, S. (2017). The art of conversation: Why it’s harder than you might think. Contact Magazine, 43(2), 11-15. Retrieved from http://contact.teslontario.org/the-art-of-conversation-why-its-harder-than-you-might-think/.
  • Meyer, A. S. (2017). Structural priming is not a Royal Road to representations. Commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e305. doi:10.1017/S0140525X1700053X.

    Abstract

    Branigan & Pickering (B&P) propose that the structural priming paradigm is a Royal Road to linguistic representations of any kind, unobstructed by influences of psychological processes. In my view, however, they are too optimistic about the versatility of the paradigm and, more importantly, its ability to provide direct evidence about the nature of stored linguistic representations.
  • Meyer, A. S., & Bock, K. (1992). The tip-of-the-tongue phenomenon: Blocking or partial activation? Memory & Cognition, 20, 715-726.

    Abstract

    Tip-of-the-tongue states may represent the momentary unavailability of an otherwise accessible word or the weak activation of an otherwise inaccessible word. In three experiments designed to address these alternative views, subjects attempted to retrieve rare target words from their definitions. The definitions were followed by cues that were related to the targets in sound, by cues that were related in meaning, and by cues that were not related to the targets. Experiment 1 found that compared with unrelated cues, related cue words that were presented immediately after target definitions helped rather than hindered lexical retrieval, and that sound cues were more effective retrieval aids than meaning cues. Experiment 2 replicated these results when cues were presented after an initial target-retrieval attempt. These findings reverse a previous result (Jones, 1989), which was reproduced in Experiment 3 and shown to stem from a small group of unusually difficult target definitions.
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high or low frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Miedema, J., & Reesink, G. (2004). One head, many faces: New perspectives on the Bird's Head Peninsula of New Guinea. Leiden: KITLV Press.

    Abstract

    Wider knowledge of New Guinea's Bird's Head Peninsula, home to an indigenous population of 114,000 people who share more than twenty languages, was recently gained through an extensive interdisciplinary research project involving anthropologists, archaeologists, botanists, demographers, geologists, linguists, and specialists in public administration. Analyzing the findings of the project, this book provides a systematic comparison with earlier studies, addressing the geological past, the latest archaeological evidence of early human habitation (dating back at least 26,000 years), and the region's diversity of languages and cultures. The peninsula is an important transitional area between Southeast Asia and Oceania, and this book provides valuable new insights for specialists in both the social and natural sciences into processes of state formation and globalization in the Asia-Pacific zone.
  • Moers, C., Meyer, A. S., & Janse, E. (2017). Effects of word frequency and transitional probability on word reading durations of younger and older speakers. Language and Speech, 60(2), 289-317. doi:10.1177/0023830916649215.

    Abstract

    High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned by its right or left neighbouring word. This corpus study investigates whether three different age groups (younger children, 8–12 years; adolescents, 12–18 years; and older Dutch speakers, 62–95 years) show frequency and TP context effects on spoken word durations in reading aloud, and whether age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
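
    As background to the TP measure described in this abstract, forward and backward transitional probabilities are standardly estimated from corpus bigram counts. The short Python sketch below is an illustrative reconstruction under that assumption, not the study's actual corpus pipeline; the function name and the toy corpus are hypothetical.

      from collections import Counter

      def transitional_probabilities(tokens):
          """Estimate forward and backward transitional probabilities from word tokens."""
          unigrams = Counter(tokens)
          bigrams = Counter(zip(tokens, tokens[1:]))
          forward = {}    # P(w2 | w1): a word conditioned on its left neighbour
          backward = {}   # P(w1 | w2): a word conditioned on its right neighbour
          for (w1, w2), n in bigrams.items():
              forward[(w1, w2)] = n / unigrams[w1]
              backward[(w1, w2)] = n / unigrams[w2]
          return forward, backward

      # Toy usage on a made-up six-word corpus:
      fwd, bwd = transitional_probabilities("the cat sat on the mat".split())
      print(fwd[("the", "cat")], bwd[("the", "cat")])   # 0.5 1.0
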
  • Moisik, S. R., & Dediu, D. (2017). Anatomical biasing and clicks: Evidence from biomechanical modeling. Journal of Language Evolution, 2(1), 37-51. doi:10.1093/jole/lzx004.

    Abstract

    It has been observed by several researchers that the Khoisan palate tends to lack a prominent alveolar ridge. A biomechanical model of click production was created to examine if these sounds might be subject to an anatomical bias associated with alveolar ridge size. Results suggest the bias is plausible, taking the form of decreased articulatory effort and improved volume change characteristics; however, further modeling and experimental research is required to solidify the claim.

    Additional information

    lzx004_Supp.zip
  • Moisik, S. R., & Gick, B. (2017). The quantal larynx: The stable regions of laryngeal biomechanics and implications for speech production. Journal of Speech, Language, and Hearing Research, 60, 540-560. doi:10.1044/2016_JSLHR-S-16-0019.

    Abstract

    Purpose: Recent proposals suggest that (a) the high dimensionality of speech motor control may be reduced via modular neuromuscular organization that takes advantage of intrinsic biomechanical regions of stability and (b) computational modeling provides a means to study whether and how such modularization works. In this study, the focus is on the larynx, a structure that is fundamental to speech production because of its role in phonation and numerous articulatory functions. Method: A 3-dimensional model of the larynx was created using the ArtiSynth platform (http://www.artisynth.org). This model was used to simulate laryngeal articulatory states, including inspiration, glottal fricative, modal prephonation, plain glottal stop, vocal–ventricular stop, and aryepiglotto–epiglottal stop and fricative. Results: Speech-relevant laryngeal biomechanics is rich with “quantal” or highly stable regions within muscle activation space. Conclusions: Quantal laryngeal biomechanics complement a modular view of speech control and have implications for the articulatory–biomechanical grounding of numerous phonetic and phonological phenomena.
  • Monaghan, P. (2017). Canalization of language structure from environmental constraints: A computational model of word learning from multiple cues. Topics in Cognitive Science, 9(1), 21-34. doi:10.1111/tops.12239.

    Abstract

    There is substantial variation in language experience, yet there is surprising similarity in the language structure acquired. Constraints on language structure may be external modulators that result in this canalization of language structure, or else they may derive from the broader, communicative environment in which language is acquired. In this paper, the latter perspective is tested for its adequacy in explaining robustness of language learning to environmental variation. A computational model of word learning from cross‐situational, multimodal information was constructed and tested. Key to the model's robustness was the presence of multiple, individually unreliable information sources to support learning. This “degeneracy” in the language system has a detrimental effect on learning, compared to a noise‐free environment, but has a critically important effect on acquisition of a canalized system that is resistant to environmental noise in communication.
  • Monaghan, P., & Rowland, C. F. (2017). Combining language corpora with experimental and computational approaches for language acquisition research. Language Learning, 67(S1), 14-39. doi:10.1111/lang.12221.

    Abstract

    Historically, first language acquisition research was a painstaking process of observation, requiring the laborious hand coding of children's linguistic productions, followed by the generation of abstract theoretical proposals for how the developmental process unfolds. Recently, the ability to collect large-scale corpora of children's language exposure has revolutionized the field. New techniques enable more precise measurements of children's actual language input, and these corpora constrain computational and cognitive theories of language development, which can then generate predictions about learning behavior. We describe several instances where corpus, computational, and experimental work have been productively combined to uncover the first language acquisition process and the richness of multimodal properties of the environment, highlighting how these methods can be extended to address related issues in second language research. Finally, we outline some of the difficulties that can be encountered when applying multimethod approaches and show how these difficulties can be obviated.
  • Monaghan, P., Chang, Y.-N., Welbourne, S., & Brysbaert, M. (2017). Exploring the relations between word frequency, language exposure, and bilingualism in a computational model of reading. Journal of Memory and Language, 93, 1-27. doi:10.1016/j.jml.2016.08.003.

    Abstract

    Individuals show differences in the extent to which psycholinguistic variables predict their responses for lexical processing tasks. A key variable accounting for much variance in lexical processing is frequency, but the size of the frequency effect has been demonstrated to reduce as a consequence of the individual’s vocabulary size. Using a connectionist computational implementation of the triangle model on a large set of English words, where orthographic, phonological, and semantic representations interact during processing, we show that the model demonstrates a reduced frequency effect as a consequence of amount of exposure to the language, a variable that was also a cause of greater vocabulary size in the model. The model was also trained to learn a second language, Dutch, and replicated behavioural observations that increased proficiency in a second language resulted in reduced frequency effects for that language but increased frequency effects in the first language. The model provides a first step to demonstrating causal relations between psycholinguistic variables in a model of individual differences in lexical processing, and the effect of bilingualism on interacting variables within the language processing system.
  • Mongelli, V., Dehaene, S., Vinckier, F., Peretz, I., Bartolomeo, P., & Cohen, L. (2017). Music and words in the visual cortex: The impact of musical expertise. Cortex, 86, 260-274. doi:10.1016/j.cortex.2016.05.016.

    Abstract

    How does the human visual system accommodate expertise for two simultaneously acquired symbolic systems? We used fMRI to compare activations induced in the visual cortex by musical notation, written words and other classes of objects, in professional musicians and in musically naïve controls. First, irrespective of expertise, selective activations for music were posterior and lateral to activations for words in the left occipitotemporal cortex. This indicates that symbols characterized by different visual features engage distinct cortical areas. Second, musical expertise increased the volume of activations for music and led to an anterolateral displacement of word-related activations. In musicians, there was also a dramatic increase of the brain-scale networks connected to the music-selective visual areas. Those findings reveal that acquiring a double visual expertise involves an expansion of category-selective areas, the development of novel long-distance functional connectivity, and possibly some competition between categories for the colonization of cortical space.
  • Montero-Melis, G., & Bylund, E. (2017). Getting the ball rolling: the cross-linguistic conceptualization of caused motion. Language and Cognition, 9(3), 446–472. doi:10.1017/langcog.2016.22.

    Abstract

    Does the way we talk about events correspond to how we conceptualize them? Three experiments (N = 135) examined how Spanish and Swedish native speakers judge event similarity in the domain of caused motion (‘He rolled the tyre into the barn’). Spanish and Swedish motion descriptions regularly encode path (‘into’), but differ in how systematically they include manner information (‘roll’). We designed a similarity arrangement task which allowed participants to give varying weights to different dimensions when gauging event similarity. The three experiments progressively reduced the likelihood that speakers were using language to solve the task. We found that, as long as the use of language was possible (Experiments 1 and 2), Swedish speakers were more likely than Spanish speakers to base their similarity arrangements on object manner (rolling/sliding). However, when recruitment of language was hindered through verbal interference, cross-linguistic differences disappeared (Experiment 3). A compound analysis of all experiments further showed that (i) cross-linguistic differences were played out against a backdrop of commonly represented event components, and (ii) describing vs. not describing the events did not augment cross-linguistic differences, but instead had similar effects across languages. We interpret these findings as suggesting a dynamic role of language in event conceptualization.
  • Montero-Melis, G., Eisenbeiss, S., Narasimhan, B., Ibarretxe-Antuñano, I., Kita, S., Kopecka, A., Lüpke, F., Nikitina, T., Tragel, I., Jaeger, T. F., & Bohnemeyer, J. (2017). Satellite- vs. Verb-Framing Underpredicts Nonverbal Motion Categorization: Insights from a Large Language Sample and Simulations. Cognitive Semantics, 3(1), 36-61. doi:10.1163/23526416-00301002.

    Abstract

    Is motion cognition influenced by the large-scale typological patterns proposed in Talmy’s (2000) two-way distinction between verb-framed (V) and satellite-framed (S) languages? Previous studies investigating this question have been limited to comparing two or three languages at a time and have come to conflicting results. We present the largest cross-linguistic study on this question to date, drawing on data from nineteen genealogically diverse languages, all investigated in the same behavioral paradigm and using the same stimuli. After controlling for the different dependencies in the data by means of multilevel regression models, we find no evidence that S- vs. V-framing affects nonverbal categorization of motion events. At the same time, statistical simulations suggest that our study and previous work within the same behavioral paradigm suffer from insufficient statistical power. We discuss these findings in the light of the great variability between participants, which suggests flexibility in motion representation. Furthermore, we discuss the importance of accounting for language variability, something which can only be achieved with large cross-linguistic samples.
  • Moscoso del Prado Martín, F., Kostic, A., & Baayen, R. H. (2004). Putting the bits together: An information theoretical perspective on morphological processing. Cognition, 94(1), 1-18. doi:10.1016/j.cognition.2003.10.015.

    Abstract

    In this study we introduce an information-theoretical formulation of the emergence of type- and token-based effects in morphological processing. We describe a probabilistic measure of the informational complexity of a word, its information residual, which encompasses the combined influences of the amount of information contained by the target word and the amount of information carried by its nested morphological paradigms. By means of re-analyses of previously published data on Dutch words we show that the information residual outperforms the combination of traditional token- and type-based counts in predicting response latencies in visual lexical decision, and at the same time provides a parsimonious account of inflectional, derivational, and compounding processes.
  • Moscoso del Prado Martín, F., Ernestus, M., & Baayen, R. H. (2004). Do type and token effects reflect different mechanisms? Connectionist modeling of Dutch past-tense formation and final devoicing. Brain and Language, 90(1-3), 287-298. doi:10.1016/j.bandl.2003.12.002.

    Abstract

    In this paper, we show that both token and type-based effects in lexical processing can result from a single, token-based, system, and therefore, do not necessarily reflect different levels of processing. We report three Simple Recurrent Networks modeling Dutch past-tense formation. These networks show token-based frequency effects and type-based analogical effects closely matching the behavior of human participants when producing past-tense forms for both existing verbs and pseudo-verbs. The third network covers the full vocabulary of Dutch, without imposing predefined linguistic structure on the input or output words.
  • Moscoso del Prado Martín, F., Bertram, R., Haikio, T., Schreuder, R., & Baayen, R. H. (2004). Morphological family size in a morphologically rich language: The case of Finnish compared to Dutch and Hebrew. Journal of Experimental Psychology: Learning, Memory and Cognition, 30(6), 1271-1278. doi:10.1037/0278-7393.30.6.1271.

    Abstract

    Finnish has a very productive morphology in which a stem can give rise to several thousand words. This study presents a visual lexical decision experiment addressing the processing consequences of the huge productivity of Finnish morphology. The authors observed that in Finnish words with larger morphological families elicited shorter response latencies. However, in contrast to Dutch and Hebrew, it is not the complete morphological family of a complex Finnish word that codetermines response latencies but only the subset of words directly derived from the complex word itself. Comparisons with parallel experiments using translation equivalents in Dutch and Hebrew showed substantial cross-language predictivity of family size between Finnish and Dutch but not between Finnish and Hebrew, reflecting the different ways in which the Hebrew and Finnish morphological systems contribute to the semantic organization of concepts in the mental lexicon.
  • Murakami, S., Verdonschot, R. G., Kreiborg, S., Kakimoto, N., & Kawaguchi, A. (2017). Stereoscopy in dental education: An investigation. Journal of Dental Education, 81(4), 450-457. doi:10.21815/JDE.016.002.

    Abstract

    The aim of this study was to investigate whether stereoscopy can play a meaningful role in dental education. The study used an anaglyph technique in which two images were presented separately to the left and right eyes (using red/cyan filters), which, combined in the brain, give enhanced depth perception. A positional judgment task was performed to assess whether the use of stereoscopy would enhance depth perception among dental students at Osaka University in Japan. Subsequently, the optimum angle was evaluated to obtain maximum ability to discriminate among complex anatomical structures. Finally, students completed a questionnaire on a range of matters concerning their experience with stereoscopic images including their views on using stereoscopy in their future careers. The results showed that the students who used stereoscopy were better able than students who did not to appreciate spatial relationships between structures when judging relative positions. The maximum ability to discriminate among complex anatomical structures was between 2 and 6 degrees. The students' overall experience with the technique was positive, and although most did not have a clear vision for stereoscopy in their own practice, they did recognize its merits for education. These results suggest that using stereoscopic images in dental education can be quite valuable as stereoscopy greatly helped these students' understanding of the spatial relationships in complex anatomical structures.
  • Narasimhan, B., Sproat, R., & Kiraz, G. (2004). Schwa-deletion in Hindi text-to-speech synthesis. International Journal of Speech Technology, 7(4), 319-333. doi:10.1023/B:IJST.0000037075.71599.62.

    Abstract

    We describe the phenomenon of schwa-deletion in Hindi and how it is handled in the pronunciation component of a multilingual concatenative text-to-speech system. Each of the consonants in written Hindi is associated with an “inherent” schwa vowel which is not represented in the orthography. For instance, the Hindi word pronounced as [namak] (’salt’) is represented in the orthography using the consonantal characters for [n], [m], and [k]. Two main factors complicate the issue of schwa pronunciation in Hindi. First, not every schwa following a consonant is pronounced within the word. Second, in multimorphemic words, the presence of a morpheme boundary can block schwa deletion where it might otherwise occur. We propose a model for schwa-deletion which combines a general purpose schwa-deletion rule proposed in the linguistics literature (Ohala, 1983), with additional morphological analysis necessitated by the high frequency of compounds in our database. The system is implemented in the framework of finite-state transducer technology.
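
    To make the rule concrete, here is a toy Python sketch of the general-purpose VC_CV schwa-deletion rule attributed to Ohala (1983) in the abstract, applied right to left after word-final schwa deletion. This is an illustrative reconstruction, not the finite-state transducer described in the paper, and it ignores the morphological blocking the paper discusses; the symbol set and the example transcriptions are assumptions.

      VOWELS = {"a", "i", "u", "e", "o", "@"}   # "@" marks the inherent schwa

      def delete_schwas(phones):
          """Word-final schwa deletion, then schwa -> zero in the context VC _ CV."""
          phones = list(phones)
          if phones and phones[-1] == "@":              # word-final schwa deletion
              phones.pop()
          for i in range(len(phones) - 3, 1, -1):       # scan right to left
              if (phones[i] == "@"
                      and phones[i - 2] in VOWELS and phones[i - 1] not in VOWELS
                      and phones[i + 1] not in VOWELS and phones[i + 2] in VOWELS):
                  del phones[i]
          return "".join(phones)

      # The abstract's example: n-m-k with inherent schwas stays [namak], because
      # after final-schwa deletion no vowel follows the last consonant.
      print(delete_schwas("n@m@k@"))   # -> "n@m@k"
      # A schwa flanked by VC and CV does delete, e.g. k@r@na ('to do') -> k@rna.
      print(delete_schwas("k@r@na"))   # -> "k@rna"
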
  • Negwer, M., & Schubert, D. (2017). Talking convergence: Growing evidence links FOXP2 and retinoic acid in shaping speech-related motor circuitry. Frontiers in Neuroscience, 11: 19. doi:10.3389/fnins.2017.00019.

    Abstract

    A commentary on "FOXP2 drives neuronal differentiation by interacting with retinoic acid signaling pathways" by Devanna, P., Middelbeek, J., and Vernes, S. C. (2014). Front. Cell. Neurosci. 8: 305. doi:10.3389/fncel.2014.00305.
  • Newbury, D. F., Cleak, J. D., Banfield, E., Marlow, A. J., Fisher, S. E., Monaco, A. P., Stott, C. M., Merricks, M. J., Goodyer, I. M., Slonims, V., Baird, G., Bolton, P., Everitt, A., Hennessy, E., Main, M., Helms, P., Kindley, A. D., Hodson, A., Watson, J., O’Hare, A., Cohen, W., Cowie, H., Steel, J., MacLean, A., Seckl, J., Bishop, D. V. M., Simkin, Z., Conti-Ramsden, G., & Pickles, A. (2004). Highly significant linkage to the SLI1 locus in an expanded sample of individuals affected by specific language impairment. American Journal of Human Genetics, 74(6), 1225-1238. doi:10.1086/421529.

    Abstract

    Specific language impairment (SLI) is defined as an unexplained failure to acquire normal language skills despite adequate intelligence and opportunity. We have reported elsewhere a full-genome scan in 98 nuclear families affected by this disorder, with the use of three quantitative traits of language ability (the expressive and receptive tests of the Clinical Evaluation of Language Fundamentals and a test of nonsense word repetition). This screen implicated two quantitative trait loci, one on chromosome 16q (SLI1) and a second on chromosome 19q (SLI2). However, a second independent genome screen performed by another group, with the use of parametric linkage analyses in extended pedigrees, found little evidence for the involvement of either of these regions in SLI. To investigate these loci further, we have collected a second sample, consisting of 86 families (367 individuals, 174 independent sib pairs), all with probands whose language skills are ⩾1.5 SD below the mean for their age. Haseman-Elston linkage analysis resulted in a maximum LOD score (MLS) of 2.84 on chromosome 16 and an MLS of 2.31 on chromosome 19, both of which represent significant linkage at the 2% level. Amalgamation of the wave 2 sample with the cohort used for the genome screen generated a total of 184 families (840 individuals, 393 independent sib pairs). Analysis of linkage within this pooled group strengthened the evidence for linkage at SLI1 and yielded a highly significant LOD score (MLS = 7.46, interval empirical P<.0004). Furthermore, linkage at the same locus was also demonstrated to three reading-related measures (basic reading [MLS = 1.49], spelling [MLS = 2.67], and reading comprehension [MLS = 1.99] subtests of the Wechsler Objectives Reading Dimensions).
  • Niccolai, V., Klepp, A., Indefrey, P., Schnitzler, A., & Biermann-Ruben, K. (2017). Semantic discrimination impacts tDCS modulation of verb processing. Scientific Reports, 7: 17162. doi:10.1038/s41598-017-17326-w.

    Abstract

    Motor cortex activation observed during body-related verb processing hints at simulation accompanying linguistic understanding. By exploiting the up- and down-regulation that anodal and cathodal transcranial direct current stimulation (tDCS) exert on motor cortical excitability, we aimed at further characterizing the functional contribution of the motor system to linguistic processing. In a double-blind sham-controlled within-subjects design, online stimulation was applied to the left hemispheric hand-related motor cortex of 20 healthy subjects. A dual, double-dissociation task required participants to semantically discriminate concrete (hand/foot) from abstract verb primes as well as to respond with the hand or with the foot to verb-unrelated geometric targets. Analyses were conducted with linear mixed models. Semantic priming was confirmed by faster and more accurate reactions when the response effector was congruent with the verb’s body part. Cathodal stimulation induced faster responses for hand verb primes thus indicating a somatotopical distribution of cortical activation as induced by body-related verbs. Importantly, this effect depended on performance in semantic discrimination. The current results point to verb processing being selectively modifiable by neuromodulation and at the same time to a dependence of tDCS effects on enhanced simulation. We discuss putative mechanisms operating in this reciprocal dependence of neuromodulation and motor resonance.

    Additional information

    41598_2017_17326_MOESM1_ESM.pdf
  • Nieuwland, M. S., & Martin, A. E. (2017). Neural oscillations and a nascent corticohippocampal theory of reference. Journal of Cognitive Neuroscience, 29(5), 896-910. doi:10.1162/jocn_a_01091.

    Abstract

    The ability to use words to refer to the world is vital to the communicative power of human language. In particular, the anaphoric use of words to refer to previously mentioned concepts (antecedents) allows dialogue to be coherent and meaningful. Psycholinguistic theory posits that anaphor comprehension involves reactivating a memory representation of the antecedent. Whereas this implies the involvement of recognition memory, or the mnemonic sub-routines by which people distinguish old from new, the neural processes for reference resolution are largely unknown. Here, we report time-frequency analysis of four EEG experiments to reveal the increased coupling of functional neural systems associated with referentially coherent expressions compared to referentially problematic expressions. Despite varying in modality, language, and type of referential expression, all experiments showed larger gamma-band power for referentially coherent expressions compared to referentially problematic expressions. Beamformer analysis in high-density Experiment 4 localised the gamma-band increase to posterior parietal cortex around 400-600 ms after anaphor-onset and to frontal-temporal cortex around 500-1000 ms. We argue that the observed gamma-band power increases reflect successful referential binding and resolution, which links incoming information to antecedents through an interaction between the brain’s recognition memory networks and frontal-temporal language network. We integrate these findings with previous results from patient and neuroimaging studies, and we outline a nascent cortico-hippocampal theory of reference.
  • Nivard, M. G., Gage, S. H., Hottenga, J. J., van Beijsterveldt, C. E. M., Abdellaoui, A., Bartels, M., Baselmans, B. M. L., Ligthart, L., St Pourcain, B., Boomsma, D. I., Munafò, M. R., & Middeldorp, C. M. (2017). Genetic overlap between schizophrenia and developmental psychopathology: Longitudinal and multivariate polygenic risk prediction of common psychiatric traits during development. Schizophrenia Bulletin, 43(6), 1197-1207. doi:10.1093/schbul/sbx031.

    Abstract

    Background: Several nonpsychotic psychiatric disorders in childhood and adolescence can precede the onset of schizophrenia, but the etiology of this relationship remains unclear. We investigated to what extent the association between schizophrenia and psychiatric disorders in childhood is explained by correlated genetic risk factors. Methods: Polygenic risk scores (PRS), reflecting an individual’s genetic risk for schizophrenia, were constructed for 2588 children from the Netherlands Twin Register (NTR) and 6127 from the Avon Longitudinal Study of Parents and Children (ALSPAC). The associations between schizophrenia PRS and measures of anxiety, depression, attention deficit hyperactivity disorder (ADHD), and oppositional defiant disorder/conduct disorder (ODD/CD) were estimated at age 7, 10, 12/13, and 15 years in the 2 cohorts. Results were then meta-analyzed, and a meta-regression analysis was performed to test differences in effect sizes over age and disorders. Results: Schizophrenia PRS were associated with childhood and adolescent psychopathology. Meta-regression analysis showed differences in the associations over disorders, with the strongest association with childhood and adolescent depression and a weaker association for ODD/CD at age 7. The associations increased with age and this increase was steepest for ADHD and ODD/CD. Genetic correlations varied between 0.10 and 0.25. Conclusion: By optimally using longitudinal data across diagnoses in a multivariate meta-analysis this study sheds light on the development of childhood disorders into severe adult psychiatric disorders. The results are consistent with a common genetic etiology of schizophrenia and developmental psychopathology as well as with a stronger shared genetic etiology between schizophrenia and adolescent onset psychopathology.
  • Nivard, M. G., Lubke, G. H., Dolan, C. V., Evans, D. M., St Pourcain, B., Munafo, M. R., & Middeldorp, C. M. (2017). Joint developmental trajectories of internalizing and externalizing disorders between childhood and adolescence. Development and Psychopathology, 29(3), 919-928. doi:10.1017/S0954579416000572.

    Abstract

    This study sought to identify trajectories of DSM-IV based internalizing (INT) and externalizing (EXT) problem scores across childhood and adolescence and to provide insight into the comorbidity by modeling the co-occurrence of INT and EXT trajectories. INT and EXT were measured repeatedly between age 7 and age 15 years in over 7,000 children and analyzed using growth mixture models. Five trajectories were identified for both INT and EXT, including very low, low, decreasing, and increasing trajectories. In addition, an adolescent onset trajectory was identified for INT and a stable high trajectory was identified for EXT. Multinomial regression showed that similar EXT and INT trajectories were associated. However, the adolescent onset INT trajectory was independent of high EXT trajectories, and persisting EXT was mainly associated with decreasing INT. Sex and early life environmental risk factors predicted EXT and, to a lesser extent, INT trajectories. The association between trajectories indicates the need to consider comorbidity when a child presents with INT or EXT disorders, particularly when symptoms start early. This is less necessary when INT symptoms start at adolescence. Future studies should investigate the etiology of co-occurring INT and EXT and the specific treatment needs of these severely affected children.
  • Noordman, L. G. M., & Vonk, W. (1998). Memory-based processing in understanding causal information. Discourse Processes, 191-212. doi:10.1080/01638539809545044.

    Abstract

    The reading process depends both on the text and on the reader. When we read a text, propositions in the current input are matched to propositions in the memory representation of the previous discourse but also to knowledge structures in long‐term memory. Therefore, memory‐based text processing refers both to the bottom‐up processing of the text and to the top‐down activation of the reader's knowledge. In this article, we focus on the role of cognitive structures in the reader's knowledge. We argue that causality is an important category in structuring human knowledge and that this property has consequences for text processing. Some research is discussed that illustrates that the more the information in the text reflects causal categories, the more easily the information is processed.
  • O'Brien, D. P., & Bowerman, M. (1998). Martin D. S. Braine (1926–1996): Obituary. American Psychologist, 53, 563. doi:10.1037/0003-066X.53.5.563.

    Abstract

    Memorializes Martin D. S. Braine, whose research on child language acquisition and on both child and adult thinking and reasoning had a major influence on modern cognitive psychology. Addressing meaning as well as position, Braine argued that children start acquiring language by learning narrow-scope positional formulas that map components of meaning to positions in the utterance. These proposals were critical in starting discussions of the possible universality of the pivot-grammar stage and of the role of syntax, semantics, and pragmatics in children's early grammar and were pivotal to the rise of approaches in which cognitive development in language acquisition is stressed.
  • Ocklenburg, S., Schmitz, J., Moinfar, Z., Moser, D., Klose, R., Lor, S., Kunz, G., Tegenthoff, M., Faustmann, P., Francks, C., Epplen, J. T., Kumsta, R., & Güntürkün, O. (2017). Epigenetic regulation of lateralized fetal spinal gene expression underlies hemispheric asymmetries. eLife, 6: e22784. doi:10.7554/eLife.22784.001.

    Abstract

    Lateralization is a fundamental principle of nervous system organization but its molecular determinants are mostly unknown. In humans, asymmetric gene expression in the fetal cortex has been suggested as the molecular basis of handedness. However, human fetuses already show considerable asymmetries in arm movements before the motor cortex is functionally linked to the spinal cord, making it more likely that spinal gene expression asymmetries form the molecular basis of handedness. We analyzed genome-wide mRNA expression and DNA methylation in cervical and anterior thoracal spinal cord segments of five human fetuses and show development-dependent gene expression asymmetries. These gene expression asymmetries were epigenetically regulated by miRNA expression asymmetries in the TGF-β signaling pathway and lateralized methylation of CpG islands. Our findings suggest that molecular mechanisms for epigenetic regulation within the spinal cord constitute the starting point for handedness, implying a fundamental shift in our understanding of the ontogenesis of hemispheric asymmetries in humans.
  • Ogdie, M. N., Fisher, S. E., Yang, M., Ishii, J., Francks, C., Loo, S. K., Cantor, R. M., McCracken, J. T., McGough, J. J., Smalley, S. L., & Nelson, S. F. (2004). Attention Deficit Hyperactivity Disorder: Fine mapping supports linkage to 5p13, 6q12, 16p13, and 17p11. American Journal of Human Genetics, 75(4), 661-668. doi:10.1086/424387.

    Abstract

    We completed fine mapping of nine positional candidate regions for attention-deficit/hyperactivity disorder (ADHD) in an extended population sample of 308 affected sibling pairs (ASPs), constituting the largest linkage sample of families with ADHD published to date. The candidate chromosomal regions were selected from all three published genomewide scans for ADHD, and fine mapping was done to comprehensively validate these positional candidate regions in our sample. Multipoint maximum LOD score (MLS) analysis yielded significant evidence of linkage on 6q12 (MLS 3.30; empiric P=.024) and 17p11 (MLS 3.63; empiric P=.015), as well as suggestive evidence on 5p13 (MLS 2.55; empiric P=.091). In conjunction with the previously reported significant linkage on the basis of fine mapping 16p13 in the same sample as this report, the analyses presented here indicate that four chromosomal regions—5p13, 6q12, 16p13, and 17p11—are likely to harbor susceptibility genes for ADHD. The refinement of linkage within each of these regions lays the foundation for subsequent investigations using association methods to detect risk genes of moderate effect size.
  • Ortega, G. (2017). Iconicity and sign lexical acquisition: A review. Frontiers in Psychology, 8: 1280. doi:10.3389/fpsyg.2017.01280.

    Abstract

    The study of iconicity, defined as the direct relationship between a linguistic form and its referent, has gained momentum in recent years across a wide range of disciplines. In the spoken modality, there is abundant evidence showing that iconicity is a key factor that facilitates language acquisition. However, when we look at sign languages, which excel in the prevalence of iconic structures, there is a more mixed picture, with some studies showing a positive effect and others showing a null or negative effect. In an attempt to reconcile the existing evidence, the present review presents a critical overview of the literature on the acquisition of a sign language as first (L1) and second (L2) language and points at some factors that may be the source of disagreement. Regarding sign L1 acquisition, the contradicting findings may relate to iconicity being defined in a very broad sense when a more fine-grained operationalisation might reveal an effect in sign learning. Regarding sign L2 acquisition, evidence shows that there is a clear dissociation in the effect of iconicity in that it facilitates conceptual-semantic aspects of sign learning but hinders the acquisition of the exact phonological form of signs. It will be argued that when we consider the gradient nature of iconicity and that signs consist of a phonological form attached to a meaning we can discern how iconicity impacts sign learning in positive and negative ways.
  • Ortega, G., Sumer, B., & Ozyurek, A. (2017). Type of iconicity matters in the vocabulary development of signing children. Developmental Psychology, 53(1), 89-99. doi:10.1037/dev0000161.

    Abstract

    Recent research on signed as well as spoken language shows that the iconic features of the target language might play a role in language development. Here, we ask further whether different types of iconic depictions modulate children’s preferences for certain types of sign-referent links during vocabulary development in sign language. Results from a picture description task indicate that lexical signs with 2 possible variants are used in different proportions by deaf signers from different age groups. While preschool and school-age children favored variants representing actions associated with their referent (e.g., a writing hand for the sign PEN), adults preferred variants representing the perceptual features of those objects (e.g., upward index finger representing a thin, elongated object for the sign PEN). Deaf parents interacting with their children, however, used action- and perceptual-based variants in equal proportion and favored action variants more than adults signing to other adults. We propose that when children are confronted with 2 variants for the same concept, they initially prefer action-based variants because they give them the opportunity to link a linguistic label to familiar schemas linked to their action/motor experiences. Our results echo findings showing a bias for action-based depictions in the development of iconic co-speech gestures suggesting a modality bias for such representations during development.
  • Ostarek, M., & Huettig, F. (2017). Spoken words can make the invisible visible – Testing the involvement of low-level visual representations in spoken word processing. Journal of Experimental Psychology: Human Perception and Performance, 43, 499-508. doi:10.1037/xhp0000313.

    Abstract

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" -> picture of a bottle) vs. incongruent ("bottle" -> picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400ms after word onset and decays at 600ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, i.e. what we see. More generally our findings fit best with the notion that spoken words activate modality-specific visual representations that are low-level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before.
  • Ostarek, M., & Huettig, F. (2017). A task-dependent causal role for low-level visual processes in spoken word comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(8), 1215-1224. doi:10.1037/xlm0000375.

    Abstract

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete vs. abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation.

    Additional information

    XLM-2016-2822_supp.docx
  • Ostarek, M., & Vigliocco, G. (2017). Reading sky and seeing a cloud: On the relevance of events for perceptual simulation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(4), 579-590. doi:10.1037/xlm0000318.

    Abstract

    Previous research has shown that processing words with an up/down association (e.g., bird, foot) can influence the subsequent identification of visual targets in congruent location (at the top/bottom of the screen). However, as facilitation and interference were found under similar conditions, the nature of the underlying mechanisms remained unclear. We propose that word comprehension relies on the perceptual simulation of a prototypical event involving the entity denoted by a word in order to provide a general account of the different findings. In three experiments, participants had to discriminate between two target pictures appearing at the top or the bottom of the screen by pressing the left vs. right button. Immediately before the targets appeared, they saw an up/down word belonging to the target’s event, an up/down word unrelated to the target, or a spatially neutral control word. Prime words belonging to the target’s event facilitated identification of targets at 250 ms SOA (Experiment 1), but only when presented in the vertical location where they are typically seen, indicating that targets were integrated in the simulations activated by the prime words. Moreover, at the same SOA, there was a robust facilitation effect for targets appearing in their typical location regardless of the prime type. However, when words were presented for 100 ms (Experiment 2) or 800 ms (Experiment 3), only a location non-specific priming effect was found, suggesting that the visual system was not activated. Implications for theories of semantic processing are discussed.
  • Ozker, M., Schepers, I., Magnotti, J., Yoshor, D., & Beauchamp, M. (2017). A double dissociation between anterior and posterior superior temporal gyrus for processing audiovisual speech demonstrated by electrocorticography. Journal of Cognitive Neuroscience, 29(6), 1044-1060. doi:10.1162/jocn_a_01110.

    Abstract

    Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.
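
    For reference, the variability prediction mentioned in this abstract follows from the standard maximum-likelihood formulation of cue combination (a textbook result, not spelled out in the paper): if independent auditory and visual estimates of the same quantity have variances $\sigma_A^2$ and $\sigma_V^2$, the optimal combined estimate is their precision-weighted average, with variance

    $\sigma_{AV}^2 = \frac{\sigma_A^2 \, \sigma_V^2}{\sigma_A^2 + \sigma_V^2} \le \min(\sigma_A^2, \sigma_V^2),$

    so combining the two sources is predicted to reduce variability relative to either source alone, which is the pattern the authors test in posterior STG.
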
  • Pederson, E., Danziger, E., Wilkins, D. G., Levinson, S. C., Kita, S., & Senft, G. (1998). Semantic typology and spatial conceptualization. Language, 74(3), 557-589. doi:10.2307/417793.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2017). Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues. Neuropsychologia, 95, 21-29. doi:10.1016/j.neuropsychologia.2016.12.004.

    Abstract

    In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects.
  • Perdue, C., & Klein, W. (1992). Why does the production of some learners not grammaticalize? Studies in Second Language Acquisition, 14, 259-272. doi:10.1017/S0272263100011116.

    Abstract

    In this paper we follow two beginning learners of English, Andrea and Santo, over a period of 2 years as they develop means to structure the declarative utterances they produce in various production tasks, and then we look at the following problem: In the early stages of acquisition, both learners develop a common learner variety; during these stages, we see a picture of two learner varieties developing similar regularities determined by the minimal requirements of the tasks we examine. Andrea subsequently develops further morphosyntactic means to achieve greater cohesion in his discourse. But Santo does not. Although we can identify contexts where the grammaticalization of Andrea's production allows him to go beyond the initial constraints of his variety, it is much more difficult to ascertain why Santo, faced with the same constraints in the same contexts, does not follow this path. Some lines of investigation into this problem are then suggested.
  • Perlman, M. (2017). Debunking two myths against vocal origins of language: Language is iconic and multimodal to the core. Interaction Studies, 18(3), 376-401. doi:10.1075/is.18.3.05per.

    Abstract

    Gesture-first theories of language origins often raise two unsubstantiated arguments against vocal origins. First, they argue that great ape vocal behavior is highly constrained, limited to a fixed, species-typical repertoire of reflexive calls. Second, they argue that vocalizations lack any significant potential to ground meaning through iconicity, or resemblance between form and meaning. This paper reviews the considerable evidence that debunks these two “myths”. Accumulating evidence shows that the great apes exercise voluntary control over their vocal behavior, including their breathing, larynx, and supralaryngeal articulators. They are also able to learn new vocal behaviors, and even show some rudimentary ability for vocal imitation. In addition, an abundance of research demonstrates that the vocal modality affords rich potential for iconicity. People can understand iconicity in sound symbolism, and they can produce iconic vocalizations to communicate a diverse range of meanings. Thus, two of the primary arguments against vocal origins theories are not tenable. As an alternative, the paper concludes that the origins of language – going as far back as our last common ancestor with great apes – are rooted in iconicity in both gesture and vocalization.

  • Perlman, M., & Salmi, R. (2017). Gorillas may use their laryngeal air sacs for whinny-type vocalizations and male display. Journal of Language Evolution, 2(2), 126-140. doi:10.1093/jole/lzx012.

    Abstract

    Great apes and siamangs—but not humans—possess laryngeal air sacs, suggesting that they were lost over hominin evolution. The absence of air sacs in humans may hold clues to speech evolution, but little is known about their functions in extant apes. We investigated whether gorillas use their air sacs to produce the staccato ‘growling’ of the silverback chest beat display. This hypothesis was formulated after viewing a nature documentary showing a display by a silverback western gorilla (Kingo). As Kingo growls, the video shows distinctive vibrations in his chest and throat under which the air sacs extend. We also investigated whether other similarly staccato vocalizations—the whinny, sex whinny, and copulation grunt—might also involve the air sacs. To examine these hypotheses, we collected an opportunistic sample of video and audio evidence from research records and another documentary of Kingo’s group, and from videos of other gorillas found on YouTube. Analysis shows that the four vocalizations are each emitted in rapid pulses of a similar frequency (8–16 pulses per second), and limited visual evidence indicates that they may all occur with upper torso vibrations. Future research should determine how consistently the vibrations co-occur with the vocalizations, whether they are synchronized, and their precise location and timing. Our findings fit with the hypothesis that apes—especially, but not exclusively males—use their air sacs for vocalizations and displays related to size exaggeration for sex and territory. Thus changes in social structure, mating, and sexual dimorphism might have led to the obsolescence of the air sacs and their loss in hominin evolution.
  • Petersson, K. M. (1998). Comments on a Monte Carlo approach to the analysis of functional neuroimaging data. NeuroImage, 8, 108-112.
  • Petersson, K. M., Forkstam, C., & Ingvar, M. (2004). Artificial syntactic violations activate Broca’s region. Cognitive Science, 28(3), 383-407. doi:10.1207/s15516709cog2803_4.

    Abstract

    In the present study, using event-related functional magnetic resonance imaging, we investigated a group of participants on a grammaticality classification task after they had been exposed to well-formed consonant strings generated from an artificial regular grammar. We used an implicit acquisition paradigm in which the participants were exposed to positive examples. The objective of this study was to investigate whether brain regions related to language processing overlap with the brain regions activated by the grammaticality classification task used in the present study. Recent meta-analyses of functional neuroimaging studies indicate that syntactic processing is related to the left inferior frontal gyrus (Brodmann's areas 44 and 45) or Broca's region. In the present study, we observed that artificial grammaticality violations activated Broca's region in all participants. This observation lends some support to the suggestions that artificial grammar learning represents a model for investigating aspects of language learning in infants.
  • Petersson, K. M. (2004). The human brain, language, and implicit learning. Impuls, Tidsskrift for psykologi (Norwegian Journal of Psychology), 58(3), 62-72.
  • Petrovic, P., Petersson, K. M., Hansson, P., & Ingvar, M. (2004). Brainstem involvement in the initial response to pain. NeuroImage, 22, 995-1005. doi:10.1016/j.neuroimage.2004.01.046.

    Abstract

    The autonomic responses to acute pain exposure usually habituate rapidly while the subjective ratings of pain remain high for more extended periods of time. Thus, systems involved in the autonomic response to painful stimulation, for example the hypothalamus and the brainstem, would be expected to attenuate the response to pain during prolonged stimulation. This suggestion is in line with the hypothesis that the brainstem is specifically involved in the initial response to pain. To probe this hypothesis, we performed a positron emission tomography (PET) study where we scanned subjects during the first and second minute of a prolonged tonic painful cold stimulation (cold pressor test) and nonpainful cold stimulation. Galvanic skin response (GSR) was recorded during the PET scanning as an index of autonomic sympathetic response. In the main effect of pain, we observed increased activity in the thalamus bilaterally, in the contralateral insula and in the contralateral anterior cingulate cortex but no significant increases in activity in the primary or secondary somatosensory cortex. The autonomic response (GSR) decreased with stimulus duration. Concomitant with the autonomic response, increased activity was observed in brainstem and hypothalamus areas during the initial vs. the late stimulation. This effect was significantly stronger for the painful than for the cold stimulation. Activity in the brainstem showed pain-specific covariation with areas involved in pain processing, indicating an interaction between the brainstem and cortical pain networks. The findings indicate that areas in the brainstem are involved in the initial response to noxious stimulation, which is also characterized by an increased sympathetic response.
  • Petrovic, P., Carlsson, K., Petersson, K. M., Hansson, P., & Ingvar, M. (2004). Context-dependent deactivation of the amygdala during pain. Journal of Cognitive Neuroscience, 16, 1289-1301.

    Abstract

    The amygdala has been implicated in fundamental functions for the survival of the organism, such as fear and pain. In accord with this, several studies have shown increased amygdala activity during fear conditioning and the processing of fear-relevant material in human subjects. In contrast, functional neuroimaging studies of pain have shown a decreased amygdala activity. It has previously been proposed that the observed deactivations of the amygdala in these studies indicate a cognitive strategy to adapt to a distressful but, in the experimental setting, unavoidable painful event. In this positron emission tomography study, we show that a simple contextual manipulation, immediately preceding a painful stimulation, that increases the anticipated duration of the painful event leads to a decrease in amygdala activity and modulates the autonomic response during the noxious stimulation. On a behavioral level, 7 of the 10 subjects reported that they used coping strategies more intensely in this context. We suggest that the altered activity in the amygdala may be part of a mechanism to attenuate pain-related stress responses in a context that is perceived as being more aversive. The study also showed an increased activity in the rostral part of anterior cingulate cortex in the same context in which the amygdala activity decreased, further supporting the idea that this part of the cingulate cortex is involved in the modulation of emotional and pain networks.
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1998). Comparing different models of the development of the English verb category. Linguistics, 36(4), 807-830. doi:10.1515/ling.1998.36.4.807.

    Abstract

    In this study data from the first six months of 12 children's multiword speech were used to test the validity of Valian's (1991) syntactic performance-limitation account and Tomasello's (1992) verb-island account of early multiword speech with particular reference to the development of the English verb category. The results provide evidence for appropriate use of verb morphology, auxiliary verb structures, pronoun case marking, and SVO word order from quite early in development. However, they also demonstrate a great deal of lexical specificity in the children's use of these systems, evidenced by a lack of overlap in the verbs to which different morphological markers were applied, a lack of overlap in the verbs with which different auxiliary verbs were used, a disproportionate use of the first person singular nominative pronoun I, and a lack of overlap in the lexical items that served as the subjects and direct objects of transitive verbs. These findings raise problems for both a syntactic performance-limitation account and a strong verb-island account of the data and suggest the need to develop a more general lexicalist account of early multiword speech that explains why some words come to function as "islands" of organization in the child's grammar and others do not.
  • Poletiek, F. H. (1998). De geest van de jury [The mind of the jury]. Psychologie en Maatschappij, 4, 376-378.
  • Poort, E. D., & Rodd, J. M. (2017). The cognate facilitation effect in bilingual lexical decision is influenced by stimulus list composition. Acta Psychologica, 180, 52-63. doi:10.1016/j.actpsy.2017.08.008.

    Abstract

    Cognates share their form and meaning across languages: “winter” in English means the same as “winter” in Dutch. Research has shown that bilinguals process cognates more quickly than words that exist in one language only (e.g. “ant” in English). This finding is taken as strong evidence for the claim that bilinguals have one integrated lexicon and that lexical access is language non-selective. Two English lexical decision experiments with Dutch–English bilinguals investigated whether the cognate facilitation effect is influenced by stimulus list composition. In Experiment 1, the ‘standard’ version, which included only cognates, English control words and regular non-words, showed significant cognate facilitation (31 ms). In contrast, the ‘mixed’ version, which also included interlingual homographs, pseudohomophones (instead of regular non-words) and Dutch-only words, showed a significantly different profile: a non-significant disadvantage for the cognates (8 ms). Experiment 2 examined the specific impact of these three additional stimuli types and found that only the inclusion of Dutch words significantly reduced the cognate facilitation effect. Additional exploratory analyses revealed that, when the preceding trial was a Dutch word, cognates were recognised up to 50 ms more slowly than English controls. We suggest that when participants must respond ‘no’ to non-target language words, competition arises between the ‘yes’- and ‘no’-responses associated with the two interpretations of a cognate, which (partially) cancels out the facilitation that is a result of the cognate's shared form and meaning. We conclude that the cognate facilitation effect is a real effect that originates in the lexicon, but that cognates can be subject to competition effects outside the lexicon.

    Additional information

    supplementary materials
  • Pouw, W., van Gog, T., Zwaan, R. A., & Paas, F. (2017). Are gesture and speech mismatches produced by an integrated gesture-speech system? A more dynamically embodied perspective is needed for understanding gesture-related learning. Behavioral and Brain Sciences, 40: e68. doi:10.1017/S0140525X15003039.

    Abstract

    We observe a tension in the target article as it stresses an integrated gesture-speech system that can nevertheless consist of contradictory representational states, which are reflected by mismatches in gesture and speech or sign. Beyond problems of coherence, this prevents furthering our understanding of gesture-related learning. As a possible antidote, we invite a more dynamically embodied perspective to the stage.
  • Praamstra, P., Stegeman, D. F., Cools, A. R., Meyer, A. S., & Horstink, M. W. I. M. (1998). Evidence for lateral premotor and parietal overactivity in Parkinson's disease during sequential and bimanual movements: A PET study. Brain, 121, 769-772. doi:10.1093/brain/121.4.769.
  • Ravignani, A., & Thompson, B. (2017). A note on ‘Noam Chomsky – What kind of creatures are we?’. Language in Society, 46(3), 446-447. doi:10.1017/S0047404517000288.
  • Ravignani, A., Honing, H., & Kotz, S. A. (2017). Editorial: The evolution of rhythm cognition: Timing in music and speech. Frontiers in Human Neuroscience, 11: 303. doi:10.3389/fnhum.2017.00303.

    Abstract

    This editorial serves a number of purposes. First, it aims at summarizing and discussing 33 accepted contributions to the special issue “The evolution of rhythm cognition: Timing in music and speech.” The major focus of the issue is the cognitive neuroscience of rhythm, intended as a neurobehavioral trait undergoing an evolutionary process. Second, this editorial provides the interested reader with a guide to navigate the interdisciplinary contributions to this special issue. For this purpose, we have compiled Table 1, where methods, topics, and study species are summarized and related across contributions. Third, we also briefly highlight research relevant to the evolution of rhythm that has appeared in other journals while this special issue was compiled. Altogether, this editorial constitutes a summary of rhythm research in music and speech spanning two years, from mid-2015 until mid-2017.
  • Ravignani, A., & Sonnweber, R. (2017). Chimpanzees process structural isomorphisms across sensory modalities. Cognition, 161, 74-79. doi:10.1016/j.cognition.2017.01.005.
  • Ravignani, A., Gross, S., Garcia, M., Rubio-Garcia, A., & De Boer, B. (2017). How small could a pup sound? The physical bases of signaling body size in harbor seals. Current Zoology, 63(4), 457-465. doi:10.1093/cz/zox026.

    Abstract

    Vocal communication is a crucial aspect of animal behavior. The mechanism which most mammals use to vocalize relies on three anatomical components. First, air overpressure is generated inside the lower vocal tract. Second, as the airstream goes through the glottis, sound is produced via vocal fold vibration. Third, this sound is further filtered by the geometry and length of the upper vocal tract. Evidence from mammalian anatomy and bioacoustics suggests that some of these three components may covary with an animal’s body size. The framework provided by acoustic allometry suggests that, because vocal tract length (VTL) is more strongly constrained by the growth of the body than vocal fold length (VFL), VTL generates more reliable acoustic cues to an animal’s size. This hypothesis is often tested acoustically but rarely anatomically, especially in pinnipeds. Here, we test the anatomical bases of the acoustic allometry hypothesis in harbor seal pups Phoca vitulina. We dissected and measured vocal tract, vocal folds, and other anatomical features of 15 harbor seals post-mortem. We found that, while VTL correlates with body size, VFL does not. This suggests that, while body growth puts anatomical constraints on how vocalizations are filtered by harbor seals’ vocal tract, no such constraints appear to exist on vocal folds, at least during puppyhood. It is particularly interesting to find anatomical constraints on harbor seals’ vocal tracts, the same anatomical region partially enabling pups to produce individually distinctive vocalizations.
  • Ravignani, A., & Norton, P. (2017). Measuring rhythmic complexity: A primer to quantify and compare temporal structure in speech, movement, and animal vocalizations. Journal of Language Evolution, 2(1), 4-19. doi:10.1093/jole/lzx002.

    Abstract

    Research on the evolution of human speech and phonology benefits from the comparative approach: structural, spectral, and temporal features can be extracted and compared across species in an attempt to reconstruct the evolutionary history of human speech. Here we focus on analytical tools to measure and compare temporal structure in human speech and animal vocalizations. We introduce the reader to a range of statistical methods usable, on the one hand, to quantify rhythmic complexity in single vocalizations, and on the other hand, to compare rhythmic structure between multiple vocalizations. These methods include: time series analysis, distributional measures, variability metrics, Fourier transform, auto- and cross-correlation, phase portraits, and circular statistics. Using computer-generated data, we apply a range of techniques, walking the reader through the necessary software and its functions. We describe which techniques are most appropriate to test particular hypotheses on rhythmic structure, and provide possible interpretations of the tests. These techniques can be equally well applied to find rhythmic structure in gesture, movement, and any other behavior developing over time, when the research focus lies on its temporal structure. This introduction to quantitative techniques for rhythm and timing analysis will hopefully spur additional comparative research, and will produce comparable results across all disciplines working on the evolution of speech, ultimately advancing the field.

    Additional information

    lzx002_Supp.docx
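
    The techniques surveyed in this primer can be illustrated with a few lines of code. The sketch below is not taken from the paper or its supplementary material; it is a minimal Python example under my own assumptions (the function names and toy data are invented here) that derives inter-onset intervals from a synthetic sequence of event onsets and computes one variability metric plus the lag-wise autocorrelation mentioned in the abstract.

    # Minimal sketch (not from the paper): inter-onset intervals (IOIs),
    # a simple variability metric, and lag-wise autocorrelation of the IOI series.
    import numpy as np

    def inter_onset_intervals(onsets):
        """Differences between successive event onsets (seconds)."""
        return np.diff(np.asarray(onsets, dtype=float))

    def ioi_autocorrelation(iois, max_lag=5):
        """Normalized autocorrelation of the IOI series at lags 1..max_lag."""
        x = iois - iois.mean()
        denom = np.sum(x ** 2)
        return np.array([np.sum(x[:-lag] * x[lag:]) / denom
                         for lag in range(1, max_lag + 1)])

    # Toy data: a roughly isochronous pulse (~2 Hz) with a little timing jitter.
    rng = np.random.default_rng(0)
    onsets = np.cumsum(0.5 + rng.normal(0, 0.02, size=40))
    iois = inter_onset_intervals(onsets)
    print("mean IOI:", round(float(iois.mean()), 3),
          "coefficient of variation:", round(float(iois.std() / iois.mean()), 3))
    print("autocorrelation (lags 1-5):", np.round(ioi_autocorrelation(iois), 2))

    For a strictly isochronous sequence the coefficient of variation approaches zero, whereas an alternating long-short pattern yields a negative lag-1 autocorrelation; these are the kinds of signatures the primer's comparative analyses build on.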
  • Ravignani, A. (2017). Interdisciplinary debate: Agree on definitions of synchrony [Correspondence]. Nature, 545, 158. doi:10.1038/545158c.
  • Ravignani, A., & Madison, G. (2017). The paradox of isochrony in the evolution of human rhythm. Frontiers in Psychology, 8: 1820. doi:10.3389/fpsyg.2017.01820.

    Abstract

    Isochrony is crucial to the rhythm of human music. Some neural, behavioral and anatomical traits underlying rhythm perception and production are shared with a broad range of species. These may either have a common evolutionary origin, or have evolved into similar traits under different evolutionary pressures. Other traits underlying rhythm are rare across species, only found in humans and few other animals. Isochrony, or stable periodicity, is common to most human music, but isochronous behaviors are also found in many species. It appears paradoxical that humans are particularly good at producing and perceiving isochronous patterns, although this ability does not conceivably confer any evolutionary advantage to modern humans. This article will attempt to solve this conundrum. To this end, we define the concept of isochrony from the present functional perspective of physiology, cognitive neuroscience, signal processing, and interactive behavior, and review available evidence on isochrony in the signals of humans and other animals. We then attempt to resolve the paradox of isochrony by expanding an evolutionary hypothesis about the function that isochronous behavior may have had in early hominids. Finally, we propose avenues for empirical research to examine this hypothesis and to understand the evolutionary origin of isochrony in general.
  • Ravignani, A. (2017). Visualizing and interpreting rhythmic patterns using phase space plots. Music Perception, 34(5), 557-568. doi:10.1525/MP.2017.34.5.557.

    Abstract

    Structure in musical rhythm can be measured using a number of analytical techniques. While some techniques—like circular statistics or grammar induction—rely on strong top-down assumptions, assumption-free techniques can only provide limited insights on higher-order rhythmic structure. I suggest that research in music perception and performance can benefit from systematically adopting phase space plots, a visualization technique originally developed in mathematical physics that overcomes the aforementioned limitations. By jointly plotting adjacent interonset intervals (IOI), the motivic rhythmic structure of musical phrases, if present, is visualized geometrically without making any a priori assumptions concerning isochrony, beat induction, or metrical hierarchies. I provide visual examples and describe how particular features of rhythmic patterns correspond to geometrical shapes in phase space plots. I argue that research on music perception and systematic musicology stands to benefit from this descriptive tool, particularly in comparative analyses of rhythm production. Phase space plots can be employed as an initial assumption-free diagnostic to find higher order structures (i.e., beyond distributional regularities) before proceeding to more specific, theory-driven analyses.
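
    As a concrete, purely illustrative companion to the abstract above, the following short Python sketch builds such a plot for a toy long-short rhythmic motif by plotting each inter-onset interval against the next one; the data and plotting choices are mine, not the article's.

    # Minimal sketch (toy data, not from the article): a phase space plot in
    # which each inter-onset interval IOI(n) is plotted against IOI(n+1).
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    iois = np.tile([0.6, 0.3], 12) + rng.normal(0, 0.01, 24)  # long-short motif

    plt.figure(figsize=(4, 4))
    plt.plot(iois[:-1], iois[1:], "o-", alpha=0.6)
    plt.xlabel("IOI(n) [s]")
    plt.ylabel("IOI(n+1) [s]")
    plt.title("Phase space plot of a long-short motif")
    plt.tight_layout()
    plt.show()

    A strictly isochronous rhythm collapses onto a single point on the diagonal, whereas the long-short motif above produces two off-diagonal clusters; such shapes can be read off directly, without assuming beat induction or a metrical hierarchy, which is the point made in the abstract.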
  • Reifegerste, J., Meyer, A. S., & Zwitserlood, P. (2017). Inflectional complexity and experience affect plural processing in younger and older readers of Dutch and German. Language, Cognition and Neuroscience, 32(4), 471-487. doi:10.1080/23273798.2016.1247213.

    Abstract

    According to dual-route models of morphological processing, regular inflected words can be retrieved as whole-word forms or decomposed into morphemes. Baayen, Dijkstra, and Schreuder [(1997). Singulars and plurals in Dutch: Evidence for a parallel dual-route model. Journal of Memory and Language, 37, 94–117. doi:10.1006/jmla.1997.2509] proposed a dual-route model according to which plurals of singular-dominant words (e.g. “brides”) are decomposed, while plurals of plural-dominant words (e.g. “peas”) are accessed as whole-word units. We report two lexical-decision experiments investigating how plural processing is influenced by participants’ age (a proxy for experience with word forms) and morphological complexity of the language (German versus Dutch). For both Dutch participant groups and older German participants, we replicated the interaction between number and dominance reported by Baayen and colleagues. Younger German participants showed a main effect of number, indicating access of all plurals via decomposition. Access to stored forms seems to depend on morphological richness and experience with word forms. The data pattern fits neither full-decomposition nor full-storage models, but is compatible with dual-route models.

    Additional information

    plcp_a_1247213_sm6144.pdf
  • Rietveld, T., Van Hout, R., & Ernestus, M. (2004). Pitfalls in corpus research. Computers and the Humanities, 38(4), 343-362. doi:10.1007/s10579-004-1919-1.

    Abstract

    This paper discusses some pitfalls in corpus research and suggests solutions on the basis of examples and computer simulations. We first address reliability problems in language transcriptions, agreement between transcribers, and how disagreements can be dealt with. We then show that the frequencies of occurrence obtained from a corpus cannot always be analyzed with the traditional χ2 test, as corpus data are often not sequentially independent and unit independent. Next, we stress the relevance of the power of statistical tests, and the sizes of statistically significant effects. Finally, we point out that a t-test based on log odds often provides a better alternative to a χ2 analysis based on frequency counts.
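
    The point about unit independence can be made concrete with a small simulation. The sketch below is my own toy illustration, not the authors' simulations: it compares a chi-square test on pooled counts, which treats every token as independent, with a t-test on per-speaker log odds, which treats the speaker as the unit of analysis.

    # Toy illustration (not the authors' code): pooled chi-square vs. a t-test on
    # per-speaker log odds for a binary linguistic variant, with speakers as the
    # natural sampling unit.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def simulate_group(mean_p, n_speakers=20, tokens=200):
        """Each speaker contributes `tokens` observations with an own base rate."""
        probs = np.clip(rng.normal(mean_p, 0.08, n_speakers), 0.01, 0.99)
        hits = rng.binomial(tokens, probs)
        return hits, np.full(n_speakers, tokens)

    hits_a, n_a = simulate_group(0.30)
    hits_b, n_b = simulate_group(0.33)

    # Pooled chi-square test: ignores the clustering of tokens within speakers.
    table = np.array([[hits_a.sum(), n_a.sum() - hits_a.sum()],
                      [hits_b.sum(), n_b.sum() - hits_b.sum()]])
    chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

    # t-test on smoothed per-speaker log odds: one value per speaker.
    def log_odds(hits, n):
        return np.log((hits + 0.5) / (n - hits + 0.5))

    t, p_t = stats.ttest_ind(log_odds(hits_a, n_a), log_odds(hits_b, n_b))

    print(f"pooled chi-square: p = {p_chi2:.3f}")
    print(f"t-test on per-speaker log odds: p = {p_t:.3f}")

    With between-speaker variation in base rates, the pooled chi-square test will often look far more significant than the per-speaker test, which is exactly the kind of pitfall the paper warns about.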
  • Roberts, S. G., & Levinson, S. C. (2017). Conversation, cognition and cultural evolution: A model of the cultural evolution of word order through pressures imposed from turn taking in conversation. Interaction studies, 18(3), 402-429. doi:10.1075/is.18.3.06rob.

    Abstract

    This paper outlines a first attempt to model the special constraints that arise in language processing in conversation, and to explore the implications such functional considerations may have on language typology and language change. In particular, we focus on processing pressures imposed by conversational turn-taking and their consequences for the cultural evolution of the structural properties of language. We present an agent-based model of cultural evolution where agents take turns at talk in conversation. When the start of planning for the next turn is constrained by the position of the verb, the stable distribution of dominant word orders across languages evolves to match the actual distribution reasonably well. We suggest that the interface of cognition and interaction should be a more central part of the story of language evolution.
  • De Roeck, A., Van den Bossche, T., Van der Zee, J., Verheijen, J., De Coster, W., Van Dongen, J., Dillen, L., Baradaran-Heravi, Y., Heeman, B., Sanchez-Valle, R., Lladó, A., Nacmias, B., Sorbi, S., Gelpi, E., Grau-Rivera, O., Gómez-Tortosa, E., Pastor, P., Ortega-Cubero, S., Pastor, M. A., Graff, C., Thonberg, H., Benussi, L., Ghidoni, R., Binetti, G., de Mendonça, A., Martins, M., Borroni, B., Padovani, A., Almeida, M. R., Santana, I., Diehl-Schmid, J., Alexopoulos, P., Clarimon, J., Lleó, A., Fortea, J., Tsolaki, M., Koutroumani, M., Matěj, R., Rohan, Z., De Deyn, P., Engelborghs, S., Cras, P., Van Broeckhoven, C., Sleegers, K., & European Early-Onset Dementia (EU EOD) consortium (2017). Deleterious ABCA7 mutations and transcript rescue mechanisms in early onset Alzheimer’s disease. Acta Neuropathologica, 134, 475-487. doi:10.1007/s00401-017-1714-x.

    Abstract

    Premature termination codon (PTC) mutations in the ATP-Binding Cassette, Sub-Family A, Member 7 gene (ABCA7) have recently been identified as intermediate-to-high penetrant risk factor for late-onset Alzheimer’s disease (LOAD). High variability, however, is observed in downstream ABCA7 mRNA and protein expression, disease penetrance, and onset age, indicative of unknown modifying factors. Here, we investigated the prevalence and disease penetrance of ABCA7 PTC mutations in a large early onset AD (EOAD)—control cohort, and examined the effect on transcript level with comprehensive third-generation long-read sequencing. We characterized the ABCA7 coding sequence with next-generation sequencing in 928 EOAD patients and 980 matched control individuals. With MetaSKAT rare variant association analysis, we observed a fivefold enrichment (p = 0.0004) of PTC mutations in EOAD patients (3%) versus controls (0.6%). Ten novel PTC mutations were only observed in patients, and PTC mutation carriers in general had an increased familial AD load. In addition, we observed nominal risk reducing trends for three common coding variants. Seven PTC mutations were further analyzed using targeted long-read cDNA sequencing on an Oxford Nanopore MinION platform. PTC-containing transcripts for each investigated PTC mutation were observed at varying proportion (5–41% of the total read count), implying incomplete nonsense-mediated mRNA decay (NMD). Furthermore, we distinguished and phased several previously unknown alternative splicing events (up to 30% of transcripts). In conjunction with PTC mutations, several of these novel ABCA7 isoforms have the potential to rescue deleterious PTC effects. In conclusion, ABCA7 PTC mutations play a substantial role in EOAD, warranting genetic screening of ABCA7 in genetically unexplained patients. Long-read cDNA sequencing revealed both varying degrees of NMD and transcript-modifying events, which may influence ABCA7 dosage, disease severity, and may create opportunities for therapeutic interventions in AD.

    Additional information

    Supplementary material
  • Roelofs, A. (2004). Seriality of phonological encoding in naming objects and reading their names. Memory & Cognition, 32(2), 212-222.

    Abstract

    There is a remarkable lack of research bringing together the literatures on oral reading and speaking. As concerns phonological encoding, both models of reading and speaking assume a process of segmental spellout for words, which is followed by serial prosodification in models of speaking (e.g., Levelt, Roelofs, & Meyer, 1999). Thus, a natural place to merge models of reading and speaking would be at the level of segmental spellout. This view predicts similar seriality effects in reading and object naming. Experiment 1 showed that the seriality of encoding inside a syllable revealed in previous studies of speaking is observed for both naming objects and reading their names. Experiment 2 showed that both object naming and reading exhibit the seriality of the encoding of successive syllables previously observed for speaking. Experiment 3 showed that the seriality is also observed when object naming and reading trials are mixed rather than tested separately, as in the first two experiments. These results suggest that a serial phonological encoding mechanism is shared between naming objects and reading their names.
  • Roelofs, A., Meyer, A. S., & Levelt, W. J. M. (1998). A case for the lemma/lexeme distinction in models of speaking: Comment on Caramazza and Miozzo (1997). Cognition, 69(2), 219-230. doi:10.1016/S0010-0277(98)00056-0.

    Abstract

    In a recent series of papers, Caramazza and Miozzo [Caramazza, A., 1997. How many levels of processing are there in lexical access? Cognitive Neuropsychology 14, 177-208; Caramazza, A., Miozzo, M., 1997. The relation between syntactic and phonological knowledge in lexical access: evidence from the 'tip-of-the-tongue' phenomenon. Cognition 64, 309-343; Miozzo, M., Caramazza, A., 1997. On knowing the auxiliary of a verb that cannot be named: evidence for the independence of grammatical and phonological aspects of lexical knowledge. Journal of Cognitive Neuropsychology 9, 160-166] argued against the lemma/lexeme distinction made in many models of lexical access in speaking, including our network model [Roelofs, A., 1992. A spreading-activation theory of lemma retrieval in speaking. Cognition 42, 107-142; Levelt, W.J.M., Roelofs, A., Meyer, A.S., 1998. A theory of lexical access in speech production. Behavioral and Brain Sciences, (in press)]. Their case was based on the observations that grammatical class deficits of brain-damaged patients and semantic errors may be restricted to either spoken or written forms and that the grammatical gender of a word and information about its form can be independently available in tip-of-the-tongue states (TOTs). In this paper, we argue that though our model is about speaking, not taking position on writing, extensions to writing are possible that are compatible with the evidence from aphasia and speech errors. Furthermore, our model does not predict a dependency between gender and form retrieval in TOTs. Finally, we argue that Caramazza and Miozzo have not accounted for important parts of the evidence motivating the lemma/lexeme distinction, such as word frequency effects in homophone production, the strict ordering of gender and phoneme access in LRP data, and the chronometric and speech error evidence for the production of complex morphology.
  • Roelofs, A. (2004). Error biases in spoken word planning and monitoring by aphasic and nonaphasic speakers: Comment on Rapp and Goldrick, 2000. Psychological Review, 111(2), 561-572. doi:10.1037/0033-295X.111.2.561.

    Abstract

    B. Rapp and M. Goldrick (2000) claimed that the lexical and mixed error biases in picture naming by aphasic and nonaphasic speakers argue against models that assume a feedforward-only relationship between lexical items and their sounds in spoken word production. The author contests this claim by showing that a feedforward-only model like WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b) exhibits the error biases in word planning and self-monitoring. Furthermore, it is argued that extant feedback accounts of the error biases and relevant chronometric effects are incompatible. WEAVER++ simulations with self-monitoring revealed that this model accounts for the chronometric data, the error biases, and the influence of the impairment locus in aphasic speakers.
  • Roelofs, A. (2004). Comprehension-based versus production-internal feedback in planning spoken words: A rejoinder to Rapp and Goldrick, 2004. Psychological Review, 111(2), 579-580. doi:10.1037/0033-295X.111.2.579.

    Abstract

    WEAVER++ has no backward links in its form-production network and yet is able to explain the lexical and mixed error biases and the mixed distractor latency effect. This refutes the claim of B. Rapp and M. Goldrick (2000) that these findings specifically support production-internal feedback. Whether their restricted interaction account model can also provide a unified account of the error biases and latency effect remains to be shown.
  • Roelofs, A., & Shitova, N. (2017). Importance of response time in assessing the cerebral dynamics of spoken word production: Comment on Munding et al. Language, Cognition and Neuroscience, 32(8), 1064-1067. doi:10.1080/23273798.2016.1274415.
  • Roelofs, A., & Meyer, A. S. (1998). Metrical structure in planning the production of spoken words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 922-939. doi:10.1037/0278-7393.24.4.922.

    Abstract

    According to most models of speech production, the planning of spoken words involves the independent retrieval of segments and metrical frames followed by segment-to-frame association. In some models, the metrical frame includes a specification of the number and ordering of consonants and vowels, but in the word-form encoding by activation and verification (WEAVER) model (A. Roelofs, 1997), the frame specifies only the stress pattern across syllables. In 6 implicit priming experiments, on each trial, participants produced 1 word out of a small set as quickly as possible. In homogeneous sets, the response words shared word-initial segments, whereas in heterogeneous sets, they did not. Priming effects from shared segments depended on all response words having the same number of syllables and stress pattern, but not on their having the same number of consonants and vowels. No priming occurred when the response words had only the same metrical frame but shared no segments. Computer simulations demonstrated that WEAVER accounts for the findings.
  • Roelofs, A. (1998). Rightward incrementality in encoding simple phrasal forms in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 904-921. doi:10.1037/0278-7393.24.4.904.

    Abstract

    This article reports 7 experiments investigating whether utterances are planned in a parallel or rightward incremental fashion during language production. The experiments examined the role of linear order, length, frequency, and repetition in producing Dutch verb–particle combinations. On each trial, participants produced 1 utterance out of a set of 3 as quickly as possible. The responses shared part of their form or not. For particle-initial infinitives, facilitation was obtained when the responses shared the particle but not when they shared the verb. For verb-initial imperatives, however, facilitation was obtained for the verbs but not for the particles. The facilitation increased with length, decreased with frequency, and was independent of repetition. A simple rightward incremental model accounts quantitatively for the results.
  • Rojas-Berscia, L. M., & Bourdeau, C. (2017). Optional or syntactic ergativity in Shawi? Distribution and possible origins. Linguistic discovery, 15(1), 50-65. doi:10.1349/PS1.1537-0852.A.481.

    Abstract

    In this article we provide a preliminary description and analysis of the most common ergative constructions in Shawi, a Kawapanan language spoken in Northwestern Amazonia. We offer a comparison with its sister language, Shiwilu, for which an optional ergativity-marking pattern has been claimed (Valenzuela, 2008, 2011). There is not enough evidence, however, to claim the exact same for Shawi. Ergativity in the language is driven by mere syntactic motivations. One of the most common constituent orders in the language where the ergative marker is obligatory is OAV. We close the article with a tentative proposal on the passive origins of OAV ergative constructions in the language, via a by-phrase-like incorporation, and eventual grammaticalisation, resorting to the formal syntactic theory known as Semantic Syntax (Seuren, 1996).
  • Rommers, J., Dickson, D. S., Norton, J. J. S., Wlotko, E. W., & Federmeier, K. D. (2017). Alpha and theta band dynamics related to sentential constraint and word expectancy. Language, Cognition and Neuroscience, 32(5), 576-589. doi:10.1080/23273798.2016.1183799.

    Abstract

    Despite strong evidence for prediction during language comprehension, the underlying mechanisms, and the extent to which they are specific to language, remain unclear. Re-analysing an event-related potentials study, we examined responses in the time-frequency domain to expected and unexpected (but plausible) words in strongly and weakly constraining sentences, and found results similar to those reported in nonverbal domains. Relative to expected words, unexpected words elicited an increase in the theta band (4–7 Hz) in strongly constraining contexts, suggesting the involvement of control processes to deal with the consequences of having a prediction disconfirmed. Prior to critical word onset, strongly constraining sentences exhibited a decrease in the alpha band (8–12 Hz) relative to weakly constraining sentences, suggesting that comprehenders can take advantage of predictive sentence contexts to prepare for the input. The results suggest that the brain recruits domain-general preparation and control mechanisms when making and assessing predictions during sentence comprehension.
  • Rommers, J., Meyer, A. S., & Praamstra, P. (2017). Lateralized electrical brain activity reveals covert attention allocation during speaking. Neuropsychologia, 95, 101-110. doi:10.1016/j.neuropsychologia.2016.12.013.

    Abstract

    Speakers usually begin to speak while only part of the utterance has been planned. Earlier work has shown that speech planning processes are reflected in speakers’ eye movements as they describe visually presented objects. However, to-be-named objects can be processed to some extent before they have been fixated upon, presumably because attention can be allocated to objects covertly, without moving the eyes. The present study investigated whether EEG could track speakers’ covert attention allocation as they produced short utterances to describe pairs of objects (e.g., “dog and chair”). The processing difficulty of each object was varied by presenting it in upright orientation (easy) or in upside down orientation (difficult). Background squares flickered at different frequencies in order to elicit steady-state visual evoked potentials (SSVEPs). The N2pc component, associated with the focusing of attention on an item, was detectable not only prior to speech onset, but also during speaking. The time course of the N2pc showed that attention shifted to each object in the order of mention prior to speech onset. Furthermore, greater processing difficulty increased the time speakers spent attending to each object. This demonstrates that the N2pc can track covert attention allocation in a naming task. In addition, an effect of processing difficulty at around 200–350 ms after stimulus onset revealed early attention allocation to the second to-be-named object. The flickering backgrounds elicited SSVEPs, but SSVEP amplitude was not influenced by processing difficulty. These results help complete the picture of the coordination of visual information uptake and motor output during speaking.
  • Rose, M. L., Mok, Z., & Sekine, K. (2017). Communicative effectiveness of pantomime gesture in people with aphasia. International Journal of Language & Communication disorders, 52(2), 227-237. doi:10.1111/1460-6984.12268.

    Abstract

    Background: Human communication occurs through both verbal and visual/motoric modalities. Simultaneous conversational speech and gesture occurs across all cultures and age groups. When verbal communication is compromised, more of the communicative load can be transferred to the gesture modality. Although people with aphasia produce meaning-laden gestures, the communicative value of these has not been adequately investigated. Aims: To investigate the communicative effectiveness of pantomime gesture produced spontaneously by individuals with aphasia during conversational discourse. Methods & Procedures: Sixty-seven undergraduate students wrote down the messages conveyed by 11 people with aphasia that produced pantomime while engaged in conversational discourse. Students were presented with a speech-only, a gesture-only and a combined speech and gesture condition and guessed messages in both a free description and a multiple-choice task. Outcomes & Results: As hypothesized, listener comprehension was more accurate in the combined pantomime gesture and speech condition as compared with the gesture- or speech-only conditions. Participants achieved greater accuracy in the multiple-choice task as compared with the free-description task, but only in the gesture-only condition. The communicative effectiveness of the pantomime gestures increased as the fluency of the participants with aphasia decreased. Conclusions & Implications: These results indicate that when pantomime gesture was presented with aphasic speech, the combination had strong communicative effectiveness. Future studies could investigate how pantomimes can be integrated into interventions for people with aphasia, particularly emphasizing elicitation of pantomimes in as natural a context as possible and highlighting the opportunity for efficient message repair.
  • Rougier, N. P., Hinsen, K., Alexandre, F., Arildsen, T., Barba, L. A., Benureau, F. C. Y., Brown, C. T., De Buyl, P., Caglayan, O., Davison, A. P., Delsuc, M.-A., Detorakis, G., Diem, A. K., Drix, D., Enel, P., Girard, B., Guest, O., Hall, M. G., Henriques, R. N., Hinaut, X., Jaron, K. S., Khamassi, M., Klein, A., Manninen, T., Marchesi, P., McGlinn, D., Metzner, C., Petchey, O., Plesser, H. E., Poisot, T., Ram, K., Ram, Y., Roesch, E., Rossant, C., Rostami, V., Shifman, A., Stachelek, J., Stimberg, M., Stollmeier, F., Vaggi, F., Viejo, G., Vitay, J., Vostinar, A. E., Yurchak, R., & Zito, T. (2017). Sustainable computational science. PeerJ Computer Science, 3: e142. doi:10.7717/peerj-cs.142.

    Abstract

    Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing and reproducing results; however, computational science lags behind. In the best case, authors may provide their source code as a compressed archive and they may feel confident their research is reproducible. But this is not exactly true. James Buckheit and David Donoho proposed more than two decades ago that an article about computational results is advertising, not scholarship. The actual scholarship is the full software environment, code, and data that produced the result. This implies new workflows, in particular in peer-reviews. Existing journals have been slow to adapt: source codes are rarely requested and are hardly ever actually executed to check that they produce the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from other traditional scientific journals. ReScience resides on GitHub where each new implementation of a computational study is made available together with comments, explanations, and software tests.
  • Rowland, C. F., & Monaghan, P. (2017). Developmental psycholinguistics teaches us that we need multi-method, not single-method, approaches to the study of linguistic representation. Commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e308. doi:10.1017/S0140525X17000565.

    Abstract

    In developmental psycholinguistics, we have, for many years, been generating and testing theories that propose both descriptions of adult representations and explanations of how those representations develop. We have learnt that restricting ourselves to any one methodology yields only incomplete data about the nature of linguistic representations. We argue that we need a multi-method approach to the study of representation.
  • Rubio-Fernández, P. (2017). Can we forget what we know in a false‐belief task? An investigation of the true‐belief default. Cognitive Science: a multidisciplinary journal, 41, 218-241. doi:10.1111/cogs.12331.

    Abstract

    It has been generally assumed in the Theory of Mind literature of the past 30 years that young children fail standard false-belief tasks because they attribute their own knowledge to the protagonist (what Leslie and colleagues called a “true-belief default”). Contrary to the traditional view, we have recently proposed that the children's bias is task induced. This alternative view was supported by studies showing that 3 year olds are able to pass a false-belief task that allows them to focus on the protagonist, without drawing their attention to the target object in the test phase. For a more accurate comparison of these two accounts, the present study tested the true-belief default with adults. Four experiments measuring eye movements and response inhibition revealed that (a) adults do not have an automatic tendency to respond to the false-belief question according to their own knowledge and (b) the true-belief response need not be inhibited in order to correctly predict the protagonist's actions. The positive results observed in the control conditions confirm the accuracy of the various measures used. I conclude that the results of this study undermine the true-belief default view and those models that posit mechanisms of response inhibition in false-belief reasoning. Alternatively, the present study with adults and recent studies with children suggest that participants' focus of attention in false-belief tasks may be key to their performance.
  • Rubio-Fernández, P. (2017). Why are bilinguals better than monolinguals at false-belief tasks? Psychonomic Bulletin & Review, 24, 987-998. doi:10.3758/s13423-016-1143-1.

    Abstract

    In standard Theory of Mind tasks, such as the Sally-Anne, children have to predict the behaviour of a mistaken character, which requires attributing the character a false belief. Hundreds of developmental studies in the last 30 years have shown that children under 4 fail standard false-belief tasks. However, recent studies have revealed that bilingual children and adults outperform their monolingual peers in this type of tasks. Bilinguals’ better performance in false-belief tasks has generally been interpreted as a result of their better inhibitory control; that is, bilinguals are allegedly better than monolinguals at inhibiting the erroneous response to the false-belief question. In this review, I challenge the received view and argue instead that bilinguals’ better false-belief performance results from more effective attention management. This challenge ties in with two independent lines of research: on the one hand, recent studies on the role of attentional processes in false-belief tasks with monolingual children and adults; and on the other, current research on bilinguals’ performance in different Executive Function tasks. The review closes with an exploratory discussion of further benefits of bilingual cognition to Theory of Mind development and pragmatics, which may be independent from Executive Function.
  • Rubio-Fernández, P., Geurts, B., & Cummins, C. (2017). Is an apple like a fruit? A study on comparison and categorisation statements. Review of Philosophy and Psychology, 8, 367-390. doi:10.1007/s13164-016-0305-4.

    Abstract

    Categorisation models of metaphor interpretation are based on the premiss that categorisation statements (e.g., ‘Wilma is a nurse’) and comparison statements (e.g., ‘Betty is like a nurse’) are fundamentally different types of assertion. Against this assumption, we argue that the difference is merely a quantitative one: ‘x is a y’ unilaterally entails ‘x is like a y’, and therefore the latter is merely weaker than the former. Moreover, if ‘x is like a y’ licenses the inference that x is not a y, then that inference is a scalar implicature. We defend these claims partly on theoretical grounds and partly on the basis of experimental evidence. A suite of experiments indicates both that ‘x is a y’ unilaterally entails that x is like a y, and that in several respects the non-y inference behaves exactly as one should expect from a scalar implicature. We discuss the implications of our view of categorisation and comparison statements for categorisation models of metaphor interpretation.
  • Rubio-Fernández, P. (2017). The director task: A test of Theory-of-Mind use or selective attention? Psychonomic Bulletin & Review, 24, 1121-1128. doi:10.3758/s13423-016-1190-7.

    Abstract

    Over two decades, the director task has increasingly been employed as a test of the use of Theory of Mind in communication, first in psycholinguistics and more recently in social cognition research. A new version of this task was designed to test two independent hypotheses. First, optimal performance in the director task, as established by the standard metrics of interference, is possible by using selective attention alone, and not necessarily Theory of Mind. Second, pragmatic measures of Theory-of-Mind use can reveal that people actively represent the director’s mental states, contrary to recent claims that they only use domain-general cognitive processes to perform this task. The results of this study support both hypotheses and provide a new interactive paradigm to reliably test Theory-of-Mind use in referential communication.
  • Rubio-Fernández, P., Jara-Ettinger, J., & Gibson, E. (2017). Can processing demands explain toddlers’ performance in false-belief tasks? [Response to Setoh et al. (2016, PNAS)]. Proceedings of the National Academy of Sciences of the United States of America, 114(19): E3750. doi:10.1073/pnas.1701286114.
  • Russel, A., & Trilsbeek, P. (2004). ELAN Audio Playback. Language Archive Newsletter, 1(4), 12-13.
  • Russel, A., & Wittenburg, P. (2004). ELAN Native Media Handling. Language Archive Newsletter, 1(3), 12-12.
  • Sach, M., Seitz, R. J., & Indefrey, P. (2004). Unified inflectional processing of regular and irregular verbs: A PET study. NeuroReport, 15(3), 533-537. doi:10.1097/01.wnr.0000113529.32218.92.

    Abstract

    Psycholinguistic theories propose different models of inflectional processing of regular and irregular verbs: dual mechanism models assume separate modules with lexical frequency sensitivity for irregular verbs. In contradistinction, connectionist models propose a unified process in a single module. We conducted a PET study using a 2 x 2 design with verb regularity and frequency. We found significantly shorter voice onset times for regular verbs and high frequency verbs irrespective of regularity. The PET data showed activations in inferior frontal gyrus (BA 45), nucleus lentiformis, thalamus, and superior medial cerebellum for both regular and irregular verbs but no dissociation for verb regularity. Our results support common processing components for regular and irregular verb inflection.
  • San Roque, L., Floyd, S., & Norcliffe, E. (2017). Evidentiality and interrogativity. Lingua, 186-187, 120-143. doi:10.1016/j.lingua.2014.11.003.

    Abstract

    Understanding of evidentials is incomplete without consideration of their behaviour in interrogative contexts. We discuss key formal, semantic, and pragmatic features of cross-linguistic variation concerning the use of evidential markers in interrogative clauses. Cross-linguistic data suggest that an exclusively speaker-centric view of evidentiality is not sufficient to explain the semantics of information source marking, as in many languages it is typical for evidentials in questions to represent addressee perspective. Comparison of evidentiality and the related phenomenon of egophoricity emphasises how knowledge-based linguistic systems reflect attention to the way knowledge is distributed among participants in the speech situation.
  • Sauppe, S. (2017). Symmetrical and asymmetrical voice systems and processing load: Pupillometric evidence from sentence production in Tagalog and German. Language, 93(2), 288-313. doi:10.1353/lan.2017.0015.

    Abstract

    The voice system of Tagalog has been proposed to be symmetrical in the sense that there are no morphologically unmarked voice forms. This stands in contrast to asymmetrical voice systems which exhibit unmarked and marked voices (e.g., active and passive in German). This paper investigates the psycholinguistic processing consequences of the symmetrical and asymmetrical nature of the Tagalog and German voice systems by analyzing changes in cognitive load during sentence production. Tagalog and German native speakers' pupil diameters were recorded while they produced sentences with different voice markings. Growth curve analyses of the shape of task-evoked pupillary responses revealed that processing load changes were similar for different voices in the symmetrical voice system of Tagalog. By contrast, actives and passives in the asymmetrical voice system of German exhibited different patterns of processing load changes during sentence production. This is interpreted as supporting the notion of symmetry in the Tagalog voice system. Mental effort during sentence planning changes in different ways in the two languages because the grammatical architecture of their voice systems is different. Additionally, an anti-Patient bias in sentence production was found in Tagalog: cognitive load increased at the same time and at the same rate but was maintained for a longer time when the patient argument was the subject, as compared to agent subjects. This indicates that while both voices in Tagalog afford similar planning operations, linking patients to the subject function is more effortful. This anti-Patient bias in production adds converging evidence to “subject preferences” reported in the sentence comprehension literature.
  • Sauppe, S. (2017). Word order and voice influence the timing of verb planning in German sentence production. Frontiers in Psychology, 8: 1648. doi:10.3389/fpsyg.2017.01648.

    Abstract

    Theories of incremental sentence production make different assumptions about when speakers encode information about described events and when verbs are selected, accordingly. An eye tracking experiment on German testing the predictions from linear and hierarchical incrementality about the timing of event encoding and verb planning is reported. In the experiment, participants described depictions of two-participant events with sentences that differed in voice and word order. Verb-medial active sentences and actives and passives with sentence-final verbs were compared. Linear incrementality predicts that sentences with verbs placed early differ from verb-final sentences because verbs are assumed to only be planned shortly before they are articulated. By contrast, hierarchical incrementality assumes that speakers start planning with relational encoding of the event. A weak version of hierarchical incrementality assumes that only the action is encoded at the outset of formulation and selection of lexical verbs only occurs shortly before they are articulated, leading to the prediction of different fixation patterns for verb-medial and verb-final sentences. A strong version of hierarchical incrementality predicts no differences between verb-medial and verb-final sentences because it assumes that verbs are always lexically selected early in the formulation process. Based on growth curve analyses of fixations to agent and patient characters in the described pictures, and the influence of character humanness and the lack of an influence of the visual salience of characters on speakers' choice of active or passive voice, the current results suggest that while verb planning does not necessarily occur early during formulation, speakers of German always create an event representation early.
  • Scerri, T. S., Fisher, S. E., Francks, C., MacPhie, I. L., Paracchini, S., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2004). Putative functional alleles of DYX1C1 are not associated with dyslexia susceptibility in a large sample of sibling pairs from the UK [Letter to JMG]. Journal of Medical Genetics, 41(11), 853-857. doi:10.1136/jmg.2004.018341.
  • Schiller, N. O., Fikkert, P., & Levelt, C. C. (2004). Stress priming in picture naming: An SOA study. Brain and Language, 90(1-3), 231-240. doi:10.1016/S0093-934X(03)00436-X.

    Abstract

    This study investigates whether or not the representation of lexical stress information can be primed during speech production. In four experiments, we attempted to prime the stress position of bisyllabic target nouns (picture names) having initial and final stress with auditory prime words having either the same or different stress as the target (e.g., WORtel–MOtor vs. koSTUUM–MOtor; capital letters indicate stressed syllables in prime–target pairs). Furthermore, half of the prime words were semantically related, the other half unrelated. Overall, picture names were not produced faster when the prime word had the same stress as the target than when the prime had different stress, i.e., there was no stress-priming effect in any experiment. This result would not be expected if stress were stored in the lexicon. However, targets with initial stress were responded to faster than final-stress targets. This effect was attributable neither to the quality of the pictures nor to frequency of occurrence or voice-key characteristics. We hypothesize here that this stress effect is a genuine encoding effect, i.e., words with stress on the second syllable take longer to be encoded because their stress pattern is irregular with respect to the lexical distribution of bisyllabic stress patterns, even though it can be regular with respect to metrical stress rules in Dutch. The results of the experiments are discussed in the framework of models of phonological encoding.
  • Schiller, N. O., & De Ruiter, J. P. (2004). Some notes on priming, alignment, and self-monitoring [Commentary]. Behavioral and Brain Sciences, 27(2), 208-209. doi:10.1017/S0140525X0441005X.

    Abstract

    Any complete theory of speaking must take the dialogical function of language use into account. Pickering & Garrod (P&G) make some progress on this point. However, we question whether their interactive alignment model is the optimal approach. In this commentary, we specifically criticize (1) their notion of alignment being implemented through priming, and (2) their claim that self-monitoring can occur at all levels of linguistic representation.
  • Schiller, N. O. (2004). The onset effect in word naming. Journal of Memory and Language, 50(4), 477-490. doi:10.1016/j.jml.2004.02.004.

    Abstract

    This study investigates whether or not masked form priming effects in the naming task depend on the number of shared segments between prime and target. Dutch participants named bisyllabic words, which were preceded by visual masked primes. When primes shared the initial segment(s) with the target, naming latencies were shorter than in a control condition (string of percent signs). Onset complexity (singleton vs. complex word onset) did not modulate this priming effect in Dutch. Furthermore, significant priming due to shared final segments was only found when the prime did not contain a mismatching onset, suggesting an interfering role of initial non-target segments. It is concluded that (a) degree of overlap (segmental match vs. mismatch), and (b) position of overlap (initial vs. final) influence the magnitude of the form priming effect in the naming task. A modification of the segmental overlap hypothesis (Schiller, 1998) is proposed to account for the data.
  • Schiller, N. O. (1998). The effect of visually masked syllable primes on the naming latencies of words and pictures. Journal of Memory and Language, 39, 484-507. doi:10.1006/jmla.1998.2577.

    Abstract

    To investigate the role of the syllable in Dutch speech production, five experiments were carried out to examine the effect of visually masked syllable primes on the naming latencies for written words and pictures. Targets had clear syllable boundaries and began with a CV syllable (e.g., ka.no) or a CVC syllable (e.g., kak.tus), or had ambiguous syllable boundaries and began with a CV[C] syllable (e.g., ka[pp]er). In the syllable match condition, bisyllabic Dutch nouns or verbs were preceded by primes that were identical to the target’s first syllable. In the syllable mismatch condition, the prime was either shorter or longer than the target’s first syllable. A neutral condition was also included. None of the experiments showed a syllable priming effect. Instead, all related primes facilitated the naming of the targets. It is concluded that the syllable does not play a role in the process of phonological encoding in Dutch. Because the amount of facilitation increased with increasing overlap between prime and target, the priming effect is accounted for by a segmental overlap hypothesis.
  • Schmiedtová, B. (2004). At the same time...: The expression of simultaneity in learner varieties. Berlin: Mouton de Gruyter.

    Abstract

    The study undertakes a detailed and systematic classification of linguistic simultaneity expressions. Further, it aims to provide a well-documented survey of how simultaneity is expressed by native speakers in their own language. On the basis of real production data, the book answers the questions of how native speakers express temporal simultaneity in general, and how learners at different levels of proficiency deal with this situation under experimental test conditions. Furthermore, the results of this study shed new light on our understanding of aspect in general, and on its acquisition by adult learners.
  • Schoffelen, J.-M., Hulten, A., Lam, N. H. L., Marquand, A. F., Udden, J., & Hagoort, P. (2017). Frequency-specific directed interactions in the human brain network for language. Proceedings of the National Academy of Sciences of the United States of America, 114(30), 8083-8088. doi:10.1073/pnas.1703155114.

    Abstract

    The brain’s remarkable capacity for language requires bidirectional interactions between functionally specialized brain regions. We used magnetoencephalography to investigate interregional interactions in the brain network for language while 102 participants were reading sentences. Using Granger causality analysis, we identified inferior frontal cortex and anterior temporal regions as receiving widespread input and middle temporal regions as sending widespread output. This fits well with the notion that these regions play a central role in language processing. Characterization of the functional topology of this network, using data-driven matrix factorization, which allowed for partitioning into a set of subnetworks, revealed directed connections at distinct frequencies of interaction. Connections originating from temporal regions peaked at alpha frequency, whereas connections originating from frontal and parietal regions peaked at beta frequency. These findings indicate that the information flow between language-relevant brain areas, which is required for linguistic processing, may depend on the contributions of distinct brain rhythms.

    Additional information

    pnas.201703155SI.pdf
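
    Code sketch

    The data-driven matrix factorization mentioned in this abstract can be sketched in a few lines. The code below assumes a precomputed parcel-by-parcel matrix of (non-negative) Granger causality estimates and applies non-negative matrix factorization to split it into subnetworks; it illustrates the general technique only and is not the authors' MEG pipeline. The file name and the number of components are placeholders.

        # Sketch: partitioning a directed connectivity matrix into subnetworks
        # with non-negative matrix factorization (NMF). gc[i, j] is assumed to
        # hold a Granger causality estimate from parcel i to parcel j.
        import numpy as np
        from sklearn.decomposition import NMF

        gc = np.load("granger_matrix.npy")  # hypothetical (n_parcels x n_parcels) matrix

        nmf = NMF(n_components=5, init="nndsvd", max_iter=500, random_state=0)
        senders = nmf.fit_transform(gc)   # sending strength of each parcel per subnetwork
        receivers = nmf.components_.T     # receiving strength of each parcel per subnetwork

        # Assign each parcel to the subnetwork in which it participates most strongly.
        print(senders.argmax(axis=1), receivers.argmax(axis=1))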
  • Schuerman, W. L., Meyer, A. S., & McQueen, J. M. (2017). Mapping the speech code: Cortical responses linking the perception and production of vowels. Frontiers in Human Neuroscience, 11: 161. doi:10.3389/fnhum.2017.00161.

    Abstract

    The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded. In a subsequent speech production task, half the participants received shifted auditory feedback, leading most to alter their articulations. This was followed by a second, post-training vowel categorization task. We compared changes in vowel production to both behavioral and electrophysiological changes in vowel perception. No differences in phonetic categorization were observed between groups receiving altered or unaltered feedback. However, exploratory analyses revealed correlations between vocal motor behavior and phonetic categorization. EEG analyses revealed correlations between vocal motor behavior and cortical responses in both early and late time windows. These results suggest that participants' recent production behavior influenced subsequent vowel perception. We suggest that the change in perception can be best characterized as a mapping of acoustics onto articulation.
  • Schuerman, W. L., Nagarajan, S., McQueen, J. M., & Houde, J. (2017). Sensorimotor adaptation affects perceptual compensation for coarticulation. The Journal of the Acoustical Society of America, 141(4), 2693-2704. doi:10.1121/1.4979791.

    Abstract

    A given speech sound will be realized differently depending on the context in which it is produced. Listeners have been found to compensate perceptually for these coarticulatory effects, yet it is unclear to what extent this effect depends on actual production experience. This study investigates whether changes in motor-to-sound mappings induced by adaptation to altered auditory feedback can affect perceptual compensation for coarticulation. Specifically, it tests whether altering how the vowel [i] is produced can affect the categorization of a stimulus continuum between an alveolar and a palatal fricative whose interpretation depends on vocalic context. It was found that participants could be sorted into three groups based on whether they tended to oppose the direction of the shifted auditory feedback, to follow it, or a mixture of the two, and that these articulatory responses, not the shifted feedback the participants heard, correlated with changes in perception. These results indicate that sensorimotor adaptation to altered feedback can affect the perception of unaltered yet coarticulatorily dependent speech sounds, suggesting a modulatory role of sensorimotor experience on speech perception.
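
    Code sketch

    The grouping-and-correlation logic described in this abstract can be illustrated with a toy sketch: participants whose produced vowels move against the imposed feedback shift count as opposers, those who move with it as followers (the study additionally distinguishes a mixed group), and the per-participant articulatory change is then correlated with the change in the fricative category boundary. All values and variable names below are made-up placeholders, not data from the study.

        # Toy sketch of the articulatory-response grouping and the
        # production-perception correlation described above.
        import numpy as np
        from scipy.stats import pearsonr

        # Per-participant change in produced vowel acoustics (positive = in the
        # direction of the altered feedback) and change in the location of the
        # alveolar/palatal category boundary. Placeholder values only.
        prod_shift_hz = np.array([-40.0, -25.0, 5.0, 30.0, -10.0, 45.0])
        boundary_shift = np.array([-0.6, -0.3, 0.1, 0.5, -0.2, 0.7])

        # Crude two-way grouping (the study also identifies a mixed group).
        groups = np.where(prod_shift_hz < 0, "opposer", "follower")

        # The key claim: the articulatory response, not the altered feedback
        # itself, correlates with the perceptual change.
        r, p = pearsonr(prod_shift_hz, boundary_shift)
        print(groups, f"r = {r:.2f}, p = {p:.3f}")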
  • Schumacher, M., & Skiba, R. (1992). Prädikative und modale Ausdrucksmittel in den Lernervarietäten einer polnischen Migrantin: Eine Longitudinalstudie. Teil I [Predicative and modal means of expression in the learner varieties of a Polish migrant: A longitudinal study. Part I]. Linguistische Berichte, 141, 371-400.
  • Schumacher, M., & Skiba, R. (1992). Prädikative und modale Ausdrucksmittel in den Lernervarietäten einer polnischen Migrantin: Eine Longitudinalstudie. Teil II [Predicative and modal means of expression in the learner varieties of a Polish migrant: A longitudinal study. Part II]. Linguistische Berichte, 142, 451-475.
