Publications

  • Negwer, M., & Schubert, D. (2017). Talking convergence: Growing evidence links FOXP2 and retinoic acid in shaping speech-related motor circuitry. Frontiers in Neuroscience, 11: 19. doi:10.3389/fnins.2017.00019.

    Abstract

    A commentary on
    FOXP2 drives neuronal differentiation by interacting with retinoic acid signaling pathways

    by Devanna, P., Middelbeek, J., and Vernes, S. C. (2014). Front. Cell. Neurosci. 8:305. doi: 10.3389/fncel.2014.00305
  • Niccolai, V., Klepp, A., Indefrey, P., Schnitzler, A., & Biermann-Ruben, K. (2017). Semantic discrimination impacts tDCS modulation of verb processing. Scientific Reports, 7: 17162. doi:10.1038/s41598-017-17326-w.

    Abstract

    Motor cortex activation observed during body-related verb processing hints at simulation accompanying linguistic understanding. By exploiting the up- and down-regulation that anodal and cathodal transcranial direct current stimulation (tDCS) exert on motor cortical excitability, we aimed at further characterizing the functional contribution of the motor system to linguistic processing. In a double-blind sham-controlled within-subjects design, online stimulation was applied to the left hemispheric hand-related motor cortex of 20 healthy subjects. A dual, double-dissociation task required participants to semantically discriminate concrete (hand/foot) from abstract verb primes as well as to respond with the hand or with the foot to verb-unrelated geometric targets. Analyses were conducted with linear mixed models. Semantic priming was confirmed by faster and more accurate reactions when the response effector was congruent with the verb’s body part. Cathodal stimulation induced faster responses for hand verb primes, thus indicating a somatotopical distribution of cortical activation as induced by body-related verbs. Importantly, this effect depended on performance in semantic discrimination. The current results point to verb processing being selectively modifiable by neuromodulation and at the same time to a dependence of tDCS effects on enhanced simulation. We discuss putative mechanisms operating in this reciprocal dependence of neuromodulation and motor resonance.

  • Nieuwland, M. S., & Martin, A. E. (2017). Neural oscillations and a nascent corticohippocampal theory of reference. Journal of Cognitive Neuroscience, 29(5), 896-910. doi:10.1162/jocn_a_01091.

    Abstract

    The ability to use words to refer to the world is vital to the communicative power of human language. In particular, the anaphoric use of words to refer to previously mentioned concepts (antecedents) allows dialogue to be coherent and meaningful. Psycholinguistic theory posits that anaphor comprehension involves reactivating a memory representation of the antecedent. Whereas this implies the involvement of recognition memory, or the mnemonic sub-routines by which people distinguish old from new, the neural processes for reference resolution are largely unknown. Here, we report time-frequency analysis of four EEG experiments to reveal the increased coupling of functional neural systems associated with referentially coherent expressions compared to referentially problematic expressions. Despite varying in modality, language, and type of referential expression, all experiments showed larger gamma-band power for referentially coherent expressions compared to referentially problematic expressions. Beamformer analysis in high-density Experiment 4 localised the gamma-band increase to posterior parietal cortex around 400-600 ms after anaphor-onset and to frontal-temporal cortex around 500-1000 ms. We argue that the observed gamma-band power increases reflect successful referential binding and resolution, which links incoming information to antecedents through an interaction between the brain’s recognition memory networks and frontal-temporal language network. We integrate these findings with previous results from patient and neuroimaging studies, and we outline a nascent cortico-hippocampal theory of reference.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2005). Testing the limits of the semantic illusion phenomenon: ERPs reveal temporary semantic change deafness in discourse comprehension. Cognitive Brain Research, 24(3), 691-701. doi:10.1016/j.cogbrainres.2005.04.003.

    Abstract

    In general, language comprehension is surprisingly reliable. Listeners very rapidly extract meaning from the unfolding speech signal, on a word-by-word basis, and usually successfully. Research on ‘semantic illusions’ however suggests that under certain conditions, people fail to notice that the linguistic input simply doesn't make sense. In the current event-related brain potentials (ERP) study, we examined whether listeners would, under such conditions, spontaneously detect an anomaly in which a human character central to the story at hand (e.g., “a tourist”) was suddenly replaced by an inanimate object (e.g., “a suitcase”). Because this replacement introduced a very powerful coherence break, we expected listeners to immediately notice the anomaly and generate the standard ERP effect associated with incoherent language, the N400 effect. However, instead of the standard N400 effect, anomalous words elicited a positive ERP effect from about 500–600 ms onwards. The absence of an N400 effect suggests that subjects did not immediately notice the anomaly, and that for a few hundred milliseconds the comprehension system has converged on an apparently coherent but factually incorrect interpretation. The presence of the later ERP effect indicates that subjects were processing for comprehension and did ultimately detect the anomaly. Therefore, we take the absence of a regular N400 effect as the online manifestation of a temporary semantic illusion. Our results also show that even attentive listeners sometimes fail to notice a radical change in the nature of a story character, and therefore suggest a case of short-lived ‘semantic change deafness’ in language comprehension.
  • Nivard, M. G., Gage, S. H., Hottenga, J. J., van Beijsterveldt, C. E. M., Abdellaoui, A., Bartels, M., Baselmans, B. M. L., Ligthart, L., St Pourcain, B., Boomsma, D. I., Munafò, M. R., & Middeldorp, C. M. (2017). Genetic overlap between schizophrenia and developmental psychopathology: Longitudinal and multivariate polygenic risk prediction of common psychiatric traits during development. Schizophrenia Bulletin, 43(6), 1197-1207. doi:10.1093/schbul/sbx031.

    Abstract

    Background: Several nonpsychotic psychiatric disorders in childhood and adolescence can precede the onset of schizophrenia, but the etiology of this relationship remains unclear. We investigated to what extent the association between schizophrenia and psychiatric disorders in childhood is explained by correlated genetic risk factors. Methods: Polygenic risk scores (PRS), reflecting an individual’s genetic risk for schizophrenia, were constructed for 2588 children from the Netherlands Twin Register (NTR) and 6127 from the Avon Longitudinal Study of Parents and Children (ALSPAC). The associations between schizophrenia PRS and measures of anxiety, depression, attention deficit hyperactivity disorder (ADHD), and oppositional defiant disorder/conduct disorder (ODD/CD) were estimated at age 7, 10, 12/13, and 15 years in the 2 cohorts. Results were then meta-analyzed, and a meta-regression analysis was performed to test differences in effect sizes over age and disorders. Results: Schizophrenia PRS were associated with childhood and adolescent psychopathology. Meta-regression analysis showed differences in the associations over disorders, with the strongest association with childhood and adolescent depression and a weaker association for ODD/CD at age 7. The associations increased with age and this increase was steepest for ADHD and ODD/CD. Genetic correlations varied between 0.10 and 0.25. Conclusion: By optimally using longitudinal data across diagnoses in a multivariate meta-analysis this study sheds light on the development of childhood disorders into severe adult psychiatric disorders. The results are consistent with a common genetic etiology of schizophrenia and developmental psychopathology as well as with a stronger shared genetic etiology between schizophrenia and adolescent onset psychopathology.
  • Nivard, M. G., Lubke, G. H., Dolan, C. V., Evans, D. M., St Pourcain, B., Munafo, M. R., & Middeldorp, C. M. (2017). Joint developmental trajectories of internalizing and externalizing disorders between childhood and adolescence. Development and Psychopathology, 29(3), 919-928. doi:10.1017/S0954579416000572.

    Abstract

    This study sought to identify trajectories of DSM-IV based internalizing (INT) and externalizing (EXT) problem scores across childhood and adolescence and to provide insight into the comorbidity by modeling the co-occurrence of INT and EXT trajectories. INT and EXT were measured repeatedly between age 7 and age 15 years in over 7,000 children and analyzed using growth mixture models. Five trajectories were identified for both INT and EXT, including very low, low, decreasing, and increasing trajectories. In addition, an adolescent onset trajectory was identified for INT and a stable high trajectory was identified for EXT. Multinomial regression showed that similar EXT and INT trajectories were associated. However, the adolescent onset INT trajectory was independent of high EXT trajectories, and persisting EXT was mainly associated with decreasing INT. Sex and early life environmental risk factors predicted EXT and, to a lesser extent, INT trajectories. The association between trajectories indicates the need to consider comorbidity when a child presents with INT or EXT disorders, particularly when symptoms start early. This is less necessary when INT symptoms start at adolescence. Future studies should investigate the etiology of co-occurring INT and EXT and the specific treatment needs of these severely affected children.
  • Ocklenburg, S., Schmitz, J., Moinfar, Z., Moser, D., Klose, R., Lor, S., Kunz, G., Tegenthoff, M., Faustmann, P., Francks, C., Epplen, J. T., Kumsta, R., & Güntürkün, O. (2017). Epigenetic regulation of lateralized fetal spinal gene expression underlies hemispheric asymmetries. eLife, 6: e22784. doi:10.7554/eLife.22784.001.

    Abstract

    Lateralization is a fundamental principle of nervous system organization but its molecular determinants are mostly unknown. In humans, asymmetric gene expression in the fetal cortex has been suggested as the molecular basis of handedness. However, human fetuses already show considerable asymmetries in arm movements before the motor cortex is functionally linked to the spinal cord, making it more likely that spinal gene expression asymmetries form the molecular basis of handedness. We analyzed genome-wide mRNA expression and DNA methylation in cervical and anterior thoracic spinal cord segments of five human fetuses and show development-dependent gene expression asymmetries. These gene expression asymmetries were epigenetically regulated by miRNA expression asymmetries in the TGF-β signaling pathway and lateralized methylation of CpG islands. Our findings suggest that molecular mechanisms for epigenetic regulation within the spinal cord constitute the starting point for handedness, implying a fundamental shift in our understanding of the ontogenesis of hemispheric asymmetries in humans.
  • Ortega, G. (2017). Iconicity and sign lexical acquisition: A review. Frontiers in Psychology, 8: 1280. doi:10.3389/fpsyg.2017.01280.

    Abstract

    The study of iconicity, defined as the direct relationship between a linguistic form and its referent, has gained momentum in recent years across a wide range of disciplines. In the spoken modality, there is abundant evidence showing that iconicity is a key factor that facilitates language acquisition. However, when we look at sign languages, which excel in the prevalence of iconic structures, there is a more mixed picture, with some studies showing a positive effect and others showing a null or negative effect. In an attempt to reconcile the existing evidence, the present review presents a critical overview of the literature on the acquisition of a sign language as first (L1) and second (L2) language and points at some factors that may be the source of disagreement. Regarding sign L1 acquisition, the contradicting findings may relate to iconicity being defined in a very broad sense when a more fine-grained operationalisation might reveal an effect in sign learning. Regarding sign L2 acquisition, evidence shows that there is a clear dissociation in the effect of iconicity in that it facilitates conceptual-semantic aspects of sign learning but hinders the acquisition of the exact phonological form of signs. It will be argued that when we consider the gradient nature of iconicity and that signs consist of a phonological form attached to a meaning we can discern how iconicity impacts sign learning in positive and negative ways.
  • Ortega, G., Sumer, B., & Ozyurek, A. (2017). Type of iconicity matters in the vocabulary development of signing children. Developmental Psychology, 53(1), 89-99. doi:10.1037/dev0000161.

    Abstract

    Recent research on signed as well as spoken language shows that the iconic features of the target language might play a role in language development. Here, we ask further whether different types of iconic depictions modulate children’s preferences for certain types of sign-referent links during vocabulary development in sign language. Results from a picture description task indicate that lexical signs with 2 possible variants are used in different proportions by deaf signers from different age groups. While preschool and school-age children favored variants representing actions associated with their referent (e.g., a writing hand for the sign PEN), adults preferred variants representing the perceptual features of those objects (e.g., upward index finger representing a thin, elongated object for the sign PEN). Deaf parents interacting with their children, however, used action- and perceptual-based variants in equal proportion and favored action variants more than adults signing to other adults. We propose that when children are confronted with 2 variants for the same concept, they initially prefer action-based variants because they give them the opportunity to link a linguistic label to familiar schemas linked to their action/motor experiences. Our results echo findings showing a bias for action-based depictions in the development of iconic co-speech gestures suggesting a modality bias for such representations during development.
  • O'Shannessy, C. (2005). Light Warlpiri: A new language. Australian Journal of Linguistics, 25(1), 31-57. doi:10.1080/07268600500110472.
  • Ostarek, M., & Huettig, F. (2017). Spoken words can make the invisible visible – Testing the involvement of low-level visual representations in spoken word processing. Journal of Experimental Psychology: Human Perception and Performance, 43, 499-508. doi:10.1037/xhp0000313.

    Abstract

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" -> picture of a bottle) vs. incongruent ("bottle" -> picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400ms after word onset and decays at 600ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, i.e. what we see. More generally our findings fit best with the notion that spoken words activate modality-specific visual representations that are low-level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before.
  • Ostarek, M., & Huettig, F. (2017). A task-dependent causal role for low-level visual processes in spoken word comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(8), 1215-1224. doi:10.1037/xlm0000375.

    Abstract

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete vs. abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation.

  • Ostarek, M., & Vigliocco, G. (2017). Reading sky and seeing a cloud: On the relevance of events for perceptual simulation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(4), 579-590. doi:10.1037/xlm0000318.

    Abstract

    Previous research has shown that processing words with an up/down association (e.g., bird, foot) can influence the subsequent identification of visual targets in congruent location (at the top/bottom of the screen). However, as facilitation and interference were found under similar conditions, the nature of the underlying mechanisms remained unclear. We propose that word comprehension relies on the perceptual simulation of a prototypical event involving the entity denoted by a word in order to provide a general account of the different findings. In three experiments, participants had to discriminate between two target pictures appearing at the top or the bottom of the screen by pressing the left vs. right button. Immediately before the targets appeared, they saw an up/down word belonging to the target’s event, an up/down word unrelated to the target, or a spatially neutral control word. Prime words belonging to the target’s event facilitated identification of targets at 250 ms SOA (Experiment 1), but only when presented in the vertical location where they are typically seen, indicating that targets were integrated in the simulations activated by the prime words. Moreover, at the same SOA, there was a robust facilitation effect for targets appearing in their typical location regardless of the prime type. However, when words were presented for 100 ms (Experiment 2) or 800 ms (Experiment 3), only a location non-specific priming effect was found, suggesting that the visual system was not activated. Implications for theories of semantic processing are discussed.
  • Ozker, M., Schepers, I., Magnotti, J., Yoshor, D., & Beauchamp, M. (2017). A double dissociation between anterior and posterior superior temporal gyrus for processing audiovisual speech demonstrated by electrocorticography. Journal of Cognitive Neuroscience, 29(6), 1044-1060. doi:10.1162/jocn_a_01110.

    Abstract

    Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.
  • Ozyurek, A., Kita, S., Allen, S., Furman, R., & Brown, A. (2005). How does linguistic framing of events influence co-speech gestures? Insights from crosslinguistic variations and similarities. Gesture, 5(1/2), 219-240.

    Abstract

    What are the relations between linguistic encoding and gestural representations of events during online speaking? The few studies that have been conducted on this topic have yielded somewhat incompatible results with regard to whether and how gestural representations of events change with differences in the preferred semantic and syntactic encoding possibilities of languages. Here we provide large-scale semantic, syntactic and temporal analyses of speech-gesture pairs that depict 10 different motion events from 20 Turkish and 20 English speakers. We find that the gestural representations of the same events differ across languages when they are encoded by different syntactic frames (i.e., verb-framed or satellite-framed). However, where there are similarities across languages, such as omission of a certain element of the event in the linguistic encoding, gestural representations also look similar and omit the same content. The results are discussed in terms of what gestures reveal about the influence of language-specific encoding on on-line thinking patterns and the underlying interactions between speech and gesture during the speaking process.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2017). Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues. Neuropsychologia, 95, 21-29. doi:10.1016/j.neuropsychologia.2016.12.004.

    Abstract

    In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects.
  • Penke, M., Janssen, U., Indefrey, P., & Seitz, R. (2005). No evidence for a rule/procedural deficit in German patients with Parkinson's disease. Brain and Language, 95(1), 139-140. doi:10.1016/j.bandl.2005.07.078.
  • Perlman, M. (2017). Debunking two myths against vocal origins of language: Language is iconic and multimodal to the core. Interaction studies, 18(3), 376-401. doi:10.1075/is.18.3.05per.

    Abstract

    Gesture-first theories of language origins often raise two unsubstantiated arguments against vocal origins. First, they argue that great ape vocal behavior is highly constrained, limited to a fixed, species-typical repertoire of reflexive calls. Second, they argue that vocalizations lack any significant potential to ground meaning through iconicity, or resemblance between form and meaning. This paper reviews the considerable evidence that debunks these two “myths”. Accumulating evidence shows that the great apes exercise voluntary control over their vocal behavior, including their breathing, larynx, and supralaryngeal articulators. They are also able to learn new vocal behaviors, and even show some rudimentary ability for vocal imitation. In addition, an abundance of research demonstrates that the vocal modality affords rich potential for iconicity. People can understand iconicity in sound symbolism, and they can produce iconic vocalizations to communicate a diverse range of meanings. Thus, two of the primary arguments against vocal origins theories are not tenable. As an alternative, the paper concludes that the origins of language – going as far back as our last common ancestor with great apes – are rooted in iconicity in both gesture and vocalization.
  • Perlman, M., & Salmi, R. (2017). Gorillas may use their laryngeal air sacs for whinny-type vocalizations and male display. Journal of Language Evolution, 2(2), 126-140. doi:10.1093/jole/lzx012.

    Abstract

    Great apes and siamangs—but not humans—possess laryngeal air sacs, suggesting that they were lost over hominin evolution. The absence of air sacs in humans may hold clues to speech evolution, but little is known about their functions in extant apes. We investigated whether gorillas use their air sacs to produce the staccato ‘growling’ of the silverback chest beat display. This hypothesis was formulated after viewing a nature documentary showing a display by a silverback western gorilla (Kingo). As Kingo growls, the video shows distinctive vibrations in his chest and throat under which the air sacs extend. We also investigated whether other similarly staccato vocalizations—the whinny, sex whinny, and copulation grunt—might also involve the air sacs. To examine these hypotheses, we collected an opportunistic sample of video and audio evidence from research records and another documentary of Kingo’s group, and from videos of other gorillas found on YouTube. Analysis shows that the four vocalizations are each emitted in rapid pulses of a similar frequency (8–16 pulses per second), and limited visual evidence indicates that they may all occur with upper torso vibrations. Future research should determine how consistently the vibrations co-occur with the vocalizations, whether they are synchronized, and their precise location and timing. Our findings fit with the hypothesis that apes—especially, but not exclusively males—use their air sacs for vocalizations and displays related to size exaggeration for sex and territory. Thus changes in social structure, mating, and sexual dimorphism might have led to the obsolescence of the air sacs and their loss in hominin evolution.
  • Petersson, K. M. (2005). On the relevance of the neurobiological analogue of the finite-state architecture. Neurocomputing, 65-66, 825-832. doi:10.1016/j.neucom.2004.10.108.

    Abstract

    We present two simple arguments for the potential relevance of a neurobiological analogue of the finite-state architecture. The first assumes the classical cognitive framework, is well-known, and is based on the assumption that the brain is finite with respect to its memory organization. The second is formulated within a general dynamical systems framework and is based on the assumption that the brain sustains some level of noise and/or does not utilize infinite precision processing. We briefly review the classical cognitive framework based on Church–Turing computability and non-classical approaches based on analog processing in dynamical systems. We conclude that the dynamical neurobiological analogue of the finite-state architecture appears to be relevant, at least at an implementational level, for cognitive brain systems.
  • Pine, J. M., Rowland, C. F., Lieven, E. V., & Theakston, A. L. (2005). Testing the Agreement/Tense Omission Model: Why the data on children's use of non-nominative 3psg subjects count against the ATOM. Journal of Child Language, 32(2), 269-289. doi:10.1017/S0305000905006860.

    Abstract

    One of the most influential recent accounts of pronoun case-marking errors in young children's speech is Schütze & Wexler's (1996) Agreement/Tense Omission Model (ATOM). The ATOM predicts that the rate of agreeing verbs with non-nominative subjects will be so low that such errors can be reasonably disregarded as noise in the data. The present study tests this prediction on data from 12 children between the ages of 1;8.22 and 3;0.10. This is done, first, by identifying children who produced a reasonably large number of non-nominative 3psg subjects; second, by estimating the expected rate of agreeing verbs with masculine and feminine non-nominative subjects in these children's speech; and, third, by examining the actual rate at which agreeing verb forms occurred with non-nominative subjects in those areas of the data in which the expected error rate was significantly greater than 10%. The results show, first, that only three of the children produced enough non-nominative subjects to allow a reasonable test of the ATOM to be made; second, that for all three of these children, the only area of the data in which the expected frequency of agreeing verbs with non-nominative subjects was significantly greater than 10% was their use of feminine case-marked subjects; and third, that for all three of these children, the rate of agreeing verbs with non-nominative feminine subjects was over 30%. These results raise serious doubts about the claim that children's use of non-nominative subjects can be explained in terms of AGR optionality, and suggest the need for a model of pronoun case-marking error that can explain why some children produce agreeing verb forms with non-nominative subjects as often as they do.
  • Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2005). Articulatory planning is continuous and sensitive to informational redundancy. Phonetica, 62(2-4), 146-159. doi:10.1159/000090095.

    Abstract

    This study investigates the relationship between word repetition, predictability from neighbouring words, and articulatory reduction in Dutch. For the seven most frequent words ending in the adjectival suffix -lijk, 40 occurrences were randomly selected from a large database of face-to-face conversations. Analysis of the selected tokens showed that the degree of articulatory reduction (as measured by duration and number of realized segments) was affected by repetition, predictability from the previous word and predictability from the following word. Interestingly, not all of these effects were significant across morphemes and target words. Repetition effects were limited to suffixes, while effects of predictability from the previous word were restricted to the stems of two of the seven target words. Predictability from the following word affected the stems of all target words equally, but not all suffixes. The implications of these findings for models of speech production are discussed.
  • Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2005). Lexical frequency and acoustic reduction in spoken Dutch. Journal of the Acoustical Society of America, 118(4), 2561-2569. doi:10.1121/1.2011150.

    Abstract

    This study investigates the effects of lexical frequency on the durational reduction of morphologically complex words in spoken Dutch. The hypothesis that high-frequency words are more reduced than low-frequency words was tested by comparing the durations of affixes occurring in different carrier words. Four Dutch affixes were investigated, each occurring in a large number of words with different frequencies. The materials came from a large database of face-to-face conversations. For each word containing a target affix, one token was randomly selected for acoustic analysis. Measurements were made of the duration of the affix as a whole and the durations of the individual segments in the affix. For three of the four affixes, a higher frequency of the carrier word led to shorter realizations of the affix as a whole, individual segments in the affix, or both. Other relevant factors were the sex and age of the speaker, segmental context, and speech rate. To accommodate these findings, models of speech production should allow word frequency to affect the acoustic realizations of lower-level units, such as individual speech sounds occurring in affixes.
  • Poletiek, F. H., & Rassin, E. (Eds.). (2005). Het (on)bewuste [Special Issue]. De Psycholoog.
  • Poletiek, F. H., & Van den Bos, E. J. (2005). Het onbewuste is een dader met een motief. De Psycholoog, 40(1), 11-17.
  • Poort, E. D., & Rodd, J. M. (2017). The cognate facilitation effect in bilingual lexical decision is influenced by stimulus list composition. Acta Psychologica, 180, 52-63. doi:10.1016/j.actpsy.2017.08.008.

    Abstract

    Cognates share their form and meaning across languages: “winter” in English means the same as “winter” in Dutch. Research has shown that bilinguals process cognates more quickly than words that exist in one language only (e.g. “ant” in English). This finding is taken as strong evidence for the claim that bilinguals have one integrated lexicon and that lexical access is language non-selective. Two English lexical decision experiments with Dutch–English bilinguals investigated whether the cognate facilitation effect is influenced by stimulus list composition. In Experiment 1, the ‘standard’ version, which included only cognates, English control words and regular non-words, showed significant cognate facilitation (31 ms). In contrast, the ‘mixed’ version, which also included interlingual homographs, pseudohomophones (instead of regular non-words) and Dutch-only words, showed a significantly different profile: a non-significant disadvantage for the cognates (8 ms). Experiment 2 examined the specific impact of these three additional stimuli types and found that only the inclusion of Dutch words significantly reduced the cognate facilitation effect. Additional exploratory analyses revealed that, when the preceding trial was a Dutch word, cognates were recognised up to 50 ms more slowly than English controls. We suggest that when participants must respond ‘no’ to non-target language words, competition arises between the ‘yes’- and ‘no’-responses associated with the two interpretations of a cognate, which (partially) cancels out the facilitation that is a result of the cognate's shared form and meaning. We conclude that the cognate facilitation effect is a real effect that originates in the lexicon, but that cognates can be subject to competition effects outside the lexicon.

    Additional information

    supplementary materials
  • Pouw, W., van Gog, T., Zwaan, R. A., & Paas, F. (2017). Are gesture and speech mismatches produced by an integrated gesture-speech system? A more dynamically embodied perspective is needed for understanding gesture-related learning. Behavioral and Brain Sciences, 40: e68. doi:10.1017/S0140525X15003039.

    Abstract

    We observe a tension in the target article as it stresses an integrated gesture-speech system that can nevertheless consist of contradictory representational states, which are reflected by mismatches in gesture and speech or sign. Beyond problems of coherence, this prevents furthering our understanding of gesture-related learning. As a possible antidote, we invite a more dynamically embodied perspective to the stage.
  • Praamstra, P., Hagoort, P., Maassen, B., & Crul, T. (1991). Word deafness and auditory cortical function: A case history and hypothesis. Brain, 114, 1197-1225. doi:10.1093/brain/114.3.1197.

    Abstract

    A patient who already had Wernicke's aphasia due to a left temporal lobe lesion suffered a severe deterioration specifically of auditory language comprehension, subsequent to right temporal lobe infarction. A detailed comparison of his new condition with his language status before the second stroke revealed that the newly acquired deficit was limited to tasks related to auditory input. Further investigations demonstrated a speech perceptual disorder, which we analysed as due to deficits both at the level of general auditory processes and at the level of phonetic analysis. We discuss some arguments related to hemisphere specialization of phonetic processing and to the disconnection explanation of word deafness that support the hypothesis of word deafness being generally caused by mixed deficits.
  • Ravignani, A., & Thompson, B. (2017). A note on ‘Noam Chomsky – What kind of creatures are we?’. Language in Society, 46(3), 446-447. doi:10.1017/S0047404517000288.
  • Ravignani, A., Honing, H., & Kotz, S. A. (2017). Editorial: The evolution of rhythm cognition: Timing in music and speech. Frontiers in Human Neuroscience, 11: 303. doi:10.3389/fnhum.2017.00303.

    Abstract

    This editorial serves a number of purposes. First, it aims at summarizing and discussing the 33 accepted contributions to the special issue “The evolution of rhythm cognition: Timing in music and speech.” The major focus of the issue is the cognitive neuroscience of rhythm, intended as a neurobehavioral trait undergoing an evolutionary process. Second, this editorial provides the interested reader with a guide to navigate the interdisciplinary contributions to this special issue. For this purpose, we have compiled Table 1, where methods, topics, and study species are summarized and related across contributions. Third, we also briefly highlight research relevant to the evolution of rhythm that has appeared in other journals while this special issue was compiled. Altogether, this editorial constitutes a summary of rhythm research in music and speech spanning two years, from mid-2015 until mid-2017.
  • Ravignani, A., & Sonnweber, R. (2017). Chimpanzees process structural isomorphisms across sensory modalities. Cognition, 161, 74-79. doi:10.1016/j.cognition.2017.01.005.
  • Ravignani, A., Gross, S., Garcia, M., Rubio-Garcia, A., & De Boer, B. (2017). How small could a pup sound? The physical bases of signaling body size in harbor seals. Current Zoology, 63(4), 457-465. doi:10.1093/cz/zox026.

    Abstract

    Vocal communication is a crucial aspect of animal behavior. The mechanism which most mammals use to vocalize relies on three anatomical components. First, air overpressure is generated inside the lower vocal tract. Second, as the airstream goes through the glottis, sound is produced via vocal fold vibration. Third, this sound is further filtered by the geometry and length of the upper vocal tract. Evidence from mammalian anatomy and bioacoustics suggests that some of these three components may covary with an animal’s body size. The framework provided by acoustic allometry suggests that, because vocal tract length (VTL) is more strongly constrained by the growth of the body than vocal fold length (VFL), VTL generates more reliable acoustic cues to an animal’s size. This hypothesis is often tested acoustically but rarely anatomically, especially in pinnipeds. Here, we test the anatomical bases of the acoustic allometry hypothesis in harbor seal pups Phoca vitulina. We dissected and measured vocal tract, vocal folds, and other anatomical features of 15 harbor seals post-mortem. We found that, while VTL correlates with body size, VFL does not. This suggests that, while body growth puts anatomical constraints on how vocalizations are filtered by harbor seals’ vocal tract, no such constraints appear to exist on vocal folds, at least during puppyhood. It is particularly interesting to find anatomical constraints on harbor seals’ vocal tracts, the same anatomical region partially enabling pups to produce individually distinctive vocalizations.
  • Ravignani, A., & Norton, P. (2017). Measuring rhythmic complexity: A primer to quantify and compare temporal structure in speech, movement, and animal vocalizations. Journal of Language Evolution, 2(1), 4-19. doi:10.1093/jole/lzx002.

    Abstract

    Research on the evolution of human speech and phonology benefits from the comparative approach: structural, spectral, and temporal features can be extracted and compared across species in an attempt to reconstruct the evolutionary history of human speech. Here we focus on analytical tools to measure and compare temporal structure in human speech and animal vocalizations. We introduce the reader to a range of statistical methods usable, on the one hand, to quantify rhythmic complexity in single vocalizations, and on the other hand, to compare rhythmic structure between multiple vocalizations. These methods include: time series analysis, distributional measures, variability metrics, Fourier transform, auto- and cross-correlation, phase portraits, and circular statistics. Using computer-generated data, we apply a range of techniques, walking the reader through the necessary software and its functions. We describe which techniques are most appropriate to test particular hypotheses on rhythmic structure, and provide possible interpretations of the tests. These techniques can be equally well applied to find rhythmic structure in gesture, movement, and any other behavior developing over time, when the research focus lies on its temporal structure. This introduction to quantitative techniques for rhythm and timing analysis will hopefully spur additional comparative research, and will produce comparable results across all disciplines working on the evolution of speech, ultimately advancing the field.

    Additional information

    lzx002_Supp.docx
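    Among the techniques this primer covers, autocorrelation of inter-onset intervals (IOIs) is one of the simplest to try out. The sketch below is purely illustrative and not taken from the paper: the onset data, function names, and lag choice are invented. It computes IOIs from event onset times and then a normalized autocorrelation of the IOI sequence; a high value at lag k hints at a repeating rhythmic unit of k intervals.

```python
# Hypothetical sketch (not from the paper): quantify temporal structure
# of an event sequence via inter-onset intervals and autocorrelation.

def inter_onset_intervals(onsets):
    """Differences between successive event onsets (in seconds)."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

def autocorrelation(xs, lag):
    """Normalized autocorrelation of a sequence at a given lag."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    if var == 0:
        return 0.0  # constant sequence (e.g. perfect isochrony)
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean) for i in range(n - lag))
    return cov / var

# Invented onsets with alternating short/long gaps (a period-2 rhythm).
onsets = [0.0, 0.3, 0.9, 1.2, 1.8, 2.1, 2.7, 3.0, 3.6]
iois = inter_onset_intervals(onsets)          # ≈ [0.3, 0.6, 0.3, 0.6, ...]
print(round(autocorrelation(iois, 2), 2))     # ≈ 0.75: every other IOI matches
```

    For real vocalization data one would inspect a range of lags and compare against surrogate data, as the techniques reviewed in the paper do more rigorously.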
  • Ravignani, A. (2017). Interdisciplinary debate: Agree on definitions of synchrony [Correspondence]. Nature, 545, 158. doi:10.1038/545158c.
  • Ravignani, A., & Madison, G. (2017). The paradox of isochrony in the evolution of human rhythm. Frontiers in Psychology, 8: 1820. doi:10.3389/fpsyg.2017.01820.

    Abstract

    Isochrony is crucial to the rhythm of human music. Some neural, behavioral and anatomical traits underlying rhythm perception and production are shared with a broad range of species. These may either have a common evolutionary origin, or have evolved into similar traits under different evolutionary pressures. Other traits underlying rhythm are rare across species, only found in humans and few other animals. Isochrony, or stable periodicity, is common to most human music, but isochronous behaviors are also found in many species. It appears paradoxical that humans are particularly good at producing and perceiving isochronous patterns, although this ability does not conceivably confer any evolutionary advantage to modern humans. This article will attempt to solve this conundrum. To this end, we define the concept of isochrony from the present functional perspective of physiology, cognitive neuroscience, signal processing, and interactive behavior, and review available evidence on isochrony in the signals of humans and other animals. We then attempt to resolve the paradox of isochrony by expanding an evolutionary hypothesis about the function that isochronous behavior may have had in early hominids. Finally, we propose avenues for empirical research to examine this hypothesis and to understand the evolutionary origin of isochrony in general.
  • Ravignani, A. (2017). Visualizing and interpreting rhythmic patterns using phase space plots. Music Perception, 34(5), 557-568. doi:10.1525/MP.2017.34.5.557.

    Abstract

    Structure in musical rhythm can be measured using a number of analytical techniques. While some techniques—like circular statistics or grammar induction—rely on strong top-down assumptions, assumption-free techniques can only provide limited insights on higher-order rhythmic structure. I suggest that research in music perception and performance can benefit from systematically adopting phase space plots, a visualization technique originally developed in mathematical physics that overcomes the aforementioned limitations. By jointly plotting adjacent interonset intervals (IOI), the motivic rhythmic structure of musical phrases, if present, is visualized geometrically without making any a priori assumptions concerning isochrony, beat induction, or metrical hierarchies. I provide visual examples and describe how particular features of rhythmic patterns correspond to geometrical shapes in phase space plots. I argue that research on music perception and systematic musicology stands to benefit from this descriptive tool, particularly in comparative analyses of rhythm production. Phase space plots can be employed as an initial assumption-free diagnostic to find higher order structures (i.e., beyond distributional regularities) before proceeding to more specific, theory-driven analyses.
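    The core of such a plot is easy to sketch. As an illustration (the data and function name below are invented, not from the article), a phase space plot in this sense is just a scatter of adjacent-IOI pairs, so each point has coordinates (IOI_n, IOI_n+1):

```python
# Hypothetical sketch (not from the article): the point set underlying a
# phase space plot of adjacent inter-onset intervals (IOIs).

def phase_space_points(iois):
    """Pairs of adjacent IOIs, the coordinates of a phase space plot."""
    return list(zip(iois, iois[1:]))

# An isochronous rhythm collapses onto a single point on the diagonal...
isochronous = [0.5, 0.5, 0.5, 0.5, 0.5]
print(set(phase_space_points(isochronous)))   # {(0.5, 0.5)}

# ...while a repeating long-short motif alternates between two points
# off the diagonal.
long_short = [0.6, 0.3, 0.6, 0.3, 0.6]
print(sorted(set(phase_space_points(long_short))))  # [(0.3, 0.6), (0.6, 0.3)]
```

    To actually draw the plot, one could pass the pairs to any scatter-plot routine, e.g. matplotlib's plt.scatter; repeating motifs then show up as a small set of repeatedly visited points.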
  • Reifegerste, J., Meyer, A. S., & Zwitserlood, P. (2017). Inflectional complexity and experience affect plural processing in younger and older readers of Dutch and German. Language, Cognition and Neuroscience, 32(4), 471-487. doi:10.1080/23273798.2016.1247213.

    Abstract

    According to dual-route models of morphological processing, regular inflected words can be retrieved as whole-word forms or decomposed into morphemes. Baayen, Dijkstra, and Schreuder [(1997). Singulars and plurals in Dutch: Evidence for a parallel dual-route model. Journal of Memory and Language, 37, 94–117. doi:10.1006/jmla.1997.2509] proposed a dual-route model according to which plurals of singular-dominant words (e.g. “brides”) are decomposed, while plurals of plural-dominant words (e.g. “peas”) are accessed as whole-word units. We report two lexical-decision experiments investigating how plural processing is influenced by participants’ age (a proxy for experience with word forms) and the morphological complexity of the language (German versus Dutch). For both Dutch participant groups and older German participants, we replicated the interaction between number and dominance reported by Baayen and colleagues. Younger German participants showed a main effect of number, indicating access of all plurals via decomposition. Access to stored forms seems to depend on morphological richness and experience with word forms. The data pattern fits neither full-decomposition nor full-storage models, but is compatible with dual-route models.

    Additional information

    plcp_a_1247213_sm6144.pdf
  • Rey, A., & Schiller, N. O. (2005). Graphemic complexity and multiple print-to-sound associations in visual word recognition. Memory & Cognition, 33(1), 76-85.

    Abstract

    It has recently been reported that words containing a multiletter grapheme are processed slower than are words composed of single-letter graphemes (Rastle & Coltheart, 1998; Rey, Jacobs, Schmidt-Weigand, & Ziegler, 1998). In the present study, using a perceptual identification task, we found in Experiment 1 that this graphemic complexity effect can be observed while controlling for multiple print-to-sound associations, indexed by regularity or consistency. In Experiment 2, we obtained cumulative effects of graphemic complexity and regularity. These effects were replicated in Experiment 3 in a naming task. Overall, these results indicate that graphemic complexity and multiple print-to-sound associations effects are independent and should be accounted for in different ways by models of written word processing.
  • Roberts, S. G., & Levinson, S. C. (2017). Conversation, cognition and cultural evolution: A model of the cultural evolution of word order through pressures imposed from turn taking in conversation. Interaction studies, 18(3), 402-429. doi:10.1075/is.18.3.06rob.

    Abstract

    This paper outlines a first attempt to model the special constraints that arise in language processing in conversation, and to explore the implications such functional considerations may have on language typology and language change. In particular, we focus on processing pressures imposed by conversational turn-taking and their consequences for the cultural evolution of the structural properties of language. We present an agent-based model of cultural evolution where agents take turns at talk in conversation. When the start of planning for the next turn is constrained by the position of the verb, the stable distribution of dominant word orders across languages evolves to match the actual distribution reasonably well. We suggest that the interface of cognition and interaction should be a more central part of the story of language evolution.
  • De Roeck, A., Van den Bossche, T., Van der Zee, J., Verheijen, J., De Coster, W., Van Dongen, J., Dillen, L., Baradaran-Heravi, Y., Heeman, B., Sanchez-Valle, R., Lladó, A., Nacmias, B., Sorbi, S., Gelpi, E., Grau-Rivera, O., Gómez-Tortosa, E., Pastor, P., Ortega-Cubero, S., Pastor, M. A., Graff, C., Thonberg, H., Benussi, L., Ghidoni, R., Binetti, G., de Mendonça, A., Martins, M., Borroni, B., Padovani, A., Almeida, M. R., Santana, I., Diehl-Schmid, J., Alexopoulos, P., Clarimon, J., Lleó, A., Fortea, J., Tsolaki, M., Koutroumani, M., Matěj, R., Rohan, Z., De Deyn, P., Engelborghs, S., Cras, P., Van Broeckhoven, C., Sleegers, K., & European Early-Onset Dementia (EU EOD) consortium (2017). Deleterious ABCA7 mutations and transcript rescue mechanisms in early onset Alzheimer’s disease. Acta Neuropathologica, 134, 475-487. doi:10.1007/s00401-017-1714-x.

    Abstract

    Premature termination codon (PTC) mutations in the ATP-Binding Cassette, Sub-Family A, Member 7 gene (ABCA7) have recently been identified as an intermediate-to-high penetrant risk factor for late-onset Alzheimer’s disease (LOAD). High variability, however, is observed in downstream ABCA7 mRNA and protein expression, disease penetrance, and onset age, indicative of unknown modifying factors. Here, we investigated the prevalence and disease penetrance of ABCA7 PTC mutations in a large early onset AD (EOAD)—control cohort, and examined the effect on transcript level with comprehensive third-generation long-read sequencing. We characterized the ABCA7 coding sequence with next-generation sequencing in 928 EOAD patients and 980 matched control individuals. With MetaSKAT rare variant association analysis, we observed a fivefold enrichment (p = 0.0004) of PTC mutations in EOAD patients (3%) versus controls (0.6%). Ten novel PTC mutations were only observed in patients, and PTC mutation carriers in general had an increased familial AD load. In addition, we observed nominal risk reducing trends for three common coding variants. Seven PTC mutations were further analyzed using targeted long-read cDNA sequencing on an Oxford Nanopore MinION platform. PTC-containing transcripts for each investigated PTC mutation were observed at varying proportion (5–41% of the total read count), implying incomplete nonsense-mediated mRNA decay (NMD). Furthermore, we distinguished and phased several previously unknown alternative splicing events (up to 30% of transcripts). In conjunction with PTC mutations, several of these novel ABCA7 isoforms have the potential to rescue deleterious PTC effects. In conclusion, ABCA7 PTC mutations play a substantial role in EOAD, warranting genetic screening of ABCA7 in genetically unexplained patients. Long-read cDNA sequencing revealed both varying degrees of NMD and transcript-modifying events, which may influence ABCA7 dosage and disease severity, and may create opportunities for therapeutic interventions in AD.

    Additional information

    Supplementary material
  • Roelofs, A. (2005). The visual-auditory color-word Stroop asymmetry and its time course. Memory & Cognition, 33(8), 1325-1336.

    Abstract

    Four experiments examined crossmodal versions of the Stroop task in order (1) to look for Stroop asymmetries in color naming, spoken-word naming, and written-word naming and to evaluate the time course of these asymmetries, and (2) to compare these findings to current models of the Stroop effect. Participants named color patches while ignoring spoken color words presented with an onset varying from 300 msec before to 300 msec after the onset of the color (Experiment 1), or they named the spoken words and ignored the colors (Experiment 2). A secondary visual detection task assured that the participants looked at the colors in both tasks. Spoken color words yielded Stroop effects in color naming, but colors did not yield an effect in spoken-word naming at any stimulus onset asynchrony. This asymmetry in effects was obtained with equivalent color- and spoken-word-naming latencies. Written color words yielded a Stroop effect in naming spoken words (Experiment 3), and spoken color words yielded an effect in naming written words (Experiment 4). These results were interpreted as most consistent with an architectural account of the color-word Stroop asymmetry, in contrast with discriminability and pathway strength accounts.
  • Roelofs, A., & Shitova, N. (2017). Importance of response time in assessing the cerebral dynamics of spoken word production: Comment on Munding et al. Language, Cognition and Neuroscience, 32(8), 1064-1067. doi:10.1080/23273798.2016.1274415.
  • Rojas-Berscia, L. M., & Bourdeau, C. (2017). Optional or syntactic ergativity in Shawi? Distribution and possible origins. Linguistic discovery, 15(1), 50-65. doi:10.1349/PS1.1537-0852.A.481.

    Abstract

    In this article we provide a preliminary description and analysis of the most common ergative constructions in Shawi, a Kawapanan language spoken in Northwestern Amazonia. We offer a comparison with its sister language, Shiwilu, for which an optional ergativity-marking pattern has been claimed (Valenzuela, 2008, 2011). There is not enough evidence, however, to make exactly the same claim for Shawi. Ergativity in the language is driven by purely syntactic motivations. One of the most common constituent orders in the language where the ergative marker is obligatory is OAV. We close the article with a tentative proposal on the passive origins of OAV ergative constructions in the language, via a by-phrase-like incorporation and eventual grammaticalisation, drawing on the formal syntactic theory known as Semantic Syntax (Seuren, 1996).
  • Rommers, J., Dickson, D. S., Norton, J. J. S., Wlotko, E. W., & Federmeier, K. D. (2017). Alpha and theta band dynamics related to sentential constraint and word expectancy. Language, Cognition and Neuroscience, 32(5), 576-589. doi:10.1080/23273798.2016.1183799.

    Abstract

    Despite strong evidence for prediction during language comprehension, the underlying mechanisms, and the extent to which they are specific to language, remain unclear. Re-analysing an event-related potentials study, we examined responses in the time-frequency domain to expected and unexpected (but plausible) words in strongly and weakly constraining sentences, and found results similar to those reported in nonverbal domains. Relative to expected words, unexpected words elicited an increase in the theta band (4–7 Hz) in strongly constraining contexts, suggesting the involvement of control processes to deal with the consequences of having a prediction disconfirmed. Prior to critical word onset, strongly constraining sentences exhibited a decrease in the alpha band (8–12 Hz) relative to weakly constraining sentences, suggesting that comprehenders can take advantage of predictive sentence contexts to prepare for the input. The results suggest that the brain recruits domain-general preparation and control mechanisms when making and assessing predictions during sentence comprehension.
  • Rommers, J., Meyer, A. S., & Praamstra, P. (2017). Lateralized electrical brain activity reveals covert attention allocation during speaking. Neuropsychologia, 95, 101-110. doi:10.1016/j.neuropsychologia.2016.12.013.

    Abstract

    Speakers usually begin to speak while only part of the utterance has been planned. Earlier work has shown that speech planning processes are reflected in speakers’ eye movements as they describe visually presented objects. However, to-be-named objects can be processed to some extent before they have been fixated upon, presumably because attention can be allocated to objects covertly, without moving the eyes. The present study investigated whether EEG could track speakers’ covert attention allocation as they produced short utterances to describe pairs of objects (e.g., “dog and chair”). The processing difficulty of each object was varied by presenting it in upright orientation (easy) or in upside down orientation (difficult). Background squares flickered at different frequencies in order to elicit steady-state visual evoked potentials (SSVEPs). The N2pc component, associated with the focusing of attention on an item, was detectable not only prior to speech onset, but also during speaking. The time course of the N2pc showed that attention shifted to each object in the order of mention prior to speech onset. Furthermore, greater processing difficulty increased the time speakers spent attending to each object. This demonstrates that the N2pc can track covert attention allocation in a naming task. In addition, an effect of processing difficulty at around 200–350 ms after stimulus onset revealed early attention allocation to the second to-be-named object. The flickering backgrounds elicited SSVEPs, but SSVEP amplitude was not influenced by processing difficulty. These results help complete the picture of the coordination of visual information uptake and motor output during speaking.
  • Rose, M. L., Mok, Z., & Sekine, K. (2017). Communicative effectiveness of pantomime gesture in people with aphasia. International Journal of Language & Communication disorders, 52(2), 227-237. doi:10.1111/1460-6984.12268.

    Abstract

    Background: Human communication occurs through both verbal and visual/motoric modalities. Simultaneous conversational speech and gesture occurs across all cultures and age groups. When verbal communication is compromised, more of the communicative load can be transferred to the gesture modality. Although people with aphasia produce meaning-laden gestures, the communicative value of these has not been adequately investigated.
    Aims: To investigate the communicative effectiveness of pantomime gesture produced spontaneously by individuals with aphasia during conversational discourse.
    Methods & Procedures: Sixty-seven undergraduate students wrote down the messages conveyed by 11 people with aphasia who produced pantomime while engaged in conversational discourse. Students were presented with a speech-only, a gesture-only and a combined speech and gesture condition, and guessed messages in both a free-description and a multiple-choice task.
    Outcomes & Results: As hypothesized, listener comprehension was more accurate in the combined pantomime gesture and speech condition as compared with the gesture-only or speech-only conditions. Participants achieved greater accuracy in the multiple-choice task as compared with the free-description task, but only in the gesture-only condition. The communicative effectiveness of the pantomime gestures increased as the fluency of the participants with aphasia decreased.
    Conclusions & Implications: These results indicate that when pantomime gesture was presented with aphasic speech, the combination had strong communicative effectiveness. Future studies could investigate how pantomimes can be integrated into interventions for people with aphasia, particularly emphasizing elicitation of pantomimes in as natural a context as possible and highlighting the opportunity for efficient message repair.
  • Rougier, N. P., Hinsen, K., Alexandre, F., Arildsen, T., Barba, L. A., Benureau, F. C. Y., Brown, C. T., De Buyl, P., Caglayan, O., Davison, A. P., Delsuc, M.-A., Detorakis, G., Diem, A. K., Drix, D., Enel, P., Girard, B., Guest, O., Hall, M. G., Henriques, R. N., Hinaut, X., Jaron, K. S., Khamassi, M., Klein, A., Manninen, T., Marchesi, P., McGlinn, D., Metzner, C., Petchey, O., Plesser, H. E., Poisot, T., Ram, K., Ram, Y., Roesch, E., Rossant, C., Rostami, V., Shifman, A., Stachelek, J., Stimberg, M., Stollmeier, F., Vaggi, F., Viejo, G., Vitay, J., Vostinar, A. E., Yurchak, R., & Zito, T. (2017). Sustainable computational science. PeerJ Computer Science, 3: e142. doi:10.7717/peerj-cs.142.

    Abstract

    Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing and reproducing results; however, computational science lags behind. In the best case, authors may provide their source code as a compressed archive and they may feel confident their research is reproducible. But this is not exactly true. James Buckheit and David Donoho proposed more than two decades ago that an article about computational results is advertising, not scholarship. The actual scholarship is the full software environment, code, and data that produced the result. This implies new workflows, in particular in peer-reviews. Existing journals have been slow to adapt: source codes are rarely requested and are hardly ever actually executed to check that they produce the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from other traditional scientific journals. ReScience resides on GitHub where each new implementation of a computational study is made available together with comments, explanations, and software tests.
  • Rowland, C. F., & Monaghan, P. (2017). Developmental psycholinguistics teaches us that we need multi-method, not single-method, approaches to the study of linguistic representation. Commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e308. doi:10.1017/S0140525X17000565.

    Abstract

    In developmental psycholinguistics, we have, for many years, been generating and testing theories that propose both descriptions of adult representations and explanations of how those representations develop. We have learnt that restricting ourselves to any one methodology yields only incomplete data about the nature of linguistic representations. We argue that we need a multi-method approach to the study of representation.
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2005). The incidence of error in young children's wh-questions. Journal of Speech, Language, and Hearing Research, 48, 384-404. doi:10.1044/1092-4388(2005/027).

    Abstract

    Many current generativist theorists suggest that young children possess the grammatical principles of inversion required for question formation but make errors because they find it difficult to learn language-specific rules about how inversion applies. The present study analyzed longitudinal spontaneous sampled data from twelve 2–3-year-old English-speaking children and the intensive diary data of 1 child (age 2;7 [years;months] to 2;11) in order to test some of these theories. The results indicated significantly different rates of error use across different auxiliaries. In particular, error rates differed across 2 forms of the same auxiliary subtype (e.g., auxiliary is vs. are), and auxiliary DO and modal auxiliaries attracted significantly higher rates of errors of inversion than other auxiliaries. The authors concluded that current generativist theories might have problems explaining the patterning of errors seen in children's questions, which might be more consistent with a constructivist account of development. However, constructivists need to devise more precise predictions in order to fully explain the acquisition of questions.
  • Rubio-Fernández, P. (2017). Can we forget what we know in a false‐belief task? An investigation of the true‐belief default. Cognitive Science: a multidisciplinary journal, 41, 218-241. doi:10.1111/cogs.12331.

    Abstract

    It has been generally assumed in the Theory of Mind literature of the past 30 years that young children fail standard false-belief tasks because they attribute their own knowledge to the protagonist (what Leslie and colleagues called a “true-belief default”). Contrary to the traditional view, we have recently proposed that the children's bias is task-induced. This alternative view was supported by studies showing that 3-year-olds are able to pass a false-belief task that allows them to focus on the protagonist, without drawing their attention to the target object in the test phase. For a more accurate comparison of these two accounts, the present study tested the true-belief default with adults. Four experiments measuring eye movements and response inhibition revealed that (a) adults do not have an automatic tendency to respond to the false-belief question according to their own knowledge and (b) the true-belief response need not be inhibited in order to correctly predict the protagonist's actions. The positive results observed in the control conditions confirm the accuracy of the various measures used. I conclude that the results of this study undermine the true-belief default view and those models that posit mechanisms of response inhibition in false-belief reasoning. Alternatively, the present study with adults and recent studies with children suggest that participants' focus of attention in false-belief tasks may be key to their performance.
  • Rubio-Fernández, P. (2017). Why are bilinguals better than monolinguals at false-belief tasks? Psychonomic Bulletin & Review, 24, 987-998. doi:10.3758/s13423-016-1143-1.

    Abstract

    In standard Theory of Mind tasks, such as the Sally-Anne, children have to predict the behaviour of a mistaken character, which requires attributing a false belief to that character. Hundreds of developmental studies in the last 30 years have shown that children under 4 fail standard false-belief tasks. However, recent studies have revealed that bilingual children and adults outperform their monolingual peers in this type of task. Bilinguals’ better performance in false-belief tasks has generally been interpreted as a result of their better inhibitory control; that is, bilinguals are allegedly better than monolinguals at inhibiting the erroneous response to the false-belief question. In this review, I challenge the received view and argue instead that bilinguals’ better false-belief performance results from more effective attention management. This challenge ties in with two independent lines of research: on the one hand, recent studies on the role of attentional processes in false-belief tasks with monolingual children and adults; and on the other, current research on bilinguals’ performance in different Executive Function tasks. The review closes with an exploratory discussion of further benefits of bilingual cognition to Theory of Mind development and pragmatics, which may be independent of Executive Function.
  • Rubio-Fernández, P., Geurts, B., & Cummins, C. (2017). Is an apple like a fruit? A study on comparison and categorisation statements. Review of Philosophy and Psychology, 8, 367-390. doi:10.1007/s13164-016-0305-4.

    Abstract

    Categorisation models of metaphor interpretation are based on the premiss that categorisation statements (e.g., ‘Wilma is a nurse’) and comparison statements (e.g., ‘Betty is like a nurse’) are fundamentally different types of assertion. Against this assumption, we argue that the difference is merely a quantitative one: ‘x is a y’ unilaterally entails ‘x is like a y’, and therefore the latter is merely weaker than the former. Moreover, if ‘x is like a y’ licenses the inference that x is not a y, then that inference is a scalar implicature. We defend these claims partly on theoretical grounds and partly on the basis of experimental evidence. A suite of experiments indicates both that ‘x is a y’ unilaterally entails that x is like a y, and that in several respects the non-y inference behaves exactly as one should expect from a scalar implicature. We discuss the implications of our view of categorisation and comparison statements for categorisation models of metaphor interpretation.
  • Rubio-Fernández, P. (2017). The director task: A test of Theory-of-Mind use or selective attention? Psychonomic Bulletin & Review, 24, 1121-1128. doi:10.3758/s13423-016-1190-7.

    Abstract

    Over two decades, the director task has increasingly been employed as a test of the use of Theory of Mind in communication, first in psycholinguistics and more recently in social cognition research. A new version of this task was designed to test two independent hypotheses. First, optimal performance in the director task, as established by the standard metrics of interference, is possible by using selective attention alone, and not necessarily Theory of Mind. Second, pragmatic measures of Theory-of-Mind use can reveal that people actively represent the director’s mental states, contrary to recent claims that they only use domain-general cognitive processes to perform this task. The results of this study support both hypotheses and provide a new interactive paradigm to reliably test Theory-of-Mind use in referential communication.
  • Rubio-Fernández, P., Jara-Ettinger, J., & Gibson, E. (2017). Can processing demands explain toddlers’ performance in false-belief tasks? [Response to Setoh et al. (2016, PNAS)]. Proceedings of the National Academy of Sciences of the United States of America, 114(19): E3750. doi:10.1073/pnas.1701286114.
  • San Roque, L., Floyd, S., & Norcliffe, E. (2017). Evidentiality and interrogativity. Lingua, 186-187, 120-143. doi:10.1016/j.lingua.2014.11.003.

    Abstract

    Understanding of evidentials is incomplete without consideration of their behaviour in interrogative contexts. We discuss key formal, semantic, and pragmatic features of cross-linguistic variation concerning the use of evidential markers in interrogative clauses. Cross-linguistic data suggest that an exclusively speaker-centric view of evidentiality is not sufficient to explain the semantics of information source marking, as in many languages it is typical for evidentials in questions to represent addressee perspective. Comparison of evidentiality and the related phenomenon of egophoricity emphasises how knowledge-based linguistic systems reflect attention to the way knowledge is distributed among participants in the speech situation.
  • Sankoff, G., & Brown, P. (1976). The origins of syntax in discourse: A case study of Tok Pisin relatives. Language, 52(3), 631-666.

    Abstract

    The structure of relative clauses has attracted considerable attention in recent years, and a number of authors have carried out analyses of the syntax of relativization. In our investigation of syntactic structure and change in New Guinea Tok Pisin, we find that the basic processes involved in relativization have much broader discourse functions, and that relativization is only a special instance of the application of general ‘bracketing’ devices used in the organization of information. Syntactic structure, in this case, can be understood as a component of, and derivative from, discourse structure.
  • Sauppe, S. (2017). Symmetrical and asymmetrical voice systems and processing load: Pupillometric evidence from sentence production in Tagalog and German. Language, 93(2), 288-313. doi:10.1353/lan.2017.0015.

    Abstract

    The voice system of Tagalog has been proposed to be symmetrical in the sense that there are no morphologically unmarked voice forms. This stands in contrast to asymmetrical voice systems which exhibit unmarked and marked voices (e.g., active and passive in German). This paper investigates the psycholinguistic processing consequences of the symmetrical and asymmetrical nature of the Tagalog and German voice systems by analyzing changes in cognitive load during sentence production. Tagalog and German native speakers' pupil diameters were recorded while they produced sentences with different voice markings. Growth curve analyses of the shape of task-evoked pupillary responses revealed that processing load changes were similar for different voices in the symmetrical voice system of Tagalog. By contrast, actives and passives in the asymmetrical voice system of German exhibited different patterns of processing load changes during sentence production. This is interpreted as supporting the notion of symmetry in the Tagalog voice system. Mental effort during sentence planning changes in different ways in the two languages because the grammatical architecture of their voice systems is different. Additionally, an anti-Patient bias in sentence production was found in Tagalog: cognitive load increased at the same time and at the same rate but was maintained for a longer time when the patient argument was the subject, as compared to agent subjects. This indicates that while both voices in Tagalog afford similar planning operations, linking patients to the subject function is more effortful. This anti-Patient bias in production adds converging evidence to “subject preferences” reported in the sentence comprehension literature.
  • Sauppe, S. (2017). Word order and voice influence the timing of verb planning in German sentence production. Frontiers in Psychology, 8: 1648. doi:10.3389/fpsyg.2017.01648.

    Abstract

    Theories of incremental sentence production make different assumptions about when speakers encode information about described events and when verbs are selected, accordingly. An eye tracking experiment on German testing the predictions from linear and hierarchical incrementality about the timing of event encoding and verb planning is reported. In the experiment, participants described depictions of two-participant events with sentences that differed in voice and word order. Verb-medial active sentences and actives and passives with sentence-final verbs were compared. Linear incrementality predicts that sentences with verbs placed early differ from verb-final sentences because verbs are assumed to only be planned shortly before they are articulated. By contrast, hierarchical incrementality assumes that speakers start planning with relational encoding of the event. A weak version of hierarchical incrementality assumes that only the action is encoded at the outset of formulation and selection of lexical verbs only occurs shortly before they are articulated, leading to the prediction of different fixation patterns for verb-medial and verb-final sentences. A strong version of hierarchical incrementality predicts no differences between verb-medial and verb-final sentences because it assumes that verbs are always lexically selected early in the formulation process. Based on growth curve analyses of fixations to agent and patient characters in the described pictures, and the influence of character humanness and the lack of an influence of the visual salience of characters on speakers' choice of active or passive voice, the current results suggest that while verb planning does not necessarily occur early during formulation, speakers of German always create an event representation early.
  • Scharenborg, O., Norris, D., Ten Bosch, L., & McQueen, J. M. (2005). How should a speech recognizer work? Cognitive Science, 29(6), 867-918. doi:10.1207/s15516709cog0000_37.

    Abstract

    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR that, in contrast to current existing models of HSR, recognizes words from real speech input.
  • Schoffelen, J.-M., Hulten, A., Lam, N. H. L., Marquand, A. F., Udden, J., & Hagoort, P. (2017). Frequency-specific directed interactions in the human brain network for language. Proceedings of the National Academy of Sciences of the United States of America, 114(30), 8083-8088. doi:10.1073/pnas.1703155114.

    Abstract

    The brain’s remarkable capacity for language requires bidirectional interactions between functionally specialized brain regions. We used magnetoencephalography to investigate interregional interactions in the brain network for language while 102 participants were reading sentences. Using Granger causality analysis, we identified inferior frontal cortex and anterior temporal regions to receive widespread input and middle temporal regions to send widespread output. This fits well with the notion that these regions play a central role in language processing. Characterization of the functional topology of this network, using data-driven matrix factorization, which allowed for partitioning into a set of subnetworks, revealed directed connections at distinct frequencies of interaction. Connections originating from temporal regions peaked at alpha frequency, whereas connections originating from frontal and parietal regions peaked at beta frequency. These findings indicate that the information flow between language-relevant brain areas, which is required for linguistic processing, may depend on the contributions of distinct brain rhythms.

    Additional information

    pnas.201703155SI.pdf
  • Schoffelen, J.-M., Oostenveld, R., & Fries, P. (2005). Neuronal coherence as a mechanism of effective corticospinal interaction. Science, 308, 111-113. doi:10.1126/science.1107027.

    Abstract

    Neuronal groups can interact with each other even if they are widely separated. One group might modulate its firing rate or its internal oscillatory synchronization to influence another group. We propose that coherence between two neuronal groups is a mechanism of efficient interaction, because it renders mutual input optimally timed and thereby maximally effective. Modulations of subjects' readiness to respond in a simple reaction-time task were closely correlated with the strength of gamma-band (40 to 70 hertz) coherence between motor cortex and spinal cord neurons. This coherence may contribute to an effective corticospinal interaction and shortened reaction times.
  • Schriefers, H., & Meyer, A. S. (1990). Experimental note: Cross-modal, visual-auditory picture-word interference. Bulletin of the Psychonomic Society, 28, 418-420.
  • Schriefers, H., Meyer, A. S., & Levelt, W. J. M. (1990). Exploring the time course of lexical access in language production: Picture-word interference studies. Journal of Memory and Language, 29(1), 86-102. doi:10.1016/0749-596X(90)90011-N.

    Abstract

    According to certain theories of language production, lexical access to a content word consists of two independent and serially ordered stages. In the first, semantically driven stage, so-called lemmas are retrieved, i.e., lexical items that are specified with respect to syntactic and semantic properties, but not with respect to phonological characteristics. In the second stage, the corresponding wordforms, the so-called lexemes, are retrieved. This implies that the access to a content word involves an early stage of exclusively semantic activation and a later stage of exclusively phonological activation. This seriality assumption was tested experimentally, using a picture-word interference paradigm in which the interfering words were presented auditorily. The results show an interference effect of semantically related words on picture naming latencies at an early SOA (− 150 ms), and a facilitatory effect of phonologically related words at later SOAs (0 ms, + 150 ms). On the basis of these results it can be concluded that there is indeed a stage of lexical access to a content word where only its meaning is activated, followed by a stage where only its form is activated. These findings can be seen as empirical support for a two-stage model of lexical access, or, alternatively, as putting constraints on the parameters in a network model of lexical access, such as the model proposed by Dell and Reich.
  • Schuerman, W. L., Meyer, A. S., & McQueen, J. M. (2017). Mapping the speech code: Cortical responses linking the perception and production of vowels. Frontiers in Human Neuroscience, 11: 161. doi:10.3389/fnhum.2017.00161.

    Abstract

    The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded. In a subsequent speech production task, half the participants received shifted auditory feedback, leading most to alter their articulations. This was followed by a second, post-training vowel categorization task. We compared changes in vowel production to both behavioral and electrophysiological changes in vowel perception. No differences in phonetic categorization were observed between groups receiving altered or unaltered feedback. However, exploratory analyses revealed correlations between vocal motor behavior and phonetic categorization. EEG analyses revealed correlations between vocal motor behavior and cortical responses in both early and late time windows. These results suggest that participants' recent production behavior influenced subsequent vowel perception. We suggest that the change in perception can be best characterized as a mapping of acoustics onto articulation.
  • Schuerman, W. L., Nagarajan, S., McQueen, J. M., & Houde, J. (2017). Sensorimotor adaptation affects perceptual compensation for coarticulation. The Journal of the Acoustical Society of America, 141(4), 2693-2704. doi:10.1121/1.4979791.

    Abstract

    A given speech sound will be realized differently depending on the context in which it is produced. Listeners have been found to compensate perceptually for these coarticulatory effects, yet it is unclear to what extent this effect depends on actual production experience. In this study, whether changes in motor-to-sound mappings induced by adaptation to altered auditory feedback can affect perceptual compensation for coarticulation is investigated. Specifically, whether altering how the vowel [i] is produced can affect the categorization of a stimulus continuum between an alveolar and a palatal fricative whose interpretation is dependent on vocalic context is tested. It was found that participants could be sorted into three groups based on whether they tended to oppose the direction of the shifted auditory feedback, to follow it, or a mixture of the two, and that these articulatory responses, not the shifted feedback the participants heard, correlated with changes in perception. These results indicate that sensorimotor adaptation to altered feedback can affect the perception of unaltered yet coarticulatorily-dependent speech sounds, suggesting a modulatory role of sensorimotor experience on speech perception.
  • Sekine, K., & Kita, S. (2017). The listener automatically uses spatial story representations from the speaker's cohesive gestures when processing subsequent sentences without gestures. Acta Psychologica, 179, 89-95. doi:10.1016/j.actpsy.2017.07.009.

    Abstract

    This study examined spatial story representations created by speaker's cohesive gestures. Participants were presented with three-sentence discourse with two protagonists. In the first and second sentences, gestures consistently located the two protagonists in the gesture space: one to the right and the other to the left. The third sentence (without gestures) referred to one of the protagonists, and the participants responded with one of the two keys to indicate the relevant protagonist. The response keys were either spatially congruent or incongruent with the gesturally established locations for the two participants. Though the cohesive gestures did not provide any clue for the correct response, they influenced performance: the reaction time in the congruent condition was faster than that in the incongruent condition. Thus, cohesive gestures automatically establish spatial story representations and the spatial story representations remain activated in a subsequent sentence without any gesture.
  • Senft, G. (2017). Absolute frames of spatial reference in Austronesian languages. Russian Journal of Linguistics, 21, 686-705. doi:10.22363/2312-9182-2017-21-4-686-705.

    Abstract

    This paper provides a brief survey on various absolute frames of spatial reference that can be observed in a number of Austronesian languages – with an emphasis on languages of the Oceanic subgroup. It is based on research of conceptions of space and systems of spatial reference that was initiated by the “space project” of the Cognitive Anthropology Research Group (now the Department of Language and Cognition) at the Max Planck Institute for Psycholinguistics and by my anthology “Referring to Space” (Senft 1997a; see Keller 2002: 250). The examples illustrating these different absolute frames of spatial reference reveal once more that earlier generalizations within the domain of “SPACE” were strongly biased by research on Indo-European languages; they also reveal how complex some of these absolute frames of spatial reference found in these languages are. The paper ends with a summary of Wegener’s (2002) preliminary typology of these absolute frames of spatial reference.
  • Senft, G. (2017). Acquiring Kilivila Pragmatics - the Role of the Children's (Play-)Groups in the first 7 Years of their Lives on the Trobriand Islands in Papua New Guinea. Studies in Pragmatics, 19, 40-53.

    Abstract

    Trobriand children are breastfed until they can walk; then they are abruptly weaned and the parents dramatically reduce the pervasive loving care that their children experienced before. The children have to find a place within the children’s groups in their villages. They learn to behave according to their community’s rules and regulations which find their expression in forms of verbal and non-verbal behavior. They acquire their culture specific pragmatics under the control of older members of their groups. The children's “small republic” is the primary institution of verbal and cultural socialization. Attempts of parental education are confined to a minimum.
  • Senft, G. (1991). [Review of the book Einführung in die deskriptive Linguistik by Michael Dürr and Peter Schlobinski]. Linguistics, 29, 722-725.
  • Senft, G. (1990). [Review of the book Intergrammar by H. Arndt, & R.W. Janney]. System, 18(1), 112-114. doi:10.1016/0346-251X(90)90036-5.
  • Senft, G. (1990). [Review of the book Noun classes and categorization ed. by Colette Craig]. Acta Linguistica Hafniensia, 22, 173-180.
  • Senft, G. (1991). [Review of the book The sign languages of Aboriginal Australia by Adam Kendon]. Journal of Pragmatics, 15, 400-405. doi:10.1016/0378-2166(91)90040-5.
  • Senft, G. (2005). [Review of the book Malinowski: Odyssey of an anthropologist 1884-1920 by Michael Young]. Oceania, 75(3), 302-302.
  • Senft, G. (2005). [Review of the book The art of Kula by Shirley F. Campbell]. Anthropos, 100, 247-249.
  • Senft, G. (1991). Network models to describe the Kilivila classifier system. Oceanic Linguistics, 30, 131-155. Retrieved from http://www.jstor.org/stable/3623085.
  • Senft, G. (1990). Yoreshiawes Klagelied anläßlich des Todes seiner kleinen Tochter. Forschungsstelle für Humanethologie in der MPG. Berichte und Mitteilungen; 1/90, 23-24.
  • Senghas, A., Ozyurek, A., & Kita, S. (2005). [Response to comment on Children creating core properties of language: Evidence from an emerging sign language in Nicaragua]. Science, 309(5731), 56c-56c. doi:10.1126/science.1110901.
  • Seuren, P. A. M. (1990). Burton-Roberts on presupposition and negation. Journal of Linguistics, 26(2), 425-453. doi:10.1017/S0022226700014730.

    Abstract

    In his paper ‘On Horn's dilemma: presupposition and negation’ Burton-Roberts (1989a) presents an ambitious programme, formulated right at the outset. He seeks to establish three points: (i) Under the ‘standard logical definition of presupposition’ a presuppositional semantics is INCOMPATIBLE with a SEMANTICALLY AMBIGUOUS NEGATION operator (SAN), on pain of the theory being rendered ‘empirically empty and theoretically trivial’. (ii) From this it follows that the one unambiguous negation is presupposition preserving. Cases that have been identified as presupposition-cancelling negation should be re-analysed as ‘instances of a pragmatic phenomenon’, not unlike what has been proposed in Horn (1985), that is as METALINGUISTIC NEGATION (MN). (iii) This pragmatic analysis of MN ‘itself implies a presuppositional semantics’, that is to say ‘a presuppositional theory of truth-value gaps’.
  • Seuren, P. A. M. (1982). De spellingsproblematiek in Suriname: Een inleiding. OSO, 1(1), 71-79.
  • Seuren, P. A. M. (1976). Clitic pronoun clusters. Italian Linguistics, 2, 7-35.
  • Seuren, P. A. M. (1990). [Review of the book A life for language: A biographical memoir of Leonard Bloomfield by Robert A. Hall]. Linguistics, 29(4), 753-757. doi:10.1515/ling.1991.29.4.719.
  • Seuren, P. A. M. (1990). [Review of the book The limits to debate: A revised theory of presupposition by N. Burton-Roberts]. Linguistics, 28(3), 503-516. doi:10.1515/ling.1990.28.3.503.
  • Seuren, P. A. M. (2005). Eubulides as a 20th-century semanticist. Language Sciences, 27(1), 75-95. doi:10.1016/j.langsci.2003.12.001.

    Abstract

    It is the purpose of the present paper to highlight the figure of Eubulides, a relatively unknown Greek philosopher who lived ±405–330 BC and taught at Megara, not far from Athens. He is mainly known for his four paradoxes (the Liar, the Sorites, the Electra, and the Horns), and for the mutual animosity between him and his younger contemporary Aristotle. The Megarian school of philosophy was one of the main sources of the great Stoic tradition in ancient philosophy. What has never been made explicit in the literature is the importance of the four paradoxes for the study of meaning in natural language: they summarize the whole research programme of 20th century formal or formally oriented semantics, including the problems of vague predicates (Sorites), intensional contexts (Electra), and presuppositions (Horns). One might say that modern formal or formally oriented semantics is essentially an attempt at finding linguistically tenable answers to problems arising in the context of Aristotelian thought. It is a surprising and highly significant fact that a contemporary of Aristotle already spotted the main weaknesses of the Aristotelian paradigm.
  • Seuren, P. A. M. (1991). Grammatika als algorithme: Rekenen met taal. Koninklijke Nederlandse Akademie van Wetenschappen. Mededelingen van de Afdeling Letterkunde, Nieuwe Reeks, 54(2), 25-63.
  • Seuren, P. A. M. (1982). Internal variability in competence. Linguistische Berichte, 77, 1-31.
  • Seuren, P. A. M., & Mufwene, S. S. (1990). Introduction. Linguistics, 28(4), 641-643. doi:10.1515/ling.1990.28.4.641.
  • Seuren, P. A. M., & Mufwene, S. S. (Eds.). (1990). Issues in Creole linguistics [Special Issue]. Linguistics, 28(4).
  • Seuren, P. A. M. (1990). Still no serials in Seselwa: A Reply to "Seselwa Serialization and its Significance" by Derek Bickerton. Journal of Pidgin and Creole Languages, 5(2), 271-292.
  • Seuren, P. A. M. (1990). Verb syncopation and predicate raising in Mauritian Creole. Linguistics, 28(4), 809-844. doi:10.1515/ling.1990.28.4.809.
  • Shapiro, K. A., Mottaghy, F. M., Schiller, N. O., Poeppel, T. D., Flüss, M. O., Müller, H. W., Caramazza, A., & Krause, B. J. (2005). Dissociating neural correlates for nouns and verbs. NeuroImage, 24(4), 1058-1067. doi:10.1016/j.neuroimage.2004.10.015.

    Abstract

    Dissociations in the ability to produce words of different grammatical categories are well established in neuropsychology but have not been corroborated fully with evidence from brain imaging. Here we report on a PET study designed to reveal the anatomical correlates of grammatical processes involving nouns and verbs. German-speaking subjects were asked to produce either plural and singular nouns, or first-person plural and singular verbs. Verbs, relative to nouns, activated a left frontal cortical network, while the opposite contrast (nouns–verbs) showed greater activation in temporal regions bilaterally. Similar patterns emerged when subjects performed the task with pseudowords used as nouns or as verbs. These results converge with findings from lesion studies and suggest that grammatical category is an important dimension of organization for knowledge of language in the brain.
  • Sharp, D. J., Scott, S. K., Cutler, A., & Wise, R. J. S. (2005). Lexical retrieval constrained by sound structure: The role of the left inferior frontal gyrus. Brain and Language, 92(3), 309-319. doi:10.1016/j.bandl.2004.07.002.

    Abstract

    Positron emission tomography was used to investigate two competing hypotheses about the role of the left inferior frontal gyrus (IFG) in word generation. One proposes a domain-specific organization, with neural activation dependent on the type of information being processed, i.e., surface sound structure or semantic. The other proposes a process-specific organization, with activation dependent on processing demands, such as the amount of selection needed to decide between competing lexical alternatives. In a novel word retrieval task, word reconstruction (WR), subjects generated real words from heard non-words by the substitution of either a vowel or consonant. Both types of lexical retrieval, informed by sound structure alone, produced activation within anterior and posterior left IFG regions. Within these regions there was greater activity for consonant WR, which is more difficult and imposes greater processing demands. These results support a process-specific organization of the anterior left IFG.
  • Shitova, N., Roelofs, A., Schriefers, H., Bastiaansen, M., & Schoffelen, J.-M. (2017). Control adjustments in speaking: Electrophysiology of the Gratton effect in picture naming. Cortex, 92, 289-303. doi:10.1016/j.cortex.2017.04.017.

    Abstract

    Accumulating evidence suggests that spoken word production requires different amounts of top-down control depending on the prevailing circumstances. For example, during Stroop-like tasks, the interference in response time (RT) is typically larger following congruent trials than following incongruent trials. This effect is called the Gratton effect, and has been taken to reflect top-down control adjustments based on the previous trial type. Such control adjustments have been studied extensively in Stroop and Eriksen flanker tasks (mostly using manual responses), but not in the picture-word interference (PWI) task, which is a workhorse of language production research. In one of the few studies of the Gratton effect in PWI, Van Maanen and Van Rijn (2010) examined the effect in picture naming RTs during dual-task performance. Based on PWI effect differences between dual-task conditions, they argued that the functional locus of the PWI effect differs between post-congruent trials (i.e., locus in perceptual and conceptual encoding) and post-incongruent trials (i.e., locus in word planning). However, the dual-task procedure may have contaminated the results. We therefore performed an EEG study on the Gratton effect in a regular PWI task. We observed a PWI effect in the RTs, in the N400 component of the event-related brain potentials, and in the midfrontal theta power, regardless of the previous trial type. Moreover, the RTs, N400, and theta power reflected the Gratton effect. These results provide evidence that the PWI effect arises at the word planning stage following both congruent and incongruent trials, while the amount of top-down control changes depending on the previous trial type.
  • Shitova, N., Roelofs, A., Coughler, C., & Schriefers, H. (2017). P3 event-related brain potential reflects allocation and use of central processing capacity in language production. Neuropsychologia, 106, 138-145. doi:10.1016/j.neuropsychologia.2017.09.024.

    Abstract

    Allocation and use of central processing capacity have been associated with the P3 event-related brain potential amplitude in a large variety of non-linguistic tasks. However, little is known about the P3 in spoken language production. Moreover, the few studies that are available report opposing P3 effects when task complexity is manipulated. We investigated allocation and use of central processing capacity in a spoken phrase production task: Participants switched every second trial between describing pictures using noun phrases with one adjective (size only; simple condition, e.g., “the big desk”) or two adjectives (size and color; complex condition, e.g., “the big red desk”). Capacity allocation was manipulated by complexity, and capacity use by switching. Response time (RT) was longer for complex than for simple trials. Moreover, complexity and switching interacted: RTs were longer on switch than on repeat trials for simple phrases but shorter on switch than on repeat trials for complex phrases. P3 amplitude increased with complexity. Moreover, complexity and switching interacted: The complexity effect was larger on the switch trials than on the repeat trials. These results provide evidence that the allocation and use of central processing capacity in language production are differentially reflected in the P3 amplitude.
  • Sidnell, J., & Stivers, T. (Eds.). (2005). Multimodal Interaction [Special Issue]. Semiotica, 156.
  • Silva, S., Inácio, F., Folia, V., & Petersson, K. M. (2017). Eye movements in implicit artificial grammar learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(9), 1387-1402. doi:10.1037/xlm0000350.

    Abstract

    Artificial grammar learning (AGL) has been probed with forced-choice behavioral tests (active tests). Recent attempts to probe the outcomes of learning (implicitly acquired knowledge) with eye-movement responses (passive tests) have shown null results. However, these latter studies have not tested for sensitivity effects, for example, increased eye movements on a printed violation. In this study, we tested for sensitivity effects in AGL tests with (Experiment 1) and without (Experiment 2) concurrent active tests (preference- and grammaticality classification) in an eye-tracking experiment. Eye movements discriminated between sequence types in passive tests and more so in active tests. The eye-movement profile did not differ between preference and grammaticality classification, and it resembled sensitivity effects commonly observed in natural syntax processing. Our findings show that the outcomes of implicit structured sequence learning can be characterized in eye tracking. More specifically, whole trial measures (dwell time, number of fixations) showed robust AGL effects, whereas first-pass measures (first-fixation duration) did not. Furthermore, our findings strengthen the link between artificial and natural syntax processing, and they shed light on the factors that determine performance differences in preference and grammaticality classification tests.
  • Silva, S., Petersson, K. M., & Castro, S. L. (2017). The effects of ordinal load on incidental temporal learning. Quarterly Journal of Experimental Psychology, 70(4), 664-674. doi:10.1080/17470218.2016.1146909.

    Abstract

    How can we grasp the temporal structure of events? A few studies have indicated that representations of temporal structure are acquired when there is an intention to learn, but not when learning is incidental. Response-to-stimulus intervals, uncorrelated temporal structures, unpredictable ordinal information, and lack of metrical organization have been pointed out as key obstacles to incidental temporal learning, but the literature includes piecemeal demonstrations of learning under all these circumstances. We suggest that the unacknowledged effects of ordinal load may help reconcile these conflicting findings, ordinal load referring to the cost of identifying the sequence of events (e.g., tones, locations) where a temporal pattern is embedded. In a first experiment, we manipulated ordinal load into simple and complex levels. Participants learned ordinal-simple sequences, despite their uncorrelated temporal structure and lack of metrical organization. They did not learn ordinal-complex sequences, even though there were no response-to-stimulus intervals nor unpredictable ordinal information. In a second experiment, we probed learning of ordinal-complex sequences with strong metrical organization, and again there was no learning. We conclude that ordinal load is a key obstacle to incidental temporal learning. Further analyses showed that the effect of ordinal load is to mask the expression of temporal knowledge, rather than to prevent learning.
  • Silva, S., Folia, V., Hagoort, P., & Petersson, K. M. (2017). The P600 in Implicit Artificial Grammar Learning. Cognitive Science, 41(1), 137-157. doi:10.1111/cogs.12343.

    Abstract

    The suitability of the Artificial Grammar Learning (AGL) paradigm to capture relevant aspects of the acquisition of linguistic structures has been empirically tested in a number of EEG studies. Some have shown a syntax-related P600 component, but it has not been ruled out that the AGL P600 effect is a response to surface features (e.g., subsequence familiarity) rather than the underlying syntax structure. Therefore, in this study, we controlled for the surface characteristics of the test sequences (associative chunk strength) and recorded the EEG before (baseline preference classification) and after (preference and grammaticality classification) exposure to a grammar. A typical, centroparietal P600 effect was elicited by grammatical violations after exposure, suggesting that the AGL P600 effect signals a response to structural irregularities. Moreover, preference and grammaticality classification showed a qualitatively similar ERP profile, strengthening the idea that the implicit structural mere exposure paradigm in combination with preference classification is a suitable alternative to the traditional grammaticality classification test.
  • Simon, E., & Sjerps, M. J. (2017). Phonological category quality in the mental lexicon of child and adult learners. International Journal of Bilingualism, 21(4), 474-499. doi:10.1177/1367006915626589.

    Abstract

    Aims and objectives: The aim was to identify which criteria children use to decide on the category membership of native and non-native vowels, and to get insight into the organization of phonological representations in the bilingual mind. Methodology: The study consisted of two cross-language mispronunciation detection tasks in which L2 vowels were inserted into L1 words and vice versa. In Experiment 1, 10- to 12-year-old Dutch-speaking children were presented with Dutch words which were either pronounced with the target Dutch vowel or with an English vowel inserted in the Dutch consonantal frame. Experiment 2 was a mirror of the first, with English words which were pronounced “correctly” or which were “mispronounced” with a Dutch vowel. Data and analysis: Analyses focused on the extent to which child and adult listeners accepted substitutions of Dutch vowels by English ones, and vice versa. Findings: The results of Experiment 1 revealed that between the ages of ten and twelve children have well-established phonological vowel categories in their native language. However, Experiment 2 showed that in their non-native language, children tended to accept mispronounced items which involve sounds from their native language. At the same time, though, they did not fully rely on their native phonemic inventory because the children accepted most of the correctly pronounced English items. Originality: While many studies have examined native and non-native perception by infants and adults, studies on first and second language perception of school-age children are rare. This study adds to the body of literature aimed at expanding our knowledge in this area. Implications: The study has implications for models of the organization of the bilingual mind: while proficient adult non-native listeners generally have clearly separated sets of phonological representations for their two languages, for non-proficient child learners the L1 phonology still exerts a strong influence on the L2 phonology.
  • Skeide, M. A., Kumar, U., Mishra, R. K., Tripathi, V. N., Guleria, A., Singh, J. P., Eisner, F., & Huettig, F. (2017). Learning to read alters cortico-subcortical crosstalk in the visual system of illiterates. Science Advances, 3(5): e1602612. doi:10.1126/sciadv.1602612.

    Abstract

    Learning to read is known to result in a reorganization of the developing cerebral cortex. In this longitudinal resting-state functional magnetic resonance imaging study in illiterate adults we show that only 6 months of literacy training can lead to neuroplastic changes in the mature brain. We observed that literacy-induced neuroplasticity is not confined to the cortex but increases the functional connectivity between the occipital lobe and subcortical areas in the midbrain and the thalamus. Individual rates of connectivity increase were significantly related to the individual decoding skill gains. These findings crucially complement current neurobiological concepts of normal and impaired literacy acquisition.