Publications

  • Payne, B. R., Grison, S., Gao, X., Christianson, K., Morrow, D. G., & Stine-Morrow, E. A. L. (2014). Aging and individual differences in binding during sentence understanding: Evidence from temporary and global syntactic attachment ambiguities. Cognition, 130(2), 157-173. doi:10.1016/j.cognition.2013.10.005.

    Abstract

    We report an investigation of aging and individual differences in binding information during sentence understanding. An age-continuous sample of adults (N=91), ranging from 18 to 81 years of age, read sentences in which a relative clause could be attached high to a head noun NP1, attached low to its modifying prepositional phrase NP2 (e.g., The son of the princess who scratched himself/herself in public was humiliated), or in which the attachment site of the relative clause was ultimately indeterminate (e.g., The maid of the princess who scratched herself in public was humiliated). Word-by-word reading times and comprehension (e.g., who scratched?) were measured. A series of mixed-effects models were fit to the data, revealing: (1) that, on average, NP1-attached sentences were harder to process and comprehend than NP2-attached sentences; (2) that these average effects were independently moderated by verbal working memory capacity and reading experience, with effects that were most pronounced in the oldest participants; and (3) that readers on average did not allocate extra time to resolve global ambiguities, though older adults with higher working memory span did. Findings are discussed in relation to current models of lifespan cognitive development, working memory, language experience, and the role of prosodic segmentation strategies in reading. Collectively, these data suggest that aging brings differences in sentence understanding, and these differences may depend on independent influences of verbal working memory capacity and reading experience.

  • Peeters, D., Runnqvist, E., Bertrand, D., & Grainger, J. (2014). Asymmetrical switch costs in bilingual language production induced by reading words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(1), 284-292. doi:10.1037/a0034060.

    Abstract

    We examined language-switching effects in French–English bilinguals using a paradigm where pictures are always named in the same language (either French or English) within a block of trials, and on each trial, the picture is preceded by a printed word from the same language or from the other language. Participants had to either make a language decision on the word or categorize it as an animal name or not. Picture-naming latencies in French (Language 1 [L1]) were slower when pictures were preceded by an English word than by a French word, independently of the task performed on the word. There were no language-switching effects when pictures were named in English (L2). This pattern replicates asymmetrical switch costs found with the cued picture-naming paradigm and shows that the asymmetrical pattern can be obtained (a) in the absence of artificial (nonlinguistic) language cues, (b) when the switch involves a shift from comprehension in 1 language to production in another, and (c) when the naming language is blocked (univalent response). We concluded that language switch costs in bilinguals cannot be reduced to effects driven by task control or response-selection mechanisms.
  • Peeters, D., & Dresler, M. (2014). The scientific significance of sleep-talking. Frontiers for Young Minds, 2(9). Retrieved from http://kids.frontiersin.org/articles/24/the_scientific_significance_of_sleep_talking/.

    Abstract

    Did one of your parents, siblings, or friends ever tell you that you were talking in your sleep? Nothing to be ashamed of! A recent study found that more than half of all people have had the experience of speaking out loud while asleep [1]. This may even be an underestimate, because people often do not notice that they are sleep-talking unless somebody wakes them up or tells them the next day. Most neuroscientists, linguists, and psychologists studying language are interested in our language production and language comprehension skills during the day. In the present article, we will explore what is known about the production of overt speech during the night. We suggest that the study of sleep-talking may be just as interesting and informative as the study of wakeful speech.
  • Perlman, M., & Cain, A. A. (2014). Iconicity in vocalization, comparisons with gesture, and implications for theories on the evolution of language. Gesture, 14(3), 320-350. doi:10.1075/gest.14.3.03per.

    Abstract

    Scholars have often reasoned that vocalizations are extremely limited in their potential for iconic expression, especially in comparison to manual gestures (e.g., Armstrong & Wilcox, 2007; Tomasello, 2008). As evidence for an alternative view, we first review the growing body of research related to iconicity in vocalizations, including experimental work on sound symbolism, cross-linguistic studies documenting iconicity in the grammars and lexicons of languages, and experimental studies that examine iconicity in the production of speech and vocalizations. We then report an experiment in which participants created vocalizations to communicate 60 different meanings, including 30 antonymic pairs. The vocalizations were measured along several acoustic properties, and these properties were compared between antonyms. Participants were highly consistent in the kinds of sounds they produced for the majority of meanings, supporting the hypothesis that vocalization has considerable potential for iconicity. In light of these findings, we present a comparison between vocalization and manual gesture, and examine the detailed ways in which each modality can function in the iconic expression of particular kinds of meanings. We further discuss the role of iconic vocalizations and gesture in the evolution of language since our divergence from the great apes. In conclusion, we suggest that human communication is best understood as an ensemble of kinesis and vocalization, not just speech, in which expression in both modalities spans the range from arbitrary to iconic.
  • Petersson, K. M., Sandblom, J., Elfgren, C., & Ingvar, M. (2003). Instruction-specific brain activations during episodic encoding: A generalized level of processing effect. Neuroimage, 20, 1795-1810. doi:10.1016/S1053-8119(03)00414-2.

    Abstract

    In a within-subject design we investigated the levels-of-processing (LOP) effect using visual material in a behavioral and a corresponding PET study. In the behavioral study we characterize a generalized LOP effect, using pleasantness and graphical quality judgments in the encoding situation, with two types of visual material, figurative and nonfigurative line drawings. In the PET study we investigate the related pattern of brain activations along these two dimensions. The behavioral results indicate that instruction and material contribute independently to the level of recognition performance. Therefore the LOP effect appears to stem both from the relative relevance of the stimuli (encoding opportunity) and an altered processing of stimuli brought about by the explicit instruction (encoding mode). In the PET study, encoding of visual material under the pleasantness (deep) instruction yielded left lateralized frontoparietal and anterior temporal activations while surface-based perceptually oriented processing (shallow instruction) yielded right lateralized frontoparietal, posterior temporal, and occipitotemporal activations. The result that deep encoding was related to the left prefrontal cortex while shallow encoding was related to the right prefrontal cortex, holding the material constant, is not consistent with the HERA model. In addition, we suggest that the anterior medial superior frontal region is related to aspects of self-referential semantic processing and that the inferior parts of the anterior cingulate as well as the medial orbitofrontal cortex are related to affective processing, in this case pleasantness evaluation of the stimuli regardless of explicit semantic content. Finally, the left medial temporal lobe appears more actively engaged by elaborate meaning-based processing and the complex response pattern observed in different subregions of the MTL lends support to the suggestion that this region is functionally segregated.
  • Petersson, K. M., Reis, A., Askelöf, S., Castro-Caldas, A., & Ingvar, M. (2000). Language processing modulated by literacy: A network analysis of verbal repetition in literate and illiterate subjects. Journal of Cognitive Neuroscience, 12(3), 364-382. doi:10.1162/089892900562147.
  • Petrovic, P., Petersson, K. M., Ghatan, P., Stone-Elander, S., & Ingvar, M. (2000). Pain related cerebral activation is altered by a distracting cognitive task. Pain, 85, 19-30.

    Abstract

    It has previously been suggested that the activity in sensory regions of the brain can be modulated by attentional mechanisms during parallel cognitive processing. To investigate whether such attention-related modulations are present in the processing of pain, the regional cerebral blood flow was measured using [15O]butanol and positron emission tomography in conditions involving both pain and parallel cognitive demands. The painful stimulus consisted of the standard cold pressor test and the cognitive task was a computerised perceptual maze test. The activations during the maze test reproduced findings in previous studies of the same cognitive task. The cold pressor test evoked significant activity in the contralateral S1, and bilaterally in the somatosensory association areas (including S2), the ACC and the mid-insula. The activity in the somatosensory association areas and periaqueductal gray/midbrain was significantly modified, i.e. relatively decreased, when the subjects were also performing the maze task. The altered activity was accompanied by significantly lower ratings of pain during the cognitive task. In contrast, lateral orbitofrontal regions showed a relative increase of activity during pain combined with the maze task as compared to pain alone, which suggests the possibility of the involvement of frontal cortex in the modulation of regions processing pain.
  • Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2014). Distinct patterns of brain activity characterise lexical activation and competition in spoken word production. PLoS One, 9(2): e88674. doi:10.1371/journal.pone.0088674.

    Abstract

    According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography study in which the activation of competing words was manipulated by presenting pictures (e.g., dog) with distractor words. The distractor and picture name were semantically related (cat), unrelated (pin), or identical (dog). Related distractors are stronger competitors to the picture name because they receive additional activation from the picture relative to other distractors. Picture naming times were longer with related than unrelated and identical distractors. Phase-locked and non-phase-locked activity were distinct but temporally related. Phase-locked activity in left temporal cortex, peaking at 400 ms, was larger on unrelated than related and identical trials, suggesting differential activation of alternative words by the picture-word stimuli. Non-phase-locked activity between roughly 350–650 ms (4–10 Hz) in left superior frontal gyrus was larger on related than unrelated and identical trials, suggesting differential resolution of the competition among the alternatives, as reflected in the naming times. These findings characterise distinct patterns of activity associated with lexical activation and competition, supporting the theory that words are selected by competition.
  • Piai, V., Roelofs, A., & Schriefers, H. (2014). Locus of semantic interference in picture naming: Evidence from dual-task performance. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(1), 147-165. doi:10.1037/a0033745.

    Abstract

    Disagreement exists regarding the functional locus of semantic interference of distractor words in picture naming. This effect is a cornerstone of modern psycholinguistic models of word production, which assume that it arises in lexical response-selection. However, recent evidence from studies of dual-task performance suggests a locus in perceptual or conceptual processing, prior to lexical response-selection. In these studies, participants manually responded to a tone and named a picture while ignoring a written distractor word. The stimulus onset asynchrony (SOA) between tone and picture–word stimulus was manipulated. Semantic interference in naming latencies was present at long tone pre-exposure SOAs, but reduced or absent at short SOAs. Under the prevailing structural or strategic response-selection bottleneck and central capacity sharing models of dual-task performance, the underadditivity of the effects of SOA and stimulus type suggests that semantic interference emerges before lexical response-selection. However, in more recent studies, additive effects of SOA and stimulus type were obtained. Here, we examined the discrepancy in results between these studies in 6 experiments in which we systematically manipulated various dimensions on which these earlier studies differed, including tasks, materials, stimulus types, and SOAs. In all our experiments, additive effects of SOA and stimulus type on naming latencies were obtained. These results strongly suggest that the semantic interference effect arises after perceptual and conceptual processing, during lexical response-selection or later. We discuss several theoretical alternatives with respect to their potential to account for the discrepancy between the present results and other studies showing underadditivity.
  • Piai, V., Roelofs, A., & Maris, E. (2014). Oscillatory brain responses in spoken word production reflect lexical frequency and sentential constraint. Neuropsychologia, 53, 146-156. doi:10.1016/j.neuropsychologia.2013.11.014.

    Abstract

    Two fundamental factors affecting the speed of spoken word production are lexical frequency and sentential constraint, but little is known about their timing and electrophysiological basis. In the present study, we investigated event-related potentials (ERPs) and oscillatory brain responses induced by these factors, using a task in which participants named pictures after reading sentences. Sentence contexts were either constraining or nonconstraining towards the final word, which was presented as a picture. Picture names varied in their frequency of occurrence in the language. Naming latencies and electrophysiological responses were examined as a function of context and lexical frequency. Lexical frequency is an index of our cumulative learning experience with words, so lexical-frequency effects most likely reflect access to memory representations for words. Pictures were named faster with constraining than nonconstraining contexts. Associated with this effect, starting around 400 ms pre-picture presentation, oscillatory power between 8 and 30 Hz was lower for constraining relative to nonconstraining contexts. Furthermore, pictures were named faster with high-frequency than low-frequency names, but only for nonconstraining contexts, suggesting differential ease of memory access as a function of sentential context. Associated with the lexical-frequency effect, starting around 500 ms pre-picture presentation, oscillatory power between 4 and 10 Hz was higher for high-frequency than for low-frequency names, but only for constraining contexts. Our results characterise electrophysiological responses associated with lexical frequency and sentential constraint in spoken word production, and point to new avenues for studying these fundamental factors in language production.
  • Pinget, A.-F., Bosker, H. R., Quené, H., & de Jong, N. H. (2014). Native speakers' perceptions of fluency and accent in L2 speech. Language Testing, 31, 349-365. doi:10.1177/0265532214526177.

    Abstract

    Oral fluency and foreign accent distinguish L2 from L1 speech production. In language testing practices, both fluency and accent are usually assessed by raters. This study investigates what exactly native raters of fluency and accent take into account when judging L2. Our aim is to explore the relationship between objectively measured temporal, segmental and suprasegmental properties of speech on the one hand, and fluency and accent as rated by native raters on the other hand. For 90 speech fragments from Turkish and English L2 learners of Dutch, several acoustic measures of fluency and accent were calculated. In Experiment 1, 20 native speakers of Dutch rated the L2 Dutch samples on fluency. In Experiment 2, 20 different untrained native speakers of Dutch judged the L2 Dutch samples on accentedness. Regression analyses revealed that acoustic measures of fluency were good predictors of fluency ratings. Secondly, segmental and suprasegmental measures of accent could predict some variance of accent ratings. Thirdly, perceived fluency and perceived accent were only weakly related. In conclusion, this study shows that fluency and perceived foreign accent can be judged as separate constructs.
  • Pippucci, T., Magi, A., Gialluisi, A., & Romeo, G. (2014). Detection of runs of homozygosity from whole exome sequencing data: State of the art and perspectives for clinical, population and epidemiological studies. Human Heredity, 77, 63-72. doi:10.1159/000362412.

    Abstract

    Runs of homozygosity (ROH) are sizeable stretches of homozygous genotypes at consecutive polymorphic DNA marker positions, traditionally captured by means of genome-wide single nucleotide polymorphism (SNP) genotyping. With the advent of next-generation sequencing (NGS) technologies, a number of methods initially devised for the analysis of SNP array data (those based on sliding-window algorithms such as PLINK or GERMLINE and graphical tools like HomozygosityMapper) or specifically conceived for NGS data have been adopted for the detection of ROH from whole exome sequencing (WES) data. In the latter group, algorithms for both graphical representation (AgileVariantMapper, HomSI) and computational detection (H3M2) of WES-derived ROH have been proposed. Here we examine these different approaches and discuss available strategies to implement ROH detection in WES analysis. Among sliding-window algorithms, PLINK appears to be well-suited for the detection of ROH, especially of the long ones. As a method specifically tailored for WES data, H3M2 outperforms existing algorithms especially on short and medium ROH. We conclude that, notwithstanding the irregular distribution of exons, WES data can be used with some approximation for unbiased genome-wide analysis of ROH features, with promising applications to homozygosity mapping of disease genes, comparative analysis of populations and epidemiological studies based on consanguinity.
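
    As a purely illustrative aside (not taken from the paper), the core idea behind PLINK-style sliding-window ROH detection can be sketched in a few lines; the window size, heterozygote tolerance and 0/1/2 genotype encoding below are assumptions chosen for the example, not parameters from the study.

    ```python
    # Hypothetical minimal sketch of a sliding-window ROH scan.
    # Not the PLINK, GERMLINE or H3M2 algorithms; parameters are illustrative only.

    def roh_runs(genotypes, window=50, max_het=1):
        """Return (start, end) marker-index pairs of candidate ROH segments.

        genotypes: per-marker calls encoded as 0 (hom-ref), 1 (het), 2 (hom-alt).
        A window is flagged if it contains at most `max_het` heterozygous calls;
        overlapping flagged windows are merged into maximal runs.
        """
        flagged = []
        for start in range(len(genotypes) - window + 1):
            hets = sum(1 for g in genotypes[start:start + window] if g == 1)
            if hets <= max_het:
                flagged.append((start, start + window))

        runs = []
        for start, end in flagged:
            if runs and start <= runs[-1][1]:          # overlaps the previous run
                runs[-1] = (runs[-1][0], max(runs[-1][1], end))
            else:
                runs.append((start, end))
        return runs

    # A long homozygous stretch flanked by heterozygous markers
    calls = [1, 2] + [0] * 120 + [1, 2, 1]
    print(roh_runs(calls))   # one merged run covering the homozygous stretch
    ```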
  • Poellmann, K., Bosker, H. R., McQueen, J. M., & Mitterer, H. (2014). Perceptual adaptation to segmental and syllabic reductions in continuous spoken Dutch. Journal of Phonetics, 46, 101-127. doi:10.1016/j.wocn.2014.06.004.

    Abstract

    This study investigates if and how listeners adapt to reductions in casual continuous speech. In a perceptual-learning variant of the visual-world paradigm, two groups of Dutch participants were exposed to either segmental (/b/ → [ʋ]) or syllabic (ver- → [fː]) reductions in spoken Dutch sentences. In the test phase, both groups heard both kinds of reductions, but now applied to different words. In one of two experiments, the segmental reduction exposure group was better than the syllabic reduction exposure group in recognizing new reduced /b/-words. In both experiments, the syllabic reduction group showed a greater target preference for new reduced ver-words. Learning about reductions was thus applied to previously unheard words. This lexical generalization suggests that mechanisms compensating for segmental and syllabic reductions take place at a prelexical level, and hence that lexical access involves an abstractionist mode of processing. Existing abstractionist models need to be revised, however, as they do not include representations of sequences of segments (corresponding e.g. to ver-) at the prelexical level.
  • Poellmann, K., Mitterer, H., & McQueen, J. M. (2014). Use what you can: Storage, abstraction processes and perceptual adjustments help listeners recognize reduced forms. Frontiers in Psychology, 5: 437. doi:10.3389/fpsyg.2014.00437.

    Abstract

    Three eye-tracking experiments tested whether native listeners recognized reduced Dutch words better after having heard the same reduced words, or different reduced words of the same reduction type and whether familiarization with one reduction type helps listeners to deal with another reduction type. In the exposure phase, a segmental reduction group was exposed to /b/-reductions (e.g., "minderij" instead of "binderij", 'book binder') and a syllabic reduction group was exposed to full-vowel deletions (e.g., "p'raat" instead of "paraat", 'ready'), while a control group did not hear any reductions. In the test phase, all three groups heard the same speaker producing reduced-/b/ and deleted-vowel words that were either repeated (Experiments 1 & 2) or new (Experiment 3), but that now appeared as targets in semantically neutral sentences. Word-specific learning effects were found for vowel-deletions but not for /b/-reductions. Generalization of learning to new words of the same reduction type occurred only if the exposure words showed a phonologically consistent reduction pattern (/b/-reductions). In contrast, generalization of learning to words of another reduction type occurred only if the exposure words showed a phonologically inconsistent reduction pattern (the vowel deletions; learning about them generalized to recognition of the /b/-reductions). In order to deal with reductions, listeners thus use various means. They store reduced variants (e.g., for the inconsistent vowel-deleted words) and they abstract over incoming information to build up and apply mapping rules (e.g., for the consistent /b/-reductions). Experience with inconsistent pronunciations leads to greater perceptual flexibility in dealing with other forms of reduction uttered by the same speaker than experience with consistent pronunciations.
  • Poletiek, F. H. (2000). De beoordelaar dobbelt niet - denkt hij [The assessor does not throw dice - or so he thinks]. Nederlands Tijdschrift voor de Psychologie en haar Grensgebieden, 55(5), 246-249.
  • Poletiek, F. H., & Berndsen, M. (2000). Hypothesis testing as risk behaviour with regard to beliefs. Journal of Behavioral Decision Making, 13(1), 107-123. doi:10.1002/(SICI)1099-0771(200001/03)13:1<107:AID-BDM349>3.0.CO;2-P.

    Abstract

    In this paper hypothesis‐testing behaviour is compared to risk‐taking behaviour. It is proposed that choosing a suitable test for a given hypothesis requires making a preposterior analysis of two aspects of such a test: the probability of obtaining supporting evidence and the evidential value of this evidence. This consideration resembles the one a gambler makes when choosing among bets, each having a probability of winning and an amount to be won. A confirmatory testing strategy can be defined within this framework as a strategy directed at maximizing either the probability or the value of a confirming outcome. Previous theories on testing behaviour have focused on the human tendency to maximize the probability of a confirming outcome. In this paper, two experiments are presented in which participants tend to maximize the confirming value of the test outcome. Motivational factors enhance this tendency dependent on the context of the testing situation. Both this result and the framework are discussed in relation to other studies in the field of testing behaviour.
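
    As a purely illustrative aside (not from the paper), the preposterior trade-off described above can be made concrete with a toy calculation: each candidate test offers some probability of yielding a confirming outcome and some evidential value of that outcome, quantified here as a likelihood ratio. All numbers are invented for the example.

    ```python
    # Hypothetical sketch of the "testing as betting" trade-off.
    # Probabilities are made up for illustration; they are not data from the study.

    def confirmation_profile(p_e_given_h, p_e_given_not_h, prior_h=0.5):
        """Return (probability of obtaining confirming evidence E, evidential value of E)."""
        p_confirm = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
        value = p_e_given_h / p_e_given_not_h   # likelihood ratio in favour of the hypothesis
        return p_confirm, value

    # A "safe bet": confirmation is very likely but carries little evidential weight
    print(confirmation_profile(p_e_given_h=0.9, p_e_given_not_h=0.8))   # high probability, low value
    # A "risky bet": confirmation is unlikely but highly diagnostic when it occurs
    print(confirmation_profile(p_e_given_h=0.3, p_e_given_not_h=0.05))  # low probability, high value
    ```

    A confirmatory strategy in the authors' sense would maximise either the first number (the probability of winning) or the second (the amount to be won), and the two can clearly be traded off against each other.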
  • St Pourcain, B., Cents, R. A., Whitehouse, A. J., Haworth, C. M., Davis, O. S., O’Reilly, P. F., Roulstone, S., Wren, Y., Ang, Q. W., Velders, F. P., Evans, D. M., Kemp, J. P., Warrington, N. M., Miller, L., Timpson, N. J., Ring, S. M., Verhulst, F. C., Hofman, A., Rivadeneira, F., Meaburn, E. L., Price, T. S., Dale, P. S., Pillas, D., Yliherva, A., Rodriguez, A., Golding, J., Jaddoe, V. W., Jarvelin, M.-R., Plomin, R., Pennell, C. E., Tiemeier, H., & Davey Smith, G. (2014). Common variation near ROBO2 is associated with expressive vocabulary in infancy. Nature Communications, 5: 4831. doi:10.1038/ncomms5831.
  • St Pourcain, B., Skuse, D. H., Mandy, W. P., Wang, K., Hakonarson, H., Timpson, N. J., Evans, D. M., Kemp, J. P., Ring, S. M., McArdle, W. L., Golding, J., & Smith, G. D. (2014). Variability in the common genetic architecture of social-communication spectrum phenotypes during childhood and adolescence. Molecular Autism, 5: 18. doi:10.1186/2040-2392-5-18.

    Abstract

    Background: Social-communication abilities are heritable traits, and their impairments overlap with the autism continuum. To characterise the genetic architecture of social-communication difficulties developmentally and identify genetic links with the autistic dimension, we conducted a genome-wide screen of social-communication problems at multiple time-points during childhood and adolescence. Methods: Social-communication difficulties were ascertained at ages 8, 11, 14 and 17 years in a UK population-based birth cohort (Avon Longitudinal Study of Parents and Children; N ≤ 5,628) using mother-reported Social Communication Disorder Checklist scores. Genome-wide Complex Trait Analysis (GCTA) was conducted for all phenotypes. The time-points with the highest GCTA heritability were subsequently analysed for single SNP association genome-wide. Type I error in the presence of measurement relatedness and the likelihood of observing SNP signals near known autism susceptibility loci (co-location) were assessed via large-scale, genome-wide permutations. Association signals (P ≤ 10−5) were also followed up in Autism Genetic Resource Exchange pedigrees (N = 793) and the Autism Case Control cohort (Ncases/Ncontrols = 1,204/6,491). Results: GCTA heritability was strongest in childhood (h2(8 years) = 0.24) and especially in later adolescence (h2(17 years) = 0.45), with a marked drop during early to middle adolescence (h2(11 years) = 0.16 and h2(14 years) = 0.08). Genome-wide screens at ages 8 and 17 years identified, for the latter time-point, evidence for association at 3p22.2 near SCN11A (rs4453791, P = 9.3 × 10−9; genome-wide empirical P = 0.011) and suggestive evidence at 20p12.3 at PLCB1 (rs3761168, P = 7.9 × 10−8; genome-wide empirical P = 0.085). None of these signals contributed to risk for autism. However, the co-location of population-based signals and autism susceptibility loci harbouring rare mutations, such as PLCB1, is unlikely to be due to chance (genome-wide empirical Pco-location = 0.007). Conclusions: Our findings suggest that measurable common genetic effects for social-communication difficulties vary developmentally and that these changes may affect detectable overlaps with the autism spectrum.

  • Pouw, W., Van Gog, T., & Paas, F. (2014). An embedded and embodied cognition review of instructional manipulatives. Educational Psychology Review, 26, 51-72. doi:10.1007/s10648-014-9255-5.

    Abstract

    Recent literature on learning with instructional manipulatives seems to call for a moderate view on the effects of perceptual and interactive richness of instructional manipulatives on learning. This “moderate view” holds that manipulatives’ perceptual and interactive richness may compromise learning in two ways: (1) by imposing a very high cognitive load on the learner, and (2) by hindering drawing of symbolic inferences that are supposed to play a key role in transfer (i.e., application of knowledge to new situations in the absence of instructional manipulatives). This paper presents a contrasting view. Drawing on recent insights from Embedded Embodied perspectives on cognition, it is argued that (1) perceptual and interactive richness may provide opportunities for alleviating cognitive load (Embedded Cognition), and (2) transfer of learning is not reliant on decontextualized knowledge but may draw on previous sensorimotor experiences of the kind afforded by perceptual and interactive richness of manipulatives (Embodied Cognition). By negotiating the Embedded Embodied Cognition view with the moderate view, implications for research are derived.
  • Pouw, W., De Nooijer, J. A., Van Gog, T., Zwaan, R. A., & Paas, F. (2014). Toward a more embedded/extended perspective on the cognitive function of gestures. Frontiers in Psychology, 5: 359. doi:10.3389/fpsyg.2014.00359.

    Abstract

    Gestures are often considered to be demonstrative of the embodied nature of the mind (Hostetter and Alibali, 2008). In this article, we review current theories and research targeted at the intra-cognitive role of gestures. We ask: how can gestures support the internal cognitive processes of the gesturer? We suggest that extant theories are in a sense disembodied, because they focus solely on embodiment in terms of the sensorimotor neural precursors of gestures. As a result, current theories on the intra-cognitive role of gestures lack the explanatory scope to address how gestures-as-bodily-acts fulfill a cognitive function. On the basis of recent theoretical appeals that focus on the possibly embedded/extended cognitive role of gestures (Clark, 2013), we suggest that gestures are external physical tools of the cognitive system that replace and support otherwise solely internal cognitive processes. That is, gestures provide the cognitive system with a stable external physical and visual presence that can provide means to think with. We show that there is a considerable amount of overlap between the way the human cognitive system has been found to use its environment, and how gestures are used during cognitive processes. Lastly, we provide several suggestions of how to investigate the embedded/extended perspective of the cognitive function of gestures.
  • Presciuttini, S., Gialluisi, A., Barbuti, S., Curcio, M., Scatena, F., Carli, G., & Santarcangelo, E. L. (2014). Hypnotizability and Catechol-O-Methyltransferase (COMT) polymorphisms in Italians. Frontiers in Human Neuroscience, 7: 929. doi:10.3389/fnhum.2013.00929.

    Abstract

    Higher brain dopamine content depending on lower activity of Catechol-O-Methyltransferase (COMT) in subjects with high hypnotizability scores (highs) has been considered responsible for their attentional characteristics. However, the results of the previous genetic studies on association between hypnotizability and the COMT single nucleotide polymorphism (SNP) rs4680 (Val158Met) were inconsistent. Here, we used a selective genotyping approach to re-evaluate the association between hypnotizability and COMT in the context of a two-SNP haplotype analysis, considering not only the Val158Met polymorphism, but also the closely located rs4818 SNP. An Italian sample of 53 highs, 49 low hypnotizable subjects (lows), and 57 controls, were genotyped for a segment of 805 bp of the COMT gene, including Val158Met and the closely located rs4818 SNP. Our selective genotyping approach had 97.1% power to detect the previously reported strongest association at the significance level of 5%. We found no evidence of association at the SNP, haplotype, and diplotype levels. Thus, our results challenge the dopamine-based theory of hypnosis and indirectly support recent neuropsychological and neurophysiological findings reporting the lack of any association between hypnotizability and focused attention abilities.
  • Rahmany, R., Marefat, H., & Kidd, E. (2014). Resumptive elements aid comprehension of object relative clauses: evidence from Persian. Journal of Child Language, 41(4), 937-948. doi:10.1017/s0305000913000147.
  • Ravignani, A., Bowling, D. L., & Fitch, W. T. (2014). Chorusing, synchrony, and the evolutionary functions of rhythm. Frontiers in Psychology, 5: 1118. doi:10.3389/fpsyg.2014.01118.

    Abstract

    A central goal of biomusicology is to understand the biological basis of human musicality. One approach to this problem has been to compare core components of human musicality (relative pitch perception, entrainment, etc.) with similar capacities in other animal species. Here we extend and clarify this comparative approach with respect to rhythm. First, whereas most comparisons between human music and animal acoustic behavior have focused on spectral properties (melody and harmony), we argue for the central importance of temporal properties, and propose that this domain is ripe for further comparative research. Second, whereas most rhythm research in non-human animals has examined animal timing in isolation, we consider how chorusing dynamics can shape individual timing, as in human music and dance, arguing that group behavior is key to understanding the adaptive functions of rhythm. To illustrate the interdependence between individual and chorusing dynamics, we present a computational model of chorusing agents relating individual call timing with synchronous group behavior. Third, we distinguish and clarify mechanistic and functional explanations of rhythmic phenomena, often conflated in the literature, arguing that this distinction is key for understanding the evolution of musicality. Fourth, we expand biomusicological discussions beyond the species typically considered, providing an overview of chorusing and rhythmic behavior across a broad range of taxa (orthopterans, fireflies, frogs, birds, and primates). Finally, we propose an “Evolving Signal Timing” hypothesis, suggesting that similarities between timing abilities in biological species will be based on comparable chorusing behaviors. We conclude that the comparative study of chorusing species can provide important insights into the adaptive function(s) of rhythmic behavior in our “proto-musical” primate ancestors, and thus inform our understanding of the biology and evolution of rhythm in human music and language.
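
    As a purely illustrative aside (not the authors' model, whose details are in the paper), the interdependence between individual call timing and group synchrony can be sketched with a textbook Kuramoto-style chorus of phase oscillators; all parameters below are assumptions chosen for the example.

    ```python
    # Hypothetical sketch: chorusing agents as coupled phase oscillators.
    # Each agent calls once per cycle of its phase and nudges its timing toward
    # the group; synchrony is summarised by the Kuramoto order parameter r.
    import cmath
    import math
    import random

    def simulate_chorus(n_agents=10, steps=2000, dt=0.01, coupling=1.5, seed=1):
        """Return the final order parameter r (0 = no synchrony, 1 = perfect synchrony)."""
        random.seed(seed)
        # Individuals differ slightly in their intrinsic call rate (cycles per time unit)
        rates = [1.0 + random.uniform(-0.1, 0.1) for _ in range(n_agents)]
        phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n_agents)]
        for _ in range(steps):
            field = sum(cmath.exp(1j * p) for p in phases) / n_agents
            r, psi = abs(field), cmath.phase(field)   # group coherence and mean phase
            # Each caller speeds up or slows down in proportion to its lag behind the group
            phases = [p + dt * (2.0 * math.pi * f + coupling * r * math.sin(psi - p))
                      for p, f in zip(phases, rates)]
        field = sum(cmath.exp(1j * p) for p in phases) / n_agents
        return abs(field)

    print(simulate_chorus(coupling=1.5))   # with coupling the chorus locks (r near 1)
    print(simulate_chorus(coupling=0.0))   # without coupling the callers drift (r much lower)
    ```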
  • Ravignani, A. (2014). Chronometry for the chorusing herd: Hamilton's legacy on context-dependent acoustic signalling—a comment on Herbers (2013). Biology Letters, 10(1): 20131018. doi:10.1098/rsbl.2013.1018.
  • Ravignani, A., Martins, M., & Fitch, W. T. (2014). Vocal learning, prosody, and basal ganglia: Don't underestimate their complexity. Behavioral and Brain Sciences, 37(6), 570-571. doi:10.1017/S0140525X13004184.

    Abstract

    In response to: Brain mechanisms of acoustic communication in humans and nonhuman primates: An evolutionary perspective

    Ackermann et al.'s arguments in the target article need sharpening and rethinking at both mechanistic and evolutionary levels. First, the authors' evolutionary arguments are inconsistent with recent evidence concerning nonhuman animal rhythmic abilities. Second, prosodic intonation conveys much more complex linguistic information than mere emotional expression. Finally, human adults' basal ganglia have a considerably wider role in speech modulation than Ackermann et al. surmise.
  • Redmann, A., FitzPatrick, I., Hellwig, F. M., & Indefrey, P. (2014). The use of conceptual components in language production: an ERP study. Frontiers in Psychology, 5: 363. doi:10.3389/fpsyg.2014.00363.

    Abstract

    According to frame-theory, concepts can be represented as structured frames that contain conceptual attributes (e.g., "color") and their values (e.g., "red"). A particular color value can be seen as a core conceptual component for (high color-diagnostic; HCD) objects (e.g., bananas) which are strongly associated with a typical color, but less so for (low color-diagnostic; LCD) objects (e.g., bicycles) that exist in many different colors. To investigate whether the availability of a core conceptual component (color) affects lexical access in language production, we conducted two experiments on the naming of visually presented HCD and LCD objects. Experiment 1 showed that, when naming latencies were matched for colored HCD and LCD objects, achromatic HCD objects were named more slowly than achromatic LCD objects. In Experiment 2 we recorded ERPs while participants performed a picture-naming task, in which achromatic target pictures were either preceded by an appropriately colored box (primed condition) or a black and white checkerboard (unprimed condition). We focused on the P2 component, which has been shown to reflect difficulty of lexical access in language production. Results showed that HCD resulted in slower object-naming and a more pronounced P2. Priming also yielded a more positive P2 but did not result in an RT difference. ERP waveforms on the P1, P2 and N300 components showed a priming by color-diagnosticity interaction, the effect of color priming being stronger for HCD objects than for LCD objects. The effect of color-diagnosticity on the P2 component suggests that the slower naming of achromatic HCD objects is (at least in part) due to more difficult lexical retrieval. Hence, the color attribute seems to affect lexical retrieval in HCD words. The interaction between priming and color-diagnosticity indicates that priming with a feature hinders lexical access, especially if the feature is a core feature of the target object.
  • Reis, A., Guerreiro, M., & Petersson, K. M. (2003). A sociodemographic and neuropsychological characterization of an illiterate population. Applied Neuropsychology, 10, 191-204. doi:10.1207/s15324826an1004_1.

    Abstract

    The objectives of this article are to characterize the performance and to discuss the performance differences between literate and illiterate participants in a well-defined study population. We describe the participant-selection procedure used to investigate this population. Three groups with similar sociocultural backgrounds living in a relatively homogeneous fishing community in southern Portugal were characterized in terms of socioeconomic and sociocultural background variables and compared on a simple neuropsychological test battery; specifically, a literate group with more than 4 years of education (n = 9), a literate group with 4 years of education (n = 26), and an illiterate group (n = 31) were included in this study. We compare and discuss our results with other similar studies on the effects of literacy and illiteracy. The results indicate that naming and identification of real objects, verbal fluency using ecologically relevant semantic criteria, verbal memory, and orientation are not affected by literacy or level of formal education. In contrast, verbal working memory assessed with digit span, verbal abstraction, long-term semantic memory, and calculation (i.e., multiplication) are significantly affected by the level of literacy. We indicate that it is possible, with proper participant-selection procedures, to exclude general cognitive impairment and to control important sociocultural factors that potentially could introduce bias when studying the specific effects of literacy and level of formal education on cognitive brain function.
  • Reis, A., & Petersson, K. M. (2003). Educational level, socioeconomic status and aphasia research: A comment on Connor et al. (2001)- Effect of socioeconomic status on aphasia severity and recovery. Brain and Language, 87, 449-452. doi:10.1016/S0093-934X(03)00140-8.

    Abstract

    Is there a relation between socioeconomic factors and aphasia severity and recovery? Connor, Obler, Tocco, Fitzpatrick, and Albert (2001) describe correlations of aphasic subjects' educational level and socioeconomic status with aphasia severity and subsequent recovery. As stated in the introduction by Connor et al. (2001), studies of the influence of educational level and literacy (or illiteracy) on aphasia severity have yielded conflicting results, while no significant link between socioeconomic status and aphasia severity and recovery has been established. In this brief note, we comment on their findings and conclusions, beginning with a brief review of literacy and aphasia research and the complexities encountered in these fields of investigation. This serves as a general background to our specific comments on Connor et al. (2001), which focus on methodological issues and the importance of taking normative values into consideration when subjects with different socio-cultural or socio-economic backgrounds are assessed.
  • Roberts, S. G., Dediu, D., & Moisik, S. R. (2014). How to speak Neanderthal. New Scientist, 222(2969), 40-41. doi:10.1016/S0262-4079(14)60970-2.
  • Rodenas-Cuadrado, P., Ho, J., & Vernes, S. C. (2014). Shining a light on CNTNAP2: Complex functions to complex disorders. European Journal of Human Genetics, 22(2), 171-178. doi:10.1038/ejhg.2013.100.

    Abstract

    The genetic basis of complex neurological disorders involving language is poorly understood, partly due to the multiple additive genetic risk factors that are thought to be responsible. Furthermore, these conditions are often syndromic in that they have a range of endophenotypes that may be associated with the disorder and that may be present in different combinations in patients. However, the emergence of individual genes implicated across multiple disorders has suggested that they might share similar underlying genetic mechanisms. The CNTNAP2 gene is an excellent example of this, as it has recently been implicated in a broad range of phenotypes including autism spectrum disorder (ASD), schizophrenia, intellectual disability, dyslexia and language impairment. This review considers the evidence implicating CNTNAP2 in these conditions, the genetic risk factors and mutations that have been identified in patient and population studies and how these relate to patient phenotypes. The role of CNTNAP2 is examined in the context of larger neurogenetic networks during development and disorder, given what is known regarding the regulation and function of this gene. Understanding the role of CNTNAP2 in diverse neurological disorders will further our understanding of how combinations of individual genetic risk factors can contribute to complex conditions.
  • Roelofs, A. (2003). Shared phonological encoding processes and representations of languages in bilingual speakers. Language and Cognitive Processes, 18(2), 175-204. doi:10.1080/01690960143000515.

    Abstract

    Four form-preparation experiments investigated whether aspects of phonological encoding processes and representations are shared between languages in bilingual speakers. The participants were Dutch–English bilinguals. Experiment 1 showed that the basic rightward incrementality revealed in studies for the first language is also observed for second-language words. In Experiments 2 and 3, speakers were given words to produce that did or did not share onset segments, and that came or did not come from different languages. It was found that when onsets were shared among the response words, those onsets were prepared, even when the words came from different languages. Experiment 4 showed that preparation requires prior knowledge of the segments and that knowledge about their phonological features yields no effect. These results suggest that both first- and second-language words are phonologically planned through the same serial order mechanism and that the representations of segments common to the languages are shared.
  • Roelofs, A. (2003). Goal-referenced selection of verbal action: Modeling attentional control in the Stroop task. Psychological Review, 110(1), 88-125.

    Abstract

    This article presents a new account of the color-word Stroop phenomenon (J. R. Stroop, 1935) based on an implemented model of word production, WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b; A. Roelofs, 1992, 1997c). Stroop effects are claimed to arise from processing interactions within the language-production architecture and explicit goal-referenced control. WEAVER++ successfully simulates 16 classic data sets, mostly taken from the review by C. M. MacLeod (1991), including incongruency, congruency, reverse-Stroop, response-set, semantic-gradient, time-course, stimulus, spatial, multiple-task, manual, bilingual, training, age, and pathological effects. Three new experiments tested the account against alternative explanations. It is shown that WEAVER++ offers a more satisfactory account of the data than other models.
  • Rojas-Berscia, L. M. (2014). Towards an ontological theory of language: Radical minimalism, memetic linguistics and linguistic engineering, prolegomena. Ianua: Revista Philologica Romanica, 14(2), 69-81.

    Abstract

    In contrast to other sciences, linguistics has not yet resolved what the object of study of the discipline, as an autonomous field, actually is. Whether appealing to external explanations of language as a system (Saussure 1916), to an innate mental language capacity or UG (Chomsky 1965, 1981, 1995), or to the cognitive complexity of the mental language capacity and the acquisition of languages in use (Langacker 1987, 1991, 2008; Croft & Cruse 2004; Evans & Levinson 2009), most, if not all, theoretical approaches have provided explanations that have somehow isolated our discipline from developments in other major sciences, such as physics and evolutionary biology. In the present article I present some of the basic issues in the current debate in the discipline, in order to identify problems with modern assumptions about language. Furthermore, a new proposal on how to approach linguistic phenomena is given, addressing what I call «the main three» basic problems our discipline will have to face. Finally, some preliminary ideas are presented on a new paradigm for linguistics that tries to answer these three basic problems, based mainly on the recently developed formal theory of Radical Minimalism (Krivochen 2011a, 2011b) and on what I dub Memetic Linguistics and Linguistic Engineering.
  • Roorda, D., Kalkman, G., Naaijer, M., & Van Cranenburgh, A. (2014). LAF-Fabric: A data analysis tool for linguistic annotation framework with an application to the Hebrew Bible. Computational linguistics in the Netherlands, 4, 105-120.

    Abstract

    The Linguistic Annotation Framework (LAF) provides a general, extensible stand-off markup system for corpora. This paper discusses LAF-Fabric, a new tool to analyse LAF resources in general with an extension to process the Hebrew Bible in particular. We first walk through the history of the Hebrew Bible as text database in decennium-wide steps. Then we describe how LAF-Fabric may serve as an analysis tool for this corpus. Finally, we describe three analytic projects/workflows that benefit from the new LAF representation: 1) the study of linguistic variation: extract cooccurrence data of common nouns between the books of the Bible (Martijn Naaijer); 2) the study of the grammar of Hebrew poetry in the Psalms: extract clause typology (Gino Kalkman); 3) construction of a parser of classical Hebrew by Data Oriented Parsing: generate tree structures from the database (Andreas van Cranenburgh).
  • Roswandowitz, C., Mathias, S. R., Hintz, F., Kreitewolf, J., Schelinski, S., & von Kriegstein, K. (2014). Two cases of selective developmental voice-recognition impairments. Current Biology, 24(19), 2348-2353. doi:10.1016/j.cub.2014.08.048.

    Abstract

    Recognizing other individuals is an essential skill in humans and in other species [1, 2 and 3]. Over the last decade, it has become increasingly clear that person-identity recognition abilities are highly variable. Roughly 2% of the population has developmental prosopagnosia, a congenital deficit in recognizing others by their faces [4]. It is currently unclear whether developmental phonagnosia, a deficit in recognizing others by their voices [5], is equally prevalent, or even whether it actually exists. Here, we aimed to identify cases of developmental phonagnosia. We collected more than 1,000 data sets from self-selected German individuals by using a web-based screening test that was designed to assess their voice-recognition abilities. We then examined potentially phonagnosic individuals by using a comprehensive laboratory test battery. We found two novel cases of phonagnosia: AS, a 32-year-old female, and SP, a 32-year-old male; both are otherwise healthy academics, have normal hearing, and show no pathological abnormalities in brain structure. The two cases have comparable patterns of impairments: both performed at least 2 SDs below the level of matched controls on tests that required learning new voices, judging the familiarity of famous voices, and discriminating pitch differences between voices. In both cases, only voice-identity processing per se was affected: face recognition, speech intelligibility, emotion recognition, and musical ability were all comparable to controls. The findings confirm the existence of developmental phonagnosia as a modality-specific impairment and allow a first rough prevalence estimate.

  • Rowbotham, S., Wardy, A. J., Lloyd, D. M., Wearden, A., & Holler, J. (2014). Increased pain intensity is associated with greater verbal communication difficulty and increased production of speech and co-speech gestures. PLoS One, 9(10): e110779. doi:10.1371/journal.pone.0110779.

    Abstract

    Effective pain communication is essential if adequate treatment and support are to be provided. Pain communication is often multimodal, with sufferers utilising speech, nonverbal behaviours (such as facial expressions), and co-speech gestures (bodily movements, primarily of the hands and arms that accompany speech and can convey semantic information) to communicate their experience. Research suggests that the production of nonverbal pain behaviours is positively associated with pain intensity, but it is not known whether this is also the case for speech and co-speech gestures. The present study explored whether increased pain intensity is associated with greater speech and gesture production during face-to-face communication about acute, experimental pain. Participants (N = 26) were exposed to experimentally elicited pressure pain to the fingernail bed at high and low intensities and took part in video-recorded semi-structured interviews. Despite rating more intense pain as more difficult to communicate (t(25) = 2.21, p = .037), participants produced significantly longer verbal pain descriptions and more co-speech gestures in the high intensity pain condition (Words: t(25) = 3.57, p = .001; Gestures: t(25) = 3.66, p = .001). This suggests that spoken and gestural communication about pain is enhanced when pain is more intense. Thus, in addition to conveying detailed semantic information about pain, speech and co-speech gestures may provide a cue to pain intensity, with implications for the treatment and support received by pain sufferers. Future work should consider whether these findings are applicable within the context of clinical interactions about pain.
  • Rowbotham, S., Holler, J., Lloyd, D., & Wearden, A. (2014). Handling pain: The semantic interplay of speech and co-speech hand gestures in the description of pain sensations. Speech Communication, 57, 244-256. doi:10.1016/j.specom.2013.04.002.

    Abstract

    Pain is a private and subjective experience about which effective communication is vital, particularly in medical settings. Speakers often represent information about pain sensation in both speech and co-speech hand gestures simultaneously, but it is not known whether gestures merely replicate spoken information or complement it in some way. We examined the representational contribution of gestures in a range of consecutive analyses. Firstly, we found that 78% of speech units containing pain sensation were accompanied by gestures, with 53% of these gestures representing pain sensation. Secondly, in 43% of these instances, gestures represented pain sensation information that was not contained in speech, contributing additional, complementary information to the pain sensation message. Finally, when applying a specificity analysis, we found that in contrast with research in different domains of talk, gestures did not make the pain sensation information in speech more specific. Rather, they complemented the verbal pain message by representing different aspects of pain sensation, contributing to a fuller representation of pain sensation than speech alone. These findings highlight the importance of gestures in communicating about pain sensation and suggest that this modality provides additional information to supplement and clarify the often ambiguous verbal pain message.

  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2003). Determinants of acquisition order in wh-questions: Re-evaluating the role of caregiver speech. Journal of Child Language, 30(3), 609-635. doi:10.1017/S0305000903005695.

    Abstract

    Accounts that specify semantic and/or syntactic complexity as the primary determinant of the order in which children acquire particular words or grammatical constructions have been highly influential in the literature on question acquisition. One explanation of wh-question acquisition in particular suggests that the order in which English-speaking children acquire wh-questions is determined by two interlocking linguistic factors: the syntactic function of the wh-word that heads the question and the semantic generality (or ‘lightness’) of the main verb (Bloom, Merkin & Wootten, 1982; Bloom, 1991). Another more recent view, however, is that acquisition is influenced by the relative frequency with which children hear particular wh-words and verbs in their input (e.g. Rowland & Pine, 2000). In the present study over 300 hours of naturalistic data from twelve two- to three-year-old children and their mothers were analysed in order to assess the relative contribution of complexity and input frequency to wh-question acquisition. The analyses revealed, first, that the acquisition order of wh-questions could be predicted successfully from the frequency with which particular wh-words and verbs occurred in the children's input and, second, that syntactic and semantic complexity did not reliably predict acquisition once input frequency was taken into account. These results suggest that the relationship between acquisition and complexity may be a by-product of the high correlation between complexity and the frequency with which mothers use particular wh-words and verbs. We interpret the results in terms of a constructivist view of language acquisition.
  • Rowland, C. F., & Pine, J. M. (2003). The development of inversion in wh-questions: a reply to Van Valin. Journal of Child Language, 30(1), 197-212. doi:10.1017/S0305000902005445.

    Abstract

    Van Valin (Journal of Child Language, 29, 2002, 161–75) presents a critique of Rowland & Pine (Journal of Child Language, 27, 2000, 157–81) and argues that the wh-question data from Adam (in Brown, A first language, Cambridge, MA, 1973) cannot be explained in terms of input frequencies as we suggest. Instead, he suggests that the data can be more successfully accounted for in terms of Role and Reference Grammar. In this note we re-examine the pattern of inversion and uninversion in Adam's wh-questions and argue that the RRG explanation cannot account for some of the developmental facts it was designed to explain.
  • Rowland, C. F., & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: what children do know? Journal of Child Language, 27(1), 157-181.

    Abstract

    The present paper reports an analysis of correct wh-question production and subject–auxiliary inversion errors in one child's early wh-question data (ages 2;3.4 to 4;10.23). It is argued that two current movement rule accounts (DeVilliers, 1991; Valian, Lasser & Mandelbaum, 1992) cannot explain the patterning of early wh-questions. However, the data can be explained in terms of the child's knowledge of particular lexically-specific wh-word+auxiliary combinations, and the pattern of inversion and uninversion predicted from the relative frequencies of these combinations in the mother's speech. The results support the claim that correctly inverted wh-questions can be produced without access to a subject–auxiliary inversion rule and are consistent with the constructivist claim that a distributional learning mechanism that learns and reproduces lexically-specific formulae heard in the input can explain much of the early multi-word speech data. The implications of these results for movement rule-based and constructivist theories of grammatical development are discussed.
  • De Ruiter, J. P., Rossignol, S., Vuurpijl, L., Cunningham, D. W., & Levelt, W. J. M. (2003). SLOT: A research platform for investigating multimodal communication. Behavior Research Methods, Instruments, & Computers, 35(3), 408-419.

    Abstract

    In this article, we present the spatial logistics task (SLOT) platform for investigating multimodal communication between 2 human participants. Presented are the SLOT communication task and the software and hardware that have been developed to run SLOT experiments and record the participants’ multimodal behavior. SLOT offers a high level of flexibility in varying the context of the communication and is particularly useful in studies of the relationship between pen gestures and speech. We illustrate the use of the SLOT platform by discussing the results of some early experiments. The first is an experiment on negotiation with a one-way mirror between the participants, and the second is an exploratory study of automatic recognition of spontaneous pen gestures. The results of these studies demonstrate the usefulness of the SLOT platform for conducting multimodal communication research in both human–human and human–computer interactions.
  • Sadakata, M., & McQueen, J. M. (2014). Individual aptitude in Mandarin lexical tone perception predicts effectiveness of high-variability training. Frontiers in Psychology, 5: 1318. doi:10.3389/fpsyg.2014.01318.

    Abstract

    Although the high-variability training method can enhance learning of non-native speech categories, this can depend on individuals’ aptitude. The current study asked how general the effects of perceptual aptitude are by testing whether they occur with training materials spoken by native speakers and whether they depend on the nature of the to-be-learned material. Forty-five native Dutch listeners took part in a five-day training procedure in which they identified bisyllabic Mandarin pseudowords (e.g., asa) pronounced with different lexical tone combinations. The training materials were presented to different groups of listeners at three levels of variability: low (many repetitions of a limited set of words recorded by a single speaker), medium (fewer repetitions of a more variable set of words recorded by 3 speakers) and high (similar to medium but with 5 speakers). Overall, variability did not influence learning performance, but this was due to an interaction with individuals’ perceptual aptitude: increasing variability hindered improvements in performance for low-aptitude perceivers while it helped improvements in performance for high-aptitude perceivers. These results show that the previously observed interaction between individuals’ aptitude and effects of degree of variability extends to natural tokens of Mandarin speech. This interaction was not found, however, in a closely-matched study in which native Dutch listeners were trained on the Japanese geminate/singleton consonant contrast. This may indicate that the effectiveness of high-variability training depends not only on individuals’ aptitude in speech perception but also on the nature of the categories being acquired.
  • Salverda, A. P., Dahan, D., & McQueen, J. M. (2003). The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Cognition, 90(1), 51-89. doi:10.1016/S0010-0277(03)00139-2.

    Abstract

    Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.
  • Sanchis-Trilles, G., Alabau, V., Buck, C., Carl, M., Casacuberta, F., García Martínez, M., Germann, U., González Rubio, J., Hill, R. L., Koehn, P., Leiva, L. A., Mesa-Lao, B., Ortiz Martínez, D., Saint-Amand, H., Tsoukala, C., & Vidal, E. (2014). Interactive translation prediction versus conventional post-editing in practice: a study with the CasMaCat workbench. Machine Translation, 28(3-4), 217-235. doi:10.1007/s10590-014-9157-9.

    Abstract

    We conducted a field trial in computer-assisted professional translation to compare interactive translation prediction (ITP) against conventional post-editing (PE) of machine translation (MT) output. In contrast to the conventional PE set-up, where an MT system first produces a static translation hypothesis that is then edited by a professional (hence “post-editing”), ITP constantly updates the translation hypothesis in real time in response to user edits. Our study involved nine professional translators and four reviewers working with the web-based CasMaCat workbench. Various new interactive features aiming to assist the post-editor/translator were also tested in this trial. Our results show that even with little training, ITP can be as productive as conventional PE in terms of the total time required to produce the final translation. Moreover, translation editors working with ITP require fewer key strokes to arrive at the final version of their translation.

    Files private

    Request files
  • Sandberg, A., Lansner, A., Petersson, K. M., & Ekeberg, Ö. (2000). A palimpsest memory based on an incremental Bayesian learning rule. Neurocomputing, 32(33), 987-994. doi:10.1016/S0925-2312(00)00270-8.

    Abstract

    Capacity-limited memory systems need to gradually forget old information in order to avoid catastrophic forgetting where all stored information is lost. This can be achieved by allowing new information to overwrite old, as in the so-called palimpsest memory. This paper describes a new such learning rule employed in an attractor neural network. The network does not exhibit catastrophic forgetting, has a capacity dependent on the learning time constant and exhibits recency effects in retrieval.
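
    The abstract does not spell out the rule itself. Purely as an illustration of the general idea behind a palimpsest memory (running, exponentially forgetting estimates of unit and pairwise activation probabilities that are turned into Bayesian-style weights, so that new patterns gradually overwrite old ones), a minimal Python sketch might look as follows; the function name, time constant and pattern statistics are hypothetical, and this is not the authors' formulation.

        import numpy as np

        def palimpsest_update(p_i, p_ij, pattern, tau=50.0, eps=1e-6):
            """One incremental update with exponential forgetting
            (hypothetical sketch, not the published learning rule)."""
            alpha = 1.0 / tau                      # learning rate = 1 / time constant
            x = np.asarray(pattern, dtype=float)   # binary activity vector
            # running (forgetful) estimates of P(x_i) and P(x_i, x_j)
            p_i = (1 - alpha) * p_i + alpha * x
            p_ij = (1 - alpha) * p_ij + alpha * np.outer(x, x)
            # weights resemble the log-odds of co-activation versus independence
            w = np.log((p_ij + eps) / (np.outer(p_i, p_i) + eps))
            b = np.log(p_i + eps)                  # unit biases
            return p_i, p_ij, w, b

        # usage: store a stream of sparse random patterns; older ones gradually fade
        n = 100
        p_i, p_ij = np.full(n, 0.5), np.full((n, n), 0.25)
        for _ in range(200):
            pattern = (np.random.rand(n) < 0.1).astype(float)
            p_i, p_ij, w, b = palimpsest_update(p_i, p_ij, pattern)

    The forgetting factor (1 - alpha) is what gives the palimpsest property in this sketch: capacity is governed by the time constant rather than by catastrophic interference.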
  • Scharenborg, O., ten Bosch, L., Boves, L., & Norris, D. (2003). Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition [Letter to the editor]. Journal of the Acoustical Society of America, 114, 3032-3035. doi:10.1121/1.1624065.

    Abstract

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189–234 (1994)]. Experiments based on “real-life” speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2003). ‘Early recognition’ of words in continuous speech. Automatic Speech Recognition and Understanding, 2003 IEEE Workshop, 61-66. doi:10.1109/ASRU.2003.1318404.

    Abstract

    In this paper, we present an automatic speech recognition (ASR) system based on the combination of an automatic phone recogniser and a computational model of human speech recognition – SpeM – that is capable of computing ‘word activations’ during the recognition process, in addition to doing normal speech recognition, a task in which conventional ASR architectures only provide output after the end of an utterance. We explain the notion of word activation and show that it can be used for ‘early recognition’, i.e. recognising a word before the end of the word is available. Our ASR system was tested on 992 continuous speech utterances, each containing at least one target word: a city name of at least two syllables. The results show that early recognition was obtained for 72.8% of the target words that were recognised correctly. Also, it is shown that word activation can be used as an effective confidence measure.
  • Schertz, J., & Ernestus, M. (2014). Variability in the pronunciation of non-native English the: Effects of frequency and disfluencies. Corpus Linguistics and Linguistic Theory, 10, 329-345. doi:10.1515/cllt-2014-0024.

    Abstract

    This study examines how lexical frequency and planning problems can predict phonetic variability in the function word ‘the’ in conversational speech produced by non-native speakers of English. We examined 3180 tokens of ‘the’ drawn from English conversations between native speakers of Czech or Norwegian. Using regression models, we investigated the effect of following word frequency and disfluencies on three phonetic parameters: vowel duration, vowel quality, and consonant quality. Overall, the non-native speakers showed variation that is very similar to the variation displayed by native speakers of English. Like native speakers, Czech speakers showed an effect of frequency on vowel durations, which were shorter in more frequent word sequences. Both groups of speakers showed an effect of frequency on consonant quality: the substitution of another consonant for /ð/ occurred more often in the context of more frequent words. The speakers in this study also showed a native-like allophonic distinction in vowel quality, in which /ði/ occurs more often before vowels and /ðə/ before consonants. Vowel durations were longer in the presence of following disfluencies, again mirroring patterns in native speakers, and the consonant quality was more likely to be the target /ð/ before disfluencies, as opposed to a different consonant. The fact that non-native speakers show native-like sensitivity to lexical frequency and disfluencies suggests that these effects are consequences of a general, non-language-specific production mechanism governing language planning. On the other hand, the non-native speakers in this study did not show native-like patterns of vowel quality in the presence of disfluencies, suggesting that the pattern attested in native speakers of English may result from language-specific processes separate from the general production mechanisms.
  • Schijven, D., Sousa, V. C., Roelofs, J., Olivier, B., & Olivier, J. D. A. (2014). Serotonin 1A receptors and sexual behavior in a genetic model of depression. Pharmacology, Biochemistry and Behavior, 121, 82-87. doi:10.1016/j.pbb.2013.12.012.

    Abstract

    The Flinders Sensitive Line (FSL) is a rat strain that displays distinct behavioral and neurochemical features of major depression. Chronic selective serotonin reuptake inhibitors (SSRIs) are able to reverse these symptoms in FSL rats. It is well known that several abnormalities in the serotonergic system have been found in FSL rats, including increased 5-HT brain tissue levels and reduced 5-HT synthesis. SSRIs are known to exert (part of) their effects by desensitization of the 5-HT1A receptor and FSL rats appear to have lower 5-HT1A receptor densities compared with Flinders Resistant Line (FRL) rats. We therefore studied the influence of the sensitivity of this receptor on sexual performance in both FRL and FSL rats. First, basal sexual performance was studied after saline treatment followed by treatment with two different doses of the 5-HT1A receptor agonist ±8-OH-DPAT. Finally, we measured the effect of a 5-HT1A receptor antagonist to check for specificity of the 5-HT1A receptor activation. Our results show that FSL rats have higher ejaculation frequencies than FRL rats, which does not fit with a more depressive-like phenotype. Moreover, FRL rats are more sensitive to the effects of ±8-OH-DPAT upon EL and IF than FSL rats. The blunted response of FSL rats to the effects of ±8-OH-DPAT may be due to lower densities of 5-HT1A receptors.
  • Schiller, N. O., Münte, T. F., Horemans, I., & Jansma, B. M. (2003). The influence of semantic and phonological factors on syntactic decisions: An event-related brain potential study. Psychophysiology, 40(6), 869-877. doi:10.1111/1469-8986.00105.

    Abstract

    During language production and comprehension, information about a word's syntactic properties is sometimes needed. While the decision about the grammatical gender of a word requires access to syntactic knowledge, it has also been hypothesized that semantic (i.e., biological gender) or phonological information (i.e., sound regularities) may influence this decision. Event-related potentials (ERPs) were measured while native speakers of German processed written words that were or were not semantically and/or phonologically marked for gender. Behavioral and ERP results showed that participants were faster in making a gender decision when words were semantically and/or phonologically gender marked than when this was not the case, although the phonological effects were less clear. In conclusion, our data provide evidence that even though participants performed a grammatical gender decision, this task can be influenced by semantic and phonological factors.
  • Schiller, N. O., Bles, M., & Jansma, B. M. (2003). Tracking the time course of phonological encoding in speech production: An event-related brain potential study on internal monitoring. Cognitive Brain Research, 17(3), 819-831. doi:10.1016/S0926-6410(03)00204-0.

    Abstract

    This study investigated the time course of phonological encoding during speech production planning. Previous research has shown that conceptual/semantic information precedes syntactic information in the planning of speech production and that syntactic information is available earlier than phonological information. Here, we studied the relative time courses of the two different processes within phonological encoding, i.e. metrical encoding and syllabification. According to one prominent theory of language production, metrical encoding involves the retrieval of the stress pattern of a word, while syllabification is carried out to construct the syllabic structure of a word. However, the relative timing of these two processes is underspecified in the theory. We employed an implicit picture naming task and recorded event-related brain potentials to obtain fine-grained temporal information about metrical encoding and syllabification. Results revealed that both tasks generated effects that fall within the time window of phonological encoding. However, there was no timing difference between the two effects, suggesting that they occur approximately at the same time.
  • Schiller, N. O., & Caramazza, A. (2003). Grammatical feature selection in noun phrase production: Evidence from German and Dutch. Journal of Memory and Language, 48(1), 169-194. doi:10.1016/S0749-596X(02)00508-9.

    Abstract

    In this study, we investigated grammatical feature selection during noun phrase production in German and Dutch. More specifically, we studied the conditions under which different grammatical genders select either the same or different determiners or suffixes. Pictures of one or two objects paired with a gender-congruent or a gender-incongruent distractor word were presented. Participants named the pictures using a singular or plural noun phrase with the appropriate determiner and/or adjective in German or Dutch. Significant effects of gender congruency were only obtained in the singular condition where the selection of determiners is governed by the target’s gender, but not in the plural condition where the determiner is identical for all genders. When different suffixes were to be selected in the gender-incongruent condition, no gender congruency effect was obtained. The results suggest that the so-called gender congruency effect is really a determiner congruency effect. The overall pattern of results is interpreted as indicating that grammatical feature selection is an automatic consequence of lexical node selection and therefore not subject to interference from other grammatical features. This implies that lexical node and grammatical feature selection operate with distinct principles.
  • Schoot, L., Menenti, L., Hagoort, P., & Segaert, K. (2014). A little more conversation - The influence of communicative context on syntactic priming in brain and behavior. Frontiers in Psychology, 5: 208. doi:10.3389/fpsyg.2014.00208.

    Abstract

    We report on an fMRI syntactic priming experiment in which we measure brain activity for participants who communicate with another participant outside the scanner. We investigated whether syntactic processing during overt language production and comprehension is influenced by having a (shared) goal to communicate. Although theory suggests this is true, the nature of this influence remains unclear. Two hypotheses are tested: i. syntactic priming effects (fMRI and RT) are stronger for participants in the communicative context than for participants doing the same experiment in a non-communicative context, and ii. syntactic priming magnitude (RT) is correlated with the syntactic priming magnitude of the speaker’s communicative partner. Results showed that across conditions, participants were faster to produce sentences with repeated syntax, relative to novel syntax. This behavioral result converged with the fMRI data: we found repetition suppression effects in the left insula extending into left inferior frontal gyrus (BA 47/45), left middle temporal gyrus (BA 21), left inferior parietal cortex (BA 40), left precentral gyrus (BA 6), bilateral precuneus (BA 7), bilateral supplementary motor cortex (BA 32/8) and right insula (BA 47). We did not find support for the first hypothesis: having a communicative intention does not increase the magnitude of syntactic priming effects (either in the brain or in behavior) per se. We did find support for the second hypothesis: if speaker A is strongly/weakly primed by speaker B, then speaker B is primed by speaker A to a similar extent. We conclude that syntactic processing is influenced by being in a communicative context, and that the nature of this influence is bi-directional: speakers are influenced by each other.
  • Schreiweis, C., Bornschein, U., Burguière, E., Kerimoglu, C., Schreiter, S., Dannemann, M., Goyal, S., Rea, E., French, C. A., Puliyadi, R., Groszer, M., Fisher, S. E., Mundry, R., Winter, C., Hevers, W., Pääbo, S., Enard, W., & Graybiel, A. M. (2014). Humanized Foxp2 accelerates learning by enhancing transitions from declarative to procedural performance. Proceedings of the National Academy of Sciences of the United States of America, 111, 14253-14258. doi:10.1073/pnas.1414542111.

    Abstract

    The acquisition of language and speech is uniquely human, but how genetic changes might have adapted the nervous system to this capacity is not well understood. Two human-specific amino acid substitutions in the transcription factor forkhead box P2 (FOXP2) are outstanding mechanistic candidates, as they could have been positively selected during human evolution and as FOXP2 is the sole gene to date firmly linked to speech and language development. When these two substitutions are introduced into the endogenous Foxp2 gene of mice (Foxp2hum), cortico-basal ganglia circuits are specifically affected. Here we demonstrate marked effects of this humanization of Foxp2 on learning and striatal neuroplasticity. Foxp2hum/hum mice learn stimulus–response associations faster than their WT littermates in situations in which declarative (i.e., place-based) and procedural (i.e., response-based) forms of learning could compete during transitions toward proceduralization of action sequences. Striatal districts known to be differently related to these two modes of learning are affected differently in the Foxp2hum/hum mice, as judged by measures of dopamine levels, gene expression patterns, and synaptic plasticity, including an NMDA receptor-dependent form of long-term depression. These findings raise the possibility that the humanized Foxp2 phenotype reflects a different tuning of corticostriatal systems involved in declarative and procedural learning, a capacity potentially contributing to adapting the human brain for speech and language acquisition.

    Files private

    Request files
  • Scott, D. R., & Cutler, A. (1984). Segmental phonology and the perception of syntactic structure. Journal of Verbal Learning and Verbal Behavior, 23, 450-466. Retrieved from http://www.sciencedirect.com/science//journal/00225371.

    Abstract

    Recent research in speech production has shown that syntactic structure is reflected in segmental phonology--the application of certain phonological rules of English (e.g., palatalization and alveolar flapping) is inhibited across phrase boundaries. We examined whether such segmental effects can be used in speech perception as cues to syntactic structure, and the relation between the use of these segmental features as syntactic markers in production and perception. Speakers of American English (a dialect in which the above segmental effects occur) could indeed use the segmental cues in syntax perception; speakers of British English (in which the effects do not occur) were unable to make use of them, while speakers of British English who were long-term residents of the United States showed intermediate performance.
  • Segaert, K., Weber, K., Cladder-Micus, M., & Hagoort, P. (2014). The influence of verb-bound syntactic preferences on the processing of syntactic structures. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(5), 1448-1460. doi:10.1037/a0036796.

    Abstract

    Speakers sometimes repeat syntactic structures across sentences, a phenomenon called syntactic priming. We investigated the influence of verb-bound syntactic preferences on syntactic priming effects in response choices and response latencies for German ditransitive sentences. In the response choices we found inverse preference effects: There were stronger syntactic priming effects for primes in the less preferred structure, given the syntactic preference of the prime verb. In the response latencies we found positive preference effects: There were stronger syntactic priming effects for primes in the more preferred structure, given the syntactic preference of the prime verb. These findings provide further support for the idea that syntactic processing is lexically guided.
  • Seifart, F. (2003). Marqueurs de classe généraux et spécifiques en Miraña. Faits de Langues, 21, 121-132.
  • Senft, G. (2000). [Review of the book Language, identity, and marginality in Indonesia: The changing nature of ritual speech on the island of Sumba by Joel C. Kuipers]. Linguistics, 38, 435-441. doi:10.1515/ling.38.2.435.
  • Senft, G. (2003). [Review of the book Representing space in Oceania: Culture in language and mind ed. by Giovanni Bennardo]. Journal of the Polynesian Society, 112, 169-171.
  • Seuren, P. A. M. (2000). Bewustzijn en taal. Splijtstof, 28(4), 111-123.
  • Seuren, P. A. M. (1964). Dupliek. Levende Talen, 227, 675-680.
  • Seuren, P. A. M. (1971). Chomsky, man en werk. De Gids, 134, 298-308.
  • Seuren, P. A. M. (1971). [Review of the book Introduction à la grammaire générative by Nicolas Ruwet]. Linguistics, 10(78), 111-120. doi:10.1515/ling.1972.10.78.72.
  • Seuren, P. A. M. (1971). [Review of the book La linguistique synchronique by Andre Martinet]. Linguistics, 10(78), 109-111. doi:10.1515/ling.1972.10.78.72.
  • Seuren, P. A. M. (1964). [Review of the book Set theory and syntactic descriptions by William S. Cooper]. Linguistics, 2(10), 73-80. doi:10.1515/ling.1964.2.10.61.
  • Seuren, P. A. M. (1971). [Review of the book Syntaxis by A. Kraak and W. Klooster]. Foundations of Language, 7(3), 441-445.
  • Seuren, P. A. M., & Jaspers, D. (2014). Logico-cognitive structure in the lexicon. Language, 90(3), 607-643. doi:10.1353/lan.2014.0058.

    Abstract

    This study is a prolegomenon to a formal theory of the natural growth of conceptual and lexical fields. Negation, in the various forms in which it occurs in language, is found to be a powerful indicator. Other than in standard logic, natural language negation selects its complement within universes of discourse that are, for practical and functional reasons, restricted in various ways and to different degrees. It is hypothesized that a system of cognitive principles drives recursive processes of universe restriction, which in turn affects logical relations within the restricted universes. This approach provides a new perspective in which to view the well-known clashes between standard logic and natural logical intuitions. Lexicalization in language, especially the morphological incorporation of negation, is limited to highly restricted universes, which explains, for example, why a dog can be said not to be a Catholic, but also not to be a non-Catholic. Cognition is taken to restrict the universe of discourse to contrary pairs, splitting up one or both of the contraries into further subuniverses as a result of further cognitive activity. It is shown how a logically sound square of opposition, expanded to a hexagon (Jacoby 1950, 1960, Sesmat 1951, Blanché 1952, 1953, 1966), is generated by a hierarchy of universe restrictions, defining the notion ‘natural’ for logical systems. The logical hexagon contains two additional vertices, one for ‘some but not all’ (the Y-type) and one for ‘either all or none’ (the U-type), and incorporates both the classic square and the Hamiltonian triangle of contraries. Some is thus considered semantically ambiguous, representing two distinct quantifiers. The pragmaticist claim that the language system contains only the standard logical ‘some perhaps all’ and that the ‘some but not all’ meaning is pragmatically derived from the use of the system is rejected. Four principles are proposed according to which negation selects a complement from the subuniverses at hand. On the basis of these principles and of the logico-cognitive system proposed, the well-known nonlexicalization not only of *nall and *nand but also of many other nonlogical cases found throughout the lexicons of languages is analyzed and explained.
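
    For reference, the six vertices of the logical hexagon the abstract refers to can be written out in a standard Blanché-style notation; this is a conventional rendering (assuming amsmath, with F an arbitrary predicate), not the authors' own formalism:

        % Square of opposition (A, E, I, O) extended to the hexagon with Y and U
        \begin{align*}
          \mathrm{A}&:\ \forall x\, F(x) & \mathrm{E}&:\ \forall x\, \neg F(x)\\
          \mathrm{I}&:\ \exists x\, F(x) & \mathrm{O}&:\ \exists x\, \neg F(x)\\
          \mathrm{Y}&:\ \mathrm{I} \wedge \mathrm{O} \ \ (\text{some but not all}) & \mathrm{U}&:\ \mathrm{A} \vee \mathrm{E} \ \ (\text{either all or none})
        \end{align*}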
  • Seuren, P. A. M. (1989). Neue Entwicklungen im Wahrheitsbegriff. Studia Leibnitiana, 21(2), 155-173.
  • Seuren, P. A. M. (1984). Operator lowering. Linguistics, 22(5), 573-627. doi:10.1515/ling.1984.22.5.573.
  • Seuren, P. A. M. (2000). Presupposition, negation and trivalence. Journal of Linguistics, 36(2), 261-297.
  • Seuren, P. A. M. (1984). The bioprogram hypothesis: Facts and fancy. A commentary on Bickerton "The language bioprogram hypothesis". Behavioral and Brain Sciences, 7(2), 208-209. doi:10.1017/S0140525X00044356.
  • Seuren, P. A. M. (2014). The cognitive ontogenesis of predicate logic. Notre Dame Journal of Formal Logic, 55, 499-532. doi:10.1215/00294527-2798718.

    Abstract

    Since Aristotle and the Stoa, there has been a clash, worsened by modern predicate logic, between logically defined operator meanings and natural intuitions. Pragmatics has tried to neutralize the clash by an appeal to the Gricean conversational maxims. The present study argues that the pragmatic attempt has been unsuccessful. The “softness” of the Gricean explanation fails to do justice to the robustness of the intuitions concerned, leaving the relation between the principles evoked and the observed facts opaque. Moreover, there are cases where the Gricean maxims fail to apply. A more adequate solution consists in the devising of a sound natural logic, part of the innate cognitive equipment of mankind. This account has proved successful in conjunction with a postulated cognitive mechanism in virtue of which the universe of discourse (Un) is stepwise and recursively restricted, so that the negation selects different complements according to the degree of restrictedness of Un. This mechanism explains not only the discrepancies between natural logical intuitions and known logical systems; it also accounts for certain systematic lexicalization gaps in the languages of the world. Finally, it is shown how stepwise restriction of Un produces the ontogenesis of natural predicate logic, while at the same time resolving the intuitive clashes with established logical systems that the Gricean maxims sought to explain.
  • Seuren, P. A. M. (1984). The comparative revisited. Journal of Semantics, 3(1), 109-141. doi:10.1093/jos/3.1-2.109.
  • Seuren, P. A. M. (1980). The delimitation between semantics and pragmatics. Quaderni di Semantica, 1, 108-113; 126-134.
  • Seuren, P. A. M. (1980). Wat is taal? Cahiers Bio-Wetenschappen en Maatschappij, 6(4), 23-29.
  • Sha, L., Wu, X., Yao, Y., Wen, B., Feng, J., Sha, Z., Wang, X., Xing, X., Dou, W., Jin, L., Li, W., Wang, N., Shen, Y., Wang, J., Wu, L., & Xu, Q. (2014). Notch Signaling Activation Promotes Seizure Activity in Temporal Lobe Epilepsy. Molecular Neurobiology, 49(2), 633-644.

    Abstract

    Notch signaling in the nervous system is often regarded as a developmental pathway. However, recent studies have suggested that Notch is associated with neuronal discharges. Here, focusing on temporal lobe epilepsy, we found that Notch signaling was activated in the kainic acid (KA)-induced epilepsy model and in human epileptogenic tissues. Using an acute model of seizures, we showed that DAPT, an inhibitor of Notch, inhibited ictal activity. In contrast, pretreatment with exogenous Jagged1 to elevate Notch signaling before KA application had proconvulsant effects. In vivo, we demonstrated that the impacts of activated Notch signaling on seizures can in part be attributed to the regulatory role of Notch signaling on excitatory synaptic activity in CA1 pyramidal neurons. In vitro, we found that DAPT treatment impaired synaptic vesicle endocytosis in cultured hippocampal neurons. Taken together, our findings suggest a correlation between aberrant Notch signaling and epileptic seizures. Notch signaling is up-regulated in response to seizure activity, and its activation further promotes neuronal excitation of CA1 pyramidal neurons in acute seizures.
  • Shao, Z., Roelofs, A., Acheson, D. J., & Meyer, A. S. (2014). Electrophysiological evidence that inhibition supports lexical selection in picture naming. Brain Research, 1586, 130-142. doi:10.1016/j.brainres.2014.07.009.

    Abstract

    We investigated the neural basis of inhibitory control during lexical selection. Participants overtly named pictures while response times (RTs) and event-related brain potentials (ERPs) were recorded. The difficulty of lexical selection was manipulated by using object and action pictures with high name agreement (few response candidates) versus low name agreement (many response candidates). To assess the involvement of inhibition, we conducted delta plot analyses of naming RTs and examined the N2 component of the ERP. We found longer mean naming RTs and a larger N2 amplitude in the low relative to the high name agreement condition. For action naming we found a negative correlation between the slopes of the slowest delta segment and the difference in N2 amplitude between the low and high name agreement conditions. The converging behavioral and electrophysiological evidence suggests that selective inhibition is engaged to reduce competition during lexical selection in picture naming.
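
    Delta plot analyses of this kind are typically built from quantile bins of the RT distributions, with the slope of the slowest (last) segment read as an index of inhibition. A minimal sketch of that computation in Python, with hypothetical data and bin count (not the authors' analysis code):

        import numpy as np

        def delta_plot(rt_easy, rt_hard, n_bins=5):
            """Split each RT distribution into quantile bins, average per bin, and
            return mean RT per bin, the condition difference (delta) per bin, and
            the slope of the slowest segment (hypothetical sketch)."""
            def bin_means(rts):
                rts = np.sort(np.asarray(rts, dtype=float))
                return np.array([chunk.mean() for chunk in np.array_split(rts, n_bins)])
            easy, hard = bin_means(rt_easy), bin_means(rt_hard)
            mean_rt = (easy + hard) / 2.0              # x-axis of the delta plot
            delta = hard - easy                        # naming cost per bin
            slowest_slope = (delta[-1] - delta[-2]) / (mean_rt[-1] - mean_rt[-2])
            return mean_rt, delta, slowest_slope

        # usage with simulated naming latencies (ms) for high vs. low name agreement
        rng = np.random.default_rng(0)
        high_agreement = rng.gamma(shape=8.0, scale=80.0, size=300)
        low_agreement = rng.gamma(shape=8.0, scale=90.0, size=300)
        print(delta_plot(high_agreement, low_agreement))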
  • Shao, Z., Roelofs, A., & Meyer, A. S. (2014). Predicting naming latencies for action pictures: Dutch norms. Behavior Research Methods, 46, 274-283. doi:10.3758/s13428-013-0358-6.

    Abstract

    The present study provides Dutch norms for age of acquisition, familiarity, imageability, image agreement, visual complexity, word frequency, and word length (in syllables) for 124 line drawings of actions. Ratings were obtained from 117 Dutch participants. Word frequency was determined on the basis of the SUBTLEX-NL corpus (Keuleers, Brysbaert, & New, Behavior Research Methods, 42, 643–650, 2010). For 104 of the pictures, naming latencies and name agreement were determined in a separate naming experiment with 74 native speakers of Dutch. The Dutch norms closely corresponded to the norms for British English. Multiple regression analysis showed that age of acquisition, imageability, image agreement, visual complexity, and name agreement were significant predictors of naming latencies, whereas word frequency and word length were not. Combined with the results of a principal-component analysis, these findings suggest that variables influencing the processes of conceptual preparation and lexical selection affect latencies more strongly than do variables influencing word-form encoding.

    Additional information

    Shao_Behav_Res_2013_Suppl_Mat.doc
  • Shao, Z., Janse, E., Visser, K., & Meyer, A. S. (2014). What do verbal fluency tasks measure? Predictors of verbal fluency performance in older adults. Frontiers in Psychology, 5: 772. doi:10.3389/fpsyg.2014.00772.

    Abstract

    This study examined the contributions of verbal ability and executive control to verbal fluency performance in older adults (n=82). Verbal fluency was assessed in letter and category fluency tasks, and performance on these tasks was related to indicators of vocabulary size, lexical access speed, updating, and inhibition ability. In regression analyses the number of words produced in both fluency tasks was predicted by updating ability, and the speed of the first response was predicted by vocabulary size and, for category fluency only, lexical access speed. These results highlight the hybrid character of both fluency tasks, which may limit their usefulness for research and clinical purposes.
  • Shayan, S., Ozturk, O., Bowerman, M., & Majid, A. (2014). Spatial metaphor in language can promote the development of cross-modal mappings in children. Developmental Science, 17(4), 636-643. doi:10.1111/desc.12157.

    Abstract

    Pitch is often described metaphorically: for example, Farsi and Turkish speakers use a ‘thickness’ metaphor (low sounds are ‘thick’ and high sounds are ‘thin’), while German and English speakers use a height metaphor (‘low’, ‘high’). This study examines how child and adult speakers of Farsi, Turkish, and German map pitch and thickness using a cross-modal association task. All groups, except for German children, performed significantly better than chance. German-speaking adults’ success suggests the pitch-to-thickness association can be learned by experience. But the fact that German children were at chance indicates that this learning takes time. Intriguingly, Farsi and Turkish children's performance suggests that learning cross-modal associations can be boosted through experience with consistent metaphorical mappings in the input language.
  • Shkaravska, O., & Van Eekelen, M. (2014). Univariate polynomial solutions of algebraic difference equations. Journal of Symbolic Computation, 60, 15-28. doi:10.1016/j.jsc.2013.10.010.

    Abstract

    Contrary to linear difference equations, there is no general theory of difference equations of the form G(P(x − τ_1), …, P(x − τ_s)) + G_0(x) = 0, with τ_i ∈ K, G(x_1, …, x_s) ∈ K[x_1, …, x_s] of total degree D ⩾ 2 and G_0(x) ∈ K[x], where K is a field of characteristic zero. This article is concerned with the following problem: given τ_i, G and G_0, find an upper bound on the degree d of a polynomial solution P(x), if it exists. In the presented approach the problem is reduced to constructing a univariate polynomial for which d is a root. The authors formulate a sufficient condition under which such a polynomial exists. Using this condition, they give an effective bound on d, for instance, for all difference equations of the form G(P(x − a), P(x − a − 1), P(x − a − 2)) + G_0(x) = 0 with quadratic G, and all difference equations of the form G(P(x), P(x − τ)) + G_0(x) = 0 with G having an arbitrary degree.
  • Silva, S., Branco, P., Barbosa, F., Marques-Teixeira, J., Petersson, K. M., & Castro, S. L. (2014). Musical phrase boundaries, wrap-up and the closure positive shift. Brain Research, 1585, 99-107. doi:10.1016/j.brainres.2014.08.025.

    Abstract

    We investigated global integration (wrap-up) processes at the boundaries of musical phrases by comparing the effects of well and non-well formed phrases on event-related potentials time-locked to two boundary points: the onset and the offset of the boundary pause. The Closure Positive Shift, which is elicited at the boundary offset, was not modulated by the quality of phrase structure (well vs. non-well formed). In contrast, the boundary onset potentials showed different patterns for well and non-well formed phrases. Our results contribute to specify the functional meaning of the Closure Positive Shift in music, shed light on the large-scale structural integration of musical input, and raise new hypotheses concerning shared resources between music and language.
  • Silva, S., Barbosa, F., Marques-Teixeira, J., Petersson, K. M., & Castro, S. L. (2014). You know when: Event-related potentials and theta/beta power indicate boundary prediction in music. Journal of Integrative Neuroscience, 13(1), 19-34. doi:10.1142/S0219635214500022.

    Abstract

    Neuroscientific and musicological approaches to music cognition indicate that listeners familiarized in the Western tonal tradition expect a musical phrase boundary at predictable time intervals. However, phrase boundary prediction processes in music remain untested. We analyzed event-related potentials (ERPs) and event-related induced power changes at the onset and offset of a boundary pause. We made comparisons with modified melodies, where the pause was omitted and filled by tones. The offset of the pause elicited a closure positive shift (CPS), indexing phrase boundary detection. The onset of the filling tones elicited significant increases in theta and beta powers. In addition, the P2 component was larger when the filling tones started than when they ended. The responses to boundary omission suggest that listeners expected to hear a boundary pause. Therefore, boundary prediction seems to coexist with boundary detection in music segmentation.
  • Simanova, I., Hagoort, P., Oostenveld, R., & Van Gerven, M. A. J. (2014). Modality-independent decoding of semantic information from the human brain. Cerebral Cortex, 24, 426-434. doi:10.1093/cercor/bhs324.

    Abstract

    An ability to decode semantic information from fMRI spatial patterns has been demonstrated in previous studies mostly for 1 specific input modality. In this study, we aimed to decode semantic category independent of the modality in which an object was presented. Using a searchlight method, we were able to predict the stimulus category from the data while participants performed a semantic categorization task with 4 stimulus modalities (spoken and written names, photographs, and natural sounds). Significant classification performance was achieved in all 4 modalities. Modality-independent decoding was implemented by training and testing the searchlight method across modalities. This allowed the localization of those brain regions, which correctly discriminated between the categories, independent of stimulus modality. The analysis revealed large clusters of voxels in the left inferior temporal cortex and in frontal regions. These voxels also allowed category discrimination in a free recall session where subjects recalled the objects in the absence of external stimuli. The results show that semantic information can be decoded from the fMRI signal independently of the input modality and have clear implications for understanding the functional mechanisms of semantic memory.
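
    The core of a modality-independent analysis, training a classifier on response patterns from one stimulus modality and testing it on another, can be sketched as follows with scikit-learn. The arrays and labels are hypothetical stand-ins, and the published analysis used a whole-brain searchlight rather than a single feature matrix:

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.metrics import accuracy_score

        # hypothetical voxel patterns (trials x voxels) with four category labels
        rng = np.random.default_rng(0)
        X_written, y_written = rng.standard_normal((80, 500)), np.repeat([0, 1, 2, 3], 20)
        X_spoken, y_spoken = rng.standard_normal((80, 500)), np.repeat([0, 1, 2, 3], 20)

        # cross-modal decoding: train on written names, test on spoken names
        clf = LinearSVC().fit(X_written, y_written)
        print("cross-modal accuracy:", accuracy_score(y_spoken, clf.predict(X_spoken)))

    With real data, above-chance cross-modal accuracy within a local searchlight sphere would indicate that the region carries category information independent of the input modality.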
  • Simon, E., & Sjerps, M. J. (2014). Developing non-native vowel representations: a study on child second language acquisition. COPAL: Concordia Working Papers in Applied Linguistics, 5, 693-708.

    Abstract

    This study examines what stage 9‐12‐year‐old Dutch‐speaking children have reached in the development of their L2 lexicon, focusing on its phonological specificity. Two experiments were carried out with a group of Dutch‐speaking children and adults learning English. In a first task, listeners were asked to judge Dutch words which were presented with either the target Dutch vowel or with an English vowel synthetically inserted. The second experiment was a mirror of the first, i.e. with English words and English or Dutch vowels inserted. It was examined to what extent the listeners accepted substitutions of Dutch vowels by English ones, and vice versa. The results of the experiments suggest that the children have not reached the same degree of phonological specificity of L2 words as the adults. Children not only experience a strong influence of their native vowel categories when listening to L2 words, they also apply less strict criteria.
  • Simon, E., Sjerps, M. J., & Fikkert, P. (2014). Phonological representations in children’s native and non-native lexicon. Bilingualism: Language and Cognition, 17(1), 3-21. doi:10.1017/S1366728912000764.

    Abstract

    This study investigated the phonological representations of vowels in children's native and non-native lexicons. Two experiments were mispronunciation tasks (i.e., a vowel in words was substituted by another vowel from the same language). These were carried out by Dutch-speaking 9–12-year-old children and Dutch-speaking adults, in their native (Experiment 1, Dutch) and non-native (Experiment 2, English) language. A third experiment tested vowel discrimination. In Dutch, both children and adults could accurately detect mispronunciations. In English, adults, and especially children, detected substitutions of native vowels (i.e., vowels that are present in the Dutch inventory) by non-native vowels more easily than changes in the opposite direction. Experiment 3 revealed that children could accurately discriminate most of the vowels. The results indicate that children's L1 categories strongly influenced their perception of English words. However, the data also reveal a hint of the development of L2 phoneme categories.

    Additional information

    Simon_SuppMaterial.pdf
  • Simpson, N. H., Addis, L., Brandler, W. M., Slonims, V., Clark, A., Watson, J., Scerri, T. S., Hennessy, E. R., Stein, J., Talcott, J., Conti-Ramsden, G., O'Hare, A., Baird, G., Fairfax, B. P., Knight, J. C., Paracchini, S., Fisher, S. E., Newbury, D. F., & The SLI Consortium (2014). Increased prevalence of sex chromosome aneuploidies in specific language impairment and dyslexia. Developmental Medicine and Child Neurology, 56, 346-353. doi:10.1111/dmcn.12294.

    Abstract

    Aim: Sex chromosome aneuploidies increase the risk of spoken or written language disorders but individuals with specific language impairment (SLI) or dyslexia do not routinely undergo cytogenetic analysis. We assess the frequency of sex chromosome aneuploidies in individuals with language impairment or dyslexia. Method: Genome-wide single nucleotide polymorphism genotyping was performed in three sample sets: a clinical cohort of individuals with speech and language deficits (87 probands: 61 males, 26 females; age range 4 to 23 years), a replication cohort of individuals with SLI, from both clinical and epidemiological samples (209 probands: 139 males, 70 females; age range 4 to 17 years), and a set of individuals with dyslexia (314 probands: 224 males, 90 females; age range 7 to 18 years). Results: In the clinical language-impaired cohort, three abnormal karyotypic results were identified in probands (proband yield 3.4%). In the SLI replication cohort, six abnormalities were identified providing a consistent proband yield (2.9%). In the sample of individuals with dyslexia, two sex chromosome aneuploidies were found giving a lower proband yield of 0.6%. In total, two XYY, four XXY (Klinefelter syndrome), three XXX, one XO (Turner syndrome), and one unresolved karyotype were identified. Interpretation: The frequency of sex chromosome aneuploidies within each of the three cohorts was increased over the expected population frequency (approximately 0.25%) suggesting that genetic testing may prove worthwhile for individuals with language and literacy problems and normal non-verbal IQ. Early detection of these aneuploidies can provide information and direct the appropriate management for individuals.
  • Slobin, D. I., Ibarretxe-Antuñano, I., Kopecka, A., & Majid, A. (2014). Manners of human gait: A crosslinguistic event-naming study. Cognitive Linguistics, 25, 701-741. doi:10.1515/cog-2014-0061.

    Abstract

    Crosslinguistic studies of expressions of motion events have found that Talmy's binary typology of verb-framed and satellite-framed languages is reflected in language use. In particular, Manner of motion is relatively more elaborated in satellite-framed languages (e.g., in narrative, picture description, conversation, translation). The present research builds on previous controlled studies of the domain of human motion by eliciting descriptions of a wide range of manners of walking and running filmed in natural circumstances. Descriptions were elicited from speakers of two satellite-framed languages (English, Polish) and three verb-framed languages (French, Spanish, Basque). The sampling of events in this study resulted in four major semantic clusters for these five languages: walking, running, non-canonical gaits (divided into bounce-and-recoil and syncopated movements), and quadrupedal movement (crawling). Counts of verb types found a broad tendency for satellite-framed languages to show greater lexical diversity, along with substantial within-group variation. Going beyond most earlier studies, we also examined extended descriptions of manner of movement, isolating types of manner. The following categories of manner were identified and compared: attitude of actor, rate, effort, posture, and motor patterns of legs and feet. Satellite-framed speakers tended to elaborate expressive manner verbs, whereas verb-framed speakers used modification to add manner to neutral motion verbs.
  • Smeets, C. J. L. M., & Verbeek, D. (2014). Review Cerebellar ataxia and functional genomics: Identifying the routes to cerebellar neurodegeneration. Biochimica et Biophysica Acta: BBA, 1842(10), 2030-2038. doi:10.1016/j.bbadis.2014.04.004.

    Abstract

    Cerebellar ataxias are progressive neurodegenerative disorders characterized by atrophy of the cerebellum leading to motor dysfunction, balance problems, and limb and gait ataxia. These include among others, the dominantly inherited spinocerebellar ataxias, recessive cerebellar ataxias such as Friedreich's ataxia, and X-linked cerebellar ataxias. Since all cerebellar ataxias display considerable overlap in their disease phenotypes, common pathological pathways must underlie the selective cerebellar neurodegeneration. Therefore, it is important to identify the molecular mechanisms and routes to neurodegeneration that cause cerebellar ataxia. In this review, we discuss the use of functional genomic approaches including whole-exome sequencing, genome-wide gene expression profiling, miRNA profiling, epigenetic profiling, and genetic modifier screens to reveal the underlying pathogenesis of various cerebellar ataxias. These approaches have resulted in the identification of many disease genes, modifier genes, and biomarkers correlating with specific stages of the disease. This article is part of a Special Issue entitled: From Genome to Function.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Literacy effects on language and vision: Emergent effects from an amodal shared resource (ASR) computational model. Cognitive Psychology, 75, 28-54. doi:10.1016/j.cogpsych.2014.07.002.

    Abstract

    Learning to read and write requires an individual to connect additional orthographic representations to pre-existing mappings between phonological and semantic representations of words. Past empirical results suggest that the process of learning to read and write (at least in alphabetic languages) elicits changes in the language processing system, by either increasing the cognitive efficiency of mapping between representations associated with a word, or by changing the granularity of phonological processing of spoken language, or through a combination of both. Behavioural effects of literacy have typically been assessed in offline explicit tasks that have addressed only phonological processing. However, a recent eye tracking study compared high and low literate participants on effects of phonology and semantics in processing measured implicitly using eye movements. High literates’ eye movements were more affected by phonological overlap in online speech than low literates, with only subtle differences observed in semantics. We determined whether these effects were due to cognitive efficiency and/or granularity of speech processing in a multimodal model of speech processing – the amodal shared resource model (ASR, Smith, Monaghan, & Huettig, 2013). We found that cognitive efficiency in the model had only a marginal effect on semantic processing and did not affect performance for phonological processing, whereas fine-grained versus coarse-grained phonological representations in the model simulated the high/low literacy effects on phonological processing, suggesting that literacy has a focused effect in changing the grain-size of phonological mappings.
  • Smith, M. R., Cutler, A., Butterfield, S., & Nimmo-Smith, I. (1989). The perception of rhythm and word boundaries in noise-masked speech. Journal of Speech and Hearing Research, 32, 912-920.

    Abstract

    The present experiment tested the suggestion that human listeners may exploit durational information in speech to parse continuous utterances into words. Listeners were presented with six-syllable unpredictable utterances under noise-masking, and were required to judge between alternative word strings as to which best matched the rhythm of the masked utterances. For each utterance there were four alternative strings: (a) an exact rhythmic and word boundary match, (b) a rhythmic mismatch, and (c) two utterances with the same rhythm as the masked utterance, but different word boundary locations. Listeners were clearly able to perceive the rhythm of the masked utterances: The rhythmic mismatch was chosen significantly less often than any other alternative. Within the three rhythmically matched alternatives, the exact match was chosen significantly more often than either word boundary mismatch. Thus, listeners both perceived speech rhythm and used durational cues effectively to locate the position of word boundaries.
  • Smits, R., Warner, N., McQueen, J. M., & Cutler, A. (2003). Unfolding of phonetic information over time: A database of Dutch diphone perception. Journal of the Acoustical Society of America, 113(1), 563-574. doi:10.1121/1.1525287.

    Abstract

    We present the results of a large-scale study on speech perception, assessing the number and type of perceptual hypotheses which listeners entertain about possible phoneme sequences in their language. Dutch listeners were asked to identify gated fragments of all 1179 diphones of Dutch, providing a total of 488,520 phoneme categorizations. The results manifest orderly uptake of acoustic information in the signal. Differences across phonemes in the rate at which fully correct recognition was achieved arose as a result of whether or not potential confusions could occur with other phonemes of the language (long with short vowels, affricates with their initial components, etc.). These data can be used to improve models of how acoustic phonetic information is mapped onto the mental lexicon during speech comprehension.
  • Smits, R. (2000). Temporal distribution of information for human consonant recognition in VCV utterances. Journal of Phonetics, 28, 111-135. doi:10.1006/jpho.2000.0107.

    Abstract

    The temporal distribution of perceptually relevant information for consonant recognition in British English VCVs is investigated. The information distribution in the vicinity of consonantal closure and release was measured by presenting initial and final portions, respectively, of naturally produced VCV utterances to listeners for categorization. A multidimensional scaling analysis of the results provided highly interpretable, four-dimensional geometrical representations of the confusion patterns in the categorization data. In addition, transmitted information as a function of truncation point was calculated for the features manner, place, and voicing. The effects of speaker, vowel context, stress, and distinctive feature on the resulting information distributions were tested statistically. It was found that, although all factors are significant, the location and spread of the distributions depend principally on the distinctive feature, i.e., the temporal distribution of perceptually relevant information is very different for the features manner, place, and voicing.
  • Spada, D., Verga, L., Iadanza, A., Tettamanti, M., & Perani, D. (2014). The auditory scene: An fMRI study on melody and accompaniment in professional pianists. NeuroImage, 102(2), 764-775. doi:10.1016/j.neuroimage.2014.08.036.

    Abstract

    The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both these two predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes.
  • Spinelli, E., McQueen, J. M., & Cutler, A. (2003). Processing resyllabified words in French. Journal of Memory and Language, 48(2), 233-254. doi:10.1016/S0749-596X(02)00513-2.
  • Stergiakouli, E., Gaillard, R., Tavaré, J. M., Balthasar, N., Loos, R. J., Taal, H. R., Evans, D. M., Rivadeneira, F., St Pourcain, B., Uitterlinden, A. G., Kemp, J. P., Hofman, A., Ring, S. M., Cole, T. J., Jaddoe, V. W. V., Davey Smith, G., & Timpson, N. J. (2014). Genome-wide association study of height-adjusted BMI in childhood identifies functional variant in ADCY3. Obesity, 22(10), 2252-2259. doi:10.1002/oby.20840.

    Abstract

    OBJECTIVE: Genome-wide association studies (GWAS) of BMI are mostly undertaken under the assumption that "kg/m^2" is an index of weight fully adjusted for height, but in general this is not true. The aim here was to assess the contribution of common genetic variation to an adjusted version of that phenotype which appropriately accounts for covariation in height in children. METHODS: A GWAS of height-adjusted BMI (BMI[x] = weight/height^x), calculated to be uncorrelated with height, in 5809 participants (mean age 9.9 years) from the Avon Longitudinal Study of Parents and Children (ALSPAC) was performed. RESULTS: GWAS based on BMI[x] yielded marked differences in genome-wide results profile. SNPs in ADCY3 (adenylate cyclase 3) were associated at genome-wide significance level (rs11676272: 0.28 kg/m^3.1 change per allele G (0.19, 0.38), P = 6 × 10^-9). In contrast, they showed marginal evidence of association with conventional BMI [rs11676272: 0.25 kg/m^2 (0.15, 0.35), P = 6 × 10^-7]. Results were replicated in an independent sample, the Generation R study. CONCLUSIONS: Analysis of BMI[x] showed differences to that of conventional BMI. The association signal at ADCY3 appeared to be driven by a missense variant and it was strongly correlated with expression of this gene. Our work highlights the importance of well understood phenotype use (and the danger of convention) in characterising genetic contributions to complex traits.
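
    One common way to obtain a height-uncorrelated index of this kind is to estimate the exponent x from the data, for example as the slope of a regression of log weight on log height. The Python sketch below illustrates that idea with simulated values; the numbers are hypothetical and this is not necessarily the exact procedure used in the study:

        import numpy as np

        # simulated child data: height in metres, weight in kg (hypothetical values)
        rng = np.random.default_rng(1)
        height = rng.normal(1.40, 0.07, size=5000)
        weight = 16.5 * height ** 3.1 * np.exp(rng.normal(0.0, 0.15, size=5000))

        # slope of log(weight) on log(height) gives x in BMI[x] = weight / height**x
        x_hat = np.polyfit(np.log(height), np.log(weight), 1)[0]
        bmi_x = weight / height ** x_hat

        # BMI[x] is (nearly) uncorrelated with height, unlike the conventional kg/m^2 index
        print(round(x_hat, 2),
              round(np.corrcoef(height, bmi_x)[0, 1], 3),
              round(np.corrcoef(height, weight / height ** 2)[0, 1], 3))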

    Additional information

    oby20840-sup-0001-suppinfo.docx
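
    The height-adjusted phenotype described in the abstract above, BMI[x] = weight/height^x with x chosen so that the index is uncorrelated with height, can be illustrated with a short sketch. The Python fragment below is a minimal, hypothetical illustration under stated assumptions, not the ALSPAC analysis pipeline: it estimates x from a log-log regression of weight on height and then computes BMI[x]; all variable names and the simulated data are assumptions introduced for illustration.

        # Minimal sketch (assumption: not the authors' pipeline) of the
        # height-adjusted BMI phenotype BMI[x] = weight / height**x.
        import numpy as np

        def fit_height_exponent(weight_kg, height_m):
            # Slope of a log-log regression of weight on height; with this x,
            # log(BMI[x]) is orthogonal to log(height) by construction, so
            # BMI[x] itself is approximately uncorrelated with height.
            slope, _intercept = np.polyfit(np.log(height_m), np.log(weight_kg), deg=1)
            return slope

        def height_adjusted_bmi(weight_kg, height_m, x):
            return weight_kg / height_m ** x

        # Illustrative use with simulated (made-up) child data.
        rng = np.random.default_rng(0)
        height = rng.normal(1.39, 0.06, size=5809)                          # metres
        weight = 16.5 * height ** 3.1 * rng.lognormal(0.0, 0.1, size=5809)  # kg
        x_hat = fit_height_exponent(weight, height)
        bmi_x = height_adjusted_bmi(weight, height, x_hat)
        print(round(x_hat, 2), round(float(np.corrcoef(bmi_x, height)[0, 1]), 3))

    The fitted exponent recovers a value close to the one used to simulate the data, and the resulting BMI[x] shows essentially no correlation with height, which is the defining property of the phenotype.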
  • Stine-Morrow, E., Payne, B., Roberts, B., Kramer, A., Morrow, D., Payne, L., Hill, P., Jackson, J., Gao, X., Noh, S., Janke, M., & Parisi, J. (2014). Training versus engagement as paths to cognitive enrichment with aging. Psychology and Aging, 29, 891-906. doi:10.1037/a0038244.

    Abstract

    While a training model of cognitive intervention targets the improvement of particular skills through instruction and practice, an engagement model is based on the idea that being embedded in an intellectually and socially complex environment can impact cognition, perhaps even broadly, without explicit instruction. We contrasted these 2 models of cognitive enrichment by randomly assigning healthy older adults to a home-based inductive reasoning training program, a team-based competitive program in creative problem solving, or a wait-list control. As predicted, those in the training condition showed selective improvement in inductive reasoning. Those in the engagement condition, on the other hand, showed selective improvement in divergent thinking, a key ability exercised in creative problem solving. On average, then, both groups appeared to show ability-specific effects. However, moderators of change differed somewhat for those in the engagement and training interventions. Generally, those who started either intervention with a more positive cognitive profile showed more cognitive growth, suggesting that cognitive resources enabled individuals to take advantage of environmental enrichment. Only in the engagement condition did initial levels of openness and social network size moderate intervention effects on cognition, suggesting that comfort with novelty and an ability to manage social resources may be additional factors contributing to the capacity to take advantage of the environmental complexity associated with engagement. Collectively, these findings suggest that training and engagement models may offer alternative routes to cognitive resilience in late life.

    Files private

    Request files
  • Stivers, T., Mangione-Smith, R., Elliott, M. N., McDonald, L., & Heritage, J. (2003). Why do physicians think parents expect antibiotics? What parents report vs what physicians believe. Journal of Family Practice, 52(2), 140-147.
  • Stolk, A., Noordzij, M. L., Verhagen, L., Volman, I., Schoffelen, J.-M., Oostenveld, R., Hagoort, P., & Toni, I. (2014). Cerebral coherence between communicators marks the emergence of meaning. Proceedings of the National Academy of Sciences of the United States of America, 111, 18183-18188. doi:10.1073/pnas.1414886111.

    Abstract

    How can we understand each other during communicative interactions? An influential suggestion holds that communicators are primed by each other’s behaviors, with associative mechanisms automatically coordinating the production of communicative signals and the comprehension of their meanings. An alternative suggestion posits that mutual understanding requires shared conceptualizations of a signal’s use, i.e., “conceptual pacts” that are abstracted away from specific experiences. Both accounts predict coherent neural dynamics across communicators, aligned either to the occurrence of a signal or to the dynamics of conceptual pacts. Using coherence spectral-density analysis of cerebral activity simultaneously measured in pairs of communicators, this study shows that establishing mutual understanding of novel signals synchronizes cerebral dynamics across communicators’ right temporal lobes. This interpersonal cerebral coherence occurred only within pairs with a shared communicative history, and at temporal scales independent from signals’ occurrences. These findings favor the notion that meaning emerges from shared conceptualizations of a signal’s use.
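
    As a rough illustration of the coherence spectral-density idea referred to in the abstract above, the sketch below computes magnitude-squared coherence between two simulated signals that share a common slow component. The sampling rate, signal construction, and variable names are assumptions made for illustration and do not reflect the study's actual acquisition or source-level analysis.

        # Minimal sketch (assumption: not the study's pipeline) of coherence
        # spectral-density analysis between two simultaneously recorded signals.
        import numpy as np
        from scipy.signal import coherence

        fs = 250.0                      # assumed sampling rate in Hz
        t = np.arange(0, 60, 1 / fs)    # 60 s of simulated "recording"

        rng = np.random.default_rng(1)
        shared = np.sin(2 * np.pi * 1.0 * t)           # slow component shared by the pair
        signal_a = shared + rng.normal(0, 1, t.size)   # "communicator A"
        signal_b = shared + rng.normal(0, 1, t.size)   # "communicator B"

        # Magnitude-squared coherence as a function of frequency (Welch's method).
        freqs, coh = coherence(signal_a, signal_b, fs=fs, nperseg=1024)

        # Coherence near the shared 1 Hz component should exceed the broadband level.
        print(f"coherence at ~1 Hz: {coh[np.argmin(np.abs(freqs - 1.0))]:.2f}")

    In this toy example, coherence peaks at the frequency of the shared component; in the study, the analogous quantity was computed across the cerebral activity of two interacting participants rather than within a single recording.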
  • Stolk, A., Noordzij, M. L., Volman, I., Verhagen, L., Overeem, S., van Elswijk, G., Bloem, B., Hagoort, P., & Toni, I. (2014). Understanding communicative actions: A repetitive TMS study. Cortex, 51, 25-34. doi:10.1016/j.cortex.2013.10.005.

    Abstract

    Despite the ambiguity inherent in human communication, people are remarkably efficient in establishing mutual understanding. Studying how people communicate in novel settings provides a window into the mechanisms supporting the human competence to rapidly generate and understand novel shared symbols, a fundamental property of human communication. Previous work indicates that the right posterior superior temporal sulcus (pSTS) is involved when people understand the intended meaning of novel communicative actions. Here, we set out to test whether normal functioning of this cerebral structure is required for understanding novel communicative actions using inhibitory low-frequency repetitive transcranial magnetic stimulation (rTMS). A factorial experimental design contrasted two tightly matched stimulation sites (right pSTS vs. left MT+, i.e. a contiguous homotopic task-relevant region) and tasks (a communicative task vs. a visual tracking task that used the same sequences of stimuli). Overall task performance was not affected by rTMS, whereas changes in task performance over time were disrupted according to TMS site and task combinations. Namely, rTMS over pSTS led to a diminished ability to improve action understanding on the basis of recent communicative history, while rTMS over MT+ perturbed improvement in visual tracking over trials. These findings qualify the contributions of the right pSTS to human communicative abilities, showing that this region might be necessary for incorporating previous knowledge, accumulated during interactions with a communicative partner, to constrain the inferential process that leads to action understanding.
