Publications

  • Phok, K., Moisan, A., Rinaldi, D., Brucato, N., Carpousis, A. J., Gaspin, C., & Clouet-d'Orval, B. (2011). Identification of CRISPR and riboswitch related RNAs among novel non-coding RNAs of the euryarchaeon Pyrococcus abyssi. BMC Genomics, 12, 312. doi:10.1186/1471-2164-12-312.

    Abstract

    Background

    Noncoding RNA (ncRNA) has been recognized as an important regulator of gene expression networks in Bacteria and Eucaryota. Little is known about ncRNA in thermococcal archaea except for the eukaryotic-like C/D and H/ACA modification guide RNAs.
    Results

    Using a combination of in silico and experimental approaches, we identified and characterized novel P. abyssi ncRNAs transcribed from 12 intergenic regions, ten of which are conserved throughout the Thermococcales. Several of them accumulate in the late-exponential phase of growth. Analysis of the genomic context and sequence conservation amongst related thermococcal species revealed two novel P. abyssi ncRNA families. The CRISPR family is comprised of crRNAs expressed from two of the four P. abyssi CRISPR cassettes. The 5'UTR derived family includes four conserved ncRNAs, two of which have features similar to known bacterial riboswitches. Several of the novel ncRNAs have sequence similarities to orphan OrfB transposase elements. Based on RNA secondary structure predictions and experimental results, we show that three of the twelve ncRNAs include Kink-turn RNA motifs, arguing for a biological role of these ncRNAs in the cell. Furthermore, our results show that several of the ncRNAs are subjected to processing events by enzymes that remain to be identified and characterized.
    Conclusions

    This work proposes a revised annotation of CRISPR loci in P. abyssi and expands our knowledge of ncRNAs in the Thermococcales, thus providing a starting point for studies needed to elucidate their biological function.
  • Piai, V., Roelofs, A., & Schriefers, H. (2011). Semantic interference in immediate and delayed naming and reading: Attention and task decisions. Journal of Memory and Language, 64, 404-423. doi:10.1016/j.jml.2011.01.004.

    Abstract

    Disagreement exists about whether lexical selection in word production is a competitive process. Competition predicts semantic interference from distractor words in immediate but not in delayed picture naming. In contrast, Janssen, Schirm, Mahon, and Caramazza (2008) obtained semantic interference in delayed picture naming when participants had to decide between picture naming and oral reading depending on the distractor word’s colour. We report three experiments that examined the role of such task decisions. In a single-task situation requiring picture naming only (Experiment 1), we obtained semantic interference in immediate but not in delayed naming. In a task-decision situation (Experiments 2 and 3), no semantic effects were obtained in immediate and delayed picture naming and word reading using either the materials of Experiment 1 or the materials of Janssen et al. (2008). We present an attentional account in which task decisions may hide or reveal semantic interference from lexical competition depending on the amount of parallelism between task-decision and picture–word processing.
  • Pickering, M. J., & Majid, A. (2007). What are implicit causality and consequentiality? Language and Cognitive Processes, 22(5), 780-788. doi:10.1080/01690960601119876.

    Abstract

    Much work in psycholinguistics and social psychology has investigated the notion of implicit causality associated with verbs. Crinean and Garnham (2006) relate implicit causality to another phenomenon, implicit consequentiality. We argue that they and other researchers have confused the meanings of events and the reasons for those events, so that particular thematic roles (e.g., Agent, Patient) are taken to be causes or consequences of those events by definition. In accord with Garvey and Caramazza (1974), we propose that implicit causality and consequentiality are probabilistic notions that are straightforwardly related to the explicit causes and consequences of events and are analogous to other biases investigated in psycholinguistics.
  • Pijnacker, J., Geurts, B., Van Lambalgen, M., Buitelaar, J., & Hagoort, P. (2011). Reasoning with exceptions: An event-related brain potentials study. Journal of Cognitive Neuroscience, 23, 471-480. doi:10.1162/jocn.2009.21360.

    Abstract

    Defeasible inferences are inferences that can be revised in the light of new information. Although defeasible inferences are pervasive in everyday communication, little is known about how and when they are processed by the brain. This study examined the electrophysiological signature of defeasible reasoning using a modified version of the suppression task. Participants were presented with conditional inferences (of the type “if p, then q; p, therefore q”) that were preceded by a congruent or a disabling context. The disabling context contained a possible exception or precondition that prevented people from drawing the conclusion. Acceptability of the conclusion was indeed lower in the disabling condition compared to the congruent condition. Further, we found a large sustained negativity at the conclusion of the disabling condition relative to the congruent condition, which started around 250 msec and was persistent throughout the entire epoch. Possible accounts for the observed effect are discussed.
  • Poletiek, F. H. (2011). You can't have your hypothesis and test it: The importance of utilities in theories of reasoning. Behavioral and Brain Sciences, 34(2), 87-88. doi:10.1017/S0140525X10002980.
  • St Pourcain, B., Mandy, W. P., Heron, J., Golding, J., Davey Smith, G., & Skuse, D. H. (2011). Links between co-occurring social-communication and hyperactive-inattentive trait trajectories. Journal of the American Academy of Child & Adolescent Psychiatry, 50(9), 892-902.e5. doi:10.1016/j.jaac.2011.05.015.

    Abstract

    OBJECTIVE: There is overlap between an autistic and hyperactive-inattentive symptomatology when studied cross-sectionally. This study is the first to examine the longitudinal pattern of association between social-communication deficits and hyperactive-inattentive symptoms in the general population, from childhood through adolescence. We explored the interrelationship between trajectories of co-occurring symptoms, and sought evidence for shared prenatal/perinatal risk factors. METHOD: Study participants were 5,383 singletons of white ethnicity from the Avon Longitudinal Study of Parents and Children (ALSPAC). Multiple measurements of hyperactive-inattentive traits (Strengths and Difficulties Questionnaire) and autistic social-communication impairment (Social Communication Disorder Checklist) were obtained between 4 and 17 years. Both traits and their trajectories were modeled in parallel using latent class growth analysis (LCGA). Trajectory membership was subsequently investigated with respect to prenatal/perinatal risk factors. RESULTS: LCGA analysis revealed two distinct social-communication trajectories (persistently impaired versus low-risk) and four hyperactive-inattentive trait trajectories (persistently impaired, intermediate, childhood-limited and low-risk). Autistic symptoms were more stable than those of attention-deficit/hyperactivity disorder (ADHD) behaviors, which showed greater variability. Trajectories for both traits were strongly but not reciprocally interlinked, such that the majority of children with a persistent hyperactive-inattentive symptomatology also showed persistent social-communication deficits but not vice versa. Shared predictors, especially for trajectories of persistent impairment, were maternal smoking during the first trimester, which included familial effects, and a teenage pregnancy. CONCLUSIONS: Our longitudinal study reveals that a complex relationship exists between social-communication and hyperactive-inattentive traits. Patterns of association change over time, with corresponding implications for removing exclusivity criteria for ASD and ADHD, as proposed for DSM-5.
  • Pozzoli, O., Vella, P., Iaffaldano, G., Parente, V., Devanna, P., Lacovich, M., Lamia, C. L., Fascio, U., Longoni, D., Cotelli, F., Capogrossi, M. C., & Pesce, M. (2011). Endothelial fate and angiogenic properties of human CD34+ progenitor cells in zebrafish. Arteriosclerosis, Thrombosis, and Vascular Biology, 31, 1589-1597. doi:10.1161/ATVBAHA.111.226969.

    Abstract

    Objective—The vascular competence of human-derived hematopoietic progenitors for postnatal vascularization is still poorly characterized. It is unclear whether, in the absence of ischemia, hematopoietic progenitors participate in neovascularization and whether they play a role in new blood vessel formation by incorporating into developing vessels or by a paracrine action. Methods and Results—In the present study, human cord blood–derived CD34+ (hCD34+) cells were transplanted into pre- and postgastrulation zebrafish embryos and in an adult vascular regeneration model induced by caudal fin amputation. When injected before gastrulation, hCD34+ cells cosegregated with the presumptive zebrafish hemangioblasts, characterized by Scl and Gata2 expression, in the anterior and posterior lateral mesoderm and were involved in early development of the embryonic vasculature. These morphogenetic events occurred without apparent lineage reprogramming, as shown by CD45 expression. When transplanted postgastrulation, hCD34+ cells were recruited into developing vessels, where they exhibited a potent paracrine proangiogenic action. Finally, hCD34+ cells rescued vascular defects induced by Vegf-c in vivo targeting and enhanced vascular repair in the zebrafish fin amputation model. Conclusion—These results indicate an unexpected developmental ability of human-derived hematopoietic progenitors and support the hypothesis of an evolutionary conservation of molecular pathways involved in endothelial progenitor differentiation in vivo.
  • Prieto, P., & Torreira, F. (2007). The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics, 35, 473-500. doi:10.1016/j.wocn.2007.01.001.

    Abstract

    This paper addresses the validity of the segmental anchoring hypothesis for tonal landmarks (henceforth, SAH) as described in recent work by (among others) Ladd, Faulkner, D., Faulkner, H., & Schepman [1999. Constant ‘segmental’ anchoring of f0 movements under changes in speech rate. Journal of the Acoustical Society of America, 106, 1543–1554], Ladd [2003. Phonological conditioning of f0 target alignment. In: M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the XVth international congress of phonetic sciences, Vol. 1, (pp. 249–252). Barcelona: Causal Productions; in press. Segmental anchoring of pitch movements: Autosegmental association or gestural coordination? Italian Journal of Linguistics, 18 (1)]. The alignment of LH* prenuclear peaks with segmental landmarks in controlled speech materials in Peninsular Spanish is analyzed as a function of syllable structure type (open, closed) of the accented syllable, segmental composition, and speaking rate. Contrary to the predictions of the SAH, alignment was affected by syllable structure and speech rate in significant and consistent ways. In CV syllables the peak was located around the end of the accented vowel, and in CVC syllables around the beginning-mid part of the sonorant coda, but still far from the syllable boundary. With respect to the effects of rate, peaks were located earlier in the syllable as speech rate decreased. The results suggest that the accent gestures under study are synchronized with the syllable unit. In general, the longer the syllable, the longer the rise time. Thus the fundamental idea of the anchoring hypothesis can be taken as still valid. On the other hand, the tonal alignment patterns reported here can be interpreted as the outcome of distinct modes of gestural coordination in syllable-initial vs. syllable-final position: gestures at syllable onsets appear to be more tightly coordinated than gestures at the end of syllables [Browman, C. P., & Goldstein, L. M. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219–252; Browman, C. P., & Goldstein, L. (1988). Some notes on syllable structure in articulatory phonology. Phonetica, 45, 140–155; (1992). Articulatory Phonology: An overview. Phonetica, 49, 155–180; Krakow (1999). Physiological organization of syllables: A review. Journal of Phonetics, 27, 23–54; among others]. Intergestural timing can thus provide a unifying explanation for (1) the contrasting behavior between the precise synchronization of L valleys with the onset of the syllable and the more variable timing of the end of the f0 rise, and, more specifically, for (2) the right-hand tonal pressure effects and ‘undershoot’ patterns displayed by peaks at the ends of syllables and other prosodic domains.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2007). Sources of information for stress assignment in reading Greek. Applied Psycholinguistics, 28(4), 695-720. doi:10.1017/S0142716407070373.

    Abstract

    To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual–orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on stress assignment and have provided evidence for a default pattern. Here we report two experiments with adult readers, in which we disentangle and quantify the effects of these three potential sources using nonword materials. Stimuli either resembled or did not resemble real words, to manipulate availability of lexical information; and they were presented with or without a diacritic, in a word-congruent or word-incongruent position, to contrast the relative importance of the three sources. Dual-task conditions, in which cognitive load during nonword reading was increased with phonological retention carrying a metrical pattern different from the default, did not support the hypothesis that the default arises from cumulative lexical activation in working memory.
  • Qin, S., Piekema, C., Petersson, K. M., Han, B., Luo, J., & Fernández, G. (2007). Probing the transformation of discontinuous associations into episodic memory: An event-related fMRI study. NeuroImage, 38(1), 212-222. doi:10.1016/j.neuroimage.2007.07.020.

    Abstract

    Using event-related functional magnetic resonance imaging, we identified brain regions involved in storing associations of events discontinuous in time into long-term memory. Participants were scanned while memorizing item-triplets including simultaneous and discontinuous associations. Subsequent memory tests showed that participants remembered both types of associations equally well. First, by constructing the contrast between the subsequent memory effects for discontinuous associations and simultaneous associations, we identified the left posterior parahippocampal region, dorsolateral prefrontal cortex, the basal ganglia, posterior midline structures, and the middle temporal gyrus as being specifically involved in transforming discontinuous associations into episodic memory. Second, we replicated that the prefrontal cortex and the medial temporal lobe (MTL), especially the hippocampus, are involved in associative memory formation in general. Our findings provide evidence for distinct neural operation(s) that support the binding and storing of discontinuous associations in memory. We suggest that top-down signals from the prefrontal cortex and MTL may trigger reactivation of internal representation in posterior midline structures of the first event, thus allowing it to be associated with the second event. The dorsolateral prefrontal cortex together with basal ganglia may support this encoding operation by executive and binding processes within working memory, and the posterior parahippocampal region may play a role in binding and memory formation.
  • Rahmany, R., Marefat, H., & Kidd, E. (2011). Persian speaking children's acquisition of relative clauses. European Journal of Developmental Psychology, 8(3), 367-388. doi:10.1080/17405629.2010.509056.

    Abstract

    The current study examined the acquisition of relative clauses (RCs) in Persian-speaking children. Persian is a relatively unique data point in crosslinguistic research in acquisition because it is a head-final language with post-nominal RCs. Children (N = 51) aged 2 to 7 years completed a picture-selection task that tested their comprehension of subject-, object-, and genitive-RCs. The results showed that the children experienced greater difficulty processing object and genitive RCs when compared to subject RCs, suggesting that the children have particular difficulty processing sentences with non-canonical word order. The results are discussed with reference to a number of theoretical accounts proposed to account for sentence difficulty.
  • Ramenzoni, V. C., Davis, T. J., Riley, M. A., Shockley, K., & Baker, A. A. (2011). Joint action in a cooperative precision task: Nested processes of intrapersonal and interpersonal coordination. Experimental Brain Research, 211, 447-457. doi:10.1007/s00221-011-2653-8.

    Abstract

    The authors determined the effects of changes in task demands on interpersonal and intrapersonal coordination. Participants performed a joint task in which one participant held a stick to which a circle was attached at the top (holding role), while the other held a pointer through the circle without touching its borders (pointing role). Experiment 1 investigated whether interpersonal and intrapersonal coordination varied depending on task difficulty. Results showed that interpersonal and intrapersonal coordination increased in degree and stability with increments in task difficulty. Experiment 2 explored the effects of individual constraints by increasing the balance demands of the task (one or both members of the pair stood in a less stable tandem stance). Results showed that interpersonal coordination increased in degree and stability as joint task demands increased and that coupling strength varied depending on joint and individual task constraints. In all, results suggest that interpersonal and intrapersonal coordination are affected by the nature of the task performed and the constraints it places on joint and individual performance.
  • Ravenscroft, G., Sollis, E., Charles, A. K., North, K. N., Baynam, G., & Laing, N. G. (2011). Fetal akinesia: review of the genetics of the neuromuscular causes. Journal of Medical Genetics (London), 48(12), 793-801.

    Abstract

    Fetal akinesia refers to a broad spectrum of disorders in which the unifying feature is a reduction or lack of fetal movement. Fetal akinesias may be caused by defects at any point along the motor system pathway including the central and peripheral nervous system, the neuromuscular junction and the muscle, as well as by restrictive dermopathy or external restriction of the fetus in utero. The fetal akinesias are clinically and genetically heterogeneous, with causative mutations identified to date in a large number of genes encoding disparate parts of the motor system. However, for most patients, the molecular cause remains unidentified. One reason for this is because the tools are only now becoming available to efficiently and affordably identify mutations in a large panel of disease genes. Next-generation sequencing offers the promise, if sufficient cohorts of patients can be assembled, to identify the majority of the remaining genes on a research basis and facilitate efficient clinical molecular diagnosis. The benefits of identifying the causative mutation(s) for each individual patient or family include accurate genetic counselling and the options of prenatal diagnosis or preimplantation genetic diagnosis.

    In this review, we summarise known single-gene disorders affecting the spinal cord, peripheral nerves, neuromuscular junction or skeletal muscles that result in fetal akinesia. This audit of these known molecular and pathophysiological mechanisms involved in fetal akinesia provides a basis for improved molecular diagnosis and completing disease gene discovery.
  • Reif, A., Nguyen, T. T., Weißflog, L., Jacob, C. P., Romanos, M., Renner, T. J., Buttenschon, H. N., Kittel-Schneider, S., Gessner, A., Weber, H., Neuner, M., Gross-Lesch, S., Zamzow, K., Kreiker, S., Walitza, S., Meyer, J., Freitag, C. M., Bosch, R., Casas, M., Gómez, N., Ribasès, M., Bayès, M., Buitelaar, J. K., Kiemeney, L. A. L. M., Kooij, J. J. S., Kan, C. C., Hoogman, M., Johansson, S., Jacobsen, K. K., Knappskog, P. M., Fasmer, O. B., Asherson, P., Warnke, A., Grabe, H.-J., Mahler, J., Teumer, A., Völzke, H., Mors, O. N., Schäfer, H., Ramos-Quiroga, J. A., Cormand, B., Haavik, J., Franke, B., & Lesch, K.-P. (2011). DIRAS2 is associated with Adult ADHD, related traits, and co-morbid disorders. Neuropsychopharmacology, 36, 2318-2327. doi:10.1038/npp.2011.120.

    Abstract

    Several linkage analyses implicated the chromosome 9q22 region in attention deficit/hyperactivity disorder (ADHD), a neurodevelopmental disease with remarkable persistence into adulthood. This locus contains the brain-expressed GTP-binding RAS-like 2 gene (DIRAS2) thought to regulate neurogenesis. As DIRAS2 is a positional and functional ADHD candidate gene, we conducted an association study in 600 patients suffering from adult ADHD (aADHD) and 420 controls. Replication samples consisted of 1035 aADHD patients and 1381 controls, as well as 166 families with a child affected from childhood ADHD. Given the high degree of co-morbidity with ADHD, we also investigated patients suffering from bipolar disorder (BD) (n=336) or personality disorders (PDs) (n=622). Twelve single-nucleotide polymorphisms (SNPs) covering the structural gene and the transcriptional control region of DIRAS2 were analyzed. Four SNPs and two haplotype blocks showed evidence of association with ADHD, with nominal p-values ranging from p=0.006 to p=0.05. In the adult replication samples, we obtained a consistent effect of rs1412005 and of a risk haplotype containing the promoter region (p=0.026). Meta-analysis resulted in a significant common OR of 1.12 (p=0.04) for rs1412005 and confirmed association with the promoter risk haplotype (OR=1.45, p=0.0003). Subsequent analysis in nuclear families with childhood ADHD again showed an association of the promoter haplotype block (p=0.02). rs1412005 also increased risk toward BD (p=0.026) and cluster B PD (p=0.031). Additional SNPs showed association with personality scores (p=0.008–0.048). Converging lines of evidence implicate genetic variance in the promoter region of DIRAS2 in the etiology of ADHD and co-morbid impulsive disorders.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2011). Speaking rate affects the perception of duration as a suprasegmental lexical-stress cue. Language and Speech, 54(2), 147-165. doi:10.1177/0023830910397489.

    Abstract

    Three categorization experiments investigated whether the speaking rate of a preceding sentence influences durational cues to the perception of suprasegmental lexical-stress patterns. Dutch two-syllable word fragments had to be judged as coming from one of two longer words that matched the fragment segmentally but differed in lexical stress placement. Word pairs contrasted primary stress on either the first versus the second syllable or the first versus the third syllable. Duration of the initial or the second syllable of the fragments and rate of the preceding context (fast vs. slow) were manipulated. Listeners used speaking rate to decide about the degree of stress on initial syllables whether the syllables' absolute durations were informative about stress (Experiment 1a) or not (Experiment 1b). Rate effects on the second syllable were visible only when the initial syllable was ambiguous in duration with respect to the preceding rate context (Experiment 2). Absolute second syllable durations contributed little to stress perception (Experiment 3). These results suggest that speaking rate is used to disambiguate words and that rate-modulated stress cues are more important on initial than non-initial syllables. Speaking rate affects perception of suprasegmental information.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2011). Speaking rate from proximal and distal contexts is used during word segmentation. Journal of Experimental Psychology: Human Perception and Performance, 37, 978-996. doi:10.1037/a0021923.

    Abstract

    A series of eye-tracking and categorization experiments investigated the use of speaking-rate information in the segmentation of Dutch ambiguous-word sequences. Juncture phonemes with ambiguous durations (e.g., [s] in 'eens (s)peer,' “once (s)pear,” [t] in 'nooit (t)rap,' “never staircase/quick”) were perceived as longer and hence more often as word-initial when following a fast than a slow context sentence. Listeners used speaking-rate information as soon as it became available. Rate information from a context proximal to the juncture phoneme and from a more distal context was used during on-line word recognition, as reflected in listeners' eye movements. Stronger effects of distal context, however, were observed in the categorization task, which measures the off-line results of the word-recognition process. In categorization, the amount of rate context had the greatest influence on the use of rate information, but in eye tracking, the rate information's proximal location was the most important. These findings constrain accounts of how speaking rate modulates the interpretation of durational cues during word recognition by suggesting that rate estimates are used to evaluate upcoming phonetic information continuously during prelexical speech processing.
  • Reis, A., Faísca, L., Mendonça, S., Ingvar, M., & Petersson, K. M. (2007). Semantic interference on a phonological task in illiterate subjects. Scandinavian Journal of Psychology, 48(1), 69-74. doi:10.1111/j.1467-9450.2006.00544.x.

    Abstract

    Previous research suggests that learning an alphabetic written language influences aspects of the auditory-verbal language system. In this study, we examined whether literacy influences the notion of words as phonological units independent of lexical semantics in literate and illiterate subjects. Subjects had to decide which item in a word- or pseudoword pair was phonologically longest. By manipulating the relationship between referent size and phonological length in three word conditions (congruent, neutral, and incongruent) we could examine to what extent subjects focused on form rather than meaning of the stimulus material. Moreover, the pseudoword condition allowed us to examine global phonological awareness independent of lexical semantics. The results showed that literate subjects performed significantly better than illiterate subjects in the neutral and incongruent word conditions as well as in the pseudoword condition. The illiterate group performed least well in the incongruent condition and significantly better in the pseudoword condition compared to the neutral and incongruent word conditions. These results suggest that performance on phonological word length comparisons is dependent on literacy. In addition, the results show that the illiterate participants are able to perceive and process phonological length, albeit less well than the literate subjects, when no semantic interference is present. In conclusion, the present results confirm and extend the finding that illiterate subjects are biased towards semantic-conceptual-pragmatic types of cognitive processing.
  • Rekers, Y., Haun, D. B. M., & Tomasello, M. (2011). Children, but not chimpanzees, prefer to collaborate. Current Biology, 21, 1756-1758. doi:10.1016/j.cub.2011.08.066.

    Abstract

    Human societies are built on collaborative activities. Already from early childhood, human children are skillful and proficient collaborators. They recognize when they need help in solving a problem and actively recruit collaborators [1, 2]. The societies of other primates are also to some degree cooperative. Chimpanzees, for example, engage in a variety of cooperative activities such as border patrols, group hunting, and intra- and intergroup coalitionary behavior [3, 4, 5]. Recent studies have shown that chimpanzees possess many of the cognitive prerequisites necessary for human-like collaboration. Chimpanzees have been shown to recognize when they need help in solving a problem and to actively recruit good over bad collaborators [6, 7]. However, cognitive abilities might not be all that differs between chimpanzees and humans when it comes to cooperation. Another factor might be the motivation to engage in a cooperative activity. Here, we hypothesized that a key difference between human and chimpanzee collaboration—and so potentially a key mechanism in the evolution of human cooperation—is a simple preference for collaborating (versus acting alone) to obtain food. Our results supported this hypothesis, finding that whereas children strongly prefer to work together with another to obtain food, chimpanzees show no such preference.
  • Reynolds, E., Stagnitti, K., & Kidd, E. (2011). Play, language and social skills of children attending a play-based curriculum school and a traditionally structured classroom curriculum school in low socioeconomic areas. Australasian Journal of Early Childhood, 36(4), 120-130.

    Abstract

    Aim and method: A comparison study of four six-year-old children attending a school with a play-based curriculum and a school with a traditionally structured classroom from low socioeconomic areas was conducted in Victoria, Australia. Children’s play, language and social skills were measured in February and again in August. At baseline assessment there was a combined sample of 31 children (mean age 5.5 years, SD 0.35 years; 13 females and 18 males). At follow-up there was a combined sample of 26 children (mean age 5.9 years, SD 0.35 years; 10 females, 16 males). Results: There was no significant difference between the school groups in play, language, social skills, age and sex at baseline assessment. Compared to norms on a standardised assessment, all the children were beginning school with delayed play ability. At follow-up assessment, children at the play-based curriculum school had made significant gains in all areas assessed (p values ranged from 0.000 to 0.05). Children at the school with the traditional structured classroom had made significant positive gains in use of symbols in play (p < 0.05) and semantic language (p < 0.05). At follow-up, there were significant differences between schools in elaborate play (p < 0.000), semantic language (p < 0.000), narrative language (p < 0.01) and social connection (p < 0.01), with children in the play-based curriculum school having significantly higher scores in play, narrative language and language and lower scores in social disconnection. Implications: Children from low SES areas begin school at risk of failure as skills in play, language and social skills are delayed. The school experience increases children’s skills, with children in the play-based curriculum showing significant improvements in all areas assessed. It is argued that a play-based curriculum meets children’s developmental and learning needs more effectively. More research is needed to replicate these results.
  • Rieffe, C., Oosterveld, P., Meerum Terwogt, M., Mootz, S., Van Leeuwen, E. J. C., & Stockmann, L. (2011). Emotion regulation and internalizing symptoms in children with Autism Spectrum Disorders. Autism, 15(6), 655-670. doi:10.1177/1362361310366571.

    Abstract

    The aim of this study was to examine the unique contribution of two aspects of emotion regulation (awareness and coping) to the development of internalizing problems in 11-year-old high-functioning children with an autism spectrum disorder (HFASD) and a control group, and the moderating effect of group membership on this. The results revealed overlap between the two groups, but also significant differences, suggesting a more fragmented emotion regulation pattern in children with HFASD, especially related to worry and rumination. Moreover, in children with HFASD, symptoms of depression were unrelated to positive mental coping strategies and the conviction that the emotion experience helps in dealing with the problem, suggesting that a positive approach to the problem and its subsequent emotion experience are less effective in the HFASD group.
  • Riley, M. A., Richardson, M. J., Shockley, K., & Ramenzoni, V. C. (2011). Interpersonal synergies. Frontiers in Psychology, 2, 38. doi:10.3389/fpsyg.2011.00038.

    Abstract

    We present the perspective that interpersonal movement coordination results from establishing interpersonal synergies. Interpersonal synergies are higher-order control systems formed by coupling movement system degrees of freedom of two (or more) actors. Characteristic features of synergies identified in studies of intrapersonal coordination – dimensional compression and reciprocal compensation – are revealed in studies of interpersonal coordination that applied the uncontrolled manifold approach and principal component analysis to interpersonal movement tasks. Broader implications of the interpersonal synergy approach for movement science include an expanded notion of mechanism and an emphasis on interaction-dominant dynamics.
  • Roberts, L., Marinis, T., Felser, C., & Clahsen, H. (2007). Antecedent priming at trace positions in children’s sentence processing. Journal of Psycholinguistic Research, 36(2), 175-188. doi:10.1007/s10936-006-9038-3.

    Abstract

    The present study examines whether children reactivate a moved constituent at its gap position and how children’s more limited working memory span affects the way they process filler-gap dependencies. 46 5–7 year-old children and 54 adult controls participated in a cross-modal picture priming experiment and underwent a standardized working memory test. The results revealed a statistically significant interaction between the participants’ working memory span and antecedent reactivation: High-span children (n = 19) and high-span adults (n = 22) showed evidence of antecedent priming at the gap site, while for low-span children and adults, there was no such effect. The antecedent priming effect in the high-span participants indicates that in both children and adults, dislocated arguments access their antecedents at gap positions. The absence of an antecedent reactivation effect in the low-span participants could mean that these participants required more time to integrate the dislocated constituent and reactivated the filler later during the sentence.
  • Roberts, L. (2007). Investigating real-time sentence processing in the second language. Stem-, Spraak- en Taalpathologie, 15, 115-127.

    Abstract

    Second language (L2) acquisition researchers have always been concerned with what L2 learners know about the grammar of the target language but more recently there has been growing interest in how L2 learners put this knowledge to use in real-time sentence comprehension. In order to investigate real-time L2 sentence processing, the types of constructions studied and the methods used are often borrowed from the field of monolingual processing, but the overall issues are familiar from traditional L2 acquisition research. These cover questions relating to L2 learners’ native-likeness, whether or not L1 transfer is in evidence, and how individual differences such as proficiency and language experience might have an effect. The aim of this paper is to provide for those unfamiliar with the field, an overview of the findings of a selection of behavioral studies that have investigated such questions, and to offer a picture of how L2 learners and bilinguals may process sentences in real time.
  • Roberts, L., & Felser, C. (2011). Plausibility and recovery from garden paths in L2 sentence processing. Applied Psycholinguistics, 32, 299-331. doi:10.1017/S0142716410000421.

    Abstract

    In this study, the influence of plausibility information on the real-time processing of locally ambiguous (“garden path”) sentences in a nonnative language is investigated. Using self-paced reading, we examined how advanced Greek-speaking learners of English and native speaker controls read sentences containing temporary subject–object ambiguities, with the ambiguous noun phrase being either semantically plausible or implausible as the direct object of the immediately preceding verb. Besides providing evidence for incremental interpretation in second language processing, our results indicate that the learners were more strongly influenced by plausibility information than the native speaker controls in their on-line processing of the experimental items. For the second language learners, an initially plausible direct object interpretation led to increased reanalysis difficulty in “weak” garden-path sentences where the required reanalysis did not interrupt the current thematic processing domain. No such evidence of on-line recovery was observed, in contrast, for “strong” garden-path sentences that required more substantial revisions of the representation built thus far, suggesting that comprehension breakdown was more likely here.
  • Robotham, L., Sauter, D., Bachoud-Lévi, A.-C., & Trinkler, I. (2011). The impairment of emotion recognition in Huntington’s disease extends to positive emotions. Cortex, 47(7), 880-884. doi:10.1016/j.cortex.2011.02.014.

    Abstract

    Patients with Huntington’s Disease are impaired in the recognition of emotional signals. However, the nature and extent of the impairment is controversial: It has variously been argued to be disgust-specific (Sprengelmeyer et al., 1996; 1997), general for negative emotions (Snowden, et al., 2008), or a consequence of item difficulty (Milders, Crawford, Lamb, & Simpson, 2003). Yet no study to date has included more than one positive stimulus category in emotion recognition tasks. We present a study of 14 Huntington’s patients and 15 control participants performing a forced-choice task with a range of negative and positive non-verbal emotional vocalizations. Participants were found to be impaired in emotion recognition across the emotion categories, including positive emotions such as amusement and sensual pleasure, and negative emotions, such as anger, disgust, and fear. These data complement previous work by demonstrating that impairments are found in the recognition of positive, as well as negative, emotions in Huntington’s disease. Our results point to a global deficit in the recognition of emotional signals in Huntington’s Disease.
  • Roelofs, A. (2007). On the modelling of spoken word planning: Rejoinder to La Heij, Starreveld, and Kuipers (2007). Language and Cognitive Processes, 22(8), 1281-1286. doi:10.1080/01690960701462291.

    Abstract

    The author contests several claims of La Heij, Starreveld, and Kuipers (this issue) concerning the modelling of spoken word planning. The claims are about the relevance of error findings, the interaction between semantic and phonological factors, the explanation of word-word findings, the semantic relatedness paradox, and production rules.
  • Roelofs, A., & Piai, V. (2011). Attention demands of spoken word planning: A review. Frontiers in Psychology, 2, 307. doi:10.3389/fpsyg.2011.00307.

  • Roelofs, A., Piai, V., & Garrido Rodriguez, G. (2011). Attentional inhibition in bilingual naming performance: Evidence from delta-plot analyses. Frontiers in Psychology, 2, 184. doi:10.3389/fpsyg.2011.00184.

    Abstract

    It has been argued that inhibition is a mechanism of attentional control in bilingual language performance. Evidence suggests that effects of inhibition are largest in the tail of a response time (RT) distribution in non-linguistic and monolingual performance domains. We examined this for bilingual performance by conducting delta-plot analyses of naming RTs. Dutch-English bilingual speakers named pictures using English while trying to ignore superimposed neutral Xs or Dutch distractor words that were semantically related, unrelated, or translations. The mean RTs revealed semantic, translation, and lexicality effects. The delta plots leveled off with increasing RT, more so when the mean distractor effect was smaller as compared with larger. This suggests that the influence of inhibition is largest toward the distribution tail, corresponding to what is observed in other performance domains. Moreover, the delta plots suggested that more inhibition was applied by high- than low-proficiency individuals in the unrelated than the other distractor conditions. These results support the view that inhibition is a domain-general mechanism that may be optionally engaged depending on the prevailing circumstances.
  • Roelofs, A. (2007). A critique of simple name-retrieval models of spoken word planning. Language and Cognitive Processes, 22(8), 1237-1260. doi:10.1080/01690960701461582.

    Abstract

    Simple name-retrieval models of spoken word planning (Bloem & La Heij, 2003; Starreveld & La Heij, 1996) maintain (1) that there are two levels in word planning, a conceptual and a lexical phonological level, and (2) that planning a word in both object naming and oral reading involves the selection of a lexical phonological representation. Here, the name retrieval models are compared to more complex models with respect to their ability to account for relevant data. It appears that the name retrieval models cannot easily account for several relevant findings, including some speech error biases, types of morpheme errors, and context effects on the latencies of responding to pictures and words. New analyses of the latency distributions in previous studies also pose a challenge. More complex models account for all these findings. It is concluded that the name retrieval models are too simple and that the greater complexity of the other models is warranted.
  • Roelofs, A. (2007). Attention and gaze control in picture naming, word reading, and word categorizing. Journal of Memory and Language, 57(2), 232-251. doi:10.1016/j.jml.2006.10.001.

    Abstract

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107–142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88–125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings.
  • Roelofs, A., Özdemir, R., & Levelt, W. J. M. (2007). Influences of spoken word planning on speech recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(5), 900-913. doi:10.1037/0278-7393.33.5.900.

    Abstract

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
  • Roelofs, A., Piai, V., & Schriefers, H. (2011). Selective attention and distractor frequency in naming performance: Comment on Dhooge and Hartsuiker (2010). Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 1032-1038. doi:10.1037/a0023328.

    Abstract

    E. Dhooge and R. J. Hartsuiker (2010) reported experiments showing that picture naming takes longer with low- than high-frequency distractor words, replicating M. Miozzo and A. Caramazza (2003). In addition, they showed that this distractor-frequency effect disappears when distractors are masked or preexposed. These findings were taken to refute models like WEAVER++ (A. Roelofs, 2003) in which words are selected by competition. However, Dhooge and Hartsuiker do not take into account that according to this model, picture-word interference taps not only into word production but also into attentional processes. Here, the authors indicate that WEAVER++ contains an attentional mechanism that accounts for the distractor-frequency effect (A. Roelofs, 2005). Moreover, the authors demonstrate that the model accounts for the influence of masking and preexposure, and does so in a simpler way than the response exclusion through self-monitoring account advanced by Dhooge and Hartsuiker.
  • Rossano, F., Rakoczy, H., & Tomasello, M. (2011). Young children’s understanding of violations of property rights. Cognition, 121, 219-227. doi:10.1016/j.cognition.2011.06.007.

    Abstract

    The present work investigated young children’s normative understanding of property rights using a novel methodology. Two- and 3-year-old children participated in situations in which an actor (1) took possession of an object for himself, and (2) attempted to throw it away. What varied was who owned the object: the actor himself, the child subject, or a third party. We found that while both 2- and 3-year-old children protested frequently when their own object was involved, only 3-year-old children protested more when a third party’s object was involved than when the actor was acting on his own object. This suggests that at the latest around 3 years of age young children begin to understand the normative dimensions of property rights.
  • Rossi, S., Jürgenson, I. B., Hanulikova, A., Telkemeyer, S., Wartenburger, I., & Obrig, H. (2011). Implicit processing of phonotactic cues: Evidence from electrophysiological and vascular responses. Journal of Cognitive Neuroscience, 23, 1752-1764. doi:10.1162/jocn.2010.21547.

    Abstract

    Spoken word recognition is achieved via competition between activated lexical candidates that match the incoming speech input. The competition is modulated by prelexical cues that are important for segmenting the auditory speech stream into linguistic units. One such prelexical cue that listeners rely on in spoken word recognition is phonotactics. Phonotactics defines possible combinations of phonemes within syllables or words in a given language. The present study aimed at investigating both temporal and topographical aspects of the neuronal correlates of phonotactic processing by simultaneously applying event-related brain potentials (ERPs) and functional near-infrared spectroscopy (fNIRS). Pseudowords, either phonotactically legal or illegal with respect to the participants' native language, were acoustically presented to passively listening adult native German speakers. ERPs showed a larger N400 effect for phonotactically legal compared to illegal pseudowords, suggesting stronger lexical activation mechanisms in phonotactically legal material. fNIRS revealed a left hemispheric network including fronto-temporal regions with greater response to phonotactically legal pseudowords than to illegal pseudowords. This confirms earlier hypotheses on a left hemispheric dominance of phonotactic processing most likely due to the fact that phonotactics is related to phonological processing and represents a segmental feature of language comprehension. These segmental linguistic properties of a stimulus are predominantly processed in the left hemisphere. Thus, our study provides first insights into temporal and topographical characteristics of phonotactic processing mechanisms in a passive listening task. Differential brain responses between known and unknown phonotactic rules thus supply evidence for an implicit use of phonotactic cues to guide lexical activation mechanisms.
  • Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.

    Abstract

    The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S., & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
  • Rowland, C. F., & Noble, C. L. (2011). The role of syntactic structure in children's sentence comprehension: Evidence from the dative. Language Learning and Development, 7(1), 55-75. doi:10.1080/15475441003769411.

    Abstract

    Research has demonstrated that young children quickly acquire knowledge of how the structure of their language encodes meaning. However, this work focused on structurally simple transitives. The present studies investigate children's comprehension of the double object dative (e.g., I gave him the box) and the prepositional dative (e.g., I gave the box to him). In Study 1, 3- and 4-year-olds correctly preferred a transfer event reading of prepositional datives with novel verbs (e.g., I'm glorping the rabbit to the duck) but were unable to interpret double object datives (e.g., I'm glorping the duck the rabbit). In Studies 2 and 3, they were able to interpret both dative types when the nouns referring to the theme and recipient were canonically marked (Study 2: I'm glorping the rabbit to Duck) and, to a lesser extent, when they were distinctively but noncanonically marked (Study 3: I'm glorping rabbit to the Duck). Overall, the results suggest that English children have some verb-general knowledge of how dative syntax encodes meaning by 3 years of age, but successful comprehension may require the presence of additional surface cues.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

    In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together, however they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • De Ruiter, L. E. (2011). Polynomial modeling of child and adult intonation in German spontaneous speech. Language and Speech, 54, 199-223. doi:10.1177/0023830910397495.

    Abstract

    In a data set of 291 spontaneous utterances from German 5-year-olds, 7-year-olds and adults, nuclear pitch contours were labeled manually using the GToBI annotation system. Ten different contour types were identified. The fundamental frequency (F0) of these contours was modeled using third-order orthogonal polynomials, following an approach similar to the one Grabe, Kochanski, and Coleman (2007) used for English. Statistical analyses showed that all but one contour pair differed significantly from each other in at least one of the four coefficients. This demonstrates that polynomial modeling can provide quantitative empirical support for phonological labels in unscripted speech, and for languages other than English. Furthermore, polynomial expressions can be used to derive the alignment of tonal targets relative to the syllable structure, making polynomial modeling more accessible to the phonological research community. Finally, within-contour comparisons of the three age groups showed that for children, the magnitude of the higher coefficients is lower, suggesting that they are not yet able to modulate their pitch as fast as adults.
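    For readers unfamiliar with this modelling approach, the sketch below fits a third-order orthogonal polynomial to a pitch contour. It is only an illustration of the general technique: the contour values are invented, Legendre polynomials are used as one common orthogonal family, and nothing here reproduces the authors' actual pipeline or coefficients.

```python
# Illustrative only: fit a third-order orthogonal (Legendre) polynomial to a
# toy F0 contour. The contour values are invented; this is not the study's data.
import numpy as np

f0_hz = np.array([210, 230, 255, 270, 260, 240, 215, 200, 190, 185], dtype=float)

# Map the time points of the nuclear contour onto [-1, 1], the natural domain
# of Legendre polynomials, so coefficients are comparable across contours.
t = np.linspace(-1.0, 1.0, len(f0_hz))

# coef[0]..coef[3] summarise mean level, slope, curvature and "S-shape".
poly = np.polynomial.Legendre.fit(t, f0_hz, deg=3, domain=[-1, 1])
print("Legendre coefficients:", np.round(poly.coef, 2))
print("Fitted contour (Hz):  ", np.round(poly(t), 1))
```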
  • Ruiter, M. B., Kolk, H. H. J., Rietveld, T. C. M., Dijkstra, N., & Lotgering, E. (2011). Towards a quantitative measure of verbal effectiveness and efficiency in the Amsterdam-Nijmegen Everyday Language Test (ANELT). Aphasiology, 25, 961-975. doi:10.1080/02687038.2011.569892.

    Abstract

    Background: A well-known test for measuring verbal adequacy (i.e., verbal effectiveness) in mildly impaired aphasic speakers is the Amsterdam-Nijmegen Everyday Language Test (ANELT; Blomert, Koster, & Kean, 1995). Aphasia therapy practitioners score verbal adequacy qualitatively when they administer the ANELT to their aphasic clients in clinical practice. Aims: The current study investigated whether the construct validity of the ANELT could be further improved by substituting the qualitative score by a quantitative one, which takes the number of essential information units into account. The new quantitative measure could have the following advantages: the ability to derive a quantitative score of verbal efficiency, as well as improved sensitivity to detect changes in functional communication over time. Methods & Procedures: The current study systematically compared a new quantitative measure of verbal effectiveness with the current ANELT Comprehensibility scale, which is based on qualitative judgements. A total of 30 speakers of Dutch participated: 20 non-aphasic speakers and 10 aphasic patients with predominantly expressive disturbances. Outcomes & Results: Although our findings need to be replicated in a larger group of aphasic speakers, the main results suggest that the new quantitative measure of verbal effectiveness is more sensitive to detect change in verbal effectiveness over time. What is more, it can be used to derive a measure of verbal efficiency. Conclusions: The fact that both verbal effectiveness and verbal efficiency can be reliably as well as validly measured in the ANELT is of relevance to clinicians. It allows them to obtain a more complete picture of aphasic speakers' functional communication skills.
  • Sadakata, M., & Sekiyama, K. (2011). Enhanced perception of various linguistic features by musicians: A cross-linguistic study. Acta Psychologica, 138, 1-10. doi:10.1016/j.actpsy.2011.03.007.

    Abstract

    Two cross-linguistic experiments comparing musicians and non-musicians were performed in order to examine whether musicians have enhanced perception of specific acoustical features of speech in a second language (L2). These discrimination and identification experiments examined the perception of various speech features; namely, the timing and quality of Japanese consonants, and the quality of Dutch vowels. We found that musical experience was more strongly associated with discrimination performance rather than identification performance. The enhanced perception was observed not only with respect to L2, but also L1. It was most pronounced when tested with Japanese consonant timing. These findings suggest the following: 1) musicians exhibit enhanced early acoustical analysis of speech, 2) musical training does not equally enhance the perception of all acoustic features automatically, and 3) musicians may enjoy an advantage in the perception of acoustical features that are important in both language and music, such as pitch and timing. Research Highlights: We compared the perception of L1 and L2 speech by musicians and non-musicians. Discrimination and identification experiments examined perception of consonant timing, quality of Japanese consonants and of Dutch vowels. We compared results for Japanese native musicians and non-musicians as well as Dutch native musicians and non-musicians. Musicians demonstrated enhanced perception for both L1 and L2. The most pronounced effect was found for Japanese consonant timing.
  • Salomo, D., Graf, E., Lieven, E., & Tomasello, M. (2011). The role of perceptual availability and discourse context in young children’s question answering. Journal of Child Language, 38, 918-931. doi:10.1017/S0305000910000395.

    Abstract

    Three- and four-year-old children were asked predicate-focus questions ('What's X doing?') about a scene in which an agent performed an action on a patient. We varied: (i) whether (or not) the preceding discourse context, which established the patient as given information, was available for the questioner; and (ii) whether (or not) the patient was perceptually available to the questioner when she asked the question. The main finding in our study differs from those of previous studies since it suggests that children are sensitive to the perceptual context at an earlier age than they are to previous discourse context if they need to take the questioner's perspective into account. Our finding indicates that, while children are in principle sensitive to both factors, young children rely on perceptual availability when a conflict arises.
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • Sánchez-Mora, C., Ribasés, M., Casas, M., Bayés, M., Bosch, R., Fernàndez-Castillo, N., Brunso, L., Jacobsen, K. K., Landaas, E. T., Lundervold, A. J., Gross-Lesch, S., Kreiker, S., Jacob, C. P., Lesch, K.-P., Buitelaar, J. K., Hoogman, M., Kiemeney, L. A., Kooij, J. S., Mick, E., Asherson, P., Faraone, S. V., Franke, B., Reif, A., Johansson, S., Haavik, J., Ramos-Quiroga, J. A., & Cormand, B. (2011). Exploring DRD4 and its interaction with SLC6A3 as possible risk factors for adult ADHD: A meta-analysis in four European populations. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 156, 600-612. doi:10.1002/ajmg.b.31202.

    Abstract

    Attention-deficit hyperactivity disorder (ADHD) is a common behavioral disorder affecting about 4–8% of children. ADHD persists into adulthood in around 65% of cases, either as the full condition or in partial remission with persistence of symptoms. Pharmacological, animal and molecular genetic studies support a role for genes of the dopaminergic system in ADHD due to its essential role in motor control, cognition, emotion, and reward. Based on these data, we analyzed two functional polymorphisms within the DRD4 gene (120 bp duplication in the promoter and 48 bp VNTR in exon 3) in a clinical sample of 1,608 adult ADHD patients and 2,352 controls of Caucasian origin from four European countries that had been recruited in the context of the International Multicentre persistent ADHD CollaboraTion (IMpACT). Single-marker analysis of the two polymorphisms did not reveal association with ADHD. In contrast, multiple-marker meta-analysis showed a nominal association (P  = 0.02) of the L-4R haplotype (dup120bp-48bpVNTR) with adulthood ADHD, especially with the combined clinical subtype. Since we previously described association between adulthood ADHD and the dopamine transporter SLC6A3 9R-6R haplotype (3′UTR VNTR-intron 8 VNTR) in the same dataset, we further tested for gene × gene interaction between DRD4 and SLC6A3. However, we detected no epistatic effects but our results rather suggest additive effects of the DRD4 risk haplotype and the SLC6A3 gene.
  • Sauter, D., Le Guen, O., & Haun, D. B. M. (2011). Categorical perception of emotional expressions does not require lexical categories. Emotion, 11, 1479-1483. doi:10.1037/a0025336.

    Abstract

    Does our perception of others’ emotional signals depend on the language we speak or is our perception the same regardless of language and culture? It is well established that human emotional facial expressions are perceived categorically by viewers, but whether this is driven by perceptual or linguistic mechanisms is debated. We report an investigation into the perception of emotional facial expressions, comparing German speakers to native speakers of Yucatec Maya, a language with no lexical labels that distinguish disgust from anger. In a free naming task, speakers of German, but not Yucatec Maya, made lexical distinctions between disgust and anger. However, in a delayed match-to-sample task, both groups perceived emotional facial expressions of these and other emotions categorically. The magnitude of this effect was equivalent across the language groups, as well as across emotion continua with and without lexical distinctions. Our results show that the perception of affective signals is not driven by lexical labels, instead lending support to accounts of emotions as a set of biologically evolved mechanisms.
  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations are used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
  • Schaefer, R. S., Farquhar, J., Blokland, Y., Sadakata, M., & Desain, P. (2011). Name that tune: Decoding music from the listening brain. NeuroImage, 56, 843-849. doi:10.1016/j.neuroimage.2010.05.084.

    Abstract

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven different musical fragments of about three seconds, both individually and cross-participants, using only time domain information (the event-related potential, ERP). The best individual results are 70% correct in a seven-class problem while using single trials, and when using multiple trials we achieve 100% correct after six presentations of the stimulus. When classifying across participants, a maximum rate of 53% was reached, supporting a general representation of each musical fragment over participants. While for some music stimuli the amplitude envelope correlated well with the ERP, this was not true for all stimuli. Aspects of the stimulus that may contribute to the differences between the EEG responses to the pieces of music are discussed.
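    As a rough illustration of this kind of time-domain decoding, the sketch below trains a cross-validated classifier on synthetic single-trial "ERPs" for seven classes. The data, dimensions and the shrinkage-LDA classifier are assumptions made for the example; the study's own preprocessing and classification pipeline may differ.

```python
# Illustrative only: decode which of seven "musical fragments" was heard from
# synthetic single-trial time-domain EEG. Dimensions, noise levels and the
# classifier are invented for the example.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples, n_classes = 140, 16, 60, 7

# Each class gets its own weak ERP template; trials are template plus noise.
templates = rng.normal(size=(n_classes, n_channels, n_samples))
y = np.repeat(np.arange(n_classes), n_trials // n_classes)
X = 0.3 * templates[y] + rng.normal(size=(n_trials, n_channels, n_samples))

# Use the raw (flattened) time courses as features: time-domain information only.
X_flat = X.reshape(n_trials, -1)

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X_flat, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f} (chance = {1 / n_classes:.2f})")
```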

    Additional information

    supp_f.pdf
  • Schapper, A., & San Roque, L. (2011). Demonstratives and non-embedded nominalisations in three Papuan languages of the Timor-Alor-Pantar family. Studies in Language, 35, 380-408. doi:10.1075/sl.35.2.05sch.

    Abstract

    This paper explores the use of demonstratives in non-embedded clausal nominalisations. We present data and analysis from three Papuan languages of the Timor-Alor-Pantar family in south-east Indonesia. In these languages, demonstratives can apply to the clausal as well as to the nominal domain, contributing contrastive semantic content in assertive stance-taking and attention-directing utterances. In the Timor-Alor-Pantar constructions, meanings that are to do with spatial and discourse locations at the participant level apply to spatial, temporal and mental locations at the state or event level.
  • Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer, Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.

    Abstract

    This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as ‘rare city’ names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs from a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances, each containing one rare city name. It appeared that SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM to extend the lexicon of the second stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways for improving the overall performance in the context of an application.
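    To make the two-stage idea concrete, here is a heavily simplified sketch of the control flow. Every function and lexicon in it is a toy stand-in (the real system uses a speech recognizer, a phone-loop OOV model and SpeM operating on phone graphs), so treat it as pseudocode for the architecture rather than an implementation.

```python
# Toy sketch of a two-pass OOV strategy: a small first-stage lexicon plus an
# OOV detector, then an N-best list from a large fallback lexicon extends the
# lexicon for a second pass. All names and scoring functions are hypothetical.
from typing import List

MAJOR_CITIES = {"amsterdam", "rotterdam", "utrecht"}           # stage-1 lexicon
FALLBACK_LEXICON = {"appingedam", "abbenbroek", "aagtekerke"}  # >50,000 names in the paper


def first_pass(utterance_phones: List[str]) -> dict:
    """Stand-in for the stage-1 recognizer: any city not in the small
    lexicon is absorbed by the phone-loop OOV model."""
    word = "".join(utterance_phones)
    if word in MAJOR_CITIES:
        return {"hypothesis": word, "oov_detected": False, "phone_graph": None}
    return {"hypothesis": "<OOV>", "oov_detected": True, "phone_graph": utterance_phones}


def spem_nbest(phone_graph: List[str], n: int = 3000) -> List[str]:
    """Stand-in for SpeM: rank fallback-lexicon entries by a crude
    phone-overlap score and return the N best."""
    target = "".join(phone_graph)
    return sorted(FALLBACK_LEXICON, key=lambda w: -len(set(w) & set(target)))[:n]


def second_pass(phone_graph: List[str], extended_lexicon: List[str]) -> str:
    """Stand-in for the stage-2 recognizer over the extended lexicon."""
    target = "".join(phone_graph)
    return max(extended_lexicon, key=lambda w: len(set(w) & set(target)))


phones = list("appingedam")
result = first_pass(phones)
if result["oov_detected"]:
    nbest = spem_nbest(result["phone_graph"])
    print("Second-pass result:", second_pass(result["phone_graph"], nbest))
else:
    print("First-pass result:", result["hypothesis"])
```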
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer, Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.

    Abstract

    Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences, and identify the words that were recognised on the basis of a trace back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition that is able to model the human capability of ‘early recognition’ by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered as recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM’s performance increases if the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
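    The sketch below illustrates, with invented formulas and thresholds, the two kinds of predictors mentioned in the abstract: activation-based criteria (absolute and relative) and criteria based on how much of the word has already been heard. It is not SpeM's actual activation computation.

```python
# Toy sketch of an "early recognition" decision: a word counts as recognised
# before its acoustic offset once its activation is high enough in absolute
# terms and sufficiently far ahead of the best competitor, and enough of the
# word has been processed. Formulas and thresholds are invented.
import math


def activation(neg_log_likelihood: float, n_phones_processed: int) -> float:
    """Toy word activation: a better (lower) -log likelihood and more processed
    phones give higher activation. Not SpeM's actual formula."""
    return math.exp(-neg_log_likelihood / max(n_phones_processed, 1))


def early_recognised(target_act: float, competitor_act: float,
                     phones_done: int, phones_total: int,
                     abs_threshold: float = 0.5, rel_margin: float = 1.2) -> bool:
    """Combine absolute/relative activation criteria with a criterion based on
    how many phones of the word have already been heard."""
    near_word_offset = phones_done >= phones_total - 2  # illustrative proxy
    return (target_act > abs_threshold
            and target_act > rel_margin * competitor_act
            and near_word_offset)


# Example: 5 of 7 phones of a polysyllabic word have been processed.
tgt = activation(neg_log_likelihood=2.0, n_phones_processed=5)
cmp_ = activation(neg_log_likelihood=4.5, n_phones_processed=5)
print(early_recognised(tgt, cmp_, phones_done=5, phones_total=7))
```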
  • Scharenborg, O. (2007). Reaching over the gap: A review of efforts to link human and automatic speech recognition research. Speech Communication, 49, 336-347. doi:10.1016/j.specom.2007.01.009.

    Abstract

    The fields of human speech recognition (HSR) and automatic speech recognition (ASR) both investigate parts of the speech recognition process and have word recognition as their central issue. Although the research fields appear closely related, their aims and research methods are quite different. Despite these differences there is, however, lately a growing interest in possible cross-fertilisation. Researchers from both ASR and HSR are realising the potential benefit of looking at the research field on the other side of the ‘gap’. In this paper, we provide an overview of past and present efforts to link human and automatic speech recognition research and present an overview of the literature describing the performance difference between machines and human listeners. The focus of the paper is on the mutual benefits to be derived from establishing closer collaborations and knowledge interchange between ASR and HSR. The paper ends with an argument for more and closer collaborations between researchers of ASR and HSR to further improve research in both fields.
  • Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.

    Abstract

    The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the AF values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that in order to come to a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations.
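    For readers who want a concrete picture of such a classifier comparison, the sketch below trains an SVM and an MLP on synthetic frame-level data with five "articulatory feature" values. The data, feature dimensionality and hyperparameters are assumptions for illustration, not the authors' setup.

```python
# Illustrative only: compare an SVM and an MLP on a synthetic multi-value
# articulatory-feature classification task (e.g. a five-valued "manner" feature).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Pretend each sample is an acoustic frame and the label is one AF value.
X, y = make_classification(n_samples=2000, n_features=39, n_informative=20,
                           n_classes=5, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500,
                    random_state=1).fit(X_tr, y_tr)

print(f"SVM frame accuracy: {svm.score(X_te, y_te):.3f}")
print(f"MLP frame accuracy: {mlp.score(X_te, y_te):.3f}")
```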
  • Scheeringa, R., Fries, P., Petersson, K. M., Oostenveld, R., Grothe, I., Norris, D. G., Hagoort, P., & Bastiaansen, M. C. M. (2011). Neuronal dynamics underlying high- and low- frequency EEG oscillations contribute independently to the human BOLD signal. Neuron, 69, 572-583. doi:10.1016/j.neuron.2010.11.044.

    Abstract

    Work on animals indicates that BOLD is preferentially sensitive to local field potentials, and that it correlates most strongly with gamma band neuronal synchronization. Here we investigate how the BOLD signal in humans performing a cognitive task is related to neuronal synchronization across different frequency bands. We simultaneously recorded EEG and BOLD while subjects engaged in a visual attention task known to induce sustained changes in neuronal synchronization across a wide range of frequencies. Trial-by-trial BOLD fluctuations correlated positively with trial-by-trial fluctuations in high EEG gamma power (60–80 Hz) and negatively with alpha and beta power. Gamma power on the one hand, and alpha and beta power on the other hand, independently contributed to explaining BOLD variance. These results indicate that the BOLD-gamma coupling observed in animals can be extrapolated to humans performing a task and that neuronal dynamics underlying high- and low-frequency synchronization contribute independently to the BOLD signal.

    Additional information

    mmc1.pdf
  • Schimke, S. (2011). Variable verb placement in second-language German and French: Evidence from production and elicited imitation of finite and nonfinite negated sentences. Applied Psycholinguistics, 32, 635-685. doi:10.1017/S0142716411000014.

    Abstract

    This study examines the placement of finite and nonfinite lexical verbs and finite light verbs (LVs) in semispontaneous production and elicited imitation of adult beginning learners of German and French. Theories assuming nonnativelike syntactic representations at early stages of development predict variable placement of lexical verbs and consistent placement of LVs, whereas theories assuming nativelike syntax predict variability for nonfinite verbs and consistent placement of all finite verbs. The results show that beginning learners of German have consistent preferences only for LVs. More advanced learners of German and learners of French produce and imitate finite verbs in more variable positions than nonfinite verbs. This is argued to support a structure-building view of second-language development.
  • Schoffelen, J.-M., & Gross, J. (2011). Improving the interpretability of all-to-all pairwise source connectivity analysis in MEG with nonhomogeneous smoothing. Human brain mapping, 32, 426-437. doi:10.1002/hbm.21031.

    Abstract

    Studying the interaction between brain regions is important to increase our understanding of brain function. Magnetoencephalography (MEG) is well suited to investigate brain connectivity, because it provides measurements of activity of the whole brain at very high temporal resolution. Typically, brain activity is reconstructed from the sensor recordings with an inverse method such as a beamformer, and subsequently a connectivity metric is estimated between predefined reference regions-of-interest (ROIs) and the rest of the source space. Unfortunately, this approach relies on a robust estimate of the relevant reference regions and on a robust estimate of the activity in those reference regions, and is not generally applicable to a wide variety of cognitive paradigms. Here, we investigate the possibility to perform all-to-all pairwise connectivity analysis, thus removing the need to define ROIs. Particularly, we evaluate the effect of nonhomogeneous spatial smoothing of differential connectivity maps. This approach is inspired by the fact that the spatial resolution of source reconstructions is typically spatially nonhomogeneous. We use this property to reduce the spatial noise in the cerebro-cerebral connectivity map, thus improving interpretability. Using extensive data simulations we show a superior detection rate and a substantial reduction in the number of spurious connections. We conclude that nonhomogeneous spatial smoothing of cerebro-cerebral connectivity maps could be an important improvement of the existing analysis tools to study neuronal interactions noninvasively.
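    The core idea, smoothing each location of a differential connectivity map with a kernel whose width tracks the local spatial resolution of the source reconstruction, can be sketched in a few lines. The one-dimensional "source space" and the resolution estimates below are invented for illustration and do not reproduce the authors' MEG analysis.

```python
# Toy sketch of nonhomogeneous smoothing: each location of a differential
# connectivity map is smoothed with a Gaussian kernel whose width follows a
# (here invented) local spatial-resolution estimate, rather than a fixed width.
import numpy as np

rng = np.random.default_rng(42)
n_sources = 200
conn_diff = rng.normal(size=n_sources)        # noisy differential connectivity map
positions = np.linspace(0.0, 1.0, n_sources)  # toy 1-D source positions

# Toy resolution estimate: poorer (larger) point-spread towards the ends.
local_width = 0.01 + 0.04 * np.abs(positions - 0.5)

smoothed = np.empty_like(conn_diff)
for i, (pos, width) in enumerate(zip(positions, local_width)):
    w = np.exp(-0.5 * ((positions - pos) / width) ** 2)  # location-specific kernel
    smoothed[i] = np.sum(w * conn_diff) / np.sum(w)

print("raw std: %.2f  smoothed std: %.2f" % (conn_diff.std(), smoothed.std()))
```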
  • Schoffelen, J.-M., Poort, J., Oostenveld, R., & Fries, P. (2011). Selective movement preparation is subserved by selective increases in corticomuscular gamma-band coherence. Journal of Neuroscience, 31, 6750-6758. doi:10.1523/​JNEUROSCI.4882-10.2011.

    Abstract

    Local groups of neurons engaged in a cognitive task often exhibit rhythmically synchronized activity in the gamma band, a phenomenon that likely enhances their impact on downstream areas. The efficacy of neuronal interactions may be enhanced further by interareal synchronization of these local rhythms, establishing mutually well timed fluctuations in neuronal excitability. This notion suggests that long-range synchronization is enhanced selectively for connections that are behaviorally relevant. We tested this prediction in the human motor system, assessing activity from bilateral motor cortices with magnetoencephalography and corresponding spinal activity through electromyography of bilateral hand muscles. A bimanual isometric wrist extension task engaged the two motor cortices simultaneously into interactions and coherence with their respective corresponding contralateral hand muscles. One of the hands was cued before each trial as the response hand and had to be extended further to report an unpredictable visual go cue. We found that, during the isometric hold phase, corticomuscular coherence was enhanced, spatially selective for the corticospinal connection that was effectuating the subsequent motor response. This effect was spectrally selective in the low gamma-frequency band (40–47 Hz) and was observed in the absence of changes in motor output or changes in local cortical gamma-band synchronization. These findings indicate that, in the anatomical connections between the cortex and the spinal cord, gamma-band synchronization is a mechanism that may facilitate behaviorally relevant interactions between these distant neuronal groups.
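    As a minimal illustration of the coherence measure involved, the sketch below estimates coherence between a synthetic cortical signal and a synthetic muscle signal that share a weak 43 Hz drive, and averages it over the 40–47 Hz band reported above. The signals and parameters are invented; the actual analysis used MEG and processed EMG recordings.

```python
# Illustrative only: corticomuscular coherence between two synthetic signals
# sharing a weak 43 Hz drive, summarised over a low gamma band (40-47 Hz).
import numpy as np
from scipy.signal import coherence

fs = 1000.0
t = np.arange(0, 10.0, 1 / fs)
rng = np.random.default_rng(3)

drive = np.sin(2 * np.pi * 43 * t)                      # shared rhythmic drive
meg_like = drive + rng.normal(scale=2.0, size=t.size)   # "cortical" signal
emg_like = 0.8 * drive + rng.normal(scale=2.0, size=t.size)  # "muscle" signal

f, coh = coherence(meg_like, emg_like, fs=fs, nperseg=1024)
gamma_band = (f >= 40) & (f <= 47)
print(f"Mean coherence 40-47 Hz: {coh[gamma_band].mean():.2f}")
```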
  • Schuppler, B., Ernestus, M., Scharenborg, O., & Boves, L. (2011). Acoustic reduction in conversational Dutch: A quantitative analysis based on automatically generated segmental transcriptions [Letter to the editor]. Journal of Phonetics, 39(1), 96-109. doi:10.1016/j.wocn.2010.11.006.

    Abstract

    In spontaneous, conversational speech, words are often reduced compared to their citation forms, such that a word like yesterday may sound like [ˈjɛʃei]. The present chapter investigates such acoustic reduction. The study of reduction needs large corpora that are transcribed phonetically. The first part of this chapter describes an automatic transcription procedure used to obtain such a large phonetically transcribed corpus of Dutch spontaneous dialogues, which is subsequently used for the investigation of acoustic reduction. First, the orthographic transcriptions were adapted for automatic processing. Next, the phonetic transcription of the corpus was created by means of a forced alignment using a lexicon with multiple pronunciation variants per word. These variants were generated by applying phonological and reduction rules to the canonical phonetic transcriptions of the words. The second part of this chapter reports the results of a quantitative analysis of reduction in the corpus on the basis of the generated transcriptions and gives an inventory of segmental reductions in standard Dutch. Overall, we found that reduction is more pervasive in spontaneous Dutch than previously documented.
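    The variant-generation step can be pictured with a toy rule system: optional reduction rules are applied to a canonical transcription, and every licensed combination becomes a pronunciation variant for the forced aligner to choose between. The rules and the transcription below are invented illustrations, not the rule set used in the study.

```python
# Toy sketch of rule-based pronunciation-variant generation for forced alignment.
# The two "reduction rules" and the canonical transcription are invented.
from itertools import product

# Each rule maps a substring of the canonical transcription to its possible
# realisations (including the unreduced one).
REDUCTION_RULES = {
    "@n": ["@n", "n"],   # optional schwa deletion in "-en"
    "t#": ["t#", "#"],   # optional word-final /t/ deletion
}


def variants(canonical: str) -> set:
    """Generate all pronunciation variants licensed by the rules."""
    spans = [(i, src, alts) for src, alts in REDUCTION_RULES.items()
             for i in range(len(canonical)) if canonical.startswith(src, i)]
    spans.sort()
    results = set()
    for choice in product(*[alts for _, _, alts in spans]):
        out, last = [], 0
        for (i, src, _), alt in zip(spans, choice):
            out.append(canonical[last:i] + alt)
            last = i + len(src)
        out.append(canonical[last:])
        results.add("".join(out))
    return results


# Variants of a toy canonical form, with and without schwa and /t/ deletion.
print(sorted(variants("lop@nt#")))
```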
  • Segaert, K., Menenti, L., Weber, K., & Hagoort, P. (2011). A paradox of syntactic priming: Why response tendencies show priming for passives, and response latencies show priming for actives. PLoS One, 6(10), e24209. doi:10.1371/journal.pone.0024209.

    Abstract

    Speakers tend to repeat syntactic structures across sentences, a phenomenon called syntactic priming. Although it has been suggested that repeating syntactic structures should result in speeded responses, previous research has focused on effects in response tendencies. We investigated syntactic priming effects simultaneously in response tendencies and response latencies for active and passive transitive sentences in a picture description task. In Experiment 1, there were priming effects in response tendencies for passives and in response latencies for actives. However, when participants' pre-existing preference for actives was altered in Experiment 2, syntactic priming occurred for both actives and passives in response tendencies as well as in response latencies. This is the first investigation of the effects of structure frequency on both response tendencies and latencies in syntactic priming. We discuss the implications of these data for current theories of syntactic processing.

    Additional information

    Segaert_2011_Supporting_Info.doc
  • Segurado, R., Hamshere, M. L., Glaser, B., Nikolov, I., Moskvina, V., & Holmans, P. A. (2007). Combining linkage data sets for meta-analysis and mega-analysis: the GAW15 rheumatoid arthritis data set. BMC Proceedings, 1(Suppl 1): S104.

    Abstract

    We have used the genome-wide marker genotypes from Genetic Analysis Workshop 15 Problem 2 to explore joint evidence for genetic linkage to rheumatoid arthritis across several samples. The data consisted of four high-density genome scans on samples selected for rheumatoid arthritis. We cleaned the data, removed intermarker linkage disequilibrium, and assembled the samples onto a common genetic map using genome sequence positions as a reference for map interpolation. The individual studies were combined first at the genotype level (mega-analysis) prior to a multipoint linkage analysis on the combined sample, and second using the genome scan meta-analysis method after linkage analysis of each sample. The two approaches were compared, and give strong support to the HLA locus on chromosome 6 as a susceptibility locus. Other regions of interest include loci on chromosomes 11, 2, and 12.
  • Sekine, K. (2011). The role of gesture in the language production of preschool children. Gesture, 11(2), 148-173. doi:10.1075/gest.11.2.03sek.

    Abstract

    The present study investigates the functions of gestures in preschoolers’ descriptions of activities. Specifically, utilizing McNeill’s growth point theory (1992), I examine how gestures contribute to the creation of contrast from the immediate context in the spoken discourse of children. When preschool children describe an activity consisting of multiple actions, like playing on a slide, they often begin with the central action (e.g., sliding-down) instead of with the beginning of the activity sequence (e.g., climbing-up). This study indicates that, in descriptions of activities, gestures may be among the cues the speaker uses for forming a next idea or for repairing the temporal order of the activities described. Gestures may function for the speaker as visual feedback and contribute to the process of utterance formation and provide an index for assessing language development.
  • Senft, G. (2007). [Review of the book Bislama reference grammar by Terry Crowley]. Linguistics, 45(1), 235-239.
  • Senft, G. (2007). [Review of the book Serial verb constructions - A cross-linguistic typology by Alexandra Y. Aikhenvald and Robert M. W. Dixon]. Linguistics, 45(4), 833-840. doi:10.1515/LING.2007.024.
  • Senft, G. (2011). Talking about color and taste on the Trobriand Islands: A diachronic study. The Senses & Society, 6(1), 48-56. doi:10.2752/174589311X12893982233713.

    Abstract

    How stable is the lexicon for perceptual experiences? This article presents results on how the Trobriand Islanders of Papua New Guinea talk about color and taste and whether this has changed over the years. Comparing the results of research on color terms conducted in 1983 with data collected in 2008 revealed that many English color terms have been integrated into the Kilivila lexicon. Members of the younger generation with school education have been the agents of this language change. However, today not all English color terms are produced correctly according to English lexical semantics. The traditional Kilivila color terms bwabwau ‘black’, pupwakau ‘white’, and bweyani ‘red’ are not affected by this change, probably because of the cultural importance of the art of coloring canoes, big yams houses, and bodies. Comparing the 1983 data on taste vocabulary with the results of my 2008 research revealed no substantial change. The conservatism of the Trobriand Islanders' taste vocabulary may be related to the conservatism of their palate. Moreover, they are more interested in displaying and exchanging food than in savoring it. Although English color terms are integrated into the lexicon, Kilivila provides evidence that traditional terms used for talking about color and terms used to refer to tastes have remained stable over time.
  • Seuren, P. A. M. (2007). The theory that dare not speak its name: A rejoinder to Mufwene and Francis. Language Sciences, 29(4), 571-573. doi:10.1016/j.langsci.2007.02.001.
  • Seuren, P. A. M. (1979). [Review of the book Approaches to natural language ed. by K. Hintikka, J. Moravcsik and P. Suppes]. Leuvense Bijdragen, 68, 163-168.
  • Seuren, P. A. M. (2011). How I remember Evert Beth [In memoriam]. Synthese, 179(2), 207-210. doi:10.1007/s11229-010-9777-4.
  • Seuren, P. A. M. (1979). Meer over minder dan hoeft. De Nieuwe Taalgids, 72(3), 236-239.
  • Seuren, P. A. M. (1963). Naar aanleiding van Dr. F. Balk-Smit Duyzentkunst "De Grammatische Functie". Levende Talen, 219, 179-186.
  • Seuren, P. A. M. (1981). Tense and aspect in Sranan. Linguistics, 19(11/12), 1043-1076. doi:10.1515/ling.1981.19.11-12.1043.
  • Seuren, P. A. M. (1981). Taalvariatie en de variabele regel. Gramma, 5(1), 51-54.
  • Shayan, S., Ozturk, O., & Sicoli, M. A. (2011). The thickness of pitch: Crossmodal metaphors in Farsi, Turkish and Zapotec. The Senses & Society, 6(1), 96-105. doi:10.2752/174589311X12893982233911.

    Abstract

    Speakers use vocabulary for spatial verticality and size to describe pitch. A high–low contrast is common to many languages, but others show contrasts like thick–thin and big–small. We consider uses of thick for low pitch and thin for high pitch in three languages: Farsi, Turkish, and Zapotec. We ask how metaphors for pitch structure the sound space. In a language like English, high applies to both high-pitched as well as high-amplitude (loud) sounds; low applies to low-pitched as well as low-amplitude (quiet) sounds. Farsi, Turkish, and Zapotec organize sound in a different way. Thin applies to high pitch and low amplitude and thick to low pitch and high amplitude. We claim that these metaphors have their sources in life experiences. Musical instruments show co-occurrences of higher pitch with thinner, smaller objects and lower pitch with thicker, larger objects. On the other hand, bodily experience can ground the high–low metaphor. A raised larynx produces higher pitch and a lowered larynx lower pitch. Low-pitched sounds resonate the chest, a lower place than high-pitched sounds. While both patterns are available from life experience, linguistic experience privileges one over the other, which results in differential structuring of the multiple dimensions of sound.
  • Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2011). Constraints on the processes responsible for the extrinsic normalization of vowels. Attention, Perception & Psychophysics, 73, 1195-1215. doi:10.3758/s13414-011-0096-8.

    Abstract

    Listeners tune in to talkers’ vowels through extrinsic normalization. We asked here whether this process could be based on compensation for the Long Term Average Spectrum (LTAS) of preceding sounds and whether the mechanisms responsible for normalization are indifferent to the nature of those sounds. If so, normalization should apply to nonspeech stimuli. Previous findings were replicated with first formant (F1) manipulations of speech. Targets on a [pIt]-[pEt] (low-high F1) continuum were labeled as [pIt] more after high-F1 than after low-F1 precursors. Spectrally-rotated nonspeech versions of these materials produced similar normalization. None occurred, however, with nonspeech stimuli that were less speech-like, even though precursor-target LTAS relations were equivalent to those used earlier. Additional experiments investigated the roles of pitch movement, amplitude variation, formant location, and the stimuli's perceived similarity to speech. It appears that normalization is not restricted to speech, but that the nature of the preceding sounds does matter. Extrinsic normalization of vowels is due at least in part to an auditory process which may require familiarity with the spectro-temporal characteristics of speech.
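    Since the account above appeals to the Long-Term Average Spectrum of the precursor, the sketch below shows one standard way to compute an LTAS and to summarise its energy around a first-formant region. The signal, frame length and band edges are arbitrary choices for illustration and are not tied to the study's stimuli.

```python
# Illustrative only: compute a Long-Term Average Spectrum (LTAS) for a synthetic
# precursor signal and report its relative energy in a broad F1 region.
import numpy as np

fs = 16000
rng = np.random.default_rng(7)
precursor = rng.normal(size=fs * 2)  # 2 s of noise standing in for a precursor

# LTAS: average magnitude spectrum over successive Hann-windowed frames.
frame_len, hop = 512, 256
window = np.hanning(frame_len)
frames = [precursor[i:i + frame_len] * window
          for i in range(0, len(precursor) - frame_len, hop)]
ltas = np.mean([np.abs(np.fft.rfft(frame)) for frame in frames], axis=0)
freqs = np.fft.rfftfreq(frame_len, d=1 / fs)

# Relative energy in a broad F1 region (here 300-800 Hz, an arbitrary choice).
f1_band = (freqs >= 300) & (freqs <= 800)
print(f"Relative F1-band energy of precursor: {ltas[f1_band].sum() / ltas.sum():.3f}")
```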
  • Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2011). Listening to different speakers: On the time-course of perceptual compensation for vocal-tract characteristics. Neuropsychologia, 49, 3831-3846. doi:10.1016/j.neuropsychologia.2011.09.044.

    Abstract

    This study used an active multiple-deviant oddball design to investigate the time-course of normalization processes that help listeners deal with between-speaker variability. Electroencephalograms were recorded while Dutch listeners heard sequences of non-words (standards and occasional deviants). Deviants were [ɪpapu] or [ɛpapu], and the standard was [ɪɛpapu], where [ɪɛ] was a vowel that was ambiguous between [ɛ] and [ɪ]. These sequences were presented in two conditions, which differed with respect to the vocal-tract characteristics (i.e., the average 1st formant frequency) of the [papu] part, but not of the initial vowels [ɪ], [ɛ] or [ɪɛ] (these vowels were thus identical across conditions). Listeners more often detected a shift from [ɪɛpapu] to [ɛpapu] than from [ɪɛpapu] to [ɪpapu] in the high F1 context condition; the reverse was true in the low F1 context condition. This shows that listeners' perception of vowels differs depending on the speaker's vocal-tract characteristics, as revealed in the speech surrounding those vowels. Cortical electrophysiological responses reflected this normalization process as early as about 120 ms after vowel onset, which suggests that shifts in perception precede influences due to conscious biases or decision strategies. Listeners' abilities to normalize for speaker-vocal-tract properties are for an important part the result of a process that influences representations of speech sounds early in the speech processing stream.
  • Skoruppa, K., Cristia, A., Peperkamp, S., & Seidl, A. (2011). English-learning infants' perception of word stress patterns [JASA Express Letter]. Journal of the Acoustical Society of America, 130(1), EL50-EL55. doi:10.1121/1.3590169.

    Abstract

    Adult speakers of different free stress languages (e.g., English, Spanish) differ both in their sensitivity to lexical stress and in their processing of suprasegmental and vowel quality cues to stress. In a head-turn preference experiment with a familiarization phase, both 8-month-old and 12-month-old English-learning infants discriminated between initial stress and final stress among lists of Spanish-spoken disyllabic nonwords that were segmentally varied (e.g. [ˈnila, ˈtuli] vs [luˈta, puˈki]). This is evidence that English-learning infants are sensitive to lexical stress patterns, instantiated primarily by suprasegmental cues, during the second half of the first year of life.
  • Slobin, D. I., & Bowerman, M. (2007). Interfaces between linguistic typology and child language research. Linguistic Typology, 11(1), 213-226. doi:10.1515/LINGTY.2007.015.
  • Small, S. L., Hickok, G., Nusbaum, H. C., Blumstein, S., Coslett, H. B., Dell, G., Hagoort, P., Kutas, M., Marantz, A., Pylkkanen, L., Thompson-Schill, S., Watkins, K., & Wise, R. J. (2011). The neurobiology of language: Two years later [Editorial]. Brain and Language, 116(3), 103-104. doi:10.1016/j.bandl.2011.02.004.
  • Snijders, T. M., Kooijman, V., Cutler, A., & Hagoort, P. (2007). Neurophysiological evidence of delayed segmentation in a foreign language. Brain Research, 1178, 106-113. doi:10.1016/j.brainres.2007.07.080.

    Abstract

    Previous studies have shown that segmentation skills are language-specific, making it difficult to segment continuous speech in an unfamiliar language into its component words. Here we present the first study capturing the delay in segmentation and recognition in the foreign listener using ERPs. We compared the ability of Dutch adults and of English adults without knowledge of Dutch (‘foreign listeners’) to segment familiarized words from continuous Dutch speech. We used the known effect of repetition on the event-related potential (ERP) as an index of recognition of words in continuous speech. Our results show that word repetitions in isolation are recognized with equivalent facility by native and foreign listeners, but word repetitions in continuous speech are not. First, words familiarized in isolation are recognized faster by native than by foreign listeners when they are repeated in continuous speech. Second, when words that have previously been heard only in a continuous-speech context re-occur in continuous speech, the repetition is detected by native listeners, but is not detected by foreign listeners. A preceding speech context facilitates word recognition for native listeners, but delays or even inhibits word recognition for foreign listeners. We propose that the apparent difference in segmentation rate between native and foreign listeners is grounded in the difference in language-specific skills available to the listeners.
  • Snowdon, C. T., & Cronin, K. A. (2007). Cooperative breeders do cooperate. Behavioural Processes, 76, 138-141. doi:10.1016/j.beproc.2007.01.016.

    Abstract

    Bergmüller et al. (2007) make an important contribution to studies of cooperative breeding and provide a theoretical basis for linking the evolution of cooperative breeding with cooperative behavior. We have long been involved in empirical research on the only family of nonhuman primates to exhibit cooperative breeding, the Callitrichidae, which includes marmosets and tamarins, with studies in both field and captive contexts. In this paper we expand on three themes from Bergmüller et al. (2007) with empirical data. First we provide data in support of the importance of helpers and the specific benefits that helpers can gain in terms of fitness. Second, we suggest that mechanisms of rewarding helpers are more common and more effective in maintaining cooperative breeding than punishments. Third, we present a summary of our own research on cooperative behavior in cotton-top tamarins (Saguinus oedipus) where we find greater success in cooperative problem solving than has been reported for non-cooperatively breeding species.
  • De Sousa, H. (2011). Changes in the language of perception in Cantonese. The Senses & Society, 6(1), 38-47. doi:10.2752/174589311X12893982233678.

    Abstract

    The way a language encodes sensory experiences changes over time, and often this correlates with other changes in the society. There are noticeable differences in the language of perception between older and younger speakers of Cantonese in Hong Kong and Macau. Younger speakers make finer distinctions in the distal senses, but have less knowledge of the finer categories of the proximal senses than older speakers. The difference in the language of perception between older and younger speakers probably reflects the rapid changes that happened in Hong Kong and Macau in the last fifty years, from an underdeveloped and less literate society, to a developed and highly literate society. In addition to the increase in literacy, the education system has also undergone significant Westernization. Western-style education systems have most likely created finer categorizations in the distal senses. At the same time, the traditional finer distinctions of the proximal senses have become less salient: as the society became more urbanized and sanitized, people have had fewer opportunities to experience the variety of olfactory sensations experienced by their ancestors. This case study investigating interactions between social-economic 'development' and the elaboration of the senses hopefully contributes to the study of the ineffability of senses.
  • Spiteri, E., Konopka, G., Coppola, G., Bomar, J., Oldham, M., Ou, J., Vernes, S. C., Fisher, S. E., Ren, B., & Geschwind, D. (2007). Identification of the transcriptional targets of FOXP2, a gene linked to speech and language, in developing human brain. American Journal of Human Genetics, 81(6), 1144-1157. doi:10.1086/522237.

    Abstract

    Mutations in FOXP2, a member of the forkhead family of transcription factor genes, are the only known cause of developmental speech and language disorders in humans. To date, there are no known targets of human FOXP2 in the nervous system. The identification of FOXP2 targets in the developing human brain, therefore, provides a unique tool with which to explore the development of human language and speech. Here, we define FOXP2 targets in human basal ganglia (BG) and inferior frontal cortex (IFC) by use of chromatin immunoprecipitation followed by microarray analysis (ChIP-chip) and validate the functional regulation of targets in vitro. ChIP-chip identified 285 FOXP2 targets in fetal human brain; statistically significant overlap of targets in BG and IFC indicates a core set of 34 transcriptional targets of FOXP2. We identified targets specific to IFC or BG that were not observed in lung, suggesting important regional and tissue differences in FOXP2 activity. Many target genes are known to play critical roles in specific aspects of central nervous system patterning or development, such as neurite outgrowth, as well as plasticity. Subsets of the FOXP2 transcriptional targets are either under positive selection in humans or differentially expressed between human and chimpanzee brain. This is the first ChIP-chip study to use human brain tissue, making the FOXP2-target genes identified in these studies important to understanding the pathways regulating speech and language in the developing human brain. These data provide the first insight into the functional network of genes directly regulated by FOXP2 in human brain and by evolutionary comparisons, highlighting genes likely to be involved in the development of human higher-order cognitive processes.
  • Stewart, A., Holler, J., & Kidd, E. (2007). Shallow processing of ambiguous pronouns: Evidence for delay. Quarterly Journal of Experimental Psychology, 60, 1680-1696. doi:10.1080/17470210601160807.
  • Stivers, T., & Majid, A. (2007). Questioning children: Interactional evidence of implicit bias in medical interviews. Social Psychology Quarterly, 70(4), 424-441.

    Abstract

    Social psychologists have shown experimentally that implicit race bias can influence an individual's behavior. Implicit bias has been suggested to be more subtle and less subject to cognitive control than more explicit forms of racial prejudice. Little is known about how implicit bias is manifest in naturally occurring social interaction. This study examines the factors associated with physicians selecting children rather than parents to answer questions in pediatric interviews about routine childhood illnesses. Analysis of the data using a Generalized Linear Latent and Mixed Model demonstrates a significant effect of parent race and education on whether physicians select children to answer questions. Black children and Latino children of low-education parents are less likely to be selected to answer questions than their same aged white peers irrespective of education. One way that implicit bias manifests itself in naturally occurring interaction may be through the process of speaker selection during questioning.
  • Swingley, D., & Aslin, R. N. (2007). Lexical competition in young children's word learning. Cognitive Psychology, 54(2), 99-132.

    Abstract

    In two experiments, 1.5-year-olds were taught novel words whose sound patterns were phonologically similar to familiar words (novel neighbors) or were not (novel nonneighbors). Learning was tested using a picture-fixation task. In both experiments, children learned the novel nonneighbors but not the novel neighbors. In addition, exposure to the novel neighbors impaired recognition performance on familiar neighbors. Finally, children did not spontaneously use phonological differences to infer that a novel word referred to a novel object. Thus, lexical competition—inhibitory interaction among words in speech comprehension—can prevent children from using their full phonological sensitivity in judging words as novel. These results suggest that word learning in young children, as in adults, relies not only on the discrimination and identification of phonetic categories, but also on evaluating the likelihood that an utterance conveys a new word.
  • Swingley, D. (2007). Lexical exposure and word-form encoding in 1.5-year-olds. Developmental Psychology, 43(2), 454-464. doi:10.1037/0012-1649.43.2.454.

    Abstract

    In this study, 1.5-year-olds were taught a novel word. Some children were familiarized with the word's phonological form before learning the word's meaning. Fidelity of phonological encoding was tested in a picture-fixation task using correctly pronounced and mispronounced stimuli. Only children with additional exposure in familiarization showed reduced recognition performance given slight mispronunciations relative to correct pronunciations; children with fewer exposures did not. Mathematical modeling of vocabulary exposure indicated that children may hear thousands of words frequently enough for accurate encoding. The results provide evidence compatible with partial failure of phonological encoding at 19 months of age, demonstrate that this limitation in learning does not always hinder word recognition, and show the value of infants' word-form encoding in early lexical development.
  • Swinney, D. A., & Cutler, A. (1979). The access and processing of idiomatic expressions. Journal of Verbal Learning and Verbal Behavior, 18, 523-534. doi:10.1016/S0022-5371(79)90284-6.

    Abstract

    Two experiments examined the nature of access, storage, and comprehension of idiomatic phrases. In both studies a Phrase Classification Task was utilized. In this, reaction times to determine whether or not word strings constituted acceptable English phrases were measured. Classification times were significantly faster to idiom than to matched control phrases. This effect held under conditions involving different categories of idioms, different transitional probabilities among words in the phrases, and different levels of awareness of the presence of idioms in the materials. The data support a Lexical Representation Hypothesis for the processing of idioms.
  • Takashima, A., Nieuwenhuis, I. L. C., Rijpkema, M., Petersson, K. M., Jensen, O., & Fernández, G. (2007). Memory trace stabilization leads to large-scale changes in the retrieval network: A functional MRI study on associative memory. Learning & Memory, 14, 472-479. doi:10.1101/lm.605607.

    Abstract

    Spaced learning with time to consolidate leads to more stable memory traces. However, little is known about the neural correlates of trace stabilization, especially in humans. The present fMRI study contrasted retrieval activity of two well-learned sets of face-location associations, one learned in a massed style and tested on the day of learning (i.e., labile condition) and another learned in a spaced scheme over the course of one week (i.e., stabilized condition). Both sets of associations were retrieved equally well, but the retrieval of stabilized association was faster and accompanied by large-scale changes in the network supporting retrieval. Cued recall of stabilized as compared with labile associations was accompanied by increased activity in the precuneus, the ventromedial prefrontal cortex, the bilateral temporal pole, and left temporo–parietal junction. Conversely, memory representational areas such as the fusiform gyrus for faces and the posterior parietal cortex for locations did not change their activity with stabilization. The changes in activation in the precuneus, which also showed increased connectivity with the fusiform area, are likely to be related to the spatial nature of our task. The activation increase in the ventromedial prefrontal cortex, on the other hand, might reflect a general function in stabilized memory retrieval. This area might succeed the hippocampus in linking distributed neocortical representations.
  • Tendolkar, I., Arnold, J., Petersson, K. M., Weis, S., Brockhaus-Dumke, A., Van Eijndhoven, P., Buitelaar, J., & Fernández, G. (2007). Probing the neural correlates of associative memory formation: A parametrically analyzed event-related functional MRI study. Brain Research, 1142, 159-168. doi:10.1016/j.brainres.2007.01.040.

    Abstract

    The medial temporal lobe (MTL) is crucial for declarative memory formation, but the function of its subcomponents in associative memory formation remains controversial. Most functional imaging studies on this topic are based on a stepwise approach comparing a condition with and one without associative encoding. Extending this approach we applied additionally a parametric analysis by varying the amount of associative memory formation. We found a hippocampal subsequent memory effect of almost similar magnitude regardless of the amount of associations formed. By contrast, subsequent memory effects in rhinal and parahippocampal cortices were parametrically and positively modulated by the amount of associations formed. Our results indicate that the parahippocampal region supports associative memory formation as tested here and the hippocampus adds a general mnemonic operation. This pattern of results might suggest a new interpretation. Instead of having either a fixed division of labor between the hippocampus (associative memory formation) and the rhinal cortex (non-associative memory formation) or a functionally unitary MTL system, in which all substructures are contributing to memory formation in a similar way, we propose that the location where associations are formed within the MTL depends on the kind of associations bound: If visual single-dimension associations, as used here, can already be integrated within the parahippocampal region, the hippocampus might add a general purpose mnemonic operation only. In contrast, if associations have to be formed across widely distributed neocortical representations, the hippocampus may provide a binding operation in order to establish a coherent memory.
  • Terrill, A. (2007). [Review of ‘Andrew Pawley, Robert Attenborough, Jack Golson, and Robin Hide, eds. 2005. Papuan pasts: Cultural, linguistic and biological histories of Papuan-speaking people]. Oceanic Linguistics, 46(1), 313-321. doi:10.1353/ol.2007.0025.
  • Terrill, A. (2011). Languages in contact: An exploration of stability and change in the Solomon Islands. Oceanic Linguistics, 50(2), 312-337.

    Abstract

    The Papuan-Oceanic world has long been considered a hotbed of contact-induced linguistic change, and there have been a number of studies of deep linguistic influence between Papuan and Oceanic languages (like those by Thurston and Ross). This paper assesses the degree and type of contact-induced language change in the Solomon Islands, between the four Papuan languages—Bilua (spoken on Vella Lavella, Western Province), Touo (spoken on southern Rendova, Western Province), Savosavo (spoken on Savo Island, Central Province), and Lavukaleve (spoken in the Russell Islands, Central Province)—and their Oceanic neighbors. First, a claim is made for a degree of cultural homogeneity for Papuan and Oceanic-speaking populations within the Solomons. Second, lexical and grammatical borrowing are considered in turn, in an attempt to identify which elements in each of the four Papuan languages may have an origin in Oceanic languages—and indeed which elements in Oceanic languages may have their origin in Papuan languages. Finally, an assessment is made of the degrees of stability versus change in the Papuan and Oceanic languages of the Solomon Islands.
  • Tesink, C. M. J. Y., Buitelaar, J. K., Petersson, K. M., Van der Gaag, R. J., Teunisse, J.-P., & Hagoort, P. (2011). Neural correlates of language comprehension in autism spectrum disorders: When language conflicts with world knowledge. Neuropsychologia, 49, 1095-1104. doi:10.1016/j.neuropsychologia.2011.01.018.

    Abstract

    In individuals with ASD, difficulties with language comprehension are most evident when higher-level semantic-pragmatic language processing is required, for instance when context has to be used to interpret the meaning of an utterance. Until now, it has been unclear at what level of processing and for what type of context these difficulties in language comprehension occur. Therefore, in the current fMRI study, we investigated the neural correlates of the integration of contextual information during auditory language comprehension in 24 adults with ASD and 24 matched control participants. Different levels of context processing were manipulated by using spoken sentences that were correct or contained either a semantic or world knowledge anomaly. Our findings demonstrated significant differences between the groups in inferior frontal cortex that were only present for sentences with a world knowledge anomaly. Relative to the ASD group, the control group showed significantly increased activation in left inferior frontal gyrus (LIFG) for sentences with a world knowledge anomaly compared to correct sentences. This effect possibly indicates reduced integrative capacities of the ASD group. Furthermore, world knowledge anomalies elicited significantly stronger activation in right inferior frontal gyrus (RIFG) in the control group compared to the ASD group. This additional RIFG activation probably reflects revision of the situation model after new, conflicting information. The lack of recruitment of RIFG is possibly related to difficulties with exception handling in the ASD group.

  • Thiebaut de Schotten, M., Dell'Acqua, F., Forkel, S. J., Simmons, A., Vergani, F., Murphy, D. G. M., & Catani, M. (2011). A lateralized brain network for visuospatial attention. Nature Neuroscience, 14, 1245-1246. doi:10.1038/nn.2905.

    Abstract

    Right hemisphere dominance for visuospatial attention is characteristic of most humans, but its anatomical basis remains unknown. We report the first evidence in humans for a larger parieto-frontal network in the right than left hemisphere, and a significant correlation between the degree of anatomical lateralization and asymmetry of performance on visuospatial tasks. Our results suggest that hemispheric specialization is associated with an unbalanced speed of visuospatial processing.
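
    The anatomy-behaviour correlation described in this abstract can be sketched with a standard lateralization index. The sketch below uses invented tract volumes and behavioural asymmetry scores; it is a generic illustration of such an analysis, not the authors' data or code.

        import numpy as np
        from scipy.stats import pearsonr

        # Hypothetical per-participant tract volumes (arbitrary units) for a
        # parieto-frontal pathway in each hemisphere, plus a behavioural
        # asymmetry score; all values are simulated for illustration.
        rng = np.random.default_rng(0)
        right_vol = rng.normal(10.0, 1.5, size=20)
        left_vol = rng.normal(8.5, 1.5, size=20)
        behav_asym = (right_vol - left_vol) * 0.3 + rng.normal(0, 0.5, size=20)

        # Lateralization index: positive values indicate right-hemisphere dominance.
        li = (right_vol - left_vol) / (right_vol + left_vol)

        r, p = pearsonr(li, behav_asym)
        print(f"r = {r:.2f}, p = {p:.3f}")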

  • Tomasello, M., Carpenter, M., & Liszkowski, U. (2007). A new look at infant pointing. Child Development, 78, 705-722. doi:10.1111/j.1467-8624.2007.01025.x.

    Abstract

    The current article proposes a new theory of infant pointing involving multiple layers of intentionality and shared intentionality. In the context of this theory, evidence is presented for a rich interpretation of prelinguistic communication, that is, one that posits that when 12-month-old infants point for an adult they are in some sense trying to influence her mental states. Moreover, evidence is also presented for a deeply social view in which infant pointing is best understood—on many levels and in many ways—as depending on uniquely human skills and motivations for cooperation and shared intentionality (e.g., joint intentions and attention with others). Children's early linguistic skills are built on this already existing platform of prelinguistic communication.
  • Torreira, F., & Ernestus, M. (2011). Realization of voiceless stops and vowels in conversational French and Spanish. Laboratory Phonology, 2(2), 331-353. doi:10.1515/LABPHON.2011.012.

    Abstract

    The present study compares the realization of intervocalic voiceless stops and vowels surrounded by voiceless stops in conversational Spanish and French. Our data reveal significant differences in how these segments are realized in each language. Spanish voiceless stops tend to have shorter stop closures, display incomplete closures more often, and exhibit more voicing than French voiceless stops. As for vowels, more cases of complete devoicing and greater degrees of partial devoicing were found in French than in Spanish. Moreover, all French vowel types exhibit significantly lower F1 values than their Spanish counterparts. These findings indicate that the extent of reduction that a segment type can undergo in conversational speech can vary significantly across languages. Language differences in coarticulatory strategies and “base-of-articulation” are discussed as possible causes of our observations.
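
    The segmental measures reported here (closure duration, closure completeness, voicing during closure) can be illustrated schematically. The sketch below assumes a closure interval has already been annotated and a frame-level voicing decision is available (for example from a pitch tracker); the interval times and voicing frames are invented, and this is not the authors' measurement script.

        import numpy as np

        # Hypothetical annotation of one intervocalic stop: closure interval in
        # seconds and a 10-ms frame-level voicing track covering the closure.
        closure_start, closure_end = 0.412, 0.478
        voiced_frames = np.array([1, 1, 1, 0, 0, 0, 0])   # 1 = voiced frame

        closure_duration = closure_end - closure_start    # seconds
        voicing_proportion = voiced_frames.mean()          # share of voiced frames

        print(f"closure duration: {closure_duration * 1000:.0f} ms")
        print(f"voicing during closure: {voicing_proportion:.0%}")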
  • Torreira, F., & Ernestus, M. (2011). Vowel elision in casual French: The case of vowel /e/ in the word c’était. Journal of Phonetics, 39(1), 50-58. doi:10.1016/j.wocn.2010.11.003.

    Abstract

    This study investigates the reduction of vowel /e/ in the French word c’était /setε/ ‘it was’. This reduction phenomenon appeared to be highly frequent, as more than half of the occurrences of this word in a corpus of casual French contained few or no acoustic traces of a vowel between [s] and [t]. All our durational analyses clearly supported a categorical absence of vowel /e/ in a subset of c’était tokens. This interpretation was also supported by our finding that the occurrence of complete elision and [e] duration in non-elision tokens were conditioned by different factors. However, spectral measures were consistent with the possibility that a highly reduced /e/ vowel is still present in elision tokens in spite of the durational evidence for categorical elision. We discuss how these findings can be reconciled, and conclude that acoustic analysis of uncontrolled materials can provide valuable information about the mechanisms underlying reduction phenomena in casual speech.
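
    One way to probe whether durational data contain a categorically elided subset, as argued in this abstract, is to compare one- and two-component mixture fits. The Gaussian-mixture sketch below uses simulated durations and is a generic illustration of this idea, not the analysis reported in the paper.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Simulated /e/ durations (ms): a near-zero "elided" subset plus a longer
        # non-elided subset. All values are invented for illustration.
        rng = np.random.default_rng(1)
        durations = np.concatenate([
            np.abs(rng.normal(3, 2, 120)),    # near-zero durations (elision)
            rng.normal(45, 12, 130),          # realized vowels
        ]).reshape(-1, 1)

        # Compare one- vs two-component fits with BIC; a clearly lower BIC for
        # two components is consistent with a categorical elision subset.
        bics = {}
        for k in (1, 2):
            gm = GaussianMixture(n_components=k, random_state=0).fit(durations)
            bics[k] = gm.bic(durations)
        print(bics)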
  • Tufvesson, S. (2011). Analogy-making in the Semai sensory world. The Senses & Society, 6(1), 86-95. doi:10.2752/174589311X12893982233876.

    Abstract

    In the interplay between language, culture, and perception, iconicity structures our representations of what we experience. By examining secondary iconicity in sensory vocabulary, this study draws attention to diagrammatic qualities in human interaction with, and representation of, the sensory world. In Semai (Mon-Khmer, Aslian), spoken on Peninsular Malaysia, sensory experiences are encoded by expressives. Expressives display a diagrammatic iconic structure whereby related sensory experiences receive related linguistic forms. Through this type of form-meaning mapping, gradient relationships in the perceptual world receive gradient linguistic representations. Form-meaning mapping such as this enables speakers to categorize sensory events into types and subtypes of perceptions, and to provide sensory specifics of various kinds. This study illustrates how a diagrammatic iconic structure within sensory vocabulary creates networks of relational sensory knowledge. Through analogy, speakers draw on this knowledge to comprehend sensory referents and create new unconventional forms, which are easily understood by other members of the community. Analogy-making such as this allows speakers to capture fine-grained differences between sensory events, and effectively guide each other through the Semai sensory landscape.
  • Tuinman, A., Mitterer, H., & Cutler, A. (2011). Perception of intrusive /r/ in English by native, cross-language and cross-dialect listeners. Journal of the Acoustical Society of America, 130, 1643-1652. doi:10.1121/1.3619793.

    Abstract

    In sequences such as law and order, speakers of British English often insert /r/ between law and and. Acoustic analyses revealed such “intrusive” /r/ to be significantly shorter than canonical /r/. In a 2AFC experiment, native listeners heard British English sentences in which /r/ duration was manipulated across a word boundary [e.g., saw (r)ice], and orthographic and semantic factors were varied. These listeners responded categorically on the basis of acoustic evidence for /r/ alone, reporting ice after short /r/s, rice after long /r/s; orthographic and semantic factors had no effect. Dutch listeners proficient in English who heard the same materials relied less on durational cues than the native listeners, and were affected by both orthography and semantic bias. American English listeners produced intermediate responses to the same materials, being sensitive to duration (less so than native, more so than Dutch listeners), and to orthography (less so than the Dutch), but insensitive to the semantic manipulation. Listeners from language communities without common use of intrusive /r/ may thus interpret intrusive /r/ as canonical /r/, with a language difference increasing this propensity more than a dialect difference. Native listeners, however, efficiently distinguish intrusive from canonical /r/ by exploiting the relevant acoustic variation.
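
    The categorical dependence of responses on /r/ duration described here can be summarized with a psychometric function. The logistic fit below uses invented response proportions and duration steps; it is a sketch of this standard analysis, not the authors' code.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical 2AFC data: proportion of "rice" responses as a function of
        # /r/ duration (ms). Values are invented for illustration.
        durations = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)
        p_rice = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97])

        def logistic(x, x0, k):
            """Two-parameter logistic psychometric function."""
            return 1.0 / (1.0 + np.exp(-k * (x - x0)))

        (x0, k), _ = curve_fit(logistic, durations, p_rice, p0=[45.0, 0.1])
        print(f"category boundary about {x0:.1f} ms, slope = {k:.3f}")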
  • De Vaan, L., Ernestus, M., & Schreuder, R. (2011). The lifespan of lexical traces for novel morphologically complex words. The Mental Lexicon, 6, 374-392. doi:10.1075/ml.6.3.02dev.

    Abstract

    This study investigates the lifespans of lexical traces for novel morphologically complex words. In two visual lexical decision experiments, a neologism was either primed by itself or by its stem. The target occurred 40 trials after the prime (Experiments 1 & 2), after a 12 hour delay (Experiment 1), or after a one week delay (Experiment 2). Participants recognized neologisms more quickly if they had seen them before in the experiment. These results show that memory traces for novel morphologically complex words already come into existence after a very first exposure and that they last for at least a week. We did not find evidence for a role of sleep in the formation of memory traces. Interestingly, Base Frequency appeared to play a role in the processing of the neologisms even when they were presented a second time and had their own memory traces.
  • Van Leeuwen, T. M., Den Ouden, H. E. M., & Hagoort, P. (2011). Effective connectivity determines the nature of subjective experience in grapheme-color synesthesia. Journal of Neuroscience, 31, 9879-9884. doi:10.1523/JNEUROSCI.0569-11.2011.

    Abstract

    Synesthesia provides an elegant model to investigate neural mechanisms underlying individual differences in subjective experience in humans. In grapheme–color synesthesia, written letters induce color sensations, accompanied by activation of color area V4. Competing hypotheses suggest that enhanced V4 activity during synesthesia is either induced by direct bottom-up cross-activation from grapheme processing areas within the fusiform gyrus, or indirectly via higher-order parietal areas. Synesthetes differ in the way synesthetic color is perceived: “projector” synesthetes experience color externally colocalized with a presented grapheme, whereas “associators” report an internally evoked association. Using dynamic causal modeling for fMRI, we show that V4 cross-activation during synesthesia was induced via a bottom-up pathway (within fusiform gyrus) in projector synesthetes, but via a top-down pathway (via parietal lobe) in associators. These findings show how altered coupling within the same network of active regions leads to differences in subjective experience. Our findings reconcile the two most influential cross-activation accounts of synesthesia.
  • Van de Geer, J. P., & Levelt, W. J. M. (1963). Detection of visual patterns disturbed by noise: An exploratory study. Quarterly Journal of Experimental Psychology, 15, 192-204. doi:10.1080/17470216308416324.

    Abstract

    An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns) led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight into the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially to patterns with little spread. Second, this effect shows connections with the factors obtained from analysis of the semantic ratings, e.g. easily disturbed patterns show a large drop in the semantic regularity factor when only a little noise is added.
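
    The stimulus construction described here (dot patterns perturbed by positional noise) can be sketched generically. The code below jitters an idealized pattern with Gaussian noise; the pattern, noise level, and values are purely illustrative and not a reconstruction of the original 1963 stimuli.

        import numpy as np

        # Hypothetical example: perturb an ideal dot pattern with Gaussian
        # positional noise, as a generic analogue of a "disturbed" pattern.
        rng = np.random.default_rng(2)

        # Ideal pattern: eight dots placed on a circle.
        angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
        ideal = np.column_stack([np.cos(angles), np.sin(angles)])

        noise_sd = 0.15      # larger values yield a more disturbed pattern
        disturbed = ideal + rng.normal(0, noise_sd, size=ideal.shape)

        print(np.round(disturbed, 2))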
  • Van Berkum, J. J. A., Koornneef, A. W., Otten, M., & Nieuwland, M. S. (2007). Establishing reference in language comprehension: An electrophysiological perspective. Brain Research, 1146, 158-171. doi:10.1016/j.brainres.2006.06.091.

    Abstract

    The electrophysiology of language comprehension has long been dominated by research on syntactic and semantic integration. However, to understand expressions like "he did it" or "the little girl", combining word meanings in accordance with semantic and syntactic constraints is not enough: readers and listeners also need to work out what or who is being referred to. We review our event-related brain potential research on the processes involved in establishing reference, and present a new experiment in which we examine when and how the implicit causality associated with specific interpersonal verbs affects the interpretation of a referentially ambiguous pronoun. The evidence suggests that upon encountering a singular noun or pronoun, readers and listeners immediately inspect their situation model for a suitable discourse entity, such that they can discriminate between having too many, too few, or exactly the right number of referents within at most half a second. Furthermore, our implicit causality findings indicate that a fragment like "David praised Linda because..." can immediately foreground a particular referent, to the extent that a subsequent "he" is at least initially construed as a syntactic error. In all, our brain potential findings suggest that referential processing is highly incremental, and not necessarily contingent upon the syntax. In addition, they demonstrate that we can use ERPs to relatively selectively keep track of how readers and listeners establish reference.
