Publications

  • Perniss, P. M., Thompson, R. L., & Vigliocco, G. (2010). Iconicity as a general property of language: Evidence from spoken and signed languages [Review article]. Frontiers in Psychology, 1, E227. doi:10.3389/fpsyg.2010.00227.

    Abstract

    Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to “hook up” to motor and perceptual experience.
  • Perniss, P. M., & Zeshan, U. (2008). Possessive and existential constructions in Kata Kolok (Bali). In Possessive and existential constructions in sign languages. Nijmegen: Ishara Press.
  • Perniss, P. M., & Zeshan, U. (2008). Possessive and existential constructions: Introduction and overview. In Possessive and existential constructions in sign languages (pp. 1-31). Nijmegen: Ishara Press.
  • Petersson, K. M. (1998). Comments on a Monte Carlo approach to the analysis of functional neuroimaging data. NeuroImage, 8, 108-112.
  • Petersson, K. M., & Reis, A. (2006). Characteristics of illiterate and literate cognitive processing: Implications of brain- behavior co-constructivism. In P. B. Baltes, P. Reuter-Lorenz, & F. Rösler (Eds.), Lifespan development and the brain: The perspective of biocultural co-constructivism (pp. 279-305). Cambridge: Cambridge University Press.

    Abstract

    Literacy and education represent essential aspects of contemporary society and subserve important aspects of socialization and cultural transmission. The study of illiterate subjects represents one approach to investigate the interactions between neurobiological and cultural factors in cognitive development, individual learning, and their influence on the functional organization of the brain. In this chapter we review some recent cognitive, neuroanatomic, and functional neuroimaging results indicating that formal education influences important aspects of the human brain. Taken together, this provides strong support for the idea that the brain is modulated by literacy and formal education, which in turn change the brain's capacity to interact with its environment, including the individual's contemporary culture. In other words, the individual is able to participate in, interact with, and actively contribute to the process of cultural transmission in new ways through acquired cognitive skills.
  • Petersson, K. M., Gisselgard, J., Gretzer, M., & Ingvar, M. (2006). Interaction between a verbal working memory network and the medial temporal lobe. NeuroImage, 33(4), 1207-1217. doi:10.1016/j.neuroimage.2006.07.042.

    Abstract

    The irrelevant speech effect illustrates that sounds that are irrelevant to a visually presented short-term memory task still interfere with neuronal function. In the present study we explore the functional and effective connectivity of such interference. The functional connectivity analysis suggested an interaction between the level of irrelevant speech and the correlation between in particular the left superior temporal region, associated with verbal working memory, and the left medial temporal lobe. Based on this psycho-physiological interaction, and to broaden the understanding of this result, we performed a network analysis, using a simple network model for verbal working memory, to analyze its interaction with the medial temporal lobe memory system. The results showed dissociations in terms of network interactions between frontal as well as parietal and temporal areas in relation to the medial temporal lobe. The results of the present study suggest that a transition from phonological loop processing towards an engagement of episodic processing might take place during the processing of interfering irrelevant sounds. We speculate that, in response to the irrelevant sounds, this reflects a dynamic shift in processing as suggested by a closer interaction between a verbal working memory system and the medial temporal lobe memory system.
  • Petersson, K. M. (2005). On the relevance of the neurobiological analogue of the finite-state architecture. Neurocomputing, 65(66), 825-832. doi:10.1016/j.neucom.2004.10.108.

    Abstract

    We present two simple arguments for the potential relevance of a neurobiological analogue of the finite-state architecture. The first assumes the classical cognitive framework, is well-known, and is based on the assumption that the brain is finite with respect to its memory organization. The second is formulated within a general dynamical systems framework and is based on the assumption that the brain sustains some level of noise and/or does not utilize infinite precision processing. We briefly review the classical cognitive framework based on Church–Turing computability and non-classical approaches based on analog processing in dynamical systems. We conclude that the dynamical neurobiological analogue of the finite-state architecture appears to be relevant, at least at an implementational level, for cognitive brain systems.
  • Petrich, P., Piedrasanta, R., Figuerola, H., & Le Guen, O. (2010). Variantes y variaciones en la percepción de los antepasados entre los Mayas. In A. Monod Becquelin, A. Breton, & M. H. Ruz (Eds.), Figuras Mayas de la diversidad (pp. 255-275). Mérida, Mexico: Universidad autónoma de México.
  • Petrovic, P., Kalso, E., Petersson, K. M., Andersson, J., Fransson, P., & Ingvar, M. (2010). A prefrontal non-opioid mechanism in placebo analgesia. Pain, 150, 59-65. doi:10.1016/j.pain.2010.03.011.

    Abstract

    Behavioral studies have suggested that placebo analgesia is partly mediated by the endogenous opioid system. Expanding on these results we have shown that the opioid-receptor-rich rostral anterior cingulate cortex (rACC) is activated in both placebo and opioid analgesia. However, there are also differences between the two treatments. While opioids have direct pharmacological effects, acting on the descending pain inhibitory system, placebo analgesia depends on neocortical top-down mechanisms. An important difference may be that expectations are met to a lesser extent in placebo treatment as compared with a specific treatment, yielding a larger error signal. As these processes previously have been shown to influence other types of perceptual experiences, we hypothesized that they also may drive placebo analgesia. Imaging studies suggest that lateral orbitofrontal cortex (lObfc) and ventrolateral prefrontal cortex (vlPFC) are involved in processing expectation and error signals. We re-analyzed two independent functional imaging experiments related to placebo analgesia and emotional placebo to probe for a differential processing in these regions during placebo treatment vs. opioid treatment and to test if this activity is associated with the placebo response. In the first dataset lObfc and vlPFC showed an enhanced activation in placebo analgesia vs. opioid analgesia. Furthermore, the rACC activity co-varied with the prefrontal regions in the placebo condition specifically. A similar correlation between rACC and vlPFC was reproduced in another dataset involving emotional placebo and correlated with the degree of the placebo effect. Our results thus support that placebo is different from specific treatment with a prefrontal top-down influence on rACC.
  • Piekema, C., Kessels, R. P. C., Mars, R. B., Petersson, K. M., & Fernández, G. (2006). The right hippocampus participates in short-term memory maintenance of object–location associations. NeuroImage, 33(1), 374-382. doi:10.1016/j.neuroimage.2006.06.035.

    Abstract

    Doubts have been cast on the strict dissociation between short- and long-term memory systems. Specifically, several neuroimaging studies have shown that the medial temporal lobe, a region almost invariably associated with long-term memory, is involved in active short-term memory maintenance. Furthermore, a recent study in hippocampally lesioned patients has shown that the hippocampus is critically involved in associating objects and their locations, even when the delay period lasts only 8 s. However, the critical feature that causes the medial temporal lobe, and in particular the hippocampus, to participate in active maintenance is still unknown. This study was designed in order to explore hippocampal involvement in active maintenance of spatial and non-spatial associations. Eighteen participants performed a delayed-match-to-sample task in which they had to maintain either object–location associations, color–number associations, single colors, or single locations. Whole-brain activity was measured using event-related functional magnetic resonance imaging and analyzed using a random effects model. Right lateralized hippocampal activity was evident when participants had to maintain object–location associations, but not when they had to maintain object–color associations or single items. The present results suggest a hippocampal involvement in active maintenance when feature combinations that include spatial information have to be maintained online.
  • Pijnacker, J., Geurts, B., Van Lambalgen, M., Buitelaar, J., & Hagoort, P. (2010). Exceptions and anomalies: An ERP study on context sensitivity in autism. Neuropsychologia, 48, 2940-2951. doi:10.1016/j.neuropsychologia.2010.06.003.

    Abstract

    Several studies have demonstrated that people with ASD and intact language skills still have problems processing linguistic information in context. Given this evidence for reduced sensitivity to linguistic context, the question arises how contextual information is actually processed by people with ASD. In this study, we used event-related brain potentials (ERPs) to examine context sensitivity in high-functioning adults with autistic disorder (HFA) and Asperger syndrome at two levels: at the level of sentence processing and at the level of solving reasoning problems. We found that sentence context as well as reasoning context had an immediate ERP effect in adults with Asperger syndrome, as in matched controls. Both groups showed a typical N400 effect and a late positive component for the sentence conditions, and a sustained negativity for the reasoning conditions. In contrast, the HFA group demonstrated neither an N400 effect nor a sustained negativity. However, the HFA group showed a late positive component which was larger for semantically anomalous sentences than congruent sentences. Because sentence context had a modulating effect in a later phase, semantic integration is perhaps less automatic in HFA, and presumably more elaborate processes are needed to arrive at a sentence interpretation.
  • Pillas, D., Hoggart, C. J., Evans, D. M., O'Reilly, P. F., Sipilä, K., Lähdesmäki, R., Millwood, I. Y., Kaakinen, M., Netuveli, G., Blane, D., Charoen, P., Sovio, U., Pouta, A., Freimer, N., Hartikainen, A.-L., Laitinen, J., Vaara, S., Glaser, B., Crawford, P., Timpson, N. J., Ring, S. M., Deng, G., Zhang, W., McCarthy, M. I., Deloukas, P., Peltonen, L., Elliott, P., Coin, L. J. M., Smith, G. D., & Jarvelin, M.-R. (2010). Genome-wide association study reveals multiple loci associated with primary tooth development during infancy. PLoS Genetics, 6(2): e1000856. doi:10.1371/journal.pgen.1000856.

    Abstract

    Tooth development is a highly heritable process which relates to other growth and developmental processes, and which interacts with the development of the entire craniofacial complex. Abnormalities of tooth development are common, with tooth agenesis being the most common developmental anomaly in humans. We performed a genome-wide association study of time to first tooth eruption and number of teeth at one year in 4,564 individuals from the 1966 Northern Finland Birth Cohort (NFBC1966) and 1,518 individuals from the Avon Longitudinal Study of Parents and Children (ALSPAC). We identified 5 loci at P < 5x10(-8), and 5 with suggestive association (P < 5x10(-6)). The loci included several genes with links to tooth and other organ development (KCNJ2, EDA, HOXB2, RAD51L1, IGF2BP1, HMGA2, MSRB3). Genes at four of the identified loci are implicated in the development of cancer. A variant within the HOXB gene cluster was associated with occlusion defects requiring orthodontic treatment by age 31 years.
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1998). Comparing different models of the development of the English verb category. Linguistics, 36(4), 807-830. doi:10.1515/ling.1998.36.4.807.

    Abstract

    In this study data from the first six months of 12 children's multiword speech were used to test the validity of Valian's (1991) syntactic performance-limitation account and Tomasello's (1992) verb-island account of early multiword speech with particular reference to the development of the English verb category. The results provide evidence for appropriate use of verb morphology, auxiliary verb structures, pronoun case marking, and SVO word order from quite early in development. However, they also demonstrate a great deal of lexical specificity in the children's use of these systems, evidenced by a lack of overlap in the verbs to which different morphological markers were applied, a lack of overlap in the verbs with which different auxiliary verbs were used, a disproportionate use of the first person singular nominative pronoun I, and a lack of overlap in the lexical items that served as the subjects and direct objects of transitive verbs. These findings raise problems for both a syntactic performance-limitation account and a strong verb-island account of the data and suggest the need to develop a more general lexicalist account of early multiword speech that explains why some words come to function as "islands" of organization in the child's grammar and others do not.
  • Pine, J. M., Rowland, C. F., Lieven, E. V., & Theakston, A. L. (2005). Testing the Agreement/Tense Omission Model: Why the data on children's use of non-nominative 3psg subjects count against the ATOM. Journal of Child Language, 32(2), 269-289. doi:10.1017/S0305000905006860.

    Abstract

    One of the most influential recent accounts of pronoun case-marking errors in young children's speech is Schütze & Wexler's (1996) Agreement/Tense Omission Model (ATOM). The ATOM predicts that the rate of agreeing verbs with non-nominative subjects will be so low that such errors can be reasonably disregarded as noise in the data. The present study tests this prediction on data from 12 children between the ages of 1;8.22 and 3;0.10. This is done, first, by identifying children who produced a reasonably large number of non-nominative 3psg subjects; second, by estimating the expected rate of agreeing verbs with masculine and feminine non-nominative subjects in these children's speech; and, third, by examining the actual rate at which agreeing verb forms occurred with non-nominative subjects in those areas of the data in which the expected error rate was significantly greater than 10%. The results show, first, that only three of the children produced enough non-nominative subjects to allow a reasonable test of the ATOM to be made; second, that for all three of these children, the only area of the data in which the expected frequency of agreeing verbs with non-nominative subjects was significantly greater than 10% was their use of feminine case-marked subjects; and third, that for all three of these children, the rate of agreeing verbs with non-nominative feminine subjects was over 30%. These results raise serious doubts about the claim that children's use of non-nominative subjects can be explained in terms of AGR optionality, and suggest the need for a model of pronoun case-marking error that can explain why some children produce agreeing verb forms with non-nominative subjects as often as they do.
  • Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2005). Articulatory planning is continuous and sensitive to informational redundancy. Phonetica, 62(2-4), 146-159. doi:10.1159/000090095.

    Abstract

    This study investigates the relationship between word repetition, predictability from neighbouring words, and articulatory reduction in Dutch. For the seven most frequent words ending in the adjectival suffix -lijk, 40 occurrences were randomly selected from a large database of face-to-face conversations. Analysis of the selected tokens showed that the degree of articulatory reduction (as measured by duration and number of realized segments) was affected by repetition, predictability from the previous word and predictability from the following word. Interestingly, not all of these effects were significant across morphemes and target words. Repetition effects were limited to suffixes, while effects of predictability from the previous word were restricted to the stems of two of the seven target words. Predictability from the following word affected the stems of all target words equally, but not all suffixes. The implications of these findings for models of speech production are discussed.
  • Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2005). Lexical frequency and acoustic reduction in spoken Dutch. Journal of the Acoustical Society of America, 118(4), 2561-2569. doi:10.1121/1.2011150.

    Abstract

    This study investigates the effects of lexical frequency on the durational reduction of morphologically complex words in spoken Dutch. The hypothesis that high-frequency words are more reduced than low-frequency words was tested by comparing the durations of affixes occurring in different carrier words. Four Dutch affixes were investigated, each occurring in a large number of words with different frequencies. The materials came from a large database of face-to-face conversations. For each word containing a target affix, one token was randomly selected for acoustic analysis. Measurements were made of the duration of the affix as a whole and the durations of the individual segments in the affix. For three of the four affixes, a higher frequency of the carrier word led to shorter realizations of the affix as a whole, individual segments in the affix, or both. Other relevant factors were the sex and age of the speaker, segmental context, and speech rate. To accommodate for these findings, models of speech production should allow word frequency to affect the acoustic realizations of lower-level units, such as individual speech sounds occurring in affixes.
  • Pluymaekers, M., Ernestus, M., Baayen, R. H., & Booij, G. (2010). Morphological effects on fine phonetic detail: The case of Dutch -igheid. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory Phonology 10 (pp. 511-532). Berlin: De Gruyter.
  • Poletiek, F. H. (2006). De dwingende macht van een Goed Verhaal [Boekbespreking van Vincent plast op de grond:Nachtmerries in het Nederlands recht door W.A. Wagenaar]. De Psycholoog, 41, 460-462.
  • Poletiek, F. H. (1998). De geest van de jury. Psychologie en Maatschappij, 4, 376-378.
  • Poletiek, F. H., & Van den Bos, E. J. (2005). Het onbewuste is een dader met een motief. De Psycholoog, 40(1), 11-17.
  • Poletiek, F. H. (2008). Het probleem van escalerende beschuldigingen [Boekbespreking van Kindermishandeling door H. Crombag en den Hartog]. Maandblad voor Geestelijke Volksgezondheid, (2), 163-166.
  • Poletiek, F. H. (2006). Natural sampling of stimuli in (artificial) grammar learning. In K. Fiedler, & P. Juslin (Eds.), Information sampling and adaptive cognition (pp. 440-455). Cambridge: Cambridge University Press.
  • Poletiek, F. H. (2005). The proof of the pudding is in the eating: Translating Popper's philosophy into a model for testing behaviour. In K. I. Manktelow, & M. C. Chung (Eds.), Psychology of reasoning: Theoretical and historical perspectives (pp. 333-347). Hove: Psychology Press.
  • St Pourcain, B., Wang, K., Glessner, J. T., Golding, J., Steer, C., Ring, S. M., Skuse, D. H., Grant, S. F. A., Hakonarson, H., & Davey Smith, G. (2010). Association between a high-risk autism locus on 5p14 and social communication spectrum phenotypes in the general population. American Journal of Psychiatry, 167(11), 1364-1372. doi:10.1176/appi.ajp.2010.09121789.

    Abstract

    Objective: Recent genome-wide analysis identified a genetic variant on 5p14.1 (rs4307059), which is associated with risk for autism spectrum disorder. This study investigated whether rs4307059 also operates as a quantitative trait locus underlying a broader autism phenotype in the general population, focusing specifically on the social communication aspect of the spectrum. Method: Study participants were 7,313 children from the Avon Longitudinal Study of Parents and Children. Single-trait and joint-trait genotype associations were investigated for 29 measures related to language and communication, verbal intelligence, social interaction, and behavioral adjustment, assessed between ages 3 and 12 years. Analyses were performed in one-sided or directed mode and adjusted for multiple testing, trait interrelatedness, and random genotype dropout. Results: Single phenotype analyses showed that an increased load of the rs4307059 risk allele is associated with stereotyped conversation and lower pragmatic communication skills, as measured by the Children's Communication Checklist (at a mean age of 9.7 years). In addition, a trend toward a higher frequency of identification of special educational needs (at a mean age of 11.8 years) was observed. Variation at rs4307059 was also associated with the phenotypic profile of studied traits. This joint signal was fully explained neither by single-trait associations nor by overall behavioral adjustment problems but suggested a combined effect, which manifested through multiple sub-threshold social, communicative, and cognitive impairments. Conclusions: Our results suggest that common variation at 5p14.1 is associated with social communication spectrum phenotypes in the general population and support the role of rs4307059 as a quantitative trait locus for autism spectrum disorder.
  • Praamstra, P., Meyer, A. S., & Levelt, W. J. M. (1994). Neurophysiological manifestations of auditory phonological processing: Latency variation of a negative ERP component timelocked to phonological mismatch. Journal of Cognitive Neuroscience, 6(3), 204-219. doi:10.1162/jocn.1994.6.3.204.

    Abstract

    Two experiments examined phonological priming effects on reaction times, error rates, and event-related brain potential (ERP) measures in an auditory lexical decision task. In Experiment 1 related prime-target pairs rhymed, and in Experiment 2 they alliterated (i.e., shared the consonantal onset and vowel). Event-related potentials were recorded in a delayed response task. Reaction times and error rates were obtained both for the delayed and an immediate response task. The behavioral data of Experiment 1 provided evidence for phonological facilitation of word, but not of nonword decisions. The brain potentials were more negative to unrelated than to rhyming word-word pairs between 450 and 700 msec after target onset. This negative enhancement was not present for word-nonword pairs. Thus, the ERP results match the behavioral data. The behavioral data of Experiment 2 provided no evidence for phonological facilitation. However, between 250 and 450 msec after target onset, i.e., considerably earlier than in Experiment 1, brain potentials were more negative for unrelated than for alliterating word-word and word-nonword pairs. It is argued that the ERP effects in the two experiments could be modulations of the same underlying component, possibly the N400. The difference in the timing of the effects is likely to be due to the fact that the shared segments in related stimulus pairs appeared in different word positions in the two experiments.
  • Praamstra, P., Stegeman, D. F., Cools, A. R., Meyer, A. S., & Horstink, M. W. I. M. (1998). Evidence for lateral premotor and parietal overactivity in Parkinson's disease during sequential and bimanual movements: A PET study. Brain, 121, 769-772. doi:10.1093/brain/121.4.769.
  • Proios, H., Asaridou, S. S., & Brugger, P. (2008). Random number generation in patients with aphasia: A test of executive functions. Acta Neuropsychologica, 6(2), 157-168.

    Abstract

    Randomization performance was studied using the "Mental Dice Task" in 20 patients with aphasia (APH) and 101 elderly normal control subjects (NC). The produced sequences were compared to 100 computer-generated pseudorandom sequences with respect to 7 measures of sequential bias. The performance of APH differed significantly from NC participants, according to all but one measure, i.e. Turning Point Index (points of change between ascending and descending sequences). NC participants differed significantly from the computer generated sequences, according to all measures of randomness. Finally, APH differed significantly from the computer simulator, according to all measures but mean Repetition Gap score (gap between a digit and its reoccurrence). Despite the heterogeneity of our APH group, there were no significant differences in randomization performance between patients with different language impairments. All the APH displayed a distinct performance profile, with more response stereotypy, counting tendencies, and inhibition problems, as hypothesised, while at the same time responding more randomly than NC by showing less of a cycling strategy and more number repetitions.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2006). Lexical and default stress assignment in reading Greek. Journal of research in reading, 29(4), 418-432. doi:10.1111/j.1467-9817.2006.00316.x.

    Abstract

    Greek is a language with lexical stress that marks stress orthographically with a special diacritic. Thus, the orthography and the lexicon constitute potential sources of stress assignment information in addition to any possible general default metrical pattern. Here, we report two experiments with secondary education children reading aloud pseudo-word stimuli, in which we manipulated the availability of lexical (using stimuli resembling particular words) and visual (existence and placement of the diacritic) information. The reliance on the diacritic was found to be imperfect. Strong lexical effects as well as a default metrical pattern stressing the penultimate syllable were revealed. Reading models must be extended to account for multisyllabic word reading including, in particular, stress assignment based on the interplay among multiple possible sources of information.
  • Puccini, D., Hassemer, M., Salomo, D., & Liszkowski, U. (2010). The type of shared activity shapes caregiver and infant communication. Gesture, 10(2/3), 279-297. doi:10.1075/gest.10.2-3.08puc.

    Abstract

    For the beginning language learner, communicative input is not based on linguistic codes alone. This study investigated two extralinguistic factors which are important for infants’ language development: the type of ongoing shared activity and non-verbal, deictic gestures. The natural interactions of 39 caregivers and their 12-month-old infants were recorded in two semi-natural contexts: a free play situation based on action and manipulation of objects, and a situation based on regard of objects, broadly analogous to an exhibit. Results show that the type of shared activity structures both caregivers’ language usage and caregivers’ and infants’ gesture usage. Further, there is a specific pattern with regard to how caregivers integrate speech with particular deictic gesture types. The findings demonstrate a pervasive influence of shared activities on human communication, even before language has emerged. The type of shared activity and caregivers’ systematic integration of specific forms of deictic gestures with language provide infants with a multimodal scaffold for a usage-based acquisition of language.
  • Pyykkönen, P., & Järvikivi, J. (2010). Activation and persistence of implicit causality information in spoken language comprehension. Experimental Psychology, 57, 5-16. doi:10.1027/1618-3169/a000002.

    Abstract

    A visual world eye-tracking study investigated the activation and persistence of implicit causality information in spoken language comprehension. We showed that people infer the implicit causality of verbs as soon as they encounter such verbs in discourse, as is predicted by proponents of the immediate focusing account (Greene & McKoon, 1995; Koornneef & Van Berkum, 2006; Van Berkum, Koornneef, Otten, & Nieuwland, 2007). Interestingly, we observed activation of implicit causality information even before people encountered the causal conjunction. However, while implicit causality information was persistent as the discourse unfolded, it did not have a privileged role as a focusing cue immediately at the ambiguous pronoun when people were resolving its antecedent. Instead, our study indicated that implicit causality does not affect all referents to the same extent, rather it interacts with other cues in the discourse, especially when one of the referents is already prominently in focus.
  • Pyykkönen, P., Matthews, D., & Järvikivi, J. (2010). Three-year-olds are sensitive to semantic prominence during online spoken language comprehension: A visual world study of pronoun resolution. Language and Cognitive Processes, 25, 115-129. doi:10.1080/01690960902944014.

    Abstract

    Recent evidence from adult pronoun comprehension suggests that semantic factors such as verb transitivity affect referent salience and thereby anaphora resolution. We tested whether the same semantic factors influence pronoun comprehension in young children. In a visual world study, 3-year-olds heard stories that began with a sentence containing either a high or a low transitivity verb. Looking behaviour to pictures depicting the subject and object of this sentence was recorded as children listened to a subsequent sentence containing a pronoun. Children showed a stronger preference to look to the subject as opposed to the object antecedent in the low transitivity condition. In addition there were general preferences (1) to look to the subject in both conditions and (2) to look more at both potential antecedents in the high transitivity condition. This suggests that children, like adults, are affected by semantic factors, specifically semantic prominence, when interpreting anaphoric pronouns.
  • Rapold, C. J. (2010). Beneficiary and other roles of the dative in Tashelhiyt. In F. Zúñiga, & S. Kittilä (Eds.), Benefactives and malefactives: Typological perspectives and case studies (pp. 351-376). Amsterdam: Benjamins.

    Abstract

    This paper explores the semantics of the dative in Tashelhiyt, a Berber language from Morocco. After a brief morphosyntactic overview of the dative in this language, I identify a wide range of its semantic roles, including possessor, experiencer, distributive and unintending causer. I arrange these roles in a semantic map and propose semantic links between the roles such as metaphorisation and generalisation. In the light of the Tashelhiyt data, the paper also proposes additions to previous semantic maps of the dative (Haspelmath 1999, 2003) and to Kittilä’s 2005 typology of beneficiary coding.
  • Rapold, C. J. (2010). Defining converbs ten years on - A hitchhikers' guide. In S. Völlmin, A. Amha, C. J. Rapold, & S. Zaugg-Coretti (Eds.), Converbs, medial verbs, clause chaining and related issues (pp. 7-30). Köln: Rüdiger Köppe Verlag.
  • Rapold, C. J., & Widlok, T. (2008). Dimensions of variability in Northern Khoekhoe language and culture. Southern African Humanities, 20, 133-161. Retrieved from http://www.sahumanities.org.za/RapoldWidlok_203.aspx.

    Abstract

    This article takes an interdisciplinary route towards explaining the complex history of Hai//om culture and language. We begin this article with a short review of ideas relating to 'origins' and historical reconstructions as they are currently played out among Khoekhoe groups in Namibia, in particular with regard to the Hai//om. We then take a comparative look at parts of the kinship system and the tonology of ≠Âkhoe Hai//om and other variants of Khoekhoe. With regard to the kinship and naming system, we see patterns that show similarities with Nama and Damara on the one hand but also with 'San' groups on the other hand. With regard to tonology, new data from three northern Khoekhoe varieties shows similarities as well as differences with Standard Namibian Khoekhoe and Ju and Tuu varieties. The historical scenarios that might explain these facts suggest different centres of innovations and opposite directions of diffusion. The anthropological and linguistic data demonstrates that only a fine-grained and multi-layered approach that goes far beyond any simplistic dichotomies can do justice to the Hai//om riddle.
  • Razafindrazaka, H., & Brucato, N. (2008). Esclavage et diaspora Africaine. In É. Crubézy, J. Braga, & G. Larrouy (Eds.), Anthropobiologie: Évolution humaine (pp. 326-328). Issy-les-Moulineaux: Elsevier Masson.
  • Razafindrazaka, H., Brucato, N., & Mazières, S. (2008). Les Noirs marrons. In É. Crubézy, J. Braga, & G. Larrouy (Eds.), Anthropobiologie: Évolution humaine (pp. 319-320). Issy-les-Moulineaux: Elsevier Masson.
  • Reesink, G. (2010). The difference a word makes. In K. A. McElhannon, & G. Reesink (Eds.), A mosaic of languages and cultures: Studies celebrating the career of Karl J. Franklin (pp. 434-446). Dallas, TX: SIL International.

    Abstract

    This paper offers some thoughts on the question what effect language has on the understanding and hence behavior of a human being. It reviews some issues of linguistic relativity, known as the “Sapir-Whorf hypothesis,” suggesting that the culture we grow up in is reflected in the language and that our cognition (and our worldview) is shaped or colored by the conventions developed by our ancestors and peers. This raises questions for the degree of translatability, illustrated by the comparison of two poems by a Dutch poet who spent most of his life in the USA. Mutual understanding, I claim, is possible because we have the cognitive apparatus that allows us to enter different emic systems.
  • Reesink, G. (2010). Prefixation of arguments in West Papuan languages. In M. Ewing, & M. Klamer (Eds.), East Nusantara, typological and areal analyses (pp. 71-95). Canberra: Pacific Linguistics.
  • Reesink, G. (2010). The Manambu language of East Sepik, Papua New Guinea [Book review]. Studies in Language, 34(1), 226-233. doi:10.1075/sl.34.1.13ree.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2010). Early use of phonetic information in spoken word recognition: Lexical stress drives eye movements immediately. Quarterly Journal of Experimental Psychology, 63(4), 772-783. doi:10.1080/17470210903104412.

    Abstract

    For optimal word recognition listeners should use all relevant acoustic information as soon as it becomes available. Using printed-word eye-tracking we investigated when during word processing Dutch listeners use suprasegmental lexical stress information to recognize words. Fixations on targets such as 'OCtopus' (capitals indicate stress) were more frequent than fixations on segmentally overlapping but differently stressed competitors ('okTOber') before segmental information could disambiguate the words. Furthermore, prior to segmental disambiguation, initially stressed words were stronger lexical competitors than non-initially stressed words. Listeners recognize words by immediately using all relevant information in the speech signal.
  • Reis, A., Faísca, L., Ingvar, M., & Petersson, K. M. (2006). Color makes a difference: Two-dimensional object naming in literate and illiterate subjects. Brain and Cognition, 60, 49-54. doi:10.1016/j.bandc.2005.09.012.

    Abstract

    Previous work has shown that illiterate subjects are better at naming two-dimensional representations of real objects when presented as colored photos as compared to black and white drawings. This raises the question if color or textural details selectively improve object recognition and naming in illiterate compared to literate subjects. In this study, we investigated whether the surface texture and/or color of objects is used to access stored object knowledge in illiterate subjects. A group of illiterate subjects and a matched literate control group were compared on an immediate object naming task with four conditions: color and black and white (i.e., grey-scaled) photos, as well as color and black and white (i.e., grey-scaled) drawings of common everyday objects. The results show that illiterate subjects perform significantly better when the stimuli are colored and this effect is independent of the photographic detail. In addition, there were significant differences between the literacy groups in the black and white condition for both drawings and photos. These results suggest that color object information contributes to object recognition. This effect was particularly prominent in the illiterate group.
  • Reis, A., Petersson, K. M., & Faísca, L. (2010). Neuroplasticidade: Os efeitos de aprendizagens específicas no cérebro humano. In C. Nunes, & S. N. Jesus (Eds.), Temas actuais em Psicologia (pp. 11-26). Faro: Universidade do Algarve.
  • Rey, A., & Schiller, N. O. (2006). A case of normal word reading but impaired letter naming. Journal of Neurolinguistics, 19(2), 87-95. doi:10.1016/j.jneuroling.2005.09.003.

    Abstract

    A case of a word/letter dissociation is described. The present patient has a quasi-normal word reading performance (both at the level of speed and accuracy) while he has major problems in nonword and letter reading. More specifically, he has strong difficulties in retrieving letter names but preserved abilities in letter identification. This study complements previous cases reporting a similar word/letter dissociation by focusing more specifically on word reading and letter naming latencies. The results provide new constraints for modeling the role of letter knowledge within reading processes and during reading acquisition or rehabilitation.
  • Rey, A., & Schiller, N. O. (2005). Graphemic complexity and multiple print-to-sound associations in visual word recognition. Memory & Cognition, 33(1), 76-85.

    Abstract

    It has recently been reported that words containing a multiletter grapheme are processed slower than are words composed of single-letter graphemes (Rastle & Coltheart, 1998; Rey, Jacobs, Schmidt-Weigand, & Ziegler, 1998). In the present study, using a perceptual identification task, we found in Experiment 1 that this graphemic complexity effect can be observed while controlling for multiple print-to-sound associations, indexed by regularity or consistency. In Experiment 2, we obtained cumulative effects of graphemic complexity and regularity. These effects were replicated in Experiment 3 in a naming task. Overall, these results indicate that graphemic complexity and multiple print-to-sound associations effects are independent and should be accounted for in different ways by models of written word processing.
  • Rietveld, T., & Chen, A. (2006). How to obtain and process perceptual judgements of intonational meaning. In S. Sudhoff, D. Lenertová, R. Meyer, S. Pappert, P. Augurzky, I. Mleinek, N. Richter, & J. Schliesser (Eds.), Methods in empirical prosody research (pp. 283-319). Berlin: Mouton de Gruyter.
  • Ringersma, J., Kastens, K., Tschida, U., & Van Berkum, J. J. A. (2010). A principled approach to online publication listings and scientific resource sharing. The Code4Lib Journal, 2010(9), 2520.

    Abstract

    The Max Planck Institute (MPI) for Psycholinguistics has developed a service to manage and present the scholarly output of their researchers. The PubMan database manages publication metadata and full-texts of publications published by their scholars. All relevant information regarding a researcher’s work is brought together in this database, including supplementary materials and links to the MPI database for primary research data. The PubMan metadata is harvested into the MPI website CMS (Plone). The system developed for the creation of the publication lists allows the researcher to create a selection of the harvested data in a variety of formats.
  • Ringersma, J., Zinn, C., & Koenig, A. (2010). Eureka! User friendly access to the MPI linguistic data archive. SDV - Sprache und Datenverarbeitung/International Journal for Language Data Processing [Special issue on Usability aspects of hypermedia systems], 34(1), 67-79.

    Abstract

    The MPI archive hosts a rich and diverse set of linguistic resources, containing some 300,000 audio, video and text resources, which are described by some 100,000 metadata files. New data is ingested on a daily basis, and there is an increasing need to facilitate easy access to both expert and novice users. In this paper, we describe various tools that help users to view all archived content: the IMDI Browser, providing metadata-based access through structured tree navigation and search; a faceted browser where users select from a few distinctive metadata fields (facets) to find the resource(s) in need; a Google Earth overlay where resources can be located via geographic reference; purpose-built web portals giving pre-fabricated access to a well-defined part of the archive; lexicon-based entry points to parts of the archive where browsing a lexicon gives access to non-linguistic material; and finally, an ontology-based approach where lexical spaces are complemented with conceptual ones to give a more structured extra-linguistic view of the languages and cultures it helps document.
  • Ringersma, J., & Kemps-Snijders, M. (2010). Reaction to the LEXUS review in the LD&C, Vol.3, No 2. Language Documentation & Conservation, 4(2), 75-77. Retrieved from http://hdl.handle.net/10125/4469.

    Abstract

    This technology review gives an overview of LEXUS, the MPI online lexicon tool and its new functionalities. It is a reaction to a review of Kristina Kotcheva in Language Documentation and Conservation 3(2).
  • Roberts, L., Gullberg, M., & Indefrey, P. (2008). Online pronoun resolution in L2 discourse: L1 influence and general learner effects. Studies in Second Language Acquisition, 30(3), 333-357. doi:10.1017/S0272263108080480.

    Abstract

    This study investigates whether advanced second language (L2) learners of a nonnull subject language (Dutch) are influenced by their null subject first language (L1) (Turkish) in their offline and online resolution of subject pronouns in L2 discourse. To tease apart potential L1 effects from possible general L2 processing effects, we also tested a group of German L2 learners of Dutch who were predicted to perform like the native Dutch speakers. The two L2 groups differed in their offline interpretations of subject pronouns. The Turkish L2 learners exhibited an L1 influence, because approximately half the time they interpreted Dutch subject pronouns as they would overt pronouns in Turkish, whereas the German L2 learners performed like the Dutch controls, interpreting pronouns as coreferential with the current discourse topic. This L1 effect was not in evidence in eye-tracking data, however. Instead, the L2 learners patterned together, showing an online processing disadvantage when two potential antecedents for the pronoun were grammatically available in the discourse. This processing disadvantage was in evidence irrespective of the properties of the learners' L1 or their final interpretation of the pronoun. Therefore, the results of this study indicate both an effect of the L1 on the L2 in offline resolution and a general L2 processing effect in online subject pronoun resolution.
  • Roberts, L. (2008). Processing temporal constraints and some implications for the investigation of second language sentence processing and acquisition. Commentary on Baggio. In P. Indefrey, & M. Gullberg (Eds.), Time to speak: Cognitive and neural prerequisites for time in language (pp. 57-61). Oxford: Blackwell.
  • Roberts, L. (2008). Processing temporal constraints and some implications for the investigation of second language sentence processing and acquisition. Commentary on Baggio. Language Learning, 58(suppl. 1), 57-61. doi:10.1111/j.1467-9922.2008.00461.x.
  • Roberts, L. (2010). Parsing the L2 input, an overview: Investigating L2 learners’ processing of syntactic ambiguities and dependencies in real-time comprehension. In G. D. Véronique (Ed.), Language, Interaction and Acquisition [Special issue] (pp. 189-205). Amsterdam: Benjamins.

    Abstract

    The acquisition of second language (L2) syntax has been central to the study of L2 acquisition, but recently there has been an interest in how learners apply their L2 syntactic knowledge to the input in real-time comprehension. Investigating L2 learners’ moment-by-moment syntactic analysis during listening or reading of a sentence as it unfolds — their parsing of the input — is important, because language learning involves both the acquisition of knowledge and the ability to use it in real time. Using methods employed in monolingual processing research, investigations often focus on the processing of temporary syntactic ambiguities and structural dependencies. Investigating ambiguities involves examining parsing decisions at points in a sentence where there is a syntactic choice and this can offer insights into the nature of the parsing mechanism, and in particular, its processing preferences. Studying the establishment of syntactic dependencies at the critical point in the input allows for an investigation of how and when different kinds of information (e.g., syntactic, semantic, pragmatic) are put to use in real-time interpretation. Within an L2 context, further questions are of interest and familiar from traditional L2 acquisition research. Specifically, how native-like are the parsing procedures that L2 learners apply when processing the L2 input? What is the role of the learner’s first language (L1)? And, what are the effects of individual factors such as age, proficiency/dominance and working memory on L2 parsing? In the current paper I will provide an overview of the findings of some experimental research designed to investigate these questions.
  • Robinson, S. (2006). The phoneme inventory of the Aita dialect of Rotokas. Oceanic Linguistics, 45(1), 206-209.

    Abstract

    Rotokas is famous for possessing one of the world’s smallest phoneme inventories. According to one source, the Central dialect of Rotokas possesses only 11 segmental phonemes (five vowels and six consonants) and lacks nasals while the Aita dialect possesses a similar-sized inventory in which nasals replace voiced stops. However, recent fieldwork reveals that the Aita dialect has, in fact, both voiced and nasal stops, making for an inventory of 14 segmental phonemes (five vowels and nine consonants). The correspondences between Central and Aita Rotokas suggest that the former is innovative with respect to its consonant inventory and the latter conservative, and that the small inventory of Central Rotokas arose by collapsing the distinction between voiced and nasal stops.
  • Roby, A. C., & Kidd, E. (2008). The referential communication skills of children with imaginary companions. Developmental Science, 11(4), 531-540. doi:10.1111/j.1467-7687.2008.00699.x.

    Abstract

    The present study investigated the referential communication skills of children with imaginary companions (ICs). Twenty-two children with ICs aged between 4 and 6 years were compared to 22 children without ICs (NICs). The children were matched for age, gender, birth order, number of siblings, and parental education. All children completed the Test of Referential Communication (Camaioni, Ercolani & Lloyd, 1995). The results showed that the children with ICs performed better than the children without ICs on the speaker component of the task. In particular, the IC children were better able to identify a specific referent to their interlocutor than were the NIC children. Furthermore, the IC children described less redundant features of the target picture than did the NIC children. The children did not differ in the listening comprehension component of the task. Overall, the results suggest that the IC children had a better understanding of their interlocutor’s information requirements in conversation. The role of pretend play in the development of communicative competence is discussed in light of these results.
  • Roelofs, A. (2005). Spoken word planning, comprehending, and self-monitoring: Evaluation of WEAVER++. In R. Hartsuiker, R. Bastiaanse, A. Postma, & F. Wijnen (Eds.), Phonological encoding and monitoring in normal and pathological speech (pp. 42-63). Hove: Psychology press.
  • Roelofs, A. (2006). The influence of spelling on phonological encoding in word reading, object naming, and word generation. Psychonomic Bulletin & Review, 13(1), 33-37.

    Abstract

    Does the spelling of a word mandatorily constrain spoken word production, or does it do so only when spelling is relevant for the production task at hand? Damian and Bowers (2003) reported spelling effects in spoken word production in English using a prompt–response word generation task. Preparation of the response words was disrupted when the responses shared initial phonemes that differed in spelling, suggesting that spelling constrains speech production mandatorily. The present experiments, conducted in Dutch, tested for spelling effects using word production tasks in which spelling was clearly relevant (oral reading in Experiment 1) or irrelevant (object naming and word generation in Experiments 2 and 3, respectively). Response preparation was disrupted by spelling inconsistency only with the word reading, suggesting that the spelling of a word constrains spoken word production in Dutch only when it is relevant for the word production task at hand.
  • Roelofs, A. (2005). The visual-auditory color-word Stroop asymmetry and its time course. Memory & Cognition, 33(8), 1325-1336.

    Abstract

    Four experiments examined crossmodal versions of the Stroop task in order (1) to look for Stroop asymmetries in color naming, spoken-word naming, and written-word naming and to evaluate the time course of these asymmetries, and (2) to compare these findings to current models of the Stroop effect. Participants named color patches while ignoring spoken color words presented with an onset varying from 300 msec before to 300 msec after the onset of the color (Experiment 1), or they named the spoken words and ignored the colors (Experiment 2). A secondary visual detection task assured that the participants looked at the colors in both tasks. Spoken color words yielded Stroop effects in color naming, but colors did not yield an effect in spoken-word naming at any stimulus onset asynchrony. This asymmetry in effects was obtained with equivalent color- and spoken-word-naming latencies. Written color words yielded a Stroop effect in naming spoken words (Experiment 3), and spoken color words yielded an effect in naming written words (Experiment 4). These results were interpreted as most consistent with an architectural account of the color-word Stroop asymmetry, in contrast with discriminability and pathway strength accounts.
  • Roelofs, A., Meyer, A. S., & Levelt, W. J. M. (1998). A case for the lemma/lexeme distinction in models of speaking: Comment on Caramazza and Miozzo (1997). Cognition, 69(2), 219-230. doi:10.1016/S0010-0277(98)00056-0.

    Abstract

    In a recent series of papers, Caramazza and Miozzo [Caramazza, A., 1997. How many levels of processing are there in lexical access? Cognitive Neuropsychology 14, 177-208; Caramazza, A., Miozzo, M., 1997. The relation between syntactic and phonological knowledge in lexical access: evidence from the 'tip-of-the-tongue' phenomenon. Cognition 64, 309-343; Miozzo, M., Caramazza, A., 1997. On knowing the auxiliary of a verb that cannot be named: evidence for the independence of grammatical and phonological aspects of lexical knowledge. Journal of Cognitive Neuropsychology 9, 160-166] argued against the lemma/lexeme distinction made in many models of lexical access in speaking, including our network model [Roelofs, A., 1992. A spreading-activation theory of lemma retrieval in speaking. Cognition 42, 107-142; Levelt, W.J.M., Roelofs, A., Meyer, A.S., 1998. A theory of lexical access in speech production. Behavioral and Brain Sciences, (in press)]. Their case was based on the observations that grammatical class deficits of brain-damaged patients and semantic errors may be restricted to either spoken or written forms and that the grammatical gender of a word and information about its form can be independently available in tip-of-the-tongue states (TOTs). In this paper, we argue that though our model is about speaking, not taking position on writing, extensions to writing are possible that are compatible with the evidence from aphasia and speech errors. Furthermore, our model does not predict a dependency between gender and form retrieval in TOTs. Finally, we argue that Caramazza and Miozzo have not accounted for important parts of the evidence motivating the lemma/lexeme distinction, such as word frequency effects in homophone production, the strict ordering of gender and phoneme access in LRP data, and the chronometric and speech error evidence for the production of complex morphology.
  • Roelofs, A. (2006). Context effects of pictures and words in naming objects, reading words, and generating simple phrases. Quarterly Journal of Experimental Psychology, 59(10), 1764-1784. doi:10.1080/17470210500416052.

    Abstract

    In five language production experiments it was examined which aspects of words are activated in memory by context pictures and words. Context pictures yielded Stroop-like and semantic effects on response times when participants generated gender-marked noun phrases in response to written words (Experiment 1A). However, pictures yielded no such effects when participants simply read aloud the noun phrases (Experiment 2). Moreover, pictures yielded a gender congruency effect in generating gender-marked noun phrases in response to the written words (Experiments 3A and 3B). These findings suggest that context pictures activate lemmas (i.e., representations of syntactic properties), which leads to effects only when lemmas are needed to generate a response (i.e., in Experiments 1A, 3A, and 3B, but not in Experiment 2). Context words yielded Stroop-like and semantic effects in picture naming (Experiment 1B). Moreover, words yielded Stroop-like but no semantic effects in reading nouns (Experiment 4) and in generating noun phrases (Experiment 5). These findings suggest that context words activate the lemmas and forms of their names, which leads to semantic effects when lemmas are required for responding (Experiment 1B) but not when only the forms are required (Experiment 4). WEAVER++ simulations of the results are presented.
  • Roelofs, A., Van Turennout, M., & Coles, M. G. H. (2006). Anterior cingulate cortex activity can be independent of response conflict in stroop-like tasks. Proceedings of the National Academy of Sciences of the United States of America, 103(37), 13884-13889. doi:10.1073/pnas.0606265103.

    Abstract

    Cognitive control includes the ability to formulate goals and plans of action and to follow these while facing distraction. Previous neuroimaging studies have shown that the presence of conflicting response alternatives in Stroop-like tasks increases activity in dorsal anterior cingulate cortex (ACC), suggesting that the ACC is involved in cognitive control. However, the exact nature of ACC function is still under debate. The prevailing conflict detection hypothesis maintains that the ACC is involved in performance monitoring. According to this view, ACC activity reflects the detection of response conflict and acts as a signal that engages regulative processes subserved by lateral prefrontal brain regions. Here, we provide evidence from functional MRI that challenges this view and favors an alternative view, according to which the ACC has a role in regulation itself. Using an arrow–word Stroop task, subjects responded to incongruent, congruent, and neutral stimuli. A critical prediction made by the conflict detection hypothesis is that ACC activity should be increased only when conflicting response alternatives are present. Our data show that ACC responses are larger for neutral than for congruent stimuli, in the absence of response conflict. This result demonstrates the engagement of the ACC in regulation itself. A computational model of Stroop-like performance instantiating a version of the regulative hypothesis is shown to account for our findings.
  • Roelofs, A. (2005). From Popper to Lakatos: A case for cumulative computational modeling. In A. Cutler (Ed.), Twenty-first century psycholinguistics: Four cornerstones (pp. 313-330). Mahwah,NJ: Erlbaum.
  • Roelofs, A. (2006). Functional architecture of naming dice, digits, and number words. Language and Cognitive Processes, 21(1/2/3), 78-111. doi:10.1080/01690960400001846.

    Abstract

    Five chronometric experiments examined the functional architecture of naming dice, digits, and number words. Speakers named pictured dice, Arabic digits, or written number words, while simultaneously trying to ignore congruent or incongruent dice, digit, or number word distractors presented at various stimulus onset asynchronies (SOAs). Stroop-like interference and facilitation effects were obtained from digits and words on dice naming latencies, but not from dice on digit and word naming latencies. In contrast, words affected digit naming latencies and digits affected word naming latencies to the same extent. The peak of the interference was always around SOA = 0 ms, whereas facilitation was constant across distractor-first SOAs. These results suggest that digit naming is achieved like word naming rather than dice naming. WEAVER++simulations of the results are reported.
  • Roelofs, A. (2006). Modeling the control of phonological encoding in bilingual speakers. Bilingualism: Language and Cognition, 9(2), 167-176. doi:10.1017/S1366728906002513.

    Abstract

    Phonological encoding is the process by which speakers retrieve phonemic segments for morphemes from memory and use the segments to assemble phonological representations of words to be spoken. When conversing in one language, bilingual speakers have to resist the temptation of encoding word forms using the phonological rules and representations of the other language. We argue that the activation of phonological representations is not restricted to the target language and that the phonological representations of languages are not separate. We advance a view of bilingual control in which condition-action rules determine what is done with the activated phonological information depending on the target language. This view is computationally implemented in the WEAVER++ model. We present WEAVER++ simulations of the cognate facilitation effect (Costa, Caramazza and Sebastián-Gallés, 2000) and the between-language phonological facilitation effect of spoken distractor words in object naming (Hermans, Bongaerts, de Bot and Schreuder, 1998).
  • Roelofs, A., & Meyer, A. S. (1998). Metrical structure in planning the production of spoken words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 922-939. doi:10.1037/0278-7393.24.4.922.

    Abstract

    According to most models of speech production, the planning of spoken words involves the independent retrieval of segments and metrical frames followed by segment-to-frame association. In some models, the metrical frame includes a specification of the number and ordering of consonants and vowels, but in the word-form encoding by activation and verification (WEAVER) model (A. Roelofs, 1997), the frame specifies only the stress pattern across syllables. In 6 implicit priming experiments, on each trial, participants produced 1 word out of a small set as quickly as possible. In homogeneous sets, the response words shared word-initial segments, whereas in heterogeneous sets, they did not. Priming effects from shared segments depended on all response words having the same number of syllables and stress pattern, but not on their having the same number of consonants and vowels. No priming occurred when the response words had only the same metrical frame but shared no segments. Computer simulations demonstrated that WEAVER accounts for the findings.
  • Roelofs, A. (1998). Rightward incrementality in encoding simple phrasal forms in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 904-921. doi:10.1037/0278-7393.24.4.904.

    Abstract

    This article reports 7 experiments investigating whether utterances are planned in a parallel or rightward incremental fashion during language production. The experiments examined the role of linear order, length, frequency, and repetition in producing Dutch verb–particle combinations. On each trial, participants produced 1 utterance out of a set of 3 as quickly as possible. The responses shared part of their form or not. For particle-initial infinitives, facilitation was obtained when the responses shared the particle but not when they shared the verb. For verb-initial imperatives, however, facilitation was obtained for the verbs but not for the particles. The facilitation increased with length, decreased with frequency, and was independent of repetition. A simple rightward incremental model accounts quantitatively for the results.
  • Rohlfing, K., Loehr, D., Duncan, S., Brown, A., Franklin, A., Kimbara, I., Milde, J.-T., Parrill, F., Rose, T., Schmidt, T., Sloetjes, H., Thies, A., & Wellinghof, S. (2006). Comparison of multimodal annotation tools - workshop report. Gesprächsforschung - Online-Zeitschrift zur Verbalen Interaktion, 7, 99-123.
  • Roll, P., Vernes, S. C., Bruneau, N., Cillario, J., Ponsole-Lenfant, M., Massacrier, A., Rudolf, G., Khalife, M., Hirsch, E., Fisher, S. E., & Szepetowski, P. (2010). Molecular networks implicated in speech-related disorders: FOXP2 regulates the SRPX2/uPAR complex. Human Molecular Genetics, 19, 4848-4860. doi:10.1093/hmg/ddq415.

    Abstract

    It is a challenge to identify the molecular networks contributing to the neural basis of human speech. Mutations in transcription factor FOXP2 cause difficulties mastering fluent speech (developmental verbal dyspraxia, DVD), while mutations of sushi-repeat protein SRPX2 lead to epilepsy of the rolandic (sylvian) speech areas, with DVD or with bilateral perisylvian polymicrogyria. Pathophysiological mechanisms driven by SRPX2 involve modified interaction with the plasminogen activator receptor (uPAR). Independent chromatin-immunoprecipitation microarray screening has identified the uPAR gene promoter as a potential target site bound by FOXP2. Here, we directly tested for the existence of a transcriptional regulatory network between human FOXP2 and the SRPX2/uPAR complex. In silico searches followed by gel retardation assays identified specific efficient FOXP2 binding sites in each of the promoter regions of SRPX2 and uPAR. In FOXP2-transfected cells, significant decreases were observed in the amounts of both SRPX2 (43.6%) and uPAR (38.6%) native transcripts. Luciferase reporter assays demonstrated that FOXP2 expression yielded marked inhibition of SRPX2 (80.2%) and uPAR (77.5%) promoter activity. A mutant FOXP2 that causes DVD (p.R553H) failed to bind to SRPX2 and uPAR target sites, and showed impaired down-regulation of SRPX2 and uPAR promoter activity. In a patient with polymicrogyria of the left rolandic operculum, a novel FOXP2 mutation (p.M406T) was found in the leucine-zipper (dimerization) domain. p.M406T partially impaired FOXP2 regulation of SRPX2 promoter activity, while that of the uPAR promoter remained unchanged. Together with recently described FOXP2-CNTNAP2 and SRPX2/uPAR links, the FOXP2-SRPX2/uPAR network provides exciting insights into molecular pathways underlying speech-related disorders.

    Additional information

    Roll_et_al_2010_Suppl_Material.doc
  • Rossano, F. (2010). Questioning and responding in Italian. Journal of Pragmatics, 42, 2756-2771. doi:10.1016/j.pragma.2010.04.010.

    Abstract

    Questions are design problems for both the questioner and the addressee. They must be produced as recognizable objects and must be comprehended by taking into account the context in which they occur and the local situated interests of the participants. This paper investigates how people do ‘questioning’ and ‘responding’ in Italian ordinary conversations. I focus on the features of both questions and responses. I first discuss formal linguistic features that are peculiar to questions in terms of intonation contours (e.g. final rise), morphology (e.g. tags and question words) and syntax (e.g. inversion). I then show additional features that characterize their actual implementation in conversation such as their minimality (often the subject or the verb is only implied) and the usual occurrence of speaker gaze towards the recipient during questions. I then look at which social actions (e.g. requests for information, requests for confirmation) the different question types implement and which responses are regularly produced in return. The data shows that previous descriptions of “interrogative markings” are neither adequate nor sufficient to comprehend the actual use of questions in natural conversation.
  • De Rover, M., Petersson, K. M., Van der Werf, S. P., Cools, A. R., Berger, H. J., & Fernández, G. (2008). Neural correlates of strategic memory retrieval: Differentiating between spatial-associative and temporal-associative strategies. Human Brain Mapping, 29, 1068-1079. doi:10.1002/hbm.20445.

    Abstract

    Remembering complex, multidimensional information typically requires strategic memory retrieval, during which information is structured, for instance by spatial- or temporal associations. Although brain regions involved in strategic memory retrieval in general have been identified, differences in retrieval operations related to distinct retrieval strategies are not well-understood. Thus, our aim was to identify brain regions whose activity is differentially involved in spatial-associative and temporal-associative retrieval. First, we showed that our behavioral paradigm probing memory for a set of object-location associations promoted the use of a spatial-associative structure following an encoding condition that provided multiple associations to neighboring objects (spatial-associative condition) and the use of a temporal-associative structure following another study condition that provided predominantly temporal associations between sequentially presented items (temporal-associative condition). Next, we used an adapted version of this paradigm for functional MRI, where we contrasted brain activity related to the recall of object-location associations that were either encoded in the spatial- or the temporal-associative condition. In addition to brain regions generally involved in recall, we found that activity in higher-order visual regions, including the fusiform gyrus, the lingual gyrus, and the cuneus, was relatively enhanced when subjects used a spatial-associative structure for retrieval. In contrast, activity in the globus pallidus and the thalamus was relatively enhanced when subjects used a temporal-associative structure for retrieval. In conclusion, we provide evidence for differential involvement of these brain regions related to different types of strategic memory retrieval and the neural structures described play a role in either spatial-associative or temporal-associative memory retrieval.
  • Rowland, C. F., & Fletcher, S. L. (2006). The effect of sampling on estimates of lexical specificity and error rates. Journal of Child Language, 33(4), 859-877. doi:10.1017/S0305000906007537.

    Abstract

    Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rates. Comparisons were made between five different sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were equally likely to under- as overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as that from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system and that analyses on small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2005). The incidence of error in young children's wh-questions. Journal of Speech, Language, and Hearing Research, 48, 384-404. doi:10.1044/1092-4388(2005/027).

    Abstract

    Many current generativist theorists suggest that young children possess the grammatical principles of inversion required for question formation but make errors because they find it difficult to learn language-specific rules about how inversion applies. The present study analyzed longitudinal spontaneous sampled data from twelve 2–3-year-old English-speaking children and the intensive diary data of 1 child (age 2;7 [years;months] to 2;11) in order to test some of these theories. The results indicated significantly different rates of error use across different auxiliaries. In particular, error rates differed across 2 forms of the same auxiliary subtype (e.g., auxiliary is vs. are), and auxiliary DO and modal auxiliaries attracted significantly higher rates of errors of inversion than other auxiliaries. The authors concluded that current generativist theories might have problems explaining the patterning of errors seen in children's questions, which might be more consistent with a constructivist account of development. However, constructivists need to devise more precise predictions in order to fully explain the acquisition of questions.
  • Ruano, D., Abecasis, G. R., Glaser, B., Lips, E. S., Cornelisse, L. N., de Jong, A. P. H., Evans, D. M., Davey Smith, G., Timpson, N. J., Smit, A. B., Heutink, P., Verhage, M., & Posthuma, D. (2010). Functional gene group analysis reveals a role of synaptic heterotrimeric G proteins in cognitive ability. American Journal of Human Genetics, 86(2), 113-125. doi:10.1016/j.ajhg.2009.12.006.

    Abstract

    Although cognitive ability is a highly heritable complex trait, only a few genes have been identified, explaining relatively low proportions of the observed trait variation. This implies that hundreds of genes of small effect may be of importance for cognitive ability. We applied an innovative method in which we tested for the effect of groups of genes defined according to cellular function (functional gene group analysis). Using an initial sample of 627 subjects, this functional gene group analysis detected that synaptic heterotrimeric guanine nucleotide binding proteins (G proteins) play an important role in cognitive ability (P(EMP) = 1.9 x 10(-4)). The association with heterotrimeric G proteins was validated in an independent population sample of 1507 subjects. Heterotrimeric G proteins are central relay factors between the activation of plasma membrane receptors by extracellular ligands and the cellular responses that these induce, and they can be considered a point of convergence, or a "signaling bottleneck." Although alterations in synaptic signaling processes may not be the exclusive explanation for the association of heterotrimeric G proteins with cognitive ability, such alterations may prominently affect the properties of neuronal networks in the brain in such a manner that impaired cognitive ability and lower intelligence are observed. The reported association of synaptic heterotrimeric G proteins with cognitive ability clearly points to a new direction in the study of the genetic basis of cognitive ability.
  • Rubio-Fernández, P. (2008). Concept narrowing: The role of context-independent information. Journal of Semantics, 25(4), 381-409. doi:10.1093/jos/ffn004.

    Abstract

    The present study aims to investigate the extent to which the process of lexical interpretation is context dependent. It has been uncontroversially agreed in psycholinguistics that interpretation is always affected by sentential context. The major debate in lexical processing research has revolved around the question of whether initial semantic activation is context sensitive or rather exhaustive, that is, whether the effect of context occurs before or only after the information associated to a concept has been accessed from the mental lexicon. However, within post-lexical access processes, the question of whether the selection of a word's meaning components is guided exclusively by contextual relevance, or whether certain meaning components might be selected context independently, has not been such an important focus of research. I have investigated this question in the two experiments reported in this paper and, moreover, have analysed the role that context-independent information in concepts might play in word interpretation. This analysis differs from previous studies on lexical processing in that it places experimental work in the context of a theoretical model of lexical pragmatics.
  • Rueschemeyer, S.-A., van Rooij, D., Lindemann, O., Willems, R. M., & Bekkering, H. (2010). The function of words: Distinct neural correlates for words denoting differently manipulable objects. Journal of Cognitive Neuroscience, 22, 1844-1851. doi:10.1162/jocn.2009.21310.

    Abstract

    Recent research indicates that language processing relies on brain areas dedicated to perception and action. For example, processing words denoting manipulable objects has been shown to activate a fronto-parietal network involved in actual tool use. This is suggested to reflect the knowledge the subject has about how objects are moved and used. However, information about how to use an object may be much more central to the conceptual representation of an object than information about how to move an object. Therefore, there may be much more fine-grained distinctions between objects on the neural level, especially related to the usability of manipulable objects. In the current study, we investigated whether a distinction can be made between words denoting (1) objects that can be picked up to move (e.g., volumetrically manipulable objects: bookend, clock) and (2) objects that must be picked up to use (e.g., functionally manipulable objects: cup, pen). The results show that functionally manipulable words elicit greater levels of activation in the fronto-parietal sensorimotor areas than volumetrically manipulable words. This suggests that indeed a distinction can be made between different types of manipulable objects. Specifically, how an object is used functionally rather than whether an object can be displaced with the hand is reflected in semantic representations in the brain.
  • De Ruiter, J. P., Mitterer, H., & Enfield, N. J. (2006). Projecting the end of a speaker's turn: A cognitive cornerstone of conversation. Language, 82(3), 515-535.

    Abstract

    A key mechanism in the organization of turns at talk in conversation is the ability to anticipate or PROJECT the moment of completion of a current speaker’s turn. Some authors suggest that this is achieved via lexicosyntactic cues, while others argue that projection is based on intonational contours. We tested these hypotheses in an on-line experiment, manipulating the presence of symbolic (lexicosyntactic) content and intonational contour of utterances recorded in natural conversations. When hearing the original recordings, subjects can anticipate turn endings with the same degree of accuracy attested in real conversation. With intonational contour entirely removed (leaving intact words and syntax, with a completely flat pitch), there is no change in subjects’ accuracy of end-of-turn projection. But in the opposite case (with original intonational contour intact, but with no recognizable words), subjects’ performance deteriorates significantly. These results establish that the symbolic (i.e. lexicosyntactic) content of an utterance is necessary (and possibly sufficient) for projecting the moment of its completion, and thus for regulating conversational turn-taking. By contrast, and perhaps surprisingly, intonational contour is neither necessary nor sufficient for end-of-turn projection.
  • De Ruiter, J. P., & Levinson, S. C. (2008). A biological infrastructure for communication underlies the cultural evolution of languages [Commentary on Christiansen & Chater: Language as shaped by the brain]. Behavioral and Brain Sciences, 31(5), 518-518. doi:10.1017/S0140525X08005086.

    Abstract

    Universal Grammar (UG) is indeed evolutionarily implausible. But if languages are just “adapted” to a large primate brain, it is hard to see why other primates do not have complex languages. The answer is that humans have evolved a specialized and uniquely human cognitive architecture, whose main function is to compute mappings between arbitrary signals and communicative intentions. This underlies the development of language in the human species.
  • De Ruiter, J. P. (2006). Can gesticulation help aphasic people speak, or rather, communicate? Advances in Speech-Language Pathology, 8(2), 124-127. doi:10.1080/14417040600667285.

    Abstract

    As Rose (2006) discusses in the lead article, two camps can be identified in the field of gesture research: those who believe that gesticulation enhances communication by providing extra information to the listener, and on the other hand those who believe that gesticulation is not communicative, but rather that it facilitates speaker-internal word finding processes. I review a number of key studies relevant for this controversy, and conclude that the available empirical evidence is supporting the notion that gesture is a communicative device which can compensate for problems in speech by providing information in gesture. Following that, I discuss the finding by Rose and Douglas (2001) that making gestures does facilitate word production in some patients with aphasia. I argue that the gestures produced in the experiment by Rose and Douglas are not guaranteed to be of the same kind as the gestures that are produced spontaneously under naturalistic, communicative conditions, which makes it difficult to generalise from that particular study to general gesture behavior. As a final point, I encourage researchers in the area of aphasia to put more emphasis on communication in naturalistic contexts (e.g., conversation) in testing the capabilities of people with aphasia.
  • De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Hagoort, P., Levinson, S. C., & Toni, I. (2010). Exploring the cognitive infrastructure of communication. Interaction studies, 11, 51-77. doi:10.1075/is.11.1.05rui.

    Abstract

    Human communication is often thought about in terms of transmitted messages in a conventional code like a language. But communication requires a specialized interactive intelligence. Senders have to be able to perform recipient design, while receivers need to be able to do intention recognition, knowing that recipient design has taken place. To study this interactive intelligence in the lab, we developed a new task that taps directly into the underlying abilities to communicate in the absence of a conventional code. We show that subjects are remarkably successful communicators under these conditions, especially when senders get feedback from receivers. Signaling is accomplished by the manner in which an instrumental action is performed, such that instrumentally dysfunctional components of an action are used to convey communicative intentions. The findings have important implications for the nature of the human communicative infrastructure, and the task opens up a line of experimentation on human communication.
  • Salomo, D., Lieven, E., & Tomasello, M. (2010). Young children's sensitivity to new and given information when answering predicate-focus questions. Applied Psycholinguistics, 31, 101-115. doi:10.1017/S014271640999018X.

    Abstract

    In two studies we investigated 2-year-old children's answers to predicate-focus questions depending on the preceding context. Children were presented with a successive series of short video clips showing transitive actions (e.g., frog washing duck) in which either the action (action-new) or the patient (patient-new) was the changing, and therefore new, element. During the last scene the experimenter asked the question (e.g., “What's the frog doing now?”). We found that children expressed the action and the patient in the patient-new condition but expressed only the action in the action-new condition. These results show that children are sensitive to both the predicate-focus question and newness in context. A further finding was that children expressed new patients in their answers more often when there was a verbal context prior to the questions than when there was not.
  • San Roque, L., & Norcliffe, E. (2010). Knowledge asymmetries in grammar and interaction. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 37-44). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.529153.
  • Sauter, D. (2010). Can introspection teach us anything about the perception of sounds? [Book review]. Perception, 39, 1300-1302. doi:10.1068/p3909rvw.

    Abstract

    Reviews the book, Sounds and Perception: New Philosophical Essays edited by Matthew Nudds and Casey O'Callaghan (2010). This collection of thought-provoking philosophical essays contains chapters on particular aspects of sound perception, as well as a series of essays focusing on the issue of sound location. The chapters on specific topics include several perspectives on how we hear speech, one of the most well-studied aspects of auditory perception in empirical research. Most of the book consists of a series of essays approaching the experience of hearing sounds by focusing on where sounds are in space. An impressive range of opinions on this issue is presented, likely thanks to the fact that the book's editors represent dramatically different viewpoints. The wave based view argues that sounds are located near the perceiver, although the sounds also provide information about objects around the listener, including the source of the sound. In contrast, the source based view holds that sounds are experienced as near or at their sources. The editors acknowledge that additional methods should be used in conjunction with introspection, but they argue that theories of perceptual experience should nevertheless respect phenomenology. With such a range of views derived largely from the same introspective methodology, it remains unresolved which phenomenological account is to be respected.
  • Sauter, D., Eisner, F., Ekman, P., & Scott, S. K. (2010). Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations. Proceedings of the National Academy of Sciences, 107(6), 2408-2412. doi:10.1073/pnas.0908239106.

    Abstract

    Emotional signals are crucial for sharing important information with conspecifics, for example, to warn humans of danger. Humans use a range of different cues to communicate to others how they feel, including facial, vocal, and gestural signals. We examined the recognition of nonverbal emotional vocalizations, such as screams and laughs, across two dramatically different cultural groups. Western participants were compared to individuals from remote, culturally isolated Namibian villages. Vocalizations communicating the so-called “basic emotions” (anger, disgust, fear, joy, sadness, and surprise) were bidirectionally recognized. In contrast, a set of additional emotions was only recognized within, but not across, cultural boundaries. Our findings indicate that a number of primarily negative emotions have vocalizations that can be recognized across cultures, while most positive emotions are communicated with culture-specific signals.
  • Sauter, D. (2010). Are positive vocalizations perceived as communicating happiness across cultural boundaries? [Article addendum]. Communicative & Integrative Biology, 3(5), 440-442. doi:10.4161/cib.3.5.12209.

    Abstract

    Laughter communicates a feeling of enjoyment across cultures, while non-verbal vocalizations of several other positive emotions, such as achievement or sensual pleasure, are recognizable only within, but not across, cultural boundaries. Are these positive vocalizations nevertheless interpreted cross-culturally as signaling positive affect? In a match-to-sample task, positive emotional vocal stimuli were paired with positive and negative facial expressions, by English participants and members of the Himba, a semi-nomadic, culturally isolated Namibian group. The results showed that laughter was associated with a smiling facial expression across both groups, consistent with previous work showing that human laughter is a positive, social signal with deep evolutionary roots. However, non-verbal vocalizations of achievement, sensual pleasure, and relief were not cross-culturally associated with smiling facial expressions, perhaps indicating that these types of vocalizations are not cross-culturally interpreted as communicating a positive emotional state, or alternatively that these emotions are associated with positive facial expressions other than smiling. These results are discussed in the context of positive emotional communication in vocal and facial signals. Research on the perception of non-verbal vocalizations of emotions across cultures demonstrates that some affective signals, including laughter, are associated with particular facial configurations and emotional states, supporting theories of emotions as a set of evolved functions that are shared by all humans regardless of cultural boundaries.
  • Sauter, D. (2010). More than happy: The need for disentangling positive emotions. Current Directions in Psychological Science, 19, 36-40. doi:10.1177/0963721409359290.

    Abstract

    Despite great advances in scientific understanding of emotional processes in the last decades, research into the communication of emotions has been constrained by a strong bias toward negative affective states. Typically, studies distinguish between different negative emotions, such as disgust, sadness, anger, and fear. In contrast, most research uses only one category of positive affect, “happiness,” which is assumed to encompass all positive emotional states. This article reviews recent research showing that a number of positive affective states have discrete, recognizable signals. An increased focus on cues other than facial expressions is necessary to understand these positive states and how they are communicated; vocalizations, touch, and postural information offer promising avenues for investigating signals of positive affect. A full scientific understanding of the functions, signals, and mechanisms of emotions requires abandoning the unitary concept of happiness and instead disentangling positive emotions.
  • Sauter, D., Eisner, F., Calder, A. J., & Scott, S. K. (2010). Perceptual cues in nonverbal vocal expressions of emotion. Quarterly Journal of Experimental Psychology, 63(11), 2251-2272. doi:10.1080/17470211003721642.

    Abstract

    Work on facial expressions of emotions (Calder, Burton, Miller, Young, & Akamatsu, 2001) and emotionally inflected speech (Banse & Scherer, 1996) has successfully delineated some of the physical properties that underlie emotion recognition. To identify the acoustic cues used in the perception of nonverbal emotional expressions like laughter and screams, an investigation was conducted into vocal expressions of emotion, using nonverbal vocal analogues of the “basic” emotions (anger, fear, disgust, sadness, and surprise; Ekman & Friesen, 1971; Scott et al., 1997), and of positive affective states (Ekman, 1992, 2003; Sauter & Scott, 2007). First, the emotional stimuli were categorized and rated to establish that listeners could identify and rate the sounds reliably and to provide confusion matrices. A principal components analysis of the rating data yielded two underlying dimensions, correlating with the perceived valence and arousal of the sounds. Second, acoustic properties of the amplitude, pitch, and spectral profile of the stimuli were measured. A discriminant analysis procedure established that these acoustic measures provided sufficient discrimination between expressions of emotional categories to permit accurate statistical classification. Multiple linear regressions with participants' subjective ratings of the acoustic stimuli showed that all classes of emotional ratings could be predicted by some combination of acoustic measures and that most emotion ratings were predicted by different constellations of acoustic features. The results demonstrate that, similarly to affective signals in facial expressions and emotionally inflected speech, the perceived emotional character of affective vocalizations can be predicted on the basis of their physical features.
  • Sauter, D., & Eimer, M. (2010). Rapid detection of emotion from human vocalizations. Journal of Cognitive Neuroscience, 22, 474-481. doi:10.1162/jocn.2009.21215.

    Abstract

    The rapid detection of affective signals from conspecifics is crucial for the survival of humans and other animals; if those around you are scared, there is reason for you to be alert and to prepare for impending danger. Previous research has shown that the human brain detects emotional faces within 150 msec of exposure, indicating a rapid differentiation of visual social signals based on emotional content. Here we use event-related brain potential (ERP) measures to show for the first time that this mechanism extends to the auditory domain, using human nonverbal vocalizations, such as screams. An early fronto-central positivity to fearful vocalizations compared with spectrally rotated and thus acoustically matched versions of the same sounds started 150 msec after stimulus onset. This effect was also observed for other vocalized emotions (achievement and disgust), but not for affectively neutral vocalizations, and was linked to the perceived arousal of an emotion category. That the timing, polarity, and scalp distribution of this new ERP correlate are similar to ERP markers of emotional face processing suggests that common supramodal brain mechanisms may be involved in the rapid detection of affectively relevant visual and auditory signals.
  • Sauter, D., Eisner, F., Ekman, P., & Scott, S. K. (2010). Reply to Gewald: Isolated Himba settlements still exist in Kaokoland [Letter to the editor]. Proceedings of the National Academy of Sciences of the United States of America, 107(18), E76. doi:10.1073/pnas.1002264107.

    Abstract

    We agree with Gewald (1) that historical and anthropological accounts are essential tools for understanding the Himba culture, and these accounts are valuable to both us and him. However, we contest his claim that the Himba individuals in our study were not culturally isolated. Gewald (1) claims that it would be “unlikely” that the Himba people with whom we worked had “not been exposed to the affective signals of individuals from cultural groups other than their own” as stated in our paper (2). Gewald (1) seems to argue that, because outside groups have had contact with some Himba, this means that these events affected all Himba. Yet, the Himba constitute a group of 20,000-50,000 people (3) living in small settlements scattered across the vast Kaokoland region, an area of 49,000 km² (4).
  • Sauter, D., & Levinson, S. C. (2010). What's embodied in a smile? [Comment on Niedenthal et al.]. Behavioral and Brain Sciences, 33, 457-458. doi:10.1017/S0140525X10001597.

    Abstract

    Differentiation of the forms and functions of different smiles is needed, but they should be based on empirical data on distinctions that senders and receivers make, and the physical cues that are employed. Such data would allow for a test of whether smiles can be differentiated using perceptual cues alone or whether mimicry or simulation are necessary.
  • Schäfer, M., & Haun, D. B. M. (2010). Sharing among children across cultures. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 45-49). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.529154.
  • Scharenborg, O., & Boves, L. (2010). Computational modelling of spoken-word recognition processes: Design choices and evaluation. Pragmatics & Cognition, 18, 136-164. doi:10.1075/pc.18.1.06sch.

    Abstract

    Computational modelling has proven to be a valuable approach in developing theories of spoken-word processing. In this paper, we focus on a particular class of theories in which it is assumed that the spoken-word recognition process consists of two consecutive stages, with an 'abstract' discrete symbolic representation at the interface between the stages. In evaluating computational models, it is important to bring in independent arguments for the cognitive plausibility of the algorithms that are selected to compute the processes in a theory. This paper discusses the relation between behavioural studies, theories, and computational models of spoken-word recognition. We explain how computational models can be assessed in terms of the goodness of fit with the behavioural data and the cognitive plausibility of the algorithms. An in-depth analysis of several models provides insights into how computational modelling has led to improved theories and to a better understanding of the human spoken-word recognition process.
  • Scharenborg, O., Norris, D., Ten Bosch, L., & McQueen, J. M. (2005). How should a speech recognizer work? Cognitive Science, 29(6), 867-918. doi:10.1207/s15516709cog0000_37.

    Abstract

    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR that, in contrast to current existing models of HSR, recognizes words from real speech input.
  • Scharenborg, O. (2010). Modeling the use of durational information in human spoken-word recognition. Journal of the Acoustical Society of America, 127, 3758-3770. doi:10.1121/1.3377050.

    Abstract

    Evidence that listeners, at least in a laboratory environment, use durational cues to help resolve temporarily ambiguous speech input has accumulated over the past decades. This paper introduces Fine-Tracker, a computational model of word recognition specifically designed for tracking fine-phonetic information in the acoustic speech signal and using it during word recognition. Two simulations were carried out using real speech as input to the model. The simulations showed that Fine-Tracker, as has been found for humans, benefits from durational information during word recognition, and uses it to disambiguate the incoming speech signal. The availability of durational information allows the computational model to distinguish embedded words from their matrix words (first simulation), and to distinguish word-final realizations of [s] from word-initial realizations (second simulation). Fine-Tracker thus provides the first computational model of human word recognition that is able to extract durational information from the speech signal and to use it to differentiate words.
  • Scharenborg, O., Wan, V., & Ernestus, M. (2010). Unsupervised speech segmentation: An analysis of the hypothesized phone boundaries. Journal of the Acoustical Society of America, 127, 1084-1095. doi:10.1121/1.3277194.

    Abstract

    Despite using different algorithms, most unsupervised automatic phone segmentation methods achieve similar performance in terms of percentage correct boundary detection. Nevertheless, unsupervised segmentation algorithms are not able to perfectly reproduce manually obtained reference transcriptions. This paper investigates fundamental problems for unsupervised segmentation algorithms by comparing a phone segmentation obtained using only the acoustic information present in the signal with a reference segmentation created by human transcribers. The analyses of the output of an unsupervised speech segmentation method that uses acoustic change to hypothesize boundaries showed that acoustic change is a fairly good indicator of segment boundaries: over two-thirds of the hypothesized boundaries coincide with segment boundaries. Statistical analyses showed that the errors are related to segment duration, sequences of similar segments, and inherently dynamic phones. In order to improve unsupervised automatic speech segmentation, current one-stage bottom-up segmentation methods should be expanded into two-stage segmentation methods that are able to use a mix of bottom-up information extracted from the speech signal and automatically derived top-down information. In this way, unsupervised methods can be improved while remaining flexible and language-independent.
  • Scheeringa, R., Bastiaansen, M. C. M., Petersson, K. M., Oostenveld, R., Norris, D. G., & Hagoort, P. (2008). Frontal theta EEG activity correlates negatively with the default mode network in resting state. International Journal of Psychophysiology, 67, 242-251. doi:10.1016/j.ijpsycho.2007.05.017.

    Abstract

    We used simultaneously recorded EEG and fMRI to investigate in which areas the BOLD signal correlates with frontal theta power changes, while subjects were lying quietly at rest in the scanner with their eyes open. To obtain a reliable estimate of frontal theta power, we applied ICA to band-pass filtered (2–9 Hz) EEG data. For each subject we selected the component that best matched the mid-frontal scalp topography associated with the frontal theta rhythm. We applied a time-frequency analysis to this component and used the time course of the frequency bin with the highest overall power to form a regressor that modeled spontaneous fluctuations in frontal theta power. No significant positive BOLD correlations with this regressor were observed. Extensive negative correlations were observed in the areas that together form the default mode network. We conclude that frontal theta activity can be seen as an EEG index of default mode network activity.
  • Schiller, N. O., Schuhmann, T., Neyndorff, A. C., & Jansma, B. M. (2006). The influence of semantic category membership on syntactic decisions: A study using event-related brain potentials. Brain Research, 1082(1), 153-164. doi:10.1016/j.brainres.2006.01.087.

    Abstract

    An event-related brain potential (ERP) experiment was carried out to investigate the influence of semantic category membership on syntactic decision-making. Native speakers of German viewed a series of words that were semantically marked or unmarked for gender and made go/no-go decisions about the grammatical gender of those words. The electrophysiological results indicated that participants could make a gender decision earlier when words were semantically gender-marked than when they were semantically gender-unmarked. Our data provide evidence for the influence of semantic category membership on the decision about the syntactic gender of a visually presented German noun. More specifically, our results support models of language comprehension in which semantic information processing of words is initiated before syntactic information processing is finalized.
  • Schiller, N. O. (2005). Verbal self-monitoring. In A. Cutler (Ed.), Twenty-first century psycholinguistics: Four cornerstones (pp. 245-261). Mahwah, NJ: Lawrence Erlbaum.
  • Schiller, N. O., & Costa, A. (2006). Different selection principles of freestanding and bound morphemes in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(5), 1201-1207. doi:10.1037/0278-7393.32.5.1201.

    Abstract

    Freestanding and bound morphemes differ in many (psycho)linguistic aspects. Some theorists have claimed that the representation and retrieval of freestanding and bound morphemes in the course of language production are governed by similar processing mechanisms. Alternatively, it has been proposed that the two types of morphemes may be selected for production in different ways. In this article, the authors first review the available experimental evidence related to this topic and then present new experimental data supporting the notion that freestanding and bound morphemes are retrieved following distinct processing principles: freestanding morphemes are subject to competition, whereas bound morphemes are not.
  • Schiller, N. O. (2006). Lexical stress encoding in single word production estimated by event-related brain potentials. Brain Research, 1112(1), 201-212. doi:10.1016/j.brainres.2006.07.027.

    Abstract

    An event-related brain potential (ERP) experiment was carried out to investigate the time course of lexical stress encoding in language production. Native speakers of Dutch viewed a series of pictures corresponding to bisyllabic names which were stressed either on the first or on the second syllable and made go/no-go decisions on the lexical stress location of those picture names. Behavioral results replicated a pattern observed earlier, i.e. faster button-press latencies to initial-stress than to final-stress targets. The electrophysiological results indicated that participants could make a lexical stress decision significantly earlier when picture names had initial stress than when they had final stress. Moreover, the present data suggest a time course for lexical stress encoding during single word form formation in language production. When word length is corrected for, the temporal interval for lexical stress encoding specified by the current ERP results falls into the time window previously identified for phonological encoding in language production.
  • Schiller, N. O., Jansma, B. M., Peters, J., & Levelt, W. J. M. (2006). Monitoring metrical stress in polysyllabic words. Language and Cognitive Processes, 21(1/2/3), 112-140. doi:10.1080/01690960400001861.

    Abstract

    This study investigated the monitoring of metrical stress information in internally generated speech. In Experiment 1, Dutch participants were asked to judge whether bisyllabic picture names had initial or final stress. Results showed significantly faster decision times for initially stressed targets (e.g., KAno “canoe”) than for targets with final stress (e.g., kaNON “cannon”; capital letters indicate stressed syllables). It was demonstrated that monitoring latencies are not a function of the picture naming or object recognition latencies to the same pictures. Experiments 2 and 3 replicated the outcome of the first experiment with trisyllabic picture names. These results are similar to the findings of Wheeldon and Levelt (1995) in a segment monitoring task. The outcome might be interpreted to demonstrate that phonological encoding in speech production is a rightward incremental process. Alternatively, the data might reflect the sequential nature of a perceptual mechanism used to monitor lexical stress.
  • Schiller, N. O., & Caramazza, A. (2006). Grammatical gender selection and the representation of morphemes: The production of Dutch diminutives. Language and Cognitive Processes, 21, 945-973. doi:10.1080/01690960600824344.

    Abstract

    In this study, we investigated grammatical feature selection during noun phrase production in Dutch. More specifically, we studied the conditions under which different grammatical genders select either the same or different determiners. Pictures of simple objects paired with a gender-congruent or a gender-incongruent distractor word were presented. Participants named the pictures using a noun phrase with the appropriate gender-marked determiner. Auditory (Experiment 1) or visual cues (Experiment 2) indicated whether the noun was to be produced in its standard or diminutive form. Results revealed a cost in naming latencies when target and distractor take different determiner forms independent of whether or not they have the same gender. This replicates earlier results showing that congruency effects are due to competition during the selection of determiner forms rather than gender features. The overall pattern of results supports the view that grammatical feature selection is an automatic consequence of lexical node selection and therefore not subject to interference from incongruent grammatical features. Selection of the correct determiner form, however, is a competitive process, implying that lexical node and grammatical feature selection operate with distinct principles.