Publications

  • O'Connor, L. (2007). 'Chop, shred, snap apart': Verbs of cutting and breaking in Lowland Chontal. Cognitive Linguistics, 18(2), 219-230. doi:10.1515/COG.2007.010.

    Abstract

    Typological descriptions of understudied languages reveal intriguing crosslinguistic variation in descriptions of events of object separation and destruction. In Lowland Chontal of Oaxaca, verbs of cutting and breaking lexicalize event perspectives that range from the common to the quite unusual, from the tearing of cloth to the snapping apart on the cross-grain of yarn. This paper describes the semantic and syntactic criteria that characterize three verb classes in this semantic domain, examines patterns of event construal, and takes a look at likely changes in these event descriptions from the perspective of endangered language recovery.
  • O'Connor, L. (2007). [Review of the book Pronouns by D.N.S. Bhat]. Journal of Pragmatics, 39(3), 612-616. doi:10.1016/j.pragma.2006.09.007.
  • Ogasawara, N., & Warner, N. (2009). Processing missing vowels: Allophonic processing in Japanese. Language and Cognitive Processes, 24, 376-411. doi:10.1080/01690960802084028.

    Abstract

    The acoustic realisation of a speech sound varies, often showing allophonic variation triggered by surrounding sounds. Listeners recognise words and sounds well despite such variation, and even make use of allophonic variability in processing. This study reports five experiments on processing of the reduced/unreduced allophonic alternation of Japanese high vowels. The results show that listeners use phonological knowledge of their native language during phoneme processing and word recognition. However, interactions of the phonological and acoustic effects differ in these two processes. A facilitatory phonological effect and an inhibitory acoustic effect cancel one another out in phoneme processing; while in word recognition, the facilitatory phonological effect overrides the inhibitory acoustic effect. Four potential models of the processing of allophonic variation are discussed. The results can be accommodated in two of them, but require additional assumptions or modifications to the models, and primarily support lexical specification of allophonic variability.

  • Ogdie, M. N., Fisher, S. E., Yang, M., Ishii, J., Francks, C., Loo, S. K., Cantor, R. M., McCracken, J. T., McGough, J. J., Smalley, S. L., & Nelson, S. F. (2004). Attention Deficit Hyperactivity Disorder: Fine mapping supports linkage to 5p13, 6q12, 16p13, and 17p11. American Journal of Human Genetics, 75(4), 661-668. doi:10.1086/424387.

    Abstract

    We completed fine mapping of nine positional candidate regions for attention-deficit/hyperactivity disorder (ADHD) in an extended population sample of 308 affected sibling pairs (ASPs), constituting the largest linkage sample of families with ADHD published to date. The candidate chromosomal regions were selected from all three published genomewide scans for ADHD, and fine mapping was done to comprehensively validate these positional candidate regions in our sample. Multipoint maximum LOD score (MLS) analysis yielded significant evidence of linkage on 6q12 (MLS 3.30; empiric P=.024) and 17p11 (MLS 3.63; empiric P=.015), as well as suggestive evidence on 5p13 (MLS 2.55; empiric P=.091). In conjunction with the previously reported significant linkage on the basis of fine mapping 16p13 in the same sample as this report, the analyses presented here indicate that four chromosomal regions—5p13, 6q12, 16p13, and 17p11—are likely to harbor susceptibility genes for ADHD. The refinement of linkage within each of these regions lays the foundation for subsequent investigations using association methods to detect risk genes of moderate effect size.
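
    As background to the linkage statistics quoted above: a LOD score compares the likelihood of the marker data at a hypothesized recombination fraction theta with the likelihood under no linkage (theta = 0.5), and the maximum LOD score (MLS) is its maximum over theta. A standard textbook formulation (not reproduced from the paper itself):

      \[
      \mathrm{LOD}(\theta) = \log_{10}\frac{L(\theta)}{L(\theta = 0.5)},
      \qquad
      \mathrm{MLS} = \max_{\theta}\,\mathrm{LOD}(\theta)
      \]

    Under the usual convention, MLS values around 3 or higher (odds of roughly 1000:1 in favor of linkage) count as significant evidence, consistent with the MLS values of 3.30 and 3.63 being reported as significant above.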
  • Orfanidou, E., Adam, R., McQueen, J. M., & Morgan, G. (2009). Making sense of nonsense in British Sign Language (BSL): The contribution of different phonological parameters to sign recognition. Memory & Cognition, 37(3), 302-315. doi:10.3758/MC.37.3.302.

    Abstract

    Do all components of a sign contribute equally to its recognition? In the present study, misperceptions in the sign-spotting task (based on the word-spotting task; Cutler & Norris, 1988) were analyzed to address this question. Three groups of deaf signers of British Sign Language (BSL) with different ages of acquisition (AoA) saw BSL signs combined with nonsense signs, along with combinations of two nonsense signs. They were asked to spot real signs and report what they had spotted. We will present an analysis of false alarms to the nonsense-sign combinations—that is, misperceptions of nonsense signs as real signs (cf. van Ooijen, 1996). Participants modified the movement and handshape parameters more than the location parameter. Within this pattern, however, there were differences as a function of AoA. These results show that the theoretical distinctions between form-based parameters in sign-language models have consequences for online processing. Vowels and consonants have different roles in speech recognition; similarly, it appears that movement, handshape, and location parameters contribute differentially to sign recognition.
  • Orfanidou, E., Adam, R., Morgan, G., & McQueen, J. M. (2010). Recognition of signed and spoken language: Different sensory inputs, the same segmentation procedure. Journal of Memory and Language, 62(3), 272-283. doi:10.1016/j.jml.2009.12.001.

    Abstract

    Signed languages are articulated through simultaneous upper-body movements and are seen; spoken languages are articulated through sequential vocal-tract movements and are heard. But word recognition in both language modalities entails segmentation of a continuous input into discrete lexical units. According to the Possible Word Constraint (PWC), listeners segment speech so as to avoid impossible words in the input. We argue here that the PWC is a modality-general principle. Deaf signers of British Sign Language (BSL) spotted real BSL signs embedded in nonsense-sign contexts more easily when the nonsense signs were possible BSL signs than when they were not. A control experiment showed that there were no articulatory differences between the different contexts. A second control experiment on segmentation in spoken Dutch strengthened the claim that the main BSL result likely reflects the operation of a lexical-viability constraint. It appears that signed and spoken languages, in spite of radical input differences, are segmented so as to leave no residues of the input that cannot be words.
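
    The Possible Word Constraint (PWC) invoked above can be made concrete with a toy sketch. The following Python snippet is illustrative only: the "residue must contain a vowel" criterion and the example strings follow the spoken word-spotting literature (e.g., "apple" is harder to spot in "fapple" than in "vuffapple"), not the sign-viability criteria used for the BSL materials in the paper.

      VOWELS = set("aeiou")

      def possible_word(residue):
          # Simplified PWC: an empty residue is fine; a non-empty residue
          # counts as a possible word only if it contains at least one vowel.
          return residue == "" or any(c in VOWELS for c in residue)

      def pwc_predicts_easy(target, context):
          """Should the target be easy to spot in context under the PWC?"""
          i = context.find(target)
          if i < 0:
              return False
          before, after = context[:i], context[i + len(target):]
          return possible_word(before) and possible_word(after)

      print(pwc_predicts_easy("apple", "fapple"))     # False: residue 'f'
      print(pwc_predicts_easy("apple", "vuffapple"))  # True: residue 'vuff'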
  • Ortega, G., & Morgan, G. (2010). Comparing child and adult development of a visual phonological system. Language, Interaction and Acquisition, 1(1), 67-81. doi:10.1075/lia.1.1.05ort.

    Abstract

    Research has documented systematic articulation differences in young children’s first signs compared with the adult input. Explanations range from the implementation of phonological processes to cognitive limitations and motor immaturity. One way of disentangling these possible explanations is to investigate signing articulation in adults who do not know any sign language but have mature cognitive and motor development. Some preliminary observations are provided on signing accuracy in a group of adults using a sign repetition methodology. Adults make the most errors with marked handshapes and produce movement and location errors akin to those reported for child signers. In addition, there are both positive and negative influences of sign iconicity on sign repetition in adults. Possible reasons for these iconicity effects, based on gesture, are discussed.
  • Ortega, G. (2010). MSJE TXT: Un evento social [TXT MSG: A social event]. Lectura y vida: Revista latinoamericana de lectura, 4, 44-53.
  • Osterhout, L., & Hagoort, P. (1999). A superficial resemblance does not necessarily mean you are part of the family: Counterarguments to Coulson, King and Kutas (1998) in the P600/SPS-P300 debate. Language and Cognitive Processes, 14, 1-14. doi:10.1080/016909699386356.

    Abstract

    Two recent studies (Coulson et al., 1998; Osterhout et al., 1996) examined the relationship between the event-related brain potential (ERP) responses to linguistic syntactic anomalies (P600/SPS) and domain-general unexpected events (P300). Coulson et al. concluded that these responses are highly similar, whereas Osterhout et al. concluded that they are distinct. In this comment, we evaluate the relative merits of these claims. We conclude that the available evidence indicates that the ERP response to syntactic anomalies is at least partially distinct from the ERP response to unexpected anomalies that do not involve a grammatical violation.
  • Otake, T., & Cutler, A. (1999). Perception of suprasegmental structure in a nonnative dialect. Journal of Phonetics, 27, 229-253. doi:10.1006/jpho.1999.0095.

    Abstract

    Two experiments examined the processing of Tokyo Japanese pitch-accent distinctions by native speakers of Japanese from two accentless-variety areas. In both experiments, listeners were presented with Tokyo Japanese speech materials used in an earlier study with Tokyo Japanese listeners, who clearly exploited the pitch-accent information in spoken-word recognition. In the first experiment, listeners judged from which of two words, differing in accentual structure, isolated syllables had been extracted. Both new groups were, overall, as successful at this task as Tokyo Japanese speakers had been, but their response patterns differed from those of the Tokyo Japanese, for instance in that a bias towards H judgments in the Tokyo Japanese responses was weakened in the present groups' responses. In a second experiment, listeners heard word fragments and guessed what the words were; in this task, the speakers from accentless areas again performed significantly above chance, but their responses showed less sensitivity to the information in the input, and greater bias towards vocabulary distribution frequencies, than had been observed with the Tokyo Japanese listeners. The results suggest that experience with a local accentless dialect affects the processing of accent for word recognition in Tokyo Japanese, even for listeners with extensive exposure to Tokyo Japanese.
  • Otten, M., & Van Berkum, J. J. A. (2007). What makes a discourse constraining? Comparing the effects of discourse message and scenario fit on the discourse-dependent N400 effect. Brain Research, 1153, 166-177. doi:10.1016/j.brainres.2007.03.058.

    Abstract

    A discourse context provides a reader with a great deal of information that can provide constraints for further language processing, at several different levels. In this experiment we used event-related potentials (ERPs) to explore whether discourse-generated contextual constraints are based on the precise message of the discourse or, more 'loosely', on the scenario suggested by one or more content words in the text. Participants read constraining stories whose precise message rendered a particular word highly predictable ("The manager thought that the board of directors should assemble to discuss the issue. He planned a...[meeting]") as well as non-constraining control stories that were only biasing in virtue of the scenario suggested by some of the words ("The manager thought that the board of directors need not assemble to discuss the issue. He planned a..."). Coherent words that were inconsistent with the message-level expectation raised in a constraining discourse (e.g., "session" instead of "meeting") elicited a classic centroparietal N400 effect. However, when the same words were only inconsistent with the scenario loosely suggested by earlier words in the text, they elicited a different negativity around 400 ms, with a more anterior, left-lateralized maximum. The fact that the discourse-dependent N400 effect cannot be reduced to scenario-mediated priming reveals that it reflects the rapid use of precise message-level constraints in comprehension. At the same time, the left-lateralized negativity in non-constraining stories suggests that, at least in the absence of strong message-level constraints, scenario-mediated priming does also rapidly affect comprehension.
  • Otten, M., & Van Berkum, J. J. A. (2009). Does working memory capacity affect the ability to predict upcoming words in discourse? Brain Research, 1291, 92-101. doi:10.1016/j.brainres.2009.07.042.

    Abstract

    Prior research has indicated that readers and listeners can use information in the prior discourse to rapidly predict specific upcoming words, as the text is unfolding. Here we used event-related potentials to explore whether the ability to make rapid online predictions depends on a reader's working memory capacity (WMC). Readers with low WMC were hypothesized to differ from high WMC readers in their overall capability to make predictions (because of their lack of cognitive resources). High and low WMC participants read highly constraining stories that supported the prediction of a specific noun, mixed with coherent but essentially unpredictive ‘prime control’ stories that contained the same content words as the predictive stories. To test whether readers were anticipating upcoming words, critical nouns were preceded by a determiner whose gender agreed or disagreed with the gender of the expected noun. In predictive stories, both high and low WMC readers displayed an early negative deflection (300–600 ms) to unexpected determiners, which was not present in prime control stories. Only the low WMC participants displayed an additional later negativity (900–1500 ms) to unexpected determiners. This pattern of results suggests that WMC does not influence the ability to anticipate upcoming words per se, but does change the way in which readers deal with information that disconfirms the generated prediction.
  • Otten, M., Nieuwland, M. S., & Van Berkum, J. J. A. (2007). Great expectations: Specific lexical anticipation influences the processing of spoken language. BMC Neuroscience, 8: 89. doi:10.1186/1471-2202-8-89.

    Abstract

    Background: Recently, several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event-related potentials (ERPs) we tested whether these on-line predictions are based on a message-based representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. Results: When the message of the preceding discourse was predictive, adjectives with an unexpected gender-inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. Conclusions: When listening to a constraining discourse people rapidly make very specific predictions about the remainder of the story, as the story unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.
  • Özdemir, R., Roelofs, A., & Levelt, W. J. M. (2007). Perceptual uniqueness point effects in monitoring internal speech. Cognition, 105(2), 457-465. doi:10.1016/j.cognition.2006.10.006.

    Abstract

    Disagreement exists about how speakers monitor their internal speech. Production-based accounts assume that self-monitoring mechanisms exist within the production system, whereas comprehension-based accounts assume that monitoring is achieved through the speech comprehension system. Comprehension-based accounts predict perception-specific effects, like the perceptual uniqueness-point effect, in the monitoring of internal speech. We ran an extensive experiment testing this prediction using internal phoneme monitoring and picture naming tasks. Our results show an effect of the perceptual uniqueness point of a word in internal phoneme monitoring in the absence of such an effect in picture naming. These results support comprehension-based accounts of the monitoring of internal speech.
  • Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.

    Abstract

    During language comprehension, listeners use the global semantic representation from previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated to previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.
  • Ozyurek, A., & Kelly, S. D. (2007). Gesture, language, and brain. Brain and Language, 101(3), 181-185. doi:10.1016/j.bandl.2007.03.006.
  • Ozyurek, A., Zwitserlood, I., & Perniss, P. M. (2010). Locative expressions in signed languages: A view from Turkish Sign Language (TID). Linguistics, 48(5), 1111-1145. doi:10.1515/LING.2010.036.

    Abstract

    Locative expressions encode the spatial relationship between two (or more) entities. In this paper, we focus on locative expressions in signed languages, which use the visual-spatial modality for linguistic expression, specifically in Turkish Sign Language (Türk İşaret Dili, henceforth TİD). We show that TİD uses various strategies in discourse to encode the relation between a Ground entity (i.e., a bigger and/or backgrounded entity) and a Figure entity (i.e., a smaller entity, which is in the focus of attention). Some of these strategies exploit affordances of the visual modality for analogue representation and provide evidence for modality-specific effects on locative expressions in sign languages. However, other modality-specific strategies, e.g., the simultaneous expression of Figure and Ground, which has been reported for many other sign languages, occur only sparsely in TİD. Furthermore, TİD uses categorical as well as analogical structures in locative expressions. On the basis of these findings, we discuss differences and similarities between signed and spoken languages to broaden our understanding of the range of structures used in natural language (i.e., in both the visual-spatial and oral-aural modalities) to encode locative relations. A general linguistic theory of spatial relations, and specifically of locative expressions, must take all structures that might arise in both modalities into account before it can generalize over the human language faculty.
  • Pereiro Estevan, Y., Wan, V., & Scharenborg, O. (2007). Finding maximum margin segments in speech. Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference, IV, 937-940. doi:10.1109/ICASSP.2007.367225.

    Abstract

    Maximum margin clustering (MMC) is a relatively new and promising kernel method. In this paper, we apply MMC to the task of unsupervised speech segmentation. We present three automatic speech segmentation methods based on MMC, which are tested on TIMIT and evaluated on the level of phoneme boundary detection. The results show that MMC is highly competitive with existing unsupervised methods for the automatic detection of phoneme boundaries. Furthermore, initial analyses show that MMC is a promising method for the automatic detection of sub-phonetic information in the speech signal.
  • Perniss, P. M. (2007). Achieving spatial coherence in German sign language narratives: The use of classifiers and perspective. Lingua, 117(7), 1315-1338. doi:10.1016/j.lingua.2005.06.013.

    Abstract

    Spatial coherence in discourse relies on the use of devices that provide information about where referents are and where events take place. In signed language, two primary devices for achieving and maintaining spatial coherence are the use of classifier forms and signing perspective. This paper gives a unified account of the relationship between perspective and classifiers, and divides the range of possible correspondences between these two devices into prototypical and non-prototypical alignments. An analysis of German Sign Language narratives of complex events investigates the role of different classifier-perspective constructions in encoding spatial information about location, orientation, action and motion, as well as size and shape of referents. In particular, I show how non-prototypical alignments, including simultaneity of perspectives, contribute to the maintenance of spatial coherence, and provide functional explanations in terms of efficiency and informativeness constraints on discourse.
  • Perniss, P. M., Thompson, R. L., & Vigliocco, G. (2010). Iconicity as a general property of language: Evidence from spoken and signed languages [Review article]. Frontiers in Psychology, 1, E227. doi:10.3389/fpsyg.2010.00227.

    Abstract

    Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to “hook up” to motor and perceptual experience.
  • Petersson, K. M., Elfgren, C., & Ingvar, M. (1999). Dynamic changes in the functional anatomy of the human brain during recall of abstract designs related to practice. Neuropsychologia, 37, 567-587.

    Abstract

    In the present PET study we explore some functional aspects of the interaction between attentional/control processes and learning/memory processes. The network of brain regions supporting recall of abstract designs was studied in a less practiced and in a well practiced state. The results indicate that automaticity, i.e., a decreased dependence on attentional and working memory resources, develops as a consequence of practice. This corresponds to the practice-related decreases of activity in the prefrontal, anterior cingulate, and posterior parietal regions. In addition, the activity of the medial temporal regions decreased as a function of practice. This indicates an inverse relation between the strength of encoding and the activation of the MTL during retrieval. Furthermore, the pattern of practice-related increases in the auditory, posterior insular-opercular extending into the perisylvian supramarginal region, and the right mid occipito-temporal region, may reflect a lower degree of inhibitory attentional modulation of task-irrelevant processing and more fully developed representations of the abstract designs, respectively. We also suggest that free recall is dependent on bilateral prefrontal processing, in particular non-automatic free recall. The present results confirm previous functional neuroimaging studies of memory retrieval indicating that recall is subserved by a network of interacting brain regions. Furthermore, the results indicate that some components of the neural network subserving free recall may have a dynamic role and that there is a functional restructuring of the information processing networks during the learning process.
  • Petersson, K. M., Reis, A., Castro-Caldas, A., & Ingvar, M. (1999). Effective auditory-verbal encoding activates the left prefrontal and the medial temporal lobes: A generalization to illiterate subjects. NeuroImage, 10, 45-54. doi:10.1006/nimg.1999.0446.

    Abstract

    Recent event-related fMRI studies indicate that the prefrontal (PFC) and the medial temporal lobe (MTL) regions are more active during effective encoding than during ineffective encoding. The within-subject design and the use of well-educated young college students in these studies make it important to replicate these results in other study populations. In this PET study, we used an auditory word-pair association cued-recall paradigm and investigated a group of healthy upper middle-aged/older illiterate women. We observed a positive correlation between cued-recall success and the regional cerebral blood flow of the left inferior PFC (BA 47) and the MTLs. Specifically, we used the cued-recall success as a covariate in a general linear model and the results confirmed that the left inferior PFC and the MTL are more active during effective encoding than during ineffective encoding. These effects were observed during encoding of both semantically and phonologically related word pairs, indicating that these effects are robust in the studied population, that is, reproducible within group. These results generalize the results of Brewer et al. (1998, Science 281, 1185–1187) and Wagner et al. (1998, Science 281, 1188–1191) to an upper middle-aged/older illiterate population. In addition, the present study indicates that effective relational encoding correlates positively with the activity of the anterior medial temporal lobe regions.
  • Petersson, K. M., Forkstam, C., & Ingvar, M. (2004). Artificial syntactic violations activate Broca’s region. Cognitive Science, 28(3), 383-407. doi:10.1207/s15516709cog2803_4.

    Abstract

    In the present study, using event-related functional magnetic resonance imaging, we investigated a group of participants on a grammaticality classification task after they had been exposed to well-formed consonant strings generated from an artificial regular grammar. We used an implicit acquisition paradigm in which the participants were exposed to positive examples. The objective of this study was to investigate whether brain regions related to language processing overlap with the brain regions activated by the grammaticality classification task used in the present study. Recent meta-analyses of functional neuroimaging studies indicate that syntactic processing is related to the left inferior frontal gyrus (Brodmann's areas 44 and 45) or Broca's region. In the present study, we observed that artificial grammaticality violations activated Broca's region in all participants. This observation lends some support to the suggestions that artificial grammar learning represents a model for investigating aspects of language learning in infants.
  • Petersson, K. M., Silva, C., Castro-Caldas, A., Ingvar, M., & Reis, A. (2007). Literacy: A cultural influence on functional left-right differences in the inferior parietal cortex. European Journal of Neuroscience, 26(3), 791-799. doi:10.1111/j.1460-9568.2007.05701.x.

    Abstract

    The current understanding of hemispheric interaction is limited. Functional hemispheric specialization is likely to depend on both genetic and environmental factors. In the present study we investigated the importance of one factor, literacy, for the functional lateralization in the inferior parietal cortex in two independent samples of literate and illiterate subjects. The results show that the illiterate group is consistently more right-lateralized than their literate controls. In contrast, the two groups showed a similar degree of left-right differences in early speech-related regions of the superior temporal cortex. These results provide evidence suggesting that a cultural factor, literacy, influences the functional hemispheric balance in reading and verbal working memory-related regions. In a third sample, we investigated grey and white matter with voxel-based morphometry. The results showed differences between literacy groups in white matter intensities related to the mid-body region of the corpus callosum and the inferior parietal and parietotemporal regions (literate > illiterate). There were no corresponding differences in the grey matter. This suggests that the influence of literacy on brain structure related to reading and verbal working memory affects large-scale brain connectivity more than grey matter per se.
  • Petersson, K. M., Elfgren, C., & Ingvar, M. (1999). Learning-related effects and functional neuroimaging. Human Brain Mapping, 7, 234-243. doi:10.1002/(SICI)1097-0193(1999)7:4<234:AID-HBM2>3.0.CO;2-O.

    Abstract

    A fundamental problem in the study of learning is that learning-related changes may be confounded by nonspecific time effects. There are several strategies for handling this problem. This problem may be of greater significance in functional magnetic resonance imaging (fMRI) compared to positron emission tomography (PET). Using the general linear model, we describe, compare, and discuss two approaches for separating learning-related from nonspecific time effects. The first approach makes assumptions on the general behavior of nonspecific effects and explicitly models these effects, i.e., nonspecific time effects are incorporated as a linear or nonlinear confounding covariate in the statistical model. The second strategy makes no a priori assumption concerning the form of nonspecific time effects, but implicitly controls for nonspecific effects using an interaction approach, i.e., learning effects are assessed with an interaction contrast. The two approaches depend on specific assumptions and have specific limitations. With certain experimental designs, both approaches may be used and the results compared, lending particular support to effects that are independent of the method used. A third and perhaps better approach that sometimes may be practically unfeasible is to use a completely temporally balanced experimental design. The choice of approach may be of particular importance when learning related effects are studied with fMRI.
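
    In GLM terms (notation mine, not the authors'), the first approach described above models nonspecific time effects as a confound covariate, for example

      \[
      y = \beta_0 + \beta_1 x_{\text{learning}} + \beta_2 f(t) + \varepsilon,
      \]

    where f(t) is a linear or nonlinear function of scan time and learning effects are tested on beta_1. The second approach instead tests a task-by-time interaction contrast such as

      \[
      c = (\mu_{A,\text{late}} - \mu_{A,\text{early}}) - (\mu_{B,\text{late}} - \mu_{B,\text{early}}),
      \]

    which cancels nonspecific time effects common to the learning task A and a control task B without assuming their functional form.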
  • Petersson, K. M., Nichols, T. E., Poline, J.-B., & Holmes, A. P. (1999). Statistical limitations in functional neuroimaging I: Non-inferential methods and statistical models. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 354, 1239-1260.
  • Petersson, K. M., Nichols, T. E., Poline, J.-B., & Holmes, A. P. (1999). Statistical limitations in functional neuroimaging II: Signal detection and statistical inference. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 354, 1261-1282.
  • Petersson, K. M. (2004). The human brain, language, and implicit learning. Impuls, Tidsskrift for psykologi (Norwegian Journal of Psychology), 58(3), 62-72.
  • Petrovic, P., Petersson, K. M., Hansson, P., & Ingvar, M. (2004). Brainstem involvement in the initial response to pain. NeuroImage, 22, 995-1005. doi:10.1016/j.neuroimage.2004.01.046.

    Abstract

    The autonomic responses to acute pain exposure usually habituate rapidly while the subjective ratings of pain remain high for more extended periods of time. Thus, systems involved in the autonomic response to painful stimulation, for example the hypothalamus and the brainstem, would be expected to attenuate the response to pain during prolonged stimulation. This suggestion is in line with the hypothesis that the brainstem is specifically involved in the initial response to pain. To probe this hypothesis, we performed a positron emission tomography (PET) study where we scanned subjects during the first and second minute of a prolonged tonic painful cold stimulation (cold pressor test) and nonpainful cold stimulation. Galvanic skin response (GSR) was recorded during the PET scanning as an index of autonomic sympathetic response. In the main effect of pain, we observed increased activity in the thalamus bilaterally, in the contralateral insula and in the contralateral anterior cingulate cortex but no significant increases in activity in the primary or secondary somatosensory cortex. The autonomic response (GSR) decreased with stimulus duration. Concomitant with the autonomic response, increased activity was observed in brainstem and hypothalamus areas during the initial vs. the late stimulation. This effect was significantly stronger for the painful than for the cold stimulation. Activity in the brainstem showed pain-specific covariation with areas involved in pain processing, indicating an interaction between the brainstem and cortical pain networks. The findings indicate that areas in the brainstem are involved in the initial response to noxious stimulation, which is also characterized by an increased sympathetic response.
  • Petrovic, P., Ingvar, M., Stone-Elander, S., Petersson, K. M., & Hansson, P. (1999). A PET activation study of dynamic mechanical allodynia in patients with mononeuropathy. Pain, 83, 459-470.

    Abstract

    The objective of this study was to investigate the central processing of dynamic mechanical allodynia in patients with mononeuropathy. Regional cerebral blood flow, as an indicator of neuronal activity, was measured with positron emission tomography. Paired comparisons were made between three different states: rest, allodynia during brushing of the painful skin area, and brushing of the homologous contralateral area. Bilateral activations were observed in the primary somatosensory cortex (S1) and the secondary somatosensory cortex (S2) during allodynia compared to rest. The S1 activation contralateral to the site of the stimulus was more pronounced during allodynia than during innocuous touch. Significant activations of the contralateral posterior parietal cortex, the periaqueductal gray (PAG), the thalamus bilaterally and motor areas were also observed in the allodynic state compared to both non-allodynic states. In the anterior cingulate cortex (ACC) there was only a suggested activation when the allodynic state was compared with the non-allodynic states. In order to account for the individual variability in the intensity of allodynia and ongoing spontaneous pain, rCBF was regressed on the individually reported pain intensity, and significant covariations were observed in the ACC and the right anterior insula. Significantly decreased regional blood flow was observed bilaterally in the medial and lateral temporal lobe as well as in the occipital and posterior cingulate cortices when the allodynic state was compared to the non-painful conditions. This finding is consistent with previous studies suggesting attentional modulation and a central coping strategy for known and expected painful stimuli. Involvement of the medial pain system has previously been reported in patients with mononeuropathy during ongoing spontaneous pain. This study reveals a bilateral activation of the lateral pain system as well as involvement of the medial pain system during dynamic mechanical allodynia in patients with mononeuropathy.
  • Petrovic, P., Kalso, E., Petersson, K. M., Andersson, J., Fransson, P., & Ingvar, M. (2010). A prefrontal non-opioid mechanism in placebo analgesia. Pain, 150, 59-65. doi:10.1016/j.pain.2010.03.011.

    Abstract

    Behavioral studies have suggested that placebo analgesia is partly mediated by the endogenous opioid system. Expanding on these results we have shown that the opioid-receptor-rich rostral anterior cingulate cortex (rACC) is activated in both placebo and opioid analgesia. However, there are also differences between the two treatments. While opioids have direct pharmacological effects, acting on the descending pain inhibitory system, placebo analgesia depends on neocortical top-down mechanisms. An important difference may be that expectations are met to a lesser extent in placebo treatment as compared with a specific treatment, yielding a larger error signal. As these processes previously have been shown to influence other types of perceptual experiences, we hypothesized that they also may drive placebo analgesia. Imaging studies suggest that lateral orbitofrontal cortex (lObfc) and ventrolateral prefrontal cortex (vlPFC) are involved in processing expectation and error signals. We re-analyzed two independent functional imaging experiments related to placebo analgesia and emotional placebo to probe for a differential processing in these regions during placebo treatment vs. opioid treatment and to test if this activity is associated with the placebo response. In the first dataset lObfc and vlPFC showed an enhanced activation in placebo analgesia vs. opioid analgesia. Furthermore, the rACC activity co-varied with the prefrontal regions in the placebo condition specifically. A similar correlation between rACC and vlPFC was reproduced in another dataset involving emotional placebo and correlated with the degree of the placebo effect. Our results thus support the idea that placebo treatment differs from specific treatment in involving a prefrontal top-down influence on the rACC.
  • Petrovic, P., Carlsson, K., Petersson, K. M., Hansson, P., & Ingvar, M. (2004). Context-dependent deactivation of the amygdala during pain. Journal of Cognitive Neuroscience, 16, 1289-1301.

    Abstract

    The amygdala has been implicated in fundamental functions for the survival of the organism, such as fear and pain. In accord with this, several studies have shown increased amygdala activity during fear conditioning and the processing of fear-relevant material in human subjects. In contrast, functional neuroimaging studies of pain have shown a decreased amygdala activity. It has previously been proposed that the observed deactivations of the amygdala in these studies indicate a cognitive strategy to adapt to a distressful but, in the experimental setting, unavoidable painful event. In this positron emission tomography study, we show that a simple contextual manipulation, immediately preceding a painful stimulation, that increases the anticipated duration of the painful event leads to a decrease in amygdala activity and modulates the autonomic response during the noxious stimulation. On a behavioral level, 7 of the 10 subjects reported that they used coping strategies more intensely in this context. We suggest that the altered activity in the amygdala may be part of a mechanism to attenuate pain-related stress responses in a context that is perceived as being more aversive. The study also showed an increased activity in the rostral part of the anterior cingulate cortex in the same context in which the amygdala activity decreased, further supporting the idea that this part of the cingulate cortex is involved in the modulation of emotional and pain networks.
  • Pickering, M. J., & Majid, A. (2007). What are implicit causality and consequentiality? Language and Cognitive Processes, 22(5), 780-788. doi:10.1080/01690960601119876.

    Abstract

    Much work in psycholinguistics and social psychology has investigated the notion of implicit causality associated with verbs. Crinean and Garnham (2006) relate implicit causality to another phenomenon, implicit consequentiality. We argue that they and other researchers have confused the meanings of events and the reasons for those events, so that particular thematic roles (e.g., Agent, Patient) are taken to be causes or consequences of those events by definition. In accord with Garvey and Caramazza (1974), we propose that implicit causality and consequentiality are probabilistic notions that are straightforwardly related to the explicit causes and consequences of events and are analogous to other biases investigated in psycholinguistics.
  • Pijnacker, J., Geurts, B., Van Lambalgen, M., Kan, C. C., Buitelaar, J. K., & Hagoort, P. (2009). Defeasible reasoning in high-functioning adults with autism: Evidence for impaired exception-handling. Neuropsychologia, 47, 644-651. doi:10.1016/j.neuropsychologia.2008.11.011.

    Abstract

    While autism is one of the most intensively researched psychiatric disorders, little is known about the reasoning skills of people with autism. The focus of this study was on defeasible inferences, that is, inferences that can be revised in the light of new information. We used a behavioral task to investigate (a) conditional reasoning and (b) the suppression of conditional inferences in high-functioning adults with autism. In the suppression task a possible exception was made salient which could prevent a conclusion from being drawn. We predicted that the autism group would have difficulties dealing with such exceptions because they require mental flexibility to adjust to the context, which is often impaired in autism. The findings confirm our hypothesis that high-functioning adults with autism have a specific difficulty with exception-handling during reasoning. It is suggested that defeasible reasoning is also involved in other cognitive domains. Implications for neural underpinnings of reasoning and autism are discussed.
  • Pijnacker, J., Geurts, B., Van Lambalgen, M., Buitelaar, J., & Hagoort, P. (2010). Exceptions and anomalies: An ERP study on context sensitivity in autism. Neuropsychologia, 48, 2940-2951. doi:10.1016/j.neuropsychologia.2010.06.003.

    Abstract

    Several studies have demonstrated that people with ASD and intact language skills still have problems processing linguistic information in context. Given this evidence for reduced sensitivity to linguistic context, the question arises how contextual information is actually processed by people with ASD. In this study, we used event-related brain potentials (ERPs) to examine context sensitivity in high-functioning adults with autistic disorder (HFA) and Asperger syndrome at two levels: at the level of sentence processing and at the level of solving reasoning problems. We found that sentence context as well as reasoning context had an immediate ERP effect in adults with Asperger syndrome, as in matched controls. Both groups showed a typical N400 effect and a late positive component for the sentence conditions, and a sustained negativity for the reasoning conditions. In contrast, the HFA group demonstrated neither an N400 effect nor a sustained negativity. However, the HFA group showed a late positive component which was larger for semantically anomalous sentences than congruent sentences. Because sentence context had a modulating effect in a later phase, semantic integration is perhaps less automatic in HFA, and presumably more elaborate processes are needed to arrive at a sentence interpretation.
  • Pijnacker, J., Hagoort, P., Buitelaar, J., Teunisse, J.-P., & Geurts, B. (2009). Pragmatic inferences in high-functioning adults with autism and Asperger syndrome. Journal of Autism and Developmental Disorders, 39(4), 607-618. doi:10.1007/s10803-008-0661-8.

    Abstract

    Although people with autism spectrum disorders (ASD) often have severe problems with pragmatic aspects of language, little is known about their pragmatic reasoning. We carried out a behavioral study on high-functioning adults with autistic disorder (n = 11) and Asperger syndrome (n = 17) and matched controls (n = 28) to investigate whether they are capable of deriving scalar implicatures, which are generally considered to be pragmatic inferences. Participants were presented with underinformative sentences like “Some sparrows are birds”. This sentence is logically true, but pragmatically inappropriate if the scalar implicature “Not all sparrows are birds” is derived. The present findings indicate that the combined ASD group was just as likely as controls to derive scalar implicatures, yet there was a difference between participants with autistic disorder and Asperger syndrome, suggesting a potential differentiation between these disorders in pragmatic reasoning. Moreover, our results suggest that verbal intelligence is a constraint for task performance in autistic disorder but not in Asperger syndrome.
  • Pillas, D., Hoggart, C. J., Evans, D. M., O'Reilly, P. F., Sipilä, K., Lähdesmäki, R., Millwood, I. Y., Kaakinen, M., Netuveli, G., Blane, D., Charoen, P., Sovio, U., Pouta, A., Freimer, N., Hartikainen, A.-L., Laitinen, J., Vaara, S., Glaser, B., Crawford, P., Timpson, N. J., Ring, S. M., Deng, G., Zhang, W., McCarthy, M. I., Deloukas, P., Peltonen, L., Elliott, P., Coin, L. J. M., Smith, G. D., & Jarvelin, M.-R. (2010). Genome-wide association study reveals multiple loci associated with primary tooth development during infancy. PLoS Genetics, 6(2): e1000856. doi:10.1371/journal.pgen.1000856.

    Abstract

    Tooth development is a highly heritable process which relates to other growth and developmental processes, and which interacts with the development of the entire craniofacial complex. Abnormalities of tooth development are common, with tooth agenesis being the most common developmental anomaly in humans. We performed a genome-wide association study of time to first tooth eruption and number of teeth at one year in 4,564 individuals from the 1966 Northern Finland Birth Cohort (NFBC1966) and 1,518 individuals from the Avon Longitudinal Study of Parents and Children (ALSPAC). We identified 5 loci at P<5x10(-8), and 5 with suggestive association (P<5x10(-6)). The loci included several genes with links to tooth and other organ development (KCNJ2, EDA, HOXB2, RAD51L1, IGF2BP1, HMGA2, MSRB3). Genes at four of the identified loci are implicated in the development of cancer. A variant within the HOXB gene cluster was associated with occlusion defects requiring orthodontic treatment by age 31 years.
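
    For context, the P<5x10(-8) criterion used above is the conventional genome-wide significance threshold: it amounts to a Bonferroni correction of alpha = 0.05 for roughly one million independent common-variant tests,

      \[
      P_{\text{genome-wide}} \approx \frac{0.05}{1{,}000{,}000} = 5 \times 10^{-8},
      \]

    with the P<5x10(-6) cutoff playing the usual role of a "suggestive" threshold, as in the abstract.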
  • Poletiek, F. H., & Van Schijndel, T. J. P. (2009). Stimulus set size and statistical coverage of the grammar in artificial grammar learning. Psychonomic Bulletin & Review, 16(6), 1058-1064. doi:10.3758/PBR.16.6.1058.

    Abstract

    Adults and children acquire knowledge of the structure of their environment on the basis of repeated exposure to samples of structured stimuli. In the study of inductive learning, a straightforward issue is how much sample information is needed to learn the structure. The present study distinguishes between two measures for the amount of information in the sample: set size and the extent to which the set of exemplars statistically covers the underlying structure. In an artificial grammar learning experiment, learning was affected by the sample’s statistical coverage of the grammar, but not by its mere size. Our result suggests an alternative explanation of the set size effects on learning found in previous studies (McAndrews & Moscovitch, 1985; Meulemans & Van der Linden, 1997), because, as we argue, set size was confounded with statistical coverage in these studies.
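
    To illustrate the set size vs. statistical coverage distinction in code: the Python sketch below is an illustrative operationalization, not the coverage measure actually used in the study. It treats coverage as the fraction of a grammar's legal transitions attested in the sample, and shows two samples of identical size with different coverage.

      def coverage(sample, grammar_bigrams):
          # Fraction of the grammar's legal bigrams attested in the sample.
          attested = {pair for s in sample for pair in zip(s, s[1:])}
          return len(attested & grammar_bigrams) / len(grammar_bigrams)

      # Hypothetical grammar: the set of legal symbol-to-symbol transitions.
      grammar = {("M", "S"), ("S", "V"), ("S", "X"), ("V", "X"),
                 ("V", "S"), ("X", "X"), ("X", "V"), ("X", "M")}

      sample_a = ["MSVXX", "VXMSX"]  # two exemplars, six transitions attested
      sample_b = ["MSVXX", "MSVXX"]  # two exemplars, four transitions attested
      print(coverage(sample_a, grammar))  # 0.75
      print(coverage(sample_b, grammar))  # 0.5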
  • Poletiek, F. H. (2009). Popper's Severity of Test as an intuitive probabilistic model of hypothesis testing. Behavioral and Brain Sciences, 32(1), 99-100. doi:10.1017/S0140525X09000454.
  • Poletiek, F. H., & Wolters, G. (2009). What is learned about fragments in artificial grammar learning? A transitional probabilities approach. Quarterly Journal of Experimental Psychology, 62(5), 868-876. doi:10.1080/17470210802511188.

    Abstract

    Learning local regularities in sequentially structured materials is typically assumed to be based on encoding of the frequencies of these regularities. We explore the view that transitional probabilities between elements of chunks, rather than frequencies of chunks, may be the primary factor in artificial grammar learning (AGL). The transitional probability model (TPM) that we propose is argued to provide an adaptive and parsimonious strategy for encoding local regularities in order to induce sequential structure from an input set of exemplars of the grammar. In a variant of the AGL procedure, in which participants estimated the frequencies of bigrams occurring in a set of exemplars they had been exposed to previously, participants were shown to be more sensitive to local transitional probability information than to mere pattern frequencies.
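
    The contrast between chunk frequency and transitional probability drawn above can be made concrete with a small sketch (Python; the exemplar strings are invented for illustration and are not the study's materials):

      from collections import Counter

      def bigram_stats(exemplars):
          """Count bigram frequencies and estimate forward transitional
          probabilities P(b | a) = count(ab) / count(a in first position)."""
          bigrams, firsts = Counter(), Counter()
          for s in exemplars:
              for a, b in zip(s, s[1:]):
                  bigrams[(a, b)] += 1
                  firsts[a] += 1
          tp = {pair: n / firsts[pair[0]] for pair, n in bigrams.items()}
          return bigrams, tp

      exemplars = ["VXXVS", "MSXXV", "VXXMS", "MSVXX"]
      freq, tp = bigram_stats(exemplars)
      print(freq[("X", "X")], round(tp[("X", "X")], 2))  # 4 0.57: frequent chunk, TP < 1
      print(freq[("M", "S")], tp[("M", "S")])            # 3 1.0: rarer chunk, fully predictable

    On a transitional probability account, the second bigram should seem more grammar-like despite its lower frequency.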
  • St Pourcain, B., Wang, K., Glessner, J. T., Golding, J., Steer, C., Ring, S. M., Skuse, D. H., Grant, S. F. A., Hakonarson, H., & Davey Smith, G. (2010). Association between a high-risk autism locus on 5p14 and social communication spectrum phenotypes in the general population. American Journal of Psychiatry, 167(11), 1364-1372. doi:10.1176/appi.ajp.2010.09121789.

    Abstract

    Objective: Recent genome-wide analysis identified a genetic variant on 5p14.1 (rs4307059), which is associated with risk for autism spectrum disorder. This study investigated whether rs4307059 also operates as a quantitative trait locus underlying a broader autism phenotype in the general population, focusing specifically on the social communication aspect of the spectrum. Method: Study participants were 7,313 children from the Avon Longitudinal Study of Parents and Children. Single-trait and joint-trait genotype associations were investigated for 29 measures related to language and communication, verbal intelligence, social interaction, and behavioral adjustment, assessed between ages 3 and 12 years. Analyses were performed in one-sided or directed mode and adjusted for multiple testing, trait interrelatedness, and random genotype dropout. Results: Single phenotype analyses showed that an increased load of rs4307059 risk allele is associated with stereotyped conversation and lower pragmatic communication skills, as measured by the Children's Communication Checklist (at a mean age of 9.7 years). In addition a trend toward a higher frequency of identification of special educational needs (at a mean age of 11.8 years) was observed. Variation at rs4307059 was also associated with the phenotypic profile of studied traits. This joint signal was fully explained neither by single-trait associations nor by overall behavioral adjustment problems but suggested a combined effect, which manifested through multiple sub-threshold social, communicative, and cognitive impairments. Conclusions: Our results suggest that common variation at 5p14.1 is associated with social communication spectrum phenotypes in the general population and support the role of rs4307059 as a quantitative trait locus for autism spectrum disorder.
  • Powlesland, A. S., Hitchen, P. G., Parry, S., Graham, S. A., Barrio, M. M., Elola, M. T., Mordoh, J., Dell, A., Drickamer, K., & Taylor, M. E. (2009). Targeted glycoproteomic identification of cancer cell glycosylation. Glycobiology, 19, 899-909. doi:10.1093/glycob/cwp065.

    Abstract

    GalMBP is a fragment of serum mannose-binding protein that has been modified to create a probe for galactose-containing ligands. Glycan array screening demonstrated that the carbohydrate-recognition domain of GalMBP selectively binds common groups of tumor-associated glycans, including Lewis-type structures and T antigen, suggesting that engineered glycan-binding proteins such as GalMBP represent novel tools for the characterization of glycoproteins bearing tumor-associated glycans. Blotting of cell extracts and membranes from MCF7 breast cancer cells with radiolabeled GalMBP was used to demonstrate that it binds to a selected set of high molecular weight glycoproteins that could be purified from MCF7 cells on an affinity column constructed with GalMBP. Proteomic and glycomic analysis of these glycoproteins by mass spectrometry showed that they are forms of CD98hc that bear glycans displaying heavily fucosylated termini, including Lewis(x) and Lewis(y) structures. The pool of ligands was found to include the target ligands for anti-CD15 antibodies, which are commonly used to detect Lewis(x) antigen on tumors, and for the endothelial scavenger receptor C-type lectin, which may be involved in tumor metastasis through interactions with this antigen. A survey of additional breast cancer cell lines reveals that there is wide variation in the types of glycosylation that lead to binding of GalMBP. Higher levels of binding are associated either with the presence of outer-arm fucosylated structures carried on a variety of different cell surface glycoproteins or with the presence of high levels of the mucin MUC1 bearing T antigen.

  • Praamstra, P., Plat, E. M., Meyer, A. S., & Horstink, M. W. I. M. (1999). Motor cortex activation in Parkinson's disease: Dissociation of electrocortical and peripheral measures of response generation. Movement Disorders, 14, 790-799. doi:10.1002/1531-8257(199909)14:5<790:AID-MDS1011>3.0.CO;2-A.

    Abstract

    This study investigated characteristics of motor cortex activation and response generation in Parkinson's disease with measures of electrocortical activity (lateralized readiness potential [LRP]), electromyographic activity (EMG), and isometric force in a noise-compatibility task. When presented with stimuli consisting of incompatible target and distracter elements asking for responses of opposite hands, patients were less able than control subjects to suppress activation of the motor cortex controlling the wrong response hand. This was manifested in the pattern of reaction times and in an incorrect lateralization of the LRP. Onset latency and rise time of the LRP did not differ between patients and control subjects, but EMG and response force developed more slowly in patients. Moreover, in patients but not in control subjects, the rate of development of EMG and response force decreased as reaction time increased. We hypothesize that this dissociation between electrocortical activity and peripheral measures in Parkinson's disease is the result of changes in motor cortex function that alter the relation between signal-related and movement-related neural activity in the motor cortex. In the LRP, this altered balance may obscure an abnormal development of movement-related neural activity.
  • Prieto, P., & Torreira, F. (2007). The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics, 35, 473-500. doi:10.1016/j.wocn.2007.01.001.

    Abstract

    This paper addresses the validity of the segmental anchoring hypothesis for tonal landmarks (henceforth, SAH) as described in recent work by (among others) Ladd, Faulkner, D., Faulkner, H., & Schepman [1999. Constant ‘segmental’ anchoring of f0 movements under changes in speech rate. Journal of the Acoustical Society of America, 106, 1543–1554], Ladd [2003. Phonological conditioning of f0 target alignment. In: M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the XVth international congress of phonetic sciences, Vol. 1, (pp. 249–252). Barcelona: Causal Productions; in press. Segmental anchoring of pitch movements: Autosegmental association or gestural coordination? Italian Journal of Linguistics, 18 (1)]. The alignment of LH* prenuclear peaks with segmental landmarks in controlled speech materials in Peninsular Spanish is analyzed as a function of syllable structure type (open, closed) of the accented syllable, segmental composition, and speaking rate. Contrary to the predictions of the SAH, alignment was affected by syllable structure and speech rate in significant and consistent ways. In CV syllables, the peak was located around the end of the accented vowel, and in CVC syllables around the beginning-mid part of the sonorant coda, but still far from the syllable boundary. With respect to the effects of rate, peaks were located earlier in the syllable as speech rate decreased. The results suggest that the accent gestures under study are synchronized with the syllable unit. In general, the longer the syllable, the longer the rise time. Thus the fundamental idea of the anchoring hypothesis can be taken as still valid. On the other hand, the tonal alignment patterns reported here can be interpreted as the outcome of distinct modes of gestural coordination in syllable-initial vs. syllable-final position: gestures at syllable onsets appear to be more tightly coordinated than gestures at the end of syllables [Browman, C. P., & Goldstein, L.M. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219–252; Browman, C. P., & Goldstein, L. (1988). Some notes on syllable structure in articulatory phonology. Phonetica, 45, 140–155; (1992). Articulatory Phonology: An overview. Phonetica, 49, 155–180; Krakow (1999). Physiological organization of syllables: A review. Journal of Phonetics, 27, 23–54; among others]. Intergestural timing can thus provide a unifying explanation for (1) the contrasting behavior between the precise synchronization of L valleys with the onset of the syllable and the more variable timing of the end of the f0 rise, and, more specifically, for (2) the right-hand tonal pressure effects and ‘undershoot’ patterns displayed by peaks at the ends of syllables and other prosodic domains.
  • Protopapas, A., & Gerakaki, S. (2009). Development of processing stress diacritics in reading Greek. Scientific Studies of Reading, 13(6), 453-483. doi:10.1080/10888430903034788.

    Abstract

    In Greek orthography, stress position is marked with a diacritic. We investigated the developmental course of processing the stress diacritic in Grades 2 to 4. Ninety children read 108 pseudowords presented without or with a diacritic either in the same or in a different position relative to the source word. Half of the pseudowords resembled the words they were derived from. Results showed that lexical sources of stress assignment were active in Grade 2 and remained stronger than the diacritic through Grade 4. The effect of the diacritic increased more rapidly and approached the lexical effect with increasing grade. In a second experiment, 90 children read 54 words and 54 pseudowords. The pattern of results for words was similar to that for nonwords, suggesting that findings regarding stress assignment using nonwords may generalize to word reading. Decoding of the diacritic does not appear to be the preferred option for developing readers.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2007). Sources of information for stress assignment in reading Greek. Applied Psycholinguistics, 28(4), 695-720. doi:10.1017/S0142716407070373.

    Abstract

    To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual–orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on stress assignment and have provided evidence for a default pattern. Here we report two experiments with adult readers, in which we disentangle and quantify the effects of these three potential sources using nonword materials. Stimuli either resembled or did not resemble real words, to manipulate availability of lexical information; and they were presented with or without a diacritic, in a word-congruent or word-incongruent position, to contrast the relative importance of the three sources. Dual-task conditions, in which cognitive load during nonword reading was increased with phonological retention carrying a metrical pattern different from the default, did not support the hypothesis that the default arises from cumulative lexical activation in working memory.
  • Puccini, D., Hassemer, M., Salomo, D., & Liszkowski, U. (2010). The type of shared activity shapes caregiver and infant communication. Gesture, 10(2/3), 279-297. doi:10.1075/gest.10.2-3.08puc.

    Abstract

    For the beginning language learner, communicative input is not based on linguistic codes alone. This study investigated two extralinguistic factors which are important for infants’ language development: the type of ongoing shared activity and non-verbal, deictic gestures. The natural interactions of 39 caregivers and their 12-month-old infants were recorded in two semi-natural contexts: a free play situation based on action and manipulation of objects, and a situation based on regard of objects, broadly analogous to an exhibit. Results show that the type of shared activity structures both caregivers’ language usage and caregivers’ and infants’ gesture usage. Further, there is a specific pattern with regard to how caregivers integrate speech with particular deictic gesture types. The findings demonstrate a pervasive influence of shared activities on human communication, even before language has emerged. The type of shared activity and caregivers’ systematic integration of specific forms of deictic gestures with language provide infants with a multimodal scaffold for a usage-based acquisition of language.
  • Pylkkänen, L., Martin, A. E., McElree, B., & Smart, A. (2009). The Anterior Midline Field: Coercion or decision making? Brain and Language, 108(3), 184-190. doi:10.1016/j.bandl.2008.06.006.

    Abstract

    To study the neural bases of semantic composition in language processing without confounds from syntactic composition, recent magnetoencephalography (MEG) studies have investigated the processing of constructions that exhibit some type of syntax-semantics mismatch. The most studied case of such a mismatch is complement coercion; expressions such as the author began the book, where an entity-denoting noun phrase is coerced into an eventive meaning in order to match the semantic properties of the event-selecting verb (e.g., ‘the author began reading/writing the book’). These expressions have been found to elicit increased activity in the Anterior Midline Field (AMF), an MEG component elicited at frontomedial sensors at ∼400 ms after the onset of the coercing noun [Pylkkänen, L., & McElree, B. (2007). An MEG study of silent meaning. Journal of Cognitive Neuroscience, 19, 11]. Thus, the AMF constitutes a potential neural correlate of coercion. However, the AMF was generated in ventromedial prefrontal regions, which are heavily associated with decision-making. This raises the possibility that, instead of semantic processing, the AMF effect may have been related to the experimental task, which was a sensicality judgment. We tested this hypothesis by assessing the effect of coercion when subjects were simply reading for comprehension, without a decision-task. Additionally, we investigated coercion in an adjectival rather than a verbal environment to further generalize the findings. Our results show that an AMF effect of coercion is elicited without a decision-task and that the effect also extends to this novel syntactic environment. We conclude that in addition to its role in non-linguistic higher cognition, ventromedial prefrontal regions contribute to the resolution of syntax-semantics mismatches in language processing.
  • Pyykkönen, P., & Järvikivi, J. (2010). Activation and persistence of implicit causality information in spoken language comprehension. Experimental Psychology, 57, 5-16. doi:10.1027/1618-3169/a000002.

    Abstract

    A visual world eye-tracking study investigated the activation and persistence of implicit causality information in spoken language comprehension. We showed that people infer the implicit causality of verbs as soon as they encounter such verbs in discourse, as is predicted by proponents of the immediate focusing account (Greene & McKoon, 1995; Koornneef & Van Berkum, 2006; Van Berkum, Koornneef, Otten, & Nieuwland, 2007). Interestingly, we observed activation of implicit causality information even before people encountered the causal conjunction. However, while implicit causality information was persistent as the discourse unfolded, it did not have a privileged role as a focusing cue immediately at the ambiguous pronoun when people were resolving its antecedent. Instead, our study indicated that implicit causality does not affect all referents to the same extent; rather, it interacts with other cues in the discourse, especially when one of the referents is already prominently in focus.
  • Pyykkönen, P., Matthews, D., & Järvikivi, J. (2010). Three-year-olds are sensitive to semantic prominence during online spoken language comprehension: A visual world study of pronoun resolution. Language and Cognitive Processes, 25, 115-129. doi:10.1080/01690960902944014.

    Abstract

    Recent evidence from adult pronoun comprehension suggests that semantic factors such as verb transitivity affect referent salience and thereby anaphora resolution. We tested whether the same semantic factors influence pronoun comprehension in young children. In a visual world study, 3-year-olds heard stories that began with a sentence containing either a high or a low transitivity verb. Looking behaviour to pictures depicting the subject and object of this sentence was recorded as children listened to a subsequent sentence containing a pronoun. Children showed a stronger preference to look to the subject as opposed to the object antecedent in the low transitivity condition. In addition there were general preferences (1) to look to the subject in both conditions and (2) to look more at both potential antecedents in the high transitivity condition. This suggests that children, like adults, are affected by semantic factors, specifically semantic prominence, when interpreting anaphoric pronouns.
  • Qin, S., Piekema, C., Petersson, K. M., Han, B., Luo, J., & Fernández, G. (2007). Probing the transformation of discontinuous associations into episodic memory: An event-related fMRI study. NeuroImage, 38(1), 212-222. doi:10.1016/j.neuroimage.2007.07.020.

    Abstract

    Using event-related functional magnetic resonance imaging, we identified brain regions involved in storing associations of events discontinuous in time into long-term memory. Participants were scanned while memorizing item-triplets including simultaneous and discontinuous associations. Subsequent memory tests showed that participants remembered both types of associations equally well. First, by constructing the contrast between the subsequent memory effects for discontinuous associations and simultaneous associations, we identified the left posterior parahippocampal region, dorsolateral prefrontal cortex, the basal ganglia, posterior midline structures, and the middle temporal gyrus as being specifically involved in transforming discontinuous associations into episodic memory. Second, we replicated that the prefrontal cortex and the medial temporal lobe (MTL), especially the hippocampus, are involved in associative memory formation in general. Our findings provide evidence for distinct neural operation(s) that support the binding and storing of discontinuous associations in memory. We suggest that top-down signals from the prefrontal cortex and MTL may trigger reactivation of the internal representation of the first event in posterior midline structures, thus allowing it to be associated with the second event. The dorsolateral prefrontal cortex together with the basal ganglia may support this encoding operation by executive and binding processes within working memory, and the posterior parahippocampal region may play a role in binding and memory formation.
  • Qin, S., Rijpkema, M., Tendolkar, I., Piekema, C., Hermans, E. J., Binder, M., Petersson, K. M., Luo, J., & Fernández, G. (2009). Dissecting medial temporal lobe contributions to item and associative memory formation. NeuroImage, 46, 874-881. doi:10.1016/j.neuroimage.2009.02.039.

    Abstract

    A fundamental and intensively discussed question is whether medial temporal lobe (MTL) processes that lead to non-associative item memories differ in their anatomical substrate from processes underlying associative memory formation. Using event-related functional magnetic resonance imaging, we implemented a novel design to dissociate brain activity related to item and associative memory formation not only by subsequent memory performance and anatomy but also in time, because the two constituents of each pair to be memorized were presented sequentially with an intra-pair delay of several seconds. Furthermore, the design enabled us to reduce potential differences in memory strength between item and associative memory by increasing task difficulty in the item recognition memory test. Confidence ratings for correct item recognition for both constituents did not differ between trials in which only item memory was correct and trials in which item and associative memory were correct. Specific subsequent memory analyses for item and associative memory formation revealed brain activity that appears selectively related to item memory formation in the posterior inferior temporal, posterior parahippocampal, and perirhinal cortices. In contrast, hippocampal and inferior prefrontal activity predicted successful retrieval of newly formed inter-item associations. Our findings therefore suggest that different MTL subregions indeed play distinct roles in the formation of item memory and inter-item associative memory as expected by several dual process models of the MTL memory system.
  • Reesink, G., Singer, R., & Dunn, M. (2009). Explaining the linguistic diversity of Sahul using population models. PLoS Biology, 7(11), e1000241. doi:10.1371/journal.pbio.1000241.

    Abstract

    The region of the ancient Sahul continent (present day Australia and New Guinea, and surrounding islands) is home to extreme linguistic diversity. Even apart from the huge Austronesian language family, which spread into the area after the breakup of the Sahul continent in the Holocene, there are hundreds of languages from many apparently unrelated families. On each of the subcontinents, the generally accepted classification recognizes one large, widespread family and a number of unrelatable smaller families. If these language families are related to each other, it is at a depth which is inaccessible to standard linguistic methods. We have inferred the history of structural characteristics of these languages under an admixture model, using a Bayesian algorithm originally developed to discover populations on the basis of recombining genetic markers. This analysis identifies 10 ancestral language populations, some of which can be identified with clearly defined phylogenetic groups. The results also show traces of early dispersals, including hints at ancient connections between Australian languages and some Papuan groups (long hypothesized, never before demonstrated). Systematic language contact effects between members of big phylogenetic groups are also detected, which can in some cases be identified with a diffusional or substrate signal. Most interestingly, however, there remains striking evidence of a phylogenetic signal, with many languages showing negligible amounts of admixture.
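    To make the admixture idea concrete, here is a minimal, hypothetical sketch (not the authors' pipeline, which applied a STRUCTURE-like Bayesian algorithm to structural features of real languages): each language is coded as a binary feature vector, and an admixture model estimates, for each language, mixing proportions over K assumed ancestral profiles. Latent Dirichlet allocation serves below purely as a stand-in admixture model; the languages, features, and counts are invented.

    ```python
    # Hypothetical admixture-style analysis over binary structural features.
    # Stand-in only: the paper used a STRUCTURE-like Bayesian algorithm;
    # here LDA (also an admixture model) plays that role, on invented data.
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(42)
    n_languages, n_features = 20, 40
    # Rows: languages; columns: binary typological features
    # (e.g., "has tone", "verb-final order") -- invented here.
    X = rng.integers(0, 2, size=(n_languages, n_features))

    K = 3  # assumed number of ancestral language populations
    lda = LatentDirichletAllocation(n_components=K, random_state=0)
    theta = lda.fit_transform(X)  # per-language admixture proportions

    for i, props in enumerate(theta):
        print(f"language {i:2d}: admixture {np.round(props, 2)}")
    ```

    A language dominated by one component would correspond to a clear phylogenetic affiliation; more even proportions would correspond to the contact or substrate signals the abstract describes.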
  • Reesink, G. (2010). The Manambu language of East Sepik, Papua New Guinea [Book review]. Studies in Language, 34(1), 226-233. doi:10.1075/sl.34.1.13ree.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2010). Early use of phonetic information in spoken word recognition: Lexical stress drives eye movements immediately. Quarterly Journal of Experimental Psychology, 63(4), 772-783. doi:10.1080/17470210903104412.

    Abstract

    For optimal word recognition listeners should use all relevant acoustic information as soon as it becomes available. Using printed-word eye-tracking, we investigated when during word processing Dutch listeners use suprasegmental lexical stress information to recognize words. Fixations on targets such as 'OCtopus' (capitals indicate stress) were more frequent than fixations on segmentally overlapping but differently stressed competitors ('okTOber') before segmental information could disambiguate the words. Furthermore, prior to segmental disambiguation, initially stressed words were stronger lexical competitors than non-initially stressed words. Listeners recognize words by immediately using all relevant information in the speech signal.
  • Reis, A., Faísca, L., Mendonça, S., Ingvar, M., & Petersson, K. M. (2007). Semantic interference on a phonological task in illiterate subjects. Scandinavian Journal of Psychology, 48(1), 69-74. doi:10.1111/j.1467-9450.2006.00544.x.

    Abstract

    Previous research suggests that learning an alphabetic written language influences aspects of the auditory-verbal language system. In this study, we examined whether literacy influences the notion of words as phonological units independent of lexical semantics in literate and illiterate subjects. Subjects had to decide which item in a word or pseudoword pair was phonologically longest. By manipulating the relationship between referent size and phonological length in three word conditions (congruent, neutral, and incongruent) we could examine to what extent subjects focused on form rather than meaning of the stimulus material. Moreover, the pseudoword condition allowed us to examine global phonological awareness independent of lexical semantics. The results showed that literate subjects performed significantly better than illiterate subjects in the neutral and incongruent word conditions as well as in the pseudoword condition. The illiterate group performed least well in the incongruent condition and significantly better in the pseudoword condition than in the neutral and incongruent word conditions, suggesting that performance on phonological word-length comparisons depends on literacy. In addition, the results show that the illiterate participants are able to perceive and process phonological length, albeit less well than the literate subjects, when no semantic interference is present. In conclusion, the present results confirm and extend the finding that illiterate subjects are biased towards semantic-conceptual-pragmatic types of cognitive processing.
  • Richards, J. B., Waterworth, D., O'Rahilly, S., Hivert, M.-F., Loos, R. J. F., Perry, J. R. B., Tanaka, T., Timpson, N. J., Semple, R. K., Soranzo, N., Song, K., Rocha, N., Grundberg, E., Dupuis, J., Florez, J. C., Langenberg, C., Prokopenko, I., Saxena, R., Sladek, R., Aulchenko, Y., Evans, D., Waeber, G., Erdmann, J., Burnett, M.-S., Sattar, N., Devaney, J., Willenborg, C., Hingorani, A., Witteman, J. C. M., Vollenweider, P., Glaser, B., Hengstenberg, C., Ferrucci, L., Melzer, D., Stark, K., Deanfield, J., Winogradow, J., Grassl, M., Hall, A. S., Egan, J. M., Thompson, J. R., Ricketts, S. L., König, I. R., Reinhard, W., Grundy, S., Wichmann, H.-E., Barter, P., Mahley, R., Kesaniemi, Y. A., Rader, D. J., Reilly, M. P., Epstein, S. E., Stewart, A. F. R., Van Duijn, C. M., Schunkert, H., Burling, K., Deloukas, P., Pastinen, T., Samani, N. J., McPherson, R., Davey Smith, G., Frayling, T. M., Wareham, N. J., Meigs, J. B., Mooser, V., Spector, T. D., & Consortium, G. (2009). A genome-wide association study reveals variants in ARL15 that influence adiponectin levels. PLoS Genetics, 5(12): e1000768. doi:10.1371/journal.pgen.1000768.

    Abstract

    The adipocyte-derived protein adiponectin is highly heritable and inversely associated with risk of type 2 diabetes mellitus (T2D) and coronary heart disease (CHD). We meta-analyzed 3 genome-wide association studies for circulating adiponectin levels (n = 8,531) and sought validation of the lead single nucleotide polymorphisms (SNPs) in 5 additional cohorts (n = 6,202). Five SNPs were genome-wide significant in their relationship with adiponectin (P ≤ 5x10(-8)). We then tested whether these 5 SNPs were associated with risk of T2D and CHD using a Bonferroni-corrected threshold of P ≤ 0.011 to declare statistical significance for these disease associations. SNPs at the adiponectin-encoding ADIPOQ locus demonstrated the strongest associations with adiponectin levels (P-combined = 9.2x10(-19) for lead SNP, rs266717, n = 14,733). A novel variant in the ARL15 (ADP-ribosylation factor-like 15) gene was associated with lower circulating levels of adiponectin (rs4311394-G, P-combined = 2.9x10(-8), n = 14,733). This same risk allele at ARL15 was also associated with a higher risk of CHD (odds ratio [OR] = 1.12, P = 8.5x10(-6), n = 22,421) and, more nominally, with an increased risk of T2D (OR = 1.11, P = 3.2x10(-3), n = 10,128) and several metabolic traits. Expression studies in humans indicated that ARL15 is well-expressed in skeletal muscle. These findings identify a novel protein, ARL15, which influences circulating adiponectin levels and may impact upon CHD risk.
  • Rietveld, T., Van Hout, R., & Ernestus, M. (2004). Pitfalls in corpus research. Computers and the Humanities, 38(4), 343-362. doi:10.1007/s10579-004-1919-1.

    Abstract

    This paper discusses some pitfalls in corpus research and suggests solutions on the basis of examples and computer simulations. We first address reliability problems in language transcriptions, agreement between transcribers, and how disagreements can be dealt with. We then show that the frequencies of occurrence obtained from a corpus cannot always be analyzed with the traditional χ² test, as corpus data are often neither sequentially independent nor unit independent. Next, we stress the relevance of the power of statistical tests, and the sizes of statistically significant effects. Finally, we point out that a t-test based on log odds often provides a better alternative to a χ² analysis based on frequency counts.
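    As a concrete (and purely hypothetical) illustration of the paper's final recommendation: instead of pooling counts across a corpus into a single χ² test, one can compute the log odds of an outcome separately per independent unit (e.g., per speaker) and compare groups with a t-test. The counts and smoothing constant below are invented for the sketch.

    ```python
    # Sketch of the recommended alternative: a t-test on per-unit log odds
    # rather than a chi-squared test on pooled frequency counts.
    # Counts are invented; 0.5 smoothing avoids log(0) for empty cells.
    import numpy as np
    from scipy import stats

    # Per-speaker (successes, failures) counts in two corpora.
    corpus_a = [(12, 8), (5, 15), (9, 11), (14, 6), (7, 13)]
    corpus_b = [(4, 16), (6, 14), (3, 17), (8, 12), (5, 15)]

    def log_odds(pairs, smooth=0.5):
        """Per-unit smoothed log odds (Haldane-Anscombe correction)."""
        return [np.log((s + smooth) / (f + smooth)) for s, f in pairs]

    # Treating speakers (not tokens) as the unit of analysis respects the
    # unit dependence that invalidates the pooled chi-squared test.
    t, p = stats.ttest_ind(log_odds(corpus_a), log_odds(corpus_b))
    print(f"t = {t:.2f}, p = {p:.3f}")
    ```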
  • Ringersma, J., Kastens, K., Tschida, U., & Van Berkum, J. J. A. (2010). A principled approach to online publication listings and scientific resource sharing. The Code4Lib Journal, 2010(9), 2520.

    Abstract

    The Max Planck Institute (MPI) for Psycholinguistics has developed a service to manage and present the scholarly output of its researchers. The PubMan database manages publication metadata and full texts of publications published by its scholars. All relevant information regarding a researcher’s work is brought together in this database, including supplementary materials and links to the MPI database for primary research data. The PubMan metadata is harvested into the MPI website CMS (Plone). The system developed for the creation of the publication lists allows the researcher to create a selection of the harvested data in a variety of formats.
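    The abstract does not name the harvesting protocol, but repository-to-CMS harvesting of this kind is commonly done over OAI-PMH; the sketch below is illustrative only, and the endpoint URL is hypothetical rather than PubMan's actual interface.

    ```python
    # Illustrative OAI-PMH harvest of publication metadata (hypothetical
    # endpoint; the abstract does not specify PubMan's actual interface).
    from sickle import Sickle  # third-party OAI-PMH client: pip install sickle

    sickle = Sickle("https://example.org/oai/provider")  # hypothetical URL
    records = sickle.ListRecords(metadataPrefix="oai_dc")

    # Each record exposes Dublin Core fields as a dict of lists.
    for record in records:
        md = record.metadata
        title = md.get("title", ["(untitled)"])[0]
        creators = "; ".join(md.get("creator", []))
        print(f"{creators}: {title}")
    ```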
  • Ringersma, J., Zinn, C., & Koenig, A. (2010). Eureka! User friendly access to the MPI linguistic data archive. SDV - Sprache und Datenverarbeitung/International Journal for Language Data Processing. [Special issue on Usability aspects of hypermedia systems], 34(1), 67-79.

    Abstract

    The MPI archive hosts a rich and diverse set of linguistic resources, containing some 300,000 audio, video and text resources, which are described by some 100,000 metadata files. New data is ingested on a daily basis, and there is an increasing need to facilitate easy access for both expert and novice users. In this paper, we describe various tools that help users to view all archived content: the IMDI Browser, providing metadata-based access through structured tree navigation and search; a faceted browser where users select from a few distinctive metadata fields (facets) to find the resource(s) they need; a Google Earth overlay where resources can be located via geographic reference; purpose-built web portals giving pre-fabricated access to a well-defined part of the archive; lexicon-based entry points to parts of the archive where browsing a lexicon gives access to non-linguistic material; and finally, an ontology-based approach where lexical spaces are complemented with conceptual ones to give a more structured extra-linguistic view of the languages and cultures the archive helps to document.
  • Ringersma, J., & Kemps-Snijders, M. (2010). Reaction to the LEXUS review in the LD&C, Vol.3, No 2. Language Documentation & Conservation, 4(2), 75-77. Retrieved from http://hdl.handle.net/10125/4469.

    Abstract

    This technology review gives an overview of LEXUS, the MPI online lexicon tool, and its new functionalities. It is a reaction to a review by Kristina Kotcheva in Language Documentation & Conservation 3(2).
  • Roberts, L., Marinis, T., Felser, C., & Clahsen, H. (2007). Antecedent priming at trace positions in children’s sentence processing. Journal of Psycholinguistic Research, 36(2), 175-188. doi:10.1007/s10936-006-9038-3.

    Abstract

    The present study examines whether children reactivate a moved constituent at its gap position and how children’s more limited working memory span affects the way they process filler-gap dependencies. Forty-six 5- to 7-year-old children and 54 adult controls participated in a cross-modal picture priming experiment and underwent a standardized working memory test. The results revealed a statistically significant interaction between the participants’ working memory span and antecedent reactivation: High-span children (n = 19) and high-span adults (n = 22) showed evidence of antecedent priming at the gap site, while for low-span children and adults, there was no such effect. The antecedent priming effect in the high-span participants indicates that in both children and adults, dislocated arguments access their antecedents at gap positions. The absence of an antecedent reactivation effect in the low-span participants could mean that these participants required more time to integrate the dislocated constituent and reactivated the filler later during the sentence.
  • Roberts, L. (2007). Investigating real-time sentence processing in the second language. Stem-, Spraak- en Taalpathologie, 15, 115-127.

    Abstract

    Second language (L2) acquisition researchers have always been concerned with what L2 learners know about the grammar of the target language, but more recently there has been growing interest in how L2 learners put this knowledge to use in real-time sentence comprehension. In order to investigate real-time L2 sentence processing, the types of constructions studied and the methods used are often borrowed from the field of monolingual processing, but the overall issues are familiar from traditional L2 acquisition research. These cover questions relating to L2 learners’ native-likeness, whether or not L1 transfer is in evidence, and how individual differences such as proficiency and language experience might have an effect. The aim of this paper is to provide, for those unfamiliar with the field, an overview of the findings of a selection of behavioral studies that have investigated such questions, and to offer a picture of how L2 learners and bilinguals may process sentences in real time.
  • Roelofs, A. (2004). Seriality of phonological encoding in naming objects and reading their names. Memory & Cognition, 32(2), 212-222.

    Abstract

    There is a remarkable lack of research bringing together the literatures on oral reading and speaking. As concerns phonological encoding, both models of reading and speaking assume a process of segmental spellout for words, which is followed by serial prosodification in models of speaking (e.g., Levelt, Roelofs, & Meyer, 1999). Thus, a natural place to merge models of reading and speaking would be at the level of segmental spellout. This view predicts similar seriality effects in reading and object naming. Experiment 1 showed that the seriality of encoding inside a syllable revealed in previous studies of speaking is observed for both naming objects and reading their names. Experiment 2 showed that both object naming and reading exhibit the seriality of the encoding of successive syllables previously observed for speaking. Experiment 3 showed that the seriality is also observed when object naming and reading trials are mixed rather than tested separately, as in the first two experiments. These results suggest that a serial phonological encoding mechanism is shared between naming objects and reading their names.
  • Roelofs, A. (2007). On the modelling of spoken word planning: Rejoinder to La Heij, Starreveld, and Kuipers (2007). Language and Cognitive Processes, 22(8), 1281-1286. doi:10.1080/01690960701462291.

    Abstract

    The author contests several claims of La Heij, Starreveld, and Kuipers (this issue) concerning the modelling of spoken word planning. The claims are about the relevance of error findings, the interaction between semantic and phonological factors, the explanation of word-word findings, the semantic relatedness paradox, and production rules.
  • Roelofs, A. (2004). Error biases in spoken word planning and monitoring by aphasic and nonaphasic speakers: Comment on Rapp and Goldrick, 2000. Psychological Review, 111(2), 561-572. doi:10.1037/0033-295X.111.2.561.

    Abstract

    B. Rapp and M. Goldrick (2000) claimed that the lexical and mixed error biases in picture naming by aphasic and nonaphasic speakers argue against models that assume a feedforward-only relationship between lexical items and their sounds in spoken word production. The author contests this claim by showing that a feedforward-only model like WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b) exhibits the error biases in word planning and self-monitoring. Furthermore, it is argued that extant feedback accounts of the error biases and relevant chronometric effects are incompatible. WEAVER++ simulations with self-monitoring revealed that this model accounts for the chronometric data, the error biases, and the influence of the impairment locus in aphasic speakers.
  • Roelofs, A. (2007). A critique of simple name-retrieval models of spoken word planning. Language and Cognitive Processes, 22(8), 1237-1260. doi:10.1080/01690960701461582.

    Abstract

    Simple name-retrieval models of spoken word planning (Bloem & La Heij, 2003; Starreveld & La Heij, 1996) maintain (1) that there are two levels in word planning, a conceptual and a lexical phonological level, and (2) that planning a word in both object naming and oral reading involves the selection of a lexical phonological representation. Here, the name-retrieval models are compared to more complex models with respect to their ability to account for relevant data. It appears that the name-retrieval models cannot easily account for several relevant findings, including some speech error biases, types of morpheme errors, and context effects on the latencies of responding to pictures and words. New analyses of the latency distributions in previous studies also pose a challenge. More complex models account for all these findings. It is concluded that the name-retrieval models are too simple and that the greater complexity of the other models is warranted.
  • Roelofs, A. (2004). Comprehension-based versus production-internal feedback in planning spoken words: A rejoinder to Rapp and Goldrick, 2004. Psychological Review, 111(2), 579-580. doi:10.1037/0033-295X.111.2.579.

    Abstract

    WEAVER++ has no backward links in its form-production network and yet is able to explain the lexical and mixed error biases and the mixed distractor latency effect. This refutes the claim of B. Rapp and M. Goldrick (2000) that these findings specifically support production-internal feedback. Whether their restricted interaction account model can also provide a unified account of the error biases and latency effect remains to be shown.
  • Roelofs, A. (2007). Attention and gaze control in picture naming, word reading, and word categorizing. Journal of Memory and Language, 57(2), 232-251. doi:10.1016/j.jml.2006.10.001.

    Abstract

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107–142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88–125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings.
  • Roelofs, A., Özdemir, R., & Levelt, W. J. M. (2007). Influences of spoken word planning on speech recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(5), 900-913. doi:10.1037/0278-7393.33.5.900.

    Abstract

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
  • Roll, P., Vernes, S. C., Bruneau, N., Cillario, J., Ponsole-Lenfant, M., Massacrier, A., Rudolf, G., Khalife, M., Hirsch, E., Fisher, S. E., & Szepetowski, P. (2010). Molecular networks implicated in speech-related disorders: FOXP2 regulates the SRPX2/uPAR complex. Human Molecular Genetics, 19, 4848-4860. doi:10.1093/hmg/ddq415.

    Abstract

    It is a challenge to identify the molecular networks contributing to the neural basis of human speech. Mutations in transcription factor FOXP2 cause difficulties mastering fluent speech (developmental verbal dyspraxia, DVD), while mutations of sushi-repeat protein SRPX2 lead to epilepsy of the rolandic (sylvian) speech areas, with DVD or with bilateral perisylvian polymicrogyria. Pathophysiological mechanisms driven by SRPX2 involve modified interaction with the plasminogen activator receptor (uPAR). Independent chromatin-immunoprecipitation microarray screening has identified the uPAR gene promoter as a potential target site bound by FOXP2. Here, we directly tested for the existence of a transcriptional regulatory network between human FOXP2 and the SRPX2/uPAR complex. In silico searches followed by gel retardation assays identified specific efficient FOXP2 binding sites in each of the promoter regions of SRPX2 and uPAR. In FOXP2-transfected cells, significant decreases were observed in the amounts of both SRPX2 (43.6%) and uPAR (38.6%) native transcripts. Luciferase reporter assays demonstrated that FOXP2 expression yielded marked inhibition of SRPX2 (80.2%) and uPAR (77.5%) promoter activity. A mutant FOXP2 that causes DVD (p.R553H) failed to bind to SRPX2 and uPAR target sites, and showed impaired down-regulation of SRPX2 and uPAR promoter activity. In a patient with polymicrogyria of the left rolandic operculum, a novel FOXP2 mutation (p.M406T) was found in the leucine-zipper (dimerization) domain. p.M406T partially impaired FOXP2 regulation of SRPX2 promoter activity, while that of the uPAR promoter remained unchanged. Together with the recently described FOXP2-CNTNAP2 and SRPX2/uPAR links, the FOXP2-SRPX2/uPAR network provides exciting insights into molecular pathways underlying speech-related disorders.

    Additional information

    Roll_et_al_2010_Suppl_Material.doc
  • Rossano, F. (2010). Questioning and responding in Italian. Journal of Pragmatics, 42, 2756-2771. doi:10.1016/j.pragma.2010.04.010.

    Abstract

    Questions are design problems for both the questioner and the addressee. They must be produced as recognizable objects and must be comprehended by taking into account the context in which they occur and the local situated interests of the participants. This paper investigates how people do ‘questioning’ and ‘responding’ in Italian ordinary conversations. I focus on the features of both questions and responses. I first discuss formal linguistic features that are peculiar to questions in terms of intonation contours (e.g. final rise), morphology (e.g. tags and question words) and syntax (e.g. inversion). I then show additional features that characterize their actual implementation in conversation such as their minimality (often the subject or the verb is only implied) and the usual occurrence of speaker gaze towards the recipient during questions. I then look at which social actions (e.g. requests for information, requests for confirmation) the different question types implement and which responses are regularly produced in return. The data shows that previous descriptions of “interrogative markings” are neither adequate nor sufficient to comprehend the actual use of questions in natural conversation.
  • Rossi, G. (2009). Il discorso scritto interattivo degli SMS: Uno studio pragmatico del "messaggiare" [The interactive written discourse of text messages: A pragmatic study of "texting"]. Rivista Italiana di Dialettologia, 33, 143-193. doi:10.1400/148734.
  • Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.

    Abstract

    The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S., & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F., & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
  • Rowland, C. F., & Theakston, A. L. (2009). The acquisition of auxiliary syntax: A longitudinal elicitation study. Part 2: The modals and auxiliary DO. Journal of Speech, Language, and Hearing Research, 52, 1471-1492. doi:10.1044/1092-4388(2009/08-0037a).

    Abstract

    Purpose: The study of auxiliary acquisition is central to work on language development and has attracted theoretical work from both nativist and constructivist approaches. This study is part of a 2-part companion set that represents a unique attempt to trace the development of auxiliary syntax by using a longitudinal elicitation methodology. The aim of the research described in this part is to track the development of modal auxiliaries and auxiliary DO in questions and declaratives to provide a more complete picture of the development of the auxiliary system in English-speaking children. Method: Twelve English-speaking children participated in 2 tasks designed to elicit auxiliaries CAN, WILL, and DOES in declaratives and yes/no questions. They completed each task 6 times in total between the ages of 2;10 (years;months) and 3;6. Results: The children’s levels of correct use of the target auxiliaries differed in complex ways according to auxiliary, polarity, and sentence structure, and these relations changed over development. An analysis of the children’s errors also revealed complex interactions between these factors. Conclusions: These data cannot be explained in full by existing theories of auxiliary acquisition. Researchers working within both generativist and constructivist frameworks need to develop more detailed theories of acquisition that predict the pattern of acquisition observed.
  • Ruano, D., Abecasis, G. R., Glaser, B., Lips, E. S., Cornelisse, L. N., de Jong, A. P. H., Evans, D. M., Davey Smith, G., Timpson, N. J., Smit, A. B., Heutink, P., Verhage, M., & Posthuma, D. (2010). Functional gene group analysis reveals a role of synaptic heterotrimeric G proteins in cognitive ability. American Journal of Human Genetics, 86(2), 113-125. doi:10.1016/j.ajhg.2009.12.006.

    Abstract

    Although cognitive ability is a highly heritable complex trait, only a few genes have been identified, explaining relatively low proportions of the observed trait variation. This implies that hundreds of genes of small effect may be of importance for cognitive ability. We applied an innovative method in which we tested for the effect of groups of genes defined according to cellular function (functional gene group analysis). Using an initial sample of 627 subjects, this functional gene group analysis detected that synaptic heterotrimeric guanine nucleotide binding proteins (G proteins) play an important role in cognitive ability (P(EMP) = 1.9 x 10(-4)). The association with heterotrimeric G proteins was validated in an independent population sample of 1507 subjects. Heterotrimeric G proteins are central relay factors between the activation of plasma membrane receptors by extracellular ligands and the cellular responses that these induce, and they can be considered a point of convergence, or a "signaling bottleneck." Although alterations in synaptic signaling processes may not be the exclusive explanation for the association of heterotrimeric G proteins with cognitive ability, such alterations may prominently affect the properties of neuronal networks in the brain in such a manner that impaired cognitive ability and lower intelligence are observed. The reported association of synaptic heterotrimeric G proteins with cognitive ability clearly points to a new direction in the study of the genetic basis of cognitive ability.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • Rueschemeyer, S.-A., van Rooij, D., Lindemann, O., Willems, R. M., & Bekkering, H. (2010). The function of words: Distinct neural correlates for words denoting differently manipulable objects. Journal of Cognitive Neuroscience, 22, 1844-1851. doi:10.1162/jocn.2009.21310.

    Abstract

    Recent research indicates that language processing relies on brain areas dedicated to perception and action. For example, processing words denoting manipulable objects has been shown to activate a fronto-parietal network involved in actual tool use. This is suggested to reflect the knowledge the subject has about how objects are moved and used. However, information about how to use an object may be much more central to the conceptual representation of an object than information about how to move an object. Therefore, there may be much more fine-grained distinctions between objects on the neural level, especially related to the usability of manipulable objects. In the current study, we investigated whether a distinction can be made between words denoting (1) objects that can be picked up to move (e.g., volumetrically manipulable objects: bookend, clock) and (2) objects that must be picked up to use (e.g., functionally manipulable objects: cup, pen). The results show that functionally manipulable words elicit greater levels of activation in the fronto-parietal sensorimotor areas than volumetrically manipulable words. This suggests that indeed a distinction can be made between different types of manipulable objects. Specifically, how an object is used functionally rather than whether an object can be displaced with the hand is reflected in semantic representations in the brain.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

    In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together; however, they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Hagoort, P., Levinson, S. C., & Toni, I. (2010). Exploring the cognitive infrastructure of communication. Interaction studies, 11, 51-77. doi:10.1075/is.11.1.05rui.

    Abstract

    Human communication is often thought about in terms of transmitted messages in a conventional code like a language. But communication requires a specialized interactive intelligence. Senders have to be able to perform recipient design, while receivers need to be able to do intention recognition, knowing that recipient design has taken place. To study this interactive intelligence in the lab, we developed a new task that taps directly into the underlying abilities to communicate in the absence of a conventional code. We show that subjects are remarkably successful communicators under these conditions, especially when senders get feedback from receivers. Signaling is accomplished by the manner in which an instrumental action is performed, such that instrumentally dysfunctional components of an action are used to convey communicative intentions. The findings have important implications for the nature of the human communicative infrastructure, and the task opens up a line of experimentation on human communication.
  • De Ruiter, L. E. (2009). The prosodic marking of topical referents in the German "Vorfeld" by children and adults. The Linguistic Review, 26, 329-354. doi:10.1515/tlir.2009.012.

    Abstract

    This article reports on the analysis of prosodic marking of topical referents in the German prefield by 5- and 7-year-old children and adults. Natural speech data was obtained from a picture-elicited narration task. The data was analyzed both phonologically and phonetically. In line with previous findings, adult speakers realized topical referents predominantly with the accents L+H* and L*+H, but H* accents and unaccented items were also observed. Children used the same accent types as adults, but the accent types were distributed differently. Also, children aligned pitch minima earlier than adults and produced accents with a decreased speed of pitch change. Possible reasons for these findings are discussed. Contrast – defined in terms of a change of subjecthood – did not affect the choice of pitch accent type and did not influence phonetic realization, underlining the fact that accentuation is often a matter of individual speaker choice.

    Files private

    Request files
  • Russel, A., & Trilsbeek, P. (2004). ELAN Audio Playback. Language Archive Newsletter, 1(4), 12-13.
  • Russel, A., & Wittenburg, P. (2004). ELAN Native Media Handling. Language Archive Newsletter, 1(3), 12-12.
  • Sach, M., Seitz, R. J., & Indefrey, P. (2004). Unified inflectional processing of regular and irregular verbs: A PET study. NeuroReport, 15(3), 533-537. doi:10.1097/01.wnr.0000113529.32218.92.

    Abstract

    Psycholinguistic theories propose different models of inflectional processing of regular and irregular verbs: dual mechanism models assume separate modules with lexical frequency sensitivity for irregular verbs. In contradistinction, connectionist models propose a unified process in a single module. We conducted a PET study using a 2 x 2 design with verb regularity and frequency. We found significantly shorter voice onset times for regular verbs and high frequency verbs irrespective of regularity. The PET data showed activations in inferior frontal gyrus (BA 45), nucleus lentiformis, thalamus, and superior medial cerebellum for both regular and irregular verbs but no dissociation for verb regularity. Our results support common processing components for regular and irregular verb inflection.
  • Salomo, D., Lieven, E., & Tomasello, M. (2010). Young children's sensitivity to new and given information when answering predicate-focus questions. Applied Psycholinguistics, 31, 101-115. doi:10.1017/S014271640999018X.

    Abstract

    In two studies we investigated 2-year-old children's answers to predicate-focus questions depending on the preceding context. Children were presented with a successive series of short video clips showing transitive actions (e.g., frog washing duck) in which either the action (action-new) or the patient (patient-new) was the changing, and therefore new, element. During the last scene the experimenter asked the question (e.g., “What's the frog doing now?”). We found that children expressed the action and the patient in the patient-new condition but expressed only the action in the action-new condition. These results show that children are sensitive to both the predicate-focus question and newness in context. A further finding was that children expressed new patients in their answers more often when there was a verbal context prior to the questions than when there was not.
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • Sauter, D. (2010). Can introspection teach us anything about the perception of sounds? [Book review]. Perception, 39, 1300-1302. doi:10.1068/p3909rvw.

    Abstract

    Reviews the book, Sounds and Perception: New Philosophical Essays edited by Matthew Nudds and Casey O'Callaghan (2010). This collection of thought-provoking philosophical essays contains chapters on particular aspects of sound perception, as well as a series of essays focusing on the issue of sound location. The chapters on specific topics include several perspectives on how we hear speech, one of the most well-studied aspects of auditory perception in empirical research. Most of the book consists of a series of essays approaching the experience of hearing sounds by focusing on where sounds are in space. An impressive range of opinions on this issue is presented, likely thanks to the fact that the book's editors represent dramatically different viewpoints. The wave-based view argues that sounds are located near the perceiver, although the sounds also provide information about objects around the listener, including the source of the sound. In contrast, the source-based view holds that sounds are experienced as near or at their sources. The editors acknowledge that additional methods should be used in conjunction with introspection, but they argue that theories of perceptual experience should nevertheless respect phenomenology. With such a range of views derived largely from the same introspective methodology, it remains unresolved which phenomenological account is to be respected.
  • Sauter, D., Eisner, F., Ekman, P., & Scott, S. K. (2010). Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations. Proceedings of the National Academy of Sciences, 107(6), 2408-2412. doi:10.1073/pnas.0908239106.

    Abstract

    Emotional signals are crucial for sharing important information with conspecifics, for example, to warn humans of danger. Humans use a range of different cues to communicate to others how they feel, including facial, vocal, and gestural signals. We examined the recognition of nonverbal emotional vocalizations, such as screams and laughs, across two dramatically different cultural groups. Western participants were compared to individuals from remote, culturally isolated Namibian villages. Vocalizations communicating the so-called “basic emotions” (anger, disgust, fear, joy, sadness, and surprise) were bidirectionally recognized. In contrast, a set of additional emotions was only recognized within, but not across, cultural boundaries. Our findings indicate that a number of primarily negative emotions have vocalizations that can be recognized across cultures, while most positive emotions are communicated with culture-specific signals.
  • Sauter, D. (2010). Are positive vocalizations perceived as communicating happiness across cultural boundaries? [Article addendum]. Communicative & Integrative Biology, 3(5), 440-442. doi:10.4161/cib.3.5.12209.

    Abstract

    Laughter communicates a feeling of enjoyment across cultures, while non-verbal vocalizations of several other positive emotions, such as achievement or sensual pleasure, are recognizable only within, but not across, cultural boundaries. Are these positive vocalizations nevertheless interpreted cross-culturally as signaling positive affect? In a match-to-sample task, positive emotional vocal stimuli were paired with positive and negative facial expressions by English participants and members of the Himba, a semi-nomadic, culturally isolated Namibian group. The results showed that laughter was associated with a smiling facial expression across both groups, consistent with previous work showing that human laughter is a positive, social signal with deep evolutionary roots. However, non-verbal vocalizations of achievement, sensual pleasure, and relief were not cross-culturally associated with smiling facial expressions, perhaps indicating that these types of vocalizations are not cross-culturally interpreted as communicating a positive emotional state, or alternatively that these emotions are associated with positive facial expressions other than smiling. These results are discussed in the context of positive emotional communication in vocal and facial signals. Research on the perception of non-verbal vocalizations of emotions across cultures demonstrates that some affective signals, including laughter, are associated with particular facial configurations and emotional states, supporting theories of emotions as a set of evolved functions that are shared by all humans regardless of cultural boundaries.
  • Sauter, D. (2010). More than happy: The need for disentangling positive emotions. Current Directions in Psychological Science, 19, 36-40. doi:10.1177/0963721409359290.

    Abstract

    Despite great advances in scientific understanding of emotional processes in the last decades, research into the communication of emotions has been constrained by a strong bias toward negative affective states. Typically, studies distinguish between different negative emotions, such as disgust, sadness, anger, and fear. In contrast, most research uses only one category of positive affect, “happiness,” which is assumed to encompass all positive emotional states. This article reviews recent research showing that a number of positive affective states have discrete, recognizable signals. An increased focus on cues other than facial expressions is necessary to understand these positive states and how they are communicated; vocalizations, touch, and postural information offer promising avenues for investigating signals of positive affect. A full scientific understanding of the functions, signals, and mechanisms of emotions requires abandoning the unitary concept of happiness and instead disentangling positive emotions.
  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). We use non-verbal vocalisations that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
  • Sauter, D., Eisner, F., Calder, A. J., & Scott, S. K. (2010). Perceptual cues in nonverbal vocal expressions of emotion. Quarterly Journal of Experimental Psychology, 63(11), 2251-2272. doi:10.1080/17470211003721642.

    Abstract

    Work on facial expressions of emotions (Calder, Burton, Miller, Young, & Akamatsu, 2001) and emotionally inflected speech (Banse & Scherer, 1996) has successfully delineated some of the physical properties that underlie emotion recognition. To identify the acoustic cues used in the perception of nonverbal emotional expressions like laughter and screams, an investigation was conducted into vocal expressions of emotion, using nonverbal vocal analogues of the “basic” emotions (anger, fear, disgust, sadness, and surprise; Ekman & Friesen, 1971; Scott et al., 1997), and of positive affective states (Ekman, 1992, 2003; Sauter & Scott, 2007). First, the emotional stimuli were categorized and rated to establish that listeners could identify and rate the sounds reliably and to provide confusion matrices. A principal components analysis of the rating data yielded two underlying dimensions, correlating with the perceived valence and arousal of the sounds. Second, acoustic properties of the amplitude, pitch, and spectral profile of the stimuli were measured. A discriminant analysis procedure established that these acoustic measures provided sufficient discrimination between expressions of emotional categories to permit accurate statistical classification. Multiple linear regressions with participants' subjective ratings of the acoustic stimuli showed that all classes of emotional ratings could be predicted by some combination of acoustic measures and that most emotion ratings were predicted by different constellations of acoustic features. The results demonstrate that, similarly to affective signals in facial expressions and emotionally inflected speech, the perceived emotional character of affective vocalizations can be predicted on the basis of their physical features.
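    The analysis pipeline this abstract describes (dimensionality reduction of rating data, then statistical classification from acoustic measures) can be made concrete with a minimal sketch. The array names, random placeholder data, and the use of scikit-learn below are illustrative assumptions, not the authors' actual procedure or code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: per-stimulus emotion ratings, acoustic measures
# (amplitude, pitch, spectral profile), and the intended emotion labels.
rng = np.random.default_rng(0)
ratings = rng.normal(size=(100, 10))    # (n_stimuli, n_rating_scales)
acoustics = rng.normal(size=(100, 12))  # (n_stimuli, n_acoustic_measures)
labels = rng.integers(0, 10, size=100)  # intended emotion category

# Step 1: PCA on the rating data; the paper reports two underlying
# dimensions correlating with perceived valence and arousal.
dims = PCA(n_components=2).fit_transform(ratings)

# Step 2: discriminant analysis to test whether the acoustic measures
# separate the emotion categories well enough for classification.
acc = cross_val_score(LinearDiscriminantAnalysis(), acoustics, labels, cv=5)
print(f"mean cross-validated accuracy: {acc.mean():.2f}")
```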
  • Sauter, D., & Eimer, M. (2010). Rapid detection of emotion from human vocalizations. Journal of Cognitive Neuroscience, 22, 474-481. doi:10.1162/jocn.2009.21215.

    Abstract

    The rapid detection of affective signals from conspecifics is crucial for the survival of humans and other animals; if those around you are scared, there is reason for you to be alert and to prepare for impending danger. Previous research has shown that the human brain detects emotional faces within 150 msec of exposure, indicating a rapid differentiation of visual social signals based on emotional content. Here we use event-related brain potential (ERP) measures to show for the first time that this mechanism extends to the auditory domain, using human nonverbal vocalizations, such as screams. An early fronto-central positivity to fearful vocalizations compared with spectrally rotated and thus acoustically matched versions of the same sounds started 150 msec after stimulus onset. This effect was also observed for other vocalized emotions (achievement and disgust), but not for affectively neutral vocalizations, and was linked to the perceived arousal of an emotion category. That the timing, polarity, and scalp distribution of this new ERP correlate are similar to ERP markers of emotional face processing suggests that common supramodal brain mechanisms may be involved in the rapid detection of affectively relevant visual and auditory signals.
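    The "spectrally rotated" control sounds mentioned here are a standard way of preserving a sound's acoustic complexity while removing its identity as a vocalization. A minimal sketch of one common implementation (ring modulation followed by low-pass filtering); the rotation frequency and function name are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def spectrally_rotate(x, fs, f_rot=4000.0):
    """Mirror the spectrum of x around f_rot / 2, so energy at frequency f
    moves to f_rot - f. Duration and overall energy are roughly preserved,
    which is what makes rotated sounds useful acoustically matched controls."""
    sos = butter(8, f_rot, btype="low", fs=fs, output="sos")
    x_lp = sosfiltfilt(sos, x)               # band-limit the input below f_rot
    t = np.arange(len(x)) / fs
    carrier = np.cos(2 * np.pi * f_rot * t)  # ring modulation mirrors the band
    return sosfiltfilt(sos, x_lp * carrier)  # keep only the mirrored band
```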
  • Sauter, D., Eisner, F., Ekman, P., & Scott, S. K. (2010). Reply to Gewald: Isolated Himba settlements still exist in Kaokoland [Letter to the editor]. Proceedings of the National Academy of Sciences of the United States of America, 107(18), E76. doi:10.1073/pnas.1002264107.

    Abstract

    We agree with Gewald (1) that historical and anthropological accounts are essential tools for understanding the Himba culture, and these accounts are valuable to both us and him. However, we contest his claim that the Himba individuals in our study were not culturally isolated. Gewald (1) claims that it would be “unlikely” that the Himba people with whom we worked had “not been exposed to the affective signals of individuals from cultural groups other than their own” as stated in our paper (2). Gewald (1) seems to argue that, because outside groups have had contact with some Himba, this means that these events affected all Himba. Yet, the Himba constitute a group of 20,000-50,000 people (3) living in small settlements scattered across the vast Kaokoland region, an area of 49,000 km² (4).
  • Sauter, D., & Levinson, S. C. (2010). What's embodied in a smile? [Comment on Niedenthal et al.]. Behavioral and Brain Sciences, 33, 457-458. doi:10.1017/S0140525X10001597.

    Abstract

    Differentiation of the forms and functions of different smiles is needed, but it should be based on empirical data on the distinctions that senders and receivers make, and on the physical cues that are employed. Such data would allow for a test of whether smiles can be differentiated using perceptual cues alone or whether mimicry or simulation is necessary.
  • Scerri, T. S., Fisher, S. E., Francks, C., MacPhie, I. L., Paracchini, S., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2004). Putative functional alleles of DYX1C1 are not associated with dyslexia susceptibility in a large sample of sibling pairs from the UK [Letter to JMG]. Journal of Medical Genetics, 41(11), 853-857. doi:10.1136/jmg.2004.018341.
  • Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.

    Abstract

    This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as ‘rare city’ names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs using a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances, each containing one rare city name. SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM added to the lexicon of the second stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways of improving the overall performance in the context of an application.
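    The control flow of the two-stage framework can be sketched as follows. Every identifier here (the recognizer object, its methods, and the SpeM interface) is hypothetical and is only meant to make the sequence of steps concrete, not to reproduce the authors' implementation.

```python
def two_pass_city_recognition(utterance, recognizer, spem, fallback_lexicon,
                              n_best=3000):
    """Sketch of the two-stage strategy (all APIs hypothetical).
    Pass 1 decodes with a lexicon of major city names plus a phone-loop
    unknown-word model; if an OOV is detected, SpeM proposes candidate
    rare city names from the fallback lexicon for a second pass."""
    hypothesis, phone_graph = recognizer.decode(utterance)
    if not hypothesis.contains_oov():
        return hypothesis            # a major city name matched directly

    # SpeM matches the phone graph of the OOV region against the large
    # fallback lexicon and returns an N-best list of city name candidates.
    candidates = spem.nbest(phone_graph, fallback_lexicon, n=n_best)

    # Second recognition pass with the candidates added to the lexicon.
    hypothesis2, _ = recognizer.with_extra_words(candidates).decode(utterance)
    return hypothesis2
```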
  • Scharenborg, O., & Boves, L. (2010). Computational modelling of spoken-word recognition processes: Design choices and evaluation. Pragmatics & Cognition, 18, 136-164. doi:10.1075/pc.18.1.06sch.

    Abstract

    Computational modelling has proven to be a valuable approach in developing theories of spoken-word processing. In this paper, we focus on a particular class of theories in which it is assumed that the spoken-word recognition process consists of two consecutive stages, with an 'abstract' discrete symbolic representation at the interface between the stages. In evaluating computational models, it is important to bring in independent arguments for the cognitive plausibility of the algorithms that are selected to compute the processes in a theory. This paper discusses the relation between behavioural studies, theories, and computational models of spoken-word recognition. We explain how computational models can be assessed in terms of the goodness of fit with the behavioural data and the cognitive plausibility of the algorithms. An in-depth analysis of several models provides insights into how computational modelling has led to improved theories and to a better understanding of the human spoken-word recognition process.
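    As a toy illustration of the class of theories discussed here, the sketch below hard-codes a two-stage pipeline with a discrete symbolic representation at the interface between the stages. The phoneme inventory, lexicon, and scoring are invented for illustration only.

```python
# Stage 1 (prelexical): acoustic evidence -> discrete symbols.
# Stage 2 (lexical): symbols -> words. The abstract symbolic string in
# the middle is the interface this class of theories assumes.
LEXICON = {"kaet": "cat", "kaep": "cap"}

def prelexical_stage(frames):
    # Stand-in for acoustic-phonetic decoding: best symbol per frame.
    return "".join(max(frame, key=frame.get) for frame in frames)

def lexical_stage(symbols):
    return LEXICON.get(symbols, "<no word>")

frames = [{"k": 0.9, "g": 0.1}, {"ae": 0.8, "a": 0.2}, {"t": 0.6, "p": 0.4}]
print(lexical_stage(prelexical_stage(frames)))  # -> cat
```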
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.

    Abstract

    Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences, and identify the words that were recognised on the basis of a trace-back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition, which is able to model the human capability of ‘early recognition’ by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered as recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM’s performance increases if the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
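    The two predictor types described here (activation-based and position-based) can be combined into a simple decision rule. The function and threshold values below are hypothetical illustrations of the idea, not SpeM's actual criteria.

```python
def accept_early(activation, best_competitor_activation, phones_seen,
                 word_length, abs_threshold=0.8, rel_threshold=1.5,
                 min_proportion=0.75):
    """Decide whether to treat a word as recognised before its acoustic
    offset. Predictor type 1: absolute and relative word activation
    (trading false acceptances against false rejections). Predictor
    type 2: how many of the word's phones have been processed, since
    fewer remaining phones means less risk of a future mismatch."""
    relative = activation / max(best_competitor_activation, 1e-9)
    proportion_seen = phones_seen / word_length
    return (activation >= abs_threshold
            and relative >= rel_threshold
            and proportion_seen >= min_proportion)

# e.g. a 9-phone word with 7 phones heard and a clear activation lead:
print(accept_early(0.9, 0.4, phones_seen=7, word_length=9))  # True
```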
  • Scharenborg, O. (2010). Modeling the use of durational information in human spoken-word recognition. Journal of the Acoustical Society of America, 127, 3758-3770. doi:10.1121/1.3377050.

    Abstract

    Evidence that listeners, at least in a laboratory environment, use durational cues to help resolve temporarily ambiguous speech input has accumulated over the past decades. This paper introduces Fine-Tracker, a computational model of word recognition specifically designed for tracking fine-phonetic information in the acoustic speech signal and using it during word recognition. Two simulations were carried out using real speech as input to the model. The simulations showed that Fine-Tracker, as has been found for humans, benefits from durational information during word recognition, and uses it to disambiguate the incoming speech signal. The availability of durational information allows the computational model to distinguish embedded words from their matrix words (first simulation), and to distinguish word-final realizations of [s] from word-initial realizations (second simulation). Fine-Tracker thus provides the first computational model of human word recognition that is able to extract durational information from the speech signal and to use it to differentiate words.
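    How durational information can separate an embedded word from its matrix word may be made concrete with a toy Gaussian duration score. The millisecond values, means, and function below are hypothetical, not Fine-Tracker's actual parameters or scoring method.

```python
import math

def duration_log_score(observed_ms, mean_ms, sd_ms):
    """Gaussian log-likelihood of an observed segment duration under a
    word-specific duration model. Syllables realized as monosyllabic
    words (e.g. 'ham') tend to be longer than the same syllables
    embedded in longer words (e.g. the 'ham' in 'hamster')."""
    z = (observed_ms - mean_ms) / sd_ms
    return -0.5 * z * z - math.log(sd_ms * math.sqrt(2 * math.pi))

# A 220 ms 'ham' fits a standalone word (hypothetical mean 210 ms) better
# than the first syllable of a matrix word (hypothetical mean 160 ms), so
# duration favours the embedded-word reading over 'hamster'.
print(duration_log_score(220, 210, 30) > duration_log_score(220, 160, 30))  # True
```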
