Publications

  • Ozyurek, A. (2001). What do speech-gesture mismatches reveal about language specific processing? A comparison of Turkish and English. In C. Cavé, I. Guaitella, & S. Santi (Eds.), Oralité et gestualité: Interactions et comportements multimodaux dans la communication: Actes du Colloque ORAGE 2001 (pp. 567-581). Paris: L'Harmattan.
  • Papafragou, A., & Ozturk, O. (2007). Children's acquisition of modality. In Proceedings of the 2nd Conference on Generative Approaches to Language Acquisition North America (GALANA 2) (pp. 320-327). Somerville, Mass.: Cascadilla Press.
  • Papafragou, A. (2007). On the acquisition of modality. In T. Scheffler, & L. Mayol (Eds.), Penn Working Papers in Linguistics. Proceedings of the 30th Annual Penn Linguistics Colloquium (pp. 281-293). Department of Linguistics, University of Pennsylvania.
  • Pereiro Estevan, Y., Wan, V., & Scharenborg, O. (2007). Finding maximum margin segments in speech. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007) (Vol. IV, pp. 937-940). doi:10.1109/ICASSP.2007.367225.

    Abstract

    Maximum margin clustering (MMC) is a relatively new and promising kernel method. In this paper, we apply MMC to the task of unsupervised speech segmentation. We present three automatic speech segmentation methods based on MMC, which are tested on TIMIT and evaluated on the level of phoneme boundary detection. The results show that MMC is highly competitive with existing unsupervised methods for the automatic detection of phoneme boundaries. Furthermore, initial analyses show that MMC is a promising method for the automatic detection of sub-phonetic information in the speech signal.
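
    A note on evaluation: unsupervised boundary detectors of this kind are typically scored against reference phoneme boundaries within a small tolerance window. The sketch below only illustrates that scoring step (it does not implement the MMC segmentation itself); the function name, the ±20 ms tolerance, and the toy boundary times are assumptions.

    ```python
    # Illustrative only: score hypothesised phoneme boundaries (in seconds) against
    # reference boundaries with a +/-20 ms tolerance, the usual metric for
    # unsupervised boundary detection. Not the MMC method of the paper above.
    import numpy as np

    def boundary_scores(hyp, ref, tol=0.02):
        hyp, ref = np.asarray(hyp, float), np.asarray(ref, float)
        hits, used = 0, np.zeros(len(ref), dtype=bool)
        for b in hyp:
            d = np.abs(ref - b)
            d[used] = np.inf              # each reference boundary may be matched once
            i = int(np.argmin(d))
            if d[i] <= tol:
                hits += 1
                used[i] = True
        precision = hits / max(len(hyp), 1)
        recall = hits / max(len(ref), 1)
        f1 = 2 * precision * recall / max(precision + recall, 1e-12)
        return precision, recall, f1

    # Toy usage: three hypothesised boundaries, four reference boundaries.
    print(boundary_scores([0.11, 0.29, 0.52], [0.10, 0.30, 0.45, 0.53]))
    ```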
  • Perniss, P. M. (2007). Space and iconicity in German sign language (DGS). PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.57482.

    Abstract

    This dissertation investigates the expression of spatial relationships in German Sign Language (Deutsche Gebärdensprache, DGS). The analysis focuses on linguistic expression in the spatial domain in two types of discourse: static scene description (location) and event narratives (location and motion). Its primary theoretical objectives are to characterize the structure of locative descriptions in DGS; to explain the use of frames of reference and perspective in the expression of location and motion; to clarify the interrelationship between the systems of frames of reference, signing perspective, and classifier predicates; and to characterize the interplay between iconicity principles, on the one hand, and grammatical and discourse constraints, on the other hand, in the use of these spatial devices. In more general terms, the dissertation provides a usage-based account of iconic mapping in the visual-spatial modality. The use of space in sign language expression is widely assumed to be guided by iconic principles, which are furthermore assumed to hold in the same way across sign languages. Thus, there has been little expectation of variation between sign languages in the spatial domain in the use of spatial devices. Consequently, perhaps, there has been little systematic investigation of linguistic expression in the spatial domain in individual sign languages, and less investigation of spatial language in extended signed discourse. This dissertation provides an investigation of spatial expressions in DGS by investigating the impact of different constraints on iconicity in sign language structure. The analyses have important implications for our understanding of the role of iconicity in the visual-spatial modality, the possible language-specific variation within the spatial domain in the visual-spatial modality, the structure of spatial language in both natural language modalities, and the relationship between spatial language and cognition

    Additional information

    full text via Radboud Repository
  • Perniss, P. M., Pfau, R., & Steinbach, M. (Eds.). (2007). Visible variation: Cross-linguistic studies in sign language structure. Berlin: Mouton de Gruyter.

    Abstract

    It has been argued that properties of the visual-gestural modality impose a homogenizing effect on sign languages, leading to less structural variation in sign language structure as compared to spoken language structure. However, until recently, research on sign languages was limited to a number of (Western) sign languages. Before we can truly answer the question of whether modality effects do indeed cause less structural variation, it is necessary to investigate the similarities and differences that exist between sign languages in more detail and, especially, to include in this investigation less studied sign languages. The current research climate is testimony to a surge of interest in the study of a geographically more diverse range of sign languages. The volume reflects that climate and brings together work by scholars engaging in comparative sign linguistics research. The 11 articles discuss data from many different signed and spoken languages and cover a wide range of topics from different areas of grammar including phonology (word pictures), morphology (pronouns, negation, and auxiliaries), syntax (word order, interrogative clauses, auxiliaries, negation, and referential shift) and pragmatics (modal meaning and referential shift). In addition to this, the contributions address psycholinguistic issues, aspects of language change, and issues concerning data collection in sign languages, thereby providing methodological guidelines for further research. Although some papers use a specific theoretical framework for analyzing the data, the volume clearly focuses on empirical and descriptive aspects of sign language variation.
  • Perniss, P. M. (2007). Achieving spatial coherence in German sign language narratives: The use of classifiers and perspective. Lingua, 117(7), 1315-1338. doi:10.1016/j.lingua.2005.06.013.

    Abstract

    Spatial coherence in discourse relies on the use of devices that provide information about where referents are and where events take place. In signed language, two primary devices for achieving and maintaining spatial coherence are the use of classifier forms and signing perspective. This paper gives a unified account of the relationship between perspective and classifiers, and divides the range of possible correspondences between these two devices into prototypical and non-prototypical alignments. An analysis of German Sign Language narratives of complex events investigates the role of different classifier-perspective constructions in encoding spatial information about location, orientation, action and motion, as well as size and shape of referents. In particular, I show how non-prototypical alignments, including simultaneity of perspectives, contribute to the maintenance of spatial coherence, and provide functional explanations in terms of efficiency and informativeness constraints on discourse.
  • Perniss, P. M., Pfau, R., & Steinbach, M. (2007). Can't you see the difference? Sources of variation in sign language structure. In P. M. Perniss, R. Pfau, & M. Steinbach (Eds.), Visible variation: Cross-linguistic studies in sign language structure (pp. 1-34). Berlin: Mouton de Gruyter.
  • Perniss, P. M. (2007). Locative functions of simultaneous perspective constructions in German sign language narrative. In M. Vermeerbergen, L. Leeson, & O. Crasborn (Eds.), Simultaneity in signed language: Form and function (pp. 27-54). Amsterdam: Benjamins.
  • Petersson, K. M., Reis, A., & Ingvar, M. (2001). Cognitive processing in literate and illiterate subjects: A review of some recent behavioral and functional neuroimaging data. Scandinavian Journal of Psychology, 42, 251-267. doi:10.1111/1467-9450.00235.

    Abstract

    The study of illiterate subjects, who for specific socio-cultural reasons did not have the opportunity to acquire basic reading and writing skills, represents one approach to studying the interaction between neurobiological and cultural factors in cognitive development and the functional organization of the human brain. In addition, naturally occurring illiteracy may serve as a model for studying the influence of alphabetic orthography on auditory-verbal language. In this paper we have reviewed some recent behavioral and functional neuroimaging data indicating that learning an alphabetic written language modulates the auditory-verbal language system in a non-trivial way and provided support for the hypothesis that the functional architecture of the brain is modulated by literacy. We have also indicated that the effects of literacy and formal schooling are not limited to language-related skills but appear to affect other cognitive domains as well. In particular, we indicate that formal schooling influences 2D but not 3D visual naming skills. We have also pointed to the importance of using ecologically relevant tasks when comparing literate and illiterate subjects. We also demonstrate the applicability of a network approach in elucidating differences in the functional organization of the brain between groups. The strength of such an approach is the ability to study patterns of interactions between functionally specialized brain regions and the possibility to compare such patterns of brain interactions between groups or functional states. This complements the more commonly used activation approach to functional neuroimaging data, which characterizes functionally specialized regions, and provides important data characterizing the functional interactions between these regions.
  • Petersson, K. M., Silva, C., Castro-Caldas, A., Ingvar, M., & Reis, A. (2007). Literacy: A cultural influence on functional left-right differences in the inferior parietal cortex. European Journal of Neuroscience, 26(3), 791-799. doi:10.1111/j.1460-9568.2007.05701.x.

    Abstract

    The current understanding of hemispheric interaction is limited. Functional hemispheric specialization is likely to depend on both genetic and environmental factors. In the present study we investigated the importance of one factor, literacy, for the functional lateralization in the inferior parietal cortex in two independent samples of literate and illiterate subjects. The results show that the illiterate group are consistently more right-lateralized than their literate controls. In contrast, the two groups showed a similar degree of left-right differences in early speech-related regions of the superior temporal cortex. These results provide evidence suggesting that a cultural factor, literacy, influences the functional hemispheric balance in reading and verbal working memory-related regions. In a third sample, we investigated grey and white matter with voxel-based morphometry. The results showed differences between literacy groups in white matter intensities related to the mid-body region of the corpus callosum and the inferior parietal and parietotemporal regions (literate > illiterate). There were no corresponding differences in the grey matter. This suggests that the influence of literacy on brain structure related to reading and verbal working memory is affecting large-scale brain connectivity more than grey matter per se.
  • Petersson, K. M., Sandblom, J., Gisselgard, J., & Ingvar, M. (2001). Learning related modulation of functional retrieval networks in man. Scandinavian Journal of Psychology, 42, 197-216. doi:10.1111/1467-9450.00231.
  • Pickering, M. J., & Majid, A. (2007). What are implicit causality and consequentiality? Language and Cognitive Processes, 22(5), 780-788. doi:10.1080/01690960601119876.

    Abstract

    Much work in psycholinguistics and social psychology has investigated the notion of implicit causality associated with verbs. Crinean and Garnham (2006) relate implicit causality to another phenomenon, implicit consequentiality. We argue that they and other researchers have confused the meanings of events and the reasons for those events, so that particular thematic roles (e.g., Agent, Patient) are taken to be causes or consequences of those events by definition. In accord with Garvey and Caramazza (1974), we propose that implicit causality and consequentiality are probabilistic notions that are straightforwardly related to the explicit causes and consequences of events and are analogous to other biases investigated in psycholinguistics.
  • Pluymaekers, M. (2007). Affix reduction in spoken Dutch: Probabilistic effects in production and perception. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.58146.

    Abstract

    This dissertation investigates the roles of several probabilistic variables in the production and comprehension of reduced Dutch affixes. The central hypothesis is that linguistic units with a high probability of occurrence are more likely to be reduced (Jurafsky et al., 2001; Aylett & Turk, 2004). This hypothesis is tested by analyzing the acoustic realizations of affixes, which are meaning-carrying elements embedded in larger lexical units. Most of the results prove to be compatible with the main hypothesis, but some appear to run counter to its predictions. The final chapter of the thesis discusses the implications of these findings for models of speech production, models of speech perception, and probability-based accounts of reduction.

    Additional information

    full text via Radboud Repository
  • Poletiek, F. H. (2001). Hypothesis-testing behaviour. Hove: Psychology Press.
  • Prieto, P., & Torreira, F. (2007). The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics, 35, 473-500. doi:10.1016/j.wocn.2007.01.001.

    Abstract

    This paper addresses the validity of the segmental anchoring hypothesis for tonal landmarks (henceforth, SAH) as described in recent work by (among others) Ladd, Faulkner, D., Faulkner, H., & Schepman [1999. Constant ‘segmental’ anchoring of f0 movements under changes in speech rate. Journal of the Acoustical Society of America, 106, 1543–1554], Ladd [2003. Phonological conditioning of f0 target alignment. In: M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the XVth international congress of phonetic sciences, Vol. 1, (pp. 249–252). Barcelona: Causal Productions; in press. Segmental anchoring of pitch movements: Autosegmental association or gestural coordination? Italian Journal of Linguistics, 18 (1)]. The alignment of LH* prenuclear peaks with segmental landmarks in controlled speech materials in Peninsular Spanish is analyzed as a function of syllable structure type (open, closed) of the accented syllable, segmental composition, and speaking rate. Contrary to the predictions of the SAH, alignment was affected by syllable structure and speech rate in significant and consistent ways. In CV syllables the peak was located around the end of the accented vowel, and in CVC syllables around the beginning-mid part of the sonorant coda, but still far from the syllable boundary. With respect to the effects of rate, peaks were located earlier in the syllable as speech rate decreased. The results suggest that the accent gestures under study are synchronized with the syllable unit. In general, the longer the syllable, the longer the rise time. Thus the fundamental idea of the anchoring hypothesis can be taken as still valid. On the other hand, the tonal alignment patterns reported here can be interpreted as the outcome of distinct modes of gestural coordination in syllable-initial vs. syllable-final position: gestures at syllable onsets appear to be more tightly coordinated than gestures at the end of syllables [Browman, C. P., & Goldstein, L.M. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219–252; Browman, C. P., & Goldstein, L. (1988). Some notes on syllable structure in articulatory phonology. Phonetica, 45, 140–155; (1992). Articulatory Phonology: An overview. Phonetica, 49, 155–180; Krakow (1999). Physiological organization of syllables: A review. Journal of Phonetics, 27, 23–54; among others]. Intergestural timing can thus provide a unifying explanation for (1) the contrasting behavior between the precise synchronization of L valleys with the onset of the syllable and the more variable timing of the end of the f0 rise, and, more specifically, for (2) the right-hand tonal pressure effects and ‘undershoot’ patterns displayed by peaks at the ends of syllables and other prosodic domains.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2007). Sources of information for stress assignment in reading Greek. Applied Psycholinguistics, 28(4), 695-720. doi:10.1017/S0142716407070373.

    Abstract

    To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual–orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on stress assignment and have provided evidence for a default pattern. Here we report two experiments with adult readers, in which we disentangle and quantify the effects of these three potential sources using nonword materials. Stimuli either resembled or did not resemble real words, to manipulate availability of lexical information; and they were presented with or without a diacritic, in a word-congruent or word-incongruent position, to contrast the relative importance of the three sources. Dual-task conditions, in which cognitive load during nonword reading was increased with phonological retention carrying a metrical pattern different from the default, did not support the hypothesis that the default arises from cumulative lexical activation in working memory.
  • Pye, C., Pfeiler, B., De León, L., Brown, P., & Mateo, P. (2007). Roots or edges? Explaining variation in children's early verb forms across five Mayan languages. In B. Pfeiler (Ed.), Learning indigenous languages: Child language acquisition in Mesoamerica (pp. 15-46). Berlin: Mouton de Gruyter.

    Abstract

    This paper compares the acquisition of verb morphology in five Mayan languages, using a comparative method based on historical linguistics to establish precise equivalences between linguistic categories in the five languages. Earlier work on the acquisition of these languages, based on examination of longitudinal samples of naturally-occurring child language, established that in some of the languages (Tzeltal, Tzotzil) bare roots were the predominant forms for children’s early verbs, but in three other languages (Yukatek, K’iche’, Q’anjobal) unanalyzed portions of the final part of the verb were more likely. That is, children acquiring different Mayan languages initially produce different parts of the adult verb forms. In this paper we analyse the structures of verbs in caregiver speech to these same children, using samples from two-year-old children and their caregivers, and assess the degree to which features of the input might account for the children’s early verb forms in these five Mayan languages. We found that the frequency with which adults produce verbal roots at the extreme right of words and sentences influences the frequency with which children produce bare verb roots in their early verb expressions, while production of verb roots at the extreme left does not, suggesting that the children ignore the extreme left of verbs and sentences when extracting verb roots.
  • Qin, S., Piekema, C., Petersson, K. M., Han, B., Luo, J., & Fernández, G. (2007). Probing the transformation of discontinuous associations into episodic memory: An event-related fMRI study. NeuroImage, 38(1), 212-222. doi:10.1016/j.neuroimage.2007.07.020.

    Abstract

    Using event-related functional magnetic resonance imaging, we identified brain regions involved in storing associations of events discontinuous in time into long-term memory. Participants were scanned while memorizing item-triplets including simultaneous and discontinuous associations. Subsequent memory tests showed that participants remembered both types of associations equally well. First, by constructing the contrast between the subsequent memory effects for discontinuous associations and simultaneous associations, we identified the left posterior parahippocampal region, dorsolateral prefrontal cortex, the basal ganglia, posterior midline structures, and the middle temporal gyrus as being specifically involved in transforming discontinuous associations into episodic memory. Second, we replicated that the prefrontal cortex and the medial temporal lobe (MTL), especially the hippocampus, are involved in associative memory formation in general. Our findings provide evidence for distinct neural operation(s) that support the binding and storing of discontinuous associations in memory. We suggest that top-down signals from the prefrontal cortex and MTL may trigger reactivation of internal representation in posterior midline structures of the first event, thus allowing it to be associated with the second event. The dorsolateral prefrontal cortex together with basal ganglia may support this encoding operation by executive and binding processes within working memory, and the posterior parahippocampal region may play a role in binding and memory formation.
  • Quené, H., & Janse, E. (2001). Word perception in time-compressed speech [Abstract]. Journal of the Acoustical Society of America, 110, 2738.

    Abstract

    ASA conference abstract
  • Rapold, C. J. (2007). From demonstratives to verb agreement in Benchnon: A diachronic perspective. In A. Amha, M. Mous, & G. Savà (Eds.), Omotic and Cushitic studies: Papers from the Fourth Cushitic Omotic Conference, Leiden, 10-12 April 2003 (pp. 69-88). Cologne: Rüdiger Köppe.
  • Reis, A., Faísca, L., Mendonça, S., Ingvar, M., & Petersson, K. M. (2007). Semantic interference on a phonological task in illiterate subjects. Scandinavian Journal of Psychology, 48(1), 69-74. doi:10.1111/j.1467-9450.2006.00544.x.

    Abstract

    Previous research suggests that learning an alphabetic written language influences aspects of the auditory-verbal language system. In this study, we examined whether literacy influences the notion of words as phonological units independent of lexical semantics in literate and illiterate subjects. Subjects had to decide which item in a word- or pseudoword pair was phonologically longest. By manipulating the relationship between referent size and phonological length in three word conditions (congruent, neutral, and incongruent) we could examine to what extent subjects focused on form rather than meaning of the stimulus material. Moreover, the pseudoword condition allowed us to examine global phonological awareness independent of lexical semantics. The results showed that literate subjects performed significantly better than illiterate subjects in the neutral and incongruent word conditions as well as in the pseudoword condition. The illiterate group performed least well in the incongruent condition and significantly better in the pseudoword condition compared to the neutral and incongruent word conditions. These results suggest that performance on phonological word length comparisons is dependent on literacy. In addition, the results show that the illiterate participants are able to perceive and process phonological length, albeit less well than the literate subjects, when no semantic interference is present. In conclusion, the present results confirm and extend the finding that illiterate subjects are biased towards semantic-conceptual-pragmatic types of cognitive processing.
  • Reis, A., Petersson, K. M., Castro-Caldas, A., & Ingvar, M. (2001). Formal schooling influences two- but not three-dimensional naming skills. Brain and Cognition, 47, 397-411. doi:10.1006/brcg.2001.1316.

    Abstract

    The modulatory influence of literacy on the cognitive system of the human brain has been indicated in behavioral, neuroanatomic, and functional neuroimaging studies. In this study we explored the functional consequences of formal education and the acquisition of an alphabetic written language on two- and three-dimensional visual naming. The results show that illiterate subjects perform significantly worse on immediate naming of two-dimensional representations of common everyday objects compared to literate subjects, both in terms of accuracy and reaction times. In contrast, there was no significant difference when the subjects named the corresponding real objects. The results suggest that formal education and learning to read and to write modulate the cognitive process involved in processing two- but not three-dimensional representations of common everyday objects. Both the results of the reaction time and the error pattern analyses can be interpreted as indicating that the major influence of literacy affects the visual system or the interaction between the visual and the language systems. We suggest that the visual system in a wide sense and/or the interface between the visual and the language system are differently formatted in literate and illiterate subjects. In other words, we hypothesize that the pattern of interactions in the functional–anatomical networks subserving visual naming, that is, the interactions within and between the visual and language processing networks, differ in literate and illiterate subjects
  • Ringersma, J., & Kemps-Snijders, M. (2007). Creating multimedia dictionaries of endangered languages using LEXUS. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 65-68). Baixas, France: ISCA-Int.Speech Communication Assoc.

    Abstract

    This paper reports on the development of a flexible web based lexicon tool, LEXUS. LEXUS is targeted at linguists involved in language documentation (of endangered languages). It allows the creation of lexica within the structure of the proposed ISO LMF standard and uses the proposed concept naming conventions from the ISO data categories, thus enabling interoperability, search and merging. LEXUS also offers the possibility to visualize language, since it provides functionalities to include audio, video and still images to the lexicon. With LEXUS it is possible to create semantic network knowledge bases, using typed relations. The LEXUS tool is free for use. Index Terms: lexicon, web based application, endangered languages, language documentation.
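
    As a rough illustration of the kind of lexicon structure described above, the sketch below builds a minimal LMF-style entry with multimedia links and a typed semantic relation. The field names and values are invented for illustration; they are not the actual LEXUS schema or ISO data-category identifiers.

    ```python
    # Minimal LMF-style lexical entry with media links and a typed relation.
    # Field names are illustrative assumptions, not the LEXUS data model.
    from dataclasses import dataclass, field

    @dataclass
    class Sense:
        gloss_en: str
        media: list[str] = field(default_factory=list)                  # audio/video/image
        relations: dict[str, list[str]] = field(default_factory=dict)   # typed links

    @dataclass
    class LexicalEntry:
        lemma: str
        part_of_speech: str
        senses: list[Sense] = field(default_factory=list)

    entry = LexicalEntry(
        lemma="example-headword",
        part_of_speech="noun",
        senses=[Sense(
            gloss_en="an illustrative gloss",
            media=["audio/headword.wav", "images/headword.jpg"],
            relations={"hypernym": ["example-superordinate"]},
        )],
    )
    print(entry.lemma, entry.senses[0].relations)
    ```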
  • Roberts, L., Marinis, T., Felser, C., & Clahsen, H. (2007). Antecedent priming at trace positions in children’s sentence processing. Journal of Psycholinguistic Research, 36(2), 175-188. doi: 10.1007/s10936-006-9038-3.

    Abstract

    The present study examines whether children reactivate a moved constituent at its gap position and how children’s more limited working memory span affects the way they process filler-gap dependencies. 46 5–7 year-old children and 54 adult controls participated in a cross-modal picture priming experiment and underwent a standardized working memory test. The results revealed a statistically significant interaction between the participants’ working memory span and antecedent reactivation: High-span children (n = 19) and high-span adults (n = 22) showed evidence of antecedent priming at the gap site, while for low-span children and adults, there was no such effect. The antecedent priming effect in the high-span participants indicates that in both children and adults, dislocated arguments access their antecedents at gap positions. The absence of an antecedent reactivation effect in the low-span participants could mean that these participants required more time to integrate the dislocated constituent and reactivated the filler later during the sentence.
  • Roberts, L., Gürel, A., Tatar, S., & Marti, L. (Eds.). (2007). EUROSLA Yearbook 7. Amsterdam: Benjamins.

    Abstract

    The annual conference of the European Second Language Association provides an opportunity for the presentation of second language research with a genuinely European flavour. The theoretical perspectives adopted are wide-ranging and may fall within traditions overlooked elsewhere. Moreover, the studies presented are largely multi-lingual and cross-cultural, as befits the make-up of modern-day Europe. At the same time, the work demonstrates sophisticated awareness of scholarly insights from around the world. The EUROSLA yearbook presents a selection each year of the very best research from the annual conference. Submissions are reviewed and professionally edited, and only those of the highest quality are selected. Contributions are in English.
  • Roberts, L. (2007). Investigating real-time sentence processing in the second language. Stem-, Spraak- en Taalpathologie, 15, 115-127.

    Abstract

    Second language (L2) acquisition researchers have always been concerned with what L2 learners know about the grammar of the target language but more recently there has been growing interest in how L2 learners put this knowledge to use in real-time sentence comprehension. In order to investigate real-time L2 sentence processing, the types of constructions studied and the methods used are often borrowed from the field of monolingual processing, but the overall issues are familiar from traditional L2 acquisition research. These cover questions relating to L2 learners’ native-likeness, whether or not L1 transfer is in evidence, and how individual differences such as proficiency and language experience might have an effect. The aim of this paper is to provide for those unfamiliar with the field, an overview of the findings of a selection of behavioral studies that have investigated such questions, and to offer a picture of how L2 learners and bilinguals may process sentences in real time.
  • Robinson, J. D., & Stivers, T. (2001). Achieving activity transitions in primary-care encounters: From history taking to physical examination. Human Communication Research, 27(2), 253-298. doi:10.1111/j.1468-2958.2001.tb00782.x.
  • Roelofs, A. (2007). On the modelling of spoken word planning: Rejoinder to La Heij, Starreveld, and Kuipers (2007). Language and Cognitive Processes, 22(8), 1281-1286. doi:10.1080/01690960701462291.

    Abstract

    The author contests several claims of La Heij, Starreveld, and Kuipers (this issue) concerning the modelling of spoken word planning. The claims are about the relevance of error findings, the interaction between semantic and phonological factors, the explanation of word-word findings, the semantic relatedness paradox, and production rules.
  • Roelofs, A. (2007). A critique of simple name-retrieval models of spoken word planning. Language and Cognitive Processes, 22(8), 1237-1260. doi:10.1080/01690960701461582.

    Abstract

    Simple name-retrieval models of spoken word planning (Bloem & La Heij, 2003; Starreveld & La Heij, 1996) maintain (1) that there are two levels in word planning, a conceptual and a lexical phonological level, and (2) that planning a word in both object naming and oral reading involves the selection of a lexical phonological representation. Here, the name retrieval models are compared to more complex models with respect to their ability to account for relevant data. It appears that the name retrieval models cannot easily account for several relevant findings, including some speech error biases, types of morpheme errors, and context effects on the latencies of responding to pictures and words. New analyses of the latency distributions in previous studies also pose a challenge. More complex models account for all these findings. It is concluded that the name retrieval models are too simple and that the greater complexity of the other models is warranted
  • Roelofs, A. (2007). Attention and gaze control in picture naming, word reading, and word categorizing. Journal of Memory and Language, 57(2), 232-251. doi:10.1016/j.jml.2006.10.001.

    Abstract

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107–142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88–125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings.
  • Roelofs, A., Özdemir, R., & Levelt, W. J. M. (2007). Influences of spoken word planning on speech recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(5), 900-913. doi:10.1037/0278-7393.33.5.900.

    Abstract

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
  • Roelofs, A., & Lamers, M. (2007). Modelling the control of visual attention in Stroop-like tasks. In A. S. Meyer, L. R. Wheeldon, & A. Krott (Eds.), Automaticity and control in language processing (pp. 123-142). Hove: Psychology Press.

    Abstract

    The authors discuss the issue of how visual orienting, selective stimulus processing, and vocal response planning are related in Stroop-like tasks. The evidence suggests that visual orienting is dependent on both visual processing and verbal response planning. They also discuss the issue of selective perceptual processing in Stroop-like tasks. The evidence suggests that space-based and object-based attention lead to a Trojan horse effect in the classic Stroop task, which can be moderated by increasing the spatial distance between colour and word and by making colour and word part of different objects. Reducing the presentation duration of the colour-word stimulus or the duration of either the colour or word dimension reduces Stroop interference. This paradoxical finding was correctly simulated by the WEAVER++ model. Finally, the authors discuss evidence on the neural correlates of executive attention, in particular, the ACC. The evidence suggests that the ACC plays a role in regulation itself rather than only signalling the need for regulation.
  • Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.

    Abstract

    The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • De Ruiter, J. P. (2007). Some multimodal signals in humans. In I. Van de Sluis, M. Theune, E. Reiter, & E. Krahmer (Eds.), Proceedings of the Workshop on Multimodal Output Generation (MOG 2007) (pp. 141-148).

    Abstract

    In this paper, I will give an overview of some well-studied multimodal signals that humans produce while they communicate with other humans, and discuss the implications of those studies for HCI. I will first discuss a conceptual framework that allows us to distinguish between functional and sensory modalities. This distinction is important, as there are multiple functional modalities using the same sensory modality (e.g., facial expression and eye-gaze in the visual modality). A second theoretically important issue is redundancy. Some signals appear to be redundant with a signal in another modality, whereas others give new information or even appear to give conflicting information (see e.g., the work of Susan Goldin-Meadow on speech accompanying gestures). I will argue that multimodal signals are never truly redundant. First, many gestures that appear at first sight to express the same meaning as the accompanying speech generally provide extra (analog) information about manner, path, etc. Second, the simple fact that the same information is expressed in more than one modality is itself a communicative signal. Armed with this conceptual background, I will then proceed to give an overview of some multimodal signals that have been investigated in human-human research, and the level of understanding we have of the meaning of those signals. The latter issue is especially important for potential implementations of these signals in artificial agents. First, I will discuss pointing gestures. I will address the issue of the timing of pointing gestures relative to the speech they are supposed to support, the mutual dependency between pointing gestures and speech, and discuss the existence of alternative ways of pointing from other cultures. The most frequent form of pointing that does not involve the index finger is a cultural practice called lip-pointing which employs two visual functional modalities, mouth-shape and eye-gaze, simultaneously for pointing. Next, I will address the issue of eye-gaze. A classical study by Kendon (1967) claims that there is a systematic relationship between eye-gaze (at the interlocutor) and turn-taking states. Research at our institute has shown that this relationship is weaker than has often been assumed. If the dialogue setting contains a visible object that is relevant to the dialogue (e.g., a map), the rate of eye-gaze-at-other drops dramatically and its relationship to turn taking disappears completely. The implications for machine generated eye-gaze are discussed. Finally, I will explore a theoretical debate regarding spontaneous gestures. It has often been claimed that the class of gestures that is called iconic by McNeill (1992) is a “window into the mind”. That is, they are claimed to give the researcher (or even the interlocutor) a direct view into the speaker’s thought, without being obscured by the complex transformations that take place when transforming a thought into a verbal utterance. I will argue that this is an illusion. Gestures can be shown to be specifically designed such that the listener can be expected to interpret them. Although the transformations carried out to express a thought in gesture are indeed (partly) different from the corresponding transformations for speech, they are a) complex, and b) severely understudied. This obviously has consequences both for the gesture research agenda, and for the generation of iconic gestures by machines.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

    In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together, however they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Hagoort, P., & Toni, I. (2007). On the origins of intentions. In P. Haggard, Y. Rossetti, & M. Kawato (Eds.), Sensorimotor foundations of higher cognition (pp. 593-610). Oxford: Oxford University Press.
  • De Ruiter, J. P., & Enfield, N. J. (2007). The BIC model: A blueprint for the communicator. In C. Stephanidis (Ed.), Universal access in Human-Computer Interaction: Applications and services (pp. 251-258). Berlin: Springer.
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • Sandberg, A., Lansner, A., & Petersson, K. M. (2001). Selective enhancement of recall through plasticity modulation in an autoassociative memory. Neurocomputing, 38(40), 867-873. doi:10.1016/S0925-2312(01)00363-0.

    Abstract

    The strength of a memory trace is modulated by a variety of factors such as arousal, attention, context, type of processing during encoding, salience and novelty of the experience. Some of these factors can be modeled as a variable plasticity level in the memory system, controlled by arousal or relevance-estimating systems. We demonstrate that a Bayesian confidence propagation neural network with learning time constant modulated in this way exhibits enhanced recall of an item tagged as salient. Proactive and retroactive inhibition of other items is also demonstrated as well as an inverted U-shape response to overall plasticity
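
    The sketch below is a much-simplified stand-in for the idea (a Hopfield-style Hebbian memory rather than the Bayesian confidence propagation network used in the paper): one pattern is stored with a larger plasticity value, which tends to enlarge its basin of attraction at the expense of the other patterns, echoing the enhanced recall and the inhibition of other items described above. All sizes, noise levels, and the plasticity value are assumptions.

    ```python
    # Hopfield-style autoassociative memory with per-pattern plasticity weighting.
    # Pattern 0 is tagged as "salient" and stored with a higher learning rate.
    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 200, 25
    patterns = rng.choice([-1, 1], size=(P, N))
    plasticity = np.ones(P)
    plasticity[0] = 3.0                                   # salience-modulated plasticity

    # Hebbian outer-product learning, weighted by per-pattern plasticity.
    W = sum(g * np.outer(p, p) for g, p in zip(plasticity, patterns)) / N
    np.fill_diagonal(W, 0.0)

    def recall(cue, steps=20):
        s = cue.astype(float).copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1.0
        return s

    def overlap(a, b):
        return float(a @ b) / len(a)

    for idx in (0, 1):                                    # salient vs. ordinary pattern
        noisy = patterns[idx] * rng.choice([1, -1], size=N, p=[0.65, 0.35])
        print("pattern", idx, "recall overlap:",
              round(overlap(recall(noisy), patterns[idx]), 2))
    ```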
  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations are used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
  • Scharenborg, O., Ernestus, M., & Wan, V. (2007). Segmentation of speech: Child's play? In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1953-1956). Adelaide: Causal Productions.

    Abstract

    The difficulty of the task of segmenting a speech signal into its words is immediately clear when listening to a foreign language; it is much harder to segment the signal into its words, since the words of the language are unknown. Infants are faced with the same task when learning their first language. This study provides a better understanding of the task that infants face while learning their native language. We employed an automatic algorithm on the task of speech segmentation without prior knowledge of the labels of the phonemes. An analysis of the boundaries erroneously placed inside a phoneme showed that the algorithm consistently placed additional boundaries in phonemes in which acoustic changes occur. These acoustic changes may be as great as the transition from the closure to the burst of a plosive or as subtle as the formant transitions in low or back vowels. Moreover, we found that glottal vibration may attenuate the relevance of acoustic changes within obstruents. An interesting question for further research is how infants learn to overcome the natural tendency to segment these ‘dynamic’ phonemes.
  • Scharenborg, O., Sturm, J., & Boves, L. (2001). Business listings in automatic directory assistance. In Interspeech - Eurospeech 2001 - 7th European Conference on Speech Communication and Technology (pp. 2381-2384). ISCA Archive.

    Abstract

    So far, most attempts to automate Directory Assistance services have focused on private listings, because it is not known precisely how callers will refer to business listings. The research described in this paper, carried out in the SMADA project, tries to fill this gap. The aim of the research is to model the expressions people use when referring to a business listing by means of rules, in order to automatically create a vocabulary, which can be part of an automated DA service. In this paper a rule-based procedure is proposed, which derives rules from the expressions people use. These rules are then used to automatically create expressions from directory listings. Two categories of businesses, viz. hospitals and the hotel and catering industry, are used to explain this procedure. Results for these two categories are used to discuss the problem of the over- and undergeneration of expressions.
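
    An illustrative sketch of the rule-based idea (not the actual SMADA rules): a handful of string rules expand a formal directory listing into expressions a caller might plausibly use. The example listing, the rules, and the suffix list are all invented.

    ```python
    # Toy rule set: derive caller expressions from an official business listing.
    def expand_listing(name: str, category: str) -> set[str]:
        base = name.lower()
        variants = {base}
        # Rule 1: drop legal-form suffixes ("b.v.", "n.v.", "inc.", ...).
        for suffix in (" b.v.", " n.v.", " inc."):
            if base.endswith(suffix):
                base = base[: -len(suffix)].strip()
                variants.add(base)
        # Rule 2: callers may add the category word if it is not part of the name.
        if category not in base:
            variants.add(f"{category} {base}")
            variants.add(f"{base} {category}")
        # Rule 3: callers may drop the category word if it is part of the name.
        if category in base:
            variants.add(" ".join(base.replace(category, "").split()))
        return variants

    print(expand_listing("Grand Hotel Central B.V.", "hotel"))
    ```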
  • Scharenborg, O., & Wan, V. (2007). Can unquantised articulatory feature continuums be modelled? In INTERSPEECH 2007 - 8th Annual Conference of the International Speech Communication Association (pp. 2473-2476). ISCA Archive.

    Abstract

    Articulatory feature (AF) modelling of speech has received a considerable amount of attention in automatic speech recognition research. Although termed ‘articulatory’, previous definitions make certain assumptions that are invalid, for instance, that articulators ‘hop’ from one fixed position to the next. In this paper, we studied two methods, based on support vector classification (SVC) and regression (SVR), in which the articulation continuum is modelled without being restricted to using discrete AF value classes. A comparison with a baseline system trained on quantised values of the articulation continuum showed that both SVC and SVR outperform the baseline for two of the three investigated AFs, with improvements up to 5.6% absolute.
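
    The sketch below shows the general shape of the regression variant mentioned above, using scikit-learn's SVR to map acoustic frame vectors onto a continuous articulatory value rather than quantised classes. The synthetic features, the tanh target, and the hyperparameter values are assumptions; the paper's own data and features are not reproduced.

    ```python
    # Support vector regression of a continuous (unquantised) articulatory value
    # from acoustic frame vectors, on synthetic stand-in data.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 13))                            # e.g. 13 MFCCs per frame
    true_w = rng.normal(size=13)
    y = np.tanh(X @ true_w) + 0.05 * rng.normal(size=500)     # continuum in (-1, 1)

    model = SVR(kernel="rbf", C=1.0, epsilon=0.05).fit(X[:400], y[:400])
    pred = model.predict(X[400:])
    print("mean absolute error:", float(np.mean(np.abs(pred - y[400:]))))
    ```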
  • Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.

    Abstract

    This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as ‘rare city’ names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs from a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances; each containing one rare city name. It appeared that SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM to extend the lexicon of the second stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways for improving the overall performance in the context of an application.
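
    A much-simplified stand-in for the second-stage lookup (SpeM itself searches phone graphs rather than strings): the phone sequence hypothesised for the OOV region is matched against a fallback lexicon and the N best city names are returned for insertion into the second-pass lexicon. The lexicon entries, phone notation, and similarity measure are invented.

    ```python
    # Toy second-stage lookup: rank fallback-lexicon city names by similarity to
    # the phone string hypothesised for the out-of-vocabulary region.
    from difflib import SequenceMatcher

    FALLBACK_LEXICON = {            # city name -> canonical phone string (toy)
        "nijmegen": "n EI m ei g @ n",
        "nijkerk": "n EI k E r k",
        "nieuwegein": "n i w @ g EI n",
    }

    def nbest_city_names(oov_phones: str, n: int = 2) -> list[str]:
        return sorted(
            FALLBACK_LEXICON,
            key=lambda city: SequenceMatcher(
                None, oov_phones.split(), FALLBACK_LEXICON[city].split()
            ).ratio(),
            reverse=True,
        )[:n]

    # Phone string produced by the phone-loop OOV model for an unknown city name.
    print(nbest_city_names("n EI m ei g @"))
    ```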
  • Scharenborg, O., ten Bosch, L., & Boves, L. (2007). Early decision making in continuous speech. In M. Grimm, & K. Kroschel (Eds.), Robust speech recognition and understanding (pp. 333-350). I-Tech Education and Publishing.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.

    Abstract

    Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences, and identify the words that were recognised on the basis of a trace back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition, that is able to model the human capability of ‘early recognition’ by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered as recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM’s performance increases if the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
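
    As a schematic illustration of the two predictor types described above (the actual activation computation in SpeM is more involved), the sketch below treats a word as recognised 'early' when its activation is sufficiently far above its best competitor and enough of its phones have been processed. The activation formula, thresholds, and example scores are assumptions.

    ```python
    # Toy early-recognition decision combining an activation margin with the
    # fraction of the word's phones already processed.
    import math

    def activation(neg_log_likelihood: float) -> float:
        # Higher likelihood (lower -log L) -> higher activation.
        return math.exp(-neg_log_likelihood)

    def recognised_early(word_nll, best_competitor_nll,
                         phones_seen, phones_total,
                         margin=2.0, min_fraction=0.6):
        act_ratio = activation(word_nll) / max(activation(best_competitor_nll), 1e-12)
        enough_evidence = phones_seen / phones_total >= min_fraction
        return act_ratio >= margin and enough_evidence

    # Toy usage: 5 of 7 phones processed, target clearly ahead of its competitor.
    print(recognised_early(word_nll=3.1, best_competitor_nll=4.4,
                           phones_seen=5, phones_total=7))
    ```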
  • Scharenborg, O. (2007). Reaching over the gap: A review of efforts to link human and automatic speech recognition research. Speech Communication, 49, 336-347. doi:10.1016/j.specom.2007.01.009.

    Abstract

    The fields of human speech recognition (HSR) and automatic speech recognition (ASR) both investigate parts of the speech recognition process and have word recognition as their central issue. Although the research fields appear closely related, their aims and research methods are quite different. Despite these differences, there has lately been a growing interest in possible cross-fertilisation. Researchers from both ASR and HSR are realising the potential benefit of looking at the research field on the other side of the ‘gap’. In this paper, we provide an overview of past and present efforts to link human and automatic speech recognition research and present an overview of the literature describing the performance difference between machines and human listeners. The focus of the paper is on the mutual benefits to be derived from establishing closer collaborations and knowledge interchange between ASR and HSR. The paper ends with an argument for more and closer collaborations between researchers of ASR and HSR to further improve research in both fields.
  • Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.

    Abstract

    The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the AF values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that in order to come to a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations.
  • Scheu, O., & Zinn, C. (2007). How did the e-learning session go? The student inspector. In Proceedings of the 13th International Conference on Artificial Intelligence and Education (AIED 2007). Amsterdam: IOS Press.

    Abstract

    Good teachers know their students, and exploit this knowledge to adapt or optimise their instruction. Traditional teachers know their students because they interact with them face-to-face in classroom or one-to-one tutoring sessions. In these settings, they can build student models, i.e., by exploiting the multi-faceted nature of human-human communication. In distance-learning contexts, teacher and student have to cope with the lack of such direct interaction, and this must have detrimental effects for both teacher and student. In a past study we have analysed teacher requirements for tracking student actions in computer-mediated settings. Given the results of this study, we have devised and implemented a tool that allows teachers to keep track of their learners' interaction in e-learning systems. We present the tool's functionality and user interfaces, and an evaluation of its usability.
  • Schiller, N. O., Greenhall, J. A., Shelton, J. R., & Caramazza, A. (2001). Serial order effects in spelling errors: Evidence from two dysgraphic patients. Neurocase, 7, 1-14. doi:10.1093/neucas/7.1.1.

    Abstract

    This study reports data from two dysgraphic patients, TH and PB, whose errors in spelling most often occurred in the final part of words. The probability of making an error increased monotonically towards the end of words. Long words were affected more than short words, and performance was similar across different output modalities (writing, typing and oral spelling). This error performance was found despite the fact that both patients showed normal ability to repeat the same words orally and to access their full spelling in tasks that minimized the involvement of working memory. This pattern of performance locates their deficit to the mechanism that keeps graphemic representations active for further processing, and shows that the functioning of this mechanism is not controlled or "refreshed" by phonological (or articulatory) processes. Although the overall performance pattern is most consistent with a deficit to the graphemic buffer, the strong tendency for errors to occur at the ends of words is unlike many classic "graphemic buffer patients" whose errors predominantly occur at word-medial positions. The contrasting patterns are discussed in terms of different types of impairment to the graphemic buffer.
  • Schulte im Walde, S., Melinger, A., Roth, M., & Weber, A. (2007). An empirical characterization of response types in German association norms. In Proceedings of the GLDV workshop on lexical-semantic and ontological resources.
  • Segurado, R., Hamshere, M. L., Glaser, B., Nikolov, I., Moskvina, V., & Holmans, P. A. (2007). Combining linkage data sets for meta-analysis and mega-analysis: the GAW15 rheumatoid arthritis data set. BMC Proceedings, 1(Suppl 1): S104.

    Abstract

    We have used the genome-wide marker genotypes from Genetic Analysis Workshop 15 Problem 2 to explore joint evidence for genetic linkage to rheumatoid arthritis across several samples. The data consisted of four high-density genome scans on samples selected for rheumatoid arthritis. We cleaned the data, removed intermarker linkage disequilibrium, and assembled the samples onto a common genetic map using genome sequence positions as a reference for map interpolation. The individual studies were combined first at the genotype level (mega-analysis) prior to a multipoint linkage analysis on the combined sample, and second using the genome scan meta-analysis method after linkage analysis of each sample. The two approaches were compared, and both gave strong support to the HLA locus on chromosome 6 as a susceptibility locus. Other regions of interest include loci on chromosomes 11, 2, and 12.
  • Senft, G. (2007). Reference and 'référence dangereuse' to persons in Kilivila: An overview and a case study. In N. Enfield, & T. Stivers (Eds.), Person reference in interaction: Linguistic, cultural, and social perspectives (pp. 309-337). Cambridge: Cambridge University Press.

    Abstract

    Based on the conversation analysts’ insights into the various forms of third person reference in English, this paper first presents the inventory of forms Kilivila, the Austronesian language of the Trobriand Islanders of Papua New Guinea, offers its speakers for making such references. To illustrate such references to third persons in talk-in-interaction in Kilivila, a case study on gossiping is presented in the second part of the paper. This case study shows that ambiguous anaphoric references to two first mentioned third persons turn out to not only exceed and even violate the frame of a clearly defined situational-intentional variety of Kilivila that is constituted by the genre “gossip”, but also that these references are extremely dangerous for speakers in the Trobriand Islanders’ society. I illustrate how this culturally dangerous situation escalates and how other participants of the group of gossiping men try to “repair” this violation of the frame of a culturally defined and metalinguistically labelled “way of speaking”. The paper ends with some general remarks on how the understanding of forms of person reference in a language is dependent on the culture specific context in which they are produced.
  • Senft, G. (2007). The Nijmegen space games: Studying the interrelationship between language, culture and cognition. In J. Wassmann, & K. Stockhaus (Eds.), Person, space and memory in the contemporary Pacific: Experiencing new worlds (pp. 224-244). New York: Berghahn Books.

    Abstract

    One of the central aims of the "Cognitive Anthropology Research Group" (since 1998 the "Department of Language and Cognition of the MPI for Psycholinguistics") is to research the relationship between language, culture and cognition and the conceptualization of space in various languages and cultures. Ever since its foundation in 1991 the group has been developing methods to elicit cross-culturally and cross-linguistically comparable data for this research project. After a brief summary of the central considerations that served as guidelines for the development of these elicitation devices, this paper first presents a broad selection of the "space games" developed and used for data elicitation in the group's various fieldsites so far. The paper then discusses the advantages and shortcomings of these data elicitation devices. Finally, it is argued that methodologists developing such devices find themselves in a position somewhere between Scylla and Charybdis - at least, if they take the requirement seriously that the elicited data should be comparable not only cross-culturally but also cross-linguistically.
  • Senft, G. (2001). Das Präsentieren des Forschers im Felde: Eine Einführung auf den Trobriand Inseln. In C. Sütterlin, & F. S. Salter (Eds.), Irenäus Eibl-Eibesfeldt: Zu Person und Werk, Festschrift zum 70. Geburtstag (pp. 188-197). Frankfurt am Main: Peter Lang.
  • Senft, G. (2001). [Review of the book Handbook of language and ethnic identity ed. by Joshua A. Fishman]. Linguistics, 39, 188-190. doi:10.1515/ling.2001.004.
  • Senft, G. (2001). [Review of the book Language Death by David Crystal]. Linguistics, 39, 815-822. doi:10.1515/ling.2001.032.
  • Senft, G. (2001). [Review of the book Malinowski's Kiriwina: Fieldwork photography 1915-1918 by Michael W. Young]. Paideuma, 47, 260-263.
  • Senft, G. (2001). [Review of the CD Betel Nuts by Christopher Roberts (1996)]. Kulele, 3, 115-122.

    Abstract

    (TMCD 9602). Taipei: Trees Music & Art, 12-1, Lane 10, Sec. 2, Hsin Yi Rd., Taipei, TAIWAN. Distributed by Sony Music Entertainment (Taiwan) Ltd., 6th fl., No. 35, Lane 11, Kwang-Fu N. Rd., Taipei, TAIWAN (CD accompanied by a full-color booklet).
  • Senft, G. (2007). "Ich weiß nicht, was soll es bedeuten.." - Ethnolinguistische Winke zur Rolle von umfassenden Metadaten bei der (und für die) Arbeit mit Corpora. In W. Kallmeyer, & G. Zifonun (Eds.), Sprachkorpora - Datenmengen und Erkenntnisfortschritt (pp. 152-168). Berlin: Walter de Gruyter.

    Abstract

    When working as a native speaker of German with corpora of spoken or written German, one rarely reflects on the wealth of culture-specific information encoded in such texts, especially when the data are contemporary texts. In most cases one has no difficulty at all with the background knowledge that the data presuppose and treat as common knowledge. If, however, one looks at corpus data documenting other, above all non-Indo-European, languages, one quickly realises how much culture-specific knowledge is required to understand these data adequately. In my talk I illustrate this observation with an example from my corpus of Kilivila, the Austronesian language of the Trobriand Islanders of Papua New Guinea. On the basis of a short excerpt from a recording, roughly 26 minutes long in total, documenting what six Trobrianders gossip about and how they do so, I show what a hearer or reader of such a brief excerpt must know in order not only to be able to follow the conversation at all, but also to understand what is going on and why a conversation that at first glance seems entirely everyday suddenly acquires enormous explosiveness and significance for a Trobriander. Against the background of this example, I conclude by pointing out how absolutely necessary it is, in all corpora, to make such culture-specific information explicit by means of so-called metadata when data materials are catalogued and annotated.
  • Senft, G. (2007). [Review of the book Bislama reference grammar by Terry Crowley]. Linguistics, 45(1), 235-239.
  • Senft, G. (2007). [Review of the book Serial verb constructions - A cross-linguistic typology by Alexandra Y. Aikhenvald and Robert M. W. Dixon]. Linguistics, 45(4), 833-840. doi:10.1515/LING.2007.024.
  • Senft, G. (2007). Language, culture and cognition: Frames of spatial reference and why we need ontologies of space [Abstract]. In A. G. Cohn, C. Freksa, & B. Nebel (Eds.), Spatial cognition: Specialization and integration (p. 12).

    Abstract

    One of the many results of the "Space" research project conducted at the MPI for Psycholinguistics is that there are three "Frames of spatial Reference" (FoRs), the relative, the intrinsic and the absolute FoR. Cross-linguistic research showed that speakers who prefer one FoR in verbal spatial references rely on a comparable coding system for memorizing spatial configurations and for making inferences with respect to these spatial configurations in non-verbal problem solving. Moreover, research results also revealed that in some languages these verbal FoRs also influence gestural behavior. These results document the close interrelationship between language, culture and cognition in the domain "Space". The proper description of these interrelationships in the spatial domain requires language and culture specific ontologies.
  • Senft, G. (2007). Nominal classification. In D. Geeraerts, & H. Cuyckens (Eds.), The Oxford handbook of cognitive linguistics (pp. 676-696). Oxford: Oxford University Press.

    Abstract

    This handbook chapter summarizes some of the problems of nominal classification in language, presents and illustrates the various systems or techniques of nominal classification, and points out why nominal classification is one of the most interesting topics in Cognitive Linguistics.
  • Senft, G. (2001). Frames of spatial reference in Kilivila. Studies in Language, 25(3), 521-555. doi:10.1075/sl.25.3.05sen.

    Abstract

    Members of the MPI for Psycholinguistics are researching the interrelationship between language, cognition and the conceptualization of space in various languages. Research results show that there are three frames of spatial reference, the absolute, the relative, and the intrinsic frame of reference. This study first presents results of this research in general and then discusses the results for Kilivila. Speakers of this Austronesian language prefer the intrinsic frame of reference for the location of objects with respect to each other in a given spatial configuration. But they prefer an absolute frame of reference system in referring to the spatial orientation of objects in a given spatial configuration. Moreover, the hypothesis is confirmed that languages seem to influence the choice and the kind of conceptual parameters their speakers use to solve non-verbal problems within the domain of space.
  • Senft, G. (2001). Kevalikuliku: Earthquake magic from the Trobriand Islands (for Unshakables). In A. Pawley, M. Ross, & D. Tryon (Eds.), The boy from Bundaberg: Studies in Melanesian linguistics in honour of Tom Dutton (pp. 323-331). Canberra: Pacific Linguistics.
  • Senft, G. (2001). Sprache, Kognition und Konzepte des Raumes in verschiedenen Kulturen: Affiziert sprachliche Relativität die Philosophie? In L. Salwiczek, & W. Wickler (Eds.), Wie wir die Welt erkennen: Erkenntnisweisen im interdisziplinären Diskurs (pp. 203-242). Freiburg: Karl Alber.
  • Senft, G. (2001). Ritual communication and linguistic ideology [Comment on Joel Robbins]. Current Anthropology, 42, 606.
  • Senft, G., Majid, A., & Levinson, S. C. (2007). The language of taste. In A. Majid (Ed.), Field Manual Volume 10 (pp. 42-45). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492913.
  • Seuren, P. A. M. (2007). The theory that dare not speak its name: A rejoinder to Mufwene and Francis. Language Sciences, 29(4), 571-573. doi:10.1016/j.langsci.2007.02.001.
  • Seuren, P. A. M. (2001). A view of language. Oxford: Oxford University Press.
  • Seuren, P. A. M. (2001). [Review of the book The real professor Higgins: The life and career of Daniel Jones by Beverley Collins and Inger M. Mees]. Linguistics, 39(4), 822-832. doi:10.1515/ling.2001.032.
  • Seuren, P. A. M. (2001). Language and philosophy. In N. J. Smelser, & P. B. Baltes (Eds.), International encyclopedia of the social and behavioral sciences. Volume 12 (pp. 8297-8303). Amsterdam, NL: Elsevier.
  • Seuren, P. A. M. (2001). Lexical meaning and metaphor. In E. N. Enikö (Ed.), Cognition in language use (pp. 422-431). Antwerp, Belgium: International Pragmatics Association (IPrA).
  • Seuren, P. A. M. (2001). The cognitive dimension in language study. Folia Linguistica, 35(3-4), 209-242. doi:10.1515/flin.2001.35.3-4.209.
  • Seuren, P. A. M. (2001). Simple and transparent [Commentary on The world's simplest grammars are creole grammars by John H. McWhorter]. Linguistic Typology, 5(2-3), 176-180. doi:10.1515/lity.2001.002.
  • Seuren, P. A. M. (2001). Sprachwissenschaft des Abendlandes. Eine Ideengeschichte von der Antike bis zur Gegenwart. Hohengehren: Schneider Verlag.

    Abstract

    Translation of the first four chapters of Western linguistics: An historical introduction (1998)
  • Seuren, P. A. M., Capretta, V., & Geuvers, H. (2001). The logic and mathematics of occasion sentences. Linguistics & Philosophy, 24(5), 531-595. doi:10.1023/A:1017592000325.

    Abstract

    The prime purpose of this paper is, first, to restore to discourse-bound occasion sentences their rightful central place in semantics and secondly, taking these as the basic propositional elements in the logical analysis of language, to contribute to the development of an adequate logic of occasion sentences and a mathematical (Boolean) foundation for such a logic, thus preparing the ground for more adequate semantic, logical and mathematical foundations of the study of natural language. Some of the insights elaborated in this paper have appeared in the literature over the past thirty years, and a number of new developments have resulted from them. The present paper aims at providing an integrated conceptual basis for this new development in semantics. In Section 1 it is argued that the reduction by translation of occasion sentences to eternal sentences, as proposed by Russell and Quine, is semantically and thus logically inadequate. Natural language is a system of occasion sentences, eternal sentences being merely boundary cases. The logic has fewer tasks than is standardly assumed, as it excludes semantic calculi, which depend crucially on information supplied by cognition and context and thus belong to cognitive psychology rather than to logic. For sentences to express a proposition and thus be interpretable and informative, they must first be properly anchored in context. A proposition has a truth value when it is, moreover, properly keyed in the world, i.e. is about a situation in the world. Section 2 deals with the logical properties of natural language. It argues that presuppositional phenomena require trivalence and presents the trivalent logic PPC3, with two kinds of falsity and two negations. It introduces the notion of Σ-space for a sentence A (or /A/, the set of situations in which A is true) as the basis of logical model theory, and the notion of /PA/ (the Σ-space of the presuppositions of A), functioning as a ‘private’ subuniverse for /A/. The trivalent Kleene calculus is reinterpreted as a logical account of vagueness, rather than of presupposition. PPC3 and the Kleene calculus are refinements of standard bivalent logic and can be combined into one logical system. In Section 3 the adequacy of PPC3 as a truth-functional model of presupposition is considered more closely and given a Boolean foundation. In a noncompositional extended Boolean algebra, three operators are defined: 1a for the conjoined presuppositions of a, ã for the complement of a within 1a, and â for the complement of 1a within Boolean 1. The logical properties of this extended Boolean algebra are axiomatically defined and proved for all possible models. Proofs are provided of the consistency and the completeness of the system. Section 4 is a provisional exploration of the possibility of using the results obtained for a new discourse-dependent account of the logic of modalities in natural language. The overall result is a modified and refined logical and model-theoretic machinery, which takes into account both the discourse-dependency of natural language sentences and the necessity of selecting a key in the world before a truth value can be assigned.
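    For readers unfamiliar with trivalent logics, the sketch below implements only the standard strong Kleene connectives that the abstract mentions as a reference point; it does not reproduce PPC3, whose two kinds of falsity and two negations are defined in the paper itself.

```python
# A minimal sketch of the standard strong Kleene trivalent connectives
# (T, F, and U for "undefined"), mentioned in the abstract as being
# reinterpreted as an account of vagueness. This is NOT Seuren's PPC3,
# which distinguishes two kinds of falsity and two negations.
T, U, F = 1.0, 0.5, 0.0   # encode the three values numerically

def neg(a):        # Kleene negation: swaps T and F, leaves U untouched
    return 1.0 - a

def conj(a, b):    # conjunction = minimum of the two values
    return min(a, b)

def disj(a, b):    # disjunction = maximum of the two values
    return max(a, b)

def impl(a, b):    # material implication defined as not-a or b
    return disj(neg(a), b)

# Truth table for conjunction: U propagates unless the other conjunct is F.
names = {T: "T", U: "U", F: "F"}
for a in (T, U, F):
    print("  ".join(f"{names[a]}∧{names[b]}={names[conj(a, b)]}" for b in (T, U, F)))
```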
  • Seyfeddinipur, M., & Kita, S. (2001). Gestures and dysfluencies in speech. In C. Cavé, I. Guaïtella, & S. Santi (Eds.), Oralité et gestualité: Interactions et comportements multimodaux dans la communication. Actes du colloque ORAGE 2001 (pp. 266-270). Paris, France: Éditions L'Harmattan.
  • Siddiqui, M. R., Meisner, S., Tosh, K., Balakrishnan, K., Ghei, S., Fisher, S. E., Golding, M., Narayan, N. P. S., Sitaraman, T., Sengupta, U., Pitchappan, R., & Hill, A. V. (2001). A major susceptibility locus for leprosy in India maps to chromosome 10p13 [Letter]. Nature Genetics, 27, 439-441. doi:10.1038/86958.

    Abstract

    Leprosy, a chronic infectious disease caused by Mycobacterium leprae, is prevalent in India, where about half of the world's estimated 800,000 cases occur. A role for the genetics of the host in variable susceptibility to leprosy has been indicated by familial clustering, twin studies, complex segregation analyses and human leukocyte antigen (HLA) association studies. We report here a genetic linkage scan of the genomes of 224 families from South India, containing 245 independent affected sibpairs with leprosy, mainly of the paucibacillary type. In a two-stage genome screen using 396 microsatellite markers, we found significant linkage (maximum lod score (MLS) = 4.09, P < 2x10-5) on chromosome 10p13 for a series of neighboring microsatellite markers, providing evidence for a major locus for this prevalent infectious disease. Thus, despite the polygenic nature of infectious disease susceptibility, some major, non-HLA-linked loci exist that may be mapped through obtainable numbers of affected sibling pairs.
  • Slobin, D. I., & Bowerman, M. (2007). Interfaces between linguistic typology and child language research. Linguistic Typology, 11(1), 213-226. doi:10.1515/LINGTY.2007.015.
  • Smits, R. (2001). Hierarchical categorization of coarticulated phonemes: A theoretical analysis. Perception and Psychophysics, 63, 1109-1139. doi:10.3758/BF03194529.

    Abstract

    This article is concerned with the question of how listeners recognize coarticulated phonemes. The problem is approached from a pattern classification perspective. First, the potential acoustical effects of coarticulation are defined in terms of the patterns that form the input to a classifier. Next, a categorization model called HICAT is introduced that incorporates hierarchical dependencies to optimally deal with this input. The model allows the position, orientation, and steepness of one phoneme boundary to depend on the perceived value of a neighboring phoneme. It is argued that, if listeners do behave like statistical pattern recognizers, they may use the categorization strategies incorporated in the model. The HICAT model is compared with existing categorization models, among which are the fuzzy logical model of perception and Nearey’s diphone-biased secondary-cue model. Finally, a method is presented by which categorization strategies that are likely to be used by listeners can be predicted from distributions of acoustical cues as they occur in natural speech.
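    The hierarchical dependency that HICAT formalises can be pictured with a toy example. The sketch below is not the model itself: the cue dimensions, boundary positions, and the assumption that the vowel is categorised first are all invented for illustration of the idea that one phoneme boundary depends on the perceived value of a neighbouring phoneme.

```python
# Toy illustration of a hierarchical categorization dependency in the
# spirit of HICAT: the boundary used for the fricative along one acoustic
# cue is allowed to shift depending on which vowel was perceived.
# All cue values and boundary positions below are invented.

def categorize_vowel(f2_hz):
    # Crude front/back split on the second formant (hypothetical boundary).
    return "i" if f2_hz > 1800 else "u"

# Vowel-conditioned boundaries on the fricative's spectral centre of gravity:
# anticipatory lip rounding before "u" lowers fricative energy, so the
# /s/-/sh/ boundary is placed lower when the perceived vowel is "u".
FRICATIVE_BOUNDARY_HZ = {"i": 5500, "u": 4800}

def categorize_fricative(centre_of_gravity_hz, perceived_vowel):
    boundary = FRICATIVE_BOUNDARY_HZ[perceived_vowel]
    return "s" if centre_of_gravity_hz > boundary else "sh"

def categorize_syllable(centre_of_gravity_hz, f2_hz):
    vowel = categorize_vowel(f2_hz)            # classify the vowel first
    fricative = categorize_fricative(centre_of_gravity_hz, vowel)
    return fricative + vowel

# The same fricative cue (5100 Hz) is labelled differently depending on
# the perceived vowel: "sh" before /i/ but "s" before /u/.
print(categorize_syllable(5100, f2_hz=2200))   # -> "shi"
print(categorize_syllable(5100, f2_hz=1200))   # -> "su"
```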
  • Smits, R. (2001). Evidence for hierarchical categorization of coarticulated phonemes. Journal of Experimental Psychology: Human Perception and Performance, 27, 1145-1162. doi:10.1037/0096-1523.27.5.1145.

    Abstract

    The reported research investigates how listeners recognize coarticulated phonemes. First, 2 data sets from experiments on the recognition of coarticulated phonemes published by D. H. Whalen (1989) are reanalyzed. The analyses indicate that listeners used categorization strategies involving a hierarchical dependency. Two new experiments are reported investigating the production and perception of fricative-vowel syllables. On the basis of measurements of acoustic cues on a large set of natural utterances, it was predicted that listeners would use categorization strategies involving a dependency of the fricative categorization on the perceived vowel. The predictions were tested in a perception experiment using a 2-dimensional synthetic fricative-vowel continuum. Model analyses of the results pooled across listeners confirmed the predictions. Individual analyses revealed some variability in the categorization dependencies used by different participants.
  • Snijders, T. M., Kooijman, V., Cutler, A., & Hagoort, P. (2007). Neurophysiological evidence of delayed segmentation in a foreign language. Brain Research, 1178, 106-113. doi:10.1016/j.brainres.2007.07.080.

    Abstract

    Previous studies have shown that segmentation skills are language-specific, making it difficult to segment continuous speech in an unfamiliar language into its component words. Here we present the first study capturing the delay in segmentation and recognition in the foreign listener using ERPs. We compared the ability of Dutch adults and of English adults without knowledge of Dutch (‘foreign listeners’) to segment familiarized words from continuous Dutch speech. We used the known effect of repetition on the event-related potential (ERP) as an index of recognition of words in continuous speech. Our results show that word repetitions in isolation are recognized with equivalent facility by native and foreign listeners, but word repetitions in continuous speech are not. First, words familiarized in isolation are recognized faster by native than by foreign listeners when they are repeated in continuous speech. Second, when words that have previously been heard only in a continuous-speech context re-occur in continuous speech, the repetition is detected by native listeners, but is not detected by foreign listeners. A preceding speech context facilitates word recognition for native listeners, but delays or even inhibits word recognition for foreign listeners. We propose that the apparent difference in segmentation rate between native and foreign listeners is grounded in the difference in language-specific skills available to the listeners.
  • Snowdon, C. T., & Cronin, K. A. (2007). Cooperative breeders do cooperate. Behavioural Processes, 76, 138-141. doi:10.1016/j.beproc.2007.01.016.

    Abstract

    Bergmüller et al. (2007) make an important contribution to studies of cooperative breeding and provide a theoretical basis for linking the evolution of cooperative breeding with cooperative behavior. We have long been involved in empirical research on the only family of nonhuman primates to exhibit cooperative breeding, the Callitrichidae, which includes marmosets and tamarins, with studies in both field and captive contexts. In this paper we expand on three themes from Bergmüller et al. (2007) with empirical data. First we provide data in support of the importance of helpers and the specific benefits that helpers can gain in terms of fitness. Second, we suggest that mechanisms of rewarding helpers are more common and more effective in maintaining cooperative breeding than punishments. Third, we present a summary of our own research on cooperative behavior in cotton-top tamarins (Saguinus oedipus) where we find greater success in cooperative problem solving than has been reported for non-cooperatively breeding species.
  • Soto-Faraco, S., Sebastian-Galles, N., & Cutler, A. (2001). Segmental and suprasegmental mismatch in lexical access. Journal of Memory and Language, 45, 412-432. doi:10.1006/jmla.2000.2783.

    Abstract

    Four cross-modal priming experiments in Spanish addressed the role of suprasegmental and segmental information in the activation of spoken words. Listeners heard neutral sentences ending with word fragments (e.g., princi-) and made lexical decisions on letter strings presented at fragment offset. Responses were compared for fragment primes that fully matched the spoken form of the initial portion of target words, versus primes that mismatched in a single element (stress pattern; one vowel; one consonant), versus control primes. Fully matching primes always facilitated lexical decision responses, in comparison to the control condition, while mismatching primes always produced inhibition. The respective strength of the contribution of stress, vowel, and consonant (one feature mismatch or more) information did not differ statistically. The results support a model of spoken-word recognition involving automatic activation of word forms and competition between activated words, in which the activation process is sensitive to all acoustic information relevant to the language’s phonology.
  • Spiteri, E., Konopka, G., Coppola, G., Bomar, J., Oldham, M., Ou, J., Vernes, S. C., Fisher, S. E., Ren, B., & Geschwind, D. (2007). Identification of the transcriptional targets of FOXP2, a gene linked to speech and language, in developing human brain. American Journal of Human Genetics, 81(6), 1144-1157. doi:10.1086/522237.

    Abstract

    Mutations in FOXP2, a member of the forkhead family of transcription factor genes, are the only known cause of developmental speech and language disorders in humans. To date, there are no known targets of human FOXP2 in the nervous system. The identification of FOXP2 targets in the developing human brain, therefore, provides a unique tool with which to explore the development of human language and speech. Here, we define FOXP2 targets in human basal ganglia (BG) and inferior frontal cortex (IFC) by use of chromatin immunoprecipitation followed by microarray analysis (ChIP-chip) and validate the functional regulation of targets in vitro. ChIP-chip identified 285 FOXP2 targets in fetal human brain; statistically significant overlap of targets in BG and IFC indicates a core set of 34 transcriptional targets of FOXP2. We identified targets specific to IFC or BG that were not observed in lung, suggesting important regional and tissue differences in FOXP2 activity. Many target genes are known to play critical roles in specific aspects of central nervous system patterning or development, such as neurite outgrowth, as well as plasticity. Subsets of the FOXP2 transcriptional targets are either under positive selection in humans or differentially expressed between human and chimpanzee brain. This is the first ChIP-chip study to use human brain tissue, making the FOXP2-target genes identified in these studies important to understanding the pathways regulating speech and language in the developing human brain. These data provide the first insight into the functional network of genes directly regulated by FOXP2 in human brain and by evolutionary comparisons, highlighting genes likely to be involved in the development of human higher-order cognitive processes.
  • Stevens, M. E. (2007). Perceptual adaptation to phonological differences between language varieties. PhD Thesis, University of Ghent, Ghent.
  • Stevens, M. A., McQueen, J. M., & Hartsuiker, R. J. (2007). No lexically-driven perceptual adjustments of the [x]-[h] boundary. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1897-1900). Dudweiler: Pirrot.

    Abstract

    Listeners can make perceptual adjustments to phoneme categories in response to a talker who consistently produces a specific phoneme ambiguously. We investigate here whether this type of perceptual learning is also used to adapt to regional accent differences. Listeners were exposed to words produced by a Flemish talker whose realization of [x] or [h] was ambiguous (producing [x] like [h] is a property of the West-Flanders regional accent). Before and after exposure they categorized a [x]-[h] continuum. For both Dutch and Flemish listeners there was no shift of the categorization boundary after exposure to ambiguous sounds in [x]- or [h]-biasing contexts. The absence of a lexically-driven learning effect for this contrast may be because [h] is strongly influenced by coarticulation. As [h] is not stable across contexts, it may be futile to adapt its representation when new realizations are heard.
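    The boundary-shift measure used in studies of this kind can be sketched in a few lines. The response proportions and continuum steps below are invented and do not come from the paper; the sketch only illustrates fitting a logistic psychometric function to pre- and post-exposure categorization data and comparing the 50% crossover points.

```python
# Illustrative sketch (invented data): estimate the [x]-[h] category
# boundary before and after exposure by fitting a logistic psychometric
# function and comparing the 50% crossover points.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

steps = np.arange(1, 8)                      # hypothetical 7-step continuum
pre_prop_h  = np.array([0.02, 0.05, 0.15, 0.50, 0.85, 0.95, 0.99])
post_prop_h = np.array([0.03, 0.06, 0.16, 0.52, 0.84, 0.96, 0.99])

pre_popt, _ = curve_fit(logistic, steps, pre_prop_h, p0=[4.0, 1.0])
post_popt, _ = curve_fit(logistic, steps, post_prop_h, p0=[4.0, 1.0])

print(f"pre-exposure boundary:  {pre_popt[0]:.2f}")
print(f"post-exposure boundary: {post_popt[0]:.2f}")
print(f"boundary shift:         {post_popt[0] - pre_popt[0]:.2f} continuum steps")
```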
  • Stewart, A., Holler, J., & Kidd, E. (2007). Shallow processing of ambiguous pronouns: Evidence for delay. Quarterly Journal of Experimental Psychology, 60, 1680-1696. doi:10.1080/17470210601160807.
  • Stivers, T., & Majid, A. (2007). Questioning children: Interactional evidence of implicit bias in medical interviews. Social Psychology Quarterly, 70(4), 424-441.

    Abstract

    Social psychologists have shown experimentally that implicit race bias can influence an individual's behavior. Implicit bias has been suggested to be more subtle and less subject to cognitive control than more explicit forms of racial prejudice. Little is known about how implicit bias is manifest in naturally occurring social interaction. This study examines the factors associated with physicians selecting children rather than parents to answer questions in pediatric interviews about routine childhood illnesses. Analysis of the data using a Generalized Linear Latent and Mixed Model demonstrates a significant effect of parent race and education on whether physicians select children to answer questions. Black children and Latino children of low-education parents are less likely to be selected to answer questions than their same aged white peers irrespective of education. One way that implicit bias manifests itself in naturally occurring interaction may be through the process of speaker selection during questioning.
  • Stivers, T., Enfield, N. J., & Levinson, S. C. (2007). Person reference in interaction. In N. J. Enfield, & T. Stivers (Eds.), Person reference in interaction: Linguistic, cultural, and social perspectives (pp. 1-20). Cambridge: Cambridge University Press.
  • Stivers, T. (2007). Prescribing under pressure: Parent-physician conversations and antibiotics. Oxford: Oxford University Press.

    Abstract

    This book examines parent-physician conversations in detail, showing how parents put pressure on doctors in largely covert ways, for instance in specific communication practices for explaining why they have brought their child to the doctor or answering a history-taking question. This book also shows how physicians yield to this seemingly subtle pressure, evidencing that apparently small differences in wording have important consequences for diagnosis and treatment recommendations. Following parents' use of these interactional practices, physicians are more likely to make concessions, alter their diagnosis or alter their treatment recommendation. This book also shows how small changes in the way physicians present their findings and recommendations can decrease parent pressure for antibiotics. This book carefully documents the important and observable link between micro social interaction and macro public health domains.
  • Stivers, T., & Heritage, J. (2001). Breaking the sequential mould: Narrative and other methods of answering "more than the question" during medical history taking. Text, 21(1), 151-185.
  • Stivers, T. (2007). Alternative recognitionals in person reference. In N. Enfield, & T. Stivers (Eds.), Person reference in interaction: Linguistic, cultural, and social perspectives (pp. 73-96). Cambridge: Cambridge University Press.
  • Stivers, T. (2001). Negotiating who presents the problem: Next speaker selection in pediatric encounters. Journal of Communication, 51(2), 1-31.
  • Swingley, D., & Aslin, R. N. (2007). Lexical competition in young children's word learning. Cognitive Psychology, 54(2), 99-132.

    Abstract

    In two experiments, 1.5-year-olds were taught novel words whose sound patterns were phonologically similar to familiar words (novel neighbors) or were not (novel nonneighbors). Learning was tested using a picture-fixation task. In both experiments, children learned the novel nonneighbors but not the novel neighbors. In addition, exposure to the novel neighbors impaired recognition performance on familiar neighbors. Finally, children did not spontaneously use phonological differences to infer that a novel word referred to a novel object. Thus, lexical competition—inhibitory interaction among words in speech comprehension—can prevent children from using their full phonological sensitivity in judging words as novel. These results suggest that word learning in young children, as in adults, relies not only on the discrimination and identification of phonetic categories, but also on evaluating the likelihood that an utterance conveys a new word.
  • Swingley, D. (2007). Lexical exposure and word-form encoding in 1.5-year-olds. Developmental Psychology, 43(2), 454-464. doi:10.1037/0012-1649.43.2.454.

    Abstract

    In this study, 1.5-year-olds were taught a novel word. Some children were familiarized with the word's phonological form before learning the word's meaning. Fidelity of phonological encoding was tested in a picture-fixation task using correctly pronounced and mispronounced stimuli. Only children with additional exposure in familiarization showed reduced recognition performance given slight mispronunciations relative to correct pronunciations; children with fewer exposures did not. Mathematical modeling of vocabulary exposure indicated that children may hear thousands of words frequently enough for accurate encoding. The results provide evidence compatible with partial failure of phonological encoding at 19 months of age, demonstrate that this limitation in learning does not always hinder word recognition, and show the value of infants' word-form encoding in early lexical development.
