Publications

  • Reesink, G. (2010). The Manambu language of East Sepik, Papua New Guinea [Book review]. Studies in Language, 34(1), 226-233. doi:10.1075/sl.34.1.13ree.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2010). Early use of phonetic information in spoken word recognition: Lexical stress drives eye movements immediately. Quarterly Journal of Experimental Psychology, 63(4), 772-783. doi:10.1080/17470210903104412.

    Abstract

    For optimal word recognition, listeners should use all relevant acoustic information as soon as it becomes available. Using printed-word eye-tracking, we investigated when during word processing Dutch listeners use suprasegmental lexical stress information to recognize words. Fixations on targets such as 'OCtopus' (capitals indicate stress) were more frequent than fixations on segmentally overlapping but differently stressed competitors ('okTOber') before segmental information could disambiguate the words. Furthermore, prior to segmental disambiguation, initially stressed words were stronger lexical competitors than non-initially stressed words. Listeners thus recognize words by immediately using all relevant information in the speech signal.
  • Rey, A., & Schiller, N. O. (2005). Graphemic complexity and multiple print-to-sound associations in visual word recognition. Memory & Cognition, 33(1), 76-85.

    Abstract

    It has recently been reported that words containing a multiletter grapheme are processed more slowly than words composed of single-letter graphemes (Rastle & Coltheart, 1998; Rey, Jacobs, Schmidt-Weigand, & Ziegler, 1998). In the present study, using a perceptual identification task, we found in Experiment 1 that this graphemic complexity effect can be observed while controlling for multiple print-to-sound associations, indexed by regularity or consistency. In Experiment 2, we obtained cumulative effects of graphemic complexity and regularity. These effects were replicated in Experiment 3 in a naming task. Overall, these results indicate that the effects of graphemic complexity and of multiple print-to-sound associations are independent and should be accounted for in different ways by models of written word processing.
  • Ringersma, J., Kastens, K., Tschida, U., & Van Berkum, J. J. A. (2010). A principled approach to online publication listings and scientific resource sharing. The Code4Lib Journal, 2010(9), 2520.

    Abstract

    The Max Planck Institute (MPI) for Psycholinguistics has developed a service to manage and present the scholarly output of its researchers. The PubMan database manages publication metadata and full texts of publications by the institute's scholars. All relevant information regarding a researcher's work is brought together in this database, including supplementary materials and links to the MPI database for primary research data. The PubMan metadata is harvested into the MPI website CMS (Plone). The system developed for the creation of the publication lists allows the researcher to create a selection of the harvested data in a variety of formats.
  • Ringersma, J., Zinn, C., & Koenig, A. (2010). Eureka! User friendly access to the MPI linguistic data archive. SDV - Sprache und Datenverarbeitung / International Journal for Language Data Processing [Special issue on usability aspects of hypermedia systems], 34(1), 67-79.

    Abstract

    The MPI archive hosts a rich and diverse set of linguistic resources, containing some 300,000 audio, video and text resources, which are described by some 100,000 metadata files. New data is ingested on a daily basis, and there is an increasing need to provide easy access for both expert and novice users. In this paper, we describe various tools that help users to view all archived content: the IMDI Browser, providing metadata-based access through structured tree navigation and search; a faceted browser, where users select from a few distinctive metadata fields (facets) to find the resource(s) they need; a Google Earth overlay, where resources can be located via geographic reference; purpose-built web portals giving pre-fabricated access to a well-defined part of the archive; lexicon-based entry points to parts of the archive, where browsing a lexicon gives access to non-linguistic material; and finally, an ontology-based approach, where lexical spaces are complemented with conceptual ones to give a more structured extra-linguistic view of the languages and cultures the archive helps to document.
  • Ringersma, J., & Kemps-Snijders, M. (2010). Reaction to the LEXUS review in the LD&C, Vol.3, No 2. Language Documentation & Conservation, 4(2), 75-77. Retrieved from http://hdl.handle.net/10125/4469.

    Abstract

    This technology review gives an overview of LEXUS, the MPI online lexicon tool and its new functionalities. It is a reaction to a review of Kristina Kotcheva in Language Documentation and Conservation 3(2).
  • Roelofs, A. (2005). The visual-auditory color-word Stroop asymmetry and its time course. Memory & Cognition, 33(8), 1325-1336.

    Abstract

    Four experiments examined crossmodal versions of the Stroop task in order (1) to look for Stroop asymmetries in color naming, spoken-word naming, and written-word naming and to evaluate the time course of these asymmetries, and (2) to compare these findings to current models of the Stroop effect. Participants named color patches while ignoring spoken color words presented with an onset varying from 300 msec before to 300 msec after the onset of the color (Experiment 1), or they named the spoken words and ignored the colors (Experiment 2). A secondary visual detection task assured that the participants looked at the colors in both tasks. Spoken color words yielded Stroop effects in color naming, but colors did not yield an effect in spoken-word naming at any stimulus onset asynchrony. This asymmetry in effects was obtained with equivalent color- and spoken-word-naming latencies. Written color words yielded a Stroop effect in naming spoken words (Experiment 3), and spoken color words yielded an effect in naming written words (Experiment 4). These results were interpreted as most consistent with an architectural account of the color-word Stroop asymmetry, in contrast with discriminability and pathway strength accounts.
  • Roll, P., Vernes, S. C., Bruneau, N., Cillario, J., Ponsole-Lenfant, M., Massacrier, A., Rudolf, G., Khalife, M., Hirsch, E., Fisher, S. E., & Szepetowski, P. (2010). Molecular networks implicated in speech-related disorders: FOXP2 regulates the SRPX2/uPAR complex. Human Molecular Genetics, 19, 4848-4860. doi:10.1093/hmg/ddq415.

    Abstract

    It is a challenge to identify the molecular networks contributing to the neural basis of human speech. Mutations in transcription factor FOXP2 cause difficulties mastering fluent speech (developmental verbal dyspraxia, DVD), while mutations of sushi-repeat protein SRPX2 lead to epilepsy of the rolandic (sylvian) speech areas, with DVD or with bilateral perisylvian polymicrogyria. Pathophysiological mechanisms driven by SRPX2 involve modified interaction with the plasminogen activator receptor (uPAR). Independent chromatin-immunoprecipitation microarray screening has identified the uPAR gene promoter as a potential target site bound by FOXP2. Here, we directly tested for the existence of a transcriptional regulatory network between human FOXP2 and the SRPX2/uPAR complex. In silico searches followed by gel retardation assays identified specific efficient FOXP2 binding sites in each of the promoter regions of SRPX2 and uPAR. In FOXP2-transfected cells, significant decreases were observed in the amounts of both SRPX2 (43.6%) and uPAR (38.6%) native transcripts. Luciferase reporter assays demonstrated that FOXP2 expression yielded marked inhibition of SRPX2 (80.2%) and uPAR (77.5%) promoter activity. A mutant FOXP2 that causes DVD (p.R553H) failed to bind to SRPX2 and uPAR target sites, and showed impaired down-regulation of SRPX2 and uPAR promoter activity. In a patient with polymicrogyria of the left rolandic operculum, a novel FOXP2 mutation (p.M406T) was found in the leucine-zipper (dimerization) domain. p.M406T partially impaired FOXP2 regulation of SRPX2 promoter activity, while that of the uPAR promoter remained unchanged. Together with recently described FOXP2-CNTNAP2 and SRPX2/uPAR links, the FOXP2-SRPX2/uPAR network provides exciting insights into molecular pathways underlying speech-related disorders.

    Additional information

    Roll_et_al_2010_Suppl_Material.doc
  • Rösler, D., & Skiba, R. (1988). Möglichkeiten für den Einsatz einer Lehrmaterial-Datenbank in der Lehrerfortbildung [Possible uses of a teaching-materials database in in-service teacher training]. Deutsch lernen, 14(1), 24-31.
  • Rossano, F. (2010). Questioning and responding in Italian. Journal of Pragmatics, 42, 2756-2771. doi:10.1016/j.pragma.2010.04.010.

    Abstract

    Questions are design problems for both the questioner and the addressee. They must be produced as recognizable objects and must be comprehended by taking into account the context in which they occur and the local situated interests of the participants. This paper investigates how people do ‘questioning’ and ‘responding’ in Italian ordinary conversations. I focus on the features of both questions and responses. I first discuss formal linguistic features that are peculiar to questions in terms of intonation contours (e.g. final rise), morphology (e.g. tags and question words) and syntax (e.g. inversion). I then show additional features that characterize their actual implementation in conversation, such as their minimality (often the subject or the verb is only implied) and the usual occurrence of speaker gaze towards the recipient during questions. I then look at which social actions (e.g. requests for information, requests for confirmation) the different question types implement and which responses are regularly produced in return. The data show that previous descriptions of “interrogative markings” are neither adequate nor sufficient to capture the actual use of questions in natural conversation.
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2005). The incidence of error in young children's wh-questions. Journal of Speech, Language, and Hearing Research, 48, 384-404. doi:10.1044/1092-4388(2005/027).

    Abstract

    Many current generativist theorists suggest that young children possess the grammatical principles of inversion required for question formation but make errors because they find it difficult to learn language-specific rules about how inversion applies. The present study analyzed longitudinal spontaneous sampled data from twelve 2–3-year-old English-speaking children and the intensive diary data of 1 child (age 2;7 [years;months] to 2;11) in order to test some of these theories. The results indicated significantly different rates of error use across different auxiliaries. In particular, error rates differed across 2 forms of the same auxiliary subtype (e.g., auxiliary is vs. are), and auxiliary DO and modal auxiliaries attracted significantly higher rates of errors of inversion than other auxiliaries. The authors concluded that current generativist theories might have problems explaining the patterning of errors seen in children's questions, which might be more consistent with a constructivist account of development. However, constructivists need to devise more precise predictions in order to fully explain the acquisition of questions.
  • Ruano, D., Abecasis, G. R., Glaser, B., Lips, E. S., Cornelisse, L. N., de Jong, A. P. H., Evans, D. M., Davey Smith, G., Timpson, N. J., Smit, A. B., Heutink, P., Verhage, M., & Posthuma, D. (2010). Functional gene group analysis reveals a role of synaptic heterotrimeric G proteins in cognitive ability. American Journal of Human Genetics, 86(2), 113-125. doi:10.1016/j.ajhg.2009.12.006.

    Abstract

    Although cognitive ability is a highly heritable complex trait, only a few genes have been identified, explaining relatively low proportions of the observed trait variation. This implies that hundreds of genes of small effect may be of importance for cognitive ability. We applied an innovative method in which we tested for the effect of groups of genes defined according to cellular function (functional gene group analysis). Using an initial sample of 627 subjects, this functional gene group analysis detected that synaptic heterotrimeric guanine nucleotide binding proteins (G proteins) play an important role in cognitive ability (P(EMP) = 1.9 × 10^-4). The association with heterotrimeric G proteins was validated in an independent population sample of 1507 subjects. Heterotrimeric G proteins are central relay factors between the activation of plasma membrane receptors by extracellular ligands and the cellular responses that these induce, and they can be considered a point of convergence, or a "signaling bottleneck." Although alterations in synaptic signaling processes may not be the exclusive explanation for the association of heterotrimeric G proteins with cognitive ability, such alterations may prominently affect the properties of neuronal networks in the brain in such a manner that impaired cognitive ability and lower intelligence are observed. The reported association of synaptic heterotrimeric G proteins with cognitive ability clearly points to a new direction in the study of the genetic basis of cognitive ability.
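An empirical p-value such as the P(EMP) reported in the abstract above is typically obtained by permutation: the test statistic for the real gene group is compared against a null distribution of the same statistic recomputed over many random gene sets. A minimal generic sketch of that idea (not the authors' actual pipeline; the numbers and the normal null are illustrative placeholders):

```python
import numpy as np

def empirical_p_value(observed_stat, null_stats):
    """Empirical p-value: fraction of permutation statistics at least as
    extreme as the observed one, with the standard +1 correction so the
    estimate can never be exactly zero."""
    null_stats = np.asarray(null_stats)
    return (1 + np.sum(null_stats >= observed_stat)) / (1 + len(null_stats))

rng = np.random.default_rng(0)
# Null distribution: the statistic recomputed for 10,000 random gene groups
# (here simply drawn from a standard normal as a stand-in).
null = rng.normal(size=10_000)
# Statistic for the real gene group of interest (illustrative value).
observed = 3.5
p_emp = empirical_p_value(observed, null)
```

With 10,000 permutations the smallest reportable p-value is about 1 × 10^-4, which is why studies reporting values near that floor run correspondingly large permutation sets.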
  • Rueschemeyer, S.-A., van Rooij, D., Lindemann, O., Willems, R. M., & Bekkering, H. (2010). The function of words: Distinct neural correlates for words denoting differently manipulable objects. Journal of Cognitive Neuroscience, 22, 1844-1851. doi:10.1162/jocn.2009.21310.

    Abstract

    Recent research indicates that language processing relies on brain areas dedicated to perception and action. For example, processing words denoting manipulable objects has been shown to activate a fronto-parietal network involved in actual tool use. This is suggested to reflect the knowledge the subject has about how objects are moved and used. However, information about how to use an object may be much more central to the conceptual representation of an object than information about how to move an object. Therefore, there may be much more fine-grained distinctions between objects on the neural level, especially related to the usability of manipulable objects. In the current study, we investigated whether a distinction can be made between words denoting (1) objects that can be picked up to move (e.g., volumetrically manipulable objects: bookend, clock) and (2) objects that must be picked up to use (e.g., functionally manipulable objects: cup, pen). The results show that functionally manipulable words elicit greater levels of activation in the fronto-parietal sensorimotor areas than volumetrically manipulable words. This suggests that indeed a distinction can be made between different types of manipulable objects. Specifically, how an object is used functionally rather than whether an object can be displaced with the hand is reflected in semantic representations in the brain.
  • De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Hagoort, P., Levinson, S. C., & Toni, I. (2010). Exploring the cognitive infrastructure of communication. Interaction studies, 11, 51-77. doi:10.1075/is.11.1.05rui.

    Abstract

    Human communication is often thought about in terms of transmitted messages in a conventional code like a language. But communication requires a specialized interactive intelligence. Senders have to be able to perform recipient design, while receivers need to be able to do intention recognition, knowing that recipient design has taken place. To study this interactive intelligence in the lab, we developed a new task that taps directly into the underlying abilities to communicate in the absence of a conventional code. We show that subjects are remarkably successful communicators under these conditions, especially when senders get feedback from receivers. Signaling is accomplished by the manner in which an instrumental action is performed, such that instrumentally dysfunctional components of an action are used to convey communicative intentions. The findings have important implications for the nature of the human communicative infrastructure, and the task opens up a line of experimentation on human communication.
  • Salomo, D., Lieven, E., & Tomasello, M. (2010). Young children's sensitivity to new and given information when answering predicate-focus questions. Applied Psycholinguistics, 31, 101-115. doi:10.1017/S014271640999018X.

    Abstract

    In two studies we investigated 2-year-old children's answers to predicate-focus questions depending on the preceding context. Children were presented with a successive series of short video clips showing transitive actions (e.g., frog washing duck) in which either the action (action-new) or the patient (patient-new) was the changing, and therefore new, element. During the last scene the experimenter asked the question (e.g., “What's the frog doing now?”). We found that children expressed the action and the patient in the patient-new condition but expressed only the action in the action-new condition. These results show that children are sensitive to both the predicate-focus question and newness in context. A further finding was that children expressed new patients in their answers more often when there was a verbal context prior to the questions than when there was not.
  • Sauter, D. (2010). Can introspection teach us anything about the perception of sounds? [Book review]. Perception, 39, 1300-1302. doi:10.1068/p3909rvw.

    Abstract

    Reviews the book, Sounds and Perception: New Philosophical Essays edited by Matthew Nudds and Casey O'Callaghan (2010). This collection of thought-provoking philosophical essays contains chapters on particular aspects of sound perception, as well as a series of essays focusing on the issue of sound location. The chapters on specific topics include several perspectives on how we hear speech, one of the most well-studied aspects of auditory perception in empirical research. Most of the book consists of a series of essays approaching the experience of hearing sounds by focusing on where sounds are in space. An impressive range of opinions on this issue is presented, likely thanks to the fact that the book's editors represent dramatically different viewpoints. The wave-based view argues that sounds are located near the perceiver, although the sounds also provide information about objects around the listener, including the source of the sound. In contrast, the source-based view holds that sounds are experienced as near or at their sources. The editors acknowledge that additional methods should be used in conjunction with introspection, but they argue that theories of perceptual experience should nevertheless respect phenomenology. With such a range of views derived largely from the same introspective methodology, it remains unresolved which phenomenological account is to be respected.
  • Sauter, D., Eisner, F., Ekman, P., & Scott, S. K. (2010). Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations. Proceedings of the National Academy of Sciences, 107(6), 2408-2412. doi:10.1073/pnas.0908239106.

    Abstract

    Emotional signals are crucial for sharing important information with conspecifics, for example to warn them of danger. Humans use a range of different cues to communicate to others how they feel, including facial, vocal, and gestural signals. We examined the recognition of nonverbal emotional vocalizations, such as screams and laughs, across two dramatically different cultural groups. Western participants were compared to individuals from remote, culturally isolated Namibian villages. Vocalizations communicating the so-called “basic emotions” (anger, disgust, fear, joy, sadness, and surprise) were bidirectionally recognized. In contrast, a set of additional emotions was only recognized within, but not across, cultural boundaries. Our findings indicate that a number of primarily negative emotions have vocalizations that can be recognized across cultures, while most positive emotions are communicated with culture-specific signals.
  • Sauter, D. (2010). Are positive vocalizations perceived as communicating happiness across cultural boundaries? [Article addendum]. Communicative & Integrative Biology, 3(5), 440-442. doi:10.4161/cib.3.5.12209.

    Abstract

    Laughter communicates a feeling of enjoyment across cultures, while non-verbal vocalizations of several other positive emotions, such as achievement or sensual pleasure, are recognizable only within, but not across, cultural boundaries. Are these positive vocalizations nevertheless interpreted cross-culturally as signaling positive affect? In a match-to-sample task, positive emotional vocal stimuli were paired with positive and negative facial expressions, by English participants and members of the Himba, a semi-nomadic, culturally isolated Namibian group. The results showed that laughter was associated with a smiling facial expression across both groups, consistent with previous work showing that human laughter is a positive, social signal with deep evolutionary roots. However, non-verbal vocalizations of achievement, sensual pleasure, and relief were not cross-culturally associated with smiling facial expressions, perhaps indicating that these types of vocalizations are not cross-culturally interpreted as communicating a positive emotional state, or alternatively that these emotions are associated with positive facial expressions other than smiling. These results are discussed in the context of positive emotional communication in vocal and facial signals. Research on the perception of non-verbal vocalizations of emotions across cultures demonstrates that some affective signals, including laughter, are associated with particular facial configurations and emotional states, supporting theories of emotions as a set of evolved functions that are shared by all humans regardless of cultural boundaries.
  • Sauter, D. (2010). More than happy: The need for disentangling positive emotions. Current Directions in Psychological Science, 19, 36-40. doi:10.1177/0963721409359290.

    Abstract

    Despite great advances in scientific understanding of emotional processes in the last decades, research into the communication of emotions has been constrained by a strong bias toward negative affective states. Typically, studies distinguish between different negative emotions, such as disgust, sadness, anger, and fear. In contrast, most research uses only one category of positive affect, “happiness,” which is assumed to encompass all positive emotional states. This article reviews recent research showing that a number of positive affective states have discrete, recognizable signals. An increased focus on cues other than facial expressions is necessary to understand these positive states and how they are communicated; vocalizations, touch, and postural information offer promising avenues for investigating signals of positive affect. A full scientific understanding of the functions, signals, and mechanisms of emotions requires abandoning the unitary concept of happiness and instead disentangling positive emotions.
  • Sauter, D., Eisner, F., Calder, A. J., & Scott, S. K. (2010). Perceptual cues in nonverbal vocal expressions of emotion. Quarterly Journal of Experimental Psychology, 63(11), 2251-2272. doi:10.1080/17470211003721642.

    Abstract

    Work on facial expressions of emotions (Calder, Burton, Miller, Young, & Akamatsu, 2001) and emotionally inflected speech (Banse & Scherer, 1996) has successfully delineated some of the physical properties that underlie emotion recognition. To identify the acoustic cues used in the perception of nonverbal emotional expressions like laughter and screams, an investigation was conducted into vocal expressions of emotion, using nonverbal vocal analogues of the “basic” emotions (anger, fear, disgust, sadness, and surprise; Ekman & Friesen, 1971; Scott et al., 1997), and of positive affective states (Ekman, 1992, 2003; Sauter & Scott, 2007). First, the emotional stimuli were categorized and rated to establish that listeners could identify and rate the sounds reliably and to provide confusion matrices. A principal components analysis of the rating data yielded two underlying dimensions, correlating with the perceived valence and arousal of the sounds. Second, acoustic properties of the amplitude, pitch, and spectral profile of the stimuli were measured. A discriminant analysis procedure established that these acoustic measures provided sufficient discrimination between expressions of emotional categories to permit accurate statistical classification. Multiple linear regressions with participants' subjective ratings of the acoustic stimuli showed that all classes of emotional ratings could be predicted by some combination of acoustic measures and that most emotion ratings were predicted by different constellations of acoustic features. The results demonstrate that, similarly to affective signals in facial expressions and emotionally inflected speech, the perceived emotional character of affective vocalizations can be predicted on the basis of their physical features.
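The two analysis steps described in the abstract above (a principal components analysis of a stimuli-by-rating-scales matrix, followed by statistical classification from acoustic measures) can be sketched generically in NumPy. The data here are random placeholders, and a nearest-centroid rule stands in for the discriminant analysis; none of this reproduces the authors' actual materials or pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Step 1: PCA of a (stimuli x rating-scales) matrix via SVD ---
ratings = rng.normal(size=(40, 6))      # 40 stimuli rated on 6 scales
centered = ratings - ratings.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[:2].T            # stimulus coordinates on first 2 PCs
explained = (s**2) / np.sum(s**2)       # proportion of variance per component

# --- Step 2: classify emotion categories from acoustic measures ---
# Nearest-centroid classification as a simple stand-in for discriminant
# analysis: assign each test item to the class with the closest mean.
def nearest_centroid(train_X, train_y, test_X):
    labels = np.unique(train_y)
    centroids = np.array([train_X[train_y == c].mean(axis=0) for c in labels])
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None], axis=2)
    return labels[np.argmin(dists, axis=1)]
```

In the study's terms, the first two PCA dimensions would be interpreted against valence and arousal ratings, and classification accuracy over acoustic features would index how well the categories separate physically.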
  • Sauter, D., & Eimer, M. (2010). Rapid detection of emotion from human vocalizations. Journal of Cognitive Neuroscience, 22, 474-481. doi:10.1162/jocn.2009.21215.

    Abstract

    The rapid detection of affective signals from conspecifics is crucial for the survival of humans and other animals; if those around you are scared, there is reason for you to be alert and to prepare for impending danger. Previous research has shown that the human brain detects emotional faces within 150 msec of exposure, indicating a rapid differentiation of visual social signals based on emotional content. Here we use event-related brain potential (ERP) measures to show for the first time that this mechanism extends to the auditory domain, using human nonverbal vocalizations, such as screams. An early fronto-central positivity to fearful vocalizations compared with spectrally rotated and thus acoustically matched versions of the same sounds started 150 msec after stimulus onset. This effect was also observed for other vocalized emotions (achievement and disgust), but not for affectively neutral vocalizations, and was linked to the perceived arousal of an emotion category. That the timing, polarity, and scalp distribution of this new ERP correlate are similar to ERP markers of emotional face processing suggests that common supramodal brain mechanisms may be involved in the rapid detection of affectively relevant visual and auditory signals.
  • Sauter, D., Eisner, F., Ekman, P., & Scott, S. K. (2010). Reply to Gewald: Isolated Himba settlements still exist in Kaokoland [Letter to the editor]. Proceedings of the National Academy of Sciences of the United States of America, 107(18), E76. doi:10.1073/pnas.1002264107.

    Abstract

    We agree with Gewald (1) that historical and anthropological accounts are essential tools for understanding the Himba culture, and these accounts are valuable to both us and him. However, we contest his claim that the Himba individuals in our study were not culturally isolated. Gewald (1) claims that it would be “unlikely” that the Himba people with whom we worked had “not been exposed to the affective signals of individuals from cultural groups other than their own” as stated in our paper (2). Gewald (1) seems to argue that, because outside groups have had contact with some Himba, this means that these events affected all Himba. Yet, the Himba constitute a group of 20,000-50,000 people (3) living in small settlements scattered across the vast Kaokoland region, an area of 49,000 km² (4).
  • Sauter, D., & Levinson, S. C. (2010). What's embodied in a smile? [Comment on Niedenthal et al.]. Behavioral and Brain Sciences, 33, 457-458. doi:10.1017/S0140525X10001597.

    Abstract

    Differentiation of the forms and functions of different smiles is needed, but it should be based on empirical data on the distinctions that senders and receivers make, and on the physical cues that are employed. Such data would allow for a test of whether smiles can be differentiated using perceptual cues alone or whether mimicry or simulation is necessary.
  • Scharenborg, O., & Boves, L. (2010). Computational modelling of spoken-word recognition processes: Design choices and evaluation. Pragmatics & Cognition, 18, 136-164. doi:10.1075/pc.18.1.06sch.

    Abstract

    Computational modelling has proven to be a valuable approach in developing theories of spoken-word processing. In this paper, we focus on a particular class of theories in which it is assumed that the spoken-word recognition process consists of two consecutive stages, with an 'abstract' discrete symbolic representation at the interface between the stages. In evaluating computational models, it is important to bring in independent arguments for the cognitive plausibility of the algorithms that are selected to compute the processes in a theory. This paper discusses the relation between behavioural studies, theories, and computational models of spoken-word recognition. We explain how computational models can be assessed in terms of the goodness of fit with the behavioural data and the cognitive plausibility of the algorithms. An in-depth analysis of several models provides insights into how computational modelling has led to improved theories and to a better understanding of the human spoken-word recognition process.
  • Scharenborg, O., Norris, D., Ten Bosch, L., & McQueen, J. M. (2005). How should a speech recognizer work? Cognitive Science, 29(6), 867-918. doi:10.1207/s15516709cog0000_37.

    Abstract

    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR, that, in contrast to existing models of HSR, recognizes words from real speech input.
  • Scharenborg, O. (2010). Modeling the use of durational information in human spoken-word recognition. Journal of the Acoustical Society of America, 127, 3758-3770. doi:10.1121/1.3377050.

    Abstract

    Evidence that listeners, at least in a laboratory environment, use durational cues to help resolve temporarily ambiguous speech input has accumulated over the past decades. This paper introduces Fine-Tracker, a computational model of word recognition specifically designed for tracking fine phonetic information in the acoustic speech signal and using it during word recognition. Two simulations were carried out using real speech as input to the model. The simulations showed that Fine-Tracker, as has been found for humans, benefits from durational information during word recognition, and uses it to disambiguate the incoming speech signal. The availability of durational information allows the computational model to distinguish embedded words from their matrix words (first simulation), and to distinguish word-final realizations of [s] from word-initial realizations (second simulation). Fine-Tracker thus provides the first computational model of human word recognition that is able to extract durational information from the speech signal and to use it to differentiate words.
  • Scharenborg, O., Wan, V., & Ernestus, M. (2010). Unsupervised speech segmentation: An analysis of the hypothesized phone boundaries. Journal of the Acoustical Society of America, 127, 1084-1095. doi:10.1121/1.3277194.

    Abstract

    Despite using different algorithms, most unsupervised automatic phone segmentation methods achieve similar performance in terms of percentage correct boundary detection. Nevertheless, unsupervised segmentation algorithms are not able to perfectly reproduce manually obtained reference transcriptions. This paper investigates fundamental problems for unsupervised segmentation algorithms by comparing a phone segmentation obtained using only the acoustic information present in the signal with a reference segmentation created by human transcribers. The analyses of the output of an unsupervised speech segmentation method that uses acoustic change to hypothesize boundaries showed that acoustic change is a fairly good indicator of segment boundaries: over two-thirds of the hypothesized boundaries coincide with segment boundaries. Statistical analyses showed that the errors are related to segment duration, sequences of similar segments, and inherently dynamic phones. In order to improve unsupervised automatic speech segmentation, current one-stage bottom-up segmentation methods should be expanded into two-stage segmentation methods that are able to use a mix of bottom-up information extracted from the speech signal and automatically derived top-down information. In this way, unsupervised methods can be improved while remaining flexible and language-independent.
  • Schmale, R., Cristia, A., Seidl, A., & Johnson, E. K. (2010). Developmental changes in infants’ ability to cope with dialect variation in word recognition. Infancy, 15, 650-662. doi:10.1111/j.1532-7078.2010.00032.x.

    Abstract

    Toward the end of their first year of life, infants’ overly specified word representations are thought to give way to more abstract ones, which helps them to better cope with variation not relevant to word identity (e.g., voice and affect). This developmental change may help infants process the ambient language more efficiently, thus enabling rapid gains in vocabulary growth. One particular kind of variability that infants must accommodate is that of dialectal accent, because most children will encounter speakers from different regions and backgrounds. In this study, we explored developmental changes in infants’ ability to recognize words in continuous speech by familiarizing them with words spoken by a speaker of their own region (North Midland-American English) or a different region (Southern Ontario Canadian English), and testing them with passages spoken by a speaker of the opposite dialectal accent. Our results demonstrate that 12- but not 9-month-olds readily recognize words in the face of dialectal variation.
  • Schoffelen, J.-M., Oostenveld, R., & Fries, P. (2005). Neuronal coherence as a mechanism of effective corticospinal interaction. Science, 308, 111-113. doi:10.1126/science.1107027.

    Abstract

    Neuronal groups can interact with each other even if they are widely separated. One group might modulate its firing rate or its internal oscillatory synchronization to influence another group. We propose that coherence between two neuronal groups is a mechanism of efficient interaction, because it renders mutual input optimally timed and thereby maximally effective. Modulations of subjects' readiness to respond in a simple reaction-time task were closely correlated with the strength of gamma-band (40 to 70 hertz) coherence between motor cortex and spinal cord neurons. This coherence may contribute to an effective corticospinal interaction and shortened reaction times.
  • Schumacher, M., & Skiba, R. (1992). Prädikative und modale Ausdrucksmittel in den Lernervarietäten einer polnischen Migrantin: Eine Longitudinalstudie. Teil I. Linguistische Berichte, 141, 371-400.
  • Schumacher, M., & Skiba, R. (1992). Prädikative und modale Ausdrucksmittel in den Lernervarietäten einer polnischen Migrantin: Eine Longitudinalstudie. Teil II. Linguistische Berichte, 142, 451-475.
  • Sekine, K., & Furuyama, N. (2010). Developmental change of discourse cohesion in speech and gestures among Japanese elementary school children. Rivista di psicolinguistica applicata, 10(3), 97-116. doi:10.1400/152613.

    Abstract

    This study investigates the development of bi-modal reference maintenance by focusing on how Japanese elementary school children introduce and track animate referents in their narratives. Sixty elementary school children participated in this study, 10 from each school year (from 7 to 12 years of age). They were instructed to remember a cartoon and retell the story to their parents. We found that although there were no differences in the speech indices among the different ages, the average scores for the gesture indices of the 12-year-olds were higher than those of the other age groups. In particular, the number of referential gestures increased sharply at age 12, and these children tended to use referential gestures not only for tracking referents but also for introducing characters. These results indicate that the ability to maintain a reference to create coherent narratives increases at about age 12.
  • Sekine, K. (2010). The role of gestures contributing to speech production in children. The Japanese Journal of Qualitative Psychology, 9, 115-132.
  • Senft, G. (1992). Bakavilisi Biga - or: What happens to English words in the Kilivila Language? Language and Linguistics in Melanesia, 23, 13-49.
  • Senft, G. (1988). A grammar of Manam by Frantisek Lichtenberk [Book review]. Language and linguistics in Melanesia, 18, 169-173.
  • Senft, G. (2010). Argonauten mit Außenbordmotoren - Feldforschung auf den Trobriand-Inseln (Papua-Neuguinea) seit 1982. Mitteilungen der Berliner Gesellschaft für Anthropologie, Ethnologie und Urgeschichte, 31, 115-130.

    Abstract

    Since 1982 I have been studying the language and culture of the Trobriand Islanders in Papua New Guinea. After what are by now 15 trips to the Trobriand Islands, adding up to almost four years of living and working in the village of Tauwema on the island of Kaile'una, I was invited by Markus Schindlbeck and Alix Hänsel to report on my field research to the members of the "Berliner Gesellschaft für Anthropologie, Ethnologie und Urgeschichte". That is what I do in the following. I first describe how I came to the Trobriand Islands and how I found my way around there, and then report on the kind of research I have carried out over all these years, the forms of language and culture change I have observed in the process, and the expectations I have, on the basis of my experience so far, for the future of the Trobriand Islanders and for their language and culture.
  • Senft, G. (2010). [Review of the book Consequences of contact: Language ideologies and sociocultural transformations in Pacific societies ed. by Miki Makihara and Bambi B. Schieffelin]. Paideuma. Mitteilungen zur Kulturkunde, 56, 308-313.
  • Senft, G. (1988). [Review of the book Functional syntax: Anaphora, discourse and empathy by Susumu Kuno]. Journal of Pragmatics, 12, 396-399. doi:10.1016/0378-2166(88)90040-9.
  • Senft, G. (1992). [Review of the book The Yimas language of New Guinea by William A. Foley]. Linguistics, 30, 634-639.
  • Senft, G. (2005). [Review of the book Malinowski: Odyssey of an anthropologist 1884-1920 by Michael Young]. Oceania, 75(3), 302-302.
  • Senft, G. (2005). [Review of the book The art of Kula by Shirley F. Campbell]. Anthropos, 100, 247-249.
  • Senft, G. (1992). Everything we always thought we knew about space - but did not bother to question. Working Papers of the Cognitive Anthropology Research group at the MPI for Psycholinguistics, 10.
  • Senft, G. (1992). What happened to "the fearless tailor" in Kilivila: A European fairy tale - from the South Seas. Anthropos, 87, 407-421.
  • Senghas, A., Ozyurek, A., & Kita, S. (2005). [Response to comment on Children creating core properties of language: Evidence from an emerging sign language in Nicaragua]. Science, 309(5731), 56c-56c. doi:10.1126/science.1110901.
  • Seuren, P. A. M. (2010). A logic-based approach to problems in pragmatics. Poznań Studies in Contemporary Linguistics, 519-532. doi:10.2478/v10010-010-0026-2.

    Abstract

    After an exposé of the programme involved, it is shown that the Gricean maxims fail to do their job in so far as they are meant to account for the well-known problem of natural intuitions of logical entailment that deviate from standard modern logic. It is argued that there is no reason why natural logical and ontological intuitions should conform to standard logic, because standard logic is based on mathematics while natural logical and ontological intuitions derive from a cognitive system in people's minds (supported by their brain structures). A proposal is then put forward to try a totally different strategy, via (a) a grammatical reduction of surface sentences to their logico-semantic form and (b) via logic itself, in particular the notion of natural logic, based on a natural ontology and a natural set theory. Since any logical system is fully defined by (a) its ontology and its overarching notions and axioms regarding truth, (b) the meanings of its operators, and (c) the ranges of its variables, logical systems can be devised that deviate from modern logic in any or all of the above respects, as long as they remain consistent. This allows one, as an empirical enterprise, to devise a natural logic, which is as sound as standard logic but corresponds better with natural intuitions. It is hypothesised that at least two varieties of natural logic must be assumed in order to account for natural logical and ontological intuitions, since culture and scholastic education have elevated modern societies to a higher level of functionality and refinement. These two systems correspond, with corrections and additions, to Hamilton's 19th-century logic and to the classic Square of Opposition, respectively. Finally, an evaluation is presented, comparing the empirical success rates of the systems envisaged.
  • Seuren, P. A. M. (1982). De spellingsproblematiek in Suriname: Een inleiding. OSO, 1(1), 71-79.
  • Seuren, P. A. M., & Hamans, C. (2010). Antifunctionality in language change. Folia Linguistica, 44(1), 127-162. doi:10.1515/flin.2010.005.

    Abstract

    The main thesis of the article is that language change is only partially subject to criteria of functionality and that, as a rule, opposing forces are also at work which often correlate directly with psychological and sociopsychological parameters reflecting themselves in all areas of linguistic competence. We sketch a complex interplay of horizontal versus vertical, deliberate versus nondeliberate, functional versus antifunctional linguistic changes, which, through a variety of processes have an effect upon the languages concerned, whether in the lexicon, the grammar, the phonology or the phonetics. Despite the overall unclarity regarding the notion of functionality in language, there are clear cases of both functionality and antifunctionality. Antifunctionality is deliberately striven for by groups of speakers who wish to distinguish themselves from other groups, for whatever reason. Antifunctionality, however, also occurs as a, probably unwanted, result of syntactic change in the acquisition process by young or adult language learners. The example is discussed of V-clustering through Predicate Raising in German and Dutch, a process that started during the early Middle Ages and was highly functional as long as it occurred on a limited scale but became antifunctional as it pervaded the entire complementation system of these languages.
  • Seuren, P. A. M. (1968). [Review of the book Negation and the comparative particle in English by André Joly]. Neophilologus, 52, 337-338. doi:10.1007/BF01515481.
  • Seuren, P. A. M. (1988). [Review of the book Pidgin and Creole linguistics by P. Mühlhäusler]. Studies in Language, 12(2), 504-513.
  • Seuren, P. A. M. (1988). [Review of the Collins Cobuild English Language Dictionary (Collins Birmingham University International Language Database)]. Journal of Semantics, 6, 169-174. doi:10.1093/jos/6.1.169.
  • Seuren, P. A. M. (2005). Eubulides as a 20th-century semanticist. Language Sciences, 27(1), 75-95. doi:10.1016/j.langsci.2003.12.001.

    Abstract

    It is the purpose of the present paper to highlight the figure of Eubulides, a relatively unknown Greek philosopher who lived ±405–330 BC and taught at Megara, not far from Athens. He is mainly known for his four paradoxes (the Liar, the Sorites, the Electra, and the Horns), and for the mutual animosity between him and his younger contemporary Aristotle. The Megarian school of philosophy was one of the main sources of the great Stoic tradition in ancient philosophy. What has never been made explicit in the literature is the importance of the four paradoxes for the study of meaning in natural language: they summarize the whole research programme of 20th century formal or formally oriented semantics, including the problems of vague predicates (Sorites), intensional contexts (Electra), and presuppositions (Horns). One might say that modern formal or formally oriented semantics is essentially an attempt at finding linguistically tenable answers to problems arising in the context of Aristotelian thought. It is a surprising and highly significant fact that a contemporary of Aristotle already spotted the main weaknesses of the Aristotelian paradigm.
  • Seuren, P. A. M. (1963). Naar aanleiding van Dr. F. Balk-Smit Duyzentkunst "De Grammatische Functie". Levende Talen, 219, 179-186.
  • Seuren, P. A. M. (1982). Internal variability in competence. Linguistische Berichte, 77, 1-31.
  • Seuren, P. A. M. (1988). Presupposition and negation. Journal of Semantics, 6(3/4), 175-226. doi:10.1093/jos/6.1.175.

    Abstract

    This paper is an attempt to show that given the available observations on the behaviour of negation and presuppositions there is no simpler explanation than to assume that natural language has two distinct negation operators, the minimal negation which preserves presuppositions and the radical negation which does not. The three-valued logic emerging from this distinction, and especially its model-theory, are discussed in detail. It is, however, stressed that the logic itself is only epiphenomenal on the structures and processes involved in the interpretation of sentences. Horn (1985) brings new observations to bear, related to metalinguistic uses of negation, and proposes a “pragmatic” ambiguity in negation to the effect that in descriptive (or “straight”) use negation is the classical bivalent operator, whereas in metalinguistic use it is non-truthfunctional but only pragmatic. Van der Sandt (to appear) accepts Horn's observations but proposes a different solution: he proposes an ambiguity in the argument clause of the negation operator (which, for him, too, is classical and bivalent), according to whether the negation takes only the strictly asserted proposition or covers also the presuppositions, the (scalar) implicatures and other implications (in particular of style and register) of the sentence expressing that proposition. These theories are discussed at some length. The three-valued analysis is defended on the basis of partly new observations, which do not seem to fit either Horn's or Van der Sandt's solution. It is then placed in the context of incremental discourse semantics, where both negations are seen to do the job of keeping increments out of the discourse domain, though each does so in its own specific way. The metalinguistic character of the radical negation is accounted for in terms of the incremental apparatus. The metalinguistic use of negation in denials of implicatures or implications of style and register is regarded as a particular form of minimal negation, where the negation denies not the proposition itself but the appropriateness of the use of an expression in it. This appropriateness negation is truth-functional and not pragmatic, but it applies to a particular, independently motivated, analysis of the argument clause. The ambiguity of negation in natural language is different from the ordinary type of ambiguity found in the lexicon. Normally, lexical ambiguities are idiosyncratic, highly contingent, and unpredictable from language to language. In the case of negation, however, the two meanings are closely related, both truth-conditionally and incrementally. Moreover, the mechanism of discourse incrementation automatically selects the right meaning. These properties are taken to provide a sufficient basis for discarding the, otherwise valid, objection that negation is unlikely to be ambiguous because no known language makes a lexical distinction between the two readings.
  • Shapiro, K. A., Mottaghy, F. M., Schiller, N. O., Poeppel, T. D., Flüss, M. O., Müller, H. W., Caramazza, A., & Krause, B. J. (2005). Dissociating neural correlates for nouns and verbs. NeuroImage, 24(4), 1058-1067. doi:10.1016/j.neuroimage.2004.10.015.

    Abstract

    Dissociations in the ability to produce words of different grammatical categories are well established in neuropsychology but have not been corroborated fully with evidence from brain imaging. Here we report on a PET study designed to reveal the anatomical correlates of grammatical processes involving nouns and verbs. German-speaking subjects were asked to produce either plural and singular nouns, or first-person plural and singular verbs. Verbs, relative to nouns, activated a left frontal cortical network, while the opposite contrast (nouns–verbs) showed greater activation in temporal regions bilaterally. Similar patterns emerged when subjects performed the task with pseudowords used as nouns or as verbs. These results converge with findings from lesion studies and suggest that grammatical category is an important dimension of organization for knowledge of language in the brain.
  • Sharp, D. J., Scott, S. K., Cutler, A., & Wise, R. J. S. (2005). Lexical retrieval constrained by sound structure: The role of the left inferior frontal gyrus. Brain and Language, 92(3), 309-319. doi:10.1016/j.bandl.2004.07.002.

    Abstract

    Positron emission tomography was used to investigate two competing hypotheses about the role of the left inferior frontal gyrus (IFG) in word generation. One proposes a domain-specific organization, with neural activation dependent on the type of information being processed, i.e., surface sound structure or semantic. The other proposes a process-specific organization, with activation dependent on processing demands, such as the amount of selection needed to decide between competing lexical alternatives. In a novel word retrieval task, word reconstruction (WR), subjects generated real words from heard non-words by the substitution of either a vowel or consonant. Both types of lexical retrieval, informed by sound structure alone, produced activation within anterior and posterior left IFG regions. Within these regions there was greater activity for consonant WR, which is more difficult and imposes greater processing demands. These results support a process-specific organization of the anterior left IFG.
  • Sicoli, M. A. (2010). Shifting voices with participant roles: Voice qualities and speech registers in Mesoamerica. Language in Society, 39(4), 521-553. doi:10.1017/S0047404510000436.

    Abstract

    Although an increasing number of sociolinguistic researchers consider functions of voice qualities as stylistic features, few studies consider cases where voice qualities serve as the primary signs of speech registers. This article addresses this gap through the presentation of a case study of Lachixio Zapotec speech registers indexed through falsetto, breathy, creaky, modal, and whispered voice qualities. I describe the system of contrastive speech registers in Lachixio Zapotec and then track a speaker on a single evening where she switches between three of these registers. Analyzing line-by-line conversational structure, I show both obligatory and creative shifts between registers that co-occur with shifts in the participant structures of the situated social interactions. I then examine similar uses of voice qualities in other Zapotec languages and in the two unrelated language families Nahuatl and Mayan to suggest the possibility that such voice registers are a feature of the Mesoamerican culture area.
  • Sidnell, J., & Stivers, T. (Eds.). (2005). Multimodal Interaction [Special Issue]. Semiotica, 156.
  • Simanova, I., Van Gerven, M., Oostenveld, R., & Hagoort, P. (2010). Identifying object categories from event-related EEG: Toward decoding of conceptual representations. PLoS ONE, 5(12), e14465. doi:10.1371/journal.pone.0014465.

    Abstract

    Multivariate pattern analysis is a technique that allows the decoding of conceptual information such as the semantic category of a perceived object from neuroimaging data. Impressive single-trial classification results have been reported in studies that used fMRI. Here, we investigate the possibility of identifying conceptual representations from event-related EEG based on the presentation of an object in different modalities: its spoken name, its visual representation and its written name. We used Bayesian logistic regression with a multivariate Laplace prior for classification. Marked differences in classification performance were observed for the tested modalities. Highest accuracies (89% correctly classified trials) were attained when classifying object drawings. In auditory and orthographical modalities, results were lower though still significant for some subjects. The employed classification method allowed for a precise temporal localization of the features that contributed to the performance of the classifier for the three modalities. These findings could help to further understand the mechanisms underlying conceptual representations. The study also provides a first step towards the use of concept decoding in the context of real-time brain-computer interface applications.
  • Sjerps, M. J., & McQueen, J. M. (2010). The bounds on flexibility in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 36, 195-211. doi:10.1037/a0016803.
  • Skiba, R., & Dittmar, N. (1992). Pragmatic, semantic and syntactic constraints and grammaticalization: A longitudinal perspective. Studies in Second Language Acquisition, 14, 323-349. doi:10.1017/S0272263100011141.
  • Snijders, T. M., Petersson, K. M., & Hagoort, P. (2010). Effective connectivity of cortical and subcortical regions during unification of sentence structure. NeuroImage, 52, 1633-1644. doi:10.1016/j.neuroimage.2010.05.035.

    Abstract

    In a recent fMRI study we showed that left posterior middle temporal gyrus (LpMTG) subserves the retrieval of a word's lexical-syntactic properties from the mental lexicon (long-term memory), while left posterior inferior frontal gyrus (LpIFG) is involved in unifying (on-line integration of) this information into a sentence structure (Snijders et al., 2009). In addition, the right IFG, right MTG, and the right striatum were involved in the unification process. Here we report results from a psychophysiological interactions (PPI) analysis in which we investigated the effective connectivity between LpIFG and LpMTG during unification, and how the right hemisphere areas and the striatum are functionally connected to the unification network. LpIFG and LpMTG both showed enhanced connectivity during the unification process with a region slightly superior to our previously reported LpMTG. Right IFG better predicted right temporal activity when unification processes were more strongly engaged, just as LpIFG better predicted left temporal activity. Furthermore, the striatum showed enhanced coupling to LpIFG and LpMTG during unification. We conclude that bilateral inferior frontal and posterior temporal regions are functionally connected during sentence-level unification. Cortico-subcortical connectivity patterns suggest cooperation between inferior frontal and striatal regions in performing unification operations on lexical-syntactic representations retrieved from LpMTG.
  • Snowdon, C. T., Pieper, B. A., Boe, C. Y., Cronin, K. A., Kurian, A. V., & Ziegler, T. E. (2010). Variation in oxytocin is related to variation in affiliative behavior in monogamous, pairbonded tamarins. Hormones and Behavior, 58(4), 614-618. doi:10.1016/j.yhbeh.2010.06.014.

    Abstract

    Oxytocin plays an important role in monogamous pairbonded female voles, but not in polygamous voles. Here we examined a socially monogamous, cooperatively breeding primate in which both sexes share in parental care and territory defense for within-species variation in behavior and in female and male oxytocin levels in 14 pairs of cotton-top tamarins (Saguinus oedipus). In order to obtain a stable chronic assessment of hormones and behavior, we observed behavior and collected urinary hormonal samples across the tamarins’ 3-week ovulatory cycle. We found similar levels of urinary oxytocin in both sexes. However, basal urinary oxytocin levels varied 10-fold across pairs and pair-mates displayed similar oxytocin levels. Affiliative behavior (contact, grooming, sex) also varied greatly across the sample and explained more than half the variance in pair oxytocin levels. The variables accounting for variation in oxytocin levels differed by sex. Mutual contact and grooming explained most of the variance in female oxytocin levels, whereas sexual behavior explained most of the variance in male oxytocin levels. The initiation of contact by males and solicitation of sex by females were related to increased levels of oxytocin in both. This study demonstrates within-species variation in oxytocin that is directly related to levels of affiliative and sexual behavior. However, different behavioral mechanisms influence oxytocin levels in males and females and a strong pair relationship (as indexed by high levels of oxytocin) may require the activation of appropriate mechanisms for both sexes.
  • Srivastava, S., Budwig, N., & Narasimhan, B. (2005). A developmental-functionalist view of the development of transitive and intransitive constructions in a Hindi-speaking child: A case study. International Journal of Idiographic Science, 2.
  • Stivers, T. (2005). Parent resistance to physicians' treatment recommendations: One resource for initiating a negotiation of the treatment decision. Health Communication, 18(1), 41-74. doi:10.1207/s15327027hc1801_3.

    Abstract

    This article examines pediatrician-parent interaction in the context of acute pediatric encounters for children with upper respiratory infections. Parents and physicians orient to treatment recommendations as normatively requiring parent acceptance for physicians to close the activity. Through acceptance, withholding of acceptance, or active resistance, parents have resources with which to negotiate for a treatment outcome that is in line with their own wants. This article offers evidence that even in acute care, shared decision making not only occurs but, through normative constraints, is mandated for parents and physicians to reach accord in the treatment decision.
  • Stivers, T. (2005). Modified repeats: One method for asserting primary rights from second position. Research on Language and Social Interaction, 38(2), 131-158. doi:10.1207/s15327973rlsi3802_1.

    Abstract

    In this article I examine one practice speakers have for confirming when confirmation was not otherwise relevant. The practice involves a speaker repeating an assertion previously made by another speaker in modified form with stress on the copula/auxiliary. I argue that these modified repeats work to undermine the first speaker's default ownership and rights over the claim and instead assert the primacy of the second speaker's rights to make the statement. Two types of modified repeats are identified: partial and full. Although both involve competing for primacy of the claim, they occur in distinct sequential environments: The former are generally positioned after a first claim was epistemically downgraded, whereas the latter are positioned following initial claims that were offered straightforwardly, without downgrading.
  • Stivers, T., & Rossano, F. (2010). A scalar view of response relevance. Research on Language and Social Interaction, 43, 49-56. doi:10.1080/08351810903471381.
  • Stivers, T. (2010). An overview of the question-response system in American English conversation. Journal of Pragmatics, 42, 2772-2781. doi:10.1016/j.pragma.2010.04.011.

    Abstract

    This article, part of a 10-language comparative project on question–response sequences, discusses these sequences in American English conversation. The data are video-taped spontaneous naturally occurring conversations involving two to five adults. Relying on these data, I document the basic distributional patterns of the types of questions asked (polar, Q-word, or alternative, as well as sub-types), the types of social actions implemented by these questions (e.g., repair initiations, requests for confirmation, offers or requests for information), and the types of responses (e.g., repetitional answers or yes/no tokens). I show that declarative questions are used more commonly in conversation than traditional grammars of English would lead one to suspect, and that questions are used for a wider range of functions than grammars would suggest. Finally, this article offers distributional support for the idea that responses that are better “fitted” with the question are preferred.
  • Stivers, T., & Enfield, N. J. (2010). A coding scheme for question-response sequences in conversation. Journal of Pragmatics, 42, 2620-2626. doi:10.1016/j.pragma.2010.04.002.

    Abstract

    No abstract is available for this article.
  • Stivers, T., & Sidnell, J. (2005). Introduction: Multimodal interaction. Semiotica, 156(1/4), 1-20. doi:10.1515/semi.2005.2005.156.1.

    Abstract

    That human social interaction involves the intertwined cooperation of different modalities is uncontroversial. Researchers in several allied fields have, however, only recently begun to document the precise ways in which talk, gesture, gaze, and aspects of the material surround are brought together to form coherent courses of action. The papers in this volume are attempts to develop this line of inquiry. Although the authors draw on a range of analytic, theoretical, and methodological traditions (conversation analysis, ethnography, distributed cognition, and workplace studies), all are concerned to explore and illuminate the inherently multimodal character of social interaction. Recent studies, including those collected in this volume, suggest that different modalities work together not only to elaborate the semantic content of talk but also to constitute coherent courses of action. In this introduction we present evidence for this position. We begin by reviewing some select literature focusing primarily on communicative functions and interactive organizations of specific modalities before turning to consider the integration of distinct modalities in interaction.
  • Stivers, T. (2005). Non-antibiotic treatment recommendations: Delivery formats and implications for parent resistance. Social Science & Medicine, 60(5), 949-964. doi:10.1016/j.socscimed.2004.06.040.

    Abstract

    This study draws on a database of 570 community-based acute pediatric encounters in the USA and uses conversation analysis as a methodology to identify two formats physicians use to recommend non-antibiotic treatment in acute pediatric care (using a subset of 309 cases): recommendations for particular treatment (e.g., “I’m gonna give her some cough medicine.”) and recommendations against particular treatment (e.g., “She doesn’t need any antibiotics.”). The findings are that the presentation of a specific affirmative recommendation for treatment is less likely to engender parent resistance to a non-antibiotic treatment recommendation than a recommendation against particular treatment even if the physician later offers a recommendation for particular treatment. It is suggested that physicians who provide a specific positive treatment recommendation followed by a negative recommendation are most likely to attain parent alignment and acceptance when recommending a non-antibiotic treatment for a viral upper respiratory illness.
  • Stivers, T., & Rossano, F. (2010). Mobilizing response. Research on Language and Social Interaction, 43, 3-31. doi:10.1080/08351810903471258.

    Abstract

    A fundamental puzzle in the organization of social interaction concerns how one individual elicits a response from another. This article asks what it is about some sequentially initial turns that reliably mobilizes a coparticipant to respond and under what circumstances individuals are accountable for producing a response. Whereas a linguistic approach suggests that this is what “questions” (more generally) and interrogativity (more narrowly) are for, a sociological approach to social interaction suggests that the social action a person is implementing mobilizes a recipient's response. We find that although both theories have merit, neither adequately solves the puzzle. We argue instead that different actions mobilize response to different degrees. Speakers then design their turns to perform actions, and with particular response-mobilizing features of turn design speakers can hold recipients more accountable for responding or not. This model of response relevance allows sequential position, action, and turn design to each contribute to response relevance.
  • Stivers, T., Enfield, N. J., & Levinson, S. C. (Eds.). (2010). Question-response sequences in conversation across ten languages [Special Issue]. Journal of Pragmatics, 42(10). doi:10.1016/j.pragma.2010.04.001.
  • Stivers, T., Enfield, N. J., & Levinson, S. C. (2010). Question-response sequences in conversation across ten languages: An introduction. Journal of Pragmatics, 42, 2615-2619. doi:10.1016/j.pragma.2010.04.001.
  • Stivers, T., & Hayashi, M. (2010). Transformative answers: One way to resist a question's constraints. Language in Society, 39, 1-25. doi:10.1017/S0047404509990637.

    Abstract

    A number of Conversation Analytic studies have documented that question recipients have a variety of ways to push against the constraints that questions impose on them. This article explores the concept of transformative answers – answers through which question recipients retroactively adjust the question posed to them. Two main sorts of adjustments are discussed: question term transformations and question agenda transformations. It is shown that the operations through which interactants implement term transformations are different from the operations through which they implement agenda transformations. Moreover, term-transforming answers resist only the question’s design, while agenda-transforming answers effectively resist both design and agenda, thus implying that agenda-transforming answers resist more strongly than design-transforming answers. The implications of these different sorts of transformations for alignment and affiliation are then explored.
  • Striano, T., & Liszkowski, U. (2005). Sensitivity to the context of facial expression in the still face at 3-, 6-, and 9-months of age. Infant Behavior and Development, 28(1), 10-19. doi:10.1016/j.infbeh.2004.06.004.

    Abstract

    Thirty-eight 3-, 6-, and 9-month-old infants interacted in a face to face situation with a female stranger who disrupted the on-going interaction with 30 s Happy and Neutral still face episodes. Three- and 6-month-olds manifested a robust still face response for gazing and smiling. For smiling, 9-month-olds manifested a floor effect such that no still face effect could be shown. For gazing, 9-month-olds' still face response was modulated by the context of interaction such that it was less pronounced if a happy still face was presented first. The findings point to a developmental transition by the end of the first year, whereby infants' still face response becomes increasingly influenced by the context of social interaction.
  • Swingley, D. (2005). Statistical clustering and the contents of the infant vocabulary. Cognitive Psychology, 50(1), 86-132. doi:10.1016/j.cogpsych.2004.06.001.

    Abstract

    Infants parse speech into word-sized units according to biases that develop in the first year. One bias, present before the age of 7 months, is to cluster syllables that tend to co-occur. The present computational research demonstrates that this statistical clustering bias could lead to the extraction of speech sequences that are actual words, rather than missegmentations. In English and Dutch, these word-forms exhibit the strong–weak (trochaic) pattern that guides lexical segmentation after 8 months, suggesting that the trochaic parsing bias is learned as a generalization from statistically extracted bisyllables, and not via attention to short utterances or to high-frequency bisyllables. Extracted word-forms come from various syntactic classes, and exhibit distributional characteristics enabling rudimentary sorting of words into syntactic categories. The results highlight the importance of infants’ first year in language learning: though they may know the meanings of very few words, infants are well on their way to building a vocabulary.
  • Swingley, D. (2005). 11-month-olds' knowledge of how familiar words sound. Developmental Science, 8(5), 432-443. doi:10.1111/j.1467-7687.2005.00432.

    Abstract

    During the first year of life, infants' perception of speech becomes tuned to the phonology of the native language, as revealed in laboratory discrimination and categorization tasks using syllable stimuli. However, the implications of these results for the development of the early vocabulary remain controversial, with some results suggesting that infants retain only vague, sketchy phonological representations of words. Five experiments using a preferential listening procedure tested Dutch 11-month-olds' responses to word, nonword and mispronounced-word stimuli. Infants listened longer to words than nonwords, but did not exhibit this response when words were mispronounced at onset or at offset. In addition, infants preferred correct pronunciations to onset mispronunciations. The results suggest that infants' encoding of familiar words includes substantial phonological detail.
  • Tagliapietra, L., & McQueen, J. M. (2010). What and where in speech recognition: Geminates and singletons in spoken Italian. Journal of Memory and Language, 63, 306-323. doi:10.1016/j.jml.2010.05.001.

    Abstract

    Four cross-modal repetition priming experiments examined whether consonant duration in Italian provides listeners with information not only for segmental identification ("what" information: whether the consonant is a geminate or a singleton) but also for lexical segmentation ("where" information: whether the consonant is in word-initial or word-medial position). Italian participants made visual lexical decisions to words containing geminates or singletons, preceded by spoken primes (whole words or fragments) containing either geminates or singletons. There were effects of segmental identity (geminates primed geminate recognition; singletons primed singleton recognition), and effects of consonant position (regression analyses revealed graded effects of geminate duration only for geminates which can vary in position, and mixed-effect modeling revealed a positional effect for singletons only in low-frequency words). Durational information appeared to be more important for segmental identification than for lexical segmentation. These findings nevertheless indicate that the same kind of information can serve both "what" and "where" functions in speech comprehension, and that the perceptual processes underlying those functions are interdependent.
  • Takaso, H., Eisner, F., Wise, R. J. S., & Scott, S. K. (2010). The effect of delayed auditory feedback on activity in the temporal lobe while speaking: A Positron Emission Tomography study. Journal of Speech, Language, and Hearing Research, 53, 226-236. doi:10.1044/1092-4388(2009/09-0009).

    Abstract

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many non-stuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission tomography (PET) was used to image regional cerebral blood flow changes, an index of neural activity, and to assess the influence of increasing amounts of delay. Results: Delayed auditory feedback led to increased activation in the bilateral superior temporal lobes, extending into posterior-medial auditory areas. Similar peaks in the temporal lobe were sensitive to increases in the amount of delay. A single peak in the temporal parietal junction responded to the amount of delay but not to the presence of a delay (relative to no delay). Conclusions: This study permitted distinctions to be made between the neural response to hearing one's voice at a delay, and the neural activity that correlates with this delay. Notably, all the peaks showed some influence of the amount of delay. This result confirms a role for the posterior, sensori-motor ‘how’ system in the production of speech under conditions of delayed auditory feedback.
  • Telling, A. L., Kumar, S., Meyer, A. S., & Humphreys, G. W. (2010). Electrophysiological evidence of semantic interference in visual search. Journal of Cognitive Neuroscience, 22(10), 2212-2225. doi:10.1162/jocn.2009.21348.

    Abstract

    Visual evoked responses were monitored while participants searched for a target (e.g., bird) in a four-object display that could include a semantically related distractor (e.g., fish). The occurrence of both the target and the semantically related distractor modulated the N2pc response to the search display: The N2pc amplitude was more pronounced when the target and the distractor appeared in the same visual field, and it was less pronounced when the target and the distractor were in opposite fields, relative to when the distractor was absent. Earlier components (P1, N1) did not show any differences in activity across the different distractor conditions. The data suggest that semantic distractors influence early stages of selecting stimuli in multielement displays.
  • Telling, A. L., Meyer, A. S., & Humphreys, G. W. (2010). Distracted by relatives: Effects of frontal lobe damage on semantic distraction. Brain and Cognition, 73, 203-214. doi:10.1016/j.bandc.2010.05.004.

    Abstract

    When young adults carry out visual search, distractors that are semantically related, rather than unrelated, to targets can disrupt target selection (see [Belke et al., 2008] and [Moores et al., 2003]). This effect is apparent on the first eye movements in search, suggesting that attention is sometimes captured by related distractors. Here we assessed effects of semantically related distractors on search in patients with frontal-lobe lesions and compared them to the effects in age-matched controls. Compared with the controls, the patients were less likely to make a first saccade to the target and they were more likely to saccade to distractors (whether related or unrelated to the target). This suggests a deficit in a first stage of selecting a potential target for attention. In addition, the patients made more errors by responding to semantically related distractors on target-absent trials. This indicates a problem at a second stage of target verification, after items have been attended. The data suggest that frontal lobe damage disrupts both the ability to use peripheral information to guide attention, and the ability to keep separate the target of search from the related items, on occasions when related items achieve selection.
  • Terrill, A. (2010). [Review of Bowern, Claire. 2008. Linguistic fieldwork: a practical guide]. Language, 86(2), 435-438. doi:10.1353/lan.0.0214.
  • Terrill, A. (2010). [Review of R. A. Blust The Austronesian languages. 2009. Canberra: Pacific Linguistics]. Oceanic Linguistics, 49(1), 313-316. doi:10.1353/ol.0.0061.

    Abstract

    In lieu of an abstract, here is a preview of the article. This is a marvelous, dense, scholarly, detailed, exhaustive, and ambitious book. In 800-odd pages, it seeks to describe the whole huge majesty of the Austronesian language family, as well as the history of the family, the history of ideas relating to the family, and all the ramifications of such topics. Blust doesn't just describe; he goes into exhaustive detail, and not just over a few topics, but over every topic he covers. This is an incredible achievement, representing a lifetime of experience. This is not a book to be read from cover to cover—it is a book to be dipped into, pondered, and considered, slowly and carefully. The book is not organized by area or subfamily; readers interested in one area or family can consult the authoritative work on Western Austronesian (Adelaar and Himmelmann 2005), or, for the Oceanic languages, Lynch, Ross, and Crowley (2002). Rather, Blust's stated aim "is to provide a comprehensive overview of Austronesian languages which integrates areal interests into a broader perspective" (xxiii). Thus the aim is more ambitious than mere discussion of areal features or historical connections; the book seeks to describe the interconnections between these. The Austronesian language family is very large, second only in size to Niger-Congo (xxii). It encompasses over 1,000 members, and its protolanguage has been dated back to 6,000 years ago (xxii). The exact groupings of some Austronesian languages are still under discussion, but broadly, the family is divided into ten major subgroups, nine of which are spoken in Taiwan, the homeland of the Austronesian family. The tenth, Malayo-Polynesian, is itself divided into two major groups: Western Malayo-Polynesian, which is spread throughout the Philippines, Indonesia, and mainland Southeast Asia to Madagascar; and Central-Eastern Malayo-Polynesian, spoken from eastern Indonesia throughout the Pacific. The geographic, cultural, and linguistic diversity of the family
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2005). The acquisition of auxiliary syntax: BE and HAVE. Cognitive Linguistics, 16(1), 247-277. doi:10.1515/cogl.2005.16.1.247.

    Abstract

    This study examined patterns of auxiliary provision and omission for the auxiliaries BE and HAVE in a longitudinal data set from 11 children between the ages of two and three years. Four possible explanations for auxiliary omission—a lack of lexical knowledge, performance limitations in production, the Optional Infinitive hypothesis, and patterns of auxiliary use in the input—were examined. The data suggest that although none of these accounts provides a full explanation for the pattern of auxiliary use and nonuse observed in children's early speech, integrating input-based and lexical learning-based accounts of early language acquisition within a constructivist approach appears to provide a possible framework in which to understand the patterns of auxiliary use found in the children's speech. The implications of these findings for models of children's early language acquisition are discussed.
  • Torreira, F., Adda-Decker, M., & Ernestus, M. (2010). The Nijmegen corpus of casual French. Speech Communication, 52, 201-212. doi:10.1016/j.specom.2009.10.004.

    Abstract

    This article describes the preparation, recording and orthographic transcription of a new speech corpus, the Nijmegen Corpus of Casual French (NCCFr). The corpus contains a total of over 36 h of recordings of 46 French speakers engaged in conversations with friends. Casual speech was elicited during three different parts, which together provided around 90 min of speech from every pair of speakers. While Parts 1 and 2 did not require participants to perform any specific task, in Part 3 participants negotiated a common answer to general questions about society. Comparisons with the ESTER corpus of journalistic speech show that the two corpora contain speech of considerably different registers. A number of indicators of casualness, including swear words, casual words, verlan, disfluencies and word repetitions, are more frequent in the NCCFr than in the ESTER corpus, while the use of double negation, an indicator of formal speech, is less frequent. In general, these estimates of casualness are constant through the three parts of the recording sessions and across speakers. Based on these facts, we conclude that our corpus is a rich resource of highly casual speech, and that it can be effectively exploited by researchers in language science and technology.
  • Tucker, B. V., & Warner, N. (2010). What it means to be phonetic or phonological: The case of Romanian devoiced nasals. Phonology, 27, 289-324. doi:10.1017/S0952675710000138.

    Abstract

    Phonological patterns and detailed phonetic patterns can combine to produce unusual acoustic results, but criteria for what aspects of a pattern are phonetic and what aspects are phonological are often disputed. Early literature on Romanian makes mention of nasal devoicing in word-final clusters (e.g. in /basm/ 'fairy-tale'). Using acoustic, aerodynamic and ultrasound data, the current work investigates how syllable structure, prosodic boundaries, phonetic paradigm uniformity and assimilation influence Romanian nasal devoicing. It provides instrumental phonetic documentation of devoiced nasals, a phenomenon that has not been widely studied experimentally, in a phonetically underdocumented language. We argue that sound patterns should not be separated into phonetics and phonology as two distinct systems, but neither should they all be grouped together as a single, undifferentiated system. Instead, we argue for viewing the distinction between phonetics and phonology as a largely continuous multidimensional space, within which sound patterns, including Romanian nasal devoicing, fall.
  • Uddén, J., Folia, V., & Petersson, K. M. (2010). The neuropharmacology of implicit learning. Current Neuropharmacology, 8, 367-381. doi:10.2174/157015910793358178.

    Abstract

    Two decades of pharmacologic research on the human capacity to implicitly acquire knowledge as well as cognitive skills and procedures have yielded surprisingly few conclusive insights. We review the empirical literature on the neuropharmacology of implicit learning. We evaluate the findings in the context of relevant computational models related to neurotransmitters such as dopamine, serotonin, acetylcholine and noradrenalin. These include models for reinforcement learning, sequence production, and categorization. We conclude, based on the reviewed literature, that one can predict improved implicit acquisition by moderately elevated dopamine levels and impaired implicit acquisition by moderately decreased dopamine levels. These effects are most prominent in the dorsal striatum. This is supported by a range of behavioral tasks in the empirical literature. Similar predictions can be made for serotonin, although there is as yet a lack of support in the literature for serotonin involvement in classical implicit learning tasks. There is currently a lack of evidence for a role of the noradrenergic and cholinergic systems in implicit and related forms of learning. GABA modulators, including benzodiazepines, seem to affect implicit learning in a complex manner, and further research is needed. Finally, we identify allosteric AMPA receptor modulators as a potentially interesting target for future investigation of the neuropharmacology of procedural and implicit learning.
  • Vainio, M., Järvikivi, J., Aalto, D., & Suni, A. (2010). Phonetic tone signals phonological quantity and word structure. Journal of the Acoustical Society of America, 128, 1313-1321. doi:10.1121/1.3467767.

    Abstract

    Many languages exploit suprasegmental devices in signaling word meaning. Tone languages exploit fundamental frequency, whereas quantity languages rely on segmental durations to distinguish otherwise similar words. Traditionally, duration and tone have been taken as mutually exclusive. However, some evidence suggests that, in addition to durational cues, phonological quantity is associated with and co-signaled by changes in fundamental frequency in quantity languages such as Finnish, Estonian, and Serbo-Croat. The results from the present experiment show that the structure of disyllabic word stems in Finnish is indeed signaled tonally and that the phonological length of the stressed syllable is further tonally distinguished within the disyllabic sequence. The results further indicate that the observed association of tone and duration in perception is systematically exploited in speech production in Finnish.
  • Van Wijk, C., & Kempen, G. (1982). De ontwikkeling van syntactische formuleervaardigheid bij kinderen van 9 tot 16 jaar. Nederlands Tijdschrift voor de Psychologie en haar Grensgebieden, 37(8), 491-509.

    Abstract

    An essential phenomenon in the development towards syntactic maturity after early childhood is the increasing use of so-called sentence-combining transformations. Especially by using subordination, complex sentences are produced. The research reported here is an attempt to arrive at a more adequate characterization and explanation. Our starting point was an analysis of 280 texts written by Dutch-speaking pupils of the two highest grades of the primary school and the four lowest grades of three different types of secondary education. It was examined whether systematic shifts in the use of certain groups of so-called function words could be traced. We concluded that the development of the syntactic formulating ability can be characterized as an increase in connectivity: the use of all kinds of function words which explicitly mark logico-semantic relations between propositions. This development starts by inserting special adverbs and coordinating conjunctions resulting in various types of coordination. In a later stage, the syntactic patterning of the sentence is affected as well (various types of subordination). The increase in sentence complexity is only one aspect of the entire development. An explanation for the increase in connectivity is offered based upon a distinction between narrative and expository language use. The latter, but not the former, is characterized by frequent occurrence of connectives. The development in syntactic formulating ability includes a high level of skill in expository language use. Speed of development is determined by intensity of training, e.g. in scholastic and occupational settings.
  • Van de Geer, J. P., & Levelt, W. J. M. (1963). Detection of visual patterns disturbed by noise: An exploratory study. Quarterly Journal of Experimental Psychology, 15, 192-204. doi:10.1080/17470216308416324.

    Abstract

    An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns) led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight into the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially for patterns with little spread. Second, this shows connections with the factors obtained from analysis of the semantic ratings, e.g. easily disturbed patterns show a large drop in the semantic regularity factor when only a little noise is added.
  • Van Gijn, R. (2010). [Review of the book Complementation ed. by R. M. W. Dixon, A. Aikhenvald]. Studies in Language, 34(1), 187-194. doi:10.1075/sl.34.1.06van.
  • Van Putten, S. (2010). [Review of the book Focus structures in African languages: The interaction of focus and grammar, edited by Enoch Oladé Aboh, Katharina Hartmann & Malte Zimmermann]. Journal of African Languages and Linguistics, 31(1), 101-104. doi:10.1515/JALL.2010.006.
  • Van Gijn, R., & Hirtzel, V. (2010). [Review of the book The Anthropology of color, ed. by Robert E. MacLaury, Galina V. Paramei and Don Dedrick]. Journal of Linguistic Anthropology, 20(1), 241-245.
  • Van Donselaar, W., Koster, M., & Cutler, A. (2005). Exploring the role of lexical stress in lexical recognition. Quarterly Journal of Experimental Psychology, 58A(2), 251-273. doi:10.1080/02724980343000927.

    Abstract

    Three cross-modal priming experiments examined the role of suprasegmental information in the processing of spoken words. All primes consisted of truncated spoken Dutch words. Recognition of visually presented word targets was facilitated by prior auditory presentation of the first two syllables of the same words as primes, but only if they were appropriately stressed (e.g., OKTOBER preceded by okTO-); inappropriate stress, compatible with another word (e.g., OKTOBER preceded by OCto-, the beginning of octopus), produced inhibition. Monosyllabic fragments (e.g., OC-) also produced facilitation when appropriately stressed; if inappropriately stressed, they produced neither facilitation nor inhibition. The bisyllabic fragments that were compatible with only one word produced facilitation to semantically associated words, but inappropriate stress caused no inhibition of associates. The results are explained within a model of spoken-word recognition involving competition between simultaneously activated phonological representations followed by activation of separate conceptual representations for strongly supported lexical candidates; at the level of the phonological representations, activation is modulated by both segmental and suprasegmental information.
  • Van Berkum, J. J. A., Brown, C. M., Zwitserlood, P., Kooijman, V., & Hagoort, P. (2005). Anticipating upcoming words in discourse: Evidence from ERPs and reading times. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(3), 443-467. doi:10.1037/0278-7393.31.3.443.

    Abstract

    The authors examined whether people can use their knowledge of the wider discourse rapidly enough to anticipate specific upcoming words as a sentence is unfolding. In an event-related brain potential (ERP) experiment, subjects heard Dutch stories that supported the prediction of a specific noun. To probe whether this noun was anticipated at a preceding indefinite article, stories were continued with a gender-marked adjective whose suffix mismatched the upcoming noun's syntactic gender. Prediction-inconsistent adjectives elicited a differential ERP effect, which disappeared in a no-discourse control experiment. Furthermore, in self-paced reading, prediction-inconsistent adjectives slowed readers down before the noun. These findings suggest that people can indeed predict upcoming words in fluent discourse and, moreover, that these predicted words can immediately begin to participate in incremental parsing operations.
  • Van Halteren, H., Baayen, R. H., Tweedie, F., Haverkort, M., & Neijt, A. (2005). New machine learning methods demonstrate the existence of a human stylome. Journal of Quantitative Linguistics, 12(1), 65-77. doi:10.1080/09296170500055350.

    Abstract

    Earlier research has shown that established authors can be distinguished by measuring specific properties of their writings, their stylome as it were. Here, we examine writings of less experienced authors. We succeed in distinguishing between these authors with a very high probability, which implies that a stylome exists even in the general population. However, the number of traits needed for so successful a distinction is an order of magnitude larger than assumed so far. Furthermore, traits referring to syntactic patterns prove less distinctive than traits referring to vocabulary, but much more distinctive than expected on the basis of current generativist theories of language learning.
  • Van der Linden, M., Van Turennout, M., & Indefrey, P. (2010). Formation of category representations in superior temporal sulcus. Journal of Cognitive Neuroscience, 22, 1270-1282. doi:10.1162/jocn.2009.21270.

    Abstract

    The human brain contains cortical areas specialized in representing object categories. Visual experience is known to change the responses in these category-selective areas of the brain. However, little is known about how category training specifically affects cortical category selectivity. Here, we investigated the experience-dependent formation of object categories using an fMRI adaptation paradigm. Outside the scanner, subjects were trained to categorize artificial bird types into arbitrary categories (jungle birds and desert birds). After training, neuronal populations in the occipito-temporal cortex, such as the fusiform and the lateral occipital gyrus, were highly sensitive to perceptual stimulus differences. This sensitivity was not present for novel birds, indicating experience-related changes in neuronal representations. Neurons in STS showed category selectivity. A release from adaptation in STS was only observed when two birds in a pair crossed the category boundary. This dissociation could not be explained by perceptual similarities because the physical difference between birds from the same side of the category boundary and between birds from opposite sides of the category boundary was equal. Together, the occipito-temporal cortex and the STS have the properties suitable for a system that can both generalize across stimuli and discriminate between them.
  • Van Gijn, R. (2010). Middle voice and ideophones, a diachronic connection: The case of Yurakaré. Studies in Language, 34, 273-297. doi:10.1075/sl.34.2.02gij.

    Abstract

    Kemmer (1993) argues that middle voice markers almost always arise diachronically through the semantic extension of a reflexive marker to other semantic uses related to reflexive. In this paper I will argue for an alternative diachronic path that has led to the development of the middle marker in Yurakaré (unclassified, Bolivia): through ideophone-verb constructions. Taking this perspective helps explain a number of synchronic peculiarities of the middle marker in Yurakaré, and it introduces a previously unnoticed channel for middle voice markers to arise.
  • Van Alphen, P. M., & Van Berkum, J. J. A. (2010). Is there pain in champagne? Semantic involvement of words within words during sense-making. Journal of Cognitive Neuroscience, 22, 2618-2626. doi:10.1162/jocn.2009.21336.

    Abstract

    In an ERP experiment, we examined whether listeners, when making sense of spoken utterances, take into account the meaning of spurious words that are embedded in longer words, either at their onsets (e.g., pie in pirate) or at their offsets (e.g., pain in champagne). In the experiment, Dutch listeners heard Dutch words with initial or final embeddings presented in a sentence context that did or did not support the meaning of the embedded word, while equally supporting the longer carrier word. The N400 at the carrier words was modulated by the semantic fit of the embedded words, indicating that listeners briefly relate the meaning of initial- and final-embedded words to the sentential context, even though these words were not intended by the speaker. These findings help us understand the dynamics of initial sense-making and its link to lexical activation. In addition, they shed new light on the role of lexical competition and the debate concerning the lexical activation of final-embedded words.