Publications

  • Seyfeddinipur, M., Kita, S., & Indefrey, P. (2008). How speakers interrupt themselves in managing problems in speaking: Evidence from self-repairs. Cognition, 108(3), 837-842. doi:10.1016/j.cognition.2008.05.004.

    Abstract

    When speakers detect a problem in what they are saying, they must decide whether or not to interrupt themselves and repair the problem, and if so, when. Speakers will maximize accuracy if they interrupt themselves as soon as they detect a problem, but they will maximize fluency if they go on speaking until they are ready to produce the repair. Speakers must choose between these options. In a corpus analysis, we identified 448 speech repairs, classified them as major (as in a fresh start) or minor (as in a phoneme correction), and measured the interval between suspension and repair. The results showed that speakers interrupted themselves not at the moment they detected the problem but at the moment they were ready to produce the repair. Speakers preferred fluency over accuracy.
  • Shapiro, K. A., Mottaghy, F. M., Schiller, N. O., Poeppel, T. D., Flüss, M. O., Müller, H. W., Caramazza, A., & Krause, B. J. (2005). Dissociating neural correlates for nouns and verbs. NeuroImage, 24(4), 1058-1067. doi:10.1016/j.neuroimage.2004.10.015.

    Abstract

    Dissociations in the ability to produce words of different grammatical categories are well established in neuropsychology but have not been corroborated fully with evidence from brain imaging. Here we report on a PET study designed to reveal the anatomical correlates of grammatical processes involving nouns and verbs. German-speaking subjects were asked to produce either plural and singular nouns, or first-person plural and singular verbs. Verbs, relative to nouns, activated a left frontal cortical network, while the opposite contrast (nouns–verbs) showed greater activation in temporal regions bilaterally. Similar patterns emerged when subjects performed the task with pseudowords used as nouns or as verbs. These results converge with findings from lesion studies and suggest that grammatical category is an important dimension of organization for knowledge of language in the brain.
  • Sharp, D. J., Scott, S. K., Cutler, A., & Wise, R. J. S. (2005). Lexical retrieval constrained by sound structure: The role of the left inferior frontal gyrus. Brain and Language, 92(3), 309-319. doi:10.1016/j.bandl.2004.07.002.

    Abstract

    Positron emission tomography was used to investigate two competing hypotheses about the role of the left inferior frontal gyrus (IFG) in word generation. One proposes a domain-specific organization, with neural activation dependent on the type of information being processed, i.e., surface sound structure or semantics. The other proposes a process-specific organization, with activation dependent on processing demands, such as the amount of selection needed to decide between competing lexical alternatives. In a novel word retrieval task, word reconstruction (WR), subjects generated real words from heard non-words by the substitution of either a vowel or consonant. Both types of lexical retrieval, informed by sound structure alone, produced activation within anterior and posterior left IFG regions. Within these regions there was greater activity for consonant WR, which is more difficult and imposes greater processing demands. These results support a process-specific organization of the anterior left IFG.
  • Shatzman, K. B., & Schiller, N. O. (2004). The word frequency effect in picture naming: Contrasting two hypotheses using homonym pictures. Brain and Language, 90(1-3), 160-169. doi:10.1016/S0093-934X(03)00429-2.

    Abstract

    Models of speech production disagree on whether or not homonyms have a shared word-form representation. To investigate this issue, a picture-naming experiment was carried out using Dutch homonyms of which both meanings could be presented as a picture. Naming latencies for the low-frequency meanings of homonyms were slower than for those of the high-frequency meanings. However, no frequency effect was found for control words, which were matched to the frequencies of the homonyms' meanings. Subsequent control experiments indicated that the difference in naming latencies for the homonyms could be attributed to processes earlier than word-form retrieval. Specifically, it appears that low name agreement slowed down the naming of the low-frequency homonym pictures.
  • Sidnell, J., & Stivers, T. (Eds.). (2005). Multimodal Interaction [Special Issue]. Semiotica, 156.
  • Skiba, R., Wittenburg, F., & Trilsbeek, P. (2004). New DoBeS web site: Contents & functions. Language Archive Newsletter, 1(2), 4-4.
  • Srivastava, S., Budwig, N., & Narasimhan, B. (2005). A developmental-functionalist view of the development of transitive and intransitive constructions in a Hindi-speaking child: A case study. International Journal of Idiographic Science, 2.
  • Stefansson, H., Rujescu, D., Cichon, S., Pietilainen, O. P. H., Ingason, A., Steinberg, S., Fossdal, R., Sigurdsson, E., Sigmundsson, T., Buizer-Voskamp, J. E., Hansen, T., Jakobsen, K. D., Muglia, P., Francks, C., Matthews, P. M., Gylfason, A., Halldorsson, B. V., Gudbjartsson, D., Thorgeirsson, T. E., Sigurdsson, A., Jonasdottir, A., Jonasdottir, A., Bjornsson, A., Mattiasdottir, S., Blondal, T., Haraldsson, M., Magnusdottir, B. B., Giegling, I., Möller, H.-J., Hartmann, A., Shianna, K. V., Ge, D., Need, A. C., Crombie, C., Fraser, G., Walker, N., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Paunio, T., Toulopoulou, T., Bramon, E., Forti, M. D., Murray, R., Ruggeri, M., Vassos, E., Tosato, S., Walshe, M., Li, T., Vasilescu, C., Muhleisen, T. W., Wang, A. G., Ullum, H., Djurovic, S., Melle, I., Olesen, J., Kiemeney, L. A., Franke, B., Sabatti, C., Freimer, N. B., Gulcher, J. R., Thorsteinsdottir, U., Kong, A., Andreassen, O. A., Ophoff, R. A., Georgi, A., Rietschel, M., Werge, T., Petursson, H., Goldstein, D. B., Nothen, M. M., Peltonen, L., Collier, D. A., St. Clair, D., & Stefansson, K. (2008). Large recurrent microdeletions associated with schizophrenia [Letter to Nature]. Nature, 455(7210), 232-236. doi:10.1038/nature07229.

    Abstract

    Reduced fecundity, associated with severe mental disorders, places negative selection pressure on risk alleles and may explain, in part, why common variants have not been found that confer risk of disorders such as autism, schizophrenia and mental retardation. Thus, rare variants may account for a larger fraction of the overall genetic risk than previously assumed. In contrast to rare single nucleotide mutations, rare copy number variations (CNVs) can be detected using genome-wide single nucleotide polymorphism arrays. This has led to the identification of CNVs associated with mental retardation and autism. In a genome-wide search for CNVs associating with schizophrenia, we used a population-based sample to identify de novo CNVs by analysing 9,878 transmissions from parents to offspring. The 66 de novo CNVs identified were tested for association in a sample of 1,433 schizophrenia cases and 33,250 controls. Three deletions at 1q21.1, 15q11.2 and 15q13.3 showing nominal association with schizophrenia in the first sample (phase I) were followed up in a second sample of 3,285 cases and 7,951 controls (phase II). All three deletions significantly associate with schizophrenia and related psychoses in the combined sample. The identification of these rare, recurrent risk variants, having occurred independently in multiple founders and being subject to negative selection, is important in itself. CNV analysis may also point the way to the identification of additional and more prevalent risk variants in genes and pathways involved in schizophrenia.

    Additional information

    Suppl.Material.pdf
  • Stivers, T. (2004). Potilaan vastarinta: Keino vaikuttaa lääkärin hoitopäätökseen. Sosiaalilääketieteellinen Aikakauslehti, 41, 199-213.
  • Stivers, T. (2005). Parent resistance to physicians' treatment recommendations: One resource for initiating a negotiation of the treatment decision. Health Communication, 18(1), 41-74. doi:10.1207/s15327027hc1801_3.

    Abstract

    This article examines pediatrician-parent interaction in the context of acute pediatric encounters for children with upper respiratory infections. Parents and physicians orient to treatment recommendations as normatively requiring parent acceptance for physicians to close the activity. Through acceptance, withholding of acceptance, or active resistance, parents have resources with which to negotiate for a treatment outcome that is in line with their own wants. This article offers evidence that even in acute care, shared decision making not only occurs but, through normative constraints, is mandated for parents and physicians to reach accord in the treatment decision.
  • Stivers, T. (2008). Stance, alignment, and affiliation during storytelling: When nodding is a token of affiliation. Research on Language and Social Interaction, 41(1), 31-57. doi:10.1080/08351810701691123.

    Abstract

    Through stories, tellers communicate their stance toward what they are reporting. Story recipients rely on different interactional resources to display alignment with the telling activity and affiliation with the teller's stance. In this article, I examine the communication resources that participants in tellings rely on to manage displays of alignment and affiliation during the telling. The primary finding is that whereas vocal continuers simply align with the activity in progress, nods also claim access to the teller's stance toward the events (whether directly or indirectly). In mid-telling, when a recipient nods, she or he claims to have access to the teller's stance toward the event being reported, which in turn conveys preliminary affiliation with the teller's position and that the story is on track toward preferred uptake at story completion. Thus, the concepts of structural alignment and social affiliation are separate interactional issues and are managed by different response tokens in the mid-telling sequential environment.
  • Stivers, T. (2005). Modified repeats: One method for asserting primary rights from second position. Research on Language and Social Interaction, 38(2), 131-158. doi:10.1207/s15327973rlsi3802_1.

    Abstract

    In this article I examine one practice speakers have for confirming when confirmation was not otherwise relevant. The practice involves a speaker repeating an assertion previously made by another speaker in modified form with stress on the copula/auxiliary. I argue that these modified repeats work to undermine the first speaker's default ownership and rights over the claim and instead assert the primacy of the second speaker's rights to make the statement. Two types of modified repeats are identified: partial and full. Although both involve competing for primacy of the claim, they occur in distinct sequential environments: The former are generally positioned after a first claim was epistemically downgraded, whereas the latter are positioned following initial claims that were offered straightforwardly, without downgrading.
  • Stivers, T. (2004). "No no no" and other types of multiple sayings in social interaction. Human Communication Research, 30(2), 260-293. doi:10.1111/j.1468-2958.2004.tb00733.x.

    Abstract

    Relying on the methodology of conversation analysis, this article examines a practice in ordinary conversation characterized by the resaying of a word, phrase, or sentence. The article shows that multiple sayings such as "No no no" or "Alright alright alright" are systematic in both their positioning relative to the interlocutor's talk and in their function. Specifically, the findings are that multiple sayings are a resource speakers have to display that their turn is addressing an in-progress course of action rather than only the just-prior utterance. Speakers of multiple sayings communicate their stance that the prior speaker has persisted unnecessarily in the prior course of action and should properly halt that course of action.
  • Stivers, T., & Sidnell, J. (2005). Introduction: Multimodal interaction. Semiotica, 156(1/4), 1-20. doi:10.1515/semi.2005.2005.156.1.

    Abstract

    That human social interaction involves the intertwined cooperation of different modalities is uncontroversial. Researchers in several allied fields have, however, only recently begun to document the precise ways in which talk, gesture, gaze, and aspects of the material surround are brought together to form coherent courses of action. The papers in this volume are attempts to develop this line of inquiry. Although the authors draw on a range of analytic, theoretical, and methodological traditions (conversation analysis, ethnography, distributed cognition, and workplace studies), all are concerned to explore and illuminate the inherently multimodal character of social interaction. Recent studies, including those collected in this volume, suggest that different modalities work together not only to elaborate the semantic content of talk but also to constitute coherent courses of action. In this introduction we present evidence for this position. We begin by reviewing some select literature focusing primarily on communicative functions and interactive organizations of specific modalities before turning to consider the integration of distinct modalities in interaction.
  • Stivers, T. (2005). Non-antibiotic treatment recommendations: Delivery formats and implications for parent resistance. Social Science & Medicine, 60(5), 949-964. doi:10.1016/j.socscimed.2004.06.040.

    Abstract

    This study draws on a database of 570 community-based acute pediatric encounters in the USA and uses conversation analysis as a methodology to identify two formats physicians use to recommend non-antibiotic treatment in acute pediatric care (using a subset of 309 cases): recommendations for particular treatment (e.g., “I’m gonna give her some cough medicine.”) and recommendations against particular treatment (e.g., “She doesn’t need any antibiotics.”). The findings are that the presentation of a specific affirmative recommendation for treatment is less likely to engender parent resistance to a non-antibiotic treatment recommendation than a recommendation against particular treatment even if the physician later offers a recommendation for particular treatment. It is suggested that physicians who provide a specific positive treatment recommendation followed by a negative recommendation are most likely to attain parent alignment and acceptance when recommending a non-antibiotic treatment for a viral upper respiratory illness.
  • Striano, T., & Liszkowski, U. (2005). Sensitivity to the context of facial expression in the still face at 3-, 6-, and 9-months of age. Infant Behavior and Development, 28(1), 10-19. doi:10.1016/j.infbeh.2004.06.004.

    Abstract

    Thirty-eight 3-, 6-, and 9-month-old infants interacted in a face-to-face situation with a female stranger who disrupted the ongoing interaction with 30-s Happy and Neutral still face episodes. Three- and 6-month-olds manifested a robust still face response for gazing and smiling. For smiling, 9-month-olds manifested a floor effect such that no still face effect could be shown. For gazing, 9-month-olds' still face response was modulated by the context of interaction such that it was less pronounced if a happy still face was presented first. The findings point to a developmental transition by the end of the first year, whereby infants' still face response becomes increasingly influenced by the context of social interaction.
  • Swingley, D. (2005). Statistical clustering and the contents of the infant vocabulary. Cognitive Psychology, 50(1), 86-132. doi:10.1016/j.cogpsych.2004.06.001.

    Abstract

    Infants parse speech into word-sized units according to biases that develop in the first year. One bias, present before the age of 7 months, is to cluster syllables that tend to co-occur. The present computational research demonstrates that this statistical clustering bias could lead to the extraction of speech sequences that are actual words, rather than missegmentations. In English and Dutch, these word-forms exhibit the strong–weak (trochaic) pattern that guides lexical segmentation after 8 months, suggesting that the trochaic parsing bias is learned as a generalization from statistically extracted bisyllables, and not via attention to short utterances or to high-frequency bisyllables. Extracted word-forms come from various syntactic classes, and exhibit distributional characteristics enabling rudimentary sorting of words into syntactic categories. The results highlight the importance of infants’ first year in language learning: though they may know the meanings of very few words, infants are well on their way to building a vocabulary.
  • Swingley, D. (2005). 11-month-olds' knowledge of how familiar words sound. Developmental Science, 8(5), 432-443. doi:10.1111/j.1467-7687.2005.00432.

    Abstract

    During the first year of life, infants' perception of speech becomes tuned to the phonology of the native language, as revealed in laboratory discrimination and categorization tasks using syllable stimuli. However, the implications of these results for the development of the early vocabulary remain controversial, with some results suggesting that infants retain only vague, sketchy phonological representations of words. Five experiments using a preferential listening procedure tested Dutch 11-month-olds' responses to word, nonword and mispronounced-word stimuli. Infants listened longer to words than nonwords, but did not exhibit this response when words were mispronounced at onset or at offset. In addition, infants preferred correct pronunciations to onset mispronunciations. The results suggest that infants' encoding of familiar words includes substantial phonological detail.
  • Swinney, D. A., & Cutler, A. (1979). The access and processing of idiomatic expressions. Journal of Verbal Learning and Verbal Behavior, 18, 523-534. doi:10.1016/S0022-5371(79)90284-6.

    Abstract

    Two experiments examined the nature of access, storage, and comprehension of idiomatic phrases. In both studies a Phrase Classification Task was utilized. In this, reaction times to determine whether or not word strings constituted acceptable English phrases were measured. Classification times were significantly faster to idiom than to matched control phrases. This effect held under conditions involving different categories of idioms, different transitional probabilities among words in the phrases, and different levels of awareness of the presence of idioms in the materials. The data support a Lexical Representation Hypothesis for the processing of idioms.
  • Taylor, L. J., Lev-Ari, S., & Zwaan, R. A. (2008). Inferences about action engage action systems. Brain and Language, 107(1), 62-67. doi:10.1016/j.bandl.2007.08.004.

    Abstract

    Verbal descriptions of actions activate compatible motor responses [Glenberg, A. M., & Kaschak, M. P. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9, 558–565]. Previous studies have found that the motor processes for manual rotation are engaged in a direction-specific manner when a verb disambiguates the direction of rotation [e.g. “unscrewed;” Zwaan, R. A., & Taylor, L. (2006). Seeing, acting, understanding: Motor resonance in language comprehension. Journal of Experimental Psychology: General, 135, 1–11]. The present experiment contributes to this body of work by showing that verbs that leave direction ambiguous (e.g. “turned”) do not necessarily yield such effects. Rather, motor resonance is associated with a word that disambiguates some element of an action, as meaning is being integrated across sentences. The findings are discussed within the context of discourse processes, inference generation, motor activation, and mental simulation.
  • Tendolkar, I., Arnold, J., Petersson, K. M., Weis, S., Brockhaus-Dumke, A., Van Eijndhoven, P., Buitelaar, J., & Fernandez, G. (2008). Contributions of the medial temporal lobe to declarative memory retrieval: Manipulating the amount of contextual retrieval. Learning and Memory, 15(9), 611-617. doi:10.1101/lm.916708.

    Abstract

    We investigated how the hippocampus and its adjacent mediotemporal structures contribute to contextual and noncontextual declarative memory retrieval by manipulating the amount of contextual information across two levels of the same contextual dimension in a source memory task. A first analysis identified medial temporal lobe (MTL) substructures mediating either contextual or noncontextual retrieval. A linearly weighted analysis elucidated which MTL substructures show a gradually increasing neural activity, depending on the amount of contextual information retrieved. A hippocampal engagement was found during both levels of source memory but not during item memory retrieval. The anterior MTL including the perirhinal cortex was only engaged during item memory retrieval by an activity decrease. Only the posterior parahippocampal cortex showed an activation increasing with the amount of contextual information retrieved. If one assumes a roughly linear relationship between the blood-oxygenation level-dependent (BOLD) signal and the associated cognitive process, our results suggest that the posterior parahippocampal cortex is involved in contextual retrieval on the basis of memory strength while the hippocampus processes representations of item-context binding. The anterior MTL including perirhinal cortex seems to be particularly engaged in familiarity-based item recognition. If one assumes departure from linearity, however, our results can also be explained by one-dimensional modulation of memory strength.
  • Terrill, A., & Burenhult, N. (2008). Orientation as a strategy of spatial reference. Studies in Language, 32(1), 93-136. doi:10.1075/sl.32.1.05ter.

    Abstract

    This paper explores a strategy of spatial expression which utilizes orientation, a way of describing the spatial relationship of entities by means of reference to their facets. We present detailed data and analysis from two languages, Jahai (Mon-Khmer, Malay Peninsula) and Lavukaleve (Papuan isolate, Solomon Islands), and supporting data from five more languages, to show that the orientation strategy is a major organizing principle in these languages. This strategy has not previously been recognized in the literature as a unitary phenomenon, and the languages which employ it present particular challenges to existing typologies of spatial frames of reference.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2005). The acquisition of auxiliary syntax: BE and HAVE. Cognitive Linguistics, 16(1), 247-277. doi:10.1515/cogl.2005.16.1.247.

    Abstract

    This study examined patterns of auxiliary provision and omission for the auxiliaries BE and HAVE in a longitudinal data set from 11 children between the ages of two and three years. Four possible explanations for auxiliary omission—a lack of lexical knowledge, performance limitations in production, the Optional Infinitive hypothesis, and patterns of auxiliary use in the input—were examined. The data suggest that although none of these accounts provides a full explanation for the pattern of auxiliary use and nonuse observed in children's early speech, integrating input-based and lexical learning-based accounts of early language acquisition within a constructivist approach appears to provide a possible framework in which to understand the patterns of auxiliary use found in the children's speech. The implications of these findings for models of children's early language acquisition are discussed.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2004). Semantic generality, input frequency and the acquisition of syntax. Journal of Child Language, 31(1), 61-99. doi:10.1017/S0305000903005956.

    Abstract

    In many areas of language acquisition, researchers have suggested that semantic generality plays an important role in determining the order of acquisition of particular lexical forms. However, generality is typically confounded with the effects of input frequency and it is therefore unclear to what extent semantic generality or input frequency determines the early acquisition of particular lexical items. The present study evaluates the relative influence of semantic status and properties of the input on the acquisition of verbs and their argument structures in the early speech of 9 English-speaking children from 2;0 to 3;0. The children's early verb utterances are examined with respect to (1) the order of acquisition of particular verbs in three different constructions, (2) the syntactic diversity of use of individual verbs, (3) the relative proportional use of semantically general verbs as a function of total verb use, and (4) their grammatical accuracy. The data suggest that although measures of semantic generality correlate with various measures of early verb use, once the effects of verb use in the input are removed, semantic generality is not a significant predictor of early verb use. The implications of these results for semantic-based theories of verb argument structure acquisition are discussed.
  • Toni, I., De Lange, F. P., Noordzij, M. L., & Hagoort, P. (2008). Language beyond action. Journal of Physiology-Paris, 102, 71-79. doi:10.1016/j.jphysparis.2008.03.005.

    Abstract

    The discovery of mirror neurons in macaques and of a similar system in humans has provided a new and fertile neurobiological ground for rooting a variety of cognitive faculties. Automatic sensorimotor resonance has been invoked as the key elementary process accounting for disparate (dys)functions, like imitation, ideomotor apraxia, autism, and schizophrenia. In this paper, we provide a critical appraisal of three of these claims that deal with the relationship between language and the motor system. Does language comprehension require the motor system? Was there an evolutionary switch from manual gestures to speech as the primary mode of language? Is human communication explained by automatic sensorimotor resonances? A positive answer to these questions would open the tantalizing possibility of bringing language and human communication within the fold of the motor system. We argue that the available empirical evidence does not appear to support these claims, and their theoretical scope fails to account for some crucial features of the phenomena they are supposed to explain. Without denying the enormous importance of the discovery of mirror neurons, we highlight the limits of their explanatory power for understanding language and communication.
  • Trilsbeek, P. (2004). Report from DoBeS training week. Language Archive Newsletter, 1(3), 12-12.
  • Trilsbeek, P. (2004). DoBeS Training Course. Language Archive Newsletter, 1(2), 6-6.
  • Uddén, J., Folia, V., Forkstam, C., Ingvar, M., Fernández, G., Overeem, S., Van Elswijk, G., Hagoort, P., & Petersson, K. M. (2008). The inferior frontal cortex in artificial syntax processing: An rTMS study. Brain Research, 1224, 69-78. doi:10.1016/j.brainres.2008.05.070.

    Abstract

    The human capacity to implicitly acquire knowledge of structured sequences has recently been investigated in artificial grammar learning using functional magnetic resonance imaging. It was found that the left inferior frontal cortex (IFC; Brodmann's area (BA) 44/45) was related to classification performance. The objective of this study was to investigate whether the IFC (BA 44/45) is causally related to classification of artificial syntactic structures by means of an off-line repetitive transcranial magnetic stimulation (rTMS) paradigm. We manipulated the stimulus material in a 2 × 2 factorial design with grammaticality status and local substring familiarity as factors. The participants showed a reliable effect of grammaticality on classification of novel items after 5 days of exposure to grammatical exemplars without performance feedback in an implicit acquisition task. The results show that rTMS of BA 44/45 improves syntactic classification performance by increasing the rejection rate of non-grammatical items and by shortening reaction times of correct rejections specifically after left-sided stimulation. A similar pattern of results is observed in FMRI experiments on artificial syntactic classification. These results suggest that activity in the inferior frontal region is causally related to artificial syntax processing.
  • Van Berkum, J. J. A., Van den Brink, D., Tesink, C. M. J. Y., Kos, M., & Hagoort, P. (2008). The neural integration of speaker and message. Journal of Cognitive Neuroscience, 20(4), 580-591. doi:10.1162/jocn.2008.20054.

    Abstract

    When do listeners take into account who the speaker is? We asked people to listen to utterances whose content sometimes did not match inferences based on the identity of the speaker (e.g., “If only I looked like Britney Spears” in a male voice, or “I have a large tattoo on my back” spoken with an upper-class accent). Event-related brain responses revealed that the speaker's identity is taken into account as early as 200–300 msec after the beginning of a spoken word, and is processed by the same early interpretation mechanism that constructs sentence meaning based on just the words. This finding is difficult to reconcile with standard “Gricean” models of sentence interpretation in which comprehenders initially compute a local, context-independent meaning for the sentence (“semantics”) before working out what it really means given the wider communicative context and the particular speaker (“pragmatics”). Because the observed brain response hinges on voice-based and usually stereotype-dependent inferences about the speaker, it also shows that listeners rapidly classify speakers on the basis of their voices and bring the associated social stereotypes to bear on what is being said. According to our event-related potential results, language comprehension takes very rapid account of the social context, and the construction of meaning based on language alone cannot be separated from the social aspects of language use. The linguistic brain relates the message to the speaker immediately.
  • Van den Brink, D., & Hagoort, P. (2004). The influence of semantic and syntactic context constraints on lexical selection and integration in spoken-word comprehension as revealed by ERPs. Journal of Cognitive Neuroscience, 16(6), 1068-1084. doi:10.1162/0898929041502670.

    Abstract

    An event-related brain potential experiment was carried out to investigate the influence of semantic and syntactic context constraints on lexical selection and integration in spoken-word comprehension. Subjects were presented with constraining spoken sentences that contained a critical word that was either (a) congruent, (b) semantically and syntactically incongruent, but beginning with the same initial phonemes as the congruent critical word, or (c) semantically and syntactically incongruent, beginning with phonemes that differed from the congruent critical word. Relative to the congruent condition, an N200 effect reflecting difficulty in the lexical selection process was obtained in the semantically and syntactically incongruent condition where word onset differed from that of the congruent critical word. Both incongruent conditions elicited a large N400 followed by a left anterior negativity (LAN) time-locked to the moment of word category violation and a P600 effect. These results would best fit within a cascaded model of spoken-word processing, proclaiming an optimal use of contextual information during spoken-word identification by allowing for semantic and syntactic processing to take place in parallel after bottom-up activation of a set of candidates, and lexical integration to proceed with a limited number of candidates that still match the acoustic input.
  • Van Berkum, J. J. A. (2008). Understanding sentences in context: What brain waves can tell us. Current Directions in Psychological Science, 17(6), 376-380. doi:10.1111/j.1467-8721.2008.00609.x.

    Abstract

    Language comprehension looks pretty easy. You pick up a novel and simply enjoy the plot, or ponder the human condition. You strike up a conversation and listen to whatever the other person has to say. Although what you're taking in is a bunch of letters and sounds, what you really perceive—if all goes well—is meaning. But how do you get from one to the other so easily? The experiments with brain waves (event-related brain potentials or ERPs) reviewed here show that the linguistic brain rapidly draws upon a wide variety of information sources, including prior text and inferences about the speaker. Furthermore, people anticipate what might be said about whom, they use heuristics to arrive at the earliest possible interpretation, and if it makes sense, they sometimes even ignore the grammar. Language comprehension is opportunistic, proactive, and, above all, immediately context-dependent.
  • Van Berkum, J. J. A. (1986). De cognitieve psychologie op zoek naar grondslagen. Kennis en Methode: Tijdschrift voor wetenschapsfilosofie en methodologie, X, 348-360.
  • Van Berkum, J. J. A. (1986). Doordacht gevoel: Emoties als informatieverwerking. De Psycholoog, 21(9), 417-423.
  • Van Alphen, P. M., De Bree, E., Gerrits, E., De Jong, J., Wilsenach, C., & Wijnen, F. (2004). Early language development in children with a genetic risk of dyslexia. Dyslexia, 10, 265-288. doi:10.1002/dys.272.

    Abstract

    We report on a prospective longitudinal research programme exploring the connection between language acquisition deficits and dyslexia. The language development profile of children at-risk for dyslexia is compared to that of age-matched controls as well as of children who have been diagnosed with specific language impairment (SLI). The experiments described concern the perception and production of grammatical morphology, categorical perception of speech sounds, phonological processing (non-word repetition), mispronunciation detection, and rhyme detection. The results of each of these indicate that the at-risk children as a group underperform in comparison to the controls, and that, in most cases, they approach the SLI group. It can be concluded that dyslexia most likely has precursors in language development, also in domains other than those traditionally considered conditional for the acquisition of literacy skills. The dyslexia-SLI connection awaits further, particularly qualitative, analyses.
  • Van den Bos, E., & Poletiek, F. H. (2008). Effects of grammar complexity on artificial grammar learning. Memory & Cognition, 36(6), 1122-1131. doi:10.3758/MC.36.6.1122.

    Abstract

    The present study identified two aspects of complexity that have been manipulated in the implicit learning literature and investigated how they affect implicit and explicit learning of artificial grammars. Ten finite state grammars were used to vary complexity. The results indicated that dependency length is more relevant to the complexity of a structure than is the number of associations that have to be learned. Although implicit learning led to better performance on a grammaticality judgment test than did explicit learning, it was negatively affected by increasing complexity: Performance decreased as there was an increase in the number of previous letters that had to be taken into account to determine whether or not the next letter was a grammatical continuation. In particular, the results suggested that implicit learning of higher order dependencies is hampered by the presence of longer dependencies. Knowledge of first-order dependencies was acquired regardless of complexity and learning mode.
  • Van de Geer, J. P., & Levelt, W. J. M. (1963). Detection of visual patterns disturbed by noise: An exploratory study. Quarterly Journal of Experimental Psychology, 15, 192-204. doi:10.1080/17470216308416324.

    Abstract

    An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns) led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight into the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially to patterns with little spread. Second, this shows connections with the factors obtained from analysis of the semantic ratings, e.g. easily disturbed patterns show a large drop in the semantic regularity factor when only a little noise is added.
  • Van Donselaar, W., Koster, M., & Cutler, A. (2005). Exploring the role of lexical stress in lexical recognition. Quarterly Journal of Experimental Psychology, 58A(2), 251-273. doi:10.1080/02724980343000927.

    Abstract

    Three cross-modal priming experiments examined the role of suprasegmental information in the processing of spoken words. All primes consisted of truncated spoken Dutch words. Recognition of visually presented word targets was facilitated by prior auditory presentation of the first two syllables of the same words as primes, but only if they were appropriately stressed (e.g., OKTOBER preceded by okTO-); inappropriate stress, compatible with another word (e.g., OKTOBER preceded by OCto-, the beginning of octopus), produced inhibition. Monosyllabic fragments (e.g., OC-) also produced facilitation when appropriately stressed; if inappropriately stressed, they produced neither facilitation nor inhibition. The bisyllabic fragments that were compatible with only one word produced facilitation to semantically associated words, but inappropriate stress caused no inhibition of associates. The results are explained within a model of spoken-word recognition involving competition between simultaneously activated phonological representations followed by activation of separate conceptual representations for strongly supported lexical candidates; at the level of the phonological representations, activation is modulated by both segmental and suprasegmental information.
  • Van Alphen, P. M., & Smits, R. (2004). Acoustical and perceptual analysis of the voicing distinction in Dutch initial plosives: The role of prevoicing. Journal of Phonetics, 32(4), 455-491. doi:10.1016/j.wocn.2004.05.001.

    Abstract

    Three experiments investigated the voicing distinction in Dutch initial labial and alveolar plosives. The difference between voiced and voiceless Dutch plosives is generally described in terms of the presence or absence of prevoicing (negative voice onset time). Experiment 1 showed, however, that prevoicing was absent in 25% of voiced plosive productions across 10 speakers. The production of prevoicing was influenced by place of articulation of the plosive, by whether the plosive occurred in a consonant cluster or not, and by speaker sex. Experiment 2 was a detailed acoustic analysis of the voicing distinction, which identified several acoustic correlates of voicing. Prevoicing appeared to be by far the best predictor. Perceptual classification data revealed that prevoicing was indeed the strongest cue that listeners use when classifying plosives as voiced or voiceless. In the cases where prevoicing was absent, other acoustic cues influenced classification, such that some of these tokens were still perceived as being voiced. These secondary cues were different for the two places of articulation. We discuss the paradox raised by these findings: although prevoicing is the most reliable cue to the voicing distinction for listeners, it is not reliably produced by speakers.
  • Van Berkum, J. J. A., Brown, C. M., Zwitserlood, P., Kooijman, V., & Hagoort, P. (2005). Anticipating upcoming words in discourse: Evidence from ERPs and reading times. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(3), 443-467. doi:10.1037/0278-7393.31.3.443.

    Abstract

    The authors examined whether people can use their knowledge of the wider discourse rapidly enough to anticipate specific upcoming words as a sentence is unfolding. In an event-related brain potential (ERP) experiment, subjects heard Dutch stories that supported the prediction of a specific noun. To probe whether this noun was anticipated at a preceding indefinite article, stories were continued with a gender-marked adjective whose suffix mismatched the upcoming noun's syntactic gender. Prediction-inconsistent adjectives elicited a differential ERP effect, which disappeared in a no-discourse control experiment. Furthermore, in self-paced reading, prediction-inconsistent adjectives slowed readers down before the noun. These findings suggest that people can indeed predict upcoming words in fluent discourse and, moreover, that these predicted words can immediately begin to participate in incremental parsing operations.
  • Van Heuven, W. J. B., Schriefers, H., Dijkstra, T., & Hagoort, P. (2008). Language conflict in the bilingual brain. Cerebral Cortex, 18(11), 2706-2716. doi:10.1093/cercor/bhn030.

    Abstract

    The large majority of humankind is more or less fluent in 2 or even more languages. This raises the fundamental question how the language network in the brain is organized such that the correct target language is selected at a particular occasion. Here we present behavioral and functional magnetic resonance imaging data showing that bilingual processing leads to language conflict in the bilingual brain even when the bilinguals’ task only required target language knowledge. This finding demonstrates that the bilingual brain cannot avoid language conflict, because words from the target and nontarget languages become automatically activated during reading. Importantly, stimulus-based language conflict was found in brain regions in the LIPC associated with phonological and semantic processing, whereas response-based language conflict was only found in the pre-supplementary motor area/anterior cingulate cortex when language conflict leads to response conflicts.
  • Van Halteren, H., Baayen, R. H., Tweedie, F., Haverkort, M., & Neijt, A. (2005). New machine learning methods demonstrate the existence of a human stylome. Journal of Quantitative Linguistics, 12(1), 65-77. doi:10.1080/09296170500055350.

    Abstract

    Earlier research has shown that established authors can be distinguished by measuring specific properties of their writings, their stylome as it were. Here, we examine writings of less experienced authors. We succeed in distinguishing between these authors with a very high probability, which implies that a stylome exists even in the general population. However, the number of traits needed for so successful a distinction is an order of magnitude larger than assumed so far. Furthermore, traits referring to syntactic patterns prove less distinctive than traits referring to vocabulary, but much more distinctive than expected on the basis of current generativist theories of language learning.
  • Van den Bos, E., & Poletiek, F. H. (2008). Intentional artificial grammar learning: When does it work? European Journal of Cognitive Psychology, 20(4), 793-806. doi:10.1080/09541440701554474.

    Abstract

    Actively searching for the rules of an artificial grammar has often been shown to produce no more knowledge than memorising exemplars without knowing that they have been generated by a grammar. The present study investigated whether this ineffectiveness of intentional learning could be overcome by removing dual task demands and providing participants with more specific instructions. The results only showed a positive effect of learning intentionally for participants specifically instructed to find out which letters are allowed to follow each other. These participants were also unaffected by a salient feature. In contrast, for participants who did not know what kind of structure to expect, intentional learning was not more effective than incidental learning and knowledge acquisition was guided by salience.
  • Van Wingen, G. A., Van Broekhoven, F., Verkes, R. J., Petersson, K. M., Bäckström, T., Buitelaar, J. K., & Fernández, G. (2008). Progesterone selectively increases amygdala reactivity in women. Molecular Psychiatry, 13, 325-333. doi:10.1038/sj.mp.4002030.

    Abstract

    The acute neural effects of progesterone are mediated by its neuroactive metabolites allopregnanolone and pregnanolone. These neurosteroids potentiate the inhibitory actions of γ-aminobutyric acid (GABA). Progesterone is known to produce anxiolytic effects in animals, but recent animal studies suggest that pregnanolone increases anxiety after a period of low allopregnanolone concentration. This effect is potentially mediated by the amygdala and related to the negative mood symptoms in humans that are observed during increased allopregnanolone levels. Therefore, we investigated with functional magnetic resonance imaging (MRI) whether a single progesterone administration to healthy young women in their follicular phase modulates the amygdala response to salient, biologically relevant stimuli. The progesterone administration increased the plasma concentrations of progesterone and allopregnanolone to levels that are reached during the luteal phase and early pregnancy. The imaging results show that progesterone selectively increased amygdala reactivity. Furthermore, functional connectivity analyses indicate that progesterone modulated functional coupling of the amygdala with distant brain regions. These results reveal a neural mechanism by which progesterone may mediate adverse effects on anxiety and mood.
  • Van de Geer, J. P., Levelt, W. J. M., & Plomp, R. (1962). The connotation of musical consonance. Acta Psychologica, 20, 308-319.

    Abstract

    As a preliminary to further research on musical consonance, an exploratory investigation was made of the different modes of judgment of musical intervals. This was done by way of a semantic differential. Subjects rated 23 intervals against 10 scales. In a factor analysis three factors appeared: pitch, evaluation and fusion. The relation between these factors and some physical characteristics has been investigated. The scale consonant-dissonant was shown to be purely evaluative (in opposition to Stumpf's theory). This evaluative connotation is not in accordance with the musicological meaning of consonance. Suggestions to account for this difference have been given.
  • Verhagen, J. (2005). The role of the nonmodal auxiliary 'hebben' in Dutch as a second language. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 109-127.

    Abstract

    The acquisition of non-modal auxiliaries has been assumed to constitute an important step in the acquisition of finiteness in Germanic languages (cf. Jordens/Dimroth 2005, Jordens 2004, Becker 2005). This paper focuses on the role of the auxiliary hebben ('to have') in the acquisition of Dutch as a second language. More specifically, it investigates whether learners' production of hebben is related to their acquisition of two phenomena commonly associated with finiteness, i.e., topicalization and negation. Data are presented from 16 Turkish and 36 Moroccan learners of Dutch who participated in an experiment involving production and imitation tasks. The production data suggest that learners use topicalization and post-verbal negation only after they have learned to produce the auxiliary hebben. The results from the imitation task indicate that learners are more sensitive to topicalization and post-verbal negation in sentences with hebben than in sentences with lexical verbs. Interestingly, this also holds for learners who did not show productive command of hebben in the production tasks. Thus, in general, the results of the experiment provide support for the idea that non-modal auxiliaries are crucial in the acquisition of (certain properties of) finiteness.
  • Verhagen, J. (2005). The role of the nonmodal auxiliary 'hebben' in Dutch as a second language. Toegepaste Taalwetenschap in Artikelen, 73, 41-52.
  • Verhoeven, L., Baayen, R. H., & Schreuder, R. (2004). Orthographic constraints and frequency effects in complex word identification. Written Language and Literacy, 7(1), 49-59.

    Abstract

    In an experimental study we explored the role of word frequency and orthographic constraints in the reading of Dutch bisyllabic words. Although Dutch orthography is highly regular, several deviations from a one-to-one correspondence occur. In polysyllabic words, the grapheme E may represent three different vowels: /ε/, /e/, or /œ/. In the experiment, skilled adult readers were presented with lists of bisyllabic words containing the vowel E in the initial syllable and the same grapheme or another vowel in the second syllable. We expected word frequency to be related to word latency scores. On the basis of general word frequency data, we also expected the interpretation of the initial syllable as a stressed /e/ to be facilitated as compared to the interpretation of an unstressed /œ/. We found a strong negative correlation between word frequency and latency scores. Moreover, for words with E in either syllable we found a preference for a stressed /e/ interpretation, indicating a lexical frequency effect. The results are discussed with reference to a parallel dual-route model of word decoding.
  • Vernes, S. C., Newbury, D. F., Abrahams, B. S., Winchester, L., Nicod, J., Groszer, M., Alarcón, M., Oliver, P. L., Davies, K. E., Geschwind, D. H., Monaco, A. P., & Fisher, S. E. (2008). A functional genetic link between distinct developmental language disorders. New England Journal of Medicine, 359(22), 2337-2345. doi:10.1056/NEJMoa0802828.

    Abstract

    BACKGROUND: Rare mutations affecting the FOXP2 transcription factor cause a monogenic speech and language disorder. We hypothesized that neural pathways downstream of FOXP2 influence more common phenotypes, such as specific language impairment. METHODS: We performed genomic screening for regions bound by FOXP2 using chromatin immunoprecipitation, which led us to focus on one particular gene that was a strong candidate for involvement in language impairments. We then tested for associations between single-nucleotide polymorphisms (SNPs) in this gene and language deficits in a well-characterized set of 184 families affected with specific language impairment. RESULTS: We found that FOXP2 binds to and dramatically down-regulates CNTNAP2, a gene that encodes a neurexin and is expressed in the developing human cortex. On analyzing CNTNAP2 polymorphisms in children with typical specific language impairment, we detected significant quantitative associations with nonsense-word repetition, a heritable behavioral marker of this disorder (peak association, P=5.0x10(-5) at SNP rs17236239). Intriguingly, this region coincides with one associated with language delays in children with autism. CONCLUSIONS: The FOXP2-CNTNAP2 pathway provides a mechanistic link between clinically distinct syndromes involving disrupted language.

    Additional information

    nejm_vernes_2337sa1.pdf
  • Viaro, M., Bercelli, F., & Rossano, F. (2008). Una relazione terapeutica: Il terapeuta allenatore. Connessioni: Rivista di consulenza e ricerca sui sistemi umani, 20, 95-105.
  • Vigliocco, G., Vinson, D. P., Indefrey, P., Levelt, W. J. M., & Hellwig, F. M. (2004). Role of grammatical gender and semantics in German word production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 483-497. doi:10.1037/0278-7393.30.2.483.

    Abstract

    Semantic substitution errors (e.g., saying "arm" when "leg" is intended) are among the most common types of errors occurring during spontaneous speech. It has been shown that grammatical gender of German target nouns is preserved in the errors (E. Marx, 1999). In 3 experiments, the authors explored different accounts of the grammatical gender preservation effect in German. In all experiments, semantic substitution errors were induced using a continuous naming paradigm. In Experiment 1, it was found that gender preservation disappeared when speakers produced bare nouns. Gender preservation was found when speakers produced phrases with determiners marked for gender (Experiment 2) but not when the produced determiners were not marked for gender (Experiment 3). These results are discussed in the context of models of lexical retrieval during production.
  • Voermans, N. C., Petersson, K. M., Daudey, L., Weber, B., Van Spaendonck, K. P., Kremer, H. P. H., & Fernández, G. (2004). Interaction between the Human Hippocampus and the Caudate Nucleus during Route Recognition. Neuron, 43, 427-435. doi:10.1016/j.neuron.2004.07.009.

    Abstract

    Navigation through familiar environments can rely upon distinct neural representations that are related to different memory systems with either the hippocampus or the caudate nucleus at their core. However, it is a fundamental question whether and how these systems interact during route recognition. To address this issue, we combined a functional neuroimaging approach with a naturally occurring, well-controlled human model of caudate nucleus dysfunction (i.e., pre-clinical and early-stage Huntington’s disease). Our results reveal a noncompetitive interaction so that the hippocampus compensates for gradual caudate nucleus dysfunction with a gradual activity increase, maintaining normal behavior. Furthermore, we revealed an interaction between medial temporal and caudate activity in healthy subjects, which was adaptively modified in Huntington patients to allow compensatory hippocampal processing. Thus, the two memory systems contribute in a noncompetitive, cooperative manner to route recognition, which enables the hippocampus to compensate seamlessly for the functional degradation of the caudate nucleus.
  • De Vos, C. (2008). Janger Kolok: de Balinese dovendans. Woord en Gebaar, 12-13.
  • De Vos, C. (2004). Over de biologische functie van taal: Pinker vs. Chomsky. Honours Review, 2(1), 20-25.

    Abstract

    How did the complex language of humans originate? Gradually, through natural selection, because growing grammatical abilities gave humans an evolutionary advantage? Or suddenly, as an unintended by-product or side effect of a genetic mutation, without any adaptive process being involved? In this article I set the arguments of Pinker and Bloom for the first position against the arguments of Chomsky and Gould for the second. I then show that these two extreme positions leave room for other options that merit further investigation. Genetic research in the coming decades, for example, may yield information that makes it necessary to qualify both positions.
  • Wagner, A., & Ernestus, M. (2008). Identification of phonemes: Differences between phoneme classes and the effect of class size. Phonetica, 65(1-2), 106-127. doi:10.1159/000132389.

    Abstract

    This study reports general and language-specific patterns in phoneme identification. In a series of phoneme monitoring experiments, Castilian Spanish, Catalan, Dutch, English, and Polish listeners identified vowel, fricative, and stop consonant targets that are phonemic in all these languages, embedded in nonsense words. Fricatives were generally identified more slowly than vowels, while the speed of identification for stop consonants was highly dependent on the onset of the measurements. Moreover, listeners' response latencies and accuracy in detecting a phoneme correlated with the number of categories within that phoneme's class in the listener's native phoneme repertoire: more native categories slowed listeners down and decreased their accuracy. We excluded the possibility that this effect stems from differences in the frequencies of occurrence of the phonemes in the different languages. Rather, the effect of the number of categories can be explained by general properties of the perception system, which cause language-specific patterns in speech processing.
  • Waller, D., Loomis, J. M., & Haun, D. B. M. (2004). Body-based senses enhance knowledge of directions in large-scale environments. Psychonomic Bulletin & Review, 11(1), 157-163.

    Abstract

    Previous research has shown that inertial cues resulting from passive transport through a large environment do not necessarily facilitate acquiring knowledge about its layout. Here we examine whether the additional body-based cues that result from active movement facilitate the acquisition of spatial knowledge. Three groups of participants learned locations along an 840-m route. One group walked the route during learning, allowing access to body-based cues (i.e., vestibular, proprioceptive, and efferent information). Another group learned by sitting in the laboratory, watching videos made from the first group. A third group watched a specially made video that minimized potentially confusing head-on-trunk rotations of the viewpoint. All groups were tested on their knowledge of directions in the environment as well as on its configural properties. Having access to body-based information reduced pointing error by a small but significant amount. Regardless of the sensory information available during learning, participants exhibited strikingly common biases.
  • Warner, N., Smits, R., McQueen, J. M., & Cutler, A. (2005). Phonological and statistical effects on timing of speech perception: Insights from a database of Dutch diphone perception. Speech Communication, 46(1), 53-72. doi:10.1016/j.specom.2005.01.003.

    Abstract

    We report detailed analyses of a very large database on timing of speech perception collected by Smits et al. (Smits, R., Warner, N., McQueen, J.M., Cutler, A., 2003. Unfolding of phonetic information over time: A database of Dutch diphone perception. J. Acoust. Soc. Am. 113, 563–574). Eighteen listeners heard all possible diphones of Dutch, gated in portions of varying size and presented without background noise. The present report analyzes listeners’ responses across gates in terms of phonological features (voicing, place, and manner for consonants; height, backness, and length for vowels). The resulting patterns for feature perception differ from patterns reported when speech is presented in noise. The data are also analyzed for effects of stress and of phonological context (neighboring vowel vs. consonant); effects of these factors are observed to be surprisingly limited. Finally, statistical effects, such as overall phoneme frequency and transitional probabilities, along with response biases, are examined; these too exercise only limited effects on response patterns. The results suggest highly accurate speech perception on the basis of acoustic information alone.
  • Warner, N., Kim, J., Davis, C., & Cutler, A. (2005). Use of complex phonological patterns in speech processing: Evidence from Korean. Journal of Linguistics, 41(2), 353-387. doi:10.1017/S0022226705003294.

    Abstract

    Korean has a very complex phonology, with many interacting alternations. In a coronal-/i/ sequence, depending on the type of phonological boundary present, alternations such as palatalization, nasal insertion, nasal assimilation, coda neutralization, and intervocalic voicing can apply. This paper investigates how the phonological patterns of Korean affect processing of morphemes and words. Past research on languages such as English, German, Dutch, and Finnish has shown that listeners exploit syllable structure constraints in processing speech and segmenting it into words. The current study shows that in parsing speech, listeners also use much more complex patterns that relate the surface phonological string to various boundaries.
  • Warner, N., Jongman, A., Sereno, J., & Kemps, R. J. J. K. (2004). Incomplete neutralization and other sub-phonemic durational differences in production and perception: Evidence from Dutch. Journal of Phonetics, 32(2), 251-276. doi:10.1016/S0095-4470(03)00032-9.

    Abstract

    Words which are expected to contain the same surface string of segments may, under identical prosodic circumstances, sometimes be realized with slight differences in duration. Some researchers have attributed such effects to differences in the words’ underlying forms (incomplete neutralization), while others have suggested orthographic influence and extremely careful speech as the cause. In this paper, we demonstrate such sub-phonemic durational differences in Dutch, a language which some past research has found not to have such effects. Past literature has also shown that listeners can often make use of incomplete neutralization to distinguish apparent homophones. We extend perceptual investigations of this topic, and show that listeners can perceive even durational differences which are not consistently observed in production. We further show that a difference which is primarily orthographic rather than underlying can also create such durational differences. We conclude that a wide variety of factors, in addition to underlying form, can induce speakers to produce slight durational differences which listeners can also use in perception.
  • Wassenaar, M., & Hagoort, P. (2005). Word-category violations in patients with Broca's aphasia: An ERP study. Brain and Language, 92, 117-137. doi:10.1016/j.bandl.2004.05.011.

    Abstract

    An event-related brain potential experiment was carried out to investigate on-line syntactic processing in patients with Broca’s aphasia. Subjects were visually presented with sentences that were either syntactically correct or contained word-category violations. Three groups of subjects were tested: Broca patients (N=11), non-aphasic patients with a right-hemisphere (RH) lesion (N=9), and healthy age-matched controls (N=15). Both control groups appeared sensitive to the word-category violations, as shown by clear P600/SPS effects. The Broca patients displayed only a strongly reduced and delayed P600/SPS effect. The results are discussed in the context of a lexicalist parsing model. It is concluded that Broca patients are hindered in detecting word-category violations on-line if word-class information is incomplete or becomes available in a delayed fashion.
  • Wassenaar, M., Brown, C. M., & Hagoort, P. (2004). ERP-effects of subject-verb agreement violations in patients with Broca's aphasia. Journal of Cognitive Neuroscience, 16(4), 553-576. doi:10.1162/089892904323057290.

    Abstract

    This article presents electrophysiological data on on-line syntactic processing during auditory sentence comprehension in patients with Broca's aphasia. Event-related brain potentials (ERPs) were recorded from the scalp while subjects listened to sentences that were either syntactically correct or contained violations of subject-verb agreement. Three groups of subjects were tested: Broca patients (n = 10), nonaphasic patients with a right-hemisphere (RH) lesion (n = 5), and healthy age-matched controls (n = 12). The healthy control subjects showed a P600/SPS effect in response to the agreement violations. The nonaphasic patients with an RH lesion showed essentially the same pattern. The overall group of Broca patients did not show this sensitivity. However, the sensitivity was modulated by the severity of the syntactic comprehension impairment: the largest deviation from the standard P600/SPS effect was found in the patients with the relatively more severe syntactic comprehension impairment. In addition, ERPs to tones in a classical tone oddball paradigm were recorded. Like the normal control subjects and the RH patients, the group of Broca patients showed a P300 effect in the tone oddball condition. This indicates that aphasia in itself does not lead to a general reduction in all cognitive ERP effects. It was concluded that deviations from the standard P600/SPS effect in the Broca patients reflected difficulties with the on-line maintenance of number information across clausal boundaries for establishing subject-verb agreement.
  • Weber, A., & Cutler, A. (2004). Lexical competition in non-native spoken-word recognition. Journal of Memory and Language, 50(1), 1-25. doi:10.1016/S0749-596X(03)00105-0.

    Abstract

    Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name (pencil, given target panda) than on less confusable distractors (beetle, given target bottle). English listeners showed no such viewing time difference. The confusability was asymmetric: given pencil as target, panda did not distract more than distinct competitors. Distractors with Dutch names phonologically related to English target names (deksel, ‘lid,’ given target desk) also received longer fixations than distractors with phonologically unrelated names. Again, English listeners showed no differential effect. With the materials translated into Dutch, Dutch listeners showed no activation of the English words (desk, given target deksel). The results motivate two conclusions: native phonemic categories capture second-language input even when stored representations maintain a second-language distinction; and lexical competition is greater for non-native than for native listeners.
  • Weber, K., & Lavric, A. (2008). Syntactic anomaly elicits a lexico-semantic (N400) ERP effect in the second but not in the first language. Psychophysiology, 45(6), 920-925. doi:10.1111/j.1469-8986.2008.00691.x.

    Abstract

    Recent brain potential research into first versus second language (L1 vs. L2) processing revealed striking responses to morphosyntactic features absent in the mother tongue. The aim of the present study was to establish whether the presence of comparable morphosyntactic features in L1 leads to more similar electrophysiological L1 and L2 profiles. ERPs were acquired while German-English bilinguals and native speakers of English read sentences. Some sentences were meaningful and well formed, whereas others contained morphosyntactic or semantic violations in the final word. In addition to the expected P600 component, morphosyntactic violations in L2 but not L1 led to an enhanced N400. This effect may suggest either that resolution of morphosyntactic anomalies in L2 relies on the lexico-semantic system or that the weaker/slower morphological mechanisms in L2 lead to greater sentence wrap-up difficulties known to result in N400 enhancement.
  • Wegener, C. (2005). Major word classes in Savosavo. Grazer Linguistische Studien, 64, 29-52.
  • Widlok, T. (2004). Ethnography in language Documentation. Language Archive Newsletter, 1(3), 4-6.
  • Widlok, T. (2008). Landscape unbounded: Space, place, and orientation in ≠Akhoe Hai//om and beyond. Language Sciences, 30(2/3), 362-380. doi:10.1016/j.langsci.2006.12.002.

    Abstract

    Even before it became commonplace to assume that “the Eskimo have a hundred words for snow”, the languages of hunting and gathering peoples have played an important role in debates about linguistic relativity concerning geographical ontologies. Evidence from languages of hunter-gatherers has been used in radical relativist challenges to the overall notion of a comparative typology of generic natural forms and landscapes as terms of reference. It has been invoked to emphasize a personalized relationship between humans and the non-human world. It is against this background that this contribution discusses the landscape terminology of ≠Akhoe Hai//om, a Khoisan language spoken by “Bushmen” in Namibia. Landscape vocabulary is ubiquitous in ≠Akhoe Hai//om because the landscape plays a critical role in directionals and other forms of “topographical gossip”, and because of mergers between landscape and group terminology. This system of landscape-cum-group terminology is outlined and related to the use of place names in the area.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2008). Seeing and hearing meaning: ERP and fMRI evidence of word versus picture integration into a sentence context. Journal of Cognitive Neuroscience, 20, 1235-1249. doi:10.1162/jocn.2008.20085.

    Abstract

    Understanding language always occurs within a situational context and, therefore, often implies combining streams of information from different domains and modalities. One such combination is that of spoken language and visual information, which are perceived together in a variety of ways during everyday communication. Here we investigate whether and how words and pictures differ in terms of their neural correlates when they are integrated into a previously built-up sentence context. This is assessed in two experiments looking at the time course (measuring event-related potentials, ERPs) and the locus (using functional magnetic resonance imaging, fMRI) of this integration process. We manipulated the ease of semantic integration of word and/or picture to a previous sentence context to increase the semantic load of processing. In the ERP study, an increased semantic load led to an N400 effect which was similar for pictures and words in terms of latency and amplitude. In the fMRI study, we found overlapping activations to both picture and word integration in the left inferior frontal cortex. Specific activations for the integration of a word were observed in the left superior temporal cortex. We conclude that despite obvious differences in representational format, semantic information coming from pictures and words is integrated into a sentence context in similar ways in the brain. This study adds to the growing insight that the language system incorporates (semantic) information coming from linguistic and extralinguistic domains with the same neural time course and by recruitment of overlapping brain areas.
  • Willems, R. M., Oostenveld, R., & Hagoort, P. (2008). Early decreases in alpha and gamma band power distinguish linguistic from visual information during spoken sentence comprehension. Brain Research, 1219, 78-90. doi:10.1016/j.brainres.2008.04.065.

    Abstract

    Language is often perceived together with visual information. This raises the question of how the brain integrates information conveyed in visual and/or linguistic format during spoken language comprehension. In this study we investigated the dynamics of semantic integration of visual and linguistic information by means of time-frequency analysis of the EEG signal. A modified version of the N400 paradigm with either a word or a picture of an object being semantically incongruous with respect to the preceding sentence context was employed. Event-Related Potential (ERP) analysis showed qualitatively similar N400 effects for integration of either word or picture. Time-frequency analysis revealed early specific decreases in alpha and gamma band power for linguistic and visual information, respectively. We argue that these reflect a rapid context-based analysis of acoustic (word) or visual (picture) form information. We conclude that although full semantic integration of linguistic and visual information occurs through a common mechanism, early differences in oscillations in specific frequency bands reflect the format of the incoming information and, importantly, an early context-based detection of its congruity with respect to the preceding language context.
  • Williams, N. M., Williams, H., Majounie, E., Norton, N., Glaser, B., Morris, H. R., Owen, M. J., & O'Donovan, M. C. (2008). Analysis of copy number variation using quantitative interspecies competitive PCR. Nucleic Acids Research, 36(17): e112. doi:10.1093/nar/gkn495.

    Abstract

    Over recent years, small submicroscopic DNA copy-number variants (CNVs) have been highlighted as an important source of variation in the human genome, human phenotypic diversity and disease susceptibility. Consequently, there is a pressing need for the development of methods that allow the efficient, accurate and cheap measurement of genomic copy number polymorphisms in clinical cohorts. We have developed a simple competitive PCR-based method to determine DNA copy number, which uses the entire genome of a single chimpanzee as a competitor, thus eliminating the requirement for competitive sequences to be synthesized for each assay. As a result, only a single reference sample is needed for all assays, and the potential for large numbers of loci to be analysed in multiplex is dramatically increased. In this study we establish proof of concept by accurately detecting previously characterized mutations at the PARK2 locus and then demonstrating the potential of quantitative interspecies competitive PCR (qicPCR) to accurately genotype CNVs in association studies, by analysing chromosome 22q11 deletions in a sample of previously characterized patients and normal controls.
  • Wittenburg, P., Skiba, R., & Trilsbeek, P. (2004). Technology and Tools for Language Documentation. Language Archive Newsletter, 1(4), 3-4.
  • Wittenburg, P., Skiba, R., & Trilsbeek, P. (2005). The language archive at the MPI: Contents, tools, and technologies. Language Archives Newsletter, 5, 7-9.
  • Wittenburg, P. (2004). Training Course in Lithuania. Language Archive Newsletter, 1(2), 6-6.
  • Wittenburg, P. (2008). Die CLARIN Forschungsinfrastruktur. ÖGAI-journal (Österreichische Gesellschaft für Artificial Intelligence), 27, 10-17.
  • Wittenburg, P., Dirksmeyer, R., Brugman, H., & Klaas, G. (2004). Digital formats for images, audio and video. Language Archive Newsletter, 1(1), 3-6.
  • Wittenburg, P. (2004). International Expert Meeting on Access Management for Distributed Language Archives. Language Archive Newsletter, 1(3), 12-12.
  • Wittenburg, P. (2004). Final review of INTERA. Language Archive Newsletter, 1(4), 11-12.
  • Wittenburg, P. (2004). LinguaPax Forum on Language Diversity, Sustainability, and Peace. Language Archive Newsletter, 1(3), 13-13.
  • Wittenburg, P. (2004). LREC conference 2004. Language Archive Newsletter, 1(3), 12-13.
  • Wittenburg, P. (2004). News from the Archive of the Max Planck Institute for Psycholinguistics. Language Archive Newsletter, 1(4), 12-12.
  • Wolters, G., & Poletiek, F. H. (2008). Beslissen over aangiftes van seksueel misbruik bij kinderen. De Psycholoog, 43, 29-29.
  • Li, X., Yang, Y., & Hagoort, P. (2008). Pitch accent and lexical tone processing in Chinese discourse comprehension: An ERP study. Brain Research, 1222, 192-200. doi:10.1016/j.brainres.2008.05.031.

    Abstract

    In the present study, event-related brain potentials (ERP) were recorded to investigate the role of pitch accent and lexical tone in spoken discourse comprehension. Chinese was used as material to explore the potential difference in the nature and time course of brain responses to sentence meaning as indicated by pitch accent and to lexical meaning as indicated by tone. In both cases, the pitch contour of critical words was varied. The results showed that both inconsistent pitch accent and inconsistent lexical tone yielded N400 effects, and there was no interaction between them. The negativity evoked by inconsistent pitch accent had the same topography as that evoked by the inconsistent lexical tone violation, with a maximum over central–parietal electrodes. Furthermore, the effect for the combined violations was the sum of the effects for the pure pitch accent and pure lexical tone violations. However, the effect of the lexical tone violation appeared approximately 90 ms earlier than the effect of the pitch accent violation. This suggests that there might be a correspondence between the neural mechanisms underlying pitch accent and lexical meaning processing in context: both reflect the integration of the current information into a discourse context, independent of whether that information is sentence meaning indicated by accentuation or lexical meaning indicated by tone. In addition, lexical meaning was processed earlier than sentence meaning conveyed by pitch accent during spoken language processing.
  • Zeshan, U. (2004). Interrogative constructions in sign languages - Cross-linguistic perspectives. Language, 80(1), 7-39.

    Abstract

    This article reports on results from a broad crosslinguistic study based on data from thirty-five signed languages around the world. The study is the first of its kind, and the typological generalizations presented here cover the domain of interrogative structures as they appear across a wide range of geographically and genetically distinct signed languages. Manual and nonmanual ways of marking basic types of questions in signed languages are investigated. As a result, it becomes clear that the range of crosslinguistic variation is extensive for some subparameters, such as the structure of question-word paradigms, while other parameters, such as the use of nonmanual expressions in questions, show more similarities across signed languages. Finally, it is instructive to compare the findings from signed language typology to relevant data from spoken languages at a more abstract, crossmodality level.
  • Zeshan, U., Vasishta, M. N., & Sethna, M. (2005). Implementation of Indian Sign Language in educational settings. Asia Pacific Disability Rehabilitation Journal, 16(1), 16-40.

    Abstract

    This article reports on several sub-projects of research and development related to the use of Indian Sign Language in educational settings. In many countries around the world, sign languages are now recognised as the legitimate, full-fledged languages of the deaf communities that use them. In India, the development of sign language resources and their application in educational contexts is still in its initial stages. The work reported on here is the first principled and comprehensive effort to establish educational programmes in Indian Sign Language at a national level. Programmes are of several types: a) Indian Sign Language instruction for hearing people; b) sign language teacher training programmes for deaf people; and c) educational materials for use in schools for the Deaf. The conceptual approach used in the programmes for deaf students is known as bilingual education, which emphasises the acquisition of a first language, Indian Sign Language, alongside the acquisition of spoken languages, primarily in their written form.
  • Zeshan, U. (2004). Hand, head and face - negative constructions in sign languages. Linguistic Typology, 8(1), 1-58. doi:10.1515/lity.2004.003.

    Abstract

    This article presents a typology of negative constructions across a substantial number of sign languages from around the globe. After situating the topic within the wider context of linguistic typology, the main negation strategies found across sign languages are described. Nonmanual negation includes the use of head movements and facial expressions for negation and is of great importance in sign languages as well as particularly interesting from a typological point of view. As far as manual signs are concerned, independent negative particles represent the dominant strategy, but there are also instances of irregular negation in most sign languages. Irregular negatives may take the form of suppletion, cliticisation, affixing, or internal modification of a sign. The results of the study lead to interesting generalisations about similarities and differences between negatives in signed and spoken languages.
  • Zhang, J., Bao, S., Furumai, R., Kucera, K. S., Ali, A., Dean, N. M., & Wang, X.-F. (2005). Protein phosphatase 5 is required for ATR-mediated checkpoint activation. Molecular and Cellular Biology, 25, 9910-9919. doi:10.1128/MCB.25.22.9910-9919.2005.

    Abstract

    In response to DNA damage or replication stress, the protein kinase ATR is activated and subsequently transduces genotoxic signals to cell cycle control and DNA repair machinery through phosphorylation of a number of downstream substrates. Very little is known about the molecular mechanism by which ATR is activated in response to genotoxic insults. In this report, we demonstrate that protein phosphatase 5 (PP5) is required for the ATR-mediated checkpoint activation. PP5 forms a complex with ATR in a genotoxic stress-inducible manner. Interference with the expression or the activity of PP5 leads to impairment of the ATR-mediated phosphorylation of hRad17 and Chk1 after UV or hydroxyurea treatment. Similar results are obtained in ATM-deficient cells, suggesting that the observed defect in checkpoint signaling is the consequence of impaired functional interaction between ATR and PP5. In cells exposed to UV irradiation, PP5 is required to elicit an appropriate S-phase checkpoint response. In addition, loss of PP5 leads to premature mitosis after hydroxyurea treatment. Interestingly, reduced PP5 activity exerts differential effects on the formation of intranuclear foci by ATR and replication protein A, implicating a functional role for PP5 in a specific stage of the checkpoint signaling pathway. Taken together, our results suggest that PP5 plays a critical role in the ATR-mediated checkpoint activation.
  • Zwitserlood, I. (2008). Grammatica-vertaalmethode en nederlandse gebarentaal. Levende Talen Magazine, 95(5), 28-29.
