Publications

  • Holler, J. (2011). Verhaltenskoordination, Mimikry und sprachbegleitende Gestik in der Interaktion. Psychotherapie - Wissenschaft: Special issue: "Sieh mal, wer da spricht" - der Koerper in der Psychotherapie Teil IV, 1(1), 56-64. Retrieved from http://www.psychotherapie-wissenschaft.info/index.php/psy-wis/article/view/13/65.
  • Holman, E. W., Brown, C. H., Wichmann, S., Müller, A., Velupillai, V., Hammarström, H., Sauppe, S., Jung, H., Bakker, D., Brown, P., Belyaev, O., Urban, M., Mailhammer, R., List, J.-M., & Egorov, D. (2011). Automated dating of the world’s language families based on lexical similarity. Current Anthropology, 52(6), 841-875. doi:10.1086/662127.

    Abstract

    This paper describes a computerized alternative to glottochronology for estimating elapsed time since parent languages diverged into daughter languages. The method, developed by the Automated Similarity Judgment Program (ASJP) consortium, is different from glottochronology in four major respects: (1) it is automated and thus is more objective, (2) it applies a uniform analytical approach to a single database of worldwide languages, (3) it is based on lexical similarity as determined from Levenshtein (edit) distances rather than on cognate percentages, and (4) it provides a formula for date calculation that mathematically recognizes the lexical heterogeneity of individual languages, including parent languages just before their breakup into daughter languages. Automated judgments of lexical similarity for groups of related languages are calibrated with historical, epigraphic, and archaeological divergence dates for 52 language groups. The discrepancies between estimated and calibration dates are found to be on average 29% as large as the estimated dates themselves, a figure that does not differ significantly among language families. As a resource for further research that may require dates of known level of accuracy, we offer a list of ASJP time depths for nearly all the world’s recognized language families and for many subfamilies.

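    The core similarity measure referred to in this abstract, the length-normalized Levenshtein (edit) distance between translation-equivalent word forms, can be sketched as follows. This is a minimal illustration in Python with invented word pairs; it is not the ASJP consortium's calibrated dating formula, only the distance measure on which lexical similarity is based.

        # Illustrative sketch only: normalized Levenshtein (edit) distance,
        # the similarity measure underlying ASJP lexical comparison.
        # The word pairs below are invented examples.

        def levenshtein(a: str, b: str) -> int:
            """Classic dynamic-programming edit distance between two strings."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, start=1):
                curr = [i]
                for j, cb in enumerate(b, start=1):
                    cost = 0 if ca == cb else 1
                    curr.append(min(prev[j] + 1,          # deletion
                                    curr[j - 1] + 1,      # insertion
                                    prev[j - 1] + cost))  # substitution
                prev = curr
            return prev[-1]

        def normalized_distance(a: str, b: str) -> float:
            """Length-normalized edit distance (LDN), in [0, 1]."""
            return levenshtein(a, b) / max(len(a), len(b), 1)

        def mean_lexical_distance(pairs):
            """Average LDN over a list of translation-equivalent word pairs."""
            return sum(normalized_distance(a, b) for a, b in pairs) / len(pairs)

        # Hypothetical 3-item word list for two related languages.
        pairs = [("hand", "hant"), ("naxt", "nacht"), ("vasser", "water")]
        print(round(mean_lexical_distance(pairs), 3))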
  • Hoogman, M., Aarts, E., Zwiers, M., Slaats-Willemse, D., Naber, M., Onnink, M., Cools, R., Kan, C., Buitelaar, J., & Franke, B. (2011). Nitric Oxide Synthase genotype modulation of impulsivity and ventral striatal activity in adult ADHD patients and healthy comparison subjects. American Journal of Psychiatry, 168, 1099-1106. doi:10.1176/appi.ajp.2011.10101446.

    Abstract

    Objective: Attention deficit hyperactivity disorder (ADHD) is a highly heritable disorder. The NOS1 gene encoding nitric oxide synthase is a candidate gene for ADHD and has been previously linked with impulsivity. In the present study, the authors investigated the effect of a functional variable number of tandem repeats (VNTR) polymorphism in NOS1 (NOS1 exon 1f-VNTR) on the processing of rewards, one of the cognitive deficits in ADHD. Method: A sample of 136 participants, consisting of 87 adult ADHD patients and 49 healthy comparison subjects, completed a reward-related impulsivity task. A total of 104 participants also underwent functional magnetic resonance imaging during a reward anticipation task. The effect of the NOS1 exon 1f-VNTR genotype on reward-related impulsivity and reward-related ventral striatal activity was examined. Results: ADHD patients had higher impulsivity scores and lower ventral striatal activity than healthy comparison subjects. The association between the short allele and increased impulsivity was confirmed. However, independent of disease status, homozygous carriers of the short allele of NOS1, the ADHD risk genotype, demonstrated higher ventral striatal activity than carriers of the other NOS1 VNTR genotypes. Conclusions: The authors suggest that the NOS1 genotype influences impulsivity and its relation with ADHD is mediated through effects on this behavioral trait. Increased ventral striatal activity related to NOS1 may be compensatory for effects in other brain regions.
  • Howarth, H., Sommer, V., & Jordan, F. (2010). Visual depictions of female genitalia differ depending on source. Medical Humanities, 36, 75-79. doi:10.1136/jmh.2009.003707.

    Abstract

    Very little research has attempted to describe normal human variation in female genitalia, and no studies have compared the visual images that women might use in constructing their ideas of average and acceptable genital morphology to see if there are any systematic differences. Our objective was to determine if visual depictions of the vulva differed according to their source so as to alert medical professionals and their patients to how these depictions might capture variation and thus influence perceptions of "normality". We conducted a comparative analysis by measuring (a) published visual materials from human anatomy textbooks in a university library, (b) feminist publications (both print and online) depicting vulval morphology, and (c) online pornography, focusing on the most visited and freely accessible sites in the UK. Post-hoc tests showed that labial protuberance was significantly less (p < .001, equivalent to approximately 7 mm) in images from online pornography compared to feminist publications. All five measures taken of vulval features were significantly correlated (p < .001) in the online pornography sample, indicating a less varied range of differences in organ proportions than the other sources where not all measures were correlated. Women and health professionals should be aware that specific sources of imagery may depict different types of genital morphology and may not accurately reflect true variation in the population, and consultations for genital surgeries should include discussion about the actual and perceived range of variation in female genital morphology.
  • Hoymann, G. (2010). Questions and responses in ǂĀkhoe Hai||om. Journal of Pragmatics, 42(10), 2726-2740. doi:10.1016/j.pragma.2010.04.008.

    Abstract

    This paper examines ǂĀkhoe Hai||om, a Khoe language of the Khoisan family spoken in Northern Namibia. I document the way questions are posed in natural conversation, the actions the questions are used for and the manner in which they are responded to. I show that in this language speakers rely most heavily on content questions. I also find that speakers of ǂĀkhoe Hai||om address fewer questions to a specific individual than would be expected from prior research on Indo-European languages. Finally, I discuss some possible explanations for these findings.
  • Hribar, A., Haun, D. B. M., & Call, J. (2011). Great apes’ strategies to map spatial relations. Animal Cognition, 14, 511-523. doi:10.1007/s10071-011-0385-6.

    Abstract

    We investigated reasoning about spatial relational similarity in three great ape species: chimpanzees, bonobos, and orangutans. Apes were presented with three spatial mapping tasks in which they were required to find a reward in an array of three cups, after observing a reward being hidden in a different array of three cups. To obtain a food reward, apes needed to choose the cup that was in the same relative position (i.e., on the left) as the baited cup in the other array. The three tasks differed in the constellation of the two arrays. In Experiment 1, the arrays were placed next to each other, forming a line. In Experiment 2, the positioning of the two arrays varied each trial, being placed either one behind the other in two rows, or next to each other, forming a line. Finally, in Experiment 3, the two arrays were always positioned one behind the other in two rows, but misaligned. Results suggested that apes compared the two arrays and recognized that they were similar in some way. However, we believe that instead of mapping the left–left, middle–middle, and right–right cups from each array, they mapped the cups that shared the most similar relations to nearby landmarks (table’s visual boundaries).
  • Huettig, F., & McQueen, J. M. (2011). The nature of the visual environment induces implicit biases during language-mediated visual search. Memory & Cognition, 39, 1068-1084. doi:10.3758/s13421-011-0086-z.

    Abstract

    Four eye-tracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed-word displays and used during language-mediated visual search. Participants listened to sentences containing target words which were similar semantically or in shape to concepts invoked by concurrently-displayed printed words. In Experiment 1 the displays contained semantic and shape competitors of the targets, and two unrelated words. There were significant shifts in eye gaze as targets were heard towards semantic but not shape competitors. In Experiments 2-4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eye-tracking task. In all cases there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed-word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases towards particular modes of processing during language-mediated visual search.
  • Huettig, F., Rommers, J., & Meyer, A. S. (2011). Using the visual world paradigm to study language processing: A review and critical evaluation. Acta Psychologica, 137, 151-171. doi:10.1016/j.actpsy.2010.11.003.

    Abstract

    We describe the key features of the visual world paradigm and review the main research areas where it has been used. In our discussion we highlight that the paradigm provides information about the way language users integrate linguistic information with information derived from the visual environment. Therefore the paradigm is well suited to study one of the key issues of current cognitive psychology, namely the interplay between linguistic and visual information processing. However, conclusions about linguistic processing (e.g., about activation, competition, and timing of access of linguistic representations) in the absence of relevant visual information must be drawn with caution.
  • Huettig, F., & Altmann, G. T. M. (2005). Word meaning and the control of eye fixation: Semantic competitor effects and the visual world paradigm. Cognition, 96(1), B23-B32. doi:10.1016/j.cognition.2004.10.003.

    Abstract

    When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84–107; Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632–1634]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word ‘piano’ when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c), both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word ‘piano’ unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.
  • Huettig, F., Chen, J., Bowerman, M., & Majid, A. (2010). Do language-specific categories shape conceptual processing? Mandarin classifier distinctions influence eye gaze behavior, but only during linguistic processing. Journal of Cognition and Culture, 10(1/2), 39-58. doi:10.1163/156853710X497167.

    Abstract

    In two eye-tracking studies we investigated the influence of Mandarin numeral classifiers - a grammatical category in the language - on online overt attention. Mandarin speakers were presented with simple sentences through headphones while their eye-movements to objects presented on a computer screen were monitored. The crucial question is what participants look at while listening to a pre-specified target noun. If classifier categories influence Mandarin speakers' general conceptual processing, then on hearing the target noun they should look at objects that are members of the same classifier category - even when the classifier is not explicitly present (cf. Huettig & Altmann, 2005). The data show that when participants heard a classifier (e.g., ba3, Experiment 1) they shifted overt attention significantly more to classifier-match objects (e.g., chair) than to distractor objects. But when the classifier was not explicitly presented in speech, overt attention to classifier-match objects and distractor objects did not differ (Experiment 2). This suggests that although classifier distinctions do influence eye-gaze behavior, they do so only during linguistic processing of that distinction and not in moment-to-moment general conceptual processing.
  • Huettig, F., & Hartsuiker, R. J. (2010). Listening to yourself is like listening to others: External, but not internal, verbal self-monitoring is based on speech perception. Language and Cognitive Processes, 3, 347-374. doi:10.1080/01690960903046926.

    Abstract

    Theories of verbal self-monitoring generally assume an internal (pre-articulatory) monitoring channel, but there is debate about whether this channel relies on speech perception or on production-internal mechanisms. Perception-based theories predict that listening to one's own inner speech has similar behavioral consequences as listening to someone else's speech. Our experiment therefore registered eye-movements while speakers named objects accompanied by phonologically related or unrelated written words. The data showed that listening to one's own speech drives eye-movements to phonologically related words, just as listening to someone else's speech does in perception experiments. The time-course of these eye-movements was very similar to that in other-perception (starting 300 ms post-articulation), which demonstrates that these eye-movements were driven by the perception of overt speech, not inner speech. We conclude that external, but not internal monitoring, is based on speech perception.
  • Huettig, F., & Altmann, G. (2011). Looking at anything that is green when hearing ‘frog’: How object surface colour and stored object colour knowledge influence language-mediated overt attention. Quarterly Journal of Experimental Psychology, 64(1), 122-145. doi:10.1080/17470218.2010.481474.

    Abstract

    Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.
  • Huettig, F., Olivers, C. N. L., & Hartsuiker, R. J. (2011). Looking, language, and memory: Bridging research from the visual world and visual search paradigms. Acta Psychologica, 137, 138-150. doi:10.1016/j.actpsy.2010.07.013.

    Abstract

    In the visual world paradigm as used in psycholinguistics, eye gaze (i.e. visual orienting) is measured in order to draw conclusions about linguistic processing. However, current theories are underspecified with respect to how visual attention is guided on the basis of linguistic representations. In the visual search paradigm as used within the area of visual attention research, investigators have become more and more interested in how visual orienting is affected by higher order representations, such as those involved in memory and language. Within this area more specific models of orienting on the basis of visual information exist, but they need to be extended with mechanisms that allow for language-mediated orienting. In the present paper we review the evidence from these two different – but highly related – research areas. We arrive at a model in which working memory serves as the nexus in which long-term visual as well as linguistic representations (i.e. types) are bound to specific locations (i.e. tokens or indices). The model predicts that the interaction between language and visual attention is subject to a number of conditions, such as the presence of the guiding representation in working memory, capacity limitations, and cognitive control mechanisms.
  • Huettig, F., Singh, N., & Mishra, R. K. (2011). Language-mediated visual orienting behavior in low and high literates. Frontiers in Psychology, 2: e285. doi:10.3389/fpsyg.2011.00285.

    Abstract

    The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look and listen task (cf. Huettig & Altmann, 2005) which resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., 'magar', crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., 'matar', peas; a semantic competitor, e.g., 'kachuwa', turtle, and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze towards phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates in contrast only used phonological information when semantic matches between spoken word and visual referent were impossible (Experiment 2) but in contrast to high literates these phonologically-mediated shifts in eye gaze were not closely time-locked to the speech input. We conclude that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but instead of participating in a tug-of-war among multiple types of cognitive representations, word-object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate counterparts.
  • Hulten, A., Laaksonen, H., Vihla, M., Laine, M., & Salmelin, R. (2010). Modulation of brain activity after learning predicts long-term memory for words. Journal of Neuroscience, 30(45), 15160-15164. doi:10.1523/JNEUROSCI.1278-10.2010.

    Abstract

    The acquisition and maintenance of new language information, such as picking up new words, is a critical human ability that is needed throughout the life span. Most likely you learned the word “blog” quite recently as an adult, whereas the word “kipe,” which in the 1970s denoted stealing, now seems unfamiliar. Brain mechanisms underlying the long-term maintenance of new words have remained unknown, albeit they could provide important clues to the considerable individual differences in the ability to remember words. After successful training of a set of novel object names we tracked, over a period of 10 months, the maintenance of this new vocabulary in 10 human participants by repeated behavioral tests and magnetoencephalography measurements of overt picture naming. When naming-related activation in the left frontal and temporal cortex was enhanced 1 week after training, compared with the level at the end of training, the individual retained a good command of the new vocabulary at 10 months; vice versa, individuals with reduced activation at 1 week posttraining were less successful in recalling the names at 10 months. This finding suggests an individual neural marker for memory, in the context of language. Learning is not over when the acquisition phase has been successfully completed: neural events during the access to recently established word representations appear to be important for the long-term outcome of learning.
  • Indefrey, P., & Levelt, W. J. M. (1999). A meta-analysis of neuroimaging experiments on word production. Neuroimage, 7, 1028.
  • Indefrey, P., Kleinschmidt, A., Merboldt, K.-D., Krüger, G., Brown, C. M., Hagoort, P., & Frahm, J. (1997). Equivalent responses to lexical and nonlexical visual stimuli in occipital cortex: a functional magnetic resonance imaging study. Neuroimage, 5, 78-81. doi:10.1006/nimg.1996.0232.

    Abstract

    Stimulus-related changes in cerebral blood oxygenation were measured using high-resolution functional magnetic resonance imaging sequentially covering visual occipital areas in contiguous sections. During dynamic imaging, healthy subjects silently viewed pseudowords, single false fonts, or length-matched strings of the same false fonts. The paradigm consisted of a sixfold alternation of an activation and a control task. With pseudowords as activation vs single false fonts as control, responses were seen mainly in medial occipital cortex. These responses disappeared when pseudowords were alternated with false font strings as the control and reappeared when false font strings instead of pseudowords served as activation and were alternated with single false fonts. The string-length contrast alone, therefore, is sufficient to account for the activation pattern observed in medial visual cortex when word-like stimuli are contrasted with single characters.
  • Indefrey, P., & Gullberg, M. (2010). Foreword. Language Learning, 60(S2), v. doi:10.1111/j.1467-9922.2010.00596.x.

    Abstract

    The articles in this volume are the result of an invited conference entitled "The Earliest Stages of Language Learning" held at the Max Planck Institute for Psycholinguistics in Nijmegen, The Netherlands, in October 2009.
  • Indefrey, P., & Gullberg, M. (2010). The earliest stages of language learning: Introduction. Language Learning, 60(S2), 1-4. doi:10.1111/j.1467-9922.2010.00597.x.
  • Indefrey, P. (1999). Some problems with the lexical status of nondefault inflection. Behavioral and Brain Sciences, 22(6), 1025. doi:10.1017/S0140525X99342229.

    Abstract

    Clahsen's characterization of nondefault inflection as based exclusively on lexical entries does not capture the full range of empirical data on German inflection. In the verb system differential effects of lexical frequency seem to be input-related rather than affecting morphological production. In the noun system, the generalization properties of -n and -e plurals exceed mere analogy-based productivity.
  • Indefrey, P. (2011). The spatial and temporal signatures of word production components: a critical update. Frontiers in Psychology, 2(255): 255. doi:10.3389/fpsyg.2011.00255.

    Abstract

    In the first decade of neurocognitive word production research the predominant approach was brain mapping, i.e., investigating the regional cerebral brain activation patterns correlated with word production tasks, such as picture naming and word generation. Indefrey and Levelt (2004) conducted a comprehensive meta-analysis of word production studies that used this approach and combined the resulting spatial information on neural correlates of component processes of word production with information on the time course of word production provided by behavioral and electromagnetic studies. In recent years, neurocognitive word production research has seen a major change toward a hypothesis-testing approach. This approach is characterized by the design of experimental variables modulating single component processes of word production and testing for predicted effects on spatial or temporal neurocognitive signatures of these components. This change was accompanied by the development of a broader spectrum of measurement and analysis techniques. The article reviews the findings of recent studies using the new approach. The time course assumptions of Indefrey and Levelt (2004) have largely been confirmed, requiring only minor adaptations. Adaptations of the brain structure/function relationships proposed by Indefrey and Levelt (2004) include the precise role of subregions of the left inferior frontal gyrus as well as a probable, yet to date unclear role of the inferior parietal cortex in word production.
  • Ingason, A., Giegling, I., Cichon, S., Hansen, T., Rasmussen, H. B., Nielsen, J., Jurgens, G., Muglia, P., Hartmann, A. M., Strengman, E., Vasilescu, C., Muhleisen, T. W., Djurovic, S., Melle, I., Lerer, B., Möller, H.-J., Francks, C., Pietilainen, O. P. H., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Walshe, M., Vassos, E., Di Forti, M., Murray, R., Bonetto, C., Tosato, S., Cantor, R. M., Rietschel, M., Craddock, N., Owen, M. J., Andreassen, O. A., Nothen, M. M., Peltonen, L., St. Clair, D., Ophoff, R. A., O’Donovan, M. C., Collier, D. A., Werge, T., & Rujescu, D. (2010). A large replication study and meta-analysis in European samples provides further support for association of AHI1 markers with schizophrenia. Human Molecular Genetics, 19(7), 1379-1386. doi:10.1093/hmg/ddq009.

    Abstract

    The Abelson helper integration site 1 (AHI1) gene locus on chromosome 6q23 is among a group of candidate loci for schizophrenia susceptibility that were initially identified by linkage followed by linkage disequilibrium mapping, and subsequent replication of the association in an independent sample. Here, we present results of a replication study of AHI1 locus markers, previously implicated in schizophrenia, in a large European sample (in total 3907 affected and 7429 controls). Furthermore, we perform a meta-analysis of the implicated markers in 4496 affected and 18,920 controls. Both the replication study of new samples and the meta-analysis show evidence for significant overrepresentation of all tested alleles in patients compared with controls (meta-analysis; P = 8.2 × 10^-5 to 1.7 × 10^-3, common OR = 1.09-1.11). The region contains two genes, AHI1 and C6orf217, and both genes, as well as the neighbouring phosphodiesterase 7B (PDE7B), may be considered candidates for involvement in the genetic aetiology of schizophrenia.
  • Ingason, A., Rujescu, D., Cichon, S., Sigurdsson, E., Sigmundsson, T., Pietilainen, O. P. H., Buizer-Voskamp, J. E., Strengman, E., Francks, C., Muglia, P., Gylfason, A., Gustafsson, O., Olason, P. I., Steinberg, S., Hansen, T., Jakobsen, K. D., Rasmussen, H. B., Giegling, I., Möller, H.-J., Hartmann, A., Crombie, C., Fraser, G., Walker, N., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Bramon, E., Kiemeney, L. A., Franke, B., Murray, R., Vassos, E., Toulopoulou, T., Mühleisen, T. W., Tosato, S., Ruggeri, M., Djurovic, S., Andreassen, O. A., Zhang, Z., Werge, T., Ophoff, R. A., Rietschel, M., Nöthen, M. M., Petursson, H., Stefansson, H., Peltonen, L., Collier, D., Stefansson, K., & St Clair, D. M. (2011). Copy number variations of chromosome 16p13.1 region associated with schizophrenia. Molecular Psychiatry, 16, 17-25. doi:10.1038/mp.2009.101.

    Abstract

    Deletions and reciprocal duplications of the chromosome 16p13.1 region have recently been reported in several cases of autism and mental retardation (MR). As genomic copy number variants found in these two disorders may also associate with schizophrenia, we examined 4345 schizophrenia patients and 35 079 controls from 8 European populations for duplications and deletions at the 16p13.1 locus, using microarray data. We found a threefold excess of duplications and deletions in schizophrenia cases compared with controls, with duplications present in 0.30% of cases versus 0.09% of controls (P=0.007) and deletions in 0.12 % of cases and 0.04% of controls (P>0.05). The region can be divided into three intervals defined by flanking low copy repeats. Duplications spanning intervals I and II showed the most significant (P=0.00010) association with schizophrenia. The age of onset in duplication and deletion carriers among cases ranged from 12 to 35 years, and the majority were males with a family history of psychiatric disorders. In a single Icelandic family, a duplication spanning intervals I and II was present in two cases of schizophrenia, and individual cases of alcoholism, attention deficit hyperactivity disorder and dyslexia. Candidate genes in the region include NTAN1 and NDE1. We conclude that duplications and perhaps also deletions of chromosome 16p13.1, previously reported to be associated with autism and MR, also confer risk of schizophrenia.
  • Jackson, C., & Roberts, L. (2010). Animacy affects the processing of subject–object ambiguities in the second language: Evidence from self-paced reading with German second language learners of Dutch. Applied Psycholinguistics, 31(4), 671-691. doi:10.1017/S0142716410000196.

    Abstract

    The results of a self-paced reading study with German second language (L2) learners of Dutch showed that noun animacy affected the learners' on-line commitments when comprehending relative clauses in their L2. Earlier research has found that German L2 learners of Dutch do not show an on-line preference for subject–object word order in temporarily ambiguous relative clauses when no disambiguating material is available prior to the auxiliary verb. We investigated whether manipulating the animacy of the ambiguous noun phrases would push the learners to make an on-line commitment to either a subject- or object-first analysis. Results showed they performed like Dutch native speakers in that their reading times reflected an interaction between topichood and animacy in the on-line assignment of grammatical roles.
  • Janse, E., De Bree, E., & Brouwer, S. (2010). Decreased sensitivity to phonemic mismatch in spoken word processing in adult developmental dyslexia. Journal of Psycholinguistic Research, 39(6), 523-539. doi:10.1007/s10936-010-9150-2.

    Abstract

    Initial lexical activation in typical populations is a direct reflection of the goodness of fit between the presented stimulus and the intended target. In this study, lexical activation was investigated upon presentation of polysyllabic pseudowords (such as procodile for crocodile) for the atypical population of dyslexic adults to see to what extent mismatching phonemic information affects lexical activation in the face of overwhelming support for one specific lexical candidate. Results of an auditory lexical decision task showed that sensitivity to phonemic mismatch was less in the dyslexic population, compared to the respective control group. However, the dyslexic participants were outperformed by their controls only for word-initial mismatches. It is argued that a subtle speech decoding deficit affects lexical activation levels and makes spoken word processing less robust against distortion.
  • Janse, E. (2005). Neighbourhood density effects in auditory nonword processing in aphasia. Brain and Language, 95, 24-25. doi:10.1016/j.bandl.2005.07.027.
  • Janse, E. (2010). Spoken word processing and the effect of phonemic mismatch in aphasia. Aphasiology, 24(1), 3-27. doi:10.1080/02687030802339997.

    Abstract

    Background: There is evidence that, unlike in typical populations, initial lexical activation upon hearing spoken words in aphasic patients is not a direct reflection of the goodness of fit between the presented stimulus and the intended target. Earlier studies have mainly used short monosyllabic target words. Short words are relatively difficult to recognise because they are not highly redundant: changing one phoneme will often result in a (similar-sounding) different word. Aims: The present study aimed to investigate sensitivity of the lexical recognition system in aphasia. The focus was on longer words that contain more redundancy, to investigate whether aphasic adults might be impaired in deactivation of strongly activated lexical candidates. This was done by studying lexical activation upon presentation of spoken polysyllabic pseudowords (such as procodile) to see to what extent mismatching phonemic information leads to deactivation in the face of overwhelming support for one specific lexical candidate. Methods & Procedures: Speeded auditory lexical decision was used to investigate response time and accuracy to pseudowords with a word-initial or word-final phonemic mismatch in 21 aphasic patients and in an age-matched control group. Outcomes & Results: Results of an auditory lexical decision task showed that aphasic participants were less sensitive to phonemic mismatch if there was strong evidence for one particular lexical candidate, compared to the control group. Classifications of patients as Broca's vs Wernicke's or as fluent vs non-fluent did not reveal differences in sensitivity to mismatch between aphasia types. There was no reliable relationship between measures of auditory verbal short-term memory and lexical decision performance. Conclusions: It is argued that the aphasic results can best be viewed as lexical “overactivation” and that a verbal short-term memory account is less appropriate.
  • Janse, E., & Ernestus, M. (2011). The roles of bottom-up and top-down information in the recognition of reduced speech: Evidence from listeners with normal and impaired hearing. Journal of Phonetics, 39(3), 330-343. doi:10.1016/j.wocn.2011.03.005.
  • Janzen, G., & Hawlik, M. (2005). Orientierung im Raum: Befunde zu Entscheidungspunkten. Zeitschrift für Psychologie, 213, 179-186.
  • Järvikivi, J., Vainio, M., & Aalto, D. (2010). Real-time correlates of phonological quantity reveal unity of tonal and non-tonal languages. Plos One, 5(9), e12603. doi:10.1371/journal.pone.0012603.

    Abstract

    Discrete phonological phenomena form our conscious experience of language: continuous changes in pitch appear as distinct tones to the speakers of tone languages, whereas the speakers of quantity languages experience duration categorically. The categorical nature of our linguistic experience is directly reflected in the traditionally clear-cut linguistic classification of languages into tonal or non-tonal. However, some evidence suggests that duration and pitch are fundamentally interconnected and co-vary in signaling word meaning in non-tonal languages as well. We show that pitch information affects real-time language processing in a (non-tonal) quantity language. The results suggest that there is no unidirectional causal link from a genetically-based perceptual sensitivity towards pitch information to the appearance of a tone language. They further suggest that the contrastive categories tone and quantity may be based on simultaneously co-varying properties of the speech signal and the processing system, even though the conscious experience of the speakers may highlight only one discrete variable at a time.
  • Jesse, A., & Massaro, D. W. (2010). Seeing a singer helps comprehension of the song's lyrics. Psychonomic Bulletin & Review, 17, 323-328.

    Abstract

    When listening to speech, we often benefit when also seeing the speaker talk. If this benefit is not domain-specific for speech, then the recognition of sung lyrics should likewise benefit from seeing the singer. Nevertheless, previous research failed to obtain a substantial improvement in that domain. Our study shows that this failure was not due to inherent differences between singing and speaking but rather to less informative visual presentations. By presenting a professional singer, we found a substantial audiovisual benefit of about 35% improvement for lyrics recognition. This benefit was further robust across participants, phrases, and repetition of the test materials. Our results provide the first evidence that lyrics recognition just like speech and music perception is a multimodal process.
  • Jesse, A., & McQueen, J. M. (2011). Positional effects in the lexical retuning of speech perception. Psychonomic Bulletin & Review, 18, 943-950. doi:10.3758/s13423-011-0129-2.

    Abstract

    Listeners use lexical knowledge to adjust to speakers’ idiosyncratic pronunciations. Dutch listeners learn to interpret an ambiguous sound between /s/ and /f/ as /f/ if they hear it word-finally in Dutch words normally ending in /f/, but as /s/ if they hear it in normally /s/-final words. Here, we examined two positional effects in lexically guided retuning. In Experiment 1, ambiguous sounds during exposure always appeared in word-initial position (replacing the first sounds of /f/- or /s/-initial words). No retuning was found. In Experiment 2, the same ambiguous sounds always appeared word-finally during exposure. Here, retuning was found. Lexically guided perceptual learning thus appears to emerge reliably only when lexical knowledge is available as the to-be-tuned segment is initially being processed. Under these conditions, however, lexically guided retuning was position independent: It generalized across syllabic positions. Lexical retuning can thus benefit future recognition of particular sounds wherever they appear in words.
  • Jesse, A., & Massaro, D. W. (2010). The temporal distribution of information in audiovisual spoken-word identification. Attention, Perception & Psychophysics, 72(1), 209-225. doi:10.3758/APP.72.1.209.

    Abstract

    In the present study, we examined the distribution and processing of information over time in auditory and visual speech as it is used in unimodal and bimodal word recognition. English consonant-vowel-consonant words representing all possible initial consonants were presented as auditory, visual, or audiovisual speech in a gating task. The distribution of information over time varied across and within features. Visual speech information was generally fully available early during the phoneme, whereas auditory information was still accumulated. An audiovisual benefit was therefore already found early during the phoneme. The nature of the audiovisual recognition benefit changed, however, as more of the phoneme was presented. More features benefited at short gates rather than at longer ones. Visual speech information plays, therefore, a more important role early during the phoneme rather than later. The results of the study showed the complex interplay of information across modalities and time, since this is essential in determining the time course of audiovisual spoken-word recognition.
  • Johnson, E., McQueen, J. M., & Huettig, F. (2011). Toddlers’ language-mediated visual search: They need not have the words for it. The Quarterly Journal of Experimental Psychology, 64, 1672-1682. doi:10.1080/17470218.2011.594165.

    Abstract

    Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing “strawberry”? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual–conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.
  • Johnson, E. K., & Huettig, F. (2011). Eye movements during language-mediated visual search reveal a strong link between overt visual attention and lexical processing in 36-month-olds. Psychological Research, 75, 35-42. doi:10.1007/s00426-010-0285-4.

    Abstract

    The nature of children’s early lexical processing was investigated by asking what information 36-month-olds access and use when instructed to find a known but absent referent. Children readily retrieved stored knowledge about characteristic color, i.e. when asked to find an object with a typical color (e.g. strawberry), children tended to fixate more upon an object that had the same (e.g. red plane) as opposed to a different (e.g. yellow plane) color. They did so regardless of the fact that they have had plenty of time to recognize the pictures for what they are, i.e. planes not strawberries. These data represent the first demonstration that language-mediated shifts of overt attention in young children can be driven by individual stored visual attributes of known words that mismatch on most other dimensions. The finding suggests that lexical processing and overt attention are strongly linked from an early age.
  • Johnson, E. K. (2005). English-learning infants' representations of word-forms with iambic stress. Infancy, 7(1), 95-105. doi:10.1207/s15327078in0701_8.

    Abstract

    Retaining detailed representations of unstressed syllables is a logical prerequisite for infants' use of probabilistic phonotactics to segment iambic words from fluent speech. The head-turn preference study was used to investigate the nature of English-learners' representations of iambic word onsets. Fifty-four 10.5-month-olds were familiarized to passages containing the nonsense iambic word forms ginome and tupong. Following familiarization, infants were either tested on familiar (ginome and tupong) or near-familiar (pinome and bupong) versus unfamiliar (kidar and mafoos) words. Infants in the familiar test group (familiar vs. unfamiliar) oriented significantly longer to familiar than unfamiliar test items, whereas infants in the near-familiar test group (near-familiar vs. unfamiliar) oriented equally long to near-familiar and unfamiliar test items. Our results provide evidence that infants retain fairly detailed representations of unstressed syllables and therefore support the hypothesis that infants use phonotactic cues to find words in fluent speech.
  • Johnson, J. S., Sutterer, D. W., Acheson, D. J., Lewis-Peacock, J. A., & Postle, B. R. (2011). Increased alpha-band power during the retention of shapes and shape-location associations in visual short-term memory. Frontiers in Psychology, 2(128), 1-9. doi:10.3389/fpsyg.2011.00128.

    Abstract

    Studies exploring the role of neural oscillations in cognition have revealed sustained increases in alpha-band (∼8–14 Hz) power during the delay period of delayed-recognition short-term memory tasks. These increases have been proposed to reflect the inhibition, for example, of cortical areas representing task-irrelevant information, or of potentially interfering representations from previous trials. Another possibility, however, is that elevated delay-period alpha-band power (DPABP) reflects the selection and maintenance of information, rather than, or in addition to, the inhibition of task-irrelevant information. In the present study, we explored these possibilities using a delayed-recognition paradigm in which the presence and task relevance of shape information was systematically manipulated across trial blocks and electroencephalography was used to measure alpha-band power. In the first trial block, participants remembered locations marked by identical black circles. The second block featured the same instructions, but locations were marked by unique shapes. The third block featured the same stimulus presentation as the second, but with pretrial instructions indicating, on a trial-by-trial basis, whether memory for shape or location was required, the other dimension being irrelevant. In the final block, participants remembered the unique pairing of shape and location for each stimulus. Results revealed minimal DPABP in each of the location-memory conditions, whether locations were marked with identical circles or with unique task-irrelevant shapes. In contrast, alpha-band power increases were observed in both the shape-memory condition, in which location was task irrelevant, and in the critical final condition, in which both shape and location were task relevant. These results provide support for the proposal that alpha-band oscillations reflect the retention of shape information and/or shape–location associations in short-term memory.
  • Johnson, E. K., Westrek, E., Nazzi, T., & Cutler, A. (2011). Infant ability to tell voices apart rests on language experience. Developmental Science, 14(5), 1002-1011. doi:10.1111/j.1467-7687.2011.01052.x.

    Abstract

    A visual fixation study tested whether seven-month-olds can discriminate between different talkers. The infants were first habituated to talkers producing sentences in either a familiar or unfamiliar language, then heard test sentences from previously unheard speakers, either in the language used for habituation, or in another language. When the language at test mismatched that in habituation, infants always noticed the change. When language remained constant and only talker altered, however, infants detected the change only if the language was the native tongue. Adult listeners with a different native tongue than the infants did not reproduce the discriminability patterns shown by the infants, and infants detected neither voice nor language changes in reversed speech; both these results argue against explanation of the native-language voice discrimination in terms of acoustic properties of the stimuli. The ability to identify talkers is, like many other perceptual abilities, strongly influenced by early life experience.
  • Johnson, E. K., & Tyler, M. (2010). Testing the limits of statistical learning for word segmentation. Developmental Science, 13, 339-345. doi:10.1111/j.1467-7687.2009.00886.x.

    Abstract

    Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants’ ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language. We do so by testing both 5.5- and 8-month-olds’ ability to segment an artificial language containing four words of uniform length (all CVCV) or four words of varying length (two CVCV, two CVCVCV). The transitional probability cues to word boundaries were held equal across the two languages. Both age groups segmented the language containing words of uniform length, demonstrating that even 5.5-month-olds are extremely sensitive to the conditional probabilities in their environment. However, neither age group succeeded in segmenting the language containing words of varying length, despite the fact that the transitional probability cues defining word boundaries were equally strong in the two languages. We conclude that infants’ statistical learning abilities may not be as robust as earlier studies have suggested.
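    The transitional-probability statistic referred to in this abstract, TP(XY) = frequency(XY) / frequency(X) for adjacent syllables X and Y, can be sketched as follows. This is a minimal illustration in Python with an invented syllable stream; it is not the authors' stimuli or analysis code.

        # Illustrative sketch only: forward transitional probability between
        # adjacent syllables, TP(XY) = frequency(XY) / frequency(X).
        # The syllable stream below is an invented example.
        from collections import Counter

        def transitional_probabilities(syllables):
            """Return TP for every adjacent syllable pair in the stream."""
            unigrams = Counter(syllables[:-1])                 # first member of each pair
            bigrams = Counter(zip(syllables, syllables[1:]))   # adjacent pairs
            return {pair: count / unigrams[pair[0]] for pair, count in bigrams.items()}

        # Hypothetical stream built from two "words" (tu-pi-ro and go-la-bu).
        stream = "tu pi ro go la bu tu pi ro tu pi ro go la bu go la bu tu pi ro go la bu".split()
        for pair, tp in sorted(transitional_probabilities(stream).items(), key=lambda kv: -kv[1]):
            print(pair, round(tp, 2))
        # Within-word pairs (e.g., ('tu', 'pi')) come out at TP = 1.0, while
        # across-word pairs get lower values, marking likely word boundaries.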
  • Jolink, A. (2005). Finite linking in normally developing Dutch children and children with specific language impairment. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 61-81.
  • Jones, C. R., Pickles, A., Falcaro, M., Marsden, A. J., Happé, F., Scott, S. K., Sauter, D., Tregay, J., Phillips, R. J., Baird, G., Simonoff, E., & Charman, T. (2011). A multimodal approach to emotion recognition ability in autism spectrum disorders. Journal of Child Psychology and Psychiatry, 52(3), 275-285. doi:10.1111/j.1469-7610.2010.02328.x.

    Abstract

    Background: Autism spectrum disorders (ASD) are characterised by social and communication difficulties in day-to-day life, including problems in recognising emotions. However, experimental investigations of emotion recognition ability in ASD have been equivocal; hampered by small sample sizes, narrow IQ range and over-focus on the visual modality. Methods: We tested 99 adolescents (mean age 15;6 years, mean IQ 85) with an ASD and 57 adolescents without an ASD (mean age 15;6 years, mean IQ 88) on a facial emotion recognition task and two vocal emotion recognition tasks (one verbal; one non-verbal). Recognition of happiness, sadness, fear, anger, surprise and disgust were tested. Using structural equation modelling, we conceptualised emotion recognition ability as a multimodal construct, measured by the three tasks. We examined how the mean levels of recognition of the six emotions differed by group (ASD vs. non-ASD) and IQ (>= 80 vs. < 80). Results: There was no significant difference between groups for the majority of emotions and analysis of error patterns suggested that the ASD group were vulnerable to the same pattern of confusions between emotions as the non-ASD group. However, recognition ability was significantly impaired in the ASD group for surprise. IQ had a strong and significant effect on performance for the recognition of all six emotions, with higher IQ adolescents outperforming lower IQ adolescents. Conclusions: The findings do not suggest a fundamental difficulty with the recognition of basic emotions in adolescents with ASD.
  • Jordan, F. (2011). A phylogenetic analysis of the evolution of Austronesian sibling terminologies. Human Biology, 83, 297-321. doi:10.3378/027.083.0209.

    Abstract

    Social structure in human societies is underpinned by the variable expression of ideas about relatedness between different types of kin. We express these ideas through language in our kin terminology: to delineate who is kin and who is not, and to attach meanings to the types of kin labels associated with different individuals. Cross-culturally, there is a regular and restricted range of patterned variation in kin terminologies, and to date, our understanding of this diversity has been hampered by inadequate techniques for dealing with the hierarchical relatedness of languages (Galton’s Problem). Here I use maximum-likelihood and Bayesian phylogenetic comparative methods to begin to tease apart the processes underlying the evolution of kin terminologies in the Austronesian language family, focusing on terms for siblings. I infer (1) the probable ancestral states and (2) evolutionary models of change for the semantic distinctions of relative age (older/younger sibling) and relative sex (same sex/opposite-sex). Analyses show that early Austronesian languages contained the relative-age, but not the relative-sex distinction; the latter was reconstructed firmly only for the ancestor of Eastern Malayo-Polynesian languages. Both distinctions were best characterized by evolutionary models where the gains and losses of the semantic distinctions were equally likely. A multi-state model of change examined how the relative-sex distinction could be elaborated and found that some transitions in kin terms were not possible: jumps from absence to heavily elaborated were very unlikely, as was piece-wise dismantling of elaborate distinctions. Cultural ideas about what types of kin distinctions are important can be embedded in the semantics of language; using a phylogenetic evolutionary framework we can understand how those distinctions in meaning change through time.
  • Jordan, F., & Dunn, M. (2010). Kin term diversity is the result of multilevel, historical processes [Comment on Doug Jones]. Behavioral and Brain Sciences, 33, 388. doi:10.1017/S0140525X10001962.

    Abstract

    Explanations in the domain of kinship can be sought on several different levels: Jones addresses online processing, as well as issues of origins and innateness. We argue that his framework can more usefully be applied at the levels of developmental and historical change, the latter especially. A phylogenetic approach to the diversity of kinship terminologies is most urgently required.
  • Jordens, P. (1997). Introducing the basic variety. Second Language Research, 13(4), 289-300. doi:10.1191/026765897672176425.
  • Kelly, S., Byrne, K., & Holler, J. (2011). Raising the stakes of communication: Evidence for increased gesture production as predicted by the GSA framework. Information, 2(4), 579-593. doi:10.3390/info2040579.

    Abstract

    Theorists of language have argued that co-speech hand gestures are an intentional part of social communication. The present study provides evidence for these claims by showing that speakers adjust their gesture use according to their perceived relevance to the audience. Participants were asked to read about items that were and were not useful in a wilderness survival scenario, under the pretense that they would then explain (on camera) what they learned to one of two different audiences. For one audience (a group of college students in a dormitory orientation activity), the stakes of successful communication were low; for the other audience (a group of students preparing for a rugged camping trip in the mountains), the stakes were high. In their explanations to the camera, participants in the high stakes condition produced three times as many representational gestures and spent three times as much time gesturing as participants in the low stakes condition. This study extends previous research by showing that the anticipated consequences of one’s communication, namely the degree to which information may be useful to an intended recipient, influence speakers’ use of gesture.
  • Kelly, S. D., Ozyurek, A., & Maris, E. (2010). Two sides of the same coin: Speech and gesture mutually interact to enhance comprehension. Psychological Science, 21, 260-267. doi:10.1177/0956797609357327.

    Abstract

    Gesture and speech are assumed to form an integrated system during language production. Based on this view, we propose the integrated‐systems hypothesis, which explains two ways in which gesture and speech are integrated—through mutual and obligatory interactions—in language comprehension. Experiment 1 presented participants with action primes (e.g., someone chopping vegetables) and bimodal speech and gesture targets. Participants related primes to targets more quickly and accurately when they contained congruent information (speech: “chop”; gesture: chop) than when they contained incongruent information (speech: “chop”; gesture: twist). Moreover, the strength of the incongruence affected processing, with fewer errors for weak incongruities (speech: “chop”; gesture: cut) than for strong incongruities (speech: “chop”; gesture: twist). Crucial for the integrated‐systems hypothesis, this influence was bidirectional. Experiment 2 demonstrated that gesture’s influence on speech was obligatory. The results confirm the integrated‐systems hypothesis and demonstrate that gesture and speech form an integrated system in language comprehension.
  • Kempen, G. (1990). Een slordig gestapeld servies [Review of the book Tranen van de krokodil by Piet Vroon]. Intermediair, 26(17), 67-69.
  • Kempen, G. (1999). Fiets en (centri)fuge. Onze Taal, 68, 88.
  • Kempen, G. (1990). Microcomputers en cognitiewetenschap. SURF: Tijdschrift over Computerdienstverlening in het Hoger Onderwijs en Onderzoek, 4(3), 2.
  • Kempen, G., & Jongen-Janner, E. (1990). Naar een flexibele methode voor algoritmisch grammatica- en spellingonderwijs. Pedagogisch Tijdschrift, 15, 280-289.
  • Kempen, G. (1990). Representation in memory: Volume 2, chapter 8, pp. 511–587 by David E. Rumelhart and Donald A. Norman [Book review]. Acta Psychologica, 75, 191-192. doi:10.1016/0001-6918(90)90107-Q.
  • Kempen, G. (1990). Taaltechnologie en de toekomst van tekstautomatisering. Informatie, 32, 724-727.
  • Kempen, G. (1984). Taaltechnologie voor het Nederlands: Vorderingen bij de bouw van een Nederlandstalig dialoog- en auteursysteem. Toegepaste Taalwetenschap in Artikelen, 19, 48-58.
  • Kempen, G., Konst, L., & De Smedt, K. (1984). Taaltechnologie voor het Nederlands: Vorderingen bij de bouw van een Nederlandstalig dialoog- en auteursysteem. Informatie, 26, 878-881.
  • Kempen, G. (1988). Preface. Acta Psychologica, 69(3), 205-206. doi:10.1016/0001-6918(88)90032-7.
  • Kempen, G. (1997). Van taalbarrières naar linguïstische snelwegen: Inrichting van een technische taalinfrastructuur voor het Nederlands. Grenzen aan veeltaligheid: Taalgebruik en bestuurlijke doeltreffendheid in de instellingen van de Europese Unie, 43-48.
  • Kemps, R. J. J. K., Wurm, L. H., Ernestus, M., Schreuder, R., & Baayen, R. H. (2005). Prosodic cues for morphological complexity in Dutch and English. Language and Cognitive Processes, 20(1/2), 43-73. doi:10.1080/01690960444000223.

    Abstract

    Previous work has shown that Dutch listeners use prosodic information in the speech signal to optimise morphological processing: Listeners are sensitive to prosodic differences between a noun stem realised in isolation and a noun stem realised as part of a plural form (in which the stem is followed by an unstressed syllable). The present study, employing a lexical decision task, provides an additional demonstration of listeners' sensitivity to prosodic cues in the stem. This sensitivity is shown for two languages that differ in morphological productivity: Dutch and English. The degree of morphological productivity does not correlate with listeners' sensitivity to prosodic cues in the stem, but it is reflected in differential sensitivities to the word-specific log odds ratio of encountering an unshortened stem (i.e., a stem in isolation) versus encountering a shortened stem (i.e., a stem followed by a suffix consisting of one or more unstressed syllables). In addition to being sensitive to the prosodic cues themselves, listeners are also sensitive to the probabilities of occurrence of these prosodic cues.
  • Kemps, R. J. J. K., Ernestus, M., Schreuder, R., & Baayen, R. H. (2005). Prosodic cues for morphological complexity: The case of Dutch plural nouns. Memory & Cognition, 33(3), 430-446.

    Abstract

    It has recently been shown that listeners use systematic differences in vowel length and intonation to resolve ambiguities between onset-matched simple words (Davis, Marslen-Wilson, & Gaskell, 2002; Salverda, Dahan, & McQueen, 2003). The present study shows that listeners also use prosodic information in the speech signal to optimize morphological processing. The precise acoustic realization of the stem provides crucial information to the listener about the morphological context in which the stem appears and attenuates the competition between stored inflectional variants. We argue that listeners are able to make use of prosodic information, even though the speech signal is highly variable within and between speakers, by virtue of the relative invariance of the duration of the onset. This provides listeners with a baseline against which the durational cues in a vowel and a coda can be evaluated. Furthermore, our experiments provide evidence for item-specific prosodic effects.
  • Keune, K., Ernestus, M., Van Hout, R., & Baayen, R. H. (2005). Variation in Dutch: From written "mogelijk" to spoken "mok". Corpus Linguistics and Linguistic Theory, 1(2), 183-223. doi:10.1515/cllt.2005.1.2.183.

    Abstract

    In Dutch, high-frequency words with the suffix -lijk are often highly reduced in spontaneous unscripted speech. This study addressed socio-geographic variation in the reduction of such words against the backdrop of the variation in their use in written and spoken Dutch. Multivariate analyses of the frequencies with which the words were used in a factorially contrasted set of subcorpora revealed significant variation involving the speaker's country, sex, and education level for spoken Dutch, and involving country and register for written Dutch. Acoustic analyses revealed that Dutch men reduced most often, while Flemish highly educated women reduced least. Two linguistic context effects emerged, one prosodic, and the other pertaining to the flow of information. Words in sentence-final position showed less reduction, while words that were better predictable from the preceding word in the sentence (based on mutual information) tended to be reduced more often. The increased probability of reduction for forms that are more predictable in context, combined with the loss of the suffix in the more extremely reduced forms, suggests that high-frequency words in -lijk are undergoing a process of erosion that causes them to gravitate towards monomorphemic function words.
  • Kidd, E., Stewart, A. J., & Serratrice, L. (2011). Children do not overcome lexical biases where adults do: The role of the referential scene in garden-path recovery. Journal of Child Language, 38(1), 222-234. doi:10.1017/s0305000909990316.

    Abstract

    In this paper we report on a visual world eye-tracking experiment that investigated the differing abilities of adults and children to use referential scene information during reanalysis to overcome lexical biases during sentence processing. The results showed that adults incorporated aspects of the referential scene into their parse as soon as it became apparent that a test sentence was syntactically ambiguous, suggesting they considered the two alternative analyses in parallel. In contrast, the children appeared not to reanalyze their initial analysis, even over shorter distances than have been investigated in prior research. We argue that this reflects the children's over-reliance on bottom-up, lexical cues to interpretation. The implications for the development of parsing routines are discussed.
  • Kidd, E., Kemp, N., & Quinn, S. (2011). Did you have a choccie bickie this arvo? A quantitative look at Australian hypocoristics. Language Sciences, 33(3), 359-368. doi:10.1016/j.langsci.2010.11.006.

    Abstract

    This paper considers the use and representation of Australian hypocoristics (e.g., choccie → chocolate, arvo → afternoon). One-hundred-and-fifteen adult speakers of Australian English aged 17–84 years generated as many tokens of hypocoristics as they could in 10 min. The resulting corpus was analysed along a number of dimensions in an attempt to identify (i) general age- and gender-related trends in hypocoristic knowledge and use, and (ii) linguistic properties of each hypocoristic class. Following Bybee’s (1985, 1995) lexical network approach, we conclude that Australian hypocoristics are the product of the same linguistic processes that capture other inflectional morphological processes.
  • Kidd, E., & Bavin, E. L. (2005). Lexical and referential cues to sentence interpretation: An investigation of children's interpretations of ambiguous sentences. Journal of Child Language, 32(4), 855-876. doi:10.1017/S0305000905007051.

    Abstract

    This paper reports on an investigation of children's (aged 3;5–9;8) comprehension of sentences containing ambiguity of prepositional phrase (PP) attachment. Results from a picture selection study (N=90) showed that children use verb semantics and preposition type to resolve the ambiguity, with older children also showing sensitivity to the definiteness of the object NP as a cue to interpretation. Study 2 investigated three- and five-year-old children's (N=47) ability to override an instrumental interpretation of ambiguous PPs in order to process attributes of the referential scene. The results showed that while five-year-olds are capable of incorporating aspects of the referential scene into their interpretations, three-year-olds are not as successful. Overall, the results suggest that children are attuned very early to the lexico-semantic co-occurrences that have been shown to aid ambiguity resolution in adults, but that more diffuse cues to interpretation are used only later in development.
  • Kidd, E., Lieven, E., & Tomasello, M. (2010). Lexical frequency and exemplar-based learning effects in language acquisition: evidence from sentential complements. Language Sciences, 32(1), 132-142. doi:10.1016/j.langsci.2009.05.002.

    Abstract

    Usage-based approaches to language acquisition argue that children acquire the grammar of their target language using general-cognitive learning principles. The current paper reports on an experiment that tested a central assumption of the usage-based approach: argument structure patterns are connected to high frequency verbs that facilitate acquisition. Sixty children (N = 60) aged 4- and 6-years participated in a sentence recall/lexical priming experiment that manipulated the frequency with which the target verbs occurred in the finite sentential complement construction in English. The results showed that the children performed better on sentences that contained high frequency verbs. Furthermore, the children’s performance suggested that their knowledge of finite sentential complements relies most heavily on one particular verb – think, supporting arguments made by Goldberg [Goldberg, A.E., 2006. Constructions at Work: The Nature of Generalization in Language. Oxford University Press, Oxford], who argued that skewed input facilitates language learning.
  • Kidd, E., & Kirjavainen, M. (2011). Investigating the contribution of procedural and declarative memory to the acquisition of past tense morphology: Evidence from Finnish. Language and Cognitive Processes, 26(4-6), 794-829. doi:10.1080/01690965.2010.493735.

    Abstract

    The present paper reports on a study that investigated the role of procedural and declarative memory in the acquisition of Finnish past tense morphology. Two competing models were tested. Ullman's (2004) declarative/procedural model predicts that procedural memory supports the acquisition of regular morphology, whereas declarative memory supports the acquisition of irregular morphology. In contrast, single-route approaches predict that declarative memory should support lexical learning, which in turn should predict morphological acquisition. One-hundred and twenty-four (N=124) monolingual Finnish-speaking children aged 4;0–6;7 completed tests of procedural and declarative memory, tests of vocabulary knowledge and nonverbal ability, and a test of past tense knowledge. The results best supported the single-route approach, suggesting that this account best extends to languages that possess greater morphological complexity than English.
  • Kidd, E., Rogers, P., & Rogers, C. (2010). The personality correlates of adults who had imaginary companions in childhood. Psychological Reports, 107(1), 163-172. doi:10.2466/02.04.10.pr0.107.4.163-172.

    Abstract

    Two studies showed that adults who reported having an imaginary companion as a child differed from adults who did not on certain personality dimensions. The first yielded a higher mean on the Gough Creative Personality Scale for the group who had imaginary companions. Study 2 showed that such adults scored higher on the Achievement and Absorption subscales of Tellegen's Multidimensional Personality Questionnaire. The results suggest that some differences reported in the developmental literature may be observed in adults.
  • Kita, S. (1997). Two-dimensional semantic analysis of Japanese mimetics. Linguistics, 35, 379-415. doi:10.1515/ling.1997.35.2.379.
  • Klein, W., & Dimroth, C. (Eds.). (2005). Spracherwerb [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 140.
  • Klein, W. (2005). Über den Nutzen naturwissenschaftlicher Denkmodelle für die Geisteswissenschaften. Debatte, 2, 45-50.
  • Klein, W. (2005). Vom Sprachvermögen zum Sprachlichen System. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 8-39.
  • Klein, W. (2005). Wie ist eine exakte Wissenschaft von der Literatur möglich? Zeitschrift für Literaturwissenschaft und Linguistik, 137, 80-100.
  • Klein, W. (1990). A theory of language acquisition is not so easy. Studies in Second Language Acquisition, 12, 219-231. doi:10.1017/S0272263100009104.
  • Klein, W., & Musan, R. (Eds.). (1999). Das deutsche Perfekt [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (113).
  • Klein, W., & Meibauer, J. (2011). Einleitung. LiLi, Zeitschrift für Literaturwissenschaft und Linguistik, 41(162), 5-8.

    Abstract

    When the adults named some object and at the same time turned toward it, I perceived this, and I grasped that the object was signified by the sounds they uttered, since they meant to point it out. This, however, I gathered from their gestures, the natural language of all peoples, the language that, through the play of facial expressions and of the eyes, through the movements of the limbs and the tone of the voice, indicates the affections of the soul when it desires, holds on to, rejects, or flees from something. Thus I gradually learned to understand what things were signified by the words that I heard uttered again and again, in their set places in various sentences. And when my mouth had grown accustomed to these signs, I used them to express my wishes. (Augustine, Confessiones I, 8)

    This is the quotation of a quotation: at the beginning of the Philosophical Investigations, Ludwig Wittgenstein cites this passage from Augustine's Confessions, in which Augustine describes how, as he remembers it, he learned his mother tongue (Wittgenstein gives the Latin text and then his own translation; only the latter is quoted here). The passage forms the starting point for Wittgenstein's famous reflections on how human language works and for his idea of the language game. Now, we do not know how accurately Augustine really remembers, or whether he simply constructed all of this, like so much that has since been said and written about language acquisition, in the belief that this is how it must have been. But unlike so much that has since been said and written about language acquisition, it is wonderfully formulated and contains two points which, to this day, have often not been seen in scientific research, if not outright disputed, and which, where they have been seen, have not really been taken seriously: A. We learn language in everyday communication with our social environment. B. In order to learn a language, it is not enough to hear that language; rather, we need a wealth of accompanying information, such as, here, the adults' gestures and facial expressions. One would actually like to take both for granted.

    Herodotus tells the famous story of Pharaoh Psammetichus, who wanted to know what the first and true language of humankind was and ordered two newborns to be raised without anyone speaking to them; the first word they uttered, so Herodotus reports, sounded like the Phrygian word for bread, and so it was assumed that the original language of humankind was Phrygian. In this picture of language acquisition, input from the social environment plays a role only insofar as the true language, present from birth, can be displaced by another: children who grow up in an English-speaking environment do not speak the original language. This theory is now considered obsolete. Yet in its assessment of the relative weight of the linguistic knowledge that is there from the start and of what must be drawn from the social environment, it is not so far removed from some more recent theories of language acquisition: in the Chomskyan idea of Universal Grammar, the theoretical foundation of a substantial part of modern language acquisition research, "language" is mainly something innate, to that extent the same for all humans and independent of the particular input.

    What the child, or in second language acquisition the adult learner, encounters of language in its environment is not used to derive particular regularities and to appropriate them; the input functions rather as a kind of external trigger for knowledge that is already latently present. This certainly does not hold for learning the vocabulary: it cannot be innate that the moon is called luna. For other areas of language, however, the extent of what is innate is very much disputed. On this way of thinking, point A above does not hold. Most modern language acquisition researchers assign considerably greater weight to the input: we copy the characteristic properties of a particular linguistic system by analysing the input in order to derive the regularities underlying it. The input confronts us in the form of sequences of sound (or gestures, and later written signs) that others who command the system use for communicative purposes. Learners must break these sound sequences down into smaller units, assign meanings to them, and probe them for the regularities according to which they can be combined into more complex expressions. This, and much else, is what the language faculty innate to humans accomplishes; no other species can do it (you can play as much Chinese to a horse as you like, it will not learn it). But we could not do it either if all we had were the sound. If, in a variation of Psammetichus's experiment, one were to lock someone in a room, play Chinese to them day in and day out, and otherwise look after them well, they would not learn it, whether as a child or as an adult. Perhaps they would discover some structural properties of the stream of sound, but even after years they would not know Chinese. One needs the stream of sound as the perceptible expression of the underlying language, and one needs all the information that can be drawn from the particular speech situation or from one's already existing knowledge of other kinds.

    Augustine radically simplified both; but in principle he is right, and one should therefore expect language acquisition research to take this into account. It rarely does. Insofar as it steps out of the shell of theory at all and looks at the actual course of language acquisition, it largely concentrates on what the children themselves say – extensive corpora serve this purpose – or it investigates in experimental settings how children understand, or fail to understand, particular words or structures. That too, when done well, is highly informative. But the actual processing of the input in its double sense – sound waves and parallel information – is rarely placed at the centre of interest. This leads to peculiar distortions. Language acquisition research, for instance, looks above all at declarative main clauses. A not insignificant part of what children hear, however, consists of imperatives ("Do that!", "Don't do that!"). Such imperatives normally have no subject. An intelligent child must therefore conclude that German, in a not insignificant part of its grammatical structures, is a "pro-drop language", that is, a language in which the subject can be omitted. No linguist would hit upon this idea; but it corresponds to the actual facts, and this is reflected in the input that the child has to process.

    This issue is concerned with a language acquisition situation in which – unlike, say, a conversation at the breakfast table – the input in its double form can be surveyed well, without the situation being unnatural and remote from the normal learning environment, as in a controlled experiment: looking at, reading aloud, and reading children's books. One can think of such a situation as a natural extension of what Augustine describes: the children hear what the adults say, and their attention is directed to particular things while they listen and look – only here it is not a matter of single words but of complex expressions and of complex yet still manageable accompanying information. Children's books have indeed played a role in language acquisition research. There, however – whether as a pure sequence of pictures, with text, or even as text alone – they mostly serve only as a kind of template for the children's own language production: the children are to derive a story from the template and tell it in their own words. The best-known, but by no means the only, example are the "frog stories" initiated by Michael Bamberg, Ruth Berman and Dan Slobin in the 1980s – retellings of a simple picture story that are now available in numerous languages and have yielded many insights into the most diverse aspects of developing language mastery, from inflectional morphology to text structure. That is good and sensible; but one would really have to go a step further, namely to watch, as if through a microscope, how children derive their regularities from the interaction. This would substantially enrich our ideas about the course of language acquisition and the principles by which it proceeds, and perhaps place them on an entirely new footing.

    The contributions to this issue provide a number of examples of this, of which only one small but particularly striking one shall be mentioned. There are numerous analyses based on picture stories that examine how children name a particular person or thing in ongoing discourse – whether, for example, they can correctly use definite and indefinite nominal expressions (ein Junge – der Junge), lexical or pronominal noun phrases (der Junge – er), or even empty elements (der Junge wacht auf und 0 schaut nach seinem Hund). The picture that research offers today of this essential part of language mastery is anything but uniform. Views on when the definite-indefinite distinction is mastered, for instance, span most of childhood, depending on which studies one consults. The paper by Katrin Dammann-Thedens makes clear that children at a certain age are often not at all aware that a particular person or thing in successive pictures is one and the same – even if it looks similar – and, on closer inspection, that is indeed no trivial question. These observations cast an entirely new light on the idea of referential continuity in discourse and its expression through nominal expressions such as those just mentioned. Perhaps we have entirely mistaken ideas about how children understand the accompanying information – here supplied by the pictures of a story – and thereby process it for language acquisition. Such observations are, to begin with, isolated points: not answers, but pointers to things that must be taken into account. But their analysis, and more generally a closer look at what actually goes on when children look at children's books, may lead us to a much deeper understanding of what really happens when a language is acquired.
  • Klein, W. (1990). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 20(78), 7-8.
  • Klein, W., & Winkler, S. (2010). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 158, 5-7.
  • Klein, W. (1988). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 18(69), 7-8.
  • Klein, W. (1990). Comments on the papers by Bierwisch and Zwicky. Yearbook of Morphology, 3, 217-221.
  • Klein, W., & Winkler, S. (Eds.). (2010). Ambiguität [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 40(158).
  • Klein, W. (2005). Hoe is een exacte literatuurwetenschap mogelijk? Parmentier, 14(1), 48-65.
  • Klein, W. (Ed.). (2005). Nicht nur Literatur [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 137.
  • Klein, W. (1997). Learner varieties are the normal case. The Clarion, 3, 4-6.
  • Klein, W. (1997). Nobels Vermächtnis, oder die Wandlungen des Idealischen. Zeitschrift für Literaturwissenschaft und Linguistik, 107, 6-18.

    Abstract

    Nobel's legacy, or the metamorphosis of what is idealistic. Ever since the first Nobel prize in literature was awarded to Prudhomme in 1901, the decisions of the Swedish Academy have been subject to criticism. What is surprising in the changing decision policy as well as in its criticism is the fact that Alfred Nobel's original intentions are hardly ever taken into account: the Nobel prize is a philanthropic prize; it is not meant to select and honour the most eminent literary work but the work with maximal benefit to human beings. What is even more surprising is the fact that no one seems to care that the donor's last will is regularly broken.
  • Klein, W. (2010). On times and arguments. Linguistics, 48, 1221-1253. doi:10.1515/LING.2010.040.

    Abstract

    Verbs are traditionally assumed to have an “argument structure”, which imposes various constraints on form and meaning of the noun phrases that go with the verb, and an “event structure”, which defines certain temporal characteristics of the “event” to which the verb relates. In this paper, I argue that these two structures should be brought together. The verb assigns descriptive properties to one or more arguments at one or more temporal intervals, hence verbs have an “argument-time structure”. This argument-time structure as well as the descriptive properties connected to it can be modified by various morphological and syntactic operations. This approach allows a relatively simple analysis of familiar but not well-defined temporal notions such as tense, aspect and Aktionsart. This will be illustrated for English. It will be shown that a few simple morphosyntactic operations on the argument-time structure might account for form and meaning of the perfect, the progressive, the passive and related constructions.
  • Klein, W. (Ed.). (1997). Technologischer Wandel in den Philologien [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (106).
  • Klein, W. (Ed.). (1984). Textverständlichkeit - Textverstehen [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (55).
  • Klein, W., & Perdue, C. (1997). The basic variety (or: Couldn't natural languages be much simpler?). Second Language Research, 13, 301-347. doi:10.1191/026765897666879396.

    Abstract

    In this article, we discuss the implications of the fact that adult second language learners (outside the classroom) universally develop a well-structured, efficient and simple form of language–the Basic Variety (BV). Three questions are asked as to (1) the structural properties of the BV, (2) the status of these properties and (3) why some structural properties of ‘fully fledged’ languages are more complex. First, we characterize the BV in four respects: its lexical repertoire, the principles according to which utterances are structured, and the way temporality and spatiality are expressed. The organizational principles proposed are small in number, and interact. We analyse this interaction, describing how the BV is put to use in various complex verbal tasks, in order to establish both what its communicative potentialities are, and also those discourse contexts where the constraints come into conflict and where the variety breaks down. This latter phenomenon provides a partial answer to the third question, concerning the relative complexity of ‘fully fledged’ languages–they have devices to deal with such cases. As for the second question, it is argued firstly that the empirically established continuity of the adult acquisition process precludes any assignment of the BV to a mode of linguistic expression (e.g., ‘protolanguage’) distinct from that of ‘fully fledged’ languages and, moreover, that the organizational constraints of the BV belong to the core attributes of the human language capacity, whereas a number of complexifications not attested in the BV are less central properties of this capacity. Finally, it is shown that the notion of feature strength, as used in recent versions of Generative Grammar, allows a straightforward characterization of the BV as a special case of an I-language, in the sense of this theory. Under this perspective, the acquisition of an I-language beyond the BV can essentially be described as a change in feature strength.
  • Klein, W. (Ed.). (1988). Sprache Kranker [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (69).
  • Klein, W. (1988). Sprache und Krankheit: Ein paar Anmerkungen. Zeitschrift für Literaturwissenschaft und Linguistik, 69, 9-20.
  • Klein, W. (Ed.). (1990). Sprache und Raum [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (78).
  • Klein, W., & Meibauer, J. (Eds.). (2011). Spracherwerb und Kinderliteratur [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 162.
  • Klein, W. (1999). Wie sich das deutsche Perfekt zusammensetzt. Zeitschrift für Literaturwissenschaft und Linguistik, (113), 52-85.
  • Klein, W., & Schlieben-Lange, B. (Eds.). (1990). Zukunft der Sprache [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (79).
  • Klein, W. (1990). Überall und nirgendwo: Subjektive und objektive Momente in der Raumreferenz. Zeitschrift für Literaturwissenschaft und Linguistik, 78, 9-42.
  • Koenigs, M., Acheson, D. J., Barbey, A. K., Soloman, J., Postle, B. R., & Grafman, J. (2011). Areas of left perisylvian cortex mediate auditory-verbal short-term memory. Neuropsychologia, 49(13), 3612-3619. doi:10.1016/j.neuropsychologia.2011.09.013.

    Abstract

    A contentious issue in memory research is whether verbal short-term memory (STM) depends on a neural system specifically dedicated to the temporary maintenance of information, or instead relies on the same brain areas subserving the comprehension and production of language. In this study, we examined a large sample of adults with acquired brain lesions to identify the critical neural substrates underlying verbal STM and the relationship between verbal STM and language processing abilities. We found that patients with damage to selective regions of left perisylvian cortex – specifically the inferior frontal and posterior temporal sectors – were impaired on auditory–verbal STM performance (digit span), as well as on tests requiring the production and/or comprehension of language. These results support the conclusion that verbal STM and language processing are mediated by the same areas of left perisylvian cortex.

  • Kokal, I., Engel, A., Kirschner, S., & Keysers, C. (2011). Synchronized drumming enhances activity in the caudate and facilitates prosocial commitment - If the rhythm comes easily. PLoS One, 6(11), e27272. doi:10.1371/journal.pone.0027272.

    Abstract

    Why does chanting, drumming or dancing together make people feel united? Here we investigate the neural mechanisms underlying interpersonal synchrony and its subsequent effects on prosocial behavior among synchronized individuals. We hypothesized that areas of the brain associated with the processing of reward would be active when individuals experience synchrony during drumming, and that these reward signals would increase prosocial behavior toward this synchronous drum partner. 18 female non-musicians were scanned with functional magnetic resonance imaging while they drummed a rhythm, in alternating blocks, with two different experimenters: one drumming in-synchrony and the other out-of-synchrony relative to the participant. In the last scanning part, which served as the experimental manipulation for the following prosocial behavioral test, one of the experimenters drummed with one half of the participants in-synchrony and with the other out-of-synchrony. After scanning, this experimenter "accidentally" dropped eight pencils, and the number of pencils collected by the participants was used as a measure of prosocial commitment. Results revealed that participants who mastered the novel rhythm easily before scanning showed increased activity in the caudate during synchronous drumming. The same area also responded to monetary reward in a localizer task with the same participants. The activity in the caudate during experiencing synchronous drumming also predicted the number of pencils the participants later collected to help the synchronous experimenter of the manipulation run. In addition, participants collected more pencils to help the experimenter when she had drummed in-synchrony than out-of-synchrony during the manipulation run. By showing an overlap in activated areas during synchronized drumming and monetary reward, our findings suggest that interpersonal synchrony is related to the brain's reward system.
  • Kooijman, V., Hagoort, P., & Cutler, A. (2005). Electrophysiological evidence for prelinguistic infants' word recognition in continuous speech. Cognitive Brain Research, 24(1), 109-116. doi:10.1016/j.cogbrainres.2004.12.009.

    Abstract

    Children begin to talk at about age one. The vocabulary they need to do so must be built on perceptual evidence and, indeed, infants begin to recognize spoken words long before they talk. Most of the utterances infants hear, however, are continuous, without pauses between words, so constructing a vocabulary requires them to decompose continuous speech in order to extract the individual words. Here, we present electrophysiological evidence that 10-month-old infants recognize two-syllable words they have previously heard only in isolation when these words are presented anew in continuous speech. Moreover, they only need roughly the first syllable of the word to begin doing this. Thus, prelinguistic infants command a highly efficient procedure for segmentation and recognition of spoken words in the absence of an existing vocabulary, allowing them to tackle effectively the problem of bootstrapping a lexicon out of the highly variable, continuous speech signals in their environment.
  • Kos, M., Vosse, T. G., Van den Brink, D., & Hagoort, P. (2010). About edible restaurants: Conflicts between syntax and semantics as revealed by ERPs. Frontiers in Psychology, 1, E222. doi:10.3389/fpsyg.2010.00222.

    Abstract

    In order to investigate conflicts between semantics and syntax, we recorded ERPs while participants read Dutch sentences. Sentences containing conflicts between syntax and semantics (Fred eats in a sandwich…/ Fred eats a restaurant…) elicited an N400. These results show that conflicts between syntax and semantics do not necessarily lead to P600 effects and are in line with the processing competition account. According to this parallel account, the syntactic and semantic processing streams are fully interactive and information from one level can influence the processing at another level. The relative strength of the cues of the processing streams determines which level is affected most strongly by the conflict. The processing competition account maintains the distinction between the N400 as an index of semantic processing and the P600 as an index of structural processing.
  • Kucera, K. S., Reddy, T. E., Pauli, F., Gertz, J., Logan, J. E., Myers, R. M., & Willard, H. F. (2011). Allele-specific distribution of RNA polymerase II on female X chromosomes. Human Molecular Genetics, 20, 3964-3973. doi:10.1093/hmg/ddr315.

    Abstract

    While the distribution of RNA polymerase II (PolII) in a variety of complex genomes is correlated with gene expression, the presence of PolII at a gene does not necessarily indicate active expression. Various patterns of PolII binding have been described genome wide; however, whether or not PolII binds at transcriptionally inactive sites remains uncertain. The two X chromosomes in female cells in mammals present an opportunity to examine each of the two alleles of a given locus in both active and inactive states, depending on which X chromosome is silenced by X chromosome inactivation. Here, we investigated PolII occupancy and expression of the associated genes across the active (Xa) and inactive (Xi) X chromosomes in human female cells to elucidate the relationship of gene expression and PolII binding. We find that, while PolII in the pseudoautosomal region occupies both chromosomes at similar levels, it is significantly biased toward the Xa throughout the rest of the chromosome. The general paucity of PolII on the Xi notwithstanding, detectable (albeit significantly reduced) binding can be observed, especially on the evolutionarily younger short arm of the X. PolII levels at genes that escape inactivation correlate with the levels of their expression; however, additional PolII sites can be found at apparently silenced regions, suggesting the possibility of a subset of genes on the Xi that are poised for expression. Consistent with this hypothesis, we show that a high proportion of genes associated with PolII-accessible sites, while silenced in GM12878, are expressed in other female cell lines.
  • Kuzla, C., & Ernestus, M. (2011). Prosodic conditioning of phonetic detail in German plosives. Journal of Phonetics, 39, 143-155. doi:10.1016/j.wocn.2011.01.001.

    Abstract

    This study investigates the prosodic conditioning of phonetic details which are candidate cues to phonological contrasts. German /b, d, g, p, t, k/ were examined in three prosodic positions. Lenis plosives /b, d, g/ were produced with less glottal vibration at larger prosodic boundaries, whereas their VOT showed no effect of prosody. VOT of fortis plosives /p, t, k/ decreased at larger boundaries, as did their burst intensity maximum. Vowels (when measured from consonantal release) following fortis plosives and lenis velars were shorter after larger boundaries. Closure duration, which did not contribute to the fortis/lenis contrast, was heavily affected by prosody. These results support neither of the hitherto proposed accounts of prosodic strengthening (Uniform Strengthening and Feature Enhancement). We propose a different account, stating that the phonological identity of speech sounds remains stable not only within, but also across prosodic positions (contrast-over-prosody hypothesis). Domain-initial strengthening hardly diminishes the contrast between prosodically weak fortis and strong lenis plosives.
  • Laaksonen, H., Kujala, J., Hultén, A., Liljeström, M., & Salmelin, R. (2011). MEG evoked responses and rhythmic activity provide spatiotemporally complementary measures of neural activity in language production. NeuroImage, 60, 29-36.

    Abstract

    Phase-locked evoked responses and event-related modulations of spontaneous rhythmic activity are the two main approaches used to quantify stimulus- or task-related changes in electrophysiological measures. The relationship between the two has been widely theorized upon but empirical research has been limited to the primary visual and sensorimotor cortex. However, both evoked responses and rhythms have been used as markers of neural activity in paradigms ranging from simple sensory to complex cognitive tasks. While some spatial agreement between the two phenomena has been observed, typically only one of the measures has been used in any given study, thus disallowing a direct evaluation of their exact spatiotemporal relationship. In this study, we sought to systematically clarify the connection between evoked responses and rhythmic activity. Using both measures, we identified the spatiotemporal patterns of task effects in three magnetoencephalography (MEG) data sets, all variants of a picture naming task. Evoked responses and rhythmic modulation yielded largely separate networks, with spatial overlap mainly in the sensorimotor and primary visual areas. Moreover, in the cortical regions that were identified with both measures the experimental effects they conveyed differed in terms of timing and function. Our results suggest that the two phenomena are largely detached and that both measures are needed for an accurate portrayal of brain activity.
