Publications

  • Huettig, F., & Altmann, G. T. M. (2005). Word meaning and the control of eye fixation: Semantic competitor effects and the visual world paradigm. Cognition, 96(1), B23-B32. doi:10.1016/j.cognition.2004.10.003.

    Abstract

    When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84–107; Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632–1634]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word ‘piano’ when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c), both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word ‘piano’ unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.
  • Huettig, F., Mishra, R. K., & Olivers, C. N. (2012). Mechanisms and representations of language-mediated visual attention. Frontiers in Psychology, 2, 394. doi:10.3389/fpsyg.2011.00394.

    Abstract

    The experimental investigation of language-mediated visual attention is a promising way to study the interaction of the cognitive systems involved in language, vision, attention, and memory. Here we highlight four challenges for a mechanistic account of this oculomotor behavior: the levels of representation at which language-derived and vision-derived representations are integrated; attentional mechanisms; types of memory; and the degree of individual and group differences. Central points in our discussion are (a) the possibility that local microcircuitries involving feedforward and feedback loops instantiate a common representational substrate of linguistic and non-linguistic information and attention; and (b) that an explicit working memory may be central to explaining interactions between language and visual attention. We conclude that a synthesis of further experimental evidence from a variety of fields of inquiry and the testing of distinct, non-student participant populations will prove to be critical.
  • IJzerman, H., Gallucci, M., Pouw, W., Weißgerber, S. C., Van Doesum, N. J., & Williams, K. D. (2012). Cold-blooded loneliness: Social exclusion leads to lower skin temperatures. Acta Psychologica, 140(3), 283-288. doi:10.1016/j.actpsy.2012.05.002.

    Abstract

    Being ostracized or excluded, even briefly and by strangers, is painful and threatens fundamental needs. Recent work by Zhong and Leonardelli (2008) found that excluded individuals perceive the room as cooler and that they desire warmer drinks. A perspective that many rely on in embodiment is the theoretical idea that people use metaphorical associations to understand social exclusion (see Landau, Meier, & Keefer, 2010). We suggest that people feel colder because they are colder. The results strongly support the idea that more complex metaphorical understandings of social relations are scaffolded onto literal changes in bodily temperature: Being excluded in an online ball tossing game leads to lower finger temperatures (Study 1), while the negative affect typically experienced after such social exclusion is alleviated after holding a cup of warm tea (Study 2). The authors discuss further implications for the interaction between body and social relations specifically, and for basic and cognitive systems in general.
  • Ikram, M. A., Fornage, M., Smith, A. V., Seshadri, S., Schmidt, R., Debette, S., Vrooman, H. A., Sigurdsson, S., Ropele, S., Taal, H. R., Mook-Kanamori, D. O., Coker, L. H., Longstreth, W. T., Niessen, W. J., DeStefano, A. L., Beiser, A., Zijdenbos, A. P., Struchalin, M., Jack, C. R., Rivadeneira, F., Uitterlinden, A. G., Knopman, D. S., Hartikainen, A.-L., Pennell, C. E., Thiering, E., Steegers, E. A. P., Hakonarson, H., Heinrich, J., Palmer, L. J., Jarvelin, M.-R., McCarthy, M. I., Grant, S. F. A., St Pourcain, B., Timpson, N. J., Smith, G. D., Sovio, U., Nalls, M. A., Au, R., Hofman, A., Gudnason, H., van der Lugt, A., Harris, T. B., Meeks, W. M., Vernooij, M. W., van Buchem, M. A., Catellier, D., Jaddoe, V. W. V., Gudnason, V., Windham, B. G., Wolf, P. A., van Duijn, C. M., Mosley, T. H., Schmidt, H., Launer, L. J., Breteler, M. M. B., DeCarli, C., the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium, & Early Growth Genetics (EGG) Consortium (2012). Common variants at 6q22 and 17q21 are associated with intracranial volume. Nature Genetics, 44(5), 539-544. doi:10.1038/ng.2245.

    Abstract

    During aging, intracranial volume remains unchanged and represents maximally attained brain size, while various interacting biological phenomena lead to brain volume loss. Consequently, intracranial volume and brain volume in late life reflect different genetic influences. Our genome-wide association study (GWAS) in 8,175 community-dwelling elderly persons did not reveal any associations at genome-wide significance (P < 5 × 10⁻⁸) for brain volume. In contrast, intracranial volume was significantly associated with two loci: rs4273712 (P = 3.4 × 10⁻¹¹), a known height-associated locus on chromosome 6q22, and rs9915547 (P = 1.5 × 10⁻¹²), localized to the inversion on chromosome 17q21. We replicated the associations of these loci with intracranial volume in a separate sample of 1,752 elderly persons (P = 1.1 × 10⁻³ for 6q22 and 1.2 × 10⁻³ for 17q21). Furthermore, we also found suggestive associations of the 17q21 locus with head circumference in 10,768 children (mean age of 14.5 months). Our data identify two loci associated with head size, with the inversion at 17q21 also likely to be involved in attaining maximal brain size.
  • Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1-2), 101-144. doi:10.1016/j.cognition.2002.06.001.

    Abstract

    This paper presents the results of a comprehensive meta-analysis of the relevant imaging literature on word production (82 experiments). In addition to the spatial overlap of activated regions, we also analyzed the available data on the time course of activations. The analysis specified regions and time windows of activation for the core processes of word production: lexical selection, phonological code retrieval, syllabification, and phonetic/articulatory preparation. A comparison of the word production results with studies on auditory word/non-word perception and reading showed that the time course of activations in word production is, on the whole, compatible with the temporal constraints that perception processes impose on the production processes they affect in picture/word interference paradigms.
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken. Neuropraxis, 2(6), 230-237.
  • Indefrey, P., Hellwig, F. M., Herzog, H., Seitz, R. J., & Hagoort, P. (2004). Neural responses to the production and comprehension of syntax in identical utterances. Brain and Language, 89(2), 312-319. doi:10.1016/S0093-934X(03)00352-3.

    Abstract

    Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment, subjects described visually presented scenes using either sentences, sequences of noun phrases, or sequences of syntactically unrelated words. In the language comprehension part of the experiment, subjects were auditorily presented with the same kinds of utterances and judged whether they matched the visual scenes. We were able to replicate the previous finding of a region in caudal Broca's area that is sensitive to the complexity of syntactic encoding in language production. In language comprehension, no hemodynamic activation differences due to syntactic complexity were found. Given that correct performance in the judgment task did not require syntactic processing of the auditory stimuli, the results suggest that the degree to which listeners recruit syntactic processing resources in language comprehension may be a function of the syntactic demands of the task or the stimulus material.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Ioana, M., Ferwerda, B., Farjadian, S., Ioana, L., Ghaderi, A., Oosting, M., Joosten, L. A., Van der Meer, J. W., Romeo, G., Luiselli, D., Dediu, D., & Netea, M. G. (2012). High variability of TLR4 gene in different ethnic groups of Iran. Innate Immunity, 18, 492-502. doi:10.1177/1753425911423043.

    Abstract

    Infectious diseases exert a constant evolutionary pressure on the innate immunity genes. TLR4, an important member of the Toll-like receptor family, specifically recognizes conserved structures of various infectious pathogens. Two functional TLR4 polymorphisms, Asp299Gly and Thr399Ile, modulate innate host defense against infections, and their prevalence between various populations has been proposed to be influenced by local infectious pressures. If this assumption is true, strong local infectious pressures would lead to a homogeneous pattern of these ancient TLR4 polymorphisms in geographically close populations, while a weak selection or genetic drift may result in a diverse pattern. We evaluated TLR4 polymorphisms in 15 ethnic groups of Iran, to assess whether infections exerted selective pressures on different haplotypes containing these variants. The Iranian subpopulations displayed a heterogeneous pattern of TLR4 polymorphisms, comprising various percentages of Asp299Gly and Thr399Ile alone or in combination. The Iranian sample as a whole showed an intermediate mixed pattern when compared with commonly found patterns in Africa, Europe, Eastern Asia and the Americas. These findings suggest a weak or absent selection pressure on TLR4 polymorphisms in the Middle East, which does not support the assumption of an important role of these polymorphisms in the host defence against local pathogens.
  • Ischebeck, A., Indefrey, P., Usui, N., Nose, I., Hellwig, F. M., & Taira, M. (2004). Reading in a regular orthography: An fMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16(5), 727-741. doi:10.1162/089892904970708.

    Abstract

    In order to separate the cognitive processes associated with phonological encoding and the use of a visual word form lexicon in reading, it is desirable to compare the processing of words presented in a visually familiar form with words in a visually unfamiliar form. Japanese Kana orthography offers this possibility. Two phonologically equivalent but visually dissimilar syllabaries allow the writing of, for example, foreign loanwords in two ways, only one of which is visually familiar. Familiarly written words, unfamiliarly written words, and pseudowords were presented in both Kana syllabaries (yielding six conditions in total) to participants during an fMRI measurement with a silent articulation task (Experiment 1) and a phonological lexical decision task (Experiment 2) using an event-related design. Consistent over two experimental tasks, the three different stimulus types (familiar, unfamiliar, and pseudoword) were found to activate selectively different brain regions previously associated with phonological encoding and word retrieval or meaning. Compatible with the predictions of the dual-route model for reading, pseudowords and visually unfamiliar words, which have to be read using phonological assembly, caused an increase in brain activity in left inferior frontal regions (BA 44/47), as compared to visually familiar words. Visually familiar and unfamiliar words were found to activate a range of areas associated with lexico-semantic processing more strongly than pseudowords, such as the left and right temporo-parietal region (BA 39/40), a region in the left middle/inferior temporal gyrus (BA 20/21), and the posterior cingulate (BA 31).
  • Jaeger, E., Leedham, S., Lewis, A., Segditsas, S., Becker, M., Rodenas-Cuadrado, P., Davis, H., Kaur, K., Heinimann, K., Howarth, K., East, J., Taylor, J., Thomas, H., & Tomlinson, I. (2012). Hereditary mixed polyposis syndrome is caused by a 40-kb upstream duplication that leads to increased and ectopic expression of the BMP antagonist GREM1. Nature Genetics, 44, 699-703. doi:10.1038/ng.2263.

    Abstract

    Hereditary mixed polyposis syndrome (HMPS) is characterized by apparent autosomal dominant inheritance of multiple types of colorectal polyp, with colorectal carcinoma occurring in a high proportion of affected individuals. Here, we use genetic mapping, copy-number analysis, exclusion of mutations by high-throughput sequencing, gene expression analysis and functional assays to show that HMPS is caused by a duplication spanning the 3' end of the SCG5 gene and a region upstream of the GREM1 locus. This unusual mutation is associated with increased allele-specific GREM1 expression. Whereas GREM1 is expressed in intestinal subepithelial myofibroblasts in controls, GREM1 is predominantly expressed in the epithelium of the large bowel in individuals with HMPS. The HMPS duplication contains predicted enhancer elements; some of these interact with the GREM1 promoter and can drive gene expression in vitro. Increased GREM1 expression is predicted to cause reduced bone morphogenetic protein (BMP) pathway activity, a mechanism that also underlies tumorigenesis in juvenile polyposis of the large bowel.
  • Janse, E., & Klitsch, J. (2004). Auditieve perceptie bij gezonde sprekers en bij sprekers met verworven taalstoornissen. Afasiologie, 26(1), 2-6.
  • Janse, E. (2012). A non-auditory measure of interference predicts distraction by competing speech in older adults. Aging, Neuropsychology and Cognition, 19, 741-758. doi:10.1080/13825585.2011.652590.

    Abstract

    In this study, older adults monitored for pre-assigned target sounds in a target talker's speech in a quiet (no noise) condition and in a condition with competing-talker noise. The question was to what extent the impact of the competing-talker noise on performance could be predicted from individual hearing loss and from a cognitive measure of inhibitory abilities, i.e., a measure of Stroop interference. The results showed that the non-auditory measure of Stroop interference predicted the impact of distraction on performance, over and above the effect of hearing loss. This suggests that individual differences in inhibitory abilities among older adults relate to susceptibility to distracting speech.
  • Janse, I., Bok, J., Hamidjaja, R. A., Hodemaekers, H. M., & van Rotterdam, B. J. (2012). Development and comparison of two assay formats for parallel detection of four biothreat pathogens by using suspension microarrays. PLoS One, 7(2), e31958. doi:10.1371/journal.pone.0031958.

    Abstract

    Microarrays provide a powerful analytical tool for the simultaneous detection of multiple pathogens. We developed diagnostic suspension microarrays for sensitive and specific detection of the biothreat pathogens Bacillus anthracis, Yersinia pestis, Francisella tularensis and Coxiella burnetii. Two assay chemistries for amplification and labeling were developed, one method using direct hybridization and the other using target-specific primer extension, combined with hybridization to universal arrays. Asymmetric PCR products for both assay chemistries were produced by using a multiplex asymmetric PCR amplifying 16 DNA signatures (16-plex). The performances of both assay chemistries were compared and their advantages and disadvantages are discussed. The developed microarrays detected multiple signature sequences and an internal control which made it possible to confidently identify the targeted pathogens and assess their virulence potential. The microarrays were highly specific and detected various strains of the targeted pathogens. Detection limits for the different pathogen signatures were similar or slightly higher compared to real-time PCR. Probit analysis showed that even a few genomic copies could be detected with 95% confidence. The microarrays detected DNA from different pathogens mixed in different ratios and from spiked or naturally contaminated samples. The assays that were developed have a potential for application in surveillance and diagnostics.
  • Janse, E. (2005). Neighbourhood density effects in auditory nonword processing in aphasia. Brain and Language, 95, 24-25. doi:10.1016/j.bandl.2005.07.027.
  • Janse, E., & Adank, P. (2012). Predicting foreign-accent adaptation in older adults. Quarterly Journal of Experimental Psychology, 65, 1563-1585. doi:10.1080/17470218.2012.658822.

    Abstract

    We investigated comprehension of and adaptation to speech in an unfamiliar accent in older adults. Participants performed a speeded sentence verification task for accented sentences: one group upon auditory-only presentation, and the other group upon audiovisual presentation. Our questions were whether audiovisual presentation would facilitate adaptation to the novel accent, and which cognitive and linguistic measures would predict adaptation. Participants were therefore tested on a range of background tests: hearing acuity, auditory verbal short-term memory, working memory, attention-switching control, selective attention, and vocabulary knowledge. Both auditory-only and audiovisual groups showed improved accuracy and decreasing response times over the course of the experiment, effectively showing accent adaptation. Even though the total amount of improvement was similar for the auditory-only and audiovisual groups, initial rate of adaptation was faster in the audiovisual group. Hearing sensitivity and short-term and working memory measures were associated with efficient processing of the novel accent. Analysis of the relationship between accent comprehension and the background tests revealed furthermore that selective attention and vocabulary size predicted the amount of adaptation over the course of the experiment. These results suggest that vocabulary knowledge and attentional abilities facilitate the attention-shifting strategies proposed to be required for perceptual learning.
  • Janse, E. (2004). Word perception in fast speech: Artificially time-compressed vs. naturally produced fast speech. Speech Communication, 42, 155-173. doi:10.1016/j.specom.2003.07.001.

    Abstract

    Natural fast speech differs from normal-rate speech with respect to its temporal pattern. Previous results showed that word intelligibility of heavily artificially time-compressed speech could not be improved by making its temporal pattern more similar to that of natural fast speech. This might have been due to the extrapolation of timing rules for natural fast speech to rates that are much faster than can be attained by human speakers. The present study investigates whether, at a speech rate that human speakers can attain, artificially time-compressed speech is easier to process if its timing pattern is similar to that of naturally produced fast speech. Our first experiment suggests, however, that word processing speed was slowed down, relative to linear compression. In a second experiment, word processing of artificially time-compressed speech was compared with processing of naturally produced fast speech. Even when naturally produced fast speech is perfectly intelligible, its less careful articulation, combined with the changed timing pattern, slows down processing, relative to linearly time-compressed speech. Furthermore, listeners preferred artificially time-compressed speech over naturally produced fast speech. These results suggest that linearly time-compressed speech has both a temporal and a segmental advantage over natural fast speech.
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Jansma, B. M., & Schiller, N. O. (2004). Monitoring syllable boundaries during speech production. Brain and Language, 90(1-3), 311-317. doi:10.1016/S0093-934X(03)00443-7.

    Abstract

    This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No 'canoe' vs. kaN.sel 'pulpit'; capital letters indicate pivotal consonants, dots mark syllable boundaries). First syllable responses were faster than second syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.
  • Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2004). Stem complexity and inflectional encoding in language production. Journal of Psycholinguistic Research, 33(5), 365-381. doi:10.1023/B:JOPR.0000039546.60121.a8.

    Abstract

    Three experiments are reported that examined whether stem complexity plays a role in inflecting polymorphemic words in language production. Experiment 1 showed that preparation effects for words with polymorphemic stems are larger when they are produced among words with constant inflectional structures compared to words with variable inflectional structures and simple stems. This replicates earlier findings for words with monomorphemic stems (Janssen et al., 2002). Experiments 2 and 3 showed that when inflectional structure is held constant, the preparation effects are equally large with simple and compound stems, and with compound and complex adjectival stems. These results indicate that inflectional encoding is blind to the complexity of the stem, which suggests that specific inflectional rather than generic morphological frames guide the generation of inflected forms in speaking words.
  • Janzen, G., & Van Turennout, M. (2004). Selective neural representation of objects relevant for navigation. Nature Neuroscience, 7(6), 673-677. doi:10.1038/nn1257.

    Abstract

    As people find their way through their environment, objects at navigationally relevant locations can serve as crucial landmarks. The parahippocampal gyrus has previously been shown to be involved in object and scene recognition. In the present study, we investigated the neural representation of navigationally relevant locations. Healthy human adults viewed a route through a virtual museum with objects placed at intersections (decision points) or at simple turns (non-decision points). Event-related functional magnetic resonance imaging (fMRI) data were acquired during subsequent recognition of the objects in isolation. Neural activity in the parahippocampal gyrus reflected the navigational relevance of an object's location in the museum. Parahippocampal responses were selectively increased for objects that occurred at decision points, independent of attentional demands. This increase occurred for forgotten as well as remembered objects, showing implicit retrieval of navigational information. The automatic storage of relevant object location in the parahippocampal gyrus provides a part of the neural mechanism underlying successful navigation.
  • Janzen, G., & Hawlik, M. (2005). Orientierung im Raum: Befunde zu Entscheidungspunkten. Zeitschrift für Psychologie, 213, 179-186.
  • Janzen, G., Haun, D. B. M., & Levinson, S. C. (2012). Tracking down abstract linguistic meaning: Neural correlates of spatial frame of reference ambiguities in language. PLoS One, 7(2), e30657. doi:10.1371/journal.pone.0030657.

    Abstract

    This functional magnetic resonance imaging (fMRI) study investigates a crucial parameter in spatial description, namely variants in the frame of reference chosen. Two frames of reference are available in European languages for the description of small-scale assemblages, namely the intrinsic (or object-oriented) frame and the relative (or egocentric) frame. We showed participants a sentence such as “the ball is in front of the man”, ambiguous between the two frames, and then a picture of a scene with a ball and a man – participants had to respond by indicating whether the picture did or did not match the sentence. There were two blocks, in which we induced each frame of reference by feedback. Thus for the crucial test items, participants saw exactly the same sentence and the same picture but now from one perspective, now the other. Using this method, we were able to precisely pinpoint the pattern of neural activation associated with each linguistic interpretation of the ambiguity, while holding the perceptual stimuli constant. Increased brain activity in bilateral parahippocampal gyrus was associated with the intrinsic frame of reference whereas increased activity in the right superior frontal gyrus and in the parietal lobe was observed for the relative frame of reference. The study is among the few to show a distinctive pattern of neural activation for an abstract yet specific semantic parameter in language. It shows with special clarity the nature of the neural substrate supporting each frame of spatial reference.
  • Jasmin, K., & Casasanto, D. (2012). The QWERTY Effect: How typing shapes the meanings of words. Psychonomic Bulletin & Review, 19, 499-504. doi:10.3758/s13423-012-0229-7.

    Abstract

    The QWERTY keyboard mediates communication for millions of language users. Here, we investigated whether differences in the way words are typed correspond to differences in their meanings. Some words are spelled with more letters on the right side of the keyboard and others with more letters on the left. In three experiments, we tested whether asymmetries in the way people interact with keys on the right and left of the keyboard influence their evaluations of the emotional valence of the words. We found the predicted relationship between emotional valence and QWERTY key position across three languages (English, Spanish, and Dutch). Words with more right-side letters were rated as more positive in valence, on average, than words with more left-side letters: the QWERTY effect. This effect was strongest in new words coined after QWERTY was invented and was also found in pseudowords. Although these data are correlational, the discovery of a similar pattern across languages, which was strongest in neologisms, suggests that the QWERTY keyboard is shaping the meanings of words as people filter language through their fingers. Widespread typing introduces a new mechanism by which semantic changes in language can arise.
  • Jepma, M., Verdonschot, R. G., Van Steenbergen, H., Rombouts, S. A. R. B., & Nieuwenhuis, S. (2012). Neural mechanisms underlying the induction and relief of perceptual curiosity. Frontiers in Behavioral Neuroscience, 6: 5. doi:10.3389/fnbeh.2012.00005.

    Abstract

    Curiosity is one of the most basic biological drives in both animals and humans, and has been identified as a key motive for learning and discovery. Despite the importance of curiosity and related behaviors, the topic has been largely neglected in human neuroscience; hence little is known about the neurobiological mechanisms underlying curiosity. We used functional magnetic resonance imaging (fMRI) to investigate what happens in our brain during the induction and subsequent relief of perceptual curiosity. Our core findings were that (1) the induction of perceptual curiosity, through the presentation of ambiguous visual input, activated the anterior insula and anterior cingulate cortex (ACC), brain regions sensitive to conflict and arousal; (2) the relief of perceptual curiosity, through visual disambiguation, activated regions of the striatum that have been related to reward processing; and (3) the relief of perceptual curiosity was associated with hippocampal activation and enhanced incidental memory. These findings provide the first demonstration of the neural basis of human perceptual curiosity. Our results provide neurobiological support for a classic psychological theory of curiosity, which holds that curiosity is an aversive condition of increased arousal whose termination is rewarding and facilitates memory.
  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Jesse, A., & Janse, E. (2012). Audiovisual benefit for recognition of speech presented with single-talker noise in older listeners. Language and Cognitive Processes, 27(7/8), 1167-1191. doi:10.1080/01690965.2011.620335.

    Abstract

    Older listeners are more affected than younger listeners in their recognition of speech in adverse conditions, such as when they also hear a single-competing speaker. In the present study, we investigated with a speeded response task whether older listeners with various degrees of hearing loss benefit under such conditions from also seeing the speaker they intend to listen to. We also tested, at the same time, whether older adults need postperceptual processing to obtain an audiovisual benefit. When tested in a phoneme-monitoring task with single-talker noise present, older (and younger) listeners detected target phonemes more reliably and more rapidly in meaningful sentences uttered by the target speaker when they also saw the target speaker. This suggests that older adults processed audiovisual speech rapidly and efficiently enough to benefit already during spoken sentence processing. Audiovisual benefits for older adults were similar in size to those observed for younger adults in terms of response latencies, but smaller for detection accuracy. Older adults with more hearing loss showed larger audiovisual benefits. Attentional abilities predicted the size of audiovisual response time benefits in both age groups. Audiovisual benefits were found in both age groups when monitoring for the visually highly distinct phoneme /p/ and when monitoring for the visually less distinct phoneme /k/. Visual speech thus provides segmental information about the target phoneme, but also provides more global contextual information that helps both older and younger adults in this adverse listening situation.
  • Jesse, A., & Johnson, E. K. (2012). Prosodic temporal alignment of co-speech gestures to speech facilitates referent resolution. Journal of Experimental Psychology: Human Perception and Performance, 38, 1567-1581. doi:10.1037/a0027921.

    Abstract

    Using a referent detection paradigm, we examined whether listeners can determine the object speakers are referring to by using the temporal alignment between the motion speakers impose on objects and their labeling utterances. Stimuli were created by videotaping speakers labeling a novel creature. Without being explicitly instructed to do so, speakers moved the creature during labeling. Trajectories of these motions were used to animate photographs of the creature. Participants in subsequent perception studies heard these labeling utterances while seeing side-by-side animations of two identical creatures in which only the target creature moved as originally intended by the speaker. Using the cross-modal temporal relationship between speech and referent motion, participants identified which creature the speaker was labeling, even when the labeling utterances were low-pass filtered to remove their semantic content or replaced by tone analogues. However, when the prosodic structure was eliminated by reversing the speech signal, participants no longer detected the referent as readily. These results provide strong support for a prosodic cross-modal alignment hypothesis. Speakers produce a perceptible link between the motion they impose upon a referent and the prosodic structure of their speech, and listeners readily use this prosodic cross-modal relationship to resolve referential ambiguity in word-learning situations.
  • Jiang, J., Dai, B., Peng, D., Zhu, C., Liu, L., & Lu, C. (2012). Neural synchronization during face-to-face communication. Journal of Neuroscience, 32(45), 16064-16069. doi:10.1523/JNEUROSCI.2926-12.2012.

    Abstract

    Although the human brain may have evolutionarily adapted to face-to-face communication, other modes of communication, e.g., telephone and e-mail, increasingly dominate our modern daily life. This study examined the neural difference between face-to-face communication and other types of communication by simultaneously measuring two brains using a hyperscanning approach. The results showed a significant increase in the neural synchronization in the left inferior frontal cortex during a face-to-face dialog between partners but none during a back-to-back dialog, a face-to-face monologue, or a back-to-back monologue. Moreover, the neural synchronization between partners during the face-to-face dialog resulted primarily from the direct interactions between the partners, including multimodal sensory information integration and turn-taking behavior. The communicating behavior during the face-to-face dialog could be predicted accurately based on the neural synchronization level. These results suggest that face-to-face communication, particularly dialog, has special neural features that other types of communication do not have and that the neural synchronization between partners may underlie successful face-to-face communication.
  • Johnson, E. K. (2005). English-learning infants' representations of word-forms with iambic stress. Infancy, 7(1), 95-105. doi:10.1207/s15327078in0701_8.

    Abstract

    Retaining detailed representations of unstressed syllables is a logical prerequisite for infants' use of probabilistic phonotactics to segment iambic words from fluent speech. The head-turn preference study was used to investigate the nature of English learners' representations of iambic word onsets. Fifty-four 10.5-month-olds were familiarized to passages containing the nonsense iambic word forms ginome and tupong. Following familiarization, infants were either tested on familiar (ginome and tupong) or near-familiar (pinome and bupong) versus unfamiliar (kidar and mafoos) words. Infants in the familiar test group (familiar vs. unfamiliar) oriented significantly longer to familiar than unfamiliar test items, whereas infants in the near-familiar test group (near-familiar vs. unfamiliar) oriented equally long to near-familiar and unfamiliar test items. Our results provide evidence that infants retain fairly detailed representations of unstressed syllables and therefore support the hypothesis that infants use phonotactic cues to find words in fluent speech.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.
  • Jolink, A. (2005). Finite linking in normally developing Dutch children and children with specific language impairment. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 61-81.
  • Jordens, P. (2004). Systematiek en dynamiek bij de verwerving van Finietheid. Toegepaste Taalwetenschap in Artikelen, 71, 9-22.

    Abstract

    In early Dutch learner varieties, there is no evidence of finiteness being a functional category. There is no V2nd: no correlation between inflectional morphology and movement. Initially, learners express the illocutive function of finiteness through the use of illocutive markers, with the non-use of an illocutive marker expressing the default illocutive function of assertion. Illocutive markers are functioning as adjuncts with scope over the predicate. Illocutive markers become re-analysed as functional elements. The driving force is the acquisition of the auxiliary verbs that occur with past participles. It leads to a reanalysis of illocutive markers as two separate elements: an auxiliary verb and a scope adverb. The (modal) auxiliary carries illocutive function. Lexical verb-argument structure (including the external argument) occurs within the domain of the auxiliary verb. The predicate as the focus constituent occurs within the domain of a scope adverb. This reanalysis establishes a position for the external argument within the domain of AUX. The acquisition of AUX causes the acquisition of a (hierarchical) structure with a complement as a constituent which represents an underlying verb-argument structure, a predicate as the domain of elements that are in focus, and an external (specifier) position as a landing site for elements with topic function.
  • Junge, C., Cutler, A., & Hagoort, P. (2012). Electrophysiological evidence of early word learning. Neuropsychologia, 50, 3702-3712. doi:10.1016/j.neuropsychologia.2012.10.012.

    Abstract

    Around their first birthday infants begin to talk, yet they comprehend words long before. This study investigated the event-related potentials (ERP) responses of nine-month-olds on basic level picture-word pairings. After a familiarization phase of six picture-word pairings per semantic category, comprehension for novel exemplars was tested in a picture-word matching paradigm. ERPs time-locked to pictures elicited a modulation of the Negative Central (Nc) component, associated with visual attention and recognition. It was attenuated by category repetition as well as by the type-token ratio of picture context. ERPs time-locked to words in the training phase became more negative with repetition (N300-600), but there was no influence of picture type-token ratio, suggesting that infants have identified the concept of each picture before a word was presented. Results from the test phase provided clear support that infants integrated word meanings with (novel) picture context. Here, infants showed different ERP responses for words that did or did not align with the picture context: a phonological mismatch (N200) and a semantic mismatch (N400). Together, results were informative of visual categorization, word recognition and word-to-world-mappings, all three crucial processes for vocabulary construction.
  • Junge, C., Kooijman, V., Hagoort, P., & Cutler, A. (2012). Rapid recognition at 10 months as a predictor of language development. Developmental Science, 15, 463-473. doi:10.1111/j.1467-7687.2012.1144.x.

    Abstract

    Infants’ ability to recognize words in continuous speech is vital for building a vocabulary. We here examined the amount and type of exposure needed for 10-month-olds to recognize words. Infants first heard a word, either embedded within an utterance or in isolation, then recognition was assessed by comparing event-related potentials to this word versus a word that they had not heard directly before. Although all 10-month-olds showed recognition responses to words first heard in isolation, not all infants showed such responses to words they had first heard within an utterance. Those that did succeed in the latter, harder, task, however, understood more words and utterances when re-tested at 12 months, and understood more words and produced more words at 24 months, compared with those who had shown no such recognition response at 10 months. The ability to rapidly recognize the words in continuous utterances is clearly linked to future language development.
  • Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2012). The communicative influence of gesture and action during speech comprehension: Gestures have the upper hand [Abstract]. Abstracts of the Acoustics 2012 Hong Kong conference published in The Journal of the Acoustical Society of America, 131, 3311. doi:10.1121/1.4708385.

    Abstract

    Hand gestures combine with speech to form a single integrated system of meaning during language comprehension (Kelly et al., 2010). However, it is unknown whether gesture is uniquely integrated with speech or is processed like any other manual action. Thirty-one participants watched videos presenting speech with gestures or manual actions on objects. The relationship between the speech and gesture/action was either complementary (e.g., “He found the answer,” while producing a calculating gesture vs. actually using a calculator) or incongruent (e.g., the same sentence paired with the incongruent gesture/action of stirring with a spoon). Participants watched the video (prime) and then responded to a written word (target) that was or was not spoken in the video prime (e.g., “found” or “cut”). ERPs were taken to the primes (time-locked to the spoken verb, e.g., “found”) and the written targets. For primes, there was a larger frontal N400 (semantic processing) to incongruent vs. congruent items for the gesture, but not action, condition. For targets, the P2 (phonemic processing) was smaller for target words following congruent vs. incongruent gesture, but not action, primes. These findings suggest that hand gestures are integrated with speech in a privileged fashion compared to manual actions on objects.
  • Kempen, G. (1991). Conjunction reduction and gapping in clause-level coordination: An inheritance-based approach. Computational Intelligence, 7, 357-360. doi:10.1111/j.1467-8640.1991.tb00406.x.
  • Kempen, G. (1990). Een slordig gestapeld servies [Review of the book Tranen van de krokodil by Piet Vroon]. Intermediair, 26(17), 67-69.
  • Kempen, G. (1998). Comparing and explaining the trajectories of first and second language acquisition: In search of the right mix of psychological and linguistic factors [Commentary]. Bilingualism: Language and Cognition, 1, 29-30. doi:10.1017/S1366728998000066.

    Abstract

    When you compare the behavior of two different age groups which are trying to master the same sensori-motor or cognitive skill, you are likely to discover varying learning routes: different stages, different intervals between stages, or even different orderings of stages. Such heterogeneous learning trajectories may be caused by at least six different types of factors: (1) Initial state: the kinds and levels of skills the learners have available at the onset of the learning episode. (2) Learning mechanisms: rule-based, inductive, connectionist, parameter setting, and so on. (3) Input and feedback characteristics: learning stimuli, information about success and failure. (4) Information processing mechanisms: capacity limitations, attentional biases, response preferences. (5) Energetic variables: motivation, emotional reactions. (6) Final state: the fine-structure of kinds and levels of subskills at the end of the learning episode. This applies to language acquisition as well. First and second language learners probably differ on all six factors. Nevertheless, the debate between advocates and opponents of the Fundamental Difference Hypothesis concerning L1 and L2 acquisition has looked almost exclusively at the first two factors. Those who believe that L1 learners have access to Universal Grammar whereas L2 learners rely on language processing strategies, postulate different learning mechanisms (UG parameter setting in L1, more general inductive strategies in L2 learning). Pienemann opposes this view and, based on his Processability Theory, argues that L1 and L2 learners start out from different initial states: they come to the grammar learning task with different structural hypotheses (SOV versus SVO as basic word order of German).
  • Kempen, G. (1971). [Review of the book General Psychology by N. Dember and J.J. Jenkins]. Nijmeegs Tijdschrift voor Psychologie, 19, 132-133.
  • Kempen, G., & Harbusch, K. (2003). An artificial opposition between grammaticality and frequency: Comment on Bornkessel, Schlesewsky & Friederici (2002). Cognition, 90(2), 205-210 [Rectification on p. 215]. doi:10.1016/S0010-0277(03)00145-8.

    Abstract

    In a recent Cognition paper (Cognition 85 (2002) B21), Bornkessel, Schlesewsky, and Friederici report ERP data that they claim “show that online processing difficulties induced by word order variations in German cannot be attributed to the relative infrequency of the constructions in question, but rather appear to reflect the application of grammatical principles during parsing” (p. B21). In this commentary we demonstrate that the posited contrast between grammatical principles and construction (in)frequency as sources of parsing problems is artificial because it is based on factually incorrect assumptions about the grammar of German and on inaccurate corpus frequency data concerning the German constructions involved.
  • Kempen, G., & Vosse, T. (1989). Incremental syntactic tree formation in human sentence processing: A cognitive architecture based on activation decay and simulated annealing. Connection Science, 1(3), 273-290. doi:10.1080/09540098908915642.

    Abstract

    A new cognitive architecture is proposed for the syntactic aspects of human sentence processing. The architecture, called Unification Space, is biologically inspired but not based on neural nets. Instead it relies on biosynthesis as a basic metaphor. We use simulated annealing as an optimization technique which searches for the best configuration of isolated syntactic segments or subtrees in the final parse tree. The gradually decaying activation of individual syntactic nodes determines the ‘global excitation level’ of the system. This parameter serves the function of ‘computational temperature’ in simulated annealing. We have built a computer implementation of the architecture which simulates well-known sentence understanding phenomena. We report successful simulations of the psycholinguistic effects of clause embedding, minimal attachment, right association and lexical ambiguity. In addition, we simulated impaired sentence understanding as observable in agrammatic patients. Since the Unification Space allows for contextual (semantic and pragmatic) influences on the syntactic tree formation process, it belongs to the class of interactive sentence processing models.
  • Kempen, G. (1990). Microcomputers en cognitiewetenschap. SURF: Tijdschrift over Computerdienstverlening in het Hoger Onderwijs en Onderzoek, 4(3), 2.
  • Kempen, G., & Jongen-Janner, E. (1990). Naar een flexibele methode voor algoritmisch grammatica- en spellingonderwijs. Pedagogisch Tijdschrift, 15, 280-289.
  • Kempen, G. (1992). Grammar based text processing. Document Management: Nieuwsbrief voor Documentaire Informatiekunde, 1(2), 8-10.
  • Kempen, G., Olsthoorn, N., & Sprenger, S. (2012). Grammatical workspace sharing during language production and language comprehension: Evidence from grammatical multitasking. Language and Cognitive Processes, 27, 345-380. doi:10.1080/01690965.2010.544583.

    Abstract

    Grammatical encoding and grammatical decoding (in sentence production and comprehension, respectively) are often portrayed as independent modalities of grammatical performance that only share declarative resources: lexicon and grammar. The processing resources subserving these modalities are supposed to be distinct. In particular, one assumes the existence of two workspaces where grammatical structures are assembled and temporarily maintained—one for each modality. An alternative theory holds that the two modalities share many of their processing resources and postulates a single mechanism for the online assemblage and short-term storage of grammatical structures: a shared workspace. We report two experiments with a novel “grammatical multitasking” paradigm: the participants had to read (i.e., decode) and to paraphrase (encode) sentences presented in fragments, responding to each input fragment as fast as possible with a fragment of the paraphrase. The main finding was that grammatical constraints with respect to upcoming input that emanate from decoded sentence fragments are immediately replaced by grammatical expectations emanating from the structure of the corresponding paraphrase fragments. This evidences that the two modalities have direct access to, and operate upon, the same (i.e., token-identical) grammatical structures. This is possible only if the grammatical encoding and decoding processes command the same, shared grammatical workspace. Theoretical implications for important forms of grammatical multitasking—self-monitoring, turn-taking in dialogue, speech shadowing, and simultaneous translation—are explored.
  • Kempen, G. (1971). Het onthouden van eenvoudige zinnen met zijn en hebben als werkwoorden: Een experiment met steekwoordreaktietijden. Nijmeegs Tijdschrift voor Psychologie, 19, 262-274.
  • Kempen, G., & Kolk, H. (1986). Het voortbrengen van normale en agrammatische taal. Van Horen Zeggen, 27(2), 36-40.
  • Kempen, G. (1971). Opslag van woordbetekenissen in het semantisch geheugen. Nijmeegs Tijdschrift voor Psychologie, 19, 36-50.
  • Kempen, G. (1990). Representation in memory: Volume 2, chapter 8, pp. 511–587 by David E. Rumelhart and Donald A. Norman [Book review]. Acta Psychologica, 75, 191-192. doi:10.1016/0001-6918(90)90107-Q.
  • Kempen, G. (1990). Taaltechnologie en de toekomst van tekstautomatisering. Informatie, 32, 724-727.
  • Kempen, G. (1986). RIKS: Kennistechnologisch centrum voor bedrijfsleven en wetenschap. Informatie, 28, 122-125.
  • Kempen, G., & Huijbers, P. (1983). The lexicalization process in sentence production and naming: Indirect election of words. Cognition, 14(2), 185-209. doi:10.1016/0010-0277(83)90029-X.

    Abstract

    A series of experiments is reported in which subjects describe simple visual scenes by means of both sentential and non-sentential responses. The data support the following statements about the lexicalization (word finding) process. (1) Words used by speakers in overt naming or sentence production responses are selected by a sequence of two lexical retrieval processes, the first yielding abstract pre-phonological items (L1-items), the second one adding their phonological shapes (L2-items). (2) The selection of several L1-items for a multi-word utterance can take place simultaneously. (3) A monitoring process is watching the output of L1-lexicalization to check if it is in keeping with prevailing constraints upon utterance format. (4) Retrieval of the L2-item which corresponds with a given L1-item waits until the L1-item has been checked by the monitor, and all other L1-items needed for the utterance under construction have become available. A coherent picture of the lexicalization process begins to emerge when these characteristics are brought together with other empirical results in the area of naming and sentence production, e.g., picture naming reaction times (Seymour, 1979), speech errors (Garrett, 1980), and word order preferences (Bock, 1982).
  • Kempen, G. (1983). Wat betekent taalvaardigheid voor informatiesystemen? TNO project: Maandblad voor toegepaste wetenschappen, 11, 401-403.
  • Kemps, R. J. J. K., Ernestus, M., Schreuder, R., & Baayen, R. H. (2004). Processing reduced word forms: The suffix restoration effect. Brain and Language, 90(1-3), 117-127. doi:10.1016/S0093-934X(03)00425-5.

    Abstract

    Listeners cannot recognize highly reduced word forms in isolation, but they can do so when these forms are presented in context (Ernestus, Baayen, & Schreuder, 2002). This suggests that not all possible surface forms of words have equal status in the mental lexicon. The present study shows that the reduced forms are linked to the canonical representations in the mental lexicon, and that these latter representations induce reconstruction processes. Listeners restore suffixes that are partly or completely missing in reduced word forms. A series of phoneme-monitoring experiments reveals the nature of this restoration: the basis for suffix restoration is mainly phonological in nature, but orthography has an influence as well.
  • Kemps, R. J. J. K., Wurm, L. H., Ernestus, M., Schreuder, R., & Baayen, R. H. (2005). Prosodic cues for morphological complexity in Dutch and English. Language and Cognitive Processes, 20(1/2), 43-73. doi:10.1080/01690960444000223.

    Abstract

    Previous work has shown that Dutch listeners use prosodic information in the speech signal to optimise morphological processing: Listeners are sensitive to prosodic differences between a noun stem realised in isolation and a noun stem realised as part of a plural form (in which the stem is followed by an unstressed syllable). The present study, employing a lexical decision task, provides an additional demonstration of listeners' sensitivity to prosodic cues in the stem. This sensitivity is shown for two languages that differ in morphological productivity: Dutch and English. The degree of morphological productivity does not correlate with listeners' sensitivity to prosodic cues in the stem, but it is reflected in differential sensitivities to the word-specific log odds ratio of encountering an unshortened stem (i.e., a stem in isolation) versus encountering a shortened stem (i.e., a stem followed by a suffix consisting of one or more unstressed syllables). In addition to being sensitive to the prosodic cues themselves, listeners are also sensitive to the probabilities of occurrence of these prosodic cues.
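
    One common reading of the word-specific log odds ratio mentioned above, assuming it is estimated from simple corpus counts (the paper's exact estimator is not reproduced here), is

        \log \mathrm{odds}(\mathrm{stem}) = \log \frac{f(\mathrm{stem\ unshortened})}{f(\mathrm{stem\ shortened})},

    where f(stem unshortened) is the frequency of the stem occurring in isolation and f(stem shortened) is its frequency when followed by a suffix of one or more unstressed syllables.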
  • Kemps, R. J. J. K., Ernestus, M., Schreuder, R., & Baayen, R. H. (2005). Prosodic cues for morphological complexity: The case of Dutch plural nouns. Memory & Cognition, 33(3), 430-446.

    Abstract

    It has recently been shown that listeners use systematic differences in vowel length and intonation to resolve ambiguities between onset-matched simple words (Davis, Marslen-Wilson, & Gaskell, 2002; Salverda, Dahan, & McQueen, 2003). The present study shows that listeners also use prosodic information in the speech signal to optimize morphological processing. The precise acoustic realization of the stem provides crucial information to the listener about the morphological context in which the stem appears and attenuates the competition between stored inflectional variants. We argue that listeners are able to make use of prosodic information, even though the speech signal is highly variable within and between speakers, by virtue of the relative invariance of the duration of the onset. This provides listeners with a baseline against which the durational cues in a vowel and a coda can be evaluated. Furthermore, our experiments provide evidence for item-specific prosodic effects.
  • Keune, K., Ernestus, M., Van Hout, R., & Baayen, R. H. (2005). Variation in Dutch: From written "mogelijk" to spoken "mok". Corpus Linguistics and Linguistic Theory, 1(2), 183-223. doi:10.1515/cllt.2005.1.2.183.

    Abstract

    In Dutch, high-frequency words with the suffix -lijk are often highly reduced in spontaneous unscripted speech. This study addressed socio-geographic variation in the reduction of such words against the backdrop of the variation in their use in written and spoken Dutch. Multivariate analyses of the frequencies with which the words were used in a factorially contrasted set of subcorpora revealed significant variation involving the speaker's country, sex, and education level for spoken Dutch, and involving country and register for written Dutch. Acoustic analyses revealed that Dutch men reduced most often, while Flemish highly educated women reduced least. Two linguistic context effects emerged, one prosodic, and the other pertaining to the flow of information. Words in sentence-final position showed less reduction, while words that were better predictable from the preceding word in the sentence (based on mutual information) tended to be reduced more often. The increased probability of reduction for forms that are more predictable in context, combined with the loss of the suffix in the more extremely reduced forms, suggests that high-frequency words in -lijk are undergoing a process of erosion that causes them to gravitate towards monomorphemic function words.
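
    The predictability measure mentioned above can be sketched as the (pointwise) mutual information between a word and the word preceding it, assuming corpus-estimated probabilities (the paper's exact estimator is not reproduced here):

        \mathrm{MI}(w_{i-1}, w_i) = \log_2 \frac{P(w_{i-1}, w_i)}{P(w_{i-1})\, P(w_i)},

    where higher values indicate that w_i is more predictable from its predecessor w_{i-1}.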
  • Kidd, E. (2012). Implicit statistical learning is directly associated with the acquisition of syntax. Developmental Psychology, 48(1), 171-184. doi:10.1037/a0025405.

    Abstract

    This article reports on an individual differences study that investigated the role of implicit statistical learning in the acquisition of syntax in children. One hundred children ages 4 years 5 months through 6 years 11 months completed a test of implicit statistical learning, a test of explicit declarative learning, and standardized tests of verbal and nonverbal ability. They also completed a syntactic priming task, which provided a dynamic index of children's facility to detect and respond to changes in the input frequency of linguistic structure. The results showed that implicit statistical learning ability was directly associated with the long-term maintenance of the primed structure. The results constitute the first empirical demonstration of a direct association between implicit statistical learning and syntactic acquisition in children.
  • Kidd, E. (2012). Individual differences in syntactic priming in language acquisition. Applied Psycholinguistics, 33(2), 393-418. doi:10.1017/S0142716411000415.

    Abstract

    Although the syntactic priming methodology is a promising tool for language acquisition researchers, using the technique with children raises issues that are not problematic in adult research. The current paper reports on an individual differences study that addressed some of these outstanding issues. (a) Does priming purely reflect syntactic knowledge, or are other processes involved? (b) How can we explain individual differences, which are the norm rather than the exception? (c) Do priming effects in developmental populations reflect the same mechanisms thought to be responsible for priming in adults? One hundred twenty-two (N = 122) children aged 4 years, 5 months (4;5)–6;11 (mean = 5;7) completed a syntactic priming task that aimed to prime the English passive construction, in addition to standardized tests of vocabulary, grammar, and nonverbal intelligence. The results confirmed the widely held assumption that syntactic priming reflects the presence of syntactic knowledge, but not in every instance. However, they also suggested that nonlinguistic processes contribute significantly to priming. Priming was in no way related to age. Finally, the children's linguistic knowledge and nonverbal ability determined the manner in which they were primed. The results provide a clearer picture of what it means to be primed in acquisition.
  • Kidd, E. (2004). Grammars, parsers, and language acquisition. Journal of Child Language, 31(2), 480-483. doi:10.1017/S0305000904006117.

    Abstract

    Drozd's critique of Crain & Thornton's (C&T) (1998) book Investigations in Universal Grammar (IUG) raises many issues concerning theory and experimental design within generative approaches to language acquisition. I focus here on one of the strongest theoretical claims of the Modularity Matching Model (MMM): continuity of processing. For reasons different to Drozd, I argue that the assumption is tenuous. Furthermore, I argue that the focus of the MMM and the methodological prescriptions contained in IUG are too narrow to capture language acquisition.
  • Kidd, E., & Bavin, E. L. (2005). Lexical and referential cues to sentence interpretation: An investigation of children's interpretations of ambiguous sentences. Journal of Child Language, 32(4), 855-876. doi:10.1017/S0305000905007051.

    Abstract

    This paper reports on an investigation of children's (aged 3;5–9;8) comprehension of sentences containing ambiguity of prepositional phrase (PP) attachment. Results from a picture selection study (N=90) showed that children use verb semantics and preposition type to resolve the ambiguity, with older children also showing sensitivity to the definiteness of the object NP as a cue to interpretation. Study 2 investigated three- and five-year-old children's (N=47) ability to override an instrumental interpretation of ambiguous PPs in order to process attributes of the referential scene. The results showed that while five-year-olds are capable of incorporating aspects of the referential scene into their interpretations, three-year-olds are not as successful. Overall, the results suggest that children are attuned very early to the lexico-semantic co-occurrences that have been shown to aid ambiguity resolution in adults, but that more diffuse cues to interpretation are used only later in development.
  • Kidd, E. (2003). Relative clause comprehension revisited: Commentary on Eisenberg (2002). Journal of Child Language, 30(3), 671-679. doi:10.1017/S0305000903005683.

    Abstract

    Eisenberg (2002) presents data from an experiment investigating three- and four-year-old children's comprehension of restrictive relative clauses (RC). From the results she argues, contrary to Hamburger & Crain (1982), that children do not have discourse knowledge of the felicity conditions of RCs before acquiring the syntax of relativization. This note evaluates this conclusion on the basis of the methodology used, and proposes that an account of syntactic development needs to be sensitive to the real-time processing requirements acquisition places on the learner.
  • Kim, S., Cho, T., & McQueen, J. M. (2012). Phonetic richness can outweigh prosodically-driven phonological knowledge when learning words in an artificial language. Journal of Phonetics, 40, 443-452. doi:10.1016/j.wocn.2012.02.005.

    Abstract

    How do Dutch and Korean listeners use acoustic–phonetic information when learning words in an artificial language? Dutch has a voiceless ‘unaspirated’ stop, produced with shortened Voice Onset Time (VOT) in prosodic strengthening environments (e.g., in domain-initial position and under prominence), enhancing the feature {−spread glottis}; Korean has a voiceless ‘aspirated’ stop produced with lengthened VOT in similar environments, enhancing the feature {+spread glottis}. Given this cross-linguistic difference, two competing hypotheses were tested. The phonological-superiority hypothesis predicts that Dutch and Korean listeners should utilize shortened and lengthened VOTs, respectively, as cues in artificial-language segmentation. The phonetic-superiority hypothesis predicts that both groups should take advantage of the phonetic richness of longer VOTs (i.e., their enhanced auditory–perceptual robustness). Dutch and Korean listeners learned the words of an artificial language better when word-initial stops had longer VOTs than when they had shorter VOTs. It appears that language-specific phonological knowledge can be overridden by phonetic richness in processing an unfamiliar language. Listeners nonetheless performed better when the stimuli were based on the speech of their native languages, suggesting that the use of richer phonetic information was modulated by listeners' familiarity with the stimuli.
  • Kim, A., & Lai, V. T. (2012). Rapid interactions between lexical semantic and word form analysis during word recognition in context: Evidence from ERPs. Journal of Cognitive Neuroscience, 24, 1104-1112. doi:10.1162/jocn_a_00148.

    Abstract

    We used event-related potentials (ERPs) to investigate the timecourse of interactions between lexical-semantic and sub-lexical visual word-form processing during word recognition. Participants read sentence-embedded pseudowords that orthographically resembled a contextually-supported real word (e.g., “She measured the flour so she could bake a ceke …”) or did not (e.g., “She measured the flour so she could bake a tont …”) along with nonword consonant strings (e.g., “She measured the flour so she could bake a srdt …”). Pseudowords that resembled a contextually-supported real word (“ceke”) elicited an enhanced positivity at 130 msec (P130), relative to real words (e.g., “She measured the flour so she could bake a cake …”). Pseudowords that did not resemble a plausible real word (“tont”) enhanced the N170 component, as did nonword consonant strings (“srdt”). The effect pattern shows that the visual word recognition system is, perhaps counterintuitively, more rapidly sensitive to minor than to flagrant deviations from contextually-predicted inputs. The findings are consistent with rapid interactions between lexical and sub-lexical representations during word recognition, in which rapid lexical access of a contextually-supported word (CAKE) provides top-down excitation of form features (“cake”), highlighting the anomaly of an unexpected word “ceke”.
  • Kim, S., Broersma, M., & Cho, T. (2012). The use of prosodic cues in learning new words in an unfamiliar language. Studies in Second Language Acquisition, 34, 415-444. doi:10.1017/S0272263112000137.

    Abstract

    The artificial language learning paradigm was used to investigate to what extent the use of prosodic features is universally applicable or specifically language driven in learning an unfamiliar language, and how nonnative prosodic patterns can be learned. Listeners of unrelated languages—Dutch (n = 100) and Korean (n = 100)—participated. The words to be learned varied with prosodic cues: no prosody, fundamental frequency (F0) rise in initial and final position, final lengthening, and final lengthening plus F0 rise. Both listener groups performed well above chance level with the final lengthening cue, confirming its crosslinguistic use. As for final F0 rise, however, Dutch listeners did not use it until the second exposure session, whereas Korean listeners used it at initial exposure. Neither group used initial F0 rise. On the basis of these results, F0 and durational cues appear to be universal in the sense that they are used across languages for their universally applicable auditory-perceptual saliency, but how they are used is language specific and constrains the use of available prosodic cues in processing a nonnative language. A discussion on how these findings bear on theories of second language (L2) speech perception and learning is provided.
  • Kircher, T. T. J., Brammer, M. J., Levelt, W. J. M., Bartels, M., & McGuire, P. K. (2004). Pausing for thought: Engagement of left temporal cortex during pauses in speech. NeuroImage, 21(1), 84-90. doi:10.1016/j.neuroimage.2003.09.041.

    Abstract

    Pauses during continuous speech, particularly those that occur within clauses, are thought to reflect the planning of forthcoming verbal output. We used functional Magnetic Resonance Imaging (fMRI) to examine their neural correlates. Six volunteers were scanned while describing seven Rorschach inkblots, producing 3 min of speech per inkblot. In an event-related design, the level of blood oxygenation level dependent (BOLD) contrast during brief speech pauses (mean duration 1.3 s, SD 0.3 s) during overt speech was contrasted with that during intervening periods of articulation. We then examined activity associated with pauses that occurred within clauses and pauses that occurred between grammatical junctions. Relative to articulation during speech, pauses were associated with activation in the banks of the left superior temporal sulcus (BA 39/22), at the temporoparietal junction. Continuous speech was associated with greater activation bilaterally in the inferior frontal (BA 44/45), middle frontal (BA 8) and anterior cingulate (BA 24) gyri, the middle temporal sulcus (BA 21/22), the occipital cortex and the cerebellum. Left temporal activation was evident during pauses that occurred within clauses but not during pauses at grammatical junctions. In summary, articulation during continuous speech involved frontal, temporal and cerebellar areas, while pausing was associated with activity in the left temporal cortex, especially when this occurred within a clause. The latter finding is consistent with evidence that within-clause pauses are a correlate of speech planning and in particular lexical retrieval.
  • Kirjavainen, M., Nikolaev, A., & Kidd, E. (2012). The effect of frequency and phonological neighbourhood density on the acquisition of past tense verbs by Finnish children. Cognitive Linguistics, 23(2), 273-315. doi:10.1515/cog-2012-0009.

    Abstract

    The acquisition of the past tense has received substantial attention in the psycholinguistics literature, yet most studies report data from English or closely related Indo-European languages. We report on a past tense elicitation study on 136 4–6-year-old children that were acquiring a highly inflected Finno-Ugric (Uralic) language—Finnish. The children were tested on real and novel verbs (N = 120) exhibiting (1) productive, (2) semi-productive, or (3) non-productive inflectional processes manipulated for frequency and phonological neighbourhood density (PND). We found that Finnish children are sensitive to lemma/base frequency and PND when processing inflected words, suggesting that even though children were using suffixation processes, they were also paying attention to the item level properties of the past tense verbs. This paper contributes to the growing body of research suggesting a single analogical/associative mechanism is sufficient in processing both productive (i.e., regular-like) and non-productive (i.e., irregular-like) words. We argue that seemingly rule-like elements in inflectional morphology are an emergent property of the lexicon.
  • Kita, S., & Ozyurek, A. (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language, 48(1), 16-32. doi:10.1016/S0749-596X(02)00505-3.

    Abstract

    Gestures that spontaneously accompany speech convey information coordinated with the concurrent speech. There has been considerable theoretical disagreement about the process by which this informational coordination is achieved. Some theories predict that the information encoded in gesture is not influenced by how information is verbally expressed. However, others predict that gestures encode only what is encoded in speech. This paper investigates this issue by comparing informational coordination between speech and gesture across different languages. Narratives in Turkish, Japanese, and English were elicited using an animated cartoon as the stimulus. It was found that gestures used to express the same motion events were influenced simultaneously by (1) how features of motion events were expressed in each language, and (2) spatial information in the stimulus that was never verbalized. From this, it is concluded that gestures are generated from spatio-motoric processes that interact on-line with the speech production process. Through the interaction, spatio-motoric information to be expressed is packaged into chunks that are verbalizable within a processing unit for speech formulation. In addition, we propose a model of speech and gesture production as one of a class of frameworks that are compatible with the data.
  • Klein, W. (Ed.). (2004). Philologie auf neuen Wegen [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 136.
  • Klein, W., & Dimroth, C. (Eds.). (2005). Spracherwerb [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 140.
  • Klein, W. (2005). Über den Nutzen naturwissenschaftlicher Denkmodelle für die Geisteswissenschaften. Debatte, 2, 45-50.
  • Klein, W. (Ed.). (2004). Universitas [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik (LiLi), 134.
  • Klein, W. (2005). Vom Sprachvermögen zum Sprachlichen System. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 8-39.
  • Klein, W. (2004). Vom Wörterbuch zum digitalen lexikalischen System. Zeitschrift für Literaturwissenschaft und Linguistik, 136, 10-55.
  • Klein, W. (2003). Wozu braucht man eigentlich Flexionsmorphologie? Zeitschrift für Literaturwissenschaft und Linguistik, 131, 23-54.
  • Klein, W. (2005). Wie ist eine exakte Wissenschaft von der Literatur möglich? Zeitschrift für Literaturwissenschaft und Linguistik, 137, 80-100.
  • Klein, W. (1990). A theory of language acquisition is not so easy. Studies in Second Language Acquisition, 12, 219-231. doi:10.1017/S0272263100009104.
  • Klein, W. (1986). Der Wahn vom Sprachverfall und andere Mythen. Zeitschrift für Literaturwissenschaft und Linguistik, 62, 11-28.
  • Klein, W. (1990). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 20(78), 7-8.
  • Klein, W. (1986). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 16(62), 9-10.
  • Klein, W. (1992). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 22(86), 7-8.
  • Klein, W. (1990). Comments on the papers by Bierwisch and Zwicky. Yearbook of Morphology, 3, 217-221.
  • Klein, W. (1971). Eine kommentierte Bibliographie zur Computerlinguistik. Linguistische Berichte, (11), 101-134.
  • Klein, W., & Franceschini, R. (Eds.). (2003). Einfache Sprache [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 131.
  • Klein, W. (2004). Auf der Suche nach den Prinzipien, oder: Warum die Geisteswissenschaften auf dem Rückzug sind. Zeitschrift für Literaturwissenschaft und Linguistik, 134, 19-44.
  • Klein, W. (2004). Im Lauf der Jahre. Linguistische Berichte, 200, 397-407.
  • Klein, W. (2005). Hoe is een exacte literatuurwetenschap mogelijk? Parmentier, 14(1), 48-65.
  • Klein, W. (Ed.). (2005). Nicht nur Literatur [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 137.
  • Klein, W. (1991). Geile Binsenbüschel, sehr intime Gespielen: Ein paar Anmerkungen über Arno Schmidt als Übersetzer. Zeitschrift für Literaturwissenschaft und Linguistik, 84, 124-129.
  • Klein, W. (Ed.). (1989). Kindersprache [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (73).
  • Klein, W. (Ed.). (1983). Intonation [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (49).
  • Klein, W. (1989). Introspection into what? Review of C. Faerch & G. Kasper (Eds.), Introspection in second language research (1987). Contemporary Psychology, 34(12), 1119-1120.
  • Klein, W. (Ed.). (1998). Kaleidoskop [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (112).
  • Klein, W. (1991). Raumausdrücke. Linguistische Berichte, 132, 77-114.
  • Klein, W. (1992). Tempus, Aspekt und Zeitadverbien. Kognitionswissenschaft, 2, 107-118.
  • Klein, W., & Von Stutterheim, C. (1991). Text structure and referential movement. Arbeitsberichte des Forschungsprogramms S&P: Sprache und Pragmatik, 22.
  • Klein, W. (Ed.). (1992). Textlinguistik [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (86).
  • Klein, W., & Von Stutterheim, C. (1992). Textstruktur und referentielle Bewegung. Zeitschrift für Literaturwissenschaft und Linguistik, 86, 67-92.
  • Klein, W. (1998). The contribution of second language acquisition research. Language Learning, 48, 527-550. doi:10.1111/0023-8333.00057.

    Abstract

    During the last 25 years, second language acquisition (SLA) research has made considerable progress, but is still far from providing a solid basis for foreign language teaching, or from a general theory of SLA. In addition, its status within the linguistic disciplines is still very low. I argue this has not much to do with low empirical or theoretical standards in the field—in this regard, SLA research is fully competitive—but with a particular perspective on the acquisition process: SLA research treats learners' utterances as deviations from a certain target, rather than as genuine manifestations of underlying language capacity; it analyses them in terms of what they are not rather than what they are. For some purposes such a "target deviation perspective" makes sense, but it will not help SLA researchers to substantially and independently contribute to a deeper understanding of the structure and function of the human language faculty. Therefore, its findings will remain of limited interest to other scientists until SLA researchers consider learner varieties a normal, in fact typical, manifestation of this unique human capacity.
