Publications

  • Huizeling, E., Alday, P. M., Peeters, D., & Hagoort, P. (2023). Combining EEG and 3D-eye-tracking to study the prediction of upcoming speech in naturalistic virtual environments: A proof of principle. Neuropsychologia, 191: 108730. doi:10.1016/j.neuropsychologia.2023.108730.

    Abstract

    EEG and eye-tracking provide complementary information when investigating language comprehension. Evidence that speech processing may be facilitated by speech prediction comes from the observation that a listener's eye gaze moves towards a referent before it is mentioned if the remainder of the spoken sentence is predictable. However, changes to the trajectory of anticipatory fixations could result from a change in prediction or an attention shift. Conversely, N400 amplitudes and concurrent spectral power provide information about the ease of word processing the moment the word is perceived. In a proof-of-principle investigation, we combined EEG and eye-tracking to study linguistic prediction in naturalistic, virtual environments. We observed increased processing, reflected in theta band power, either during verb processing - when the verb was predictive of the noun - or during noun processing - when the verb was not predictive of the noun. Alpha power was higher in response to the predictive verb and unpredictable nouns. We replicated typical effects of noun congruence but not predictability on the N400 in response to the noun. Thus, the rich visual context that accompanied speech in virtual reality influenced language processing compared to previous reports, where the visual context may have facilitated processing of unpredictable nouns. Finally, anticipatory fixations were predictive of spectral power during noun processing and the length of time fixating the target could be predicted by spectral power at verb onset, conditional on the object having been fixated. Overall, we show that combining EEG and eye-tracking provides a promising new method to answer novel research questions about the prediction of upcoming linguistic input, for example, regarding the role of extralinguistic cues in prediction during language comprehension.
  • Hulten, A., Laaksonen, H., Vihla, M., Laine, M., & Salmelin, R. (2010). Modulation of brain activity after learning predicts long-term memory for words. Journal of Neuroscience, 30(45), 15160-15164. doi:10.1523/JNEUROSCI.1278-10.2010.

    Abstract

    The acquisition and maintenance of new language information, such as picking up new words, is a critical human ability that is needed throughout the life span. Most likely you learned the word “blog” quite recently as an adult, whereas the word “kipe,” which in the 1970s denoted stealing, now seems unfamiliar. Brain mechanisms underlying the long-term maintenance of new words have remained unknown, although they could provide important clues to the considerable individual differences in the ability to remember words. After successful training of a set of novel object names, we tracked, over a period of 10 months, the maintenance of this new vocabulary in 10 human participants by repeated behavioral tests and magnetoencephalography measurements of overt picture naming. When naming-related activation in the left frontal and temporal cortex was enhanced 1 week after training, compared with the level at the end of training, the individual retained a good command of the new vocabulary at 10 months; vice versa, individuals with reduced activation at 1 week posttraining were less successful in recalling the names at 10 months. This finding suggests an individual neural marker for memory, in the context of language. Learning is not over when the acquisition phase has been successfully completed: neural events during the access to recently established word representations appear to be important for the long-term outcome of learning.
  • Hustá, C., Nieuwland, M. S., & Meyer, A. S. (2023). Effects of picture naming and categorization on concurrent comprehension: Evidence from the N400. Collabra: Psychology, 9(1): 88129. doi:10.1525/collabra.88129.

    Abstract

    In conversations, interlocutors concurrently perform two related processes: speech comprehension and speech planning. We investigated effects of speech planning on comprehension using EEG. Dutch speakers listened to sentences that ended with expected or unexpected target words. In addition, a picture was presented two seconds after target onset (Experiment 1) or 50 ms before target onset (Experiment 2). Participants’ task was to name the picture or to stay quiet depending on the picture category. In Experiment 1, we found a strong N400 effect in response to unexpected compared to expected target words. Importantly, this N400 effect was reduced in Experiment 2 compared to Experiment 1. Unexpectedly, the N400 effect was not smaller in the naming compared to the categorization condition. This indicates that conceptual preparation or the decision whether to speak (taking place in both task conditions of Experiment 2) rather than processes specific to word planning interfere with comprehension.
  • Inacio, F., Faisca, L., Forkstam, C., Araujo, S., Bramao, I., Reis, A., & Petersson, K. M. (2018). Implicit sequence learning is preserved in dyslexic children. Annals of Dyslexia, 68(1), 1-14. doi:10.1007/s11881-018-0158-x.

    Abstract

    This study investigates the implicit sequence learning abilities of dyslexic children using an artificial grammar learning task with an extended exposure period. Twenty children with developmental dyslexia participated in the study and were matched with two control groups—one matched for age and the other for reading skills. During 3 days, all participants performed an acquisition task, where they were exposed to sequences of colored geometrical forms with an underlying grammatical structure. On the last day, after the acquisition task, participants were tested in a grammaticality classification task. Implicit sequence learning was present in dyslexic children, as well as in both control groups, and no differences between groups were observed. These results suggest that implicit learning deficits per se cannot explain the characteristic reading difficulties of the dyslexics.
  • Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1-2), 101-144. doi:10.1016/j.cognition.2002.06.001.

    Abstract

    This paper presents the results of a comprehensive meta-analysis of the relevant imaging literature on word production (82 experiments). In addition to the spatial overlap of activated regions, we also analyzed the available data on the time course of activations. The analysis specified regions and time windows of activation for the core processes of word production: lexical selection, phonological code retrieval, syllabification, and phonetic/articulatory preparation. A comparison of the word production results with studies on auditory word/non-word perception and reading showed that the time course of activations in word production is, on the whole, compatible with the temporal constraints that perception processes impose on the production processes they affect in picture/word interference paradigms.
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken [The neural architecture of language: Which brain areas are involved in speaking]. Neuropraxis, 2(6), 230-237.
  • Indefrey, P., Hellwig, F. M., Herzog, H., Seitz, R. J., & Hagoort, P. (2004). Neural responses to the production and comprehension of syntax in identical utterances. Brain and Language, 89(2), 312-319. doi:10.1016/S0093-934X(03)00352-3.

    Abstract

    Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment, subjects described visually presented scenes using either sentences, sequences of noun phrases, or sequences of syntactically unrelated words. In the language comprehension part of the experiment, subjects were auditorily presented with the same kinds of utterances and judged whether they matched the visual scenes. We were able to replicate the previous finding of a region in caudal Broca's area that is sensitive to the complexity of syntactic encoding in language production. In language comprehension, no hemodynamic activation differences due to syntactic complexity were found. Given that correct performance in the judgment task did not require syntactic processing of the auditory stimuli, the results suggest that the degree to which listeners recruit syntactic processing resources in language comprehension may be a function of the syntactic demands of the task or the stimulus material.
  • Indefrey, P., & Gullberg, M. (2010). Foreword. Language Learning, 60(S2), v. doi:10.1111/j.1467-9922.2010.00596.x.

    Abstract

    The articles in this volume are the result of an invited conference entitled "The Earliest Stages of Language Learning" held at the Max Planck Institute for Psycholinguistics in Nijmegen, The Netherlands, in October 2009.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Indefrey, P., & Gullberg, M. (2010). The earliest stages of language learning: Introduction. Language Learning, 60(S2), 1-4. doi:10.1111/j.1467-9922.2010.00597.x.
  • Ingason, A., Giegling, I., Cichon, S., Hansen, T., Rasmussen, H. B., Nielsen, J., Jurgens, G., Muglia, P., Hartmann, A. M., Strengman, E., Vasilescu, C., Muhleisen, T. W., Djurovic, S., Melle, I., Lerer, B., Möller, H.-J., Francks, C., Pietilainen, O. P. H., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Walshe, M., Vassos, E., Di Forti, M., Murray, R., Bonetto, C., Tosato, S., Cantor, R. M., Rietschel, M., Craddock, N., Owen, M. J., Andreassen, O. A., Nothen, M. M., Peltonen, L., St. Clair, D., Ophoff, R. A., O’Donovan, M. C., Collier, D. A., Werge, T., & Rujescu, D. (2010). A large replication study and meta-analysis in European samples provides further support for association of AHI1 markers with schizophrenia. Human Molecular Genetics, 19(7), 1379-1386. doi:10.1093/hmg/ddq009.

    Abstract

    The Abelson helper integration site 1 (AHI1) gene locus on chromosome 6q23 is among a group of candidate loci for schizophrenia susceptibility that were initially identified by linkage followed by linkage disequilibrium mapping, and subsequent replication of the association in an independent sample. Here, we present results of a replication study of AHI1 locus markers, previously implicated in schizophrenia, in a large European sample (in total 3907 affected and 7429 controls). Furthermore, we perform a meta-analysis of the implicated markers in 4496 affected and 18,920 controls. Both the replication study of new samples and the meta-analysis show evidence for significant overrepresentation of all tested alleles in patients compared with controls (meta-analysis: P = 8.2 × 10^-5 to 1.7 × 10^-3, common OR = 1.09-1.11). The region contains two genes, AHI1 and C6orf217, and both genes, as well as the neighbouring phosphodiesterase 7B (PDE7B), may be considered candidates for involvement in the genetic aetiology of schizophrenia.
  • Ischebeck, A., Indefrey, P., Usui, N., Nose, I., Hellwig, F. M., & Taira, M. (2004). Reading in a regular orthography: An fMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16(5), 727-741. doi:10.1162/089892904970708.

    Abstract

    In order to separate the cognitive processes associated with phonological encoding and the use of a visual word form lexicon in reading, it is desirable to compare the processing of words presented in a visually familiar form with words in a visually unfamiliar form. Japanese Kana orthography offers this possibility. Two phonologically equivalent but visually dissimilar syllabaries allow the writing of, for example, foreign loanwords in two ways, only one of which is visually familiar. Familiarly written words, unfamiliarly written words, and pseudowords were presented in both Kana syllabaries (yielding six conditions in total) to participants during an fMRI measurement with a silent articulation task (Experiment 1) and a phonological lexical decision task (Experiment 2) using an event-related design. Consistent over two experimental tasks, the three different stimulus types (familiar, unfamiliar, and pseudoword) were found to activate selectively different brain regions previously associated with phonological encoding and word retrieval or meaning. Compatible with the predictions of the dual-route model for reading, pseudowords and visually unfamiliar words, which have to be read using phonological assembly, caused an increase in brain activity in left inferior frontal regions (BA 44/47), as compared to visually familiar words. Visually familiar and unfamiliar words were found to activate a range of areas associated with lexico-semantic processing more strongly than pseudowords, such as the left and right temporo-parietal region (BA 39/40), a region in the left middle/inferior temporal gyrus (BA 20/21), and the posterior cingulate (BA 31).
  • Jackson, C., & Roberts, L. (2010). Animacy affects the processing of subject–object ambiguities in the second language: Evidence from self-paced reading with German second language learners of Dutch. Applied Psycholinguistics, 31(4), 671-691. doi:10.1017/S0142716410000196.

    Abstract

    The results of a self-paced reading study with German second language (L2) learners of Dutch showed that noun animacy affected the learners' on-line commitments when comprehending relative clauses in their L2. Earlier research has found that German L2 learners of Dutch do not show an on-line preference for subject–object word order in temporarily ambiguous relative clauses when no disambiguating material is available prior to the auxiliary verb. We investigated whether manipulating the animacy of the ambiguous noun phrases would push the learners to make an on-line commitment to either a subject- or object-first analysis. Results showed they performed like Dutch native speakers in that their reading times reflected an interaction between topichood and animacy in the on-line assignment of grammatical roles.
  • Jackson, C. N., Mormer, E., & Brehm, L. (2018). The production of subject-verb agreement among Swedish and Chinese second language speakers of English. Studies in Second Language Acquisition, 40(4), 907-921. doi:10.1017/S0272263118000025.

    Abstract

    This study uses a sentence completion task with Swedish and Chinese L2 English speakers to investigate how L1 morphosyntax and L2 proficiency influence L2 English subject-verb agreement production. Chinese has limited nominal and verbal number morphology, while Swedish has robust noun phrase (NP) morphology but does not number-mark verbs. Results showed that like L1 English speakers, both L2 groups used grammatical and conceptual number to produce subject-verb agreement. However, only L1 Chinese speakers—and less-proficient speakers in both L2 groups—were similarly influenced by grammatical and conceptual number when producing the subject NP. These findings demonstrate how L2 proficiency, perhaps combined with cross-linguistic differences, influences L2 production and underscores that encoding of noun and verb number are not independent.
  • Jacobs, A. M., & Willems, R. M. (2018). The fictive brain: Neurocognitive correlates of engagement in literature. Review of General Psychology, 22(2), 147-160. doi:10.1037/gpr0000106.

    Abstract

    Fiction is vital to our being. Many people enjoy engaging with fiction every day. Here we focus on literary reading as one instance of fiction consumption from a cognitive neuroscience perspective. The brain processes which play a role in the mental construction of fiction worlds and the related engagement with fictional characters remain largely unknown. The authors discuss the neurocognitive poetics model (Jacobs, 2015a) of literary reading, specifying the likely neuronal correlates of several key processes in literary reading, namely inference and situation model building, immersion, mental simulation and imagery, figurative language and style, and the issue of distinguishing fact from fiction. An overview of recent work on these key processes is followed by a discussion of methodological challenges in studying the brain bases of fiction processing.
  • Jadoul, Y., Thompson, B., & De Boer, B. (2018). Introducing Parselmouth: A Python interface to Praat. Journal of Phonetics, 71, 1-15. doi:10.1016/j.wocn.2018.07.001.

    Abstract

    This paper introduces Parselmouth, an open-source Python library that facilitates access to core functionality of Praat in Python, in an efficient and programmer-friendly way. We introduce and motivate the package, and present simple usage examples. Specifically, we focus on applications in data visualisation, file manipulation, audio manipulation, statistical analysis, and integration of Parselmouth into a Python-based experimental design for automated, in-the-loop manipulation of acoustic data. Parselmouth is available at https://github.com/YannickJadoul/Parselmouth.
  • Jadoul, Y., & Ravignani, A. (2023). Modelling the emergence of synchrony from decentralized rhythmic interactions in animal communication. Proceedings of the Royal Society B: Biological Sciences, 290(2003). doi:10.1098/rspb.2023.0876.

    Abstract

    To communicate, an animal's strategic timing of rhythmic signals is crucial. Evolutionary, game-theoretical, and dynamical systems models can shed light on the interaction between individuals and the associated costs and benefits of signalling at a specific time. Mathematical models that study rhythmic interactions from a strategic or evolutionary perspective are rare in animal communication research. But new inspiration may come from a recent game theory model of how group synchrony emerges from local interactions of oscillatory neurons. In the study, the authors analyse when the benefit of joint synchronization outweighs the cost of individual neurons sending electrical signals to each other. They postulate there is a benefit for pairs of neurons to fire together and a cost for a neuron to communicate. The resulting model delivers a variant of a classical dynamical system, the Kuramoto model. Here, we present an accessible overview of the Kuramoto model and evolutionary game theory, and of the 'oscillatory neurons' model. We interpret the model's results and discuss the advantages and limitations of using this particular model in the context of animal rhythmic communication. Finally, we sketch potential future directions and discuss the need to further combine evolutionary dynamics, game theory and rhythmic processes in animal communication studies.
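The classical Kuramoto model at the centre of this overview fits in a few lines of code: each oscillator's phase advances at its own natural frequency plus a sinusoidal coupling term, and the order parameter r measures group synchrony. A minimal simulation sketch follows; the frequencies, coupling strength, and step size are illustrative, and this is the basic model rather than the paper's evolutionary variant.

```python
import cmath
import math

def kuramoto_step(phases, freqs, coupling, dt):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(phases)
    return [
        (theta + dt * (omega + coupling / n *
                       sum(math.sin(other - theta) for other in phases))) % (2 * math.pi)
        for theta, omega in zip(phases, freqs)
    ]

def order_parameter(phases):
    """r in [0, 1]: 0 means incoherent phases, 1 means full synchrony."""
    return abs(sum(cmath.exp(1j * theta) for theta in phases)) / len(phases)

# Five oscillators with slightly different natural frequencies, spread-out
# initial phases, and coupling well above the synchronization threshold.
phases = [0.0, 1.0, 2.0, 3.0, 4.0]
freqs = [0.95, 0.98, 1.0, 1.02, 1.05]
for _ in range(2000):
    phases = kuramoto_step(phases, freqs, coupling=2.0, dt=0.01)
print(order_parameter(phases))  # r near 1: strong coupling has synchronized the group
```

With the coupling set to zero instead, the phases drift apart at their natural frequencies and r stays low, which is the contrast the model uses to separate the benefit of joint synchronization from the cost of signalling.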
  • Jadoul, Y., Düngen, D., & Ravignani, A. (2023). PyGellermann: a Python tool to generate pseudorandom series for human and non-human animal behavioural experiments. BMC Research Notes, 16: 135. doi:10.1186/s13104-023-06396-x.

    Abstract

    Objective

    Researchers in animal cognition, psychophysics, and experimental psychology need to randomise the presentation order of trials in experimental sessions. In many paradigms, for each trial, one of two responses can be correct, and the trials need to be ordered such that the participant’s responses are a fair assessment of their performance. Specifically, in some cases, especially for low numbers of trials, randomised trial orders need to be excluded if they contain simple patterns which a participant could accidentally match and so succeed at the task without learning.
    Results

    We present and distribute a simple Python software package and tool to produce pseudorandom sequences following the Gellermann series. This series has been proposed to pre-empt simple heuristics and avoid inflated performance rates via false positive responses. Our tool allows users to choose the sequence length and outputs a .csv file with newly and randomly generated sequences. This allows behavioural researchers to produce, in a few seconds, a pseudorandom sequence for their specific experiment. PyGellermann is available at https://github.com/YannickJadoul/PyGellermann.
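Gellermann-style constraints are straightforward to express in code. The sketch below checks two of the core requirements on a binary trial sequence (equal numbers of each alternative, and no run longer than three) and rejection-samples random sequences until one passes; it is a simplified illustration of the idea, not PyGellermann's actual implementation or the full set of Gellermann's criteria.

```python
from itertools import groupby
import random

def passes_basic_gellermann_checks(seq):
    """Check two core Gellermann-style constraints on a binary sequence:
    each alternative occurs equally often, and no alternative repeats more
    than three times in a row. (A simplified subset of the full criteria.)"""
    symbols = sorted(set(seq))
    if len(symbols) != 2 or seq.count(symbols[0]) != len(seq) // 2:
        return False
    longest_run = max(len(list(group)) for _, group in groupby(seq))
    return longest_run <= 3

def generate_sequence(length=10, symbols="AB", rng=random):
    """Rejection-sample random sequences until one passes the checks."""
    while True:
        seq = "".join(rng.choice(symbols) for _ in range(length))
        if passes_basic_gellermann_checks(seq):
            return seq

print(passes_basic_gellermann_checks("ABBABAABBA"))  # True: balanced, short runs
print(passes_basic_gellermann_checks("AAAABBBBAB"))  # False: contains a run of four
```

The full Gellermann (1933) criteria add further constraints (for example, limiting how well simple alternation strategies would score), which is exactly the bookkeeping the PyGellermann tool automates.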
  • Jago, L. S., Alcock, K., Meints, K., Pine, J. M., & Rowland, C. F. (2023). Language outcomes from the UK-CDI Project: Can risk factors, vocabulary skills and gesture scores in infancy predict later language disorders or concern for language development? Frontiers in Psychology, 14: 1167810. doi:10.3389/fpsyg.2023.1167810.

    Abstract

    At the group level, children exposed to certain health and demographic risk factors, and who have delayed language in early childhood, are more likely to have language problems later in childhood. However, it is unclear whether we can use these risk factors to predict whether an individual child is likely to develop problems with language (e.g., be diagnosed with a developmental language disorder). We tested this in a sample of 146 children who took part in the UK-CDI norming project. When the children were 15–18 months old, 1,210 British parents completed: (a) the UK-CDI (a detailed assessment of vocabulary and gesture use) and (b) the Family Questionnaire (questions about health and demographic risk factors). When the children were between 4 and 6 years, 146 of the same parents completed a short questionnaire that assessed (a) whether children had been diagnosed with a disability that was likely to affect language proficiency (e.g., developmental disability, language disorder, hearing impairment) and (b) a broader measure: whether the child’s language had raised any concern, either from a parent or a professional. Discriminant function analyses were used to assess whether we could use different combinations of 10 risk factors, together with early vocabulary and gesture scores, to identify children (a) who had developed a language-related disability by the age of 4–6 years (20 children, 13.70% of the sample) or (b) for whom concern about language had been expressed (49 children; 33.56%). The overall accuracy of the models and the specificity scores were high, indicating that the measures correctly identified those children without a language-related disability and whose language was not of concern. However, sensitivity scores were low, indicating that the models could not identify those children who were diagnosed with a language-related disability or whose language was of concern. Several exploratory analyses were carried out to analyse these results further. Overall, the results suggest that it is difficult to use parent reports of early risk factors and language in the first 2 years of life to predict which children are likely to be diagnosed with a language-related disability. Possible reasons for this are discussed.

    Additional information

    Follow-up questionnaire (Table S1)
  • Janse, E., & Klitsch, J. (2004). Auditieve perceptie bij gezonde sprekers en bij sprekers met verworven taalstoornissen [Auditory perception in healthy speakers and in speakers with acquired language disorders]. Afasiologie, 26(1), 2-6.
  • Janse, E., De Bree, E., & Brouwer, S. (2010). Decreased sensitivity to phonemic mismatch in spoken word processing in adult developmental dyslexia. Journal of Psycholinguistic Research, 39(6), 523-539. doi:10.1007/s10936-010-9150-2.

    Abstract

    Initial lexical activation in typical populations is a direct reflection of the goodness of fit between the presented stimulus and the intended target. In this study, lexical activation was investigated upon presentation of polysyllabic pseudowords (such as procodile for crocodile) for the atypical population of dyslexic adults to see to what extent mismatching phonemic information affects lexical activation in the face of overwhelming support for one specific lexical candidate. Results of an auditory lexical decision task showed that sensitivity to phonemic mismatch was less in the dyslexic population, compared to the respective control group. However, the dyslexic participants were outperformed by their controls only for word-initial mismatches. It is argued that a subtle speech decoding deficit affects lexical activation levels and makes spoken word processing less robust against distortion.
  • Janse, E. (2010). Spoken word processing and the effect of phonemic mismatch in aphasia. Aphasiology, 24(1), 3-27. doi:10.1080/02687030802339997.

    Abstract

    Background: There is evidence that, unlike in typical populations, initial lexical activation upon hearing spoken words in aphasic patients is not a direct reflection of the goodness of fit between the presented stimulus and the intended target. Earlier studies have mainly used short monosyllabic target words. Short words are relatively difficult to recognise because they are not highly redundant: changing one phoneme will often result in a (similar-sounding) different word.

    Aims: The present study aimed to investigate sensitivity of the lexical recognition system in aphasia. The focus was on longer words that contain more redundancy, to investigate whether aphasic adults might be impaired in deactivation of strongly activated lexical candidates. This was done by studying lexical activation upon presentation of spoken polysyllabic pseudowords (such as procodile) to see to what extent mismatching phonemic information leads to deactivation in the face of overwhelming support for one specific lexical candidate.

    Methods & Procedures: Speeded auditory lexical decision was used to investigate response time and accuracy to pseudowords with a word-initial or word-final phonemic mismatch in 21 aphasic patients and in an age-matched control group.

    Outcomes & Results: Results of an auditory lexical decision task showed that aphasic participants were less sensitive to phonemic mismatch if there was strong evidence for one particular lexical candidate, compared to the control group. Classifications of patients as Broca's vs Wernicke's or as fluent vs non-fluent did not reveal differences in sensitivity to mismatch between aphasia types. There was no reliable relationship between measures of auditory verbal short-term memory and lexical decision performance.

    Conclusions: It is argued that the aphasic results can best be viewed as lexical “overactivation” and that a verbal short-term memory account is less appropriate.
  • Janse, E. (2004). Word perception in fast speech: Artificially time-compressed vs. naturally produced fast speech. Speech Communication, 42, 155-173. doi:10.1016/j.specom.2003.07.001.

    Abstract

    Natural fast speech differs from normal-rate speech with respect to its temporal pattern. Previous results showed that word intelligibility of heavily artificially time-compressed speech could not be improved by making its temporal pattern more similar to that of natural fast speech. This might have been due to the extrapolation of timing rules for natural fast speech to rates that are much faster than can be attained by human speakers. The present study investigates whether, at a speech rate that human speakers can attain, artificially time-compressed speech is easier to process if its timing pattern is similar to that of naturally produced fast speech. Our first experiment suggests, however, that word processing speed was slowed down, relative to linear compression. In a second experiment, word processing of artificially time-compressed speech was compared with processing of naturally produced fast speech. Even when naturally produced fast speech is perfectly intelligible, its less careful articulation, combined with the changed timing pattern, slows down processing, relative to linearly time-compressed speech. Furthermore, listeners preferred artificially time-compressed speech over naturally produced fast speech. These results suggest that linearly time-compressed speech has both a temporal and a segmental advantage over natural fast speech.
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Jansma, B. M., & Schiller, N. O. (2004). Monitoring syllable boundaries during speech production. Brain and Language, 90(1-3), 311-317. doi:10.1016/S0093-934X(03)00443-7.

    Abstract

    This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No 'canoe' vs. kaN.sel 'pulpit'; capital letters indicate pivotal consonants, dots mark syllable boundaries). First syllable responses were faster than second syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.
  • Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2004). Stem complexity and inflectional encoding in language production. Journal of Psycholinguistic Research, 33(5), 365-381. doi:10.1023/B:JOPR.0000039546.60121.a8.

    Abstract

    Three experiments are reported that examined whether stem complexity plays a role in inflecting polymorphemic words in language production. Experiment 1 showed that preparation effects for words with polymorphemic stems are larger when they are produced among words with constant inflectional structures compared to words with variable inflectional structures and simple stems. This replicates earlier findings for words with monomorphemic stems (Janssen et al., 2002). Experiments 2 and 3 showed that when inflectional structure is held constant, the preparation effects are equally large with simple and compound stems, and with compound and complex adjectival stems. These results indicate that inflectional encoding is blind to the complexity of the stem, which suggests that specific inflectional rather than generic morphological frames guide the generation of inflected forms in speaking words.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2018). Modelling human hard palate shape with Bézier curves. PLoS One, 13(2): e0191557. doi:10.1371/journal.pone.0191557.

    Abstract

    People vary at most levels, from the molecular to the cognitive, and the shape of the hard palate (the bony roof of the mouth) is no exception. The patterns of variation in the hard palate are important for the forensic sciences and (palaeo)anthropology, and might also play a role in speech production, both in pathological cases and normal variation. Here we describe a method based on Bézier curves, whose main aim is to generate possible shapes of the hard palate in humans for use in computer simulations of speech production and language evolution. Moreover, our method can also capture existing patterns of variation using few and easy-to-interpret parameters, and fits actual data obtained from MRI traces very well with as little as two or three free parameters. When compared to the widely-used Principal Component Analysis (PCA), our method fits actual data slightly worse for the same number of degrees of freedom. However, it is much better at generating new shapes without requiring a calibration sample, its parameters have clearer interpretations, and their ranges are grounded in geometrical considerations.
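    The curve-evaluation idea underlying this entry can be sketched in a few lines. The de Casteljau routine below is the standard way to sample a Bézier curve; the two-parameter palate_curve (width and doming height of a symmetric cubic arch) is a hypothetical illustration of a low-parameter shape generator, not the parameterization actually used by Janssen et al. (2018).

    ```python
    # Sketch only: the control-point layout and parameter choices below are
    # invented for illustration and are not taken from the paper.

    def bezier(control_points, n=100):
        """Sample a Bezier curve of arbitrary degree via de Casteljau's algorithm."""
        curve = []
        for i in range(n):
            t = i / (n - 1)
            pts = [tuple(p) for p in control_points]
            while len(pts) > 1:  # repeated linear interpolation between neighbours
                pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                       for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
            curve.append(pts[0])
        return curve

    def palate_curve(width, height, n=100):
        """Hypothetical two-free-parameter shape: a symmetric cubic arch."""
        ctrl = [(-width / 2, 0.0), (-width / 4, height),
                (width / 4, height), (width / 2, 0.0)]
        return bezier(ctrl, n)

    curve = palate_curve(width=40.0, height=15.0)
    ```

    With only width and height free, every generated curve is a plausible arch by construction, which is the sense in which a geometric parameterization can "generate new shapes without requiring a calibration sample".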
  • Janzen, G., & Van Turennout, M. (2004). Selective neural representation of objects relevant for navigation. Nature Neuroscience, 7(6), 673-677. doi:10.1038/nn1257.

    Abstract

    As people find their way through their environment, objects at navigationally relevant locations can serve as crucial landmarks. The parahippocampal gyrus has previously been shown to be involved in object and scene recognition. In the present study, we investigated the neural representation of navigationally relevant locations. Healthy human adults viewed a route through a virtual museum with objects placed at intersections (decision points) or at simple turns (non-decision points). Event-related functional magnetic resonance imaging (fMRI) data were acquired during subsequent recognition of the objects in isolation. Neural activity in the parahippocampal gyrus reflected the navigational relevance of an object's location in the museum. Parahippocampal responses were selectively increased for objects that occurred at decision points, independent of attentional demands. This increase occurred for forgotten as well as remembered objects, showing implicit retrieval of navigational information. The automatic storage of relevant object location in the parahippocampal gyrus provides a part of the neural mechanism underlying successful navigation.
  • Järvikivi, J., Vainio, M., & Aalto, D. (2010). Real-time correlates of phonological quantity reveal unity of tonal and non-tonal languages. PLoS One, 5(9): e12603. doi:10.1371/journal.pone.0012603.

    Abstract

    Discrete phonological phenomena form our conscious experience of language: continuous changes in pitch appear as distinct tones to the speakers of tone languages, whereas the speakers of quantity languages experience duration categorically. The categorical nature of our linguistic experience is directly reflected in the traditionally clear-cut linguistic classification of languages into tonal or non-tonal. However, some evidence suggests that duration and pitch are fundamentally interconnected and co-vary in signaling word meaning in non-tonal languages as well. We show that pitch information affects real-time language processing in a (non-tonal) quantity language. The results suggest that there is no unidirectional causal link from a genetically-based perceptual sensitivity towards pitch information to the appearance of a tone language. They further suggest that the contrastive categories tone and quantity may be based on simultaneously co-varying properties of the speech signal and the processing system, even though the conscious experience of the speakers may highlight only one discrete variable at a time.
  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Jesse, A., & Massaro, D. W. (2010). Seeing a singer helps comprehension of the song's lyrics. Psychonomic Bulletin & Review, 17, 323-328.

    Abstract

    When listening to speech, we often benefit when also seeing the speaker talk. If this benefit is not domain-specific for speech, then the recognition of sung lyrics should likewise benefit from seeing the singer. Nevertheless, previous research failed to obtain a substantial improvement in that domain. Our study shows that this failure was not due to inherent differences between singing and speaking but rather to less informative visual presentations. By presenting a professional singer, we found a substantial audiovisual benefit of about 35% improvement for lyrics recognition. This benefit was also robust across participants, phrases, and repetitions of the test materials. Our results provide the first evidence that lyrics recognition, just like speech and music perception, is a multimodal process.
  • Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115. doi:10.1075/intp.5.2.04jes.

    Abstract

    Language processing is influenced by multiple sources of information. We examined whether performance in simultaneous interpreting would improve when two sources of information were provided, the auditory speech as well as corresponding lip movements, in comparison to the auditory speech alone. Although there was an improvement in sentence recognition when presented with visible speech, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visual speech did not contribute to performance could be the presentation of the auditory signal without noise (Massaro, 1998). This hypothesis should be tested in the future. Furthermore, it should be investigated whether an effect of visible speech can be found in other contexts, in which visual information could provide cues to emotion, prosody, or syntax.
  • Jesse, A., & Massaro, D. W. (2010). The temporal distribution of information in audiovisual spoken-word identification. Attention, Perception & Psychophysics, 72(1), 209-225. doi:10.3758/APP.72.1.209.

    Abstract

    In the present study, we examined the distribution and processing of information over time in auditory and visual speech as it is used in unimodal and bimodal word recognition. English consonant—vowel—consonant words representing all possible initial consonants were presented as auditory, visual, or audiovisual speech in a gating task. The distribution of information over time varied across and within features. Visual speech information was generally fully available early during the phoneme, whereas auditory information was still being accumulated. An audiovisual benefit was therefore already found early during the phoneme. The nature of the audiovisual recognition benefit changed, however, as more of the phoneme was presented. More features benefited at short gates than at longer ones. Visual speech information therefore plays a more important role early during the phoneme than later. The results of the study showed the complex interplay of information across modalities and time, which is essential in determining the time course of audiovisual spoken-word recognition.
  • Jin, H., Wang, Q., Yang, Y.-F., Zhang, H., Gao, M., Jin, S., Chen, Y., Xu, T., Zheng, Y.-R., Chen, J., Xiao, Q., Yang, J., Wang, X., Geng, H., Ge, J., Wang, W.-W., Chen, X., Zhang, L., Zuo, X.-N., & Chuan-Peng, H. (2023). The Chinese Open Science Network (COSN): Building an open science community from scratch. Advances in Methods and Practices in Psychological Science, 6(1): 25152459221144986. doi:10.1177/25152459221144986.

    Abstract

    Open Science is becoming a mainstream scientific ideology in psychology and related fields. However, researchers, especially early-career researchers (ECRs) in developing countries, are facing significant hurdles in engaging in Open Science and moving it forward. In China, various societal and cultural factors discourage ECRs from participating in Open Science, such as the lack of dedicated communication channels and the norm of modesty. To make the voice of Open Science heard by Chinese-speaking ECRs and scholars at large, the Chinese Open Science Network (COSN) was initiated in 2016. With its core values being grassroots-oriented, diversity, and inclusivity, COSN has grown from a small Open Science interest group to a recognized network both in the Chinese-speaking research community and the international Open Science community. So far, COSN has organized three in-person workshops, 12 tutorials, 48 talks, and 55 journal club sessions and translated 15 Open Science-related articles and blogs from English to Chinese. Currently, the main social media account of COSN (i.e., the WeChat Official Account) has more than 23,000 subscribers, and more than 1,000 researchers/students actively participate in the discussions on Open Science. In this article, we share our experience in building such a network to encourage ECRs in developing countries to start their own Open Science initiatives and engage in the global Open Science movement. We foresee great collaborative efforts of COSN together with all other local and international networks to further accelerate the Open Science movement.
  • Jodzio, A., Piai, V., Verhagen, L., Cameron, I., & Indefrey, P. (2023). Validity of chronometric TMS for probing the time-course of word production: A modified replication. Cerebral Cortex, 33(12), 7816-7829. doi:10.1093/cercor/bhad081.

    Abstract

    In the present study, we used chronometric TMS to probe the time-course of 3 brain regions during a picture naming task. The left inferior frontal gyrus, left posterior middle temporal gyrus, and left posterior superior temporal gyrus were all separately stimulated in 1 of 5 time-windows (225, 300, 375, 450, and 525 ms) from picture onset. We found posterior temporal areas to be causally involved in picture naming in earlier time-windows, whereas all 3 regions appear to be involved in the later time-windows. However, chronometric TMS produces nonspecific effects that may impact behavior, and furthermore, the time-course of any given process is a product of both the involved processing stages along with individual variation in the duration of each stage. We therefore extend previous work in the field by accounting for both individual variations in naming latencies and directly testing for nonspecific effects of TMS. Our findings reveal that both factors influence behavioral outcomes at the group level, underlining the importance of accounting for individual variations in naming latencies, especially for late processing stages closer to articulation, and recognizing the presence of nonspecific effects of TMS. The paper advances key considerations and avenues for future work using chronometric TMS to study overt production.
  • Johnson, E. K., Bruggeman, L., & Cutler, A. (2018). Abstraction and the (misnamed) language familiarity effect. Cognitive Science, 42, 633-645. doi:10.1111/cogs.12520.

    Abstract

    Talkers are recognized more accurately if they are speaking the listeners’ native language rather than an unfamiliar language. This “language familiarity effect” has been shown not to depend upon comprehension and must instead involve language sound patterns. We further examine the level of sound-pattern processing involved, by comparing talker recognition in foreign languages versus two varieties of English, by (a) English speakers of one variety, (b) English speakers of the other variety, and (c) non-native listeners (more familiar with one of the varieties). All listener groups performed better with native than foreign speech, but no effect of language variety appeared: Native listeners discriminated talkers equally well in each, with the native variety never outdoing the other variety, and non-native listeners discriminated talkers equally poorly in each, irrespective of the variety's familiarity. The results suggest that this talker recognition effect rests not on simple familiarity, but on an abstract level of phonological processing.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.
  • Johnson, E. K., & Tyler, M. (2010). Testing the limits of statistical learning for word segmentation. Developmental Science, 13, 339-345. doi:10.1111/j.1467-7687.2009.00886.x.

    Abstract

    Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants’ ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language. We do so by testing both 5.5- and 8-month-olds’ ability to segment an artificial language containing four words of uniform length (all CVCV) or four words of varying length (two CVCV, two CVCVCV). The transitional probability cues to word boundaries were held equal across the two languages. Both age groups segmented the language containing words of uniform length, demonstrating that even 5.5-month-olds are extremely sensitive to the conditional probabilities in their environment. However, neither age group succeeded in segmenting the language containing words of varying length, despite the fact that the transitional probability cues defining word boundaries were equally strong in the two languages. We conclude that infants’ statistical learning abilities may not be as robust as earlier studies have suggested.
  • Jordan, F., & Dunn, M. (2010). Kin term diversity is the result of multilevel, historical processes [Comment on Doug Jones]. Behavioral and Brain Sciences, 33, 388. doi:10.1017/S0140525X10001962.

    Abstract

    Explanations in the domain of kinship can be sought on several different levels: Jones addresses online processing, as well as issues of origins and innateness. We argue that his framework can more usefully be applied at the levels of developmental and historical change, the latter especially. A phylogenetic approach to the diversity of kinship terminologies is most urgently required.
  • Jordanoska, I., Kocher, A., & Bendezú-Araujo, R. (2023). Introduction special issue: Marking the truth: A cross-linguistic approach to verum. Zeitschrift für Sprachwissenschaft, 42(3), 429-442. doi:10.1515/zfs-2023-2012.

    Abstract

    This special issue focuses on the theoretical and empirical underpinnings of truth-marking. The names that have been used to refer to this phenomenon include, among others, counter-assertive focus, polar(ity) focus, verum focus, emphatic polarity or simply verum. This terminological variety is suggestive of the wide range of ideas and conceptions that characterizes this research field. This collection aims to get closer to the core of what truly constitutes verum. We want to expand the empirical base and determine the common and diverging properties of truth-marking in the languages of the world. The objective is to set a theoretical and empirical baseline for future research on verum and related phenomena.
  • Jordanoska, I., Kocher, A., & Bendezú-Araujo, R. (Eds.). (2023). Marking the truth: A cross-linguistic approach to verum [Special Issue]. Zeitschrift für Sprachwissenschaft, 42(3).
  • Jordens, P. (2004). Systematiek en dynamiek bij de verwerving van Finietheid. Toegepaste Taalwetenschap in Artikelen, 71, 9-22.

    Abstract

    In early Dutch learner varieties, there is no evidence of finiteness being a functional category. There is no V2nd: no correlation between inflectional morphology and movement. Initially, learners express the illocutive function of finiteness through the use of illocutive markers, with the non-use of an illocutive marker expressing the default illocutive function of assertion. Illocutive markers are functioning as adjuncts with scope over the predicate. Illocutive markers become re-analysed as functional elements. The driving force is the acquisition of the auxiliary verbs that occur with past participles. It leads to a reanalysis of illocutive markers as two separate elements: an auxiliary verb and a scope adverb. The (modal) auxiliary carries illocutive function. Lexical verb-argument structure (including the external argument) occurs within the domain of the auxiliary verb. The predicate as the focus constituent occurs within the domain of a scope adverb. This reanalysis establishes a position for the external argument within the domain of AUX. The acquisition of AUX causes the acquisition of a (hierarchical) structure with a complement as a constituent which represents an underlying verb-argument structure, a predicate as the domain of elements that are in focus, and an external (specifier) position as a landing site for elements with topic function.
  • Kałamała, P., Chuderski, A., Szewczyk, J., Senderecka, M., & Wodniecka, Z. (2023). Bilingualism caught in a net: A new approach to understanding the complexity of bilingual experience. Journal of Experimental Psychology: General, 152(1), 157-174. doi:10.1037/xge0001263.

    Abstract

    The growing importance of research on bilingualism in psychology and neuroscience motivates the need for a psychometric model that can be used to understand and quantify this phenomenon. This research is the first to meet this need. We reanalyzed two data sets (N = 171 and N = 112) from relatively young adult language-unbalanced bilinguals and asked whether bilingualism is best described by the factor structure or by the network structure. The factor and network models were established on one data set and then validated on the other data set in a fully confirmatory manner. The network model provided the best fit to the data. This implies that bilingualism should be conceptualized as an emergent phenomenon arising from direct and idiosyncratic dependencies among the history of language acquisition, diverse language skills, and language-use practices. These dependencies can be reduced to neither a single universal quotient nor to some more general factors. Additional in-depth network analyses showed that the subjective perception of proficiency along with language entropy and language mixing were the most central indices of bilingualism, thus indicating that these measures can be especially sensitive to variation in the overall bilingual experience. Overall, this work highlights the great potential of psychometric network modeling to gain a more accurate description and understanding of complex (psycho)linguistic and cognitive phenomena.
  • Kalashnikova, M., Escudero, P., & Kidd, E. (2018). The development of fast-mapping and novel word retention strategies in monolingual and bilingual infants. Developmental Science, 21(6): e12674. doi:10.1111/desc.12674.

    Abstract

    The mutual exclusivity (ME) assumption is proposed to facilitate early word learning by guiding infants to map novel words to novel referents. This study assessed the emergence and use of ME to both disambiguate and retain the meanings of novel words across development in 18‐month‐old monolingual and bilingual children (Experiment 1; N = 58), and in a sub‐group of these children again at 24 months of age (Experiment 2: N = 32). Both monolinguals and bilinguals employed ME to select the referent of a novel label to a similar extent at 18 and 24 months. At 18 months, there were also no differences in novel word retention between the two language‐background groups. However, at 24 months, only monolinguals showed the ability to retain these label–object mappings. These findings indicate that the development of the ME assumption as a reliable word‐learning strategy is shaped by children's individual language exposure and experience with language use.

    Files private

    Request files
  • Kanero, J., Geçkin, V., Oranç, C., Mamus, E., Küntay, A. C., & Göksun, T. (2018). Social robots for early language learning: Current evidence and future directions. Child Development Perspectives, 12(3), 146-151. doi:10.1111/cdep.12277.

    Abstract

    In this article, we review research on child–robot interaction (CRI) to discuss how social robots can be used to scaffold language learning in young children. First we provide reasons why robots can be useful for teaching first and second languages to children. Then we review studies on CRI that used robots to help children learn vocabulary and produce language. The studies vary in first and second languages and demographics of the learners (typically developing children and children with hearing and communication impairments). We conclude that, although social robots are useful for teaching language to children, evidence suggests that robots are not as effective as human teachers. However, this conclusion is not definitive because robots that tutor students in language have not been evaluated rigorously and technology is advancing rapidly. We suggest that CRI offers an opportunity for research and list possible directions for that work.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2023). Late sign language exposure does not modulate the relation between spatial language and spatial memory in deaf children and adults. Memory & Cognition, 51, 582-600. doi:10.3758/s13421-022-01281-7.

    Abstract

    Prior work with hearing children acquiring a spoken language as their first language shows that spatial language and cognition are related systems and spatial language use predicts spatial memory. Here, we further investigate the extent of this relationship in signing deaf children and adults and ask if late sign language exposure, as well as the frequency and the type of spatial language use that might be affected by late exposure, modulate subsequent memory for spatial relations. To do so, we compared spatial language and memory of 8-year-old late-signing children (after 2 years of exposure to a sign language at the school for the deaf) and late-signing adults to their native-signing counterparts. We elicited picture descriptions of Left-Right relations in Turkish Sign Language (Türk İşaret Dili) and measured the subsequent recognition memory accuracy of the described pictures. Results showed that late-signing adults and children were similar to their native-signing counterparts in how often they encoded the spatial relation. However, late-signing adults but not children differed from their native-signing counterparts in the type of spatial language they used. Nevertheless, neither late sign language exposure nor the frequency and type of spatial language use modulated spatial memory accuracy. Therefore, even though late language exposure seems to influence the type of spatial language use, this does not predict subsequent memory for spatial relations. We discuss the implications of these findings based on the theories concerning the correspondence between spatial language and cognition as related or rather independent systems.
  • Kaspi, A., Hildebrand, M. S., Jackson, V. E., Braden, R., Van Reyk, O., Howell, T., Debono, S., Lauretta, M., Morison, L., Coleman, M. J., Webster, R., Coman, D., Goel, H., Wallis, M., Dabscheck, G., Downie, L., Baker, E. K., Parry-Fielder, B., Ballard, K., Harrold, E., Ziegenfusz, S., Bennett, M. F., Robertson, E., Wang, L., Boys, A., Fisher, S. E., Amor, D. J., Scheffer, I. E., Bahlo, M., & Morgan, A. T. (2023). Genetic aetiologies for childhood speech disorder: Novel pathways co-expressed during brain development. Molecular Psychiatry, 28, 1647-1663. doi:10.1038/s41380-022-01764-8.

    Abstract

    Childhood apraxia of speech (CAS), the prototypic severe childhood speech disorder, is characterized by motor programming and planning deficits. Genetic factors make substantive contributions to CAS aetiology, with a monogenic pathogenic variant identified in a third of cases, implicating around 20 single genes to date. Here we aimed to identify molecular causation in 70 unrelated probands ascertained with CAS. We performed trio genome sequencing. Our bioinformatic analysis examined single nucleotide, indel, copy number, structural and short tandem repeat variants. We prioritised appropriate variants arising de novo or inherited that were expected to be damaging based on in silico predictions. We identified high confidence variants in 18/70 (26%) probands, almost doubling the current number of candidate genes for CAS. Three of the 18 variants affected SETBP1, SETD1A and DDX3X, thus confirming their roles in CAS, while the remaining 15 occurred in genes not previously associated with this disorder. Fifteen variants arose de novo and three were inherited. We provide further novel insights into the biology of child speech disorder, highlighting the roles of chromatin organization and gene regulation in CAS, and confirm that genes involved in CAS are co-expressed during brain development. Our findings confirm a diagnostic yield comparable to, or even higher, than other neurodevelopmental disorders with substantial de novo variant burden. Data also support the increasingly recognised overlaps between genes conferring risk for a range of neurodevelopmental disorders. Understanding the aetiological basis of CAS is critical to end the diagnostic odyssey and ensure affected individuals are poised for precision medicine trials.
  • Kelly, S. D., Ozyurek, A., & Maris, E. (2010). Two sides of the same coin: Speech and gesture mutually interact to enhance comprehension. Psychological Science, 21, 260-267. doi:10.1177/0956797609357327.

    Abstract

    Gesture and speech are assumed to form an integrated system during language production. Based on this view, we propose the integrated‐systems hypothesis, which explains two ways in which gesture and speech are integrated—through mutual and obligatory interactions—in language comprehension. Experiment 1 presented participants with action primes (e.g., someone chopping vegetables) and bimodal speech and gesture targets. Participants related primes to targets more quickly and accurately when they contained congruent information (speech: “chop”; gesture: chop) than when they contained incongruent information (speech: “chop”; gesture: twist). Moreover, the strength of the incongruence affected processing, with fewer errors for weak incongruities (speech: “chop”; gesture: cut) than for strong incongruities (speech: “chop”; gesture: twist). Crucial for the integrated‐systems hypothesis, this influence was bidirectional. Experiment 2 demonstrated that gesture’s influence on speech was obligatory. The results confirm the integrated‐systems hypothesis and demonstrate that gesture and speech form an integrated system in language comprehension.
  • Kempen, G., Anbeek, G., Desain, P., Konst, L., & De Smedt, K. (1987). Auteursomgevingen: Vijfde-generatie tekstverwerkers. Informatie, 29, 988-993.
  • Kempen, G., & Harbusch, K. (2018). A competitive mechanism selecting verb-second versus verb-final word order in causative and argumentative clauses of spoken Dutch: A corpus-linguistic study. Language Sciences, 69, 30-42. doi:10.1016/j.langsci.2018.05.005.

    Abstract

    In Dutch and German, the canonical order of subject, object(s) and finite verb is ‘verb-second’ (V2) in main but ‘verb-final’ (VF) in subordinate clauses. This occasionally leads to the production of noncanonical word orders. Familiar examples are causative and argumentative clauses introduced by a subordinating conjunction (Du. omdat, Ger. weil ‘because’): the omdat/weil-V2 phenomenon. Such clauses may also be introduced by coordinating conjunctions (Du. want, Ger. denn), which license V2 exclusively. However, want/denn-VF structures are unknown. We present the results of a corpus study on the incidence of omdat-V2 in spoken Dutch, and compare them to published data on weil-V2 in spoken German. Basic findings: omdat-V2 is much less frequent than weil-V2 (ratio almost 1:8); and the frequency relations between coordinating and subordinating conjunctions are opposite (want >> omdat; denn << weil). We propose that conjunction selection and V2/VF selection proceed partly independently, and sometimes miscommunicate—e.g. yielding omdat/weil paired with V2. Want/denn-VF pairs do not occur because want/denn clauses are planned as autonomous sentences, which take V2 by default. We sketch a simple feedforward neural network with two layers of nodes (representing conjunctions and word orders, respectively) that can simulate the observed data pattern through inhibition-based competition of the alternative choices within the node layers.
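    The competition mechanism sketched in this abstract can be caricatured in a few lines of code. All weights, the noise level, and the winner-take-all shortcut below are invented for illustration; the paper's two-layer network is specified differently, and this sketch only shows how noisy feedforward activation plus within-layer competition can occasionally yield the noncanonical omdat-V2 pairing while never producing want-VF.

    ```python
    import random

    def choose_word_order(conj, noise=0.3, rng=random):
        # Feedforward weights (assumed values, not from the paper): each
        # conjunction biases one word order; omdat weakly co-activates V2.
        weights = {
            "want":  {"V2": 1.0, "VF": 0.0},   # coordinating: V2 exclusively
            "omdat": {"V2": 0.3, "VF": 1.0},   # subordinating: VF canonical
        }
        # Noisy activation of the word-order nodes.
        act = {order: w + rng.gauss(0.0, noise)
               for order, w in weights[conj].items()}
        # Inhibition-based competition, reduced here to winner-take-all:
        # the more active node suppresses the other and is produced.
        return max(act, key=act.get)

    random.seed(1)
    trials = [choose_word_order("omdat") for _ in range(10000)]
    v2_rate = trials.count("V2") / len(trials)  # occasional omdat-V2 "errors"
    ```

    Because the V2 node never receives activation from want in this toy setup, want-VF cannot be produced, mirroring the asymmetry the corpus study reports.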
  • Kempen, G. (1991). Conjunction reduction and gapping in clause-level coordination: An inheritance-based approach. Computational Intelligence, 7, 357-360. doi:10.1111/j.1467-8640.1991.tb00406.x.
  • Kempen, G. (1998). Comparing and explaining the trajectories of first and second language acquisition: In search of the right mix of psychological and linguistic factors [Commentary]. Bilingualism: Language and Cognition, 1, 29-30. doi:10.1017/S1366728998000066.

    Abstract

    When you compare the behavior of two different age groups which are trying to master the same sensori-motor or cognitive skill, you are likely to discover varying learning routes: different stages, different intervals between stages, or even different orderings of stages. Such heterogeneous learning trajectories may be caused by at least six different types of factors: (1) Initial state: the kinds and levels of skills the learners have available at the onset of the learning episode. (2) Learning mechanisms: rule-based, inductive, connectionist, parameter setting, and so on. (3) Input and feedback characteristics: learning stimuli, information about success and failure. (4) Information processing mechanisms: capacity limitations, attentional biases, response preferences. (5) Energetic variables: motivation, emotional reactions. (6) Final state: the fine-structure of kinds and levels of subskills at the end of the learning episode. This applies to language acquisition as well. First and second language learners probably differ on all six factors. Nevertheless, the debate between advocates and opponents of the Fundamental Difference Hypothesis concerning L1 and L2 acquisition has looked almost exclusively at the first two factors. Those who believe that L1 learners have access to Universal Grammar whereas L2 learners rely on language processing strategies, postulate different learning mechanisms (UG parameter setting in L1, more general inductive strategies in L2 learning). Pienemann opposes this view and, based on his Processability Theory, argues that L1 and L2 learners start out from different initial states: they come to the grammar learning task with different structural hypotheses (SOV versus SVO as basic word order of German).
  • Kempen, G., & Hoenkamp, E. (1987). An incremental procedural grammar for sentence formulation. Cognitive Science, 11(2), 201-258.

    Abstract

    This paper presents a theory of the syntactic aspects of human sentence production. An important characteristic of unprepared speech is that overt pronunciation of a sentence can be initiated before the speaker has completely worked out the meaning content he or she is going to express in that sentence. Apparently, the speaker is able to build up a syntactically coherent utterance out of a series of syntactic fragments each rendering a new part of the meaning content. This incremental, left-to-right mode of sentence production is the central capability of the proposed Incremental Procedural Grammar (IPG). Certain other properties of spontaneous speech, as derivable from speech errors, hesitations, self-repairs, and language pathology, are accounted for as well. The psychological plausibility thus gained by the grammar appears compatible with a satisfactory level of linguistic plausibility in that sentences receive structural descriptions which are in line with current theories of grammar. More importantly, an explanation for the existence of configurational conditions on transformations and other linguistic rules is proposed. The basic design feature of IPG which gives rise to these psychologically and linguistically desirable properties is the "Procedures + Stack" concept. Sentences are built not by a central constructing agency which overlooks the whole process but by a team of syntactic procedures (modules) which work, in parallel, on small parts of the sentence, have only a limited overview, and whose sole communication channel is a stack. IPG covers object complement constructions, interrogatives, and word order in main and subordinate clauses. It handles unbounded dependencies, cross-serial dependencies and coordination phenomena such as gapping and conjunction reduction. It is also capable of generating self-repairs and elliptical answers to questions. IPG has been implemented as an incremental Dutch sentence generator written in LISP.
  • Kempen, G. (2000). Could grammatical encoding and grammatical decoding be subserved by the same processing module? Behavioral and Brain Sciences, 23, 38-39.
  • Kempen, G., & Harbusch, K. (2003). An artificial opposition between grammaticality and frequency: Comment on Bornkessel, Schlesewsky & Friederici (2002). Cognition, 90(2), 205-210 [Rectification on p. 215]. doi:10.1016/S0010-0277(03)00145-8.

    Abstract

    In a recent Cognition paper (Cognition 85 (2002) B21), Bornkessel, Schlesewsky, and Friederici report ERP data that they claim “show that online processing difficulties induced by word order variations in German cannot be attributed to the relative infrequency of the constructions in question, but rather appear to reflect the application of grammatical principles during parsing” (p. B21). In this commentary we demonstrate that the posited contrast between grammatical principles and construction (in)frequency as sources of parsing problems is artificial because it is based on factually incorrect assumptions about the grammar of German and on inaccurate corpus frequency data concerning the German constructions involved.
  • Kempen, G., & Kolk, H. (1986). Het voortbrengen van normale en agrammatische taal. Van Horen Zeggen, 27(2), 36-40.
  • Kempen, G. (1987). Tekstverwerking: De vijfde generatie. Informatie, 29, 402-406.
  • Kempen, G. (1986). RIKS: Kennistechnologisch centrum voor bedrijfsleven en wetenschap. Informatie, 28, 122-125.
  • Kemps, R. J. J. K., Ernestus, M., Schreuder, R., & Baayen, R. H. (2004). Processing reduced word forms: The suffix restoration effect. Brain and Language, 90(1-3), 117-127. doi:10.1016/S0093-934X(03)00425-5.

    Abstract

    Listeners cannot recognize highly reduced word forms in isolation, but they can do so when these forms are presented in context (Ernestus, Baayen, & Schreuder, 2002). This suggests that not all possible surface forms of words have equal status in the mental lexicon. The present study shows that the reduced forms are linked to the canonical representations in the mental lexicon, and that these latter representations induce reconstruction processes. Listeners restore suffixes that are partly or completely missing in reduced word forms. A series of phoneme-monitoring experiments reveals the nature of this restoration: the basis for suffix restoration is mainly phonological in nature, but orthography has an influence as well.
  • Kendrick, K. H., Holler, J., & Levinson, S. C. (2023). Turn-taking in human face-to-face interaction is multimodal: Gaze direction and manual gestures aid the coordination of turn transitions. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210473. doi:10.1098/rstb.2021.0473.

    Abstract

    Human communicative interaction is characterized by rapid and precise turn-taking. This is achieved by an intricate system that has been elucidated in the field of conversation analysis, based largely on the study of the auditory signal. This model suggests that transitions occur at points of possible completion identified in terms of linguistic units. Despite this, considerable evidence exists that visible bodily actions including gaze and gestures also play a role. To reconcile disparate models and observations in the literature, we combine qualitative and quantitative methods to analyse turn-taking in a corpus of multimodal interaction using eye-trackers and multiple cameras. We show that transitions seem to be inhibited when a speaker averts their gaze at a point of possible turn completion, or when a speaker produces gestures which are beginning or unfinished at such points. We further show that while the direction of a speaker's gaze does not affect the speed of transitions, the production of manual gestures does: turns with gestures have faster transitions. Our findings suggest that the coordination of transitions involves not only linguistic resources but also visual gestural ones and that the transition-relevance places in turns are multimodal in nature.

  • Kholodova, A., Peter, M., Rowland, C. F., Jacob, G., & Allen, S. E. M. (2023). Abstract priming and the lexical boost effect across development in a structurally biased language. Languages, 8: 264. doi:10.3390/languages8040264.

    Abstract

    The present study investigates the developmental trajectory of abstract representations for syntactic structures in children. In a structural priming experiment on the dative alternation in German, we primed children from three different age groups (3–4 years, 5–6 years, 7–8 years) and adults with double object datives (Dora sent Boots the rabbit) or prepositional object datives (Dora sent the rabbit to Boots). Importantly, the prepositional object structure in German is dispreferred and only rarely encountered by young children. While immediate as well as cumulative structural priming effects occurred across all age groups, these effects were strongest in the 3- to 4-year-old group and gradually decreased with increasing age. These results suggest that representations in young children are less stable than in adults and, therefore, more susceptible to adaptation both immediately and across time, presumably due to stronger surprisal. Lexical boost effects, in contrast, were not present in 3- to 4-year-olds but gradually emerged with increasing age, possibly due to limited working-memory capacity in the younger child groups.
  • Kidd, E., Arciuli, J., Christiansen, M. H., & Smithson, M. (2023). The sources and consequences of individual differences in statistical learning for language development. Cognitive Development, 66: 101335. doi:10.1016/j.cogdev.2023.101335.

    Abstract

    Statistical learning (SL)—sensitivity to statistical regularities in the environment—has been postulated to support language development. While even young infants are capable of using distributional statistics to learn in linguistic and non-linguistic domains, efforts to measure SL at the level of the individual and link it to language proficiency in individual differences designs have been mixed, which has at least in part been attributed to problems with task reliability. In the current study we present the first prospective longitudinal study of the relationship between both non-linguistic SL (measured with visual stimuli) and linguistic SL (measured with auditory stimuli) and language in a group of English-speaking children. One-hundred and twenty-one (N = 121) children in their first two years of formal schooling (Mage = 6;1 years, Range: 5;2 – 7;2) completed tests of visual SL (VSL) and auditory SL (ASL) and several control variables at time 1. Both forms of SL were then measured every 6 months for the next 18 months, and at the final testing session (time 4) their language proficiency was measured using a standardised test. The results showed that the reliability of the SL tasks increased across the course of the study. A series of path analyses showed that both VSL and ASL independently predicted individual differences in language proficiency at time 4. The evidence is consistent with the suggestion that, when measured reliably, an observable relationship between SL and language proficiency exists. Theoretical and methodological issues are discussed.

  • Kidd, E., Junge, C., Spokes, T., Morrison, L., & Cutler, A. (2018). Individual differences in infant speech segmentation: Achieving the lexical shift. Infancy, 23(6), 770-794. doi:10.1111/infa.12256.

    Abstract

    We report a large‐scale electrophysiological study of infant speech segmentation, in which over 100 English‐acquiring 9‐month‐olds were exposed to unfamiliar bisyllabic words embedded in sentences (e.g., He saw a wild eagle up there), after which their brain responses to either the just‐familiarized word (eagle) or a control word (coral) were recorded. When initial exposure occurs in continuous speech, as here, past studies have reported that even somewhat older infants do not reliably recognize target words, but that successful segmentation varies across children. Here, we both confirm and further uncover the nature of this variation. The segmentation response systematically varied across individuals and was related to their vocabulary development. About one‐third of the group showed a left‐frontally located relative negativity in response to familiar versus control targets, which has previously been described as a mature response. Another third showed a similarly located positive‐going reaction (a previously described immature response), and the remaining third formed an intermediate grouping that was primarily characterized by an initial response delay. A fine‐grained group‐level analysis suggested that a developmental shift to a lexical mode of processing occurs toward the end of the first year, with variation across individual infants in the exact timing of this shift.

  • Kidd, E., Donnelly, S., & Christiansen, M. H. (2018). Individual differences in language acquisition and processing. Trends in Cognitive Sciences, 22(2), 154-169. doi:10.1016/j.tics.2017.11.006.

    Abstract

    Humans differ in innumerable ways, with considerable variation observable at every level of description, from the molecular to the social. Traditionally, linguistic and psycholinguistic theory has downplayed the possibility of meaningful differences in language across individuals. However, it is becoming increasingly evident that there is significant variation among speakers at any age as well as across the lifespan. In this paper, we review recent research in psycholinguistics, and argue that a focus on individual differences provides a crucial source of evidence that bears strongly upon core issues in theories of the acquisition and processing of language; specifically, the role of experience in language acquisition, processing, and attainment, and the architecture of the language faculty.
  • Kidd, E. (2004). Grammars, parsers, and language acquisition. Journal of Child Language, 31(2), 480-483. doi:10.1017/S0305000904006117.

    Abstract

    Drozd's critique of Crain & Thornton's (C&T) (1998) book Investigations in Universal Grammar (IUG) raises many issues concerning theory and experimental design within generative approaches to language acquisition. I focus here on one of the strongest theoretical claims of the Modularity Matching Model (MMM): continuity of processing. For reasons different to Drozd, I argue that the assumption is tenuous. Furthermore, I argue that the focus of the MMM and the methodological prescriptions contained in IUG are too narrow to capture language acquisition.
  • Kidd, E., Lieven, E., & Tomasello, M. (2010). Lexical frequency and exemplar-based learning effects in language acquisition: evidence from sentential complements. Language Sciences, 32(1), 132-142. doi:10.1016/j.langsci.2009.05.002.

    Abstract

    Usage-based approaches to language acquisition argue that children acquire the grammar of their target language using general-cognitive learning principles. The current paper reports on an experiment that tested a central assumption of the usage-based approach: argument structure patterns are connected to high frequency verbs that facilitate acquisition. Sixty children (N = 60) aged 4 and 6 years participated in a sentence recall/lexical priming experiment that manipulated the frequency with which the target verbs occurred in the finite sentential complement construction in English. The results showed that the children performed better on sentences that contained high frequency verbs. Furthermore, the children’s performance suggested that their knowledge of finite sentential complements relies most heavily on one particular verb – think, supporting arguments made by Goldberg [Goldberg, A.E., 2006. Constructions at Work: The Nature of Generalization in Language. Oxford University Press, Oxford], who argued that skewed input facilitates language learning.
  • Kidd, E. (2003). Relative clause comprehension revisited: Commentary on Eisenberg (2002). Journal of Child Language, 30(3), 671-679. doi:10.1017/S0305000903005683.

    Abstract

    Eisenberg (2002) presents data from an experiment investigating three- and four-year-old children's comprehension of restrictive relative clauses (RC). From the results she argues, contrary to Hamburger & Crain (1982), that children do not have discourse knowledge of the felicity conditions of RCs before acquiring the syntax of relativization. This note evaluates this conclusion on the basis of the methodology used, and proposes that an account of syntactic development needs to be sensitive to the real-time processing requirements acquisition places on the learner.
  • Kidd, E., Rogers, P., & Rogers, C. (2010). The personality correlates of adults who had imaginary companions in childhood. Psychological Reports, 107(1), 163-172. doi:10.2466/02.04.10.pr0.107.4.163-172.

    Abstract

    Two studies showed that adults who reported having an imaginary companion as a child differed from adults who did not on certain personality dimensions. The first yielded a higher mean on the Gough Creative Personality Scale for the group who had imaginary companions. Study 2 showed that such adults scored higher on the Achievement and Absorption subscales of Tellegen's Multidimensional Personality Questionnaire. The results suggest that some differences reported in the developmental literature may be observed in adults.
  • Kircher, T. T. J., Brammer, M. J., Levelt, W. J. M., Bartels, M., & McGuire, P. K. (2004). Pausing for thought: Engagement of left temporal cortex during pauses in speech. NeuroImage, 21(1), 84-90. doi:10.1016/j.neuroimage.2003.09.041.

    Abstract

    Pauses during continuous speech, particularly those that occur within clauses, are thought to reflect the planning of forthcoming verbal output. We used functional Magnetic Resonance Imaging (fMRI) to examine their neural correlates. Six volunteers were scanned while describing seven Rorschach inkblots, producing 3 min of speech per inkblot. In an event-related design, the level of blood oxygenation level dependent (BOLD) contrast during brief speech pauses (mean duration 1.3 s, SD 0.3 s) during overt speech was contrasted with that during intervening periods of articulation. We then examined activity associated with pauses that occurred within clauses and pauses that occurred between grammatical junctions. Relative to articulation during speech, pauses were associated with activation in the banks of the left superior temporal sulcus (BA 39/22), at the temporoparietal junction. Continuous speech was associated with greater activation bilaterally in the inferior frontal (BA 44/45), middle frontal (BA 8) and anterior cingulate (BA 24) gyri, the middle temporal sulcus (BA 21/22), the occipital cortex and the cerebellum. Left temporal activation was evident during pauses that occurred within clauses but not during pauses at grammatical junctions. In summary, articulation during continuous speech involved frontal, temporal and cerebellar areas, while pausing was associated with activity in the left temporal cortex, especially when this occurred within a clause. The latter finding is consistent with evidence that within-clause pauses are a correlate of speech planning and in particular lexical retrieval.
  • Kita, S., & Ozyurek, A. (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language, 48(1), 16-32. doi:10.1016/S0749-596X(02)00505-3.

    Abstract

    Gestures that spontaneously accompany speech convey information coordinated with the concurrent speech. There has been considerable theoretical disagreement about the process by which this informational coordination is achieved. Some theories predict that the information encoded in gesture is not influenced by how information is verbally expressed. However, others predict that gestures encode only what is encoded in speech. This paper investigates this issue by comparing informational coordination between speech and gesture across different languages. Narratives in Turkish, Japanese, and English were elicited using an animated cartoon as the stimulus. It was found that gestures used to express the same motion events were influenced simultaneously by (1) how features of motion events were expressed in each language, and (2) spatial information in the stimulus that was never verbalized. From this, it is concluded that gestures are generated from spatio-motoric processes that interact on-line with the speech production process. Through the interaction, spatio-motoric information to be expressed is packaged into chunks that are verbalizable within a processing unit for speech formulation. In addition, we propose a model of speech and gesture production as one of a class of frameworks that are compatible with the data.
  • Kiyama, S., Verdonschot, R. G., Xiong, K., & Tamaoka, K. (2018). Individual mentalizing ability boosts flexibility toward a linguistic marker of social distance: An ERP investigation. Journal of Neurolinguistics, 47, 1-15. doi:10.1016/j.jneuroling.2018.01.005.

    Abstract

    Sentence-final particles (SFPs) as bound morphemes in Japanese have no obvious effect on the truth conditions of a sentence. However, they encompass a diverse range of usages, from typical to atypical, according to the context and the interpersonal relationships in the specific situation. The most frequent particle, -ne, is typically used after addressee-oriented propositions for information sharing, while another frequent particle, -yo, is typically used after addresser-oriented propositions to elicit a sense of strength. This study sheds light on individual differences among native speakers in flexibly understanding such linguistic markers based on their mentalizing ability (i.e., the ability to infer the mental states of others). Two experiments employing electroencephalography (EEG) consistently showed enhanced early posterior negativities (EPN) for atypical SFP usage compared to typical usage, especially when understanding -ne compared to -yo, in both an SFP appropriateness judgment task and a content comprehension task. Importantly, the amplitude of the EPN for atypical usages of -ne was significantly higher in participants with lower mentalizing ability than in those with a higher mentalizing ability. This effect plausibly reflects low-ability mentalizers' stronger sense of strangeness toward atypical -ne usage. While high-ability mentalizers may aptly perceive others' attitudes via their various usages of -ne, low-ability mentalizers seem to adopt a more stereotypical understanding. These results attest to the greater degree of difficulty low-ability mentalizers have in establishing a smooth regulation of interpersonal distance during social encounters.

  • Klein, W. (Ed.). (2004). Philologie auf neuen Wegen [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 136.
  • Klein, W. (Ed.). (2004). Universitas [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik (LiLi), 134.
  • Klein, W. (2004). Vom Wörterbuch zum digitalen lexikalischen System. Zeitschrift für Literaturwissenschaft und Linguistik, 136, 10-55.
  • Klein, W. (2003). Wozu braucht man eigentlich Flexionsmorphologie? Zeitschrift für Literaturwissenschaft und Linguistik, 131, 23-54.
  • Klein, W. (1987). Das Geltende, oder: System der Überzeugungen. Zeitschrift für Literaturwissenschaft und Linguistik, 64, 10-31.
  • Klein, W. (1986). Der Wahn vom Sprachverfall und andere Mythen. Zeitschrift für Literaturwissenschaft und Linguistik, 62, 11-28.
  • Klein, W. (1987). Eine Verschärfung des Entscheidungsproblems. Rechtshistorisches Journal, 6, 209-210.
  • Klein, W., & Winkler, S. (2010). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 158, 5-7.
  • Klein, W. (1986). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 16(62), 9-10.
  • Klein, W., & Winkler, S. (Eds.). (2010). Ambiguität [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 40(158).
  • Klein, W. (2000). An analysis of the German perfekt. Language, 76, 358-382.

    Abstract

    The German Perfekt has two quite different temporal readings, as illustrated by the two possible continuations of the sentence Peter hat gearbeitet in i, ii, respectively: (i) Peter hat gearbeitet und ist müde. Peter has worked and is tired. (ii) Peter hat gearbeitet und wollte nicht gestört werden. Peter has worked and wanted not to be disturbed. The first reading essentially corresponds to the English present perfect; the second can take a temporal adverbial with past time reference ('yesterday at five', 'when the phone rang', and so on), and an English translation would require a past tense ('Peter worked/was working'). This article shows that the Perfekt has a uniform temporal meaning that results systematically from the interaction of its three components-finiteness marking, auxiliary and past participle-and that the two readings are the consequence of a structural ambiguity. This analysis also predicts the properties of other participle constructions, in particular the passive in German.
  • Klein, W., Li, P., & Hendriks, H. (2000). Aspect and assertion in Mandarin Chinese. Natural Language & Linguistic Theory, 18, 723-770. doi:10.1023/A:1006411825993.

    Abstract

    Chinese has a number of particles such as le, guo, zai and zhe that add a particular aspectual value to the verb to which they are attached. There have been many characterisations of this value in the literature. In this paper, we review several existing influential accounts of these particles, including those in Li and Thompson (1981), Smith (1991), and Mangione and Li (1993). We argue that all these characterisations are intuitively plausible, but none of them is precise. We propose that these particles serve to mark which part of the sentence's descriptive content is asserted, and that their aspectual value is a consequence of this function. We provide a simple and precise definition of the meanings of le, guo, zai and zhe in terms of the relationship between topic time and time of situation, and show the consequences of their interaction with different verb expressions within this new framework of interpretation.
  • Klein, W., & Franceschini, R. (Eds.). (2003). Einfache Sprache [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 131.
  • Klein, W. (2004). Auf der Suche nach den Prinzipien, oder: Warum die Geisteswissenschaften auf dem Rückzug sind. Zeitschrift für Literaturwissenschaft und Linguistik, 134, 19-44.
  • Klein, W. (2004). Im Lauf der Jahre. Linguistische Berichte, 200, 397-407.
  • Klein, W. (2000). Fatale Traditionen. Zeitschrift für Literaturwissenschaft und Linguistik, 120, 11-40.
  • Klein, W. (1991). Geile Binsenbüschel, sehr intime Gespielen: Ein paar Anmerkungen über Arno Schmidt als Übersetzer. Zeitschrift für Literaturwissenschaft und Linguistik, 84, 124-129.
  • Klein, W. (Ed.). (1998). Kaleidoskop [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (112).
  • Klein, W. (2010). On times and arguments. Linguistics, 48, 1221-1253. doi:10.1515/LING.2010.040.

    Abstract

    Verbs are traditionally assumed to have an “argument structure”, which imposes various constraints on form and meaning of the noun phrases that go with the verb, and an “event structure”, which defines certain temporal characteristics of the “event” to which the verb relates. In this paper, I argue that these two structures should be brought together. The verb assigns descriptive properties to one or more arguments at one or more temporal intervals, hence verbs have an “argument-time structure”. This argument-time structure as well as the descriptive properties connected to it can be modified by various morphological and syntactic operations. This approach allows a relatively simple analysis of familiar but not well-defined temporal notions such as tense, aspect and Aktionsart. This will be illustrated for English. It will be shown that a few simple morphosyntactic operations on the argument-time structure might account for form and meaning of the perfect, the progressive, the passive and related constructions.
  • Klein, W., & Von Stutterheim, C. (1987). Quaestio und referentielle Bewegung in Erzählungen. Linguistische Berichte, 109, 163-183.
  • Klein, W. (1991). Raumausdrücke. Linguistische Berichte, 132, 77-114.
  • Klein, W., & Von Stutterheim, C. (1991). Text structure and referential movement. Arbeitsberichte des Forschungsprogramms S&P: Sprache und Pragmatik, 22.
  • Klein, W. (1998). The contribution of second language acquisition research. Language Learning, 48, 527-550. doi:10.1111/0023-8333.00057.

    Abstract

    During the last 25 years, second language acquisition (SLA) research has made considerable progress, but it is still far from providing a solid basis for foreign language teaching, or from a general theory of SLA. In addition, its status within the linguistic disciplines is still very low. I argue this has not much to do with low empirical or theoretical standards in the field—in this regard, SLA research is fully competitive—but with a particular perspective on the acquisition process: SLA research treats learners' utterances as deviations from a certain target, instead of as genuine manifestations of underlying language capacity; it analyses them in terms of what they are not rather than what they are. For some purposes such a "target deviation perspective" makes sense, but it will not help SLA researchers to substantially and independently contribute to a deeper understanding of the structure and function of the human language faculty. Therefore, these findings will remain of limited interest to other scientists until SLA researchers consider learner varieties a normal, in fact typical, manifestation of this unique human capacity.
  • Klein, W. (Ed.). (2000). Sprache des Rechts [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (118).
  • Klein, W., & Berliner Arbeitsgruppe (2000). Sprache des Rechts: Vermitteln, Verstehen, Verwechseln. Zeitschrift für Literaturwissenschaft und Linguistik, 118, 7-33.
  • Klein, W. (Ed.). (1987). Sprache und Ritual [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (65).
  • Klein, W. (Ed.). (1986). Sprachverfall [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (62).
  • Klein, W. (2004). Was die Geisteswissenschaften leider noch von den Naturwissenschaften unterscheidet. Gegenworte, 13, 79-84.
  • Klein, W. (1991). Was kann sich die Übersetzungswissenschaft von der Linguistik erwarten? Zeitschrift für Literaturwissenschaft und Linguistik, 84, 104-123.