Publications

  • Klein, W. (1990). Sprachverfall. In Ruprecht-Karls-Universität Heidelberg (Ed.), Sprache: Vorträge im Sommersemester (pp. 101-114). Heidelberg: Ruprecht-Karls-Universität.
  • Klein, W. (Ed.). (1985). Schriftlichkeit [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (59).
  • Klein, W. (1985). Sechs Grundgrößen des Spracherwerbs. In R. Eppeneder (Ed.), Lernersprache: Thesen zum Erwerb einer Fremdsprache (pp. 67-106). München: Goethe Institut.
  • Klein, W. (2001). Second language acquisition. In N. Smelser, & P. Baltes (Eds.), International encyclopedia of the social & behavioral sciences: Vol. 20 (pp. 13768-13771). Amsterdam: Elsevier Science.
  • Klein, W. (1988). Second language acquisition. Cambridge: Cambridge University Press.
  • Klein, W., & Vater, H. (1998). The perfect in English and German. In L. Kulikov, & H. Vater (Eds.), Typology of verbal categories: Papers presented to Vladimir Nedjalkov on the occasion of his 70th birthday (pp. 215-235). Tübingen: Niemeyer.
  • Klein, W. (1988). The unity of a vernacular: Some remarks on "Berliner Stadtsprache". In N. Dittmar, & P. Schlobinski (Eds.), The sociolinguistics of urban vernaculars: Case studies and their evaluation (pp. 147-153). Berlin: de Gruyter.
  • Klein, W. (2001). Time and again. In C. Féry, & W. Sternefeld (Eds.), Audiatur vox sapientiae: A festschrift for Arnim von Stechow (pp. 267-286). Berlin: Akademie Verlag.
  • Klein, W., & Schlieben-Lange, B. (Eds.). (1990). Zukunft der Sprache [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (79).
  • Klein, W. (2001). Typen und Konzepte des Spracherwerbs. In L. Götze, G. Helbig, G. Henrici, & H. Krumm (Eds.), Deutsch als Fremdsprache (pp. 604-616). Berlin: de Gruyter.
  • Klein, W. (1990). Überall und nirgendwo: Subjektive und objektive Momente in der Raumreferenz. Zeitschrift für Literaturwissenschaft und Linguistik, 78, 9-42.
  • Klein, W. (1988). Varietätengrammatik. In U. Ammon, N. Dittmar, & K. J. Mattheier (Eds.), Sociolinguistics: An international handbook of the science of language and society: Vol. 2 (pp. 997-1060). Berlin: de Gruyter.
  • Klein, W. (1998). Von der einfältigen Wißbegierde. Zeitschrift für Literaturwissenschaft und Linguistik, 112, 6-13.
  • Knösche, T. R., & Bastiaansen, M. C. M. (2001). Does the Hilbert transform improve accuracy and time resolution of ERD/ERS? Biomedizinische Technik, 46(2), 106-108.
  • Kornfeld, L., & Rossi, G. (2023). Enforcing rules during play: Knowledge, agency, and the design of instructions and reminders. Research on Language and Social Interaction, 56(1), 42-64. doi:10.1080/08351813.2023.2170637.

    Abstract

    Rules of behavior are fundamental to human sociality. Whether on the road, at the dinner table, or during a game, people monitor one another’s behavior for conformity to rules and may take action to rectify violations. In this study, we examine two ways in which rules are enforced during games: instructions and reminders. Building on prior research, we identify instructions as actions produced to rectify violations based on another’s lack of knowledge of the relevant rule; knowledge that the instruction is designed to impart. In contrast to this, the actions we refer to as reminders are designed to enforce rules presupposing the transgressor’s competence and treating the violation as the result of forgetfulness or oversight. We show that instructing and reminding actions differ in turn design, sequential development, the epistemic stances taken by transgressors and enforcers, and in how the action affects the progressivity of the interaction. Data are in German and Italian from the Parallel European Corpus of Informal Interaction (PECII).
  • Kösem, A., Dai, B., McQueen, J. M., & Hagoort, P. (2023). Neural envelope tracking of speech does not unequivocally reflect intelligibility. NeuroImage, 272: 120040. doi:10.1016/j.neuroimage.2023.120040.

    Abstract

    During listening, brain activity tracks the rhythmic structures of speech signals. Here, we directly dissociated the contribution of neural envelope tracking in the processing of speech acoustic cues from that related to linguistic processing. We examined the neural changes associated with the comprehension of Noise-Vocoded (NV) speech using magnetoencephalography (MEG). Participants listened to NV sentences in a 3-phase training paradigm: (1) pre-training, where NV stimuli were barely comprehended, (2) training with exposure to the original clear version of the speech stimuli, and (3) post-training, where the same stimuli gained intelligibility from the training phase. Using this paradigm, we tested whether the neural response to a speech signal was modulated by its intelligibility without any change in its acoustic structure. To test the influence of spectral degradation on neural envelope tracking independently of training, participants listened to two types of NV sentences (4-band and 2-band NV speech), but were only trained to understand 4-band NV speech. Significant changes in neural tracking were observed in the delta range in relation to the acoustic degradation of speech. However, we failed to find a direct effect of intelligibility on the neural tracking of the speech envelope in both theta and delta ranges, in both auditory regions-of-interest and whole-brain sensor-space analyses. This suggests that acoustics greatly influence the neural tracking response to the speech envelope, and that caution needs to be taken when choosing the control signals for speech-brain tracking analyses, considering that a slight change in acoustic parameters can have strong effects on the neural tracking response.
  • Köster, O., Hess, M. M., Schiller, N. O., & Künzel, H. J. (1998). The correlation between auditory speech sensitivity and speaker recognition ability. Forensic Linguistics: The International Journal of Speech, Language and the Law, 5, 22-32.

    Abstract

    In various applications of forensic phonetics the question arises as to how far aural-perceptual speaker recognition performance is reliable. Therefore, it is necessary to examine the relationship between speaker recognition results and human perception/production abilities like musicality or speech sensitivity. In this study, performance in a speaker recognition experiment and a speech sensitivity test are correlated. The results show a moderately significant positive correlation between the two tasks. Generally, performance in the speaker recognition task was better than in the speech sensitivity test. Professionals in speech and singing yielded a more homogeneous correlation than non-experts. Training in speech as well as choir-singing seems to have a positive effect on performance in speaker recognition. It may be concluded, firstly, that in cases where the reliability of voice line-up results or the credibility of a testimony has to be considered, the speech sensitivity test could be a useful indicator. Secondly, the speech sensitivity test might be integrated into the canon of possible procedures for the accreditation of forensic phoneticians. Both tests may also be used in combination.
  • Krämer, I. (1998). Children's interpretations of indefinite object noun phrases. Linguistics in the Netherlands, 1998, 163-174. doi:10.1075/avt.15.15kra.
  • Kreuzer, H. (Ed.). (1971). Methodische Perspektiven [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (1/2).
  • Krott, A. (2001). Analogy in morphology: The selection of linking elements in Dutch compounds. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057602.
  • Kuijpers, C. T., Coolen, R., Houston, D., & Cutler, A. (1998). Using the head-turning technique to explore cross-linguistic performance differences. In C. Rovee-Collier, L. Lipsitt, & H. Hayne (Eds.), Advances in infancy research: Vol. 12 (pp. 205-220). Stamford: Ablex.
  • Lai, C. S. L., Fisher, S. E., Hurst, J. A., Vargha-Khadem, F., & Monaco, A. P. (2001). A forkhead-domain gene is mutated in a severe speech and language disorder [Letters to Nature]. Nature, 413, 519-523. doi:10.1038/35097076.

    Abstract

    Individuals affected with developmental disorders of speech and language have substantial difficulty acquiring expressive and/or receptive language in the absence of any profound sensory or neurological impairment and despite adequate intelligence and opportunity. Although studies of twins consistently indicate that a significant genetic component is involved, most families segregating speech and language deficits show complex patterns of inheritance, and a gene that predisposes individuals to such disorders has not been identified. We have studied a unique three-generation pedigree, KE, in which a severe speech and language disorder is transmitted as an autosomal-dominant monogenic trait. Our previous work mapped the locus responsible, SPCH1, to a 5.6-cM interval of region 7q31 on chromosome 7 (ref. 5). We also identified an unrelated individual, CS, in whom speech and language impairment is associated with a chromosomal translocation involving the SPCH1 interval. Here we show that the gene FOXP2, which encodes a putative transcription factor containing a polyglutamine tract and a forkhead DNA-binding domain, is directly disrupted by the translocation breakpoint in CS. In addition, we identify a point mutation in affected members of the KE family that alters an invariant amino-acid residue in the forkhead domain. Our findings suggest that FOXP2 is involved in the developmental process that culminates in speech and language.
  • Lai, J., Chan, A., & Kidd, E. (2023). Relative clause comprehension in Cantonese-speaking children with and without developmental language disorder. PLoS One, 18: e0288021. doi:10.1371/journal.pone.0288021.

    Abstract

    Developmental Language Disorder (DLD), present in 2 out of every 30 children, affects primarily oral language abilities and development in the absence of associated biomedical conditions. We report the first experimental study that examines relative clause (RC) comprehension accuracy and processing (via looking preference) in Cantonese-speaking children with and without DLD, testing the predictions from competing domain-specific versus domain-general theoretical accounts. We compared children with DLD (N = 22) with their age-matched typically-developing (TD) peers (AM-TD, N = 23) aged 6;6–9;7 and language-matched (and younger) TD children (YTD, N = 21) aged 4;7–7;6, using a referent selection task. Within-subject factors were RC type (subject-RCs (SRCs) versus object-RCs (ORCs)) and relativizer (classifier (CL) versus relative marker ge3 RCs). Accuracy measures and looking preference to the target were analyzed using generalized linear mixed effects models. Results indicated Cantonese children with DLD scored significantly lower than their AM-TD peers in accuracy and processed RCs significantly slower than AM-TDs, but did not differ from the YTDs on either measure. Overall, while the results revealed evidence of an SRC advantage in the accuracy data, there was no indication of additional difficulty associated with ORCs in the eye-tracking data. All children showed a processing advantage for the frequent CL relativizer over the less frequent ge3 relativizer. These findings pose challenges to domain-specific representational deficit accounts of DLD, which primarily explain the disorder as a syntactic deficit, and are better explained by domain-general accounts that explain acquisition and processing as emergent properties of multiple converging linguistic and non-linguistic processes.

    Additional information

    S1 appendix
  • Laparle, S. (2023). Moving past the lexical affiliate with a frame-based analysis of gesture meaning. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527218.

    Abstract

    Interpreting the meaning of co-speech gesture often involves identifying a gesture’s ‘lexical affiliate’, the word or phrase to which it most closely relates (Schegloff 1984). Though there is work within gesture studies that resists this simplex mapping of meaning from speech to gesture (e.g. de Ruiter 2000; Kendon 2014; Parrill 2008), including an evolving body of literature on recurrent gesture and gesture families (e.g. Fricke et al. 2014; Müller 2017), it is still the lexical affiliate model that is most apparent in formal linguistic models of multimodal meaning (e.g. Alahverdzhieva et al. 2017; Lascarides and Stone 2009; Pustejovsky and Krishnaswamy 2021; Schlenker 2020). In this work, I argue that the lexical affiliate should be carefully reconsidered in the further development of such models.

    In place of the lexical affiliate, I suggest a further shift toward a frame-based, action schematic approach to gestural meaning in line with that proposed in, for example, Parrill and Sweetser (2004) and Müller (2017). To demonstrate the utility of this approach I present three types of compositional gesture sequences which I call spatial contrast, spatial embedding, and cooperative abstract deixis. All three rely on gestural context, rather than gesture-speech alignment, to convey interactive (i.e. pragmatic) meaning. The centrality of gestural context to gesture meaning in these examples demonstrates the necessity of developing a model of gestural meaning independent of its integration with speech.
  • Lausberg, H., & Kita, S. (2001). Hemispheric specialization in nonverbal gesticulation investigated in patients with callosal disconnection. In C. Cavé, I. Guaïtella, & S. Santi (Eds.), Oralité et gestualité: Interactions et comportements multimodaux dans la communication. Actes du colloque ORAGE 2001 (pp. 266-270). Paris, France: Éditions L'Harmattan.
  • Ledberg, A., Fransson, P., Larsson, J., & Petersson, K. M. (2001). A 4D approach to the analysis of functional brain images: Application to fMRI data. Human Brain Mapping, 13, 185-198. doi:10.1002/hbm.1032.

    Abstract

    This paper presents a new approach to functional magnetic resonance imaging (FMRI) data analysis. The main difference lies in the view of what comprises an observation. Here we treat the data from one scanning session (comprising t volumes, say) as one observation. This is contrary to the conventional way of looking at the data where each session is treated as t different observations. Thus instead of viewing the v voxels comprising the 3D volume of the brain as the variables, we suggest the usage of the vt hypervoxels comprising the 4D volume of the brain-over-session as the variables. A linear model is fitted to the 4D volumes originating from different sessions. Parameter estimation and hypothesis testing in this model can be performed with standard techniques. The hypothesis testing generates 4D statistical images (SIs) to which any relevant test statistic can be applied. In this paper we describe two test statistics, one voxel based and one cluster based, that can be used to test a range of hypotheses. There are several benefits in treating the data from each session as one observation, two of which are: (i) the temporal characteristics of the signal can be investigated without an explicit model for the blood oxygenation level dependent (BOLD) contrast response function, and (ii) the observations (sessions) can be assumed to be independent and hence inference on the 4D SI can be made by nonparametric or Monte Carlo methods. The suggested 4D approach is applied to FMRI data and is shown to accurately detect the expected signal.
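
    The approach described in this abstract (each session treated as a single observation of a v × t volume of "hypervoxels", a linear model fitted across sessions, and a contrast converted into a 4D statistical image) can be illustrated with a minimal NumPy sketch. The array names, shapes, and the simple two-regressor design below are assumptions made for illustration only, not the authors' implementation.

    ```python
    import numpy as np

    n_sessions, n_voxels, n_timepoints = 12, 5000, 100

    # data[s] holds the flattened 4D volume (v voxels x t timepoints) of session s,
    # so each session contributes ONE observation of v*t "hypervoxels".
    data = np.random.randn(n_sessions, n_voxels * n_timepoints)

    # Design matrix over sessions: intercept plus a condition regressor.
    condition = np.repeat([0.0, 1.0], n_sessions // 2)
    X = np.column_stack([np.ones(n_sessions), condition])  # (sessions, regressors)

    # Ordinary least squares fitted independently for every hypervoxel.
    beta, _, _, _ = np.linalg.lstsq(X, data, rcond=None)   # (regressors, v*t)

    # Residual variance and a t-statistic for the condition contrast give a
    # 4D statistical image, reshaped back to voxels x timepoints.
    resid = data - X @ beta
    dof = n_sessions - X.shape[1]
    sigma2 = (resid ** 2).sum(axis=0) / dof
    c = np.array([0.0, 1.0])                                # contrast: condition effect
    var_c = c @ np.linalg.inv(X.T @ X) @ c
    t_map_4d = ((c @ beta) / np.sqrt(sigma2 * var_c)).reshape(n_voxels, n_timepoints)
    ```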
  • Lee, C., Jessop, A., Bidgood, A., Peter, M. S., Pine, J. M., Rowland, C. F., & Durrant, S. (2023). How executive functioning, sentence processing, and vocabulary are related at 3 years of age. Journal of Experimental Child Psychology, 233: 105693. doi:10.1016/j.jecp.2023.105693.

    Abstract

    There is a wealth of evidence demonstrating that executive function (EF) abilities are positively associated with language development during the preschool years, such that children with good executive functions also have larger vocabularies. However, why this is the case remains to be discovered. In this study, we focused on the hypothesis that sentence processing abilities mediate the association between EF skills and receptive vocabulary knowledge, in that the speed of language acquisition is at least partially dependent on a child’s processing ability, which is itself dependent on executive control. We tested this hypothesis in longitudinal data from a cohort of 3- and 4-year-old children at three age points (37, 43, and 49 months). We found evidence, consistent with previous research, for a significant association between three EF skills (cognitive flexibility, working memory [as measured by the Backward Digit Span], and inhibition) and receptive vocabulary knowledge across this age range. However, only one of the tested sentence processing abilities (the ability to maintain multiple possible referents in mind) significantly mediated this relationship and only for one of the tested EFs (inhibition). The results suggest that children who are better able to inhibit incorrect responses are also better able to maintain multiple possible referents in mind while a sentence unfolds, a sophisticated sentence processing ability that may facilitate vocabulary learning from complex input.

    Additional information

    table S1 code and data
  • Lehecka, T. (2023). Normative ratings for 111 Swedish nouns and corresponding picture stimuli. Nordic Journal of Linguistics, 46(1), 20-45. doi:10.1017/S0332586521000123.

    Abstract

    Normative ratings are a means to control for the effects of confounding variables in psycholinguistic experiments. This paper introduces a new dataset of normative ratings for Swedish encompassing 111 concrete nouns and the corresponding picture stimuli in the MultiPic database (Duñabeitia et al. 2017). The norms for name agreement, category typicality, age of acquisition and subjective frequency were collected using online surveys among native speakers of the Finland-Swedish variety of Swedish. The paper discusses the inter-correlations between these variables and compares them against available ratings for other languages. In doing so, the paper argues that ratings for age of acquisition and subjective frequency collected for other languages may be applied to psycholinguistic studies on Finland-Swedish, at least with respect to concrete and highly imageable nouns. In contrast, norms for name agreement should be collected from speakers of the same language variety as represented by the subjects in the actual experiments.
  • Lei, A., Willems, R. M., & Eekhof, L. S. (2023). Emotions, fast and slow: Processing of emotion words is affected by individual differences in need for affect and narrative absorption. Cognition and Emotion, 37(5), 997-1005. doi:10.1080/02699931.2023.2216445.

    Abstract

    Emotional words have consistently been shown to be processed differently than neutral words. However, few studies have examined individual variability in emotion word processing with longer, ecologically valid stimuli (beyond isolated words, sentences, or paragraphs). In the current study, we re-analysed eye-tracking data collected during story reading to reveal how individual differences in need for affect and narrative absorption impact the speed of emotion word reading. Word emotionality was indexed by affective-aesthetic potentials (AAP) calculated by a sentiment analysis tool. We found that individuals with higher levels of need for affect and narrative absorption read positive words more slowly. On the other hand, these individual differences did not influence the reading time of more negative words, suggesting that high need for affect and narrative absorption are characterised by a positivity bias only. In general, unlike most previous studies using more isolated emotion word stimuli, we observed a quadratic (U-shaped) effect of word emotionality on reading speed, such that both positive and negative words were processed more slowly than neutral words. Taken together, this study emphasises the importance of taking into account individual differences and task context when studying emotion word processing.
  • Lemaitre, H., Le Guen, Y., Tilot, A. K., Stein, J. L., Philippe, C., Mangin, J.-F., Fisher, S. E., & Frouin, V. (2023). Genetic variations within human gained enhancer elements affect human brain sulcal morphology. NeuroImage, 265: 119773. doi:10.1016/j.neuroimage.2022.119773.

    Abstract

    The expansion of the cerebral cortex is one of the most distinctive changes in the evolution of the human brain. Cortical expansion and related increases in cortical folding may have contributed to emergence of our capacities for high-order cognitive abilities. Molecular analysis of humans, archaic hominins, and non-human primates has allowed identification of chromosomal regions showing evolutionary changes at different points of our phylogenetic history. In this study, we assessed the contributions of genomic annotations spanning 30 million years to human sulcal morphology measured via MRI in more than 18,000 participants from the UK Biobank. We found that variation within brain-expressed human gained enhancers, regulatory genetic elements that emerged since our last common ancestor with Old World monkeys, explained more trait heritability than expected for the left and right calloso-marginal posterior fissures and the right central sulcus. Intriguingly, these are sulci that have been previously linked to the evolution of locomotion in primates and later on bipedalism in our hominin ancestors.

    Additional information

    tables
  • Levelt, W. J. M. (2001). The architecture of normal spoken language use. In G. Gupta (Ed.), Cognitive science: Issues and perspectives (pp. 457-473). New Delhi: Icon Publications.
  • Levelt, W. J. M. (1988). Psycholinguistics: An overview. In W. Bright (Ed.), International encyclopedia of linguistics: Vol. 3 (pp. 290-294). Oxford: Oxford University Press.
  • Levelt, W. J. M. (1990). De connectionistische mode. In P. Van Hoogstraten (Ed.), Belofte en werkelijkheid: Sociale wetenschappen en informatisering (pp. 39-68). Lisse: Swets & Zeitlinger.
  • Levelt, W. J. M. (2001). De vlieger die (onverwacht) wel opgaat. Natuur & Techniek, 69(6), 60.
  • Levelt, W. J. M. (2001). Defining dyslexia. Science, 292, 1300-1301.
  • Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10(5), 553-567. doi:10.1162/089892998562960.

    Abstract

    The purpose of this study was to relate a psycholinguistic processing model of picture naming to the dynamics of cortical activation during picture naming. The activation was recorded from eight Dutch subjects with a whole-head neuromagnetometer. The processing model, based on extensive naming latency studies, is a stage model. In preparing a picture's name, the speaker performs a chain of specific operations. They are, in this order, computing the visual percept, activating an appropriate lexical concept, selecting the target word from the mental lexicon, phonological encoding, phonetic encoding, and initiation of articulation. The time windows for each of these operations are reasonably well known and could be related to the peak activity of dipole sources in the individual magnetic response patterns. The analyses showed a clear progression over these time windows from early occipital activation, via parietal and temporal to frontal activation. The major specific findings were that (1) a region in the left posterior temporal lobe, agreeing with the location of Wernicke's area, showed prominent activation starting about 200 msec after picture onset and peaking at about 350 msec, (i.e., within the stage of phonological encoding), and (2) a consistent activation was found in the right parietal cortex, peaking at about 230 msec after picture onset, thus preceding and partly overlapping with the left temporal response. An interpretation in terms of the management of visual attention is proposed.
  • Levelt, W. J. M. (1990). Are multilayer feedforward networks effectively Turing machines? Psychological Research, 52, 153-157.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (pp. 55-55).
  • Levelt, W. J. M., & Schiller, N. O. (1998). Is the syllable frame stored? [Commentary on the BBS target article 'The frame/content theory of evolution of speech production' by Peter F. MacNeilage]. Behavioral and Brain Sciences, 21, 520.

    Abstract

    This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
  • Levelt, W. J. M. (1967). Note on the distribution of dominance times in binocular rivalry. British Journal of Psychology, 58, 143-145.
  • Levelt, W. J. M. (1990). On learnability, empirical foundations, and naturalness [Commentary on Hanson & Burr]. Behavioral and Brain Sciences, 13(3), 501. doi:10.1017/S0140525X00079887.
  • Levelt, W. J. M. (1988). Onder sociale wetenschappen. Mededelingen van de Afdeling Letterkunde, 51(2), 41-55.
  • Levelt, W. J. M. (1967). Over het waarnemen van zinnen [Inaugural lecture]. Groningen: Wolters.
  • Levelt, W. J. M. (2001). Relations between speech production and speech perception: Some behavioral and neurological observations. In E. Dupoux (Ed.), Language, brain and cognitive development: Essays in honour of Jacques Mehler (pp. 241-256). Cambridge, MA: MIT Press.
  • Levelt, W. J. M. (1990). Some studies of lexical access at the Max Planck Institute for Psycholinguistics. In F. Aarts, & T. Van Els (Eds.), Contemporary Dutch linguistics (pp. 131-139). Washington: Georgetown University Press.
  • Levelt, W. J. M. (2001). Spoken word production: A theory of lexical access. Proceedings of the National Academy of Sciences, 98, 13464-13471. doi:10.1073/pnas.231459498.

    Abstract

    A core operation in speech production is the preparation of words from a semantic base. The theory of lexical access reviewed in this article covers a sequence of processing stages beginning with the speaker’s focusing on a target concept and ending with the initiation of articulation. The initial stages of preparation are concerned with lexical selection, which is zooming in on the appropriate lexical item in the mental lexicon. The following stages concern form encoding, i.e., retrieving a word’s morphemic phonological codes, syllabifying the word, and accessing the corresponding articulatory gestures. The theory is based on chronometric measurements of spoken word production, obtained, for instance, in picture-naming tasks. The theory is largely computationally implemented. It provides a handle on the analysis of multiword utterance production as well as a guide to the analysis and design of neuroimaging studies of spoken utterance production.
  • Levelt, W. J. M., Richardson, G., & La Heij, W. (1985). Pointing and voicing in deictic expressions. Journal of Memory and Language, 24, 133-164. doi:10.1016/0749-596X(85)90021-X.

    Abstract

    The present paper studies how, in deictic expressions, the temporal interdependency of speech and gesture is realized in the course of motor planning and execution. Two theoretical positions were compared. On the “interactive” view the temporal parameters of speech and gesture are claimed to be the result of feedback between the two systems throughout the phases of motor planning and execution. The alternative “ballistic” view, however, predicts that the two systems are independent during the phase of motor execution, the temporal parameters having been preestablished in the planning phase. In four experiments subjects were requested to indicate which of an array of referent lights was momentarily illuminated. This was done by pointing to the light and/or by using a deictic expression (this/that light). The temporal and spatial course of the pointing movement was automatically registered by means of a Selspot opto-electronic system. By analyzing the moments of gesture initiation and apex, and relating them to the moments of speech onset, it was possible to show that, for deictic expressions, the ballistic view is very nearly correct.
  • Levelt, W. J. M. (1998). The genetic perspective in psycholinguistics, or: Where do spoken words come from? Journal of Psycholinguistic Research, 27(2), 167-180. doi:10.1023/A:1023245931630.

    Abstract

    The core issue in the 19th-century sources of psycholinguistics was the question, "Where does language come from?" This genetic perspective unified the study of the ontogenesis, the phylogenesis, the microgenesis, and to some extent the neurogenesis of language. This paper makes the point that this original perspective is still a valid and attractive one. It is exemplified by a discussion of the genesis of spoken words.
  • Levelt, W. J. M. (2001). Woorden ophalen. Natuur en Techniek, 69(10), 74.
  • Levinson, S. C. (2001). Motion Verb Stimulus (Moverb) version 2. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 9-13). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513706.

    Abstract

    How do languages express ideas of movement, and how do they package different components of this domain, such as manner and path of motion? This task uses one large set of stimuli to gain knowledge of certain key aspects of motion verb meanings in the target language, and expands the investigation beyond simple verbs (e.g., go) to include the semantics of motion predications complete with adjuncts (e.g., go across something). Consultants are asked to view and briefly describe 96 animations of a few seconds each. The task is designed to get linguistic elicitations of motion predications under contrastive comparison with other animations in the same set. Unlike earlier tasks, the stimuli focus on inanimate moving items or “figures” (in this case, a ball).
  • Levinson, S. C. (1988). Conceptual problems in the study of regional and cultural style. In N. Dittmar, & P. Schlobinski (Eds.), The sociolinguistics of urban vernaculars: Case studies and their evaluation (pp. 161-190). Berlin: De Gruyter.
  • Levinson, S. C. (2001). Covariation between spatial language and cognition. In M. Bowerman, & S. C. Levinson (Eds.), Language acquisition and conceptual development (pp. 566-588). Cambridge: Cambridge University Press.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C., Kita, S., & Ozyurek, A. (2001). Demonstratives in context: Comparative handicrafts. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 52-54). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874663.

    Abstract

    Demonstratives (e.g., words such as this and that in English) pivot on relationships between the item being talked about, and features of the speech act situation (e.g., where the speaker and addressee are standing or looking). However, they are only rarely investigated multi-modally, in natural language contexts. This task is designed to build a video corpus of cross-linguistically comparable discourse data for the study of “deixis in action”, while simultaneously supporting the investigation of joint attention as a factor in speaker selection of demonstratives. In the task, two or more speakers are asked to discuss and evaluate a group of similar items (e.g., examples of local handicrafts, tools, produce) that are placed within a relatively defined space (e.g., on a table). The task can additionally provide material for comparison of pointing gesture practices.
  • Levinson, S. C., Bohnemeyer, J., & Enfield, N. J. (2001). “Time and space” questionnaire for “space in thinking” subproject. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 14-20). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    This entry contains: 1. An invitation to consider to what extent the grammar of space and time shares lexical and morphosyntactic resources − the suggestions here are only prompts, since it would take a long questionnaire to fully explore this; 2. A suggestion about how to collect gestural data that might show us to what extent the spatial and temporal domains have a psychological continuity. This is really the goal − but you need to do the linguistic work first or in addition. The goal of this task is to explore the extent to which time is conceptualised on a spatial basis.
  • Levinson, S. C., & Enfield, N. J. (Eds.). (2001). Manual for the field season 2001. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C. (2001). Maxim. In S. Duranti (Ed.), Key terms in language and culture (pp. 139-142). Oxford: Blackwell.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C., Enfield, N. J., & Senft, G. (2001). Kinship domain for 'space in thinking' subproject. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 85-88). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874655.
  • Levinson, S. C., & Wittenburg, P. (2001). Language as cultural heritage - Promoting research and public awareness on the Internet. In J. Renn (Ed.), ECHO - An Infrastructure to Bring European Cultural Heritage Online (pp. 104-111). Berlin: Max Planck Institute for the History of Science.

    Abstract

    The ECHO proposal aims to bring to life the cultural heritage of Europe, through internet technology that encourages collaboration across the Humanities disciplines which interpret it – at the same time making all this scholarship accessible to the citizens of Europe. An essential part of the cultural heritage of Europe is the diverse set of languages used on the continent, in their historical, literary and spoken forms. Amongst these are the ‘hidden languages’ used by minorities but of wide interest to the general public. We take the 18 Sign Languages of the EEC – the natural languages of the deaf - as an example. Little comparative information about these is available, despite their special scientific importance, the widespread public interest and the policy implications. We propose a research project on these languages based on placing fully annotated digitized moving images of each of these languages on the internet. This requires significant development of multi-media technology which would allow distributed annotation of a central corpus, together with the development of special search techniques. The technology would have widespread application to all cultural performances recorded as sound plus moving images. Such a project captures in microcosm the essence of the ECHO proposal: cultural heritage is nothing without the humanities research which contextualizes and gives it comparative assessment; by marrying information technology to humanities research, we can bring these materials to a wider public while simultaneously boosting Europe as a research area.
  • Levinson, S. C., Kita, S., & Enfield, N. J. (2001). Locally-anchored narrative. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 147). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874660.

    Abstract

    As for 'Locally-anchored spatial gestures task, version 2', a major goal of this task is to elicit locally-anchored spatial gestures across different cultures. “Locally-anchored spatial gestures” are gestures that are roughly oriented to the actual geographical direction of referents. Rather than set up an interview situation, this task involves recording informal, animated narrative delivered to a native-speaker interlocutor. Locally-anchored gestures produced in such narrative are roughly comparable to those collected in the interview task. The data collected can also be used to investigate a wide range of other topics.
  • Levinson, S. C. (1988). Putting linguistics on a proper footing: Explorations in Goffman's participation framework. In P. Drew, & A. Wootton (Eds.), Goffman: Exploring the interaction order (pp. 161-227). Oxford: Polity Press.
  • Levinson, S. C. (2001). Space: Linguistic expression. In N. Smelser, & P. Baltes (Eds.), International Encyclopedia of Social and Behavioral Sciences: Vol. 22 (pp. 14749-14752). Oxford: Pergamon.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C. (2001). Place and space in the sculpture of Antony Gormley - An anthropological perspective. In S. D. McElroy (Ed.), Some of the facts (pp. 68-109). St Ives: Tate Gallery.
  • Levinson, S. C. (2001). Pragmatics. In N. Smelser, & P. Baltes (Eds.), International Encyclopedia of Social and Behavioral Sciences: Vol. 17 (pp. 11948-11954). Oxford: Pergamon.
  • Levinson, S. C. (1990). Pragmatics [Japanese translation]. Tokyo: Kenkyusha.
  • Levinson, S. C. (1990). Pragmatik [German translation of Pragmatics]. Tübingen: Niemeyer.

    Abstract

    This is the German translation of Stephen C. Levinson's »Pragmatics«.
  • Levinson, S. C., & Enfield, N. J. (2001). Preface and priorities. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 3). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C. (2023). On cognitive artifacts. In R. Feldhay (Ed.), The evolution of knowledge: A scientific meeting in honor of Jürgen Renn (pp. 59-78). Berlin: Max Planck Institute for the History of Science.

    Abstract

    Wearing the hat of a cognitive anthropologist rather than an historian, I will try to amplify the ideas of Renn’s cited above. I argue that a particular subclass of material objects, namely “cognitive artifacts,” involves a close coupling of mind and artifact that acts like a brain prosthesis. Simple cognitive artifacts are external objects that act as aids to internal computation, and not all cultures have extended inventories of these. Cognitive artifacts in this sense (e.g., calculating or measuring devices) have clearly played a central role in the history of science. But the notion can be widened to take in less material externalizations of cognition, like writing and language itself. A critical question here is how and why this close coupling of internal computation and external device actually works, a rather neglected question to which I’ll suggest some answers.

    Additional information

    link to book
  • Levinson, S. C. (2023). Gesture, spatial cognition and the evolution of language. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210481. doi:10.1098/rstb.2021.0481.

    Abstract

    Human communication displays a striking contrast between the diversity of languages and the universality of the principles underlying their use in conversation. Despite the importance of this interactional base, it is not obvious that it heavily imprints the structure of languages. However, a deep-time perspective suggests that early hominin communication was gestural, in line with all the other Hominidae. This gestural phase of early language development seems to have left its traces in the way in which spatial concepts, implemented in the hippocampus, provide organizing principles at the heart of grammar.
  • Levshina, N. (2023). Communicative efficiency: Language structure and use. Cambridge: Cambridge University Press.

    Abstract

    All living beings try to save effort, and humans are no exception. This groundbreaking book shows how we save time and energy during communication by unconsciously making efficient choices in grammar, lexicon and phonology. It presents a new theory of 'communicative efficiency', the idea that language is designed to be as efficient as possible, as a system of communication. The new framework accounts for the diverse manifestations of communicative efficiency across a typologically broad range of languages, using various corpus-based and statistical approaches to explain speakers' bias towards efficiency. The author's unique interdisciplinary expertise allows her to provide rich evidence from a broad range of language sciences. She integrates diverse insights from over a hundred years of research into this comprehensible new theory, which she presents step-by-step in clear and accessible language. It is essential reading for language scientists, cognitive scientists and anyone interested in language use and communication.
  • Levshina, N., Namboodiripad, S., Allassonnière-Tang, M., Kramer, M., Talamo, L., Verkerk, A., Wilmoth, S., Garrido Rodriguez, G., Gupton, T. M., Kidd, E., Liu, Z., Naccarato, C., Nordlinger, R., Panova, A., & Stoynova, N. (2023). Why we need a gradient approach to word order. Linguistics, 61(4), 825-883. doi:10.1515/ling-2021-0098.

    Abstract

    This article argues for a gradient approach to word order, which treats word order preferences, both within and across languages, as a continuous variable. Word order variability should be regarded as a basic assumption, rather than as something exceptional. Although this approach follows naturally from the emergentist usage-based view of language, we argue that it can be beneficial for all frameworks and linguistic domains, including language acquisition, processing, typology, language contact, language evolution and change, and formal approaches. Gradient approaches have been very fruitful in some domains, such as language processing, but their potential is not fully realized yet. This may be due to practical reasons. We discuss the most pressing methodological challenges in corpus-based and experimental research of word order and propose some practical solutions.
  • Levshina, N. (2023). Testing communicative and learning biases in a causal model of language evolution: A study of cues to Subject and Object. In M. Degano, T. Roberts, G. Sbardolini, & M. Schouwstra (Eds.), The Proceedings of the 23rd Amsterdam Colloquium (pp. 383-387). Amsterdam: University of Amsterdam.
  • Levshina, N. (2023). Word classes in corpus linguistics. In E. Van Lier (Ed.), The Oxford handbook of word classes (pp. 833-850). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198852889.013.34.

    Abstract

    Word classes play a central role in corpus linguistics under the name of parts of speech (POS). Many popular corpora are provided with POS tags. This chapter gives examples of popular tagsets and discusses the methods of automatic tagging. It also considers bottom-up approaches to POS induction, which are particularly important for the ‘poverty of stimulus’ debate in language acquisition research. The choice of optimal POS tagging involves many difficult decisions, which are related to the level of granularity, redundancy at different levels of corpus annotation, cross-linguistic applicability, language-specific descriptive adequacy, and dealing with fuzzy boundaries between POS. The chapter also discusses the problem of flexible word classes and demonstrates how corpus data with POS tags and syntactic dependencies can be used to quantify the level of flexibility in a language.
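
    To make the final point of this abstract concrete: given POS-tagged corpus data, the flexibility of a word form can be quantified as the entropy of its tag distribution. The sketch below is a hedged illustration of this general idea using an invented toy corpus; it is one possible operationalisation, not the method used in the chapter.

    ```python
    from collections import Counter, defaultdict
    from math import log2

    # Toy tagged corpus of (word, POS) pairs; in practice these would come from a
    # POS-tagged treebank (e.g., Universal Dependencies).
    tagged = [("walk", "VERB"), ("walk", "NOUN"), ("walk", "VERB"),
              ("house", "NOUN"), ("house", "NOUN"),
              ("clean", "ADJ"), ("clean", "VERB")]

    tag_counts = defaultdict(Counter)
    for word, tag in tagged:
        tag_counts[word][tag] += 1

    def flexibility(counts: Counter) -> float:
        """Entropy (in bits) of a word's POS distribution; 0 = fully rigid."""
        total = sum(counts.values())
        return sum(-(n / total) * log2(n / total) for n in counts.values())

    for word, counts in tag_counts.items():
        # e.g. "walk" has a mixed VERB/NOUN profile -> entropy of about 0.92 bits
        print(word, dict(counts), round(flexibility(counts), 2))
    ```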
  • Lewis, A. G., Schoffelen, J.-M., Bastiaansen, M., & Schriefers, H. (2023). Is beta in agreement with the relatives? Using relative clause sentences to investigate MEG beta power dynamics during sentence comprehension. Psychophysiology, 60(10): e14332. doi:10.1111/psyp.14332.

    Abstract

    There remains some debate about whether beta power effects observed during sentence comprehension reflect ongoing syntactic unification operations (beta-syntax hypothesis), or instead reflect maintenance or updating of the sentence-level representation (beta-maintenance hypothesis). In this study, we used magnetoencephalography to investigate beta power neural dynamics while participants read relative clause sentences that were initially ambiguous between a subject- or an object-relative reading. An additional condition included a grammatical violation at the disambiguation point in the relative clause sentences. The beta-maintenance hypothesis predicts a decrease in beta power at the disambiguation point for unexpected (and less preferred) object-relative clause sentences and grammatical violations, as both signal a need to update the sentence-level representation. While the beta-syntax hypothesis also predicts a beta power decrease for grammatical violations due to a disruption of syntactic unification operations, it instead predicts an increase in beta power for the object-relative clause condition because syntactic unification at the point of disambiguation becomes more demanding. We observed decreased beta power for both the agreement violation and object-relative clause conditions in typical left hemisphere language regions, which provides compelling support for the beta-maintenance hypothesis. Mid-frontal theta power effects were also present for grammatical violations and object-relative clause sentences, suggesting that violations and unexpected sentence interpretations are registered as conflicts by the brain's domain-general error detection system.

    Additional information

    data
  • Liesenfeld, A., Lopez, A., & Dingemanse, M. (2023). Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators. In CUI '23: Proceedings of the 5th International Conference on Conversational User Interfaces. doi:10.1145/3571884.3604316.

    Abstract

    Large language models that exhibit instruction-following behaviour represent one of the biggest recent upheavals in conversational interfaces, a trend in large part fuelled by the release of OpenAI's ChatGPT, a proprietary large language model for text generation fine-tuned through reinforcement learning from human feedback (LLM+RLHF). We review the risks of relying on proprietary software and survey the first crop of open-source projects of comparable architecture and functionality. The main contribution of this paper is to show that openness is differentiated, and to offer scientific documentation of degrees of openness in this fast-moving field. We evaluate projects in terms of openness of code, training data, model weights, RLHF data, licensing, scientific documentation, and access methods. We find that while there is a fast-growing list of projects billing themselves as 'open source', many inherit undocumented data of dubious legality, few share the all-important instruction-tuning (a key site where human labour is involved), and careful scientific documentation is exceedingly rare. Degrees of openness are relevant to fairness and accountability at all points, from data collection and curation to model architecture, and from training and fine-tuning to release and deployment.
  • Liesenfeld, A., Lopez, A., & Dingemanse, M. (2023). The timing bottleneck: Why timing and overlap are mission-critical for conversational user interfaces, speech recognition and dialogue systems. In Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDial 2023). doi:10.18653/v1/2023.sigdial-1.45.

    Abstract

    Speech recognition systems are a key intermediary in voice-driven human-computer interaction. Although speech recognition works well for pristine monologic audio, real-life use cases in open-ended interactive settings still present many challenges. We argue that timing is mission-critical for dialogue systems, and evaluate 5 major commercial ASR systems for their conversational and multilingual support. We find that word error rates for natural conversational data in 6 languages remain abysmal, and that overlap remains a key challenge (study 1). This impacts especially the recognition of conversational words (study 2), and in turn has dire consequences for downstream intent recognition (study 3). Our findings help to evaluate the current state of conversational ASR, contribute towards multidimensional error analysis and evaluation, and identify phenomena that need most attention on the way to build robust interactive speech technologies.
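
    As background for the word error rates discussed above: WER is the word-level edit distance (substitutions, deletions, and insertions) between a reference transcript and the ASR hypothesis, divided by the number of reference words. The function below is a minimal generic sketch of that metric, not the authors' evaluation pipeline.

    ```python
    def wer(reference: str, hypothesis: str) -> float:
        """Word error rate via dynamic-programming edit distance over words."""
        ref, hyp = reference.split(), hypothesis.split()
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution / match
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    # One deletion ("are") and one insertion ("yeah") over 5 reference words -> 0.4
    print(wer("so are you coming tonight", "so you coming tonight yeah"))
    ```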
  • Lingwood, J., Lampropoulou, S., De Bezena, C., Billington, J., & Rowland, C. F. (2023). Children’s engagement and caregivers’ use of language-boosting strategies during shared book reading: A mixed methods approach. Journal of Child Language, 50(6), 1436-1458. doi:10.1017/S0305000922000290.

    Abstract

    For shared book reading to be effective for language development, the adult and child need to be highly engaged. The current paper adopted a mixed-methods approach to investigate caregivers’ language-boosting behaviours and children’s engagement during shared book reading. The results revealed there were more instances of joint attention and caregivers’ use of prompts during moments of higher engagement. However, instances of most language-boosting behaviours were similar across episodes of higher and lower engagement. Qualitative analysis assessing the link between children’s engagement and caregivers’ use of speech acts revealed that speech acts do seem to contribute to high engagement, in combination with other aspects of the interaction.
  • Lumaca, M., Bonetti, L., Brattico, E., Baggio, G., Ravignani, A., & Vuust, P. (2023). High-fidelity transmission of auditory symbolic material is associated with reduced right–left neuroanatomical asymmetry between primary auditory regions. Cerebral Cortex, 33(11), 6902-6919. doi:10.1093/cercor/bhad009.

    Abstract

    The intergenerational stability of auditory symbolic systems, such as music, is thought to rely on brain processes that allow the faithful transmission of complex sounds. Little is known about the functional and structural aspects of the human brain which support this ability, with a few studies pointing to the bilateral organization of auditory networks as a putative neural substrate. Here, we further tested this hypothesis by examining the role of left–right neuroanatomical asymmetries between auditory cortices. We collected neuroanatomical images from a large sample of participants (nonmusicians) and analyzed them with Freesurfer’s surface-based morphometry method. Weeks after scanning, the same individuals participated in a laboratory experiment that simulated music transmission: the signaling games. We found that high accuracy in the intergenerational transmission of an artificial tone system was associated with reduced rightward asymmetry of cortical thickness in Heschl’s sulcus. Our study suggests that the high-fidelity copying of melodic material may rely on the extent to which computational neuronal resources are distributed across hemispheres. Our data further support the role of interhemispheric brain organization in the cultural transmission and evolution of auditory symbolic systems.
  • Lutte, G., Sarti, S., & Kempen, G. (1971). Le moi idéal de l'adolescent: Recherche génétique, différentielle et culturelle dans sept pays d'Europe. Bruxelles: Dessart.
  • Mak, M., Faber, M., & Willems, R. M. (2023). Different kinds of simulation during literary reading: Insights from a combined fMRI and eye-tracking study. Cortex, 162, 115-135. doi:10.1016/j.cortex.2023.01.014.

    Abstract

    Mental simulation is an important aspect of narrative reading. In a previous study, we found that gaze durations are differentially impacted by different kinds of mental simulation. Motor simulation, perceptual simulation, and mentalizing as elicited by literary short stories influenced eye movements in distinguishable ways (Mak & Willems, 2019). In the current study, we investigated the existence of a common neural locus for these different kinds of simulation. We additionally investigated whether individual differences during reading, as indexed by the eye movements, are reflected in domain-specific activations in the brain. We found a variety of brain areas activated by simulation-eliciting content, both modality-specific brain areas and a general simulation area. Individual variation in percent signal change in activated areas was related to measures of story appreciation as well as personal characteristics (i.e., transportability, perspective taking). Taken together, these findings suggest that mental simulation is supported by both domain-specific processes grounded in previous experiences, and by the neural mechanisms that underlie higher-order language processing (e.g., situation model building, event indexing, integration).

    Additional information

    figures localizer tasks appendix C1
  • Mamus, E., Speed, L. J., Rissman, L., Majid, A., & Özyürek, A. (2023). Lack of visual experience affects multimodal language production: Evidence from congenitally blind and sighted people. Cognitive Science, 47(1): e13228. doi:10.1111/cogs.13228.

    Abstract

    The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparison of blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime experience of being blind versus the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in the multimodal language and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for the claims that language processes are deeply rooted in our sensory experiences.
  • Mamus, E., Speed, L., Özyürek, A., & Majid, A. (2023). The effect of input sensory modality on the multimodal encoding of motion events. Language, Cognition and Neuroscience, 38(5), 711-723. doi:10.1080/23273798.2022.2141282.

    Abstract

    Each sensory modality has different affordances: vision has higher spatial acuity than audition, whereas audition has better temporal acuity. This may have consequences for the encoding of events and its subsequent multimodal language production—an issue that has received relatively little attention to date. In this study, we compared motion events presented as audio-only, visual-only, or multimodal (visual + audio) input and measured speech and co-speech gesture depicting path and manner of motion in Turkish. Input modality affected speech production. Speakers with audio-only input produced more path descriptions and fewer manner descriptions in speech compared to speakers who received visual input. In contrast, the type and frequency of gestures did not change across conditions. Path-only gestures dominated throughout. Our results suggest that while speech is more susceptible to auditory vs. visual input in encoding aspects of motion events, gesture is less sensitive to such differences.

    Additional information

    Supplemental material
  • Manhardt, F., Brouwer, S., Van Wijk, E., & Özyürek, A. (2023). Word order preference in sign influences speech in hearing bimodal bilinguals but not vice versa: Evidence from behavior and eye-gaze. Bilingualism: Language and Cognition, 26(1), 48-61. doi:10.1017/S1366728922000311.

    Abstract

    We investigated cross-modal influences between speech and sign in hearing bimodal bilinguals, proficient in a spoken and a sign language, and their consequences for visual attention during message preparation, using eye-tracking. We focused on spatial expressions in which sign languages, unlike spoken languages, have a modality-driven preference to mention grounds (big objects) prior to figures (smaller objects). We compared hearing bimodal bilinguals’ spatial expressions and visual attention in Dutch and Dutch Sign Language (N = 18) to those of their hearing non-signing (N = 20) and deaf signing peers (N = 18). In speech, hearing bimodal bilinguals expressed more ground-first descriptions and fixated grounds more than hearing non-signers, showing influence from sign. In sign, they used as many ground-first descriptions as deaf signers and fixated grounds equally often, demonstrating no influence from speech. Cross-linguistic influence of word order preference and visual attention in hearing bimodal bilinguals thus appears to be one-directional, modulated by modality-driven differences.
  • Maskalenka, K., Alagöz, G., Krueger, F., Wright, J., Rostovskaya, M., Nakhuda, A., Bendall, A., Krueger, C., Walker, S., Scally, A., & Rugg-Gunn, P. J. (2023). NANOGP1, a tandem duplicate of NANOG, exhibits partial functional conservation in human naïve pluripotent stem cells. Development, 150(2): dev201155. doi:10.1242/dev.201155.

    Abstract

    Gene duplication events can drive evolution by providing genetic material for new gene functions, and they create opportunities for diverse developmental strategies to emerge between species. To study the contribution of duplicated genes to human early development, we examined the evolution and function of NANOGP1, a tandem duplicate of the transcription factor NANOG. We found that NANOGP1 and NANOG have overlapping but distinct expression profiles, with high NANOGP1 expression restricted to early epiblast cells and naïve-state pluripotent stem cells. Sequence analysis and epitope-tagging revealed that NANOGP1 is protein coding with an intact homeobox domain. The duplication that created NANOGP1 occurred earlier in primate evolution than previously thought and has been retained only in great apes, whereas Old World monkeys have disabled the gene in different ways, including homeodomain point mutations. NANOGP1 is a strong inducer of naïve pluripotency; however, unlike NANOG, it is not required to maintain the undifferentiated status of human naïve pluripotent cells. By retaining expression, sequence and partial functional conservation with its ancestral copy, NANOGP1 exemplifies how gene duplication and subfunctionalisation can contribute to transcription factor activity in human pluripotency and development.
  • Mazzini, S., Holler, J., & Drijvers, L. (2023). Studying naturalistic human communication using dual-EEG and audio-visual recordings. STAR Protocols, 4(3): 102370. doi:10.1016/j.xpro.2023.102370.

    Abstract

    We present a protocol to study naturalistic human communication using dual-EEG and audio-visual recordings. We describe preparatory steps for data collection, including setup preparation, experiment design, and piloting. We then describe the data collection process in detail, which consists of participant recruitment, experiment room preparation, and data collection. We also outline the kinds of research questions that can be addressed with the current protocol, including several analysis possibilities, from conversational analyses to advanced time-frequency analyses.
    For complete details on the use and execution of this protocol, please refer to Drijvers and Holler (2022).
  • McConnell, K. (2023). Individual differences in holistic and compositional language processing. Journal of Cognition, 6. doi:10.5334/joc.283.

    Abstract

    Individual differences in cognitive abilities are ubiquitous across the spectrum of proficient language users. Although speakers differ with regard to their memory capacity, ability to inhibit distraction, and ability to shift between different processing levels, comprehension is generally successful. However, this does not mean it is identical across individuals; listeners and readers may rely on different processing strategies to exploit distributional information in the service of efficient understanding. In the following psycholinguistic reading experiment, we investigate potential sources of individual differences in the processing of co-occurring words. Participants read modifier-noun bigrams like absolute silence in a self-paced reading task. Backward transition probability (BTP) between the two lexemes was used to quantify the prominence of the bigram as a whole in comparison to the frequency of its parts. Of five individual difference measures (processing speed, verbal working memory, cognitive inhibition, global-local scope shifting, and personality), two proved to be significantly associated with the effect of BTP on reading times. Participants who could inhibit a distracting global environment in order to more efficiently retrieve a single part, and those who preferred the local level in the shifting task, showed greater effects of the co-occurrence probability of the parts. We conclude that some participants are more likely to retrieve bigrams via their parts and their co-occurrence statistics, whereas others more readily retrieve the two words together as a single chunked unit.
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in infancy research: Vol. 12 (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McLean, B., Dunn, M., & Dingemanse, M. (2023). Two measures are better than one: Combining iconicity ratings and guessing experiments for a more nuanced picture of iconicity in the lexicon. Language and Cognition, 15(4), 719-739. doi:10.1017/langcog.2023.9.

    Abstract

    Iconicity in language is receiving increased attention from many fields, but our understanding of iconicity is only as good as the measures we use to quantify it. We collected iconicity measures for 304 Japanese words from English-speaking participants, using rating and guessing tasks. The words included ideophones (structurally marked depictive words) along with regular lexical items from similar semantic domains (e.g., fuwafuwa ‘fluffy’, yawarakai ‘soft’). The two measures correlated, speaking to their validity. However, ideophones received consistently higher iconicity ratings than other items, even when guessed at the same accuracies, suggesting the rating task is more sensitive to cues like structural markedness that frame words as iconic. These cues did not always guide participants to the meanings of ideophones in the guessing task, but they did make them more confident in their guesses, even when they were wrong. Consistently poor guessing results reflect the role different experiences play in shaping construals of iconicity. Using multiple measures in tandem allows us to explore the interplay between iconicity and these external factors. To facilitate this, we introduce a reproducible workflow for creating rating and guessing tasks from standardised wordlists, while also making improvements to the robustness, sensitivity and discriminability of previous approaches.
  • McQueen, J. M., Norris, D., & Cutler, A. (2001). Can lexical knowledge modulate prelexical representations over time? In R. Smits, J. Kingston, T. Neary, & R. Zondervan (Eds.), Proceedings of the workshop on Speech Recognition as Pattern Classification (SPRAAC) (pp. 145-150). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    The results of a study on perceptual learning are reported. Dutch subjects made lexical decisions on a list of words and nonwords. Embedded in the list were either [f]- or [s]-final words in which the final fricative had been replaced by an ambiguous sound, midway between [f] and [s]. One group of listeners heard ambiguous [f]-final Dutch words like [kara?] (based on karaf, carafe) and unambiguous [s]-final words (e.g., karkas, carcase). A second group heard the reverse (e.g., ambiguous [karka?] and unambiguous karaf). After this training phase, listeners labelled ambiguous fricatives on an [f]-[s] continuum. The subjects who had heard [?] in [f]-final words categorised these fricatives as [f] reliably more often than those who had heard [?] in [s]-final words. These results suggest that speech recognition is dynamic: the system adjusts to the constraints of each particular listening situation. The lexicon can provide this adjustment process with a training signal.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Cutler, A. (Eds.). (2001). Spoken word access processes. Hove, UK: Psychology Press.
  • McQueen, J. M., & Cutler, A. (2001). Spoken word access processes: An introduction. Language and Cognitive Processes, 16, 469-490. doi:10.1080/01690960143000209.

    Abstract

    We introduce the papers in this special issue by summarising the current major issues in spoken word recognition. We argue that a full understanding of the process of lexical access during speech comprehension will depend on resolving several key representational issues: what is the form of the representations used for lexical access; how is phonological information coded in the mental lexicon; and how is the morphological and semantic information about each word stored? We then discuss a number of distinct access processes: competition between lexical hypotheses; the computation of goodness-of-fit between the signal and stored lexical knowledge; segmentation of continuous speech; whether the lexicon influences prelexical processing through feedback; and the relationship of form-based processing to the processes responsible for deriving an interpretation of a complete utterance. We conclude that further progress may well be made by swapping ideas among the different sub-domains of the discipline.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • McQueen, J. M., Otake, T., & Cutler, A. (2001). Rhythmic cues and possible-word constraints in Japanese speech segmentation. Journal of Memory and Language, 45, 103-132. doi:10.1006/jmla.2000.2763.

    Abstract

    In two word-spotting experiments, Japanese listeners detected Japanese words faster in vowel contexts (e.g., agura, to sit cross-legged, in oagura) than in consonant contexts (e.g., tagura). In the same experiments, however, listeners spotted words in vowel contexts (e.g., saru, monkey, in sarua) no faster than in moraic nasal contexts (e.g., saruN). In a third word-spotting experiment, words like uni, sea urchin, followed contexts consisting of a consonant-consonant-vowel mora (e.g., gya) plus either a moraic nasal (gyaNuni), a vowel (gyaouni) or a consonant (gyabuni). Listeners spotted words as easily in the first as in the second context (where in each case the target words were aligned with mora boundaries), but found it almost impossible to spot words in the third (where there was a single consonant, such as the [b] in gyabuni, between the beginning of the word and the nearest preceding mora boundary). Three control experiments confirmed that these effects reflected the relative ease of segmentation of the words from their contexts. We argue that the listeners showed sensitivity to the viability of sound sequences as possible Japanese words in the way that they parsed the speech into words. Since single consonants are not possible Japanese words, the listeners avoided lexical parses including single consonants and thus had difficulty recognizing words in the consonant contexts. Even though moraic nasals are also impossible words, they were not difficult segmentation contexts because, as with the vowel contexts, the mora boundaries between the contexts and the target words signaled likely word boundaries. Moraic rhythm appears to provide Japanese listeners with important segmentation cues.
  • McQueen, J. M., Jesse, A., & Mitterer, H. (2023). Lexically mediated compensation for coarticulation still as elusive as a white christmash. Cognitive Science, 47(9): e13342. doi:10.1111/cogs.13342.

    Abstract

    Luthra, Peraza-Santiago, Beeson, Saltzman, Crinnion, and Magnuson (2021) present data from the lexically mediated compensation for coarticulation paradigm that they claim provides conclusive evidence in favor of top-down processing in speech perception. We argue here that this evidence does not support that conclusion. The findings are open to alternative explanations, and we give data in support of one of them (that there is an acoustic confound in the materials). Lexically mediated compensation for coarticulation thus remains elusive, while prior data from the paradigm instead challenge the idea that there is top-down processing in online speech recognition.

    Additional information

    supplementary materials
  • Mehler, J., & Cutler, A. (1990). Psycholinguistic implications of phonological diversity among languages. In M. Piattelli-Palmerini (Ed.), Cognitive science in Europe: Issues and trends (pp. 119-134). Rome: Golem.
  • Mehta, G., & Cutler, A. (1988). Detection of target phonemes in spontaneous and read speech. Language and Speech, 31, 135-156.

    Abstract

    Although spontaneous speech occurs more frequently in most listeners’ experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalize to the recognition of spontaneous speech. In the present study, listeners were presented with both spontaneous and read speech materials, and their response time to detect word-initial target phonemes was measured. Responses were, overall, equally fast in each speech mode. However, analysis of effects previously reported in phoneme detection studies revealed significant differences between speech modes. In read speech but not in spontaneous speech, later targets were detected more rapidly than earlier targets, and targets preceded by long words were detected more rapidly than targets preceded by short words. In contrast, in spontaneous speech but not in read speech, targets were detected more rapidly in accented than in unaccented words and in strong than in weak syllables. An explanation for this pattern is offered in terms of characteristic prosodic differences between spontaneous and read speech. The results support the claim from previous work that listeners pay great attention to prosodic information in the process of recognizing speech.
