Publications

  • Levelt, W. J. M. (2004). Een huis voor kunst en wetenschap. Boekman: Tijdschrift voor Kunst, Cultuur en Beleid, 16(58/59), 212-215.
  • Levelt, W. J. M. (1989). Speaking: From intention to articulation. Cambridge: The MIT Press.
  • Levelt, W. J. M. (1989). De connectionistische mode: Symbolische en subsymbolische modellen van het menselijk gedrag. In C. M. Brown, P. Hagoort, & T. Meijering (Eds.), Vensters op de geest: Cognitie op het snijvlak van filosofie en psychologie (pp. 202-219). Utrecht: Stichting Grafiet.
  • Levelt, W. J. M. (2001). De vlieger die (onverwacht) wel opgaat. Natuur & Techniek, 69(6), 60.
  • Levelt, W. J. M. (2001). Defining dyslexia. Science, 292, 1300-1301.
  • Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10(5), 553-567. doi:10.1162/089892998562960.

    Abstract

    The purpose of this study was to relate a psycholinguistic processing model of picture naming to the dynamics of cortical activation during picture naming. The activation was recorded from eight Dutch subjects with a whole-head neuromagnetometer. The processing model, based on extensive naming latency studies, is a stage model. In preparing a picture's name, the speaker performs a chain of specific operations. They are, in this order, computing the visual percept, activating an appropriate lexical concept, selecting the target word from the mental lexicon, phonological encoding, phonetic encoding, and initiation of articulation. The time windows for each of these operations are reasonably well known and could be related to the peak activity of dipole sources in the individual magnetic response patterns. The analyses showed a clear progression over these time windows from early occipital activation, via parietal and temporal to frontal activation. The major specific findings were that (1) a region in the left posterior temporal lobe, agreeing with the location of Wernicke's area, showed prominent activation starting about 200 msec after picture onset and peaking at about 350 msec, (i.e., within the stage of phonological encoding), and (2) a consistent activation was found in the right parietal cortex, peaking at about 230 msec after picture onset, thus preceding and partly overlapping with the left temporal response. An interpretation in terms of the management of visual attention is proposed.
  • Levelt, W. J. M. (1974). Formal grammars in linguistics and psycholinguistics: Vol.III, Psycholinguistic applications. The Hague: Mouton.
  • Levelt, W. J. M. (1974). Formal grammars in linguistics and psycholinguistics: Vol. I, An introduction to the theory of formal languages and automata. The Hague: Mouton Publishers.
  • Levelt, W. J. M. (1974). Formal grammars in linguistics and psycholinguistics: Vol.II, Applications in linguistic theory. The Hague: Mouton.
  • Levelt, W. J. M. (1983). Monitoring and self-repair in speech. Cognition, 14, 41-104. doi:10.1016/0010-0277(83)90026-4.

    Abstract

    Making a self-repair in speech typically proceeds in three phases. The first phase involves the monitoring of one’s own speech and the interruption of the flow of speech when trouble is detected. From an analysis of 959 spontaneous self-repairs it appears that interrupting follows detection promptly, with the exception that correct words tend to be completed. Another finding is that detection of trouble improves towards the end of constituents. The second phase is characterized by hesitation, pausing, but especially the use of so-called editing terms. Which editing term is used depends on the nature of the speech trouble in a rather regular fashion: Speech errors induce other editing terms than words that are merely inappropriate, and trouble which is detected quickly by the speaker is preferably signalled by the use of ‘uh’. The third phase consists of making the repair proper. The linguistic well-formedness of a repair is not dependent on the speaker’s respecting the integrity of constituents, but on the structural relation between original utterance and repair. A bi-conditional well-formedness rule links this relation to a corresponding relation between the conjuncts of a coordination. It is suggested that a similar relation holds also between question and answer. In all three cases the speaker respects certain structural commitments derived from an original utterance. It was finally shown that the editing term plus the first word of the repair proper almost always contain sufficient information for the listener to decide how the repair should be related to the original utterance. Speakers almost never produce misleading information in this respect. It is argued that speakers have little or no access to their speech production process; self-monitoring is probably based on parsing one’s own inner or overt speech.
  • Levelt, W. J. M. (2004). Language. In G. Adelman, & B. H. Smith (Eds.), Elsevier's encyclopedia of neuroscience [CD-ROM] (3rd ed.). Amsterdam: Elsevier.
  • Levelt, W. J. M. (1989). Hochleistung in Millisekunden: Sprechen und Sprache verstehen. Universitas, 44(511), 56-68.
  • Levelt, W. J. M., & Schiller, N. O. (1998). Is the syllable frame stored? [Commentary on the BBS target article 'The frame/content theory of evolution of speech production' by Peter F. MacNeilage]. Behavioral and Brain Sciences, 21, 520.

    Abstract

    This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
  • Levelt, W. J. M. (1974). J.B. Carroll & R. Freedle (eds.), Language comprehension and the acquisition of knowledge [Book review]. The Quarterly Journal of Experimental Psychology, 26(2), 325-326. doi:10.1080/14640747408400419.
  • Levelt, W. J. M. (1967). Note on the distribution of dominance times in binocular rivalry. British Journal of Psychology, 58, 143-145.
  • Levelt, W. J. M. (2020). On becoming a physicist of mind. Annual Review of Linguistics, 6(1), 1-23. doi:10.1146/annurev-linguistics-011619-030256.

    Abstract

    In 1976, the German Max Planck Society established a new research enterprise in psycholinguistics, which became the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands. I was fortunate enough to be invited to direct this institute. It enabled me, with my background in visual and auditory psychophysics and the theory of formal grammars and automata, to develop a long-term chronometric endeavor to dissect the process of speaking. It led, among other work, to my book Speaking (1989) and to my research team's article in Brain and Behavioral Sciences “A Theory of Lexical Access in Speech Production” (1999). When I later became president of the Royal Netherlands Academy of Arts and Sciences, I helped initiate the Women for Science research project of the Inter Academy Council, a project chaired by my physicist sister at the National Institute of Standards and Technology. As an emeritus I published a comprehensive History of Psycholinguistics (2013). As will become clear, many people inspired and joined me in these undertakings.
  • Levelt, W. J. M. (1967). Over het waarnemen van zinnen [Inaugural lecture]. Groningen: Wolters.
  • Levelt, W. J. M., & Cutler, A. (1983). Prosodic marking in speech repair. Journal of Semantics, 2, 205-217. doi:10.1093/semant/2.2.205.

    Abstract

    Spontaneous self-corrections in speech pose a communication problem; the speaker must make clear to the listener not only that the original utterance was faulty, but where it was faulty and how the fault is to be corrected. Prosodic marking of corrections - making the prosody of the repair noticeably different from that of the original utterance - offers a resource which the speaker can exploit to provide the listener with such information. A corpus of more than 400 spontaneous speech repairs was analysed, and the prosodic characteristics compared with the syntactic and semantic characteristics of each repair. Prosodic marking showed no relationship at all with the syntactic characteristics of repairs. Instead, marking was associated with certain semantic factors: repairs were marked when the original utterance had been actually erroneous, rather than simply less appropriate than the repair; and repairs tended to be marked more often when the set of items encompassing the error and the repair was small rather than when it was large. These findings lend further weight to the characterization of accent as essentially semantic in function.
  • Levelt, W. J. M. (2001). Relations between speech production and speech perception: Some behavioral and neurological observations. In E. Dupoux (Ed.), Language, brain and cognitive development: Essays in honour of Jacques Mehler (pp. 241-256). Cambridge, MA: MIT Press.
  • Levelt, W. J. M. (2020). The alpha and omega of Jerome Bruner's contributions to the Max Planck Institute for Psycholinguistics. In M. E. Poulsen (Ed.), The Jerome Bruner Library: From New York to Nijmegen (pp. 11-18). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    Presentation at the official opening of the Jerome Bruner Library, January 8th, 2020.
  • Levelt, W. J. M. (2001). Spoken word production: A theory of lexical access. Proceedings of the National Academy of Sciences, 98, 13464-13471. doi:10.1073/pnas.231459498.

    Abstract

    A core operation in speech production is the preparation of words from a semantic base. The theory of lexical access reviewed in this article covers a sequence of processing stages beginning with the speaker’s focusing on a target concept and ending with the initiation of articulation. The initial stages of preparation are concerned with lexical selection, which is zooming in on the appropriate lexical item in the mental lexicon. The following stages concern form encoding, i.e., retrieving a word’s morphemic phonological codes, syllabifying the word, and accessing the corresponding articulatory gestures. The theory is based on chronometric measurements of spoken word production, obtained, for instance, in picture-naming tasks. The theory is largely computationally implemented. It provides a handle on the analysis of multiword utterance production as well as a guide to the analysis and design of neuroimaging studies of spoken utterance production.
  • Levelt, W. J. M. (1974). Taalpsychologie: Van taalkunde naar psychologie. In Herstal-Conferentie.
  • Levelt, W. J. M. (1983). The speaker's organization of discourse. In Proceedings of the XIIIth International Congress of Linguists (pp. 278-290).
  • Levelt, W. J. M. (1998). The genetic perspective in psycholinguistics, or: Where do spoken words come from? Journal of Psycholinguistic Research, 27(2), 167-180. doi:10.1023/A:1023245931630.

    Abstract

    The core issue in the 19th-century sources of psycholinguistics was the question, "Where does language come from?" This genetic perspective unified the study of the ontogenesis, the phylogenesis, the microgenesis, and to some extent the neurogenesis of language. This paper makes the point that this original perspective is still a valid and attractive one. It is exemplified by a discussion of the genesis of spoken words.
  • Levelt, W. J. M. (1983). Wetenschapsbeleid: Drie actuele idolen en een godin. Grafiet, 1(4), 178-184.
  • Levelt, W. J. M. (2001). Woorden ophalen. Natuur en Techniek, 69(10), 74.
  • Levelt, W. J. M. (1989). Working models of perception: Five general issues. In B. A. Elsendoorn, & H. Bouma (Eds.), Working models of perception (pp. 489-503). London: Academic Press.
  • Levinson, S. C. (2004). Significados presumibles [Spanish translation of Presumptive meanings]. Madrid: Bibliotheca Románica Hispánica.
  • Levinson, S. C. (2020). On technologies of the intellect: Goody Lecture 2020. Halle: Max Planck Institute for Social Anthropology.
  • Levinson, S. C. (2001). Motion Verb Stimulus (Moverb) version 2. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 9-13). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513706.

    Abstract

    How do languages express ideas of movement, and how do they package different components of this domain, such as manner and path of motion? This task uses one large set of stimuli to gain knowledge of certain key aspects of motion verb meanings in the target language, and expands the investigation beyond simple verbs (e.g., go) to include the semantics of motion predications complete with adjuncts (e.g., go across something). Consultants are asked to view and briefly describe 96 animations of a few seconds each. The task is designed to get linguistic elicitations of motion predications under contrastive comparison with other animations in the same set. Unlike earlier tasks, the stimuli focus on inanimate moving items or “figures” (in this case, a ball).
  • Levinson, S. C. (1989). A review of Relevance [book review of Dan Sperber & Deirdre Wilson, Relevance: communication and cognition]. Journal of Linguistics, 25, 455-472.
  • Levinson, S. C. (1989). Conversation. In E. Barnouw (Ed.), International encyclopedia of communications (pp. 407-410). New York: Oxford University Press.
  • Levinson, S. C. (2001). Covariation between spatial language and cognition. In M. Bowerman, & S. C. Levinson (Eds.), Language acquisition and conceptual development (pp. 566-588). Cambridge: Cambridge University Press.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C., Kita, S., & Ozyurek, A. (2001). Demonstratives in context: Comparative handicrafts. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 52-54). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874663.

    Abstract

    Demonstratives (e.g., words such as this and that in English) pivot on relationships between the item being talked about, and features of the speech act situation (e.g., where the speaker and addressee are standing or looking). However, they are only rarely investigated multi-modally, in natural language contexts. This task is designed to build a video corpus of cross-linguistically comparable discourse data for the study of “deixis in action”, while simultaneously supporting the investigation of joint attention as a factor in speaker selection of demonstratives. In the task, two or more speakers are asked to discuss and evaluate a group of similar items (e.g., examples of local handicrafts, tools, produce) that are placed within a relatively defined space (e.g., on a table). The task can additionally provide material for comparison of pointing gesture practices.
  • Levinson, S. C., Bohnemeyer, J., & Enfield, N. J. (2001). “Time and space” questionnaire for “space in thinking” subproject. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 14-20). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    This entry contains: 1. An invitation to think about to what extent the grammar of space and time share lexical and morphosyntactic resources − the suggestions here are only prompts, since it would take a long questionnaire to fully explore this; 2. A suggestion about how to collect gestural data that might show us to what extent the spatial and temporal domains have a psychological continuity. This is really the goal − but you need to do the linguistic work first or in addition. The goal of this task is to explore the extent to which time is conceptualised on a spatial basis.
  • Levinson, S. C. (2004). Deixis. In L. Horn (Ed.), The handbook of pragmatics (pp. 97-121). Oxford: Blackwell.
  • Levinson, S. C., & Enfield, N. J. (Eds.). (2001). Manual for the field season 2001. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C. (2001). Maxim. In S. Duranti (Ed.), Key terms in language and culture (pp. 139-142). Oxford: Blackwell.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C., Enfield, N. J., & Senft, G. (2001). Kinship domain for 'space in thinking' subproject. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 85-88). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874655.
  • Levinson, S. C., & Wittenburg, P. (2001). Language as cultural heritage - Promoting research and public awareness on the Internet. In J. Renn (Ed.), ECHO - An Infrastructure to Bring European Cultural Heritage Online (pp. 104-111). Berlin: Max Planck Institute for the History of Science.

    Abstract

    The ECHO proposal aims to bring to life the cultural heritage of Europe, through internet technology that encourages collaboration across the Humanities disciplines which interpret it – at the same time making all this scholarship accessible to the citizens of Europe. An essential part of the cultural heritage of Europe is the diverse set of languages used on the continent, in their historical, literary and spoken forms. Amongst these are the ‘hidden languages’ used by minorities but of wide interest to the general public. We take the 18 Sign Languages of the EEC – the natural languages of the deaf - as an example. Little comparative information about these is available, despite their special scientific importance, the widespread public interest and the policy implications. We propose a research project on these languages based on placing fully annotated digitized moving images of each of these languages on the internet. This requires significant development of multi-media technology which would allow distributed annotation of a central corpus, together with the development of special search techniques. The technology would have widespread application to all cultural performances recorded as sound plus moving images. Such a project captures in microcosm the essence of the ECHO proposal: cultural heritage is nothing without the humanities research which contextualizes and gives it comparative assessment; by marrying information technology to humanities research, we can bring these materials to a wider public while simultaneously boosting Europe as a research area.
  • Levinson, S. C., Kita, S., & Enfield, N. J. (2001). Locally-anchored narrative. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 147). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874660.

    Abstract

    As for 'Locally-anchored spatial gestures task, version 2', a major goal of this task is to elicit locally-anchored spatial gestures across different cultures. “Locally-anchored spatial gestures” are gestures that are roughly oriented to the actual geographical direction of referents. Rather than set up an interview situation, this task involves recording informal, animated narrative delivered to a native-speaker interlocutor. Locally-anchored gestures produced in such narrative are roughly comparable to those collected in the interview task. The data collected can also be used to investigate a wide range of other topics.
  • Levinson, S. C. (2001). Space: Linguistic expression. In N. Smelser, & P. Baltes (Eds.), International Encyclopedia of Social and Behavioral Sciences: Vol. 22 (pp. 14749-14752). Oxford: Pergamon.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C. (2001). Place and space in the sculpture of Anthony Gormley - An anthropological perspective. In S. D. McElroy (Ed.), Some of the facts (pp. 68-109). St Ives: Tate Gallery.
  • Levinson, S. C. (1989). Pragmática [Spanish translation]. Barcelona: Teide.
  • Levinson, S. C. (2001). Pragmatics. In N. Smelser, & P. Baltes (Eds.), International Encyclopedia of Social and Behavioral Sciences: Vol. 17 (pp. 11948-11954). Oxford: Pergamon.
  • Levinson, S. C. (1983). Pragmatics. Cambridge: Cambridge University Press.
  • Levinson, S. C., & Enfield, N. J. (2001). Preface and priorities. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 3). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levshina, N. (2020). How tight is your language? A semantic typology based on Mutual Information. In K. Evang, L. Kallmeyer, R. Ehren, S. Petitjean, E. Seyffarth, & D. Seddah (Eds.), Proceedings of the 19th International Workshop on Treebanks and Linguistic Theories (pp. 70-78). Düsseldorf, Germany: Association for Computational Linguistics. doi:10.18653/v1/2020.tlt-1.7.

    Abstract

    Languages differ in the degree of semantic flexibility of their syntactic roles. For example, English and Indonesian are considered more flexible with regard to the semantics of subjects, whereas German and Japanese are less flexible. In Hawkins’ classification, more flexible languages are said to have a loose fit, and less flexible ones are those that have a tight fit. This classification has been based on manual inspection of example sentences. The present paper proposes a new, quantitative approach to deriving the measures of looseness and tightness from corpora. We use corpora of online news from the Leipzig Corpora Collection in thirty typologically and genealogically diverse languages and parse them syntactically with the help of the Universal Dependencies annotation software. Next, we compute Mutual Information scores for each language using the matrices of lexical lemmas and four syntactic dependencies (intransitive subjects, transitive subjects, objects and obliques). The new approach allows us not only to reproduce the results of previous investigations, but also to extend the typology to new languages. We also demonstrate that verb-final languages tend to have a tighter relationship between lexemes and syntactic roles, which helps language users to recognize thematic roles early during comprehension.

  • Levshina, N. (2020). Efficient trade-offs as explanations in functional linguistics: some problems and an alternative proposal. Revista da Abralin, 19(3), 50-78. doi:10.25189/rabralin.v19i3.1728.

    Abstract

    The notion of efficient trade-offs is frequently used in functional linguistics in order to explain language use and structure. In this paper I argue that this notion is more confusing than enlightening. Not every negative correlation between parameters represents a real trade-off. Moreover, trade-offs are usually reported between pairs of variables, without taking into account the role of other factors. These and other theoretical issues are illustrated in a case study of linguistic cues used in expressing “who did what to whom”: case marking, rigid word order and medial verb position. The data are taken from the Universal Dependencies corpora in 30 languages and annotated corpora of online news from the Leipzig Corpora collection. We find that not all cues are correlated negatively, which questions the assumption of language as a zero-sum game. Moreover, the correlations between pairs of variables change when we incorporate the third variable. Finally, the relationships between the variables are not always bi-directional. The study also presents a causal model, which can serve as a more appropriate alternative to trade-offs.
  • Lewis, A. G. (2020). Balancing exogenous and endogenous cortical rhythms for speech and language requires a lot of entraining: A commentary on Meyer, Sun & Martin (2020). Language, Cognition and Neuroscience, 35(9), 1133-1137. doi:10.1080/23273798.2020.1734640.
  • Liang, S., Deng, W., Li, X., Wang, Q., Greenshaw, A. J., Guo, W., Kong, X., Li, M., Zhao, L., Meng, Y., Zhang, C., Yu, H., Li, X.-m., Ma, X., & Li, T. (2020). Aberrant posterior cingulate connectivity classify first-episode schizophrenia from controls: A machine learning study. Schizophrenia Research, 220, 187-193. doi:10.1016/j.schres.2020.03.022.

    Abstract

    Background

    Posterior cingulate cortex (PCC) is a key aspect of the default mode network (DMN). Aberrant PCC functional connectivity (FC) is implicated in schizophrenia, but the potential for PCC related changes as biological classifier of schizophrenia has not yet been evaluated.
    Methods

    We conducted a data-driven approach using resting-state functional MRI data to explore differences in PCC-based region- and voxel-wise FC patterns, to distinguish between patients with first-episode schizophrenia (FES) and demographically matched healthy controls (HC). Discriminative PCC FCs were selected via false discovery rate estimation. A gradient boosting classifier was trained and validated based on 100 FES vs. 93 HC. Subsequently, classification models were tested in an independent dataset of 87 FES patients and 80 HC using resting-state data acquired on a different MRI scanner.
    Results

    Patients with FES had reduced connectivity between PCC and frontal areas, left parahippocampal regions, left anterior cingulate cortex, and right inferior parietal lobule, but hyperconnectivity with left lateral temporal regions. Predictive voxel-wise clusters were similar to region-wise selected brain areas functionally connected with PCC in relation to discriminating FES from HC subject categories. Region-wise analysis of FCs yielded a relatively high predictive level for schizophrenia, with an average accuracy of 72.28% in the independent samples, while selected voxel-wise connectivity yielded an accuracy of 68.72%.
    Conclusion

    FES exhibited a pattern of both increased and decreased PCC-based connectivity, but was related to predominant hypoconnectivity between PCC and brain areas associated with DMN, that may be a useful differential feature revealing underpinnings of neuropathophysiology for schizophrenia.
  • Liao, Y., Flecken, M., Dijkstra, K., & Zwaan, R. A. (2020). Going places in Dutch and Mandarin Chinese: Conceptualising the path of motion cross-linguistically. Language, Cognition and Neuroscience, 35(4), 498-520. doi:10.1080/23273798.2019.1676455.

    Abstract

    We study to what extent linguistic differences in grammatical aspect systems and verb lexicalisation patterns of Dutch and Mandarin Chinese affect how speakers conceptualise the path of motion in motion events, using description and memory tasks. We hypothesised that speakers of the two languages would show different preferences towards the selection of endpoint-, trajectory- or location-information in Endpoint-oriented (not reached) events, whilst showing a similar bias towards encoding endpoints in Endpoint-reached events. Our findings show that (1) groups did not differ in endpoint encoding and memory for both event types; (2) Dutch speakers conceptualised Endpoint-oriented motion focusing on the trajectory, whereas Chinese speakers focused on the location of the moving entity. In addition, we report detailed linguistic patterns of how grammatical aspect, verb semantics and adjuncts containing path-information are combined in the two languages. Results are discussed in relation to typologies of motion expression and event cognition theory.

  • Lindström, E. (2004). Melanesian kinship and culture. In A. Majid (Ed.), Field Manual Volume 9 (pp. 70-73). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1552190.
  • Lingwood, J., Levy, R., Billington, J., & Rowland, C. F. (2020). Barriers and solutions to participation in family-based education interventions. International Journal of Social Research Methodology, 23(2), 185-198. doi:10.1080/13645579.2019.1645377.

    Abstract

    The fact that many sub-populations do not take part in research, especially participants from lower socioeconomic (SES) backgrounds, is a serious problem in education research. To increase the participation of such groups we must discover what social, economic and practical factors prevent participation, and how to overcome these barriers. In the current paper, we review the literature on this topic, before describing a case study that demonstrates four potential solutions to four barriers to participation in a shared reading intervention for families from lower SES backgrounds. We discuss the implications of our findings for family-based interventions more generally, and the difficulty of balancing strategies to encourage participation with adhering to the methodological integrity of a research study.

  • Lingwood, J., Billington, J., & Rowland, C. F. (2020). Evaluating the effectiveness of a ‘real‐world’ shared reading intervention for preschool children and their families: A randomised controlled trial. Journal of Research in Reading, 43(3), 249-271. doi:10.1111/1467-9817.12301.

    Abstract

    Background: Shared reading interventions can impact positively on preschool children’s language development and on their caregiver’s attitudes/behaviours towards reading. However, a number of barriers may discourage families from engaging with these interventions, particularly families from lower socio-economic status (SES) backgrounds. We investigated how families from such backgrounds responded to an intervention designed explicitly to overcome these barriers.
    Methods: In a preregistered cluster randomised controlled trial, 85 lower SES families and their 3-year-old to 4-year-old children from 10 different preschools were randomly allocated to take part in The Reader’s Shared Reading programme (intervention) or an existing ‘Story Time’ group at a library (control) once a week for 8 weeks. Three outcome measures were assessed at baseline and post intervention: (1) attendance, (2) enjoyment of the reading groups and (3) caregivers’ knowledge of, attitudes and behaviours towards reading. A fourth, children’s vocabulary, was assessed at baseline and 4 weeks post intervention.
    Results: Families were significantly more likely to attend the intervention group and rated it more favourably, compared with the control group. However, there were no significant effects on caregivers’ knowledge, attitudes and behaviours or on children’s language.
    Conclusion: The intervention was only successful in engaging families from disadvantaged backgrounds in shared reading. Implications for the use, duration and intensity of shared reading interventions are discussed.

  • Liszkowski, U., Carpenter, M., Henning, A., Striano, T., & Tomasello, M. (2004). Twelve-month-olds point to share attention and interest. Developmental Science, 7(3), 297-307. doi:10.1111/j.1467-7687.2004.00349.x.

    Abstract

    Infants point for various motives. Classically, one such motive is declarative, to share attention and interest with adults to events. Recently, some researchers have questioned whether infants have this motivation. In the current study, an adult reacted to 12-month-olds' pointing in different ways, and infants' responses were observed. Results showed that when the adult shared attention and interest (i.e. alternated gaze and emoted), infants pointed more frequently across trials and tended to prolong each point – presumably to prolong the satisfying interaction. However, when the adult emoted to the infant alone or looked only to the event, infants pointed less across trials and repeated points more within trials – presumably in an attempt to establish joint attention. Results suggest that 12-month-olds point declaratively and understand that others have psychological states that can be directed and shared.
  • Long, M., Vega-Mendoza, M., Rohde, H., Sorace, A., & Bak, T. H. (2020). Understudied factors contributing to variability in cognitive performance related to language learning. Bilingualism: Language and Cognition, 23(4), 801-811. doi:10.1017/S1366728919000749.

    Abstract

    While much of the literature on bilingualism and cognition focuses on group comparisons (monolinguals vs bilinguals or language learners vs controls), here we examine the potential differential effects of intensive language learning on subjects with distinct language experiences and demographic profiles. Using an individual differences approach, we assessed attentional performance from 105 university-educated Gaelic learners aged 21–85. Participants were tested before and after beginner, elementary, and intermediate courses using tasks measuring i.) sustained attention, ii.) inhibition, and iii.) attention switching. We examined the relationship between attentional performance and Gaelic level, previous language experience, gender, and age. Gaelic level predicted attention switching performance: those in higher levels initially outperformed lower levels, however lower levels improved the most. Age also predicted performance: as age increased attention switching decreased. Nevertheless, age did not interact with session for any attentional measure, thus the impact of language learning on cognition was detectable across the lifespan.
  • Long, M., Rohde, H., & Rubio-Fernandez, P. (2020). The pressure to communicate efficiently continues to shape language use later in life. Scientific Reports, 10: 8214. doi:10.1038/s41598-020-64475-6.

    Abstract

    Language use is shaped by a pressure to communicate efficiently, yet the tendency towards redundancy is said to increase in older age. The longstanding assumption is that saying more than is necessary is inefficient and may be driven by age-related decline in inhibition (i.e. the ability to filter out irrelevant information). However, recent work proposes an alternative account of efficiency: In certain contexts, redundancy facilitates communication (e.g., when the colour or size of an object is perceptually salient and its mention aids the listener’s search). A critical question follows: Are older adults indiscriminately redundant, or do they modulate their use of redundant information to facilitate communication? We tested efficiency and cognitive capacities in 200 adults aged 19–82. Irrespective of age, adults with better attention switching skills were redundant in efficient ways, demonstrating that the pressure to communicate efficiently continues to shape language use later in life.

    Additional information

    supplementary table S1 dataset 1
  • Loo, S. K., Fisher, S. E., Francks, C., Ogdie, M. N., MacPhie, I. L., Yang, M., McCracken, J. T., McGough, J. J., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2004). Genome-wide scan of reading ability in affected sibling pairs with attention-deficit/hyperactivity disorder: Unique and shared genetic effects. Molecular Psychiatry, 9, 485-493. doi:10.1038/sj.mp.4001450.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD) and reading disability (RD) are common highly heritable disorders of childhood, which frequently co-occur. Data from twin and family studies suggest that this overlap is, in part, due to shared genetic underpinnings. Here, we report the first genome-wide linkage analysis of measures of reading ability in children with ADHD, using a sample of 233 affected sibling pairs who previously participated in a genome-wide scan for susceptibility loci in ADHD. Quantitative trait locus (QTL) analysis of a composite reading factor defined from three highly correlated reading measures identified suggestive linkage (multipoint maximum lod score, MLS>2.2) in four chromosomal regions. Two regions (16p, 17q) overlap those implicated by our previous genome-wide scan for ADHD in the same sample: one region (2p) provides replication for an RD susceptibility locus, and one region (10q) falls approximately 35 cM from a modestly highlighted region in an independent genome-wide scan of siblings with ADHD. Investigation of an individual reading measure of Reading Recognition supported linkage to putative RD susceptibility regions on chromosome 8p (MLS=2.4) and 15q (MLS=1.38). Thus, the data support the existence of genetic factors that have pleiotropic effects on ADHD and reading ability--as suggested by shared linkages on 16p, 17q and possibly 10q--but also those that appear to be unique to reading--as indicated by linkages on 2p, 8p and 15q that coincide with those previously found in studies of RD. Our study also suggests that reading measures may represent useful phenotypes in ADHD research. The eventual identification of genes underlying these unique and shared linkages may increase our understanding of ADHD, RD and the relationship between the two.
  • MacDonald, K., Räsänen, O., Casillas, M., & Warlaumont, A. S. (2020). Measuring prosodic predictability in children’s home language environments. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 695-701). Montreal, QC: Cognitive Science Society.

    Abstract

    Children learn language from the speech in their home environment. Recent work shows that more infant-directed speech (IDS) leads to stronger lexical development. But what makes IDS a particularly useful learning signal? Here, we expand on an attention-based account first proposed by Räsänen et al. (2018): that prosodic modifications make IDS less predictable, and thus more interesting. First, we reproduce the critical finding from Räsänen et al.: that lab-recorded IDS pitch is less predictable compared to adult-directed speech (ADS). Next, we show that this result generalizes to the home language environment, finding that IDS in daylong recordings is also less predictable than ADS but that this pattern is much less robust than for IDS recorded in the lab. These results link experimental work on attention and prosodic modifications of IDS to real-world language-learning environments, highlighting some challenges of scaling up analyses of IDS to larger datasets that better capture children’s actual input.
  • Macuch Silva, V., Holler, J., Ozyurek, A., & Roberts, S. G. (2020). Multimodality and the origin of a novel communication system in face-to-face interaction. Royal Society Open Science, 7: 182056. doi:10.1098/rsos.182056.

    Abstract

    Face-to-face communication is multimodal at its core: it consists of a combination of vocal and visual signalling. However, current evidence suggests that, in the absence of an established communication system, visual signalling, especially in the form of visible gesture, is a more powerful form of communication than vocalisation, and therefore likely to have played a primary role in the emergence of human language. This argument is based on experimental evidence of how vocal and visual modalities (i.e., gesture) are employed to communicate about familiar concepts when participants cannot use their existing languages. To investigate this further, we introduce an experiment where pairs of participants performed a referential communication task in which they described unfamiliar stimuli in order to reduce reliance on conventional signals. Visual and auditory stimuli were described in three conditions: using visible gestures only, using non-linguistic vocalisations only and given the option to use both (multimodal communication). The results suggest that even in the absence of conventional signals, gesture is a more powerful mode of communication compared to vocalisation, but that there are also advantages to multimodality compared to using gesture alone. Participants with an option to produce multimodal signals had comparable accuracy to those using only gesture, but gained an efficiency advantage. The analysis of the interactions between participants showed that interactants developed novel communication systems for unfamiliar stimuli by deploying different modalities flexibly to suit their needs and by taking advantage of multimodality when required.
  • Magyari, L. (2004). Nyelv és/vagy evolúció? [Book review]. Magyar Pszichológiai Szemle, 59(4), 591-607. doi:10.1556/MPSzle.59.2004.4.7.

    Abstract

    Language and/or evolution: Is an evolutionary explanation of language possible? [Derek Bickerton: Nyelv és evolúció] (Lilla Magyari); A historical reader on the brain [Charles G. Gross: Agy, látás, emlékezet. Mesék az idegtudomány történetéből] (Edit Anna Garab); Art or science [Margitay Tihamér: Az érvelés mestersége. Érvelések elemzése, értékelése és kritikája] (Gábor Zemplén); Are we really rational? [Herbert Simon: Az ésszerűség szerepe az emberi életben] (Péter Kardos); Sex differences in cognition [Doreen Kimura: Női agy, férfi agy] (Noémi Hahn).
  • Mai, A. (2020). Phonetic effects of onset complexity on the English syllable. Laboratory Phonology, 11(1): 4. doi:10.5334/labphon.148.

    Abstract

    Although onsets do not arbitrate stress placement in English categorically, results from Kelly (2004) and Ryan (2014) suggest that English stress assignment is nevertheless sensitive to onset complexity. Phonetic work on languages in which onsets participate in categorical weight criteria shows that onsets contribute to stress assignment through their phonetic impact on the nucleus, primarily through their effect on nucleus energy (Gordon, 2005). Onsets in English probabilistically participate in weight-based processes, and here it is predicted that they impact the phonetic realization of the syllable similar to the way that onsets do in languages with categorical onset weight criteria. To test this prediction, speakers in this study produced monosyllabic English words varying in onset complexity, and measures of duration, intensity, and f0 were collected. Results of the current study are consistent with the predictions of Gordon’s perceptual account of categorical weight, showing that integrated intensity of the rime is incapable of driving onset weight behavior in English. Furthermore, results indicate that onsets impact the shape of the intensity envelope in a manner consistent with explanations for gradient onset weight that appeal to onset influence on the perceptual center (Ryan, 2014). Together, these results show that cues to gradient weight act independently of primary cues to categorical weight to probabilistically impact weight sensitive stress assignment in English.
  • Yu, J., Mailhammer, R., & Cutler, A. (2020). Vocabulary structure affects word recognition: Evidence from German listeners. In N. Minematsu, M. Kondo, T. Arai, & R. Hayashi (Eds.), Proceedings of Speech Prosody 2020 (pp. 474-478). Tokyo: ISCA. doi:10.21437/SpeechProsody.2020-97.

    Abstract

    Lexical stress is realised similarly in English, German, and Dutch. On a suprasegmental level, stressed syllables tend to be longer and more acoustically salient than unstressed syllables; segmentally, vowels in unstressed syllables are often reduced. The frequency of unreduced unstressed syllables (where only the suprasegmental cues indicate lack of stress), however, differs across the languages. The present studies test whether listener behaviour is affected by these vocabulary differences, by investigating German listeners’ use of suprasegmental cues to lexical stress in German and English word recognition. In a forced-choice identification task, German listeners correctly assigned single-syllable fragments (e.g., Kon-) to one of two words differing in stress (KONto, konZEPT). Thus, German listeners can exploit suprasegmental information for identifying words. German listeners also performed above chance in a similar task in English (with, e.g., DIver, diVERT), i.e., their sensitivity to these cues also transferred to a non-native language. An English listener group, in contrast, failed in the English fragment task. These findings mirror vocabulary patterns: German has more words with unreduced unstressed syllables than English does.
  • Majid, A. (2004). Out of context. The Psychologist, 17(6), 330-330.
  • Majid, A., Van Staden, M., & Enfield, N. J. (2004). The human body in cognition, brain, and typology. In K. Hovie (Ed.), Forum Handbook, 4th International Forum on Language, Brain, and Cognition - Cognition, Brain, and Typology: Toward a Synthesis (pp. 31-35). Sendai: Tohoku University.

    Abstract

    The human body is unique: it is both an object of perception and the source of human experience. Its universality makes it a perfect resource for asking questions about how cognition, brain and typology relate to one another. For example, we can ask how speakers of different languages segment and categorize the human body. A dominant view is that body parts are “given” by visual perceptual discontinuities, and that words are merely labels for these visually determined parts (e.g., Andersen, 1978; Brown, 1976; Lakoff, 1987). However, there are problems with this view. First, it ignores other perceptual information, such as somatosensory and motoric representations. By looking at the neural representations of sensory representations, we can test how much of the categorization of the human body can be done through perception alone. Second, we can look at language typology to see how much universality and variation there is in body-part categories. A comparison of a range of typologically, genetically and areally diverse languages shows that the perceptual view has only limited applicability (Majid, Enfield & van Staden, in press). For example, using a “coloring-in” task, where speakers of seven different languages were given a line drawing of a human body and asked to color in various body parts, Majid & van Staden (in prep) show that languages vary substantially in body part segmentation. For example, Jahai (Mon-Khmer) makes a lexical distinction between upper arm, lower arm, and hand, but Lavukaleve (Papuan Isolate) has just one word to refer to arm, hand, and leg. This shows that body part categorization is not a straightforward mapping of words to visually determined perceptual parts.
  • Majid, A. (2004). Data elicitation methods. Language Archive Newsletter, 1(2), 6-6.
  • Majid, A., Van Staden, M., Boster, J. S., & Bowerman, M. (2004). Event categorization: A cross-linguistic perspective. In K. Forbus, D. Gentner, & T. Tegier (Eds.), Proceedings of the 26th Annual Meeting of the Cognitive Science Society (pp. 885-890). Mahwah, NJ: Erlbaum.

    Abstract

    Many studies in cognitive science address how people categorize objects, but there has been comparatively little research on event categorization. This study investigated the categorization of events involving material destruction, such as “cutting” and “breaking”. Speakers of 28 typologically, genetically, and areally diverse languages described events shown in a set of video-clips. There was considerable cross-linguistic agreement in the dimensions along which the events were distinguished, but there was variation in the number of categories and the placement of their boundaries.
  • Majid, A. (2004). Developing clinical understanding. The Psychologist, 17, 386-387.
  • Majid, A. (2004). Coned to perfection. The Psychologist, 17(7), 386-386.
  • Majid, A., Bowerman, M., Kita, S., Haun, D. B. M., & Levinson, S. C. (2004). Can language restructure cognition? The case for space. Trends in Cognitive Sciences, 8(3), 108-114. doi:10.1016/j.tics.2004.01.003.

    Abstract

    Frames of reference are coordinate systems used to compute and specify the location of objects with respect to other objects. These have long been thought of as innate concepts, built into our neurocognition. However, recent work shows that the use of such frames in language, cognition and gesture varies crossculturally, and that children can acquire different systems with comparable ease. We argue that language can play a significant role in structuring, or restructuring, a domain as fundamental as spatial cognition. This suggests we need to rethink the relation between the neurocognitive underpinnings of spatial cognition and the concepts we use in everyday thinking, and, more generally, to work out how to account for cross-cultural cognitive diversity in core cognitive domains.
  • Majid, A. (2004). An integrated view of cognition [Review of the book Rethinking implicit memory ed. by J. S. Bowers and C. J. Marsolek]. The Psychologist, 17(3), 148-149.
  • Majid, A. (2004). [Review of the book The new handbook of language and social psychology ed. by W. Peter Robinson and Howard Giles]. Language and Society, 33(3), 429-433.
  • Majid, A. (Ed.). (2004). Field manual volume 9. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Mak, M., De Vries, C., & Willems, R. M. (2020). The influence of mental imagery instructions and personality characteristics on reading experiences. Collabra: Psychology, 6(1): 43. doi:10.1525/collabra.281.

    Abstract

    It is well established that readers form mental images when reading a narrative. However, the consequences of mental imagery (i.e. the influence of mental imagery on the way people experience stories) are still unclear. Here we manipulated the amount of mental imagery that participants engaged in while reading short literary stories in two experiments. Participants received pre-reading instructions aimed at encouraging or discouraging mental imagery. After reading, participants answered questions about their reading experiences. We also measured individual trait differences that are relevant for literary reading experiences. The results from the first experiment suggest an important role of mental imagery in determining reading experiences. However, the results from the second experiment show that mental imagery is only a weak predictor of reading experiences compared to individual (trait) differences in how imaginative participants were. Moreover, the influence of mental imagery instructions did not extend to reading experiences unrelated to mental imagery. The implications of these results for the relationship between mental imagery and reading experiences are discussed.
  • Mandal, S., Best, C. T., Shaw, J., & Cutler, A. (2020). Bilingual phonology in dichotic perception: A case study of Malayalam and English voicing. Glossa: A Journal of General Linguistics, 5(1): 73. doi:10.5334/gjgl.853.

    Abstract

    Listeners often experience cocktail-party situations, encountering multiple ongoing conversations while tracking just one. Capturing the words spoken under such conditions requires selective attention and processing, which involves using phonetic details to discern phonological structure. How do bilinguals accomplish this in L1-L2 competition? We addressed that question using a dichotic listening task with fluent Malayalam-English bilinguals, in which they were presented with synchronized nonce words, one in each language in separate ears, with competing onsets of a labial stop (Malayalam) and a labial fricative (English), both voiced or both voiceless. They were required to attend to the Malayalam or the English item, in separate blocks, and report the initial consonant they heard. We found that perceptual intrusions from the unattended to the attended language were influenced by voicing, with more intrusions on voiced than voiceless trials. This result supports our proposal for the feature specification of consonants in Malayalam-English bilinguals, which makes use of privative features, underspecification and the “standard approach” to laryngeal features, as against “laryngeal realism”. Given this representational account, we observe that intrusions result from phonetic properties in the unattended signal being assimilated to the closest matching phonological category in the attended language, and are more likely for segments with a greater number of phonological feature specifications.
  • Mangione-Smith, R., Elliott, M. N., Stivers, T., McDonald, L., Heritage, J., & McGlynn, E. A. (2004). Racial/ethnic variation in parent expectations for antibiotics: Implications for public health campaigns. Pediatrics, 113(5), 385-394.
  • Manhardt, F., Ozyurek, A., Sumer, B., Mulder, K., Karadöller, D. Z., & Brouwer, S. (2020). Iconicity in spatial language guides visual attention: A comparison between signers’ and speakers’ eye gaze during message preparation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(9), 1735-1753. doi:10.1037/xlm0000843.

    Abstract

    To talk about space, spoken languages rely on arbitrary and categorical forms (e.g., left, right). In sign languages, however, the visual–spatial modality allows for iconic encodings (motivated form-meaning mappings) of space in which form and location of the hands bear resemblance to the objects and spatial relations depicted. We assessed whether the iconic encodings in sign languages guide visual attention to spatial relations differently than spatial encodings in spoken languages during message preparation at the sentence level. Using a visual world production eye-tracking paradigm, we compared 20 deaf native signers of Sign-Language-of-the-Netherlands and 20 Dutch speakers’ visual attention to describe left versus right configurations of objects (e.g., “pen is to the left/right of cup”). Participants viewed 4-picture displays in which each picture contained the same 2 objects but in different spatial relations (lateral [left/right], sagittal [front/behind], topological [in/on]) to each other. They described the target picture (left/right) highlighted by an arrow. During message preparation, signers, but not speakers, experienced increasing eye-gaze competition from other spatial configurations. This effect was absent during picture viewing prior to message preparation of relational encoding. Moreover, signers’ visual attention to lateral and/or sagittal relations was predicted by the type of iconicity (i.e., object and space resemblance vs. space resemblance only) in their spatial descriptions. Findings are discussed in relation to how “thinking for speaking” differs from “thinking for signing” and how iconicity can mediate the link between language and human experience and guides signers’ but not speakers’ attention to visual aspects of the world.

    Additional information

    Supplementary materials
  • The ManyBabies Consortium (2020). Quantifying sources of variability in infancy research using the infant-directed speech preference. Advances in Methods and Practices in Psychological Science, 3(1), 24-52. doi:10.1177/2515245919900809.

    Abstract

    Psychological scientists have become increasingly concerned with issues related to methodology and replicability, and infancy researchers in particular face specific challenges related to replicability: For example, high-powered studies are difficult to conduct, testing conditions vary across labs, and different labs have access to different infant populations. Addressing these concerns, we report on a large-scale, multisite study aimed at (a) assessing the overall replicability of a single theoretically important phenomenon and (b) examining methodological, cultural, and developmental moderators. We focus on infants’ preference for infant-directed speech (IDS) over adult-directed speech (ADS). Stimuli of mothers speaking to their infants and to an adult in North American English were created using seminaturalistic laboratory-based audio recordings. Infants’ relative preference for IDS and ADS was assessed across 67 laboratories in North America, Europe, Australia, and Asia using the three common methods for measuring infants’ discrimination (head-turn preference, central fixation, and eye tracking). The overall meta-analytic effect size (Cohen’s d) was 0.35, 95% confidence interval = [0.29, 0.42], which was reliably above zero but smaller than the meta-analytic mean computed from previous literature (0.67). The IDS preference was significantly stronger in older children, in those children for whom the stimuli matched their native language and dialect, and in data from labs using the head-turn preference procedure. Together, these findings replicate the IDS preference but suggest that its magnitude is modulated by development, native-language experience, and testing procedure.

    Additional information

    Open Practices Disclosure Open Data OSF
  • Marecka, M., Fosker, T., Szewczyk, J., Kałamała, P., & Wodniecka, Z. (2020). An ear for language. Studies in Second Language Acquisition, 42, 987-1014. doi:10.1017/S0272263120000157.

    Abstract

    This study tested whether individual sensitivity to an auditory perceptual cue called amplitude rise time (ART) facilitates novel word learning. Forty adult native speakers of Polish performed a perceptual task testing their sensitivity to ART, learned associations between nonwords and pictures of common objects, and were subsequently tested on their knowledge with a picture recognition (PR) task. In the PR task participants heard each nonword, followed either by a congruent or incongruent picture, and had to assess if the picture matched the nonword. Word learning efficiency was measured by accuracy and reaction time on the PR task and modulation of the N300 ERP. As predicted, participants with greater sensitivity to ART showed better performance in PR suggesting that auditory sensitivity indeed facilitates learning of novel words. Contrary to expectations, the N300 was not modulated by sensitivity to ART suggesting that the behavioral and ERP measures reflect different underlying processes.
  • Martin, A. E. (2020). A compositional neural architecture for language. Journal of Cognitive Neuroscience, 32(8), 1407-1427. doi:10.1162/jocn_a_01552.

    Abstract

    Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. The model's architecture—a multidimensional coordinate system based on neurophysiological models of sensory processing—proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations, and move toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2020). Eye-tracking the time course of distal and global speech rate effects. Journal of Experimental Psychology: Human Perception and Performance, 46(10), 1148-1163. doi:10.1037/xhp0000838.

    Abstract

    To comprehend speech sounds, listeners tune in to speech rate information in the proximal (immediately adjacent), distal (non-adjacent), and global context (further removed preceding and following sentences). Effects of global contextual speech rate cues on speech perception have been shown to follow constraints not found for proximal and distal speech rate. Therefore, listeners may process such global cues at distinct time points during word recognition. We conducted a printed-word eye-tracking experiment to compare the time courses of distal and global rate effects. Results indicated that the distal rate effect emerged immediately after target sound presentation, in line with a general-auditory account. The global rate effect, however, arose more than 200 ms later than the distal rate effect, indicating that distal and global context effects involve distinct processing mechanisms. Results are interpreted in a two-stage model of acoustic context effects. This model posits that distal context effects involve very early perceptual processes, while global context effects arise at a later stage, involving cognitive adjustments conditioned by higher-level information.
  • Matsuo, A. (2004). Young children's understanding of ongoing vs. completion in present and perfective participles. In J. v. Kampen, & S. Baauw (Eds.), Proceedings of GALA 2003 (pp. 305-316). Utrecht: Netherlands Graduate School of Linguistics (LOT).
  • McCollum, A. G., Baković, E., Mai, A., & Meinhardt, E. (2020). Unbounded circumambient patterns in segmental phonology. Phonology, 37, 215-255. doi:10.1017/S095267572000010X.

    Abstract

    We present an empirical challenge to Jardine's (2016) assertion that only tonal spreading patterns can be unbounded circumambient, meaning that the determination of a phonological value may depend on information that is an unbounded distance away on both sides. We focus on a demonstration that the ATR harmony pattern found in Tutrugbu is unbounded circumambient, and we also cite several other segmental spreading processes with the same general character. We discuss implications for the complexity of phonology and for the relationship between the explanation of typology and the evaluation of phonological theories.

    Additional information

    Supporting Information
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., Norris, D., & Cutler, A. (2001). Can lexical knowledge modulate prelexical representations over time? In R. Smits, J. Kingston, T. Neary, & R. Zondervan (Eds.), Proceedings of the workshop on Speech Recognition as Pattern Classification (SPRAAC) (pp. 145-150). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    The results of a study on perceptual learning are reported. Dutch subjects made lexical decisions on a list of words and nonwords. Embedded in the list were either [f]- or [s]-final words in which the final fricative had been replaced by an ambiguous sound, midway between [f] and [s]. One group of listeners heard ambiguous [f]- final Dutch words like [kara?] (based on karaf, carafe) and unambiguous [s]-final words (e.g., karkas, carcase). A second group heard the reverse (e.g., ambiguous [karka?] and unambiguous karaf). After this training phase, listeners labelled ambiguous fricatives on an [f]- [s] continuum. The subjects who had heard [?] in [f]- final words categorised these fricatives as [f] reliably more often than those who had heard [?] in [s]-final words. These results suggest that speech recognition is dynamic: the system adjusts to the constraints of each particular listening situation. The lexicon can provide this adjustment process with a training signal.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Dilley, L. C. (2020). Prosody and spoken-word recognition. In C. Gussenhoven, & A. Chen (Eds.), The Oxford handbook of language prosody (pp. 509-521). Oxford: Oxford University Press.

    Abstract

    This chapter outlines a Bayesian model of spoken-word recognition and reviews how prosody is part of that model. The review focuses on the information that assists the listener in recognizing the prosodic structure of an utterance and on how spoken-word recognition is also constrained by prior knowledge about prosodic structure. Recognition is argued to be a process of perceptual inference that ensures that listening is robust to variability in the speech signal. In essence, the listener makes inferences about the segmental content of each utterance, about its prosodic structure (simultaneously at different levels in the prosodic hierarchy), and about the words it contains, and uses these inferences to form an utterance interpretation. Four characteristics of the proposed prosody-enriched recognition model are discussed: parallel uptake of different information types, high contextual dependency, adaptive processing, and phonological abstraction. The next steps that should be taken to develop the model are also discussed.
  • McQueen, J. M., Eisner, F., Burgering, M. A., & Vroomen, J. (2020). Specialized memory systems for learning spoken words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(1), 189-199. doi:10.1037/xlm0000704.

    Abstract

    Learning new words entails, inter alia, encoding of novel sound patterns and transferring those patterns from short-term to long-term memory. We report a series of 5 experiments that investigated whether the memory systems engaged in word learning are specialized for speech and whether utilization of these systems results in a benefit for word learning. Sine-wave synthesis (SWS) was applied to spoken nonwords, and listeners were or were not informed (through instruction and familiarization) that the SWS stimuli were derived from actual utterances. This allowed us to manipulate whether listeners would process sound sequences as speech or as nonspeech. In a sound–picture association learning task, listeners who processed the SWS stimuli as speech consistently learned faster and remembered more associations than listeners who processed the same stimuli as nonspeech. The advantage of listening in “speech mode” was stable over the course of 7 days. These results provide causal evidence that access to a specialized, phonological short-term memory system is important for word learning. More generally, this study supports the notion that subsystems of auditory short-term memory are specialized for processing different types of acoustic information.

  • McQueen, J. M., & Cutler, A. (Eds.). (2001). Spoken word access processes. Hove, UK: Psychology Press.
  • McQueen, J. M., & Cutler, A. (2001). Spoken word access processes: An introduction. Language and Cognitive Processes, 16, 469-490. doi:10.1080/01690960143000209.

    Abstract

    We introduce the papers in this special issue by summarising the current major issues in spoken word recognition. We argue that a full understanding of the process of lexical access during speech comprehension will depend on resolving several key representational issues: what is the form of the representations used for lexical access; how is phonological information coded in the mental lexicon; and how is the morphological and semantic information about each word stored? We then discuss a number of distinct access processes: competition between lexical hypotheses; the computation of goodness-of-fit between the signal and stored lexical knowledge; segmentation of continuous speech; whether the lexicon influences prelexical processing through feedback; and the relationship of form-based processing to the processes responsible for deriving an interpretation of a complete utterance. We conclude that further progress may well be made by swapping ideas among the different sub-domains of the discipline.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • McQueen, J. M., Otake, T., & Cutler, A. (2001). Rhythmic cues and possible-word constraints in Japanese speech segmentation. Journal of Memory and Language, 45, 103-132. doi:10.1006/jmla.2000.2763.

    Abstract

    In two word-spotting experiments, Japanese listeners detected Japanese words faster in vowel contexts (e.g., agura, to sit cross-legged, in oagura) than in consonant contexts (e.g., tagura). In the same experiments, however, listeners spotted words in vowel contexts (e.g., saru, monkey, in sarua) no faster than in moraic nasal contexts (e.g., saruN). In a third word-spotting experiment, words like uni, sea urchin, followed contexts consisting of a consonant-consonant-vowel mora (e.g., gya) plus either a moraic nasal (gyaNuni), a vowel (gyaouni), or a consonant (gyabuni). Listeners spotted words as easily in the first as in the second context (where in each case the target words were aligned with mora boundaries), but found it almost impossible to spot words in the third (where there was a single consonant, such as the [b] in gyabuni, between the beginning of the word and the nearest preceding mora boundary). Three control experiments confirmed that these effects reflected the relative ease of segmentation of the words from their contexts. We argue that the listeners showed sensitivity to the viability of sound sequences as possible Japanese words in the way that they parsed the speech into words. Since single consonants are not possible Japanese words, the listeners avoided lexical parses including single consonants and thus had difficulty recognizing words in the consonant contexts. Even though moraic nasals are also impossible words, they were not difficult segmentation contexts because, as with the vowel contexts, the mora boundaries between the contexts and the target words signaled likely word boundaries. Moraic rhythm appears to provide Japanese listeners with important segmentation cues.
  • Meeuwissen, M. (2004). Producing complex spoken numerals for time and space. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.60607.

    Abstract

    This thesis addressed the spoken production of complex numerals for time and space. The production of complex numerical expressions like those involved in telling time (e.g., 'quarter to four') or producing house numbers (e.g., 'two hundred forty-five') has been almost completely ignored. Yet, adult speakers produce such expressions on a regular basis in everyday communication. Thus, no theory on numerical cognition or speech production is complete without an account of the production of multi-morphemic utterances such as complex numeral expressions. The main question of this thesis is which particular speech planning levels are involved in the naming and reading of complex numerals for time and space. More specifically, this issue was investigated by examining different modes of response (clock times versus house numbers), alternative input formats (Arabic digit versus alphabetic format; analog versus digital clock displays), and different expression types (relative 'quarter to four' versus absolute 'three forty-five' time expressions).

  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2004). Naming analog clocks conceptually facilitates naming digital clocks. Brain and Language, 90(1-3), 434-440. doi:10.1016/S0093-934X(03)00454-1.

    Abstract

    This study investigates how speakers of Dutch compute and produce relative time expressions. Naming digital clocks (e.g., 2:45, say “quarter to three”) requires conceptual operations on the minute and hour information for the correct relative time expression. The interplay of these conceptual operations was investigated using a repetition priming paradigm. Participants named analog clocks (the primes) directly before naming digital clocks (the targets). The targets referred to the hour (e.g., 2:00), half past the hour (e.g., 2:30), or the coming hour (e.g., 2:45). The primes differed from the target in one or two hours and in five or ten minutes. Digital clock naming latencies were shorter with a five- than with a ten-minute difference between prime and target, but the difference in hours had no effect. Moreover, the distance in minutes had an effect only for half past the hour and the coming hour, but not for the hour. These findings suggest that conceptual facilitation occurs when conceptual transformations are shared between prime and target in telling time.
  • Meira, S., & Levinson, S. C. (2001). Topological tasks: General introduction. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 29-51). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874665.
  • Melinger, A., & Levelt, W. J. M. (2004). Gesture and the communicative intention of the speaker. Gesture, 4(2), 119-141.

    Abstract

    This paper aims to determine whether iconic tracing gestures produced while speaking constitute part of the speaker’s communicative intention. We used a picture description task in which speakers must communicate the spatial and color information of each picture to an interlocutor. By establishing the necessary minimal content of an intended message, we determined whether speech produced with concurrent gestures is less explicit than speech without gestures. We argue that a gesture must be communicatively intended if it expresses necessary information that was nevertheless omitted from speech. We found that speakers who produced iconic gestures representing spatial relations omitted more required spatial information from their descriptions than speakers who did not gesture. These results provide evidence that speakers intend these gestures to communicate. The results have implications for the cognitive architectures that underlie the production of gesture and speech.
