Publications

  • Lameira, A. R., Hardus, M. E., Ravignani, A., Raimondi, T., & Gamba, M. (2024). Recursive self-embedded vocal motifs in wild orangutans. eLife, 12: RP88348. doi:10.7554/eLife.88348.3.

    Abstract

    Recursive procedures that allow placing a vocal signal inside another of a similar kind provide a neuro-computational blueprint for syntax and phonology in spoken language and human song. There are, however, no known vocal sequences among nonhuman primates arranged in self-embedded patterns that evince vocal recursion or potential incipient or evolutionary transitional forms thereof, suggesting a neuro-cognitive transformation exclusive to humans. Here, we uncover that wild flanged male orangutan long calls feature rhythmically isochronous call sequences nested within isochronous call sequences, consistent with two hierarchical strata. Remarkably, three temporally and acoustically distinct call rhythms in the lower stratum were not related to the overarching rhythm at the higher stratum by any low multiples, which suggests that these recursive structures were neither the result of parallel non-hierarchical procedures nor anatomical artifacts of bodily constraints or resonances. Findings represent a case of temporally recursive hominid vocal combinatorics in the absence of syntax, semantics, phonology, or music. Second-order combinatorics, ‘sequences within sequences’, involving hierarchically organized and cyclically structured vocal sounds in ancient hominids may have preluded the evolution of recursion in modern language-able humans.
  • Lehtonen, M., Cunillera, T., Rodríguez-Fornells, A., Hultén, A., Tuomainen, J., & Laine, M. (2007). Recognition of morphologically complex words in Finnish: Evidence from event-related potentials. Brain Research, 1148, 123-137. doi:10.1016/j.brainres.2007.02.026.

    Abstract

    The temporal dynamics of processing morphologically complex words were investigated by recording event-related brain potentials (ERPs) when native Finnish speakers performed a visual lexical decision task. Behaviorally, there is evidence that recognition of inflected nouns elicits a processing cost (i.e., longer reaction times and higher error rates) in comparison to matched monomorphemic words. We aimed to reveal whether the processing cost stems from decomposition at the early visual word form level or from recomposition at the later semantic–syntactic level. The ERPs showed no early effects for morphology, but revealed an interaction with word frequency at a late N400-type component, as well as a late positive component that was larger for inflected words. These results suggest that the processing cost stems mainly from the semantic–syntactic level. We also studied the features of the morphological decomposition route by investigating the recognition of pseudowords carrying real morphemes. The results showed no differences between inflected vs. uninflected pseudowords with a false stem, but differences in relation to those with a real stem, suggesting that a recognizable stem is needed to initiate the decomposition route.
  • Leitner, C., D’Este, G., Verga, L., Rahayel, S., Mombelli, S., Sforza, M., Casoni, F., Zucconi, M., Ferini-Strambi, L., & Galbiati, A. (2024). Neuropsychological changes in isolated REM sleep behavior disorder: A systematic review and meta-analysis of cross-sectional and longitudinal studies. Neuropsychology Review, 34(1), 41-66. doi:10.1007/s11065-022-09572-1.

    Abstract

    The aim of this meta-analysis is twofold: (a) to assess cognitive impairments in isolated rapid eye movement (REM) sleep behavior disorder (iRBD) patients compared to healthy controls (HC); (b) to quantitatively estimate the risk of developing a neurodegenerative disease in iRBD patients according to baseline cognitive assessment. To address the first aim, cross-sectional studies including polysomnography-confirmed iRBD patients, HC, and reporting neuropsychological testing were included. To address the second aim, longitudinal studies including polysomnography-confirmed iRBD patients, reporting baseline neuropsychological testing for converted and still isolated patients separately were included. The literature search was conducted based on PRISMA guidelines and the protocol was registered at PROSPERO (CRD42021253427). Cross-sectional and longitudinal studies were searched from PubMed, Web of Science, Scopus, and Embase databases. Publication bias and statistical heterogeneity were assessed respectively by funnel plot asymmetry and using I2. Finally, a random-effect model was performed to pool the included studies. 75 cross-sectional (2,398 HC and 2,460 iRBD patients) and 11 longitudinal (495 iRBD patients) studies were selected. Cross-sectional studies showed that iRBD patients performed significantly worse in cognitive screening scores (random-effects (RE) model = –0.69), memory (RE model = –0.64), and executive function (RE model = –0.50) domains compared to HC. The survival analyses conducted for longitudinal studies revealed that lower executive function and language performance, as well as the presence of mild cognitive impairment (MCI), at baseline were associated with an increased risk of conversion at follow-up. Our study underlines the importance of a comprehensive neuropsychological assessment in the context of iRBD.

    Additional information

    figure 1 tables
  • Leonetti, S., Cimarelli, G., Hersh, T. A., & Ravignani, A. (2024). Why do dogs wag their tails? Biology Letters, 20(1): 20230407. doi:10.1098/rsbl.2023.0407.

    Abstract

    Tail wagging is a conspicuous behaviour in domestic dogs (Canis familiaris). Despite how much meaning humans attribute to this display, its quantitative description and evolutionary history are rarely studied. We summarize what is known about the mechanism, ontogeny, function and evolution of this behaviour. We suggest two hypotheses to explain its increased occurrence and frequency in dogs compared to other canids. During the domestication process, enhanced rhythmic tail wagging behaviour could have (i) arisen as a by-product of selection for other traits, such as docility and tameness, or (ii) been directly selected by humans, due to our proclivity for rhythmic stimuli. We invite testing of these hypotheses through neurobiological and ethological experiments, which will shed light on one of the most readily observed yet understudied animal behaviours. Targeted tail wagging research can be a window into both canine ethology and the evolutionary history of characteristic human traits, such as our ability to perceive and produce rhythmic behaviours.
  • Levelt, W. J. M. (2000). Uit talloos veel miljoenen [From countless millions]. Natuur & Techniek, 68(11), 90.
  • Levelt, W. J. M. (2000). Dyslexie [Dyslexia]. Natuur & Techniek, 68(4), 64.
  • Levelt, W. J. M. (1982). Het lineariseringsprobleem van de spreker [The speaker's linearization problem]. Tijdschrift voor Taal- en Tekstwetenschap (TTT), 2(1), 1-15.
  • Levelt, W. J. M. (2000). Links en rechts: Waarom hebben we zo vaak problemen met die woorden? [Left and right: Why do we so often have trouble with those words?]. Natuur & Techniek, 68(7/8), 90.
  • Levelt, W. J. M. (1988). Onder sociale wetenschappen [Among the social sciences]. Mededelingen van de Afdeling Letterkunde, 51(2), 41-55.
  • Levelt, C. C., Schiller, N. O., & Levelt, W. J. M. (2000). The acquisition of syllable types. Language Acquisition, 8(3), 237-263. doi:10.1207/S15327817LA0803_2.

    Abstract

    In this article, we present an account of developmental data regarding the acquisition of syllable types. The data come from a longitudinal corpus of phonetically transcribed speech of 12 children acquiring Dutch as their first language. A developmental order of acquisition of syllable types was deduced by aligning the syllabified data on a Guttman scale. This order could be analyzed as following from an initial ranking and subsequent rerankings in the grammar of the structural constraints ONSET, NO-CODA, *COMPLEX-O, and *COMPLEX-C; some local conjunctions of these constraints; and a faithfulness constraint FAITH. The syllable type frequencies in the speech surrounding the language learner are also considered. An interesting correlation is found between the frequencies and the order of development of the different syllable types.
  • Levelt, W. J. M. (2000). The brain does not serve linguistic theory so easily [Commentary on target article by Grodzinsky]. Behavioral and Brain Sciences, 23(1), 40-41.
  • Levelt, W. J. M., & Kelter, S. (1982). Surface form and memory in question answering. Cognitive Psychology, 14, 78-106. doi:10.1016/0010-0285(82)90005-6.

    Abstract

    Speakers tend to repeat materials from previous talk. This tendency is experimentally established and manipulated in various question-answering situations. It is shown that a question's surface form can affect the format of the answer given, even if this form has little semantic or conversational consequence, as in the pair Q: “(At) what time do you close?” A: “(At) five o'clock.” Answerers tend to match the utterance to the prepositional (nonprepositional) form of the question. This “correspondence effect” may diminish or disappear when, following the question, additional verbal material is presented to the answerer. The experiments show that neither the articulatory buffer nor long-term memory is normally involved in this retention of recent speech. Retaining recent speech in working memory may fulfill a variety of functions for speaker and listener, among them the correct production and interpretation of surface anaphora. Reusing recent materials may, moreover, be more economical than regenerating speech anew from a semantic base, and thus contribute to fluency. But the realization of this strategy requires a production system in which linguistic formulation can take place relatively independent of, and parallel to, conceptual planning.
  • Levelt, W. J. M. (1982). Science policy: Three recent idols, and a goddess. IPO Annual Progress Report, 17, 32-35.
  • Levelt, W. J. M., & Meyer, A. S. (2000). Word for word: Multiple lexical access in speech production. European Journal of Cognitive Psychology, 12(4), 433-452. doi:10.1080/095414400750050178.

    Abstract

    It is quite normal for us to produce one or two million word tokens every year. Speaking is a dear occupation and producing words is at the core of it. Still, producing even a single word is a highly complex affair. Recently, Levelt, Roelofs, and Meyer (1999) reviewed their theory of lexical access in speech production, which dissects the word-producing mechanism as a staged application of various dedicated operations. The present paper begins by presenting a bird's-eye view of this mechanism. We then square the complexity by asking how speakers control multiple access in generating simple utterances such as a table and a chair. In particular, we address two issues. The first one concerns dependency: Do temporally contiguous access procedures interact in any way, or do they run in modular fashion? The second issue concerns temporal alignment: How much temporal overlap of processing does the system tolerate in accessing multiple content words, such as table and chair? Results from picture-word interference and eye tracking experiments provide evidence for restricted cases of dependency as well as for constraints on the temporal alignment of access procedures.
  • Levelt, W. J. M. (1982). Zelfcorrecties in het spreekproces [Self-corrections in the speaking process]. KNAW: Mededelingen van de afdeling letterkunde, nieuwe reeks, 45(8), 215-228.
  • Levelt, W. J. M. (1979). On learnability: A reply to Lasnik and Chomsky. Unpublished manuscript.
  • Levinson, S. C. (1979). Activity types and language. Linguistics, 17, 365-399.
  • Levinson, S. C. (2007). Cut and break verbs in Yélî Dnye, the Papuan language of Rossel Island. Cognitive Linguistics, 18(2), 207-218. doi:10.1515/COG.2007.009.

    Abstract

    The paper explores verbs of cutting and breaking (C&B, hereafter) in Yélî Dnye, the Papuan language of Rossel Island. The Yélî Dnye verbs covering the C&B domain do not divide it in the expected way, with verbs focusing on special instruments and manners of action on the one hand, and verbs focusing on the resultant state on the other. Instead, just three transitive verbs and their intransitive counterparts cover most of the domain, and they are all based on 'exotic' distinctions in mode of severance: coherent severance with the grain vs. against the grain, and incoherent severance (regardless of grain).
  • Levinson, S. C. (2000). Yélî Dnye and the theory of basic color terms. Journal of Linguistic Anthropology, 10(1), 3-55. doi:10.1525/jlin.2000.10.1.3.

    Abstract

    The theory of basic color terms was a crucial factor in the demise of linguistic relativity. The theory is now once again under scrutiny and fundamental revision. This article details a case study that undermines one of the central claims of the classical theory, namely that languages universally treat color as a unitary domain, to be exhaustively named. Taken together with other cases, the study suggests that a number of languages have only an incipient color terminology, raising doubts about the linguistic universality of such terminology.
  • Levshina, N., Koptjevskaja-Tamm, M., & Östling, R. (2024). Revered and reviled: A sentiment analysis of female and male referents in three languages. Frontiers in Communication, 9: 1266407. doi:10.3389/fcomm.2024.1266407.

    Abstract

    Our study contributes to the less explored domain of lexical typology, focusing on semantic prosody and connotation. Semantic derogation, or pejoration of nouns referring to women, whereby such words acquire connotations and further denotations of social pejoration, immorality and/or loose sexuality, has been a very prominent question in studies on gender and language (change). It has been argued that pejoration emerges due to the general derogatory attitudes toward female referents. However, the evidence for systematic differences in connotations of female- vs. male-related words is fragmentary and often fairly impressionistic; moreover, many researchers argue that expressed sentiments toward women (as well as men) often are ambivalent. One should also expect gender differences in connotations to have decreased in recent years, thanks to the advances of feminism and social progress. We test these ideas in a study of positive and negative connotations of feminine and masculine term pairs such as woman - man, girl - boy, wife - husband, etc. Sentences containing these words were sampled from diachronic corpora of English, Chinese and Russian, and sentiment scores for every word were obtained using two systems for Aspect-Based Sentiment Analysis: PyABSA, and OpenAI’s large language model GPT-3.5. The Generalized Linear Mixed Models of our data provide no indications of significantly more negative sentiment toward female referents in comparison with their male counterparts. However, some of the models suggest that female referents are less frequently associated with neutral sentiment than male ones. Neither do our data support the hypothesis of the diachronic convergence between the genders. In sum, results suggest that pejoration is unlikely to be explained simply by negative attitudes to female referents in general.

    Additional information

    supplementary materials
  • Liszkowski, U., Carpenter, M., & Tomasello, M. (2007). Reference and attitude in infant pointing. Journal of Child Language, 34(1), 1-20. doi:10.1017/S0305000906007689.

    Abstract

    We investigated two main components of infant declarative pointing, reference and attitude, in two experiments with a total of 106 preverbal infants at 1;0. When an experimenter (E) responded to the declarative pointing of these infants by attending to an incorrect referent (with positive attitude), infants repeated pointing within trials to redirect E’s attention, showing an understanding of E’s reference and active message repair. In contrast, when E identified infants’ referent correctly but displayed a disinterested attitude, infants did not repeat pointing within trials and pointed overall in fewer trials, showing an understanding of E’s unenthusiastic attitude about the referent. When E attended to infants’ intended referent AND shared interest in it, infants were most satisfied, showing no message repair within trials and pointing overall in more trials. These results suggest that by twelve months of age infant declarative pointing is a full communicative act aimed at sharing with others both attention to a referent and a specific attitude about that referent.
  • Liszkowski, U., Carpenter, M., & Tomasello, M. (2007). Pointing out new news, old news, and absent referents at 12 months of age. Developmental Science, 10(2), F1-F7. doi:10.1111/j.1467-7687.2006.00552.x.

    Abstract

    There is currently controversy over the nature of 1-year-olds' social-cognitive understanding and motives. In this study we investigated whether 12-month-old infants point for others with an understanding of their knowledge states and with a prosocial motive for sharing experiences with them. Declarative pointing was elicited in four conditions created by crossing two factors: an adult partner (1) was already attending to the target event or not, and (2) emoted positively or neutrally. Pointing was also coded after the event had ceased. The findings suggest that 12-month-olds point to inform others of events they do not know about, that they point to share an attitude about mutually attended events others already know about, and that they can point (already prelinguistically) to absent referents. These findings provide strong support for a mentalistic and prosocial interpretation of infants' prelinguistic communication.
  • Loke*, J., Seijdel*, N., Snoek, L., Sorensen, L., Van de Klundert, R., Van der Meer, M., Quispel, E., Cappaert, N., & Scholte, H. S. (2024). Human visual cortex and deep convolutional neural network care deeply about object background. Journal of Cognitive Neuroscience, 36(3), 551-566. doi:10.1162/jocn_a_02098.

    Abstract

    * These authors contributed equally/shared first author
    Deep convolutional neural networks (DCNNs) are able to partially predict brain activity during object categorization tasks, but factors contributing to this predictive power are not fully understood. Our study aimed to investigate the factors contributing to the predictive power of DCNNs in object categorization tasks. We compared the activity of four DCNN architectures with EEG recordings obtained from 62 human participants during an object categorization task. Previous physiological studies on object categorization have highlighted the importance of figure-ground segregation—the ability to distinguish objects from their backgrounds. Therefore, we investigated whether figure-ground segregation could explain the predictive power of DCNNs. Using a stimulus set consisting of identical target objects embedded in different backgrounds, we examined the influence of object background versus object category within both EEG and DCNN activity. Crucially, the recombination of naturalistic objects and experimentally controlled backgrounds creates a challenging and naturalistic task, while retaining experimental control. Our results showed that early EEG activity (< 100 msec) and early DCNN layers represent object background rather than object category. We also found that the ability of DCNNs to predict EEG activity is primarily influenced by how both systems process object backgrounds, rather than object categories. We demonstrated the role of figure-ground segregation as a potential prerequisite for recognition of object features, by contrasting the activations of trained and untrained (i.e., random weights) DCNNs. These findings suggest that both human visual cortex and DCNNs prioritize the segregation of object backgrounds and target objects to perform object categorization. Altogether, our study provides new insights into the mechanisms underlying object categorization as we demonstrated that both human visual cortex and DCNNs care deeply about object background.

    Additional information

    link to preprint
  • Long, M., Rohde, H., Oraa Ali, M., & Rubio-Fernandez, P. (2024). The role of cognitive control and referential complexity on adults’ choice of referring expressions: Testing and expanding the referential complexity scale. Journal of Experimental Psychology: Learning, Memory, and Cognition, 50(1), 109-136. doi:10.1037/xlm0001273.

    Abstract

    This study aims to advance our understanding of the nature and source(s) of individual differences in pragmatic language behavior over the adult lifespan. Across four story continuation experiments, we probed adults’ (N = 496 participants, ages 18–82) choice of referential forms (i.e., names vs. pronouns to refer to the main character). Our manipulations were based on Fossard et al.’s (2018) scale of referential complexity which varies according to the visual properties of the scene: low complexity (one character), intermediate complexity (two characters of different genders), and high complexity (two characters of the same gender). Since pronouns signal topic continuity (i.e., that the discourse will continue to be about the same referent), the use of pronouns is expected to decrease as referential complexity increases. The choice of names versus pronouns, therefore, provides insight into participants’ perception of the topicality of a referent, and whether that varies by age and cognitive capacity. In Experiment 1, we used the scale to test the association between referential choice, aging, and cognition, identifying a link between older adults’ switching skills and optimal referential choice. In Experiments 2–4, we tested novel manipulations that could impact the scale and found both the timing of a competitor referent’s presence and emphasis placed on competitors modulated referential choice, leading us to refine the scale for future use. Collectively, Experiments 1–4 highlight what type of contextual information is prioritized at different ages, revealing older adults’ preserved sensitivity to (visual) scene complexity but reduced sensitivity to linguistic prominence cues, compared to younger adults.
  • Lutzenberger, H., Casillas, M., Fikkert, P., Crasborn, O., & De Vos, C. (2024). More than looks: Exploring methods to test phonological discrimination in the sign language Kata Kolok. Language Learning and Development. Advance online publication. doi:10.1080/15475441.2023.2277472.

    Abstract

    The lack of diversity in the language sciences has increasingly been criticized as it holds the potential for producing flawed theories. Research on (i) geographically diverse language communities and (ii) on sign languages is necessary to corroborate, sharpen, and extend existing theories. This study contributes a case study of adapting a well-established paradigm to study the acquisition of sign phonology in Kata Kolok, a sign language of rural Bali, Indonesia. We conducted an experiment modeled after the familiarization paradigm with child signers of Kata Kolok. Traditional analyses of looking time did not yield significant differences between signing and non-signing children. Yet, additional behavioral analyses (attention, eye contact, hand behavior) suggest that children who are signers and those who are non-signers, as well as those who are hearing and those who are deaf, interact differently with the task. This study suggests limitations of the paradigm due to the ecology of sign languages and the sociocultural characteristics of the sample, calling for a mixed-methods approach. Ultimately, this paper aims to elucidate the diversity of adaptations necessary for experimental design, procedure, and analysis, and to offer a critical reflection on the contribution of similar efforts and the diversification of the field.
  • Mai, A., Riès, S., Ben-Haim, S., Shih, J. J., & Gentner, T. Q. (2024). Acoustic and language-specific sources for phonemic abstraction from speech. Nature Communications, 15: 677. doi:10.1038/s41467-024-44844-9.

    Abstract

    Spoken language comprehension requires abstraction of linguistic information from speech, but the interaction between auditory and linguistic processing of speech remains poorly understood. Here, we investigate the nature of this abstraction using neural responses recorded intracranially while participants listened to conversational English speech. Capitalizing on multiple, language-specific patterns where phonological and acoustic information diverge, we demonstrate the causal efficacy of the phoneme as a unit of analysis and dissociate the unique contributions of phonemic and spectrographic information to neural responses. Quantitative higher-order response models also reveal that unique contributions of phonological information are carried in the covariance structure of the stimulus-response relationship. This suggests that linguistic abstraction is shaped by neurobiological mechanisms that involve integration across multiple spectro-temporal features and prior phonological information. These results link speech acoustics to phonology and morphosyntax, substantiating predictions about abstractness in linguistic theory and providing evidence for the acoustic features that support that abstraction.

    Additional information

    supplementary information
  • Majid, A., Bowerman, M., Van Staden, M., & Boster, J. S. (2007). The semantic categories of cutting and breaking events: A crosslinguistic perspective. Cognitive Linguistics, 18(2), 133-152. doi:10.1515/COG.2007.005.

    Abstract

    This special issue of Cognitive Linguistics explores the linguistic encoding of events of cutting and breaking. In this article we first introduce the project on which it is based by motivating the selection of this conceptual domain, presenting the methods of data collection used by all the investigators, and characterizing the language sample. We then present a new approach to examining crosslinguistic similarities and differences in semantic categorization. Applying statistical modeling to the descriptions of cutting and breaking events elicited from speakers of all the languages, we show that although there is crosslinguistic variation in the number of distinctions made and in the placement of category boundaries, these differences take place within a strongly constrained semantic space: across languages, there is a surprising degree of consensus on the partitioning of events in this domain. In closing, we compare our statistical approach with more conventional semantic analyses, and show how an extensional semantic typological approach like the one illustrated here can help illuminate the intensional distinctions made by languages.
  • Majid, A., Sanford, A. J., & Pickering, M. J. (2007). The linguistic description of minimal social scenarios affects the extent of causal inference making. Journal of Experimental Social Psychology, 43(6), 918-932. doi:10.1016/j.jesp.2006.10.016.

    Abstract

    There is little consensus regarding the circumstances in which people spontaneously generate causal inferences, and in particular whether they generate inferences about the causal antecedents or the causal consequences of events. We tested whether people systematically infer causal antecedents or causal consequences to minimal social scenarios by using a continuation methodology. People overwhelmingly produced causal antecedent continuations for descriptions of interpersonal events (John hugged Mary), but causal consequence continuations to descriptions of transfer events (John gave a book to Mary). This demonstrates that there is no global cognitive style, but rather inference generation is crucially tied to the input. Further studies examined the role of event unusualness, number of participators, and verb-type on the likelihood of producing a causal antecedent or causal consequence inference. We conclude that inferences are critically guided by the specific verb used.
  • Majid, A., & Bowerman, M. (Eds.). (2007). Cutting and breaking events: A crosslinguistic perspective [Special Issue]. Cognitive Linguistics, 18(2).

    Abstract

    This special issue of Cognitive Linguistics explores the linguistic encoding of events of cutting and breaking. In this article we first introduce the project on which it is based by motivating the selection of this conceptual domain, presenting the methods of data collection used by all the investigators, and characterizing the language sample. We then present a new approach to examining crosslinguistic similarities and differences in semantic categorization. Applying statistical modeling to the descriptions of cutting and breaking events elicited from speakers of all the languages, we show that although there is crosslinguistic variation in the number of distinctions made and in the placement of category boundaries, these differences take place within a strongly constrained semantic space: across languages, there is a surprising degree of consensus on the partitioning of events in this domain. In closing, we compare our statistical approach with more conventional semantic analyses, and show how an extensional semantic typological approach like the one illustrated here can help illuminate the intensional distinctions made by languages.
  • Majid, A., Gullberg, M., Van Staden, M., & Bowerman, M. (2007). How similar are semantic categories in closely related languages? A comparison of cutting and breaking in four Germanic languages. Cognitive Linguistics, 18(2), 179-194. doi:10.1515/COG.2007.007.

    Abstract

    Are the semantic categories of very closely related languages the same? We present a new methodology for addressing this question. Speakers of English, German, Dutch and Swedish described a set of video clips depicting cutting and breaking events. The verbs elicited were then subjected to cluster analysis, which groups scenes together based on similarity (determined by shared verbs). Using this technique, we find that there are surprising differences among the languages in the number of categories, their exact boundaries, and the relationship of the terms to one another, all of which is circumscribed by a common semantic space.
  • Marklund, P., Fransson, P., Cabeza, R., Petersson, K. M., Ingvar, M., & Nyberg, L. (2007). Sustained and transient neural modulations in prefrontal cortex related to declarative long-term memory, working memory, and attention. Cortex, 43(1), 22-37. doi:10.1016/S0010-9452(08)70443-X.

    Abstract

    Common activations in prefrontal cortex (PFC) during episodic and semantic long-term memory (LTM) tasks have been hypothesized to reflect functional overlap in terms of working memory (WM) and cognitive control. To evaluate a WM account of LTM-general activations, the present study took into consideration that cognitive task performance depends on the dynamic operation of multiple component processes, some of which are stimulus-synchronous and transient in nature; and some that are engaged throughout a task in a sustained fashion. PFC and WM may be implicated in both of these temporally independent components. To elucidate these possibilities we employed mixed blocked/event-related functional magnetic resonance imaging (fMRI) procedures to assess the extent to which sustained or transient activation patterns overlapped across tasks indexing episodic and semantic LTM, attention (ATT), and WM. Within PFC, ventrolateral and medial areas exhibited sustained activity across all tasks, whereas more anterior regions including right frontopolar cortex were commonly engaged in sustained processing during the three memory tasks. These findings do not support a WM account of sustained frontal responses during LTM tasks, but instead suggest that the pattern that was common to all tasks reflects general attentional set/vigilance, and that the shared WM-LTM pattern mediates control processes related to upholding task set. Transient responses during the three memory tasks were assessed relative to ATT to isolate item-specific mnemonic processes and were found to be largely distinct from sustained effects. Task-specific effects were observed for each memory task. In addition, a common item response for all memory tasks involved left dorsolateral PFC (DLPFC). The latter response might be seen as reflecting WM processes during LTM retrieval. Thus, our findings suggest that a WM account of shared PFC recruitment in LTM tasks holds for common transient item-related responses rather than sustained state-related responses that are better seen as reflecting more general attentional/control processes.
  • Mazzini, S., Yadnik, S., Timmers, I., Rubio-Gozalbo, E., & Jansma, B. M. (2024). Altered neural oscillations in classical galactosaemia during sentence production. Journal of Inherited Metabolic Disease. Advance online publication. doi:10.1002/jimd.12740.

    Abstract

    Classical galactosaemia (CG) is a hereditary disease in galactose metabolism that despite dietary treatment is characterized by a wide range of cognitive deficits, among which is language production. CG brain functioning has been studied with several neuroimaging techniques, which revealed both structural and functional atypicalities. In the present study, for the first time, we compared the oscillatory dynamics, especially the power spectrum and time–frequency representations (TFR), in the electroencephalography (EEG) of CG patients and healthy controls while they were performing a language production task. Twenty-one CG patients and 19 healthy controls described animated scenes, either in full sentences or in words, indicating two levels of complexity in syntactic planning. Based on previous work on the P300 event related potential (ERP) and its relation with theta frequency, we hypothesized that the oscillatory activity of patients and controls would differ in theta power and TFR. With regard to behavior, reaction times showed that patients are slower, reflecting the language deficit. In the power spectrum, we observed significant higher power in patients in delta (1–3 Hz), theta (4–7 Hz), beta (15–30 Hz) and gamma (30–70 Hz) frequencies, but not in alpha (8–12 Hz), suggesting an atypical oscillatory profile. The time-frequency analysis revealed significantly weaker event-related theta synchronization (ERS) and alpha desynchronization (ERD) in patients in the sentence condition. The data support the hypothesis that CG language difficulties relate to theta–alpha brain oscillations.

    Additional information

    table S1 and S2
  • McQueen, J. M., & Viebahn, M. C. (2007). Tracking recognition of spoken words by tracking looks to printed words. Quarterly Journal of Experimental Psychology, 60(5), 661-671. doi:10.1080/17470210601183890.

    Abstract

    Eye movements of Dutch participants were tracked as they looked at arrays of four words on a computer screen and followed spoken instructions (e.g., "Klik op het woord buffel": Click on the word buffalo). The arrays included the target (e.g., buffel), a phonological competitor (e.g., buffer), and two unrelated distractors. Targets were monosyllabic or bisyllabic, and competitors mismatched targets only on either their onset or offset phoneme and only by one distinctive feature. Participants looked at competitors more than at distractors, but this effect was much stronger for offset-mismatch than onset-mismatch competitors. Fixations to competitors started to decrease as soon as phonetic evidence disfavouring those competitors could influence behaviour. These results confirm that listeners continuously update their interpretation of words as the evidence in the speech signal unfolds and hence establish the viability of the methodology of using eye movements to arrays of printed words to track spoken-word recognition.
  • Mehta, G., & Cutler, A. (1988). Detection of target phonemes in spontaneous and read speech. Language and Speech, 31, 135-156.

    Abstract

    Although spontaneous speech occurs more frequently in most listeners’ experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalize to the recognition of spontaneous speech. In the present study, listeners were presented with both spontaneous and read speech materials, and their response time to detect word-initial target phonemes was measured. Responses were, overall, equally fast in each speech mode. However, analysis of effects previously reported in phoneme detection studies revealed significant differences between speech modes. In read speech but not in spontaneous speech, later targets were detected more rapidly than earlier targets, and targets preceded by long words were detected more rapidly than targets preceded by short words. In contrast, in spontaneous speech but not in read speech, targets were detected more rapidly in accented than unaccented words and in strong than in weak syllables. An explanation for this pattern is offered in terms of characteristic prosodic differences between spontaneous and read speech. The results support claims from previous work that listeners pay great attention to prosodic information in the process of recognizing speech.
  • Meinhardt, E., Mai, A., Baković, E., & McCollum, A. (2024). Weak determinism and the computational consequences of interaction. Natural Language & Linguistic Theory. Advance online publication. doi:10.1007/s11049-023-09578-1.

    Abstract

    Recent work has claimed that (non-tonal) phonological patterns are subregular (Heinz 2011a,b, 2018; Heinz and Idsardi 2013), occupying a delimited proper subregion of the regular functions—the weakly deterministic (WD) functions (Heinz and Lai 2013; Jardine 2016). Whether or not it is correct (McCollum et al. 2020a), this claim can only be properly assessed given a complete and accurate definition of WD functions. We propose such a definition in this article, patching unintended holes in Heinz and Lai’s (2013) original definition that we argue have led to the incorrect classification of some phonological patterns as WD. We start from the observation that WD patterns share a property that we call unbounded semiambience, modeled after the analogous observation by Jardine (2016) about non-deterministic (ND) patterns and their unbounded circumambience. Both ND and WD functions can be broken down into compositions of deterministic (subsequential) functions (Elgot and Mezei 1965; Heinz and Lai 2013) that read an input string from opposite directions; we show that WD functions are those for which these deterministic composands do not interact in a way that is familiar from the theoretical phonology literature. To underscore how this concept of interaction neatly separates the WD class of functions from the strictly more expressive ND class, we provide analyses of the vowel harmony patterns of two Eastern Nilotic languages, Maasai and Turkana, using bimachines, an automaton type that represents unbounded bidirectional dependencies explicitly. These analyses make clear that there is interaction between deterministic composands when (and only when) the output of a given input element of a string is simultaneously dependent on information from both the left and the right: ND functions are those that involve interaction, while WD functions are those that do not.
  • Menenti, L., & Burani, C. (2007). What causes the effect of age of acquisition in lexical processing? Quarterly Journal of Experimental Psychology, 60(5), 652-660. doi:10.1080/17470210601100126.

    Abstract

    Three hypotheses for effects of age of acquisition (AoA) in lexical processing are compared: the cumulative frequency hypothesis (frequency and AoA both influence the number of encounters with a word, which influences processing speed), the semantic hypothesis (early-acquired words are processed faster because they are more central in the semantic network), and the neural network model (early-acquired words are faster because they are acquired when a network has maximum plasticity). In a regression study of lexical decision (LD) and semantic categorization (SC) in Italian and Dutch, contrary to the cumulative frequency hypothesis, AoA coefficients were larger than frequency coefficients, and, contrary to the semantic hypothesis, the effect of AoA was not larger in SC than in LD. The neural network model was supported.
  • Menks, W. M., Ekerdt, C., Lemhöfer, K., Kidd, E., Fernández, G., McQueen, J. M., & Janzen, G. (2024). Developmental changes in brain activation during novel grammar learning in 8-25-year-olds. Developmental Cognitive Neuroscience, 66: 101347. doi:10.1016/j.dcn.2024.101347.

    Abstract

    While it is well established that grammar learning success varies with age, the cause of this developmental change is largely unknown. This study examined functional MRI activation across a broad developmental sample of 165 Dutch-speaking individuals (8-25 years) as they were implicitly learning a new grammatical system. This approach allowed us to assess the direct effects of age on grammar learning ability while exploring its neural correlates. In contrast to the alleged advantage of child language learners over adults, we found that adults outperformed children. Moreover, our behavioral data showed a sharp discontinuity in the relationship between age and grammar learning performance: there was a strong positive linear correlation between 8 and 15.4 years of age, after which age had no further effect. Neurally, our data indicate two important findings: (i) during grammar learning, adults and children activate similar brain regions, suggesting continuity in the neural networks that support initial grammar learning; and (ii) activation level is age-dependent, with children showing less activation than older participants. We suggest that these age-dependent processes may constrain developmental effects in grammar learning. The present study provides new insights into the neural basis of age-related differences in grammar learning in second language acquisition.

    Additional information

    supplement
  • Meyer, A. S., & Damian, M. F. (2007). Activation of distractor names in the picture-picture interference paradigm. Memory & Cognition, 35, 494-503.

    Abstract

    In four experiments, participants named target pictures that were accompanied by distractor pictures with phonologically related or unrelated names. Across experiments, the type of phonological relationship between the targets and the related distractors was varied: They were homophones (e.g., bat [animal/baseball]), or they shared word-initial segments (e.g., dog-doll) or word-final segments (e.g., ball-wall). The participants either named the objects after an extensive familiarization and practice phase or without any familiarization or practice. In all of the experiments, the mean target-naming latency was shorter in the related than in the unrelated condition, demonstrating that the phonological form of the name of the distractor picture became activated. These results are best explained within a cascaded model of lexical access—that is, under the assumption that the recognition of an object leads to the activation of its name.
  • Meyer, A. S., Belke, E., Telling, A. L., & Humphreys, G. W. (2007). Early activation of object names in visual search. Psychonomic Bulletin & Review, 14, 710-716.

    Abstract

    In a visual search experiment, participants had to decide whether or not a target object was present in a four-object search array. One of these objects could be a semantically related competitor (e.g., shirt for the target trousers) or a conceptually unrelated object with the same name as the target, for example, bat (baseball) for the target bat (animal). In the control condition, the related competitor was replaced by an unrelated object. The participants' response latencies and eye movements demonstrated that the two types of related competitors had similar effects: Competitors attracted the participants' visual attention and thereby delayed positive and negative decisions. The results imply that semantic and name information associated with the objects becomes rapidly available and affects the allocation of visual attention.
  • Meyer, A. S., & Levelt, W. J. M. (2000). Merging speech perception and production [Comment on Norris, McQueen and Cutler]. Behavioral and Brain Sciences, 23(3), 339-340. doi:10.1017/S0140525X00373241.

    Abstract

    A comparison of Merge, a model of comprehension, and WEAVER, a model of production, raises five issues: (1) merging models of comprehension and production necessarily creates feedback; (2) neither model is a comprehensive account of word processing; (3) the models are incomplete in different ways; (4) the models differ in their handling of competition; (5) as opposed to WEAVER, Merge is a model of metalinguistic behavior.
  • Meyer, A. S., & Van der Meulen, F. (2000). Phonological priming effects on speech onset latencies and viewing times in object naming. Psychonomic Bulletin & Review, 7, 314-319.
  • Meyer, A. S., Belke, E., Häcker, C., & Mortensen, L. (2007). Use of word length information in utterance planning. Journal of Memory and Language, 57, 210-231. doi:10.1016/j.jml.2006.10.005.

    Abstract

    Griffin [Griffin, Z. M. (2003). A reversed length effect in coordinating the preparation and articulation of words in speaking. Psychonomic Bulletin & Review, 10, 603-609.] found that speakers naming object pairs spent more time before utterance onset looking at the second object when the first object name was short than when it was long. She proposed that this reversed length effect arose because the speakers' decision when to initiate an utterance was based, in part, on their estimate of the spoken duration of the first object name and the time available during its articulation to plan the second object name. In Experiment 1 of the present study, participants named object pairs. They spent more time looking at the first object when its name was monosyllabic than when it was trisyllabic, and, as in Griffin's study, the average gaze-speech lag (the time between the end of the gaze to the first object and onset of its name, which corresponds closely to the pre-speech inspection time for the second object) showed a reversed length effect. Experiments 2 and 3 showed that this effect was not due to a trade-off between the time speakers spent looking at the first and second object before speech onset. Experiment 4 yielded a reversed length effect when the second object was replaced by a symbol (x or +), which the participants had to categorise. We propose a novel account of the reversed length effect, which links it to the incremental nature of phonological encoding and articulatory planning rather than the speaker's estimate of the length of the first object name.
  • Mickan, A., Slesareva, E., McQueen, J. M., & Lemhöfer, K. (2024). New in, old out: Does learning a new language make you forget previously learned foreign languages? Quarterly Journal of Experimental Psychology, 77(3), 530-550. doi:10.1177/17470218231181380.

    Abstract

    Anecdotal evidence suggests that learning a new foreign language (FL) makes you forget previously learned FLs. To seek empirical evidence for this claim, we tested whether learning words in a previously unknown L3 hampers subsequent retrieval of their L2 translation equivalents. In two experiments, Dutch native speakers with knowledge of English (L2), but not Spanish (L3), first completed an English vocabulary test, based on which 46 participant-specific, known English words were chosen. Half of those were then learned in Spanish. Finally, participants’ memory for all 46 English words was probed again in a picture naming task. In Experiment 1, all tests took place within one session. In Experiment 2, we separated the English pre-test from Spanish learning by a day and manipulated the timing of the English post-test (immediately after learning vs. 1 day later). By separating the post-test from Spanish learning, we asked whether consolidation of the new Spanish words would increase their interference strength. We found significant main effects of interference in naming latencies and accuracy: Participants speeded up less and were less accurate to recall words in English for which they had learned Spanish translations, compared with words for which they had not. Consolidation time did not significantly affect these interference effects. Thus, learning a new language indeed comes at the cost of subsequent retrieval ability in other FLs. Such interference effects set in immediately after learning and do not need time to emerge, even when the other FL has been known for a long time.

    Additional information

    supplementary material
  • Monaco, A., Fisher, S. E., & The SLI Consortium (SLIC) (2007). Multivariate linkage analysis of specific language impairment (SLI). Annals of Human Genetics, 71(5), 660-673. doi:10.1111/j.1469-1809.2007.00361.x.

    Abstract

    Specific language impairment (SLI) is defined as an inability to develop appropriate language skills without explanatory medical conditions, low intelligence or lack of opportunity. Previously, a genome scan of 98 families affected by SLI was completed by the SLI Consortium, resulting in the identification of two quantitative trait loci (QTL) on chromosomes 16q (SLI1) and 19q (SLI2). This was followed by a replication of both regions in an additional 86 families. Both these studies applied linkage methods to one phenotypic trait at a time. However, investigations have suggested that simultaneous analysis of several traits may offer more power. The current study therefore applied a multivariate variance-components approach to the SLI Consortium dataset using additional phenotypic data. A multivariate genome scan was completed and supported the importance of the SLI1 and SLI2 loci, whilst highlighting a possible novel QTL on chromosome 10. Further investigation implied that the effect of SLI1 on non-word repetition was equally as strong on reading and spelling phenotypes. In contrast, SLI2 appeared to have influences on a selection of expressive and receptive language phenotypes in addition to non-word repetition, but did not show linkage to literacy phenotypes.

    Additional information

    Members_SLIC.doc
  • Murty, L., Otake, T., & Cutler, A. (2007). Perceptual tests of rhythmic similarity: I. Mora Rhythm. Language and Speech, 50(1), 77-99. doi:10.1177/00238309070500010401.

    Abstract

    Listeners rely on native-language rhythm in segmenting speech; in different languages, stress-, syllable- or mora-based rhythm is exploited. The rhythmic similarity hypothesis holds that where two languages have similar rhythm, listeners of each language should segment their own and the other language similarly. Such similarity in listening was previously observed only for related languages (English-Dutch; French-Spanish). We now report three experiments in which speakers of Telugu, a Dravidian language unrelated to Japanese but similar to it in crucial aspects of rhythmic structure, heard speech in Japanese and in their own language, and Japanese listeners heard Telugu. For the Telugu listeners, detection of target sequences in Japanese speech was harder when target boundaries mismatched mora boundaries, exactly the pattern that Japanese listeners earlier exhibited with Japanese and other languages. The same results appeared when Japanese listeners heard Telugu speech containing only codas permissible in Japanese. Telugu listeners' results with Telugu speech were mixed, but the overall pattern revealed correspondences between the response patterns of the two listener groups, as predicted by the rhythmic similarity hypothesis. Telugu and Japanese listeners appear to command similar procedures for speech segmentation, further bolstering the proposal that aspects of language phonological structure affect listeners' speech segmentation.
  • Narasimhan, B., Eisenbeiss, S., & Brown, P. (Eds.). (2007). The linguistic encoding of multiple-participant events [Special Issue]. Linguistics, 45(3).

    Abstract

    This issue investigates the linguistic encoding of events with three or more participants from the perspectives of language typology and acquisition. Such “multiple-participant events” include (but are not limited to) any scenario involving at least three participants, typically encoded using transactional verbs like 'give' and 'show', placement verbs like 'put', and benefactive and applicative constructions like 'do (something for someone)', among others. There is considerable crosslinguistic and within-language variation in how the participants (the Agent, Causer, Theme, Goal, Recipient, or Experiencer) and the subevents involved in multiple-participant situations are encoded, both at the lexical and the constructional levels.
  • Narasimhan, B. (2007). Cutting, breaking, and tearing verbs in Hindi and Tamil. Cognitive Linguistics, 18(2), 195-205. doi:10.1515/COG.2007.008.

    Abstract

    Tamil and Hindi verbs of cutting, breaking, and tearing are shown to have a high degree of overlap in their extensions. However, there are also differences in the lexicalization patterns of these verbs in the two languages with regard to their category boundaries, and the number of verb types that are available to make finer-grained distinctions. Moreover, differences in the extensional ranges of corresponding verbs in the two languages can be motivated in terms of the properties of the instrument and the theme object.
  • Narasimhan, B., Eisenbeiss, S., & Brown, P. (2007). "Two's company, more is a crowd": The linguistic encoding of multiple-participant events. Linguistics, 45(3), 383-392. doi:10.1515/LING.2007.013.

    Abstract

    This introduction to a special issue of the journal Linguistics sketches the challenges that multiple-participant events pose for linguistic and psycholinguistic theories, and summarizes the articles in the volume.
  • Nieuwland, M. S., Petersson, K. M., & Van Berkum, J. J. A. (2007). On sense and reference: Examining the functional neuroanatomy of referential processing. NeuroImage, 37(3), 993-1004. doi:10.1016/j.neuroimage.2007.05.048.

    Abstract

    In an event-related fMRI study, we examined the cortical networks involved in establishing reference during language comprehension. We compared BOLD responses to sentences containing referentially ambiguous pronouns (e.g., “Ronald told Frank that he…”), referentially failing pronouns (e.g., “Rose told Emily that he…”) or coherent pronouns. Referential ambiguity selectively recruited medial prefrontal regions, suggesting that readers engaged in problem-solving to select a unique referent from the discourse model. Referential failure elicited activation increases in brain regions associated with morpho-syntactic processing, and, for those readers who took failing pronouns to refer to unmentioned entities, additional regions associated with elaborative inferencing were observed. The networks activated by these two referential problems did not overlap with the network activated by a standard semantic anomaly. Instead, we observed a double dissociation, in that the systems activated by semantic anomaly are deactivated by referential ambiguity, and vice versa. This inverse coupling may reflect the dynamic recruitment of semantic and episodic processing to resolve semantically or referentially problematic situations. More generally, our findings suggest that neurocognitive accounts of language comprehension need to address not just how we parse a sentence and combine individual word meanings, but also how we determine who's who and what's what during language comprehension.
  • Nieuwland, M. S., Otten, M., & Van Berkum, J. J. A. (2007). Who are you talking about? Tracking discourse-level referential processing with event-related brain potentials. Journal of Cognitive Neuroscience, 19(2), 228-236. doi:10.1162/jocn.2007.19.2.228.

    Abstract

    In this event-related brain potentials (ERPs) study, we explored the possibility to selectively track referential ambiguity during spoken discourse comprehension. Earlier ERP research has shown that referentially ambiguous nouns (e.g., “the girl” in a two-girl context) elicit a frontal, sustained negative shift relative to unambiguous control words. In the current study, we examined whether this ERP effect reflects “deep” situation model ambiguity or “superficial” textbase ambiguity. We contrasted these different interpretations by investigating whether a discourse-level semantic manipulation that prevents referential ambiguity also averts the elicitation of a referentially induced ERP effect. We compared ERPs elicited by nouns that were referentially nonambiguous but were associated with two discourse entities (e.g., “the girl” with two girls introduced in the context, but one of which has died or left the scene), with referentially ambiguous and nonambiguous control words. Although temporally referentially ambiguous nouns elicited a frontal negative shift compared to control words, the “double bound” but referentially nonambiguous nouns did not. These results suggest that it is possible to selectively track referential ambiguity with ERPs at the level that is most relevant to discourse comprehension, the situation model.
  • Norris, D., McQueen, J. M., & Cutler, A. (2000). Feedback on feedback on feedback: It’s feedforward. (Response to commentators). Behavioral and Brain Sciences, 23, 352-370.

    Abstract

    The central thesis of the target article was that feedback is never necessary in spoken word recognition. The commentaries present no new data and no new theoretical arguments which lead us to revise this position. In this response we begin by clarifying some terminological issues which have led to a number of significant misunderstandings. We provide some new arguments to support our case that the feedforward model Merge is indeed more parsimonious than the interactive alternatives, and that it provides a more convincing account of the data than alternative models. Finally, we extend the arguments to deal with new issues raised by the commentators, such as infant speech perception and neural architecture.
  • Norris, D., McQueen, J. M., & Cutler, A. (2000). Merging information in speech recognition: Feedback is never necessary. Behavioral and Brain Sciences, 23, 299-325.

    Abstract

    Top-down feedback does not benefit speech recognition; on the contrary, it can hinder it. No experimental data imply that feedback loops are required for speech recognition. Feedback is accordingly unnecessary and spoken word recognition is modular. To defend this thesis, we analyse lexical involvement in phonemic decision making. TRACE (McClelland & Elman 1986), a model with feedback from the lexicon to prelexical processes, is unable to account for all the available data on phonemic decision making. The modular Race model (Cutler & Norris 1979) is likewise challenged by some recent results, however. We therefore present a new modular model of phonemic decision making, the Merge model. In Merge, information flows from prelexical processes to the lexicon without feedback. Because phonemic decisions are based on the merging of prelexical and lexical information, Merge correctly predicts lexical involvement in phonemic decisions in both words and nonwords. Computer simulations show how Merge is able to account for the data through a process of competition between lexical hypotheses. We discuss the issue of feedback in other areas of language processing and conclude that modular models are particularly well suited to the problems and constraints of speech recognition.
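
    To make the feedforward architecture sketched above concrete, here is a minimal, hypothetical Python illustration (not the Merge implementation or the simulations described in the article; the function name, activation numbers, and toy lexicon are invented for illustration). Prelexical and lexical evidence are merged only at the decision stage, and nothing flows back from the lexicon to the prelexical level.

    # Toy, feedforward-only sketch: phoneme decisions merge prelexical and
    # lexical evidence; there is no feedback from the lexicon to prelexical
    # processing. Illustrative only, not the Merge model's actual mechanics.
    def phoneme_decision(prelexical_evidence, lexicon, lexical_weight=0.5):
        """Return the phoneme with the highest merged evidence.

        prelexical_evidence: dict phoneme -> bottom-up activation
        lexicon: dict word -> (activation, set of phonemes in the word)
        """
        merged = {}
        for phoneme, bottom_up in prelexical_evidence.items():
            # Lexical support = summed activation of words containing the phoneme.
            lexical_support = sum(act for act, phones in lexicon.values()
                                  if phoneme in phones)
            merged[phoneme] = bottom_up + lexical_weight * lexical_support
        return max(merged, key=merged.get)

    # Hypothetical ambiguous /b/-/p/ input in a lexical context favouring "beak":
    prelexical = {"b": 0.55, "p": 0.50}
    lexicon = {"beak": (0.8, {"b", "i", "k"}), "peak": (0.2, {"p", "i", "k"})}
    print(phoneme_decision(prelexical, lexicon))  # -> "b"
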
  • Norris, D., & Cutler, A. (1988). Speech recognition in French and English. MRC News, 39, 30-31.
  • Norris, D., & Cutler, A. (1988). The relative accessibility of phonemes and syllables. Perception and Psychophysics, 43, 541-550. Retrieved from http://www.psychonomic.org/search/view.cgi?id=8530.

    Abstract

    Previous research comparing detection times for syllables and for phonemes has consistently found that syllables are responded to faster than phonemes. This finding poses theoretical problems for strictly hierarchical models of speech recognition, in which smaller units should be able to be identified faster than larger units. However, inspection of the characteristics of previous experiments’ stimuli reveals that subjects have been able to respond to syllables on the basis of only a partial analysis of the stimulus. In the present experiment, five groups of subjects listened to identical stimulus material. Phoneme and syllable monitoring under standard conditions was compared with monitoring under conditions in which near matches of target and stimulus occurred on no-response trials. In the latter case, when subjects were forced to analyze each stimulus fully, phonemes were detected faster than syllables.
  • Nüse, R. (2007). Der Gebrauch und die Bedeutungen von auf, an und unter. Zeitschrift für Germanistische Linguistik, 35, 27-51.

    Abstract

    Present approaches to the semantics of the German prepositions auf, an and unter draw on two propositions: First, that spatial prepositions in general specify a region in the surrounding of the relatum object. Second, that in the case of auf, an and unter, these regions are to be defined with concepts like the vertical and/or the topological surface (the whole surrounding exterior of an object). The present paper argues that the first proposition is right and that the second is wrong. That is, while it is true that prepositions specify regions, the regions specified by auf, an and unter should rather be defined in terms of everyday concepts like SURFACE, SIDE and UNDERSIDE. This idea is suggested by the fact that auf, an and unter refer to different regions in different kinds of relatum objects, and that these regions are the same as the regions called surfaces, sides and undersides. Furthermore, reading and usage preferences of auf, an and unter can be explained by a corresponding salience of the surfaces, sides and undersides of the relatum objects in question. All in all, therefore, a close look at the use of auf, an and unter with different classes of relatum objects reveals problems for a semantic approach that draws on concepts like the vertical, while it suggests meanings of these prepositions that refer to the surface, side and underside of an object.
  • Oblong, L. M., Soheili-Nezhad, S., Trevisan, N., Shi, Y., Beckmann, C. F., & Sprooten, E. (2024). Principal and independent genomic components of brain structure and function. Genes, Brain and Behavior, 23(1): e12876. doi:10.1111/gbb.12876.

    Abstract

    The highly polygenic and pleiotropic nature of behavioural traits, psychiatric disorders and structural and functional brain phenotypes complicates mechanistic interpretation of related genome-wide association study (GWAS) signals, thereby obscuring underlying causal biological processes. We propose genomic principal and independent component analysis (PCA, ICA) to decompose a large set of univariate GWAS statistics of multimodal brain traits into more interpretable latent genomic components. Here we introduce this novel method and evaluate its various analytic parameters and reproducibility across independent samples. Two UK Biobank GWAS summary statistic releases of 2240 imaging-derived phenotypes (IDPs) were retrieved. Genome-wide beta-values and their corresponding standard-error-scaled z-values were decomposed using genomic PCA/ICA. We evaluated variance explained at multiple dimensions up to 200. We tested the inter-sample reproducibility of output of dimensions 5, 10, 25 and 50. Reproducibility statistics of the respective univariate GWAS served as benchmarks. Reproducibility of 10-dimensional PCs and ICs showed the best trade-off between model complexity, robustness and variance explained (PCs: |rz − max| = 0.33, |rraw − max| = 0.30; ICs: |rz − max| = 0.23, |rraw − max| = 0.19). Genomic PC and IC reproducibility improved substantially relative to mean univariate GWAS reproducibility up to dimension 10. Genomic components clustered along neuroimaging modalities. Our results indicate that genomic PCA and ICA decompose genetic effects on IDPs from GWAS statistics with high reproducibility by taking advantage of the inherent pleiotropic patterns. These findings encourage further applications of genomic PCA and ICA as fully data-driven methods to effectively reduce the dimensionality, enhance the signal-to-noise ratio and improve interpretability of high-dimensional multitrait genome-wide analyses.
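
    As a rough illustration of the decomposition described above, the sketch below applies PCA and ICA to a variants-by-phenotypes matrix of z-values (a minimal sketch under assumed toy data, with scikit-learn as an assumed tool choice; it is not the authors' pipeline, and the matrix here is random rather than real GWAS output).

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(0)
    # Assumed toy data: rows = genetic variants, columns = imaging-derived
    # phenotypes (IDPs); entries stand in for GWAS z-values.
    z = rng.standard_normal((10_000, 200))

    n_components = 10  # the dimensionality reported above as a good trade-off
    genomic_pcs = PCA(n_components=n_components).fit_transform(z)
    genomic_ics = FastICA(n_components=n_components, max_iter=1000,
                          random_state=0).fit_transform(z)

    print(genomic_pcs.shape, genomic_ics.shape)  # (10000, 10) (10000, 10)
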
  • O'Connor, L. (2007). 'Chop, shred, snap apart': Verbs of cutting and breaking in Lowland Chontal. Cognitive Linguistics, 18(2), 219-230. doi:10.1515/COG.2007.010.

    Abstract

    Typological descriptions of understudied languages reveal intriguing crosslinguistic variation in descriptions of events of object separation and destruction. In Lowland Chontal of Oaxaca, verbs of cutting and breaking lexicalize event perspectives that range from the common to the quite unusual, from the tearing of cloth to the snapping apart on the cross-grain of yarn. This paper describes the semantic and syntactic criteria that characterize three verb classes in this semantic domain, examines patterns of event construal, and takes a look at likely changes in these event descriptions from the perspective of endangered language recovery.
  • O'Connor, L. (2007). [Review of the book Pronouns by D.N.S. Bhat]. Journal of Pragmatics, 39(3), 612-616. doi:10.1016/j.pragma.2006.09.007.
  • Osiecka, A. N., Fearey, J., Ravignani, A., & Burchardt, L. (2024). Isochrony in barks of Cape fur seal (Arctocephalus pusillus pusillus) pups and adults. Ecology and Evolution, 14(3): e11085. doi:10.1002/ece3.11085.

    Abstract

    Animal vocal communication often relies on call sequences. The temporal patterns of such sequences can be adjusted to other callers, follow complex rhythmic structures or exhibit a metronome-like pattern (i.e., isochronous). How regular are the temporal patterns in animal signals, and what influences their precision? If present, are rhythms already there early in ontogeny? Here, we describe an exploratory study of Cape fur seal (Arctocephalus pusillus pusillus) barks—a vocalisation type produced across many pinniped species in rhythmic, percussive bouts. This study is the first quantitative description of barking in Cape fur seal pups. We analysed the rhythmic structures of spontaneous barking bouts of pups and adult females from the breeding colony in Cape Cross, Namibia. Barks of adult females exhibited isochrony, that is, they were produced at fairly regular points in time. Intervals between pup barks, in contrast, were more variable, occasionally skipping a bark in the isochronous series. In both age classes, beat precision, that is, how well the barks followed a perfect template, was worse when barking at higher rates. Differences could be explained by physiological factors, such as respiration or arousal. Whether, and how, isochrony develops in this species remains an open question. This study provides evidence towards a rhythmic production of barks in Cape fur seal pups and lays the groundwork for future studies to investigate the development of rhythm using multidimensional metrics.
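
    The kind of rhythm measure described above can be pictured with a short sketch (hypothetical onset times and a deliberately simple regularity metric, the coefficient of variation of inter-onset intervals; this is not the multidimensional analysis used in the study).

    import numpy as np

    def ioi_regularity(onsets_s):
        """From call onset times (s), return inter-onset intervals, an
        estimated beat rate (Hz), and the coefficient of variation of the
        intervals (lower = closer to isochrony)."""
        iois = np.diff(np.asarray(onsets_s, dtype=float))
        rate_hz = 1.0 / iois.mean()
        cv = iois.std(ddof=1) / iois.mean()
        return iois, rate_hz, cv

    # Hypothetical bark onsets (seconds): a regular adult bout and a pup
    # bout with an occasional "skipped" bark.
    adult = [0.00, 0.50, 1.01, 1.49, 2.00, 2.51]
    pup = [0.00, 0.48, 1.30, 1.77, 2.70, 3.15]
    for label, bout in (("adult", adult), ("pup", pup)):
        _, rate, cv = ioi_regularity(bout)
        print(f"{label}: rate = {rate:.2f} Hz, CV of IOIs = {cv:.2f}")
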
  • Otten, M., & Van Berkum, J. J. A. (2007). What makes a discourse constraining? Comparing the effects of discourse message and scenario fit on the discourse-dependent N400 effect. Brain Research, 1153, 166-177. doi:10.1016/j.brainres.2007.03.058.

    Abstract

    A discourse context provides a reader with a great deal of information that can provide constraints for further language processing, at several different levels. In this experiment we used event-related potentials (ERPs) to explore whether discourse-generated contextual constraints are based on the precise message of the discourse or, more ‘loosely’, on the scenario suggested by one or more content words in the text. Participants read constraining stories whose precise message rendered a particular word highly predictable ("The manager thought that the board of directors should assemble to discuss the issue. He planned a...[meeting]") as well as non-constraining control stories that were only biasing in virtue of the scenario suggested by some of the words ("The manager thought that the board of directors need not assemble to discuss the issue. He planned a..."). Coherent words that were inconsistent with the message-level expectation raised in a constraining discourse (e.g., "session" instead of "meeting") elicited a classic centroparietal N400 effect. However, when the same words were only inconsistent with the scenario loosely suggested by earlier words in the text, they elicited a different negativity around 400 ms, with a more anterior, left-lateralized maximum. The fact that the discourse-dependent N400 effect cannot be reduced to scenario-mediated priming reveals that it reflects the rapid use of precise message-level constraints in comprehension. At the same time, the left-lateralized negativity in non-constraining stories suggests that, at least in the absence of strong message-level constraints, scenario-mediated priming does also rapidly affect comprehension.
  • Otten, M., Nieuwland, M. S., & Van Berkum, J. J. A. (2007). Great expectations: Specific lexical anticipation influences the processing of spoken language. BMC Neuroscience, 8: 89. doi:10.1186/1471-2202-8-89.

    Abstract

    Background: Recently, several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event related potentials (ERPs) we tested whether these on-line predictions are based on a message-based representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. Results: When the message of the preceding discourse was predictive, adjectives with an unexpected gender-inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. Conclusions: When listening to a constraining discourse people rapidly make very specific predictions about the remainder of the story, as the story unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.
  • Ozaki, Y., Tierney, A., Pfordresher, P. Q., McBride, J., Benetos, E., Proutskova, P., Chiba, G., Liu, F., Jacoby, N., Purdy, S. C., Opondo, P., Fitch, W. T., Hegde, S., Rocamora, M., Thorne, R., Nweke, F., Sadaphal, D. P., Sadaphal, P. M., Hadavi, S., Fujii, S., Choo, S., Naruse, M., Ehara, U., Sy, L., Parselelo, M. L., Anglada-Tort, M., Hansen, N. C., Haiduk, F., Færøvik, U., Magalhães, V., Krzyżanowski, W., Shcherbakova, O., Hereld, D., Barbosa, B. S., Correa Varella, M. A., Van Tongeren, M., Dessiatnitchenko, P., Zar Zar, S., El Kahla, I., Muslu, O., Troy, J., Lomsadze, T., Kurdova, D., Tsope, C., Fredriksson, D., Arabadjiev, A., Sarbah, J. P., Arhine, A., Meachair, T. Ó., Silva-Zurita, J., Soto-Silva, I., Millalonco, N. E. M., Ambrazevičius, R., Loui, P., Ravignani, A., Jadoul, Y., Larrouy-Maestri, P., Bruder, C., Teyxokawa, T. P., Kuikuro, U., Natsitsabui, R., Sagarzazu, N. B., Raviv, L., Zeng, M., Varnosfaderani, S. D., Gómez-Cañón, J. S., Kolff, K., Vanden Bos der Nederlanden, C., Chhatwal, M., David, R. M., I Putu Gede Setiawan, Lekakul, G., Borsan, V. N., Nguqu, N., & Savage, P. E. (2024). Globally, songs and instrumental melodies are slower, higher, and use more stable pitches than speech: A Registered Report. Science Advances, 10(20): eadm9797. doi:10.1126/sciadv.adm9797.

    Abstract

    Both music and language are found in all known human societies, yet no studies have compared similarities and differences between song, speech, and instrumental music on a global scale. In this Registered Report, we analyzed two global datasets: (i) 300 annotated audio recordings representing matched sets of traditional songs, recited lyrics, conversational speech, and instrumental melodies from our 75 coauthors speaking 55 languages; and (ii) 418 previously published adult-directed song and speech recordings from 209 individuals speaking 16 languages. Of our six preregistered predictions, five were strongly supported: Relative to speech, songs use (i) higher pitch, (ii) slower temporal rate, and (iii) more stable pitches, while both songs and speech used similar (iv) pitch interval size and (v) timbral brightness. Exploratory analyses suggest that features vary along a “musi-linguistic” continuum when including instrumental melodies and recited lyrics. Our study provides strong empirical evidence of cross-cultural regularities in music and speech.

    Additional information

    supplementary materials
  • Özdemir, R., Roelofs, A., & Levelt, W. J. M. (2007). Perceptual uniqueness point effects in monitoring internal speech. Cognition, 105(2), 457-465. doi:10.1016/j.cognition.2006.10.006.

    Abstract

    Disagreement exists about how speakers monitor their internal speech. Production-based accounts assume that self-monitoring mechanisms exist within the production system, whereas comprehension-based accounts assume that monitoring is achieved through the speech comprehension system. Comprehension-based accounts predict perception-specific effects, like the perceptual uniqueness-point effect, in the monitoring of internal speech. We ran an extensive experiment testing this prediction using internal phoneme monitoring and picture naming tasks. Our results show an effect of the perceptual uniqueness point of a word in internal phoneme monitoring in the absence of such an effect in picture naming. These results support comprehension-based accounts of the monitoring of internal speech.
  • Ozker, M., Yu, L., Dugan, P., Doyle, W., Friedman, D., Devinsky, O., & Flinker, A. (2024). Speech-induced suppression and vocal feedback sensitivity in human cortex. eLife, 13: RP94198. doi:10.7554/eLife.94198.1.

    Abstract

    Across the animal kingdom, neural responses in the auditory cortex are suppressed during vocalization, and humans are no exception. A common hypothesis is that suppression increases sensitivity to auditory feedback, enabling the detection of vocalization errors. This hypothesis has been previously confirmed in non-human primates; however, a direct link between auditory suppression and sensitivity in human speech monitoring remains elusive. To address this issue, we obtained intracranial electroencephalography (iEEG) recordings from 35 neurosurgical participants during speech production. We first characterized the detailed topography of auditory suppression, which varied across superior temporal gyrus (STG). Next, we performed a delayed auditory feedback (DAF) task to determine whether the suppressed sites were also sensitive to auditory feedback alterations. Indeed, overlapping sites showed enhanced responses to feedback, indicating sensitivity. Importantly, there was a strong correlation between the degree of auditory suppression and feedback sensitivity, suggesting suppression might be a key mechanism that underlies speech monitoring. Further, we found that when participants produced speech with simultaneous auditory feedback, posterior STG was selectively activated if participants were engaged in a DAF paradigm, suggesting that increased attentional load can modulate auditory feedback sensitivity.
  • Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.

    Abstract

    During language comprehension, listeners use the global semantic representation from previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated with previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.
  • Ozyurek, A., & Kelly, S. D. (2007). Gesture, language, and brain. Brain and Language, 101(3), 181-185. doi:10.1016/j.bandl.2007.03.006.
  • Pereiro Estevan, Y., Wan, V., & Scharenborg, O. (2007). Finding maximum margin segments in speech. Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference, IV, 937-940. doi:10.1109/ICASSP.2007.367225.

    Abstract

    Maximum margin clustering (MMC) is a relatively new and promising kernel method. In this paper, we apply MMC to the task of unsupervised speech segmentation. We present three automatic speech segmentation methods based on MMC, which are tested on TIMIT and evaluated on the level of phoneme boundary detection. The results show that MMC is highly competitive with existing unsupervised methods for the automatic detection of phoneme boundaries. Furthermore, initial analyses show that MMC is a promising method for the automatic detection of sub-phonetic information in the speech signal.
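
    The clustering idea can be pictured with a much simpler stand-in: the sketch below clusters synthetic frame features with k-means (not the maximum margin clustering used in the paper, and random features rather than TIMIT) and hypothesises a phoneme boundary wherever the cluster assignment changes.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    # Assumed toy input: rows = 10-ms frames, columns = acoustic features
    # (placeholders for e.g. MFCCs); three artificial "phones" of 40 frames each.
    frames = np.vstack([rng.normal(mu, 0.3, size=(40, 12)) for mu in (0.0, 2.0, -2.0)])

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(frames)
    # A boundary candidate is placed wherever the cluster label changes.
    boundaries_ms = [i * 10 for i in range(1, len(labels)) if labels[i] != labels[i - 1]]
    print(boundaries_ms)  # expected near [400, 800] for this toy signal
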
  • Perniss, P. M. (2007). Achieving spatial coherence in German sign language narratives: The use of classifiers and perspective. Lingua, 117(7), 1315-1338. doi:10.1016/j.lingua.2005.06.013.

    Abstract

    Spatial coherence in discourse relies on the use of devices that provide information about where referents are and where events take place. In signed language, two primary devices for achieving and maintaining spatial coherence are the use of classifier forms and signing perspective. This paper gives a unified account of the relationship between perspective and classifiers, and divides the range of possible correspondences between these two devices into prototypical and non-prototypical alignments. An analysis of German Sign Language narratives of complex events investigates the role of different classifier-perspective constructions in encoding spatial information about location, orientation, action and motion, as well as size and shape of referents. In particular, I show how non-prototypical alignments, including simultaneity of perspectives, contribute to the maintenance of spatial coherence, and provide functional explanations in terms of efficiency and informativeness constraints on discourse.
  • Petersson, K. M., Silva, C., Castro-Caldas, A., Ingvar, M., & Reis, A. (2007). Literacy: A cultural influence on functional left-right differences in the inferior parietal cortex. European Journal of Neuroscience, 26(3), 791-799. doi:10.1111/j.1460-9568.2007.05701.x.

    Abstract

    The current understanding of hemispheric interaction is limited. Functional hemispheric specialization is likely to depend on both genetic and environmental factors. In the present study we investigated the importance of one factor, literacy, for the functional lateralization in the inferior parietal cortex in two independent samples of literate and illiterate subjects. The results show that the illiterate group are consistently more right-lateralized than their literate controls. In contrast, the two groups showed a similar degree of left-right differences in early speech-related regions of the superior temporal cortex. These results provide evidence suggesting that a cultural factor, literacy, influences the functional hemispheric balance in reading and verbal working memory-related regions. In a third sample, we investigated grey and white matter with voxel-based morphometry. The results showed differences between literacy groups in white matter intensities related to the mid-body region of the corpus callosum and the inferior parietal and parietotemporal regions (literate > illiterate). There were no corresponding differences in the grey matter. This suggests that the influence of literacy on brain structure related to reading and verbal working memory is affecting large-scale brain connectivity more than grey matter per se.
  • Petersson, K. M., Reis, A., Askelöf, S., Castro-Caldas, A., & Ingvar, M. (2000). Language processing modulated by literacy: A network analysis of verbal repetition in literate and illiterate subjects. Journal of Cognitive Neuroscience, 12(3), 364-382. doi:10.1162/089892900562147.
  • Petrovic, P., Petersson, K. M., Ghatan, P., Stone-Elander, S., & Ingvar, M. (2000). Pain related cerebral activation is altered by a distracting cognitive task. Pain, 85, 19-30.

    Abstract

    It has previously been suggested that the activity in sensory regions of the brain can be modulated by attentional mechanisms during parallel cognitive processing. To investigate whether such attention-related modulations are present in the processing of pain, the regional cerebral blood flow was measured using [15O]butanol and positron emission tomography in conditions involving both pain and parallel cognitive demands. The painful stimulus consisted of the standard cold pressor test and the cognitive task was a computerised perceptual maze test. The activations during the maze test reproduced findings in previous studies of the same cognitive task. The cold pressor test evoked significant activity in the contralateral S1, and bilaterally in the somatosensory association areas (including S2), the ACC and the mid-insula. The activity in the somatosensory association areas and periaqueductal gray/midbrain was significantly modified, i.e. relatively decreased, when the subjects also were performing the maze task. The altered activity was accompanied by significantly lower ratings of pain during the cognitive task. In contrast, lateral orbitofrontal regions showed a relative increase of activity during pain combined with the maze task as compared to only pain, which suggests the possibility of the involvement of frontal cortex in modulation of regions processing pain.
  • Picciulin, M., Bolgan, M., & Burchardt, L. (2024). Rhythmic properties of Sciaena umbra calls across space and time in the Mediterranean Sea. PLOS ONE, 19(2): e0295589. doi:10.1371/journal.pone.0295589.

    Abstract

    In animals, the rhythmical properties of calls are known to be shaped by physical constraints and the necessity of conveying information. As a consequence, investigating rhythmical properties in relation to different environmental conditions can help to shed light on the relationship between environment and species behavior from an evolutionary perspective. Sciaena umbra (fam. Sciaenidae) male fish emit reproductive calls characterized by a simple isochronous, i.e., metronome-like rhythm (the so-called R-pattern). Here, S. umbra R-pattern rhythm properties were assessed and compared between four different sites located along the Mediterranean basin (Mallorca, Venice, Trieste, Crete); furthermore, for one location, two datasets collected 10 years apart were available. Recording sites differed in habitat types, vessel density and acoustic richness; despite this, S. umbra R-calls were isochronous across all locations. A degree of variability was found only when considering the beat frequency, which was temporally stable, but spatially variable, with the beat frequency being faster in one of the sites (Venice). Statistically, the beat frequency was found to be dependent on the season (i.e. month of recording) and potentially influenced by the presence of soniferous competitors and human-generated underwater noise. Overall, the general consistency in the measured rhythmical properties (isochrony and beat frequency) suggests their nature as a fitness-related trait in the context of the S. umbra reproductive behavior and calls for further evaluation as a communicative cue.
  • Pickering, M. J., & Majid, A. (2007). What are implicit causality and consequentiality? Language and Cognitive Processes, 22(5), 780-788. doi:10.1080/01690960601119876.

    Abstract

    Much work in psycholinguistics and social psychology has investigated the notion of implicit causality associated with verbs. Crinean and Garnham (2006) relate implicit causality to another phenomenon, implicit consequentiality. We argue that they and other researchers have confused the meanings of events and the reasons for those events, so that particular thematic roles (e.g., Agent, Patient) are taken to be causes or consequences of those events by definition. In accord with Garvey and Caramazza (1974), we propose that implicit causality and consequentiality are probabilistic notions that are straightforwardly related to the explicit causes and consequences of events and are analogous to other biases investigated in psycholinguistics.
  • Pijls, F., & Kempen, G. (1986). Een psycholinguïstisch model voor grammatische samentrekking. De Nieuwe Taalgids, 79, 217-234.
  • Di Pisa, G., Pereira Soares, S. M., Rothman, J., & Marinis, T. (2024). Being a heritage speaker matters: the role of markedness in subject-verb person agreement in Italian. Frontiers in Psychology, 15: 1321614. doi:10.3389/fpsyg.2024.1321614.

    Abstract

    This study examines online processing and offline judgments of subject-verb person agreement with a focus on how this is impacted by markedness in heritage speakers (HSs) of Italian. To this end, 54 adult HSs living in Germany and 40 homeland Italian speakers completed a self-paced reading task (SPRT) and a grammaticality judgment task (GJT). Markedness was manipulated by probing agreement with both first-person (marked) and third-person (unmarked) subjects. Agreement was manipulated by crossing first-person marked subjects with third-person unmarked verbs and vice versa. Crucially, person violations with 1st person subjects (e.g., io *suona la chitarra “I plays-3rd-person the guitar”) yielded significantly shorter RTs in the SPRT and higher accuracy in the GJT than the opposite error type (e.g., il giornalista *esco spesso “the journalist go-1st-person out often”). This effect is consistent with the claim that when the first element in the dependency is marked (first person), the parser generates stronger predictions regarding upcoming agreeing elements. These results nicely align with work from the same populations investigating the impact of morphological markedness on grammatical gender agreement, suggesting that markedness impacts agreement similarly in two distinct grammatical domains and that sensitivity to markedness is more prevalent for HSs.

    Additional information

    di_pisa_etal_2024_sup.DOCX
  • Pizarro-Guevara, J. S., & Garcia, R. (2024). Philippine Psycholinguistics. Annual Review of Linguistics, 10, 145-167. doi:10.1146/annurev-linguistics-031522-102844.

    Abstract

    Over the last decade, there has been a slow but steady accumulation of psycholinguistic research focusing on typologically diverse languages. In this review, we provide an overview of the psycholinguistic research on Philippine languages at the sentence level. We first discuss the grammatical features of these languages that figure prominently in existing research. We identify four linguistic domains that have received attention from language researchers and summarize the empirical terrain. We advance two claims that emerge across these different domains: (a) The agent-first pressure plays a central role in many of the findings, and (b) the generalization that the patient argument is the syntactically privileged argument cannot be reduced to frequency, but instead is an emergent phenomenon caused by the alignment of competing pressures toward an optimal candidate. We connect these language-specific claims to language-general theories of sentence processing.
  • Poletiek, F. H. (2000). De beoordelaar dobbelt niet - denkt hij. Nederlands Tijdschrift voor de Psychologie en haar Grensgebieden, 55(5), 246-249.
  • Poletiek, F. H., & Berndsen, M. (2000). Hypothesis testing as risk behaviour with regard to beliefs. Journal of Behavioral Decision Making, 13(1), 107-123. doi:10.1002/(SICI)1099-0771(200001/03)13:1<107:AID-BDM349>3.0.CO;2-P.

    Abstract

    In this paper hypothesis‐testing behaviour is compared to risk‐taking behaviour. It is proposed that choosing a suitable test for a given hypothesis requires making a preposterior analysis of two aspects of such a test: the probability of obtaining supporting evidence and the evidential value of this evidence. This consideration resembles the one a gambler makes when choosing among bets, each having a probability of winning and an amount to be won. A confirmatory testing strategy can be defined within this framework as a strategy directed at maximizing either the probability or the value of a confirming outcome. Previous theories on testing behaviour have focused on the human tendency to maximize the probability of a confirming outcome. In this paper, two experiments are presented in which participants tend to maximize the confirming value of the test outcome. Motivational factors enhance this tendency dependent on the context of the testing situation. Both this result and the framework are discussed in relation to other studies in the field of testing behaviour.
  • Prieto, P., & Torreira, F. (2007). The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics, 35, 473-500. doi:10.1016/j.wocn.2007.01.001.

    Abstract

    This paper addresses the validity of the segmental anchoring hypothesis for tonal landmarks (henceforth, SAH) as described in recent work by (among others) Ladd, Faulkner, D., Faulkner, H., & Schepman [1999. Constant ‘segmental’ anchoring of f0 movements under changes in speech rate. Journal of the Acoustical Society of America, 106, 1543–1554], Ladd [2003. Phonological conditioning of f0 target alignment. In: M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the XVth international congress of phonetic sciences, Vol. 1, (pp. 249–252). Barcelona: Causal Productions; in press. Segmental anchoring of pitch movements: Autosegmental association or gestural coordination? Italian Journal of Linguistics, 18 (1)]. The alignment of LH* prenuclear peaks with segmental landmarks in controlled speech materials in Peninsular Spanish is analyzed as a function of syllable structure type (open, closed) of the accented syllable, segmental composition, and speaking rate. Contrary to the predictions of the SAH, alignment was affected by syllable structure and speech rate in significant and consistent ways. In CV syllables the peak was located around the end of the accented vowel, and in CVC syllables around the beginning-mid part of the sonorant coda, but still far from the syllable boundary. With respect to the effects of rate, peaks were located earlier in the syllable as speech rate decreased. The results suggest that the accent gestures under study are synchronized with the syllable unit. In general, the longer the syllable, the longer the rise time. Thus the fundamental idea of the anchoring hypothesis can be taken as still valid. On the other hand, the tonal alignment patterns reported here can be interpreted as the outcome of distinct modes of gestural coordination in syllable-initial vs. syllable-final position: gestures at syllable onsets appear to be more tightly coordinated than gestures at the end of syllables [Browman, C. P., & Goldstein, L.M. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219–252; Browman, C. P., & Goldstein, L. (1988). Some notes on syllable structure in articulatory phonology. Phonetica, 45, 140–155; (1992). Articulatory Phonology: An overview. Phonetica, 49, 155–180; Krakow (1999). Physiological organization of syllables: A review. Journal of Phonetics, 27, 23–54; among others]. Intergestural timing can thus provide a unifying explanation for (1) the contrasting behavior between the precise synchronization of L valleys with the onset of the syllable and the more variable timing of the end of the f0 rise, and, more specifically, for (2) the right-hand tonal pressure effects and ‘undershoot’ patterns displayed by peaks at the ends of syllables and other prosodic domains.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2007). Sources of information for stress assignment in reading Greek. Applied Psycholinguistics, 28(4), 695 -720. doi:10.1017/S0142716407070373.

    Abstract

    To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual–orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on stress assignment and have provided evidence for a default pattern. Here we report two experiments with adult readers, in which we disentangle and quantify the effects of these three potential sources using nonword materials. Stimuli either resembled or did not resemble real words, to manipulate availability of lexical information; and they were presented with or without a diacritic, in a word-congruent or word-incongruent position, to contrast the relative importance of the three sources. Dual-task conditions, in which cognitive load during nonword reading was increased with phonological retention carrying a metrical pattern different from the default, did not support the hypothesis that the default arises from cumulative lexical activation in working memory.
  • Qin, S., Piekema, C., Petersson, K. M., Han, B., Luo, J., & Fernández, G. (2007). Probing the transformation of discontinuous associations into episodic memory: An event-related fMRI study. NeuroImage, 38(1), 212-222. doi:10.1016/j.neuroimage.2007.07.020.

    Abstract

    Using event-related functional magnetic resonance imaging, we identified brain regions involved in storing associations of events discontinuous in time into long-term memory. Participants were scanned while memorizing item-triplets including simultaneous and discontinuous associations. Subsequent memory tests showed that participants remembered both types of associations equally well. First, by constructing the contrast between the subsequent memory effects for discontinuous associations and simultaneous associations, we identified the left posterior parahippocampal region, dorsolateral prefrontal cortex, the basal ganglia, posterior midline structures, and the middle temporal gyrus as being specifically involved in transforming discontinuous associations into episodic memory. Second, we replicated that the prefrontal cortex and the medial temporal lobe (MTL), especially the hippocampus, are involved in associative memory formation in general. Our findings provide evidence for distinct neural operation(s) that support the binding and storing of discontinuous associations in memory. We suggest that top-down signals from the prefrontal cortex and MTL may trigger reactivation of the internal representation of the first event in posterior midline structures, thus allowing it to be associated with the second event. The dorsolateral prefrontal cortex together with the basal ganglia may support this encoding operation by executive and binding processes within working memory, and the posterior parahippocampal region may play a role in binding and memory formation.
  • Reis, A., Faísca, L., Mendonça, S., Ingvar, M., & Petersson, K. M. (2007). Semantic interference on a phonological task in illiterate subjects. Scandinavian Journal of Psychology, 48(1), 69-74. doi:10.1111/j.1467-9450.2006.00544.x.

    Abstract

    Previous research suggests that learning an alphabetic written language influences aspects of the auditory-verbal language system. In this study, we examined whether literacy influences the notion of words as phonological units independent of lexical semantics in literate and illiterate subjects. Subjects had to decide which item in a word- or pseudoword pair was phonologically longest. By manipulating the relationship between referent size and phonological length in three word conditions (congruent, neutral, and incongruent) we could examine to what extent subjects focused on form rather than meaning of the stimulus material. Moreover, the pseudoword condition allowed us to examine global phonological awareness independent of lexical semantics. The results showed that literate subjects performed significantly better than illiterate subjects in the neutral and incongruent word conditions as well as in the pseudoword condition. The illiterate group performed least well in the incongruent condition and significantly better in the pseudoword condition than in the neutral and incongruent word conditions. These results suggest that performance on phonological word-length comparisons is dependent on literacy. In addition, the results show that the illiterate participants are able to perceive and process phonological length, albeit less well than the literate subjects, when no semantic interference is present. In conclusion, the present results confirm and extend the finding that illiterate subjects are biased towards semantic-conceptual-pragmatic types of cognitive processing.
  • Roberts, L., Marinis, T., Felser, C., & Clahsen, H. (2007). Antecedent priming at trace positions in children’s sentence processing. Journal of Psycholinguistic Research, 36(2), 175-188. doi: 10.1007/s10936-006-9038-3.

    Abstract

    The present study examines whether children reactivate a moved constituent at its gap position and how children’s more limited working memory span affects the way they process filler-gap dependencies. 46 5–7 year-old children and 54 adult controls participated in a cross-modal picture priming experiment and underwent a standardized working memory test. The results revealed a statistically significant interaction between the participants’ working memory span and antecedent reactivation: High-span children (n = 19) and high-span adults (n = 22) showed evidence of antecedent priming at the gap site, while for low-span children and adults, there was no such effect. The antecedent priming effect in the high-span participants indicates that in both children and adults, dislocated arguments access their antecedents at gap positions. The absence of an antecedent reactivation effect in the low-span participants could mean that these participants required more time to integrate the dislocated constituent and reactivated the filler later during the sentence.
  • Roberts, L. (2007). Investigating real-time sentence processing in the second language. Stem-, Spraak- en Taalpathologie, 15, 115-127.

    Abstract

    Second language (L2) acquisition researchers have always been concerned with what L2 learners know about the grammar of the target language, but more recently there has been growing interest in how L2 learners put this knowledge to use in real-time sentence comprehension. In order to investigate real-time L2 sentence processing, the types of constructions studied and the methods used are often borrowed from the field of monolingual processing, but the overall issues are familiar from traditional L2 acquisition research. These cover questions relating to L2 learners’ native-likeness, whether or not L1 transfer is in evidence, and how individual differences such as proficiency and language experience might have an effect. The aim of this paper is to provide, for those unfamiliar with the field, an overview of the findings of a selection of behavioral studies that have investigated such questions, and to offer a picture of how L2 learners and bilinguals may process sentences in real time.
  • Roelofs, A. (2007). On the modelling of spoken word planning: Rejoinder to La Heij, Starreveld, and Kuipers (2007). Language and Cognitive Processes, 22(8), 1281-1286. doi:10.1080/01690960701462291.

    Abstract

    The author contests several claims of La Heij, Starreveld, and Kuipers (this issue) concerning the modelling of spoken word planning. The claims are about the relevance of error findings, the interaction between semantic and phonological factors, the explanation of word-word findings, the semantic relatedness paradox, and production rules.
  • Roelofs, A. (2007). A critique of simple name-retrieval models of spoken word planning. Language and Cognitive Processes, 22(8), 1237-1260. doi:10.1080/01690960701461582.

    Abstract

    Simple name-retrieval models of spoken word planning (Bloem & La Heij, 2003; Starreveld & La Heij, 1996) maintain (1) that there are two levels in word planning, a conceptual and a lexical phonological level, and (2) that planning a word in both object naming and oral reading involves the selection of a lexical phonological representation. Here, the name retrieval models are compared to more complex models with respect to their ability to account for relevant data. It appears that the name retrieval models cannot easily account for several relevant findings, including some speech error biases, types of morpheme errors, and context effects on the latencies of responding to pictures and words. New analyses of the latency distributions in previous studies also pose a challenge. More complex models account for all these findings. It is concluded that the name retrieval models are too simple and that the greater complexity of the other models is warranted.
  • Roelofs, A. (2007). Attention and gaze control in picture naming, word reading, and word categorizing. Journal of Memory and Language, 57(2), 232-251. doi:10.1016/j.jml.2006.10.001.

    Abstract

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107–142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88–125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings.
  • Roelofs, A., Özdemir, R., & Levelt, W. J. M. (2007). Influences of spoken word planning on speech recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(5), 900-913. doi:10.1037/0278-7393.33.5.900.

    Abstract

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
  • Roos, N. M., Chauvet, J., & Piai, V. (2024). The Concise Language Paradigm (CLaP), a framework for studying the intersection of comprehension and production: Electrophysiological properties. Brain Structure and Function. Advance online publication. doi:10.1007/s00429-024-02801-8.

    Abstract

    Studies investigating language commonly isolate one modality or process, focusing on comprehension or production. Here, we present a framework for a paradigm that combines both: the Concise Language Paradigm (CLaP), tapping into comprehension and production within one trial. The trial structure is identical across conditions, presenting a sentence followed by a picture to be named. We tested 21 healthy speakers with EEG to examine three time periods during a trial (sentence, pre-picture interval, picture onset), yielding contrasts of sentence comprehension, contextually and visually guided word retrieval, object recognition, and naming. In the CLaP, sentences are presented auditorily (constrained, unconstrained, reversed), and pictures appear as normal (constrained, unconstrained, bare) or scrambled objects. Imaging results revealed different evoked responses after sentence onset for normal and time-reversed speech. Further, we replicated the context effect of alpha-beta power decreases before picture onset for constrained relative to unconstrained sentences, and could clarify that this effect arises from power decreases following constrained sentences. Brain responses locked to picture-onset differed as a function of sentence context and picture type (normal vs. scrambled), and naming times were fastest for pictures in constrained sentences, followed by scrambled picture naming, and equally fast for bare and unconstrained picture naming. Finally, we also discuss the potential of the CLaP to be adapted to different focuses, using different versions of the linguistic content and tasks, in combination with electrophysiology or other imaging methods. These first results of the CLaP indicate that this paradigm offers a promising framework to investigate the language system.
  • Rösler, D., & Skiba, R. (1986). Ein vernetzter Lehrmaterial-Steinbruch für Deutsch als Zweitsprache (Projekt EKMAUS, FU Berlin). Deutsch Lernen: Zeitschrift für den Sprachunterricht mit ausländischen Arbeitnehmern, 2, 68-71. Retrieved from http://www.daz-didaktik.de/html/1986.html.
  • Rösler, D., & Skiba, R. (1988). Möglichkeiten für den Einsatz einer Lehrmaterial-Datenbank in der Lehrerfortbildung. Deutsch lernen, 14(1), 24-31.
  • Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.

    Abstract

    The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
  • Rowland, C. F., & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: what children do know? Journal of Child Language, 27(1), 157-181.

    Abstract

    The present paper reports an analysis of correct wh-question production and subject–auxiliary inversion errors in one child's early wh-question data (age 2;3.4 to 4;10.23). It is argued that two current movement rule accounts (DeVilliers, 1991; Valian, Lasser & Mandelbaum, 1992) cannot explain the patterning of early wh-questions. However, the data can be explained in terms of the child's knowledge of particular lexically-specific wh-word+auxiliary combinations, and the pattern of inversion and uninversion predicted from the relative frequencies of these combinations in the mother's speech. The results support the claim that correctly inverted wh-questions can be produced without access to a subject–auxiliary inversion rule and are consistent with the constructivist claim that a distributional learning mechanism that learns and reproduces lexically-specific formulae heard in the input can explain much of the early multi-word speech data. The implications of these results for movement rule-based and constructivist theories of grammatical development are discussed.
  • Rubianes, M., Drijvers, L., Muñoz, F., Jiménez-Ortega, L., Almeida-Rivera, T., Sánchez-García, J., Fondevila, S., Casado, P., & Martín-Loeches, M. (2024). The self-reference effect can modulate language syntactic processing even without explicit awareness: An electroencephalography study. Journal of Cognitive Neuroscience, 36(3), 460-474. doi:10.1162/jocn_a_02104.

    Abstract

    Although it is well established that self-related information can rapidly capture our attention and bias cognitive functioning, whether this self-bias can affect language processing remains largely unknown. In addition, there is an ongoing debate as to the functional independence of language processes, notably regarding the syntactic domain. Hence, this study investigated the influence of self-related content on syntactic speech processing. Participants listened to sentences that could contain morphosyntactic anomalies while the masked face identity (self, friend, or unknown faces) was presented for 16 msec preceding the critical word. The language-related ERP components (left anterior negativity [LAN] and P600) appeared for all identity conditions. However, the largest LAN effect followed by a reduced P600 effect was observed for self-faces, whereas a larger LAN with no reduction of the P600 was found for friend faces compared with unknown faces. These data suggest that both early and late syntactic processes can be modulated by self-related content. In addition, alpha power was more suppressed over the left inferior frontal gyrus only when self-faces appeared before the critical word. This may reflect higher semantic demands concomitant to early syntactic operations (around 150–550 msec). Our data also provide further evidence of self-specific response, as reflected by the N250 component. Collectively, our results suggest that identity-related information is rapidly decoded from facial stimuli and may impact core linguistic processes, supporting an interactive view of syntactic processing. This study provides evidence that the self-reference effect can be extended to syntactic processing.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • Rubio-Fernández, P. (2024). Cultural evolutionary pragmatics: Investigating the codevelopment and coevolution of language and social cognition. Psychological Review, 131(1), 18-35. doi:10.1037/rev0000423.

    Abstract

    Language and social cognition come together in communication, but their relation has been intensely contested. Here, I argue that these two distinctively human abilities are connected in a positive feedback loop, whereby the development of one cognitive skill boosts the development of the other. More specifically, I hypothesize that language and social cognition codevelop in ontogeny and coevolve in diachrony through the acquisition, mature use, and cultural evolution of reference systems (e.g., demonstratives: “this” vs. “that”; articles: “a” vs. “the”; pronouns: “I” vs. “you”). I propose to study the connection between reference systems and communicative social cognition across three parallel timescales—language acquisition, language use, and language change, as a new research program for cultural evolutionary pragmatics. Within that framework, I discuss the coevolution of language and communicative social cognition as cognitive gadgets, and introduce a new methodological approach to study how universals and cross-linguistic differences in reference systems may result in different developmental pathways to human social cognition.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

    In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together; however, they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • Sandberg, A., Lansner, A., Petersson, K. M., & Ekeberg, Ö. (2000). A palimpsest memory based on an incremental Bayesian learning rule. Neurocomputing, 32(33), 987-994. doi:10.1016/S0925-2312(00)00270-8.

    Abstract

    Capacity limited memory systems need to gradually forget old information in order to avoid catastrophic forgetting where all stored information is lost. This can be achieved by allowing new information to overwrite old, as in the so-called palimpsest memory. This paper describes a new such learning rule employed in an attractor neural network. The network does not exhibit catastrophic forgetting, has a capacity dependent on the learning time constant and exhibits recency effects in retrieval.
  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations are used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
