Publications

  • Lausberg, H., & Kita, S. (2002). Dissociation of right and left hand gesture spaces in split-brain patients. Cortex, 38(5), 883-886. doi:10.1016/S0010-9452(08)70062-5.

    Abstract

    The present study investigates hemispheric specialisation in the use of space in communicative gestures. For this purpose, we investigate split-brain patients in whom spontaneous and distinct right hand gestures can only be controlled by the left hemisphere and vice versa, the left hand only by the right hemisphere. On this anatomical basis, we can infer hemispheric specialisation from the performances of the right and left hands. In contrast to left hand dyspraxia in tasks that require language processing, split-brain patients utilise their left hands in a meaningful way in visuo-constructive tasks such as copying drawings or block-design. Therefore, we conjecture that split-brain patients are capable of using their left hands for the communication of the content of visuo-spatial animations via gestural demonstration. On this basis, we further examine the use of space in communicative gestures by the right and left hands. McNeill and Pedelty (1995) noted for the split-brain patient N.G. that her iconic right hand gestures were exclusively displayed in the right personal space. The present study investigates systematically if there is indication for neglect of the left personal space in right hand gestures in split-brain patients.
  • Ledberg, A., Fransson, P., Larsson, J., & Petersson, K. M. (2001). A 4D approach to the analysis of functional brain images: Application to fMRI data. Human Brain Mapping, 13, 185-198. doi:10.1002/hbm.1032.

    Abstract

    This paper presents a new approach to functional magnetic resonance imaging (FMRI) data analysis. The main difference lies in the view of what comprises an observation. Here we treat the data from one scanning session (comprising t volumes, say) as one observation. This is contrary to the conventional way of looking at the data where each session is treated as t different observations. Thus instead of viewing the v voxels comprising the 3D volume of the brain as the variables, we suggest the usage of the vt hypervoxels comprising the 4D volume of the brain-over-session as the variables. A linear model is fitted to the 4D volumes originating from different sessions. Parameter estimation and hypothesis testing in this model can be performed with standard techniques. The hypothesis testing generates 4D statistical images (SIs) to which any relevant test statistic can be applied. In this paper we describe two test statistics, one voxel based and one cluster based, that can be used to test a range of hypotheses. There are several benefits in treating the data from each session as one observation, two of which are: (i) the temporal characteristics of the signal can be investigated without an explicit model for the blood oxygenation level dependent (BOLD) contrast response function, and (ii) the observations (sessions) can be assumed to be independent and hence inference on the 4D SI can be made by nonparametric or Monte Carlo methods. The suggested 4D approach is applied to FMRI data and is shown to accurately detect the expected signal.
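    The core of this 4D approach can be sketched numerically: flatten each session's scan into one row of vt hypervoxels, fit one linear model across sessions, and, because sessions are independent, obtain corrected inference by a Monte Carlo permutation scheme. Below is a minimal illustration on synthetic data; all dimensions, the design matrix, and the max-statistic correction are invented for the example and are not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dimensions: s sessions (observations), v voxels, t time points.
    s, v, t = 8, 50, 20
    # Each session's 4D scan is flattened into one row of v*t hypervoxels.
    Y = rng.normal(size=(s, v * t))              # data matrix: one row per session
    X = np.column_stack([np.ones(s),             # intercept
                         np.repeat([0, 1], s // 2)])  # a between-session condition

    # Ordinary least squares fit of the linear model Y = X B + E.
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)

    # A contrast on the condition effect yields a 4D statistical image
    # (here a t-like map of length v*t, reshapeable to (v, t)).
    resid = Y - X @ B
    dof = s - X.shape[1]
    sigma2 = (resid ** 2).sum(axis=0) / dof
    c = np.array([0.0, 1.0])
    var_c = c @ np.linalg.inv(X.T @ X) @ c
    t_map = (c @ B) / np.sqrt(sigma2 * var_c)

    # Sessions are independent, so corrected inference can use permutations:
    # reshuffle the session labels and record the maximal null statistic.
    max_null = []
    for _ in range(200):
        perm = rng.permutation(s)
        Bp, *_ = np.linalg.lstsq(X[perm], Y, rcond=None)
        residp = Y - X[perm] @ Bp
        s2p = (residp ** 2).sum(axis=0) / dof
        max_null.append(np.abs((c @ Bp) / np.sqrt(s2p * var_c)).max())
    p_corrected = (np.sum(np.array(max_null) >= np.abs(t_map).max()) + 1) / 201
    ```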
  • Lemhöfer, K., Schriefers, H., & Indefrey, P. (2020). Syntactic processing in L2 depends on perceived reliability of the input: Evidence from P600 responses to correct input. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(10), 1948-1965. doi:10.1037/xlm0000895.

    Abstract

    In 3 ERP experiments, we investigated how experienced L2 speakers process natural and correct syntactic input that deviates from their own, sometimes incorrect, syntactic representations. Our previous study (Lemhöfer, Schriefers, & Indefrey, 2014) had shown that L2 speakers do engage in native-like syntactic processing of gender agreement but base this processing on their own idiosyncratic (and sometimes incorrect) grammars. However, as in other standard ERP studies, but different from realistic L2 input, the materials in that study contained a large proportion of incorrect sentences. In the present study, German speakers of Dutch read exclusively objectively correct Dutch sentences that did or did not contain subjective determiner “errors” (e.g., de boot “the boat,” which conflicts with the intuition of many German speakers that the correct phrase should be het boot). During reading for comprehension (Experiment 1), no syntax-related ERP responses for subjectively incorrect compared to correct phrases were observed. The same was true even when participants explicitly attended to and learned from the determiners in the sentences (Experiment 2). Only when participants judged the correctness of determiners in each sentence (Experiment 3) did a clear P600 appear. These results suggest that the full and native-like use of subjective grammars, as reflected in the P600 to subjective violations, occurs only when speakers have reason to mistrust the grammaticality of the input, either because of the nature of the task (grammaticality judgments) or because of the salient presence of incorrect sentences.
  • De León, L., & Levinson, S. C. (Eds.). (1992). Space in Mesoamerican languages [Special Issue]. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 45(6).
  • Lev-Ari, S., & Sebanz, N. (2020). Interacting with multiple partners improves communication skills. Cognitive Science, 44(4): e12836. doi:10.1111/cogs.12836.

    Abstract

    Successful communication is important for both society and people’s personal life. Here we show that people can improve their communication skills by interacting with multiple others, and that this improvement seems to come about by a greater tendency to take the addressee’s perspective when there are multiple partners. In Experiment 1, during a training phase, participants described figures to a new partner in each round or to the same partner in all rounds. Then all participants interacted with a new partner and their recordings from that round were presented to naïve listeners. Participants who had interacted with multiple partners during training were better understood. This occurred despite the fact that the partners had not provided the participants with any input other than feedback on comprehension during the interaction. In Experiment 2, participants were asked to provide descriptions to a different future participant in each round or to the same future participant in all rounds. Next they performed a surprise memory test designed to tap memory for global details, in line with the addressee’s perspective. Those who had provided descriptions for multiple future participants performed better. These results indicate that people can improve their communication skills by interacting with multiple people, and that this advantage might be due to a greater tendency to take the addressee’s perspective in such cases. Our findings thus show how the social environment can influence our communication skills by shaping our own behavior during interaction in a manner that promotes the development of our communication skills.
  • Levelt, W. J. M. (2002). Picture naming and word frequency: Comments on Alario, Costa and Caramazza, Language and Cognitive Processes, 17(3), 299-319. Language and Cognitive Processes, 17(6), 663-671. doi:10.1080/01690960143000443.

    Abstract

    This commentary on Alario et al. (2002) addresses two issues: (1) Different from what the authors suggest, there are no theories of production claiming the phonological word to be the upper bound of advance planning before the onset of articulation; (2) Their picture naming study of word frequency effects on speech onset is inconclusive for lack of a crucial control, viz., of object recognition latency. This is a perennial problem in picture naming studies of word frequency and age of acquisition effects.
  • Levelt, C. C., Schiller, N. O., & Levelt, W. J. M. (1999). A developmental grammar for syllable structure in the production of child language. Brain and Language, 68, 291-299.

    Abstract

    The order of acquisition of Dutch syllable types by first language learners is analyzed as following from an initial ranking and subsequent rerankings of constraints in an optimality theoretic grammar. Initially, structural constraints are all ranked above faithfulness constraints, leading to core syllable (CV) productions only. Subsequently, faithfulness gradually rises to the highest position in the ranking, allowing more and more marked syllable types to appear in production. Local conjunctions of Structural constraints allow for a more detailed analysis.
  • Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, 1-38. doi:10.1017/S0140525X99001776.

    Abstract

    Preparing words in speech production is normally a fast and accurate process. We generate them two or three per second in fluent conversation; and overtly naming a clear picture of an object can easily be initiated within 600 msec after picture onset. The underlying process, however, is exceedingly complex. The theory reviewed in this target article analyzes this process as staged and feedforward. After a first stage of conceptual preparation, word generation proceeds through lexical selection, morphological and phonological encoding, phonetic encoding, and articulation itself. In addition, the speaker exerts some degree of output control, by monitoring of self-produced internal and overt speech. The core of the theory, ranging from lexical selection to the initiation of phonetic encoding, is captured in a computational model, called WEAVER++. Both the theory and the computational model have been developed in interaction with reaction time experiments, particularly in picture naming or related word production paradigms, with the aim of accounting for the real-time processing in normal word production. A comprehensive review of theory, model, and experiments is presented. The model can handle some of the main observations in the domain of speech errors (the major empirical domain for most other theories of lexical access), and the theory opens new ways of approaching the cerebral organization of speech production by way of high-temporal-resolution imaging.
  • Levelt, W. J. M. (1992). Accessing words in speech production: Stages, processes and representations. Cognition, 42, 1-22. doi:10.1016/0010-0277(92)90038-J.

    Abstract

    This paper introduces a special issue of Cognition on lexical access in speech production. Over the last quarter century, the psycholinguistic study of speaking, and in particular of accessing words in speech, received a major new impetus from the analysis of speech errors, dysfluencies and hesitations, from aphasiology, and from new paradigms in reaction time research. The emerging theoretical picture partitions the accessing process into two subprocesses, the selection of an appropriate lexical item (a “lemma”) from the mental lexicon, and the phonological encoding of that item, that is, the computation of a phonetic program for the item in the context of utterance. These two theoretical domains are successively introduced by outlining some core issues that have been or still have to be addressed. The final section discusses the controversial question whether phonological encoding can affect lexical selection. This partitioning is also followed in this special issue as a whole. There are, first, four papers on lexical selection, then three papers on phonological encoding, and finally one on the interaction between selection and phonological encoding.
  • Levelt, W. J. M. (2001). De vlieger die (onverwacht) wel opgaat [The kite that (unexpectedly) does fly]. Natuur & Techniek, 69(6), 60.
  • Levelt, W. J. M. (2001). Defining dyslexia. Science, 292, 1300-1301.
  • Levelt, W. J. M. (1992). Fairness in reviewing: A reply to O'Connell. Journal of Psycholinguistic Research, 21, 401-403.
  • Levelt, W. J. M. (1999). Models of word production. Trends in Cognitive Sciences, 3, 223-232.

    Abstract

    Research on spoken word production has been approached from two angles. In one research tradition, the analysis of spontaneous or induced speech errors led to models that can account for speech error distributions. In another tradition, the measurement of picture naming latencies led to chronometric models accounting for distributions of reaction times in word production. Both kinds of models are, however, dealing with the same underlying processes: (1) the speaker’s selection of a word that is semantically and syntactically appropriate; (2) the retrieval of the word’s phonological properties; (3) the rapid syllabification of the word in context; and (4) the preparation of the corresponding articulatory gestures. Models of both traditions explain these processes in terms of activation spreading through a localist, symbolic network. By and large, they share the main levels of representation: conceptual/semantic, syntactic, phonological and phonetic. They differ in various details, such as the amount of cascading and feedback in the network. These research traditions have begun to merge in recent years, leading to highly constructive experimentation. Currently, they are like two similar knives honing each other. A single pair of scissors is in the making.
  • Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). Multiple perspectives on lexical access [authors' response]. Behavioral and Brain Sciences, 22, 61-72. doi:10.1017/S0140525X99451775.
  • Levelt, W. J. M. (2020). On becoming a physicist of mind. Annual Review of Linguistics, 6(1), 1-23. doi:10.1146/annurev-linguistics-011619-030256.

    Abstract

    In 1976, the German Max Planck Society established a new research enterprise in psycholinguistics, which became the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands. I was fortunate enough to be invited to direct this institute. It enabled me, with my background in visual and auditory psychophysics and the theory of formal grammars and automata, to develop a long-term chronometric endeavor to dissect the process of speaking. It led, among other work, to my book Speaking (1989) and to my research team's article in Brain and Behavioral Sciences “A Theory of Lexical Access in Speech Production” (1999). When I later became president of the Royal Netherlands Academy of Arts and Sciences, I helped initiate the Women for Science research project of the Inter Academy Council, a project chaired by my physicist sister at the National Institute of Standards and Technology. As an emeritus I published a comprehensive History of Psycholinguistics (2013). As will become clear, many people inspired and joined me in these undertakings.
  • Levelt, W. J. M. (1973). Recente ontwikkelingen in de taalpsychologie [Recent developments in the psychology of language]. Forum der Letteren, 14(4), 235-254.
  • Levelt, W. J. M. (2001). Spoken word production: A theory of lexical access. Proceedings of the National Academy of Sciences, 98, 13464-13471. doi:10.1073/pnas.231459498.

    Abstract

    A core operation in speech production is the preparation of words from a semantic base. The theory of lexical access reviewed in this article covers a sequence of processing stages beginning with the speaker’s focusing on a target concept and ending with the initiation of articulation. The initial stages of preparation are concerned with lexical selection, which is zooming in on the appropriate lexical item in the mental lexicon. The following stages concern form encoding, i.e., retrieving a word’s morphemic phonological codes, syllabifying the word, and accessing the corresponding articulatory gestures. The theory is based on chronometric measurements of spoken word production, obtained, for instance, in picture-naming tasks. The theory is largely computationally implemented. It provides a handle on the analysis of multiword utterance production as well as a guide to the analysis and design of neuroimaging studies of spoken utterance production.
  • Levelt, W. J. M. (1992). Sprachliche Musterbildung und Mustererkennung. Nova Acta Leopoldina NF, 67(281), 357-370.
  • Levelt, W. J. M., & Bonarius, M. (1973). Suffixes as deep structure clues. Methodology and Science, 6(1), 7-37.

    Abstract

    Recent work on sentence recognition suggests that listeners use their knowledge of the language to directly infer deep structure syntactic relations from surface structure markers. Suffixes may be such clues, especially in agglutinative languages. A cross-language (Dutch-Finnish) experiment is reported, designed to investigate whether the suffix structure of Finnish words (as opposed to suffixless Dutch words) can facilitate prompted recall of sentences in case these suffixes differentiate between possible deep structures. The experiment, in which 80 subjects recall sentences at the occasion of prompt words, gives only slight confirmatory evidence. Meanwhile, another prompted recall effect (Blumenthal's) could not be replicated.
  • Levelt, W. J. M., Richardson, G., & La Heij, W. (1985). Pointing and voicing in deictic expressions. Journal of Memory and Language, 24, 133-164. doi:10.1016/0749-596X(85)90021-X.

    Abstract

    The present paper studies how, in deictic expressions, the temporal interdependency of speech and gesture is realized in the course of motor planning and execution. Two theoretical positions were compared. On the “interactive” view the temporal parameters of speech and gesture are claimed to be the result of feedback between the two systems throughout the phases of motor planning and execution. The alternative “ballistic” view, however, predicts that the two systems are independent during the phase of motor execution, the temporal parameters having been preestablished in the planning phase. In four experiments subjects were requested to indicate which of an array of referent lights was momentarily illuminated. This was done by pointing to the light and/or by using a deictic expression (this/that light). The temporal and spatial course of the pointing movement was automatically registered by means of a Selspot opto-electronic system. By analyzing the moments of gesture initiation and apex, and relating them to the moments of speech onset, it was possible to show that, for deictic expressions, the ballistic view is very nearly correct.
  • Levelt, W. J. M. (1992). The perceptual loop theory not disconfirmed: A reply to MacKay. Consciousness and Cognition, 1, 226-230. doi:10.1016/1053-8100(92)90062-F.

    Abstract

    In his paper, MacKay reviews his Node Structure theory of error detection, but precedes it with a critical discussion of the Perceptual Loop theory of self-monitoring proposed in Levelt (1983, 1989). The present commentary is concerned with this latter critique and shows that there are more than casual problems with MacKay’s argumentation.
  • Levelt, W. J. M. (2001). Woorden ophalen [Retrieving words]. Natuur & Techniek, 69(10), 74.
  • Levinson, S. C., Kita, S., Haun, D. B. M., & Rasch, B. H. (2002). Returning the tables: Language affects spatial reasoning. Cognition, 84(2), 155-188. doi:10.1016/S0010-0277(02)00045-8.

    Abstract

    Li and Gleitman (Turning the tables: language and spatial reasoning. Cognition, in press) seek to undermine a large-scale cross-cultural comparison of spatial language and cognition which claims to have demonstrated that language and conceptual coding in the spatial domain covary (see, for example, Space in language and cognition: explorations in linguistic diversity. Cambridge: Cambridge University Press, in press; Language 74 (1998) 557): the most plausible interpretation is that different languages induce distinct conceptual codings. Arguing against this, Li and Gleitman attempt to show that in an American student population they can obtain any of the relevant conceptual codings just by varying spatial cues, holding language constant. They then argue that our findings are better interpreted in terms of ecologically-induced distinct cognitive styles reflected in language. Linguistic coding, they argue, has no causal effects on non-linguistic thinking – it simply reflects antecedently existing conceptual distinctions. We here show that Li and Gleitman did not make a crucial distinction between frames of spatial reference relevant to our line of research. We report a series of experiments designed to show that they have, as a consequence, misinterpreted the results of their own experiments, which are in fact in line with our hypothesis. Their attempts to reinterpret the large cross-cultural study, and to enlist support from animal and infant studies, fail for the same reasons. We further try to discern exactly what theory drives their presumption that language can have no cognitive efficacy, and conclude that their position is undermined by a wide range of considerations.
  • Levinson, S. C. (2002). Time for a linguistic anthropology of time. Current Anthropology, 43(4), S122-S123. doi:10.1086/342214.
  • Levinson, S. C. (1999). Maxim. Journal of Linguistic Anthropology, 9, 144-147. doi:10.1525/jlin.1999.9.1-2.144.
  • Levinson, S. C. (1992). Primer for the field investigation of spatial description and conception. Pragmatics, 2(1), 5-47.
  • Levshina, N. (2020). Efficient trade-offs as explanations in functional linguistics: some problems and an alternative proposal. Revista da Abralin, 19(3), 50-78. doi:10.25189/rabralin.v19i3.1728.

    Abstract

    The notion of efficient trade-offs is frequently used in functional linguistics in order to explain language use and structure. In this paper I argue that this notion is more confusing than enlightening. Not every negative correlation between parameters represents a real trade-off. Moreover, trade-offs are usually reported between pairs of variables, without taking into account the role of other factors. These and other theoretical issues are illustrated in a case study of linguistic cues used in expressing “who did what to whom”: case marking, rigid word order and medial verb position. The data are taken from the Universal Dependencies corpora in 30 languages and annotated corpora of online news from the Leipzig Corpora collection. We find that not all cues are correlated negatively, which questions the assumption of language as a zero-sum game. Moreover, the correlations between pairs of variables change when we incorporate the third variable. Finally, the relationships between the variables are not always bi-directional. The study also presents a causal model, which can serve as a more appropriate alternative to trade-offs.
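    The abstract's statistical point, that a pairwise correlation can change once a third variable is controlled for, can be illustrated with a toy partial-correlation sketch. The variables and effect sizes below are invented for the illustration and are not the paper's corpus data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    z = rng.normal(size=n)                  # a hypothetical third variable
    x = -z + rng.normal(scale=0.5, size=n)  # cue 1 (say, case marking)
    y = z + rng.normal(scale=0.5, size=n)   # cue 2 (say, rigid word order)

    def corr(a, b):
        return float(np.corrcoef(a, b)[0, 1])

    def partial_corr(a, b, control):
        # Residualise both variables on the control, then correlate residuals.
        ra = a - np.polyval(np.polyfit(control, a, 1), control)
        rb = b - np.polyval(np.polyfit(control, b, 1), control)
        return corr(ra, rb)

    r_xy = corr(x, y)               # strongly negative: looks like a trade-off
    r_xy_z = partial_corr(x, y, z)  # near zero once z is controlled for
    ```

    The pairwise correlation suggests the two cues trade off, yet both merely track the third variable; holding it constant, the apparent trade-off largely disappears.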
  • Lewis, A. G. (2020). Balancing exogenous and endogenous cortical rhythms for speech and language requires a lot of entraining: A commentary on Meyer, Sun & Martin (2020). Language, Cognition and Neuroscience, 35(9), 1133-1137. doi:10.1080/23273798.2020.1734640.
  • Liang, S., Deng, W., Li, X., Wang, Q., Greenshaw, A. J., Guo, W., Kong, X., Li, M., Zhao, L., Meng, Y., Zhang, C., Yu, H., Li, X.-m., Ma, X., & Li, T. (2020). Aberrant posterior cingulate connectivity classify first-episode schizophrenia from controls: A machine learning study. Schizophrenia Research, 220, 187-193. doi:10.1016/j.schres.2020.03.022.

    Abstract

    Background

    Posterior cingulate cortex (PCC) is a key aspect of the default mode network (DMN). Aberrant PCC functional connectivity (FC) is implicated in schizophrenia, but the potential for PCC related changes as biological classifier of schizophrenia has not yet been evaluated.
    Methods

    We conducted a data-driven approach using resting-state functional MRI data to explore differences in PCC-based region- and voxel-wise FC patterns, to distinguish between patients with first-episode schizophrenia (FES) and demographically matched healthy controls (HC). Discriminative PCC FCs were selected via false discovery rate estimation. A gradient boosting classifier was trained and validated based on 100 FES vs. 93 HC. Subsequently, classification models were tested in an independent dataset of 87 FES patients and 80 HC using resting-state data acquired on a different MRI scanner.
    Results

    Patients with FES had reduced connectivity between PCC and frontal areas, left parahippocampal regions, left anterior cingulate cortex, and right inferior parietal lobule, but hyperconnectivity with left lateral temporal regions. Predictive voxel-wise clusters were similar to region-wise selected brain areas functionally connected with PCC in relation to discriminating FES from HC subject categories. Region-wise analysis of FCs yielded a relatively high predictive level for schizophrenia, with an average accuracy of 72.28% in the independent samples, while selected voxel-wise connectivity yielded an accuracy of 68.72%.
    Conclusion

    FES exhibited a pattern of both increased and decreased PCC-based connectivity, but was related to predominant hypoconnectivity between PCC and brain areas associated with DMN, that may be a useful differential feature revealing underpinnings of neuropathophysiology for schizophrenia.
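    The pipeline described in the Methods, selecting discriminative connectivity features by false discovery rate estimation, training a gradient boosting classifier, and testing it on an independent sample, can be sketched as follows. This is a toy reconstruction on synthetic data, not the authors' code; the feature counts and effect sizes are invented, and only the sample sizes echo the abstract.

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectFdr, f_classif
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(42)

    def make_sample(n_pat, n_con, n_feat=300, n_inform=20):
        """Synthetic connectivity features: patients differ on a few of them."""
        X = rng.normal(size=(n_pat + n_con, n_feat))
        y = np.r_[np.ones(n_pat), np.zeros(n_con)]
        X[:n_pat, :n_inform] += 0.8   # group difference on informative features
        return X, y

    # Discovery sample (cf. 100 FES vs. 93 HC) and an independent test sample
    # (cf. 87 FES vs. 80 HC, acquired on a different scanner).
    X_train, y_train = make_sample(100, 93)
    X_test, y_test = make_sample(87, 80)

    # FDR-based selection of discriminative features, fit on training data only,
    # followed by a gradient boosting classifier.
    selector = SelectFdr(f_classif, alpha=0.05).fit(X_train, y_train)
    clf = GradientBoostingClassifier(random_state=0)
    clf.fit(selector.transform(X_train), y_train)

    # Generalisation accuracy on the held-out, independent sample.
    acc = clf.score(selector.transform(X_test), y_test)
    ```

    The important design choice mirrored here is that feature selection is fitted on the discovery sample alone, so the accuracy on the independent sample is not inflated by selection bias.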
  • Liao, Y., Flecken, M., Dijkstra, K., & Zwaan, R. A. (2020). Going places in Dutch and Mandarin Chinese: Conceptualising the path of motion cross-linguistically. Language, Cognition and Neuroscience, 35(4), 498-520. doi:10.1080/23273798.2019.1676455.

    Abstract

    We study to what extent linguistic differences in grammatical aspect systems and verb lexicalisation patterns of Dutch and Mandarin Chinese affect how speakers conceptualise the path of motion in motion events, using description and memory tasks. We hypothesised that speakers of the two languages would show different preferences towards the selection of endpoint-, trajectory- or location-information in Endpoint-oriented (not reached) events, whilst showing a similar bias towards encoding endpoints in Endpoint-reached events. Our findings show that (1) groups did not differ in endpoint encoding and memory for both event types; (2) Dutch speakers conceptualised Endpoint-oriented motion focusing on the trajectory, whereas Chinese speakers focused on the location of the moving entity. In addition, we report detailed linguistic patterns of how grammatical aspect, verb semantics and adjuncts containing path-information are combined in the two languages. Results are discussed in relation to typologies of motion expression and event cognition theory.

    Additional information

    Supplemental material
  • Lingwood, J., Levy, R., Billington, J., & Rowland, C. F. (2020). Barriers and solutions to participation in family-based education interventions. International Journal of Social Research Methodology, 23(2), 185-198. doi:10.1080/13645579.2019.1645377.

    Abstract

    The fact that many sub-populations do not take part in research, especially participants from lower socioeconomic (SES) backgrounds, is a serious problem in education research. To increase the participation of such groups we must discover what social, economic and practical factors prevent participation, and how to overcome these barriers. In the current paper, we review the literature on this topic, before describing a case study that demonstrates four potential solutions to four barriers to participation in a shared reading intervention for families from lower SES backgrounds. We discuss the implications of our findings for family-based interventions more generally, and the difficulty of balancing strategies to encourage participation with adhering to the methodological integrity of a research study.

    Additional information

    Supplemental material
  • Lingwood, J., Billington, J., & Rowland, C. F. (2020). Evaluating the effectiveness of a ‘real‐world’ shared reading intervention for preschool children and their families: A randomised controlled trial. Journal of Research in Reading, 43(3), 249-271. doi:10.1111/1467-9817.12301.

    Abstract

    Background: Shared reading interventions can impact positively on preschool children’s language development and on their caregivers’ attitudes/behaviours towards reading. However, a number of barriers may discourage families from engaging with these interventions, particularly families from lower socio-economic status (SES) backgrounds. We investigated how families from such backgrounds responded to an intervention designed explicitly to overcome these barriers.
    Methods: In a preregistered cluster randomised controlled trial, 85 lower SES families and their 3-year-old to 4-year-old children from 10 different preschools were randomly allocated to take part in The Reader’s Shared Reading programme (intervention) or an existing ‘Story Time’ group at a library (control) once a week for 8 weeks. Three outcome measures were assessed at baseline and post intervention: (1) attendance, (2) enjoyment of the reading groups and (3) caregivers’ knowledge of, attitudes and behaviours towards reading. A fourth outcome measure, children’s vocabulary, was assessed at baseline and 4 weeks post intervention.
    Results: Families were significantly more likely to attend the intervention group and rated it more favourably, compared with the control group. However, there were no significant effects on caregivers’ knowledge, attitudes and behaviours or on children’s language.
    Conclusion: The intervention was only successful in engaging families from disadvantaged backgrounds in shared reading. Implications for the use, duration and intensity of shared reading interventions are discussed.

    Additional information

    Data, scripts and output files
  • Long, M., Vega-Mendoza, M., Rohde, H., Sorace, A., & Bak, T. H. (2020). Understudied factors contributing to variability in cognitive performance related to language learning. Bilingualism: Language and Cognition, 23(4), 801-811. doi:10.1017/S1366728919000749.

    Abstract

    While much of the literature on bilingualism and cognition focuses on group comparisons (monolinguals vs bilinguals or language learners vs controls), here we examine the potential differential effects of intensive language learning on subjects with distinct language experiences and demographic profiles. Using an individual differences approach, we assessed attentional performance from 105 university-educated Gaelic learners aged 21–85. Participants were tested before and after beginner, elementary, and intermediate courses using tasks measuring (i) sustained attention, (ii) inhibition, and (iii) attention switching. We examined the relationship between attentional performance and Gaelic level, previous language experience, gender, and age. Gaelic level predicted attention switching performance: those in higher levels initially outperformed lower levels; however, lower levels improved the most. Age also predicted performance: as age increased, attention switching decreased. Nevertheless, age did not interact with session for any attentional measure, thus the impact of language learning on cognition was detectable across the lifespan.
  • Long, M., Rohde, H., & Rubio-Fernandez, P. (2020). The pressure to communicate efficiently continues to shape language use later in life. Scientific Reports, 10: 8214. doi:10.1038/s41598-020-64475-6.

    Abstract

    Language use is shaped by a pressure to communicate efficiently, yet the tendency towards redundancy is said to increase in older age. The longstanding assumption is that saying more than is necessary is inefficient and may be driven by age-related decline in inhibition (i.e. the ability to filter out irrelevant information). However, recent work proposes an alternative account of efficiency: In certain contexts, redundancy facilitates communication (e.g., when the colour or size of an object is perceptually salient and its mention aids the listener’s search). A critical question follows: Are older adults indiscriminately redundant, or do they modulate their use of redundant information to facilitate communication? We tested efficiency and cognitive capacities in 200 adults aged 19–82. Irrespective of age, adults with better attention switching skills were redundant in efficient ways, demonstrating that the pressure to communicate efficiently continues to shape language use later in life.

    Additional information

    supplementary table S1 dataset 1
  • Macuch Silva, V., Holler, J., Ozyurek, A., & Roberts, S. G. (2020). Multimodality and the origin of a novel communication system in face-to-face interaction. Royal Society Open Science, 7: 182056. doi:10.1098/rsos.182056.

    Abstract

    Face-to-face communication is multimodal at its core: it consists of a combination of vocal and visual signalling. However, current evidence suggests that, in the absence of an established communication system, visual signalling, especially in the form of visible gesture, is a more powerful form of communication than vocalisation, and therefore likely to have played a primary role in the emergence of human language. This argument is based on experimental evidence of how vocal and visual modalities (i.e., gesture) are employed to communicate about familiar concepts when participants cannot use their existing languages. To investigate this further, we introduce an experiment where pairs of participants performed a referential communication task in which they described unfamiliar stimuli in order to reduce reliance on conventional signals. Visual and auditory stimuli were described in three conditions: using visible gestures only, using non-linguistic vocalisations only and given the option to use both (multimodal communication). The results suggest that even in the absence of conventional signals, gesture is a more powerful mode of communication compared to vocalisation, but that there are also advantages to multimodality compared to using gesture alone. Participants with an option to produce multimodal signals had comparable accuracy to those using only gesture, but gained an efficiency advantage. The analysis of the interactions between participants showed that interactants developed novel communication systems for unfamiliar stimuli by deploying different modalities flexibly to suit their needs and by taking advantage of multimodality when required.
  • Maess, B., Friederici, A. D., Damian, M., Meyer, A. S., & Levelt, W. J. M. (2002). Semantic category interference in overt picture naming: Sharpening current density localization by PCA. Journal of Cognitive Neuroscience, 14(3), 455-462. doi:10.1162/089892902317361967.

    Abstract

    The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures out of a sequence of items from the same semantic category than from different categories. Participants named pictures blockwise either in the context of same- or mixed-category items while the brain response was registered using magnetoencephalography (MEG). Fifteen out of 20 participants showed longer response latencies in the same-category compared to the mixed-category condition. Event-related MEG signals for the participants demonstrating the interference effect were submitted to a current source density (CSD) analysis. As a new approach, a principal component analysis was applied to decompose the grand average CSD distribution into spatial subcomponents (factors). The spatial factor indicating left temporal activity revealed significantly different activation for the same-category compared to the mixed-category condition in the time window between 150 and 225 msec post picture onset. These findings indicate a major involvement of the left temporal cortex in the semantic interference effect. As this effect has been shown to take place at the level of lexical selection, the data suggest that the left temporal cortex supports processes of lexical retrieval during production.
  • Mai, A. (2020). Phonetic effects of onset complexity on the English syllable. Laboratory phonology, 11(1): 4. doi:10.5334/labphon.148.

    Abstract

    Although onsets do not arbitrate stress placement in English categorically, results from Kelly (2004) and Ryan (2014) suggest that English stress assignment is nevertheless sensitive to onset complexity. Phonetic work on languages in which onsets participate in categorical weight criteria shows that onsets contribute to stress assignment through their phonetic impact on the nucleus, primarily through their effect on nucleus energy (Gordon, 2005). Onsets in English probabilistically participate in weight-based processes, and here it is predicted that they impact the phonetic realization of the syllable similar to the way that onsets do in languages with categorical onset weight criteria. To test this prediction, speakers in this study produced monosyllabic English words varying in onset complexity, and measures of duration, intensity, and f0 were collected. Results of the current study are consistent with the predictions of Gordon’s perceptual account of categorical weight, showing that integrated intensity of the rime is incapable of driving onset weight behavior in English. Furthermore, results indicate that onsets impact the shape of the intensity envelope in a manner consistent with explanations for gradient onset weight that appeal to onset influence on the perceptual center (Ryan, 2014). Together, these results show that cues to gradient weight act independently of primary cues to categorical weight to probabilistically impact weight sensitive stress assignment in English.
  • Majid, A. (2002). Frames of reference and language concepts. Trends in Cognitive Sciences, 6(12), 503-504. doi:10.1016/S1364-6613(02)02024-7.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2002). The influence of animacy on relative clause processing. Journal of Memory and Language, 47(1), 50-68. doi:10.1006/jmla.2001.2837.

    Abstract

    In previous research it has been shown that subject relative clauses are easier to process than object relative clauses. Several theories have been proposed that explain the difference on the basis of different theoretical perspectives. However, previous research tested relative clauses only with animate protagonists. In a corpus study of Dutch and German newspaper texts, we show that animacy is an important determinant of the distribution of subject and object relative clauses. In two experiments in Dutch, in which the animacy of the object of the relative clause is varied, no difference in reading time is obtained between subject and object relative clauses when the object is inanimate. The experiments show that animacy influences the processing difficulty of relative clauses. These results can only be accounted for by current major theories of relative clause processing when additional assumptions are introduced, and at the same time show that the possibility of semantically driven analysis can be considered as a serious alternative.
  • Mak, M., De Vries, C., & Willems, R. M. (2020). The influence of mental imagery instructions and personality characteristics on reading experiences. Collabra: Psychology, 6(1): 43. doi:10.1525/collabra.281.

    Abstract

    It is well established that readers form mental images when reading a narrative. However, the consequences of mental imagery (i.e. the influence of mental imagery on the way people experience stories) are still unclear. Here we manipulated the amount of mental imagery that participants engaged in while reading short literary stories in two experiments. Participants received pre-reading instructions aimed at encouraging or discouraging mental imagery. After reading, participants answered questions about their reading experiences. We also measured individual trait differences that are relevant for literary reading experiences. The results from the first experiment suggest an important role of mental imagery in determining reading experiences. However, the results from the second experiment show that mental imagery is only a weak predictor of reading experiences compared to individual (trait) differences in how imaginative participants were. Moreover, the influence of mental imagery instructions did not extend to reading experiences unrelated to mental imagery. The implications of these results for the relationship between mental imagery and reading experiences are discussed.
  • Mandal, S., Best, C. T., Shaw, J., & Cutler, A. (2020). Bilingual phonology in dichotic perception: A case study of Malayalam and English voicing. Glossa: A Journal of General Linguistics, 5(1): 73. doi:10.5334/gjgl.853.

    Abstract

    Listeners often experience cocktail-party situations, encountering multiple ongoing conversations while tracking just one. Capturing the words spoken under such conditions requires selective attention and processing, which involves using phonetic details to discern phonological structure. How do bilinguals accomplish this in L1-L2 competition? We addressed that question using a dichotic listening task with fluent Malayalam-English bilinguals, in which they were presented with synchronized nonce words, one in each language in separate ears, with competing onsets of a labial stop (Malayalam) and a labial fricative (English), both voiced or both voiceless. They were required to attend to the Malayalam or the English item, in separate blocks, and report the initial consonant they heard. We found that perceptual intrusions from the unattended to the attended language were influenced by voicing, with more intrusions on voiced than voiceless trials. This result supports our proposal for the feature specification of consonants in Malayalam-English bilinguals, which makes use of privative features, underspecification and the “standard approach” to laryngeal features, as against “laryngeal realism”. Given this representational account, we observe that intrusions result from phonetic properties in the unattended signal being assimilated to the closest matching phonological category in the attended language, and are more likely for segments with a greater number of phonological feature specifications.
  • Manhardt, F., Ozyurek, A., Sumer, B., Mulder, K., Karadöller, D. Z., & Brouwer, S. (2020). Iconicity in spatial language guides visual attention: A comparison between signers’ and speakers’ eye gaze during message preparation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(9), 1735-1753. doi:10.1037/xlm0000843.

    Abstract

    To talk about space, spoken languages rely on arbitrary and categorical forms (e.g., left, right). In sign languages, however, the visual–spatial modality allows for iconic encodings (motivated form-meaning mappings) of space in which form and location of the hands bear resemblance to the objects and spatial relations depicted. We assessed whether the iconic encodings in sign languages guide visual attention to spatial relations differently than spatial encodings in spoken languages during message preparation at the sentence level. Using a visual world production eye-tracking paradigm, we compared 20 deaf native signers of Sign-Language-of-the-Netherlands and 20 Dutch speakers’ visual attention to describe left versus right configurations of objects (e.g., “pen is to the left/right of cup”). Participants viewed 4-picture displays in which each picture contained the same 2 objects but in different spatial relations (lateral [left/right], sagittal [front/behind], topological [in/on]) to each other. They described the target picture (left/right) highlighted by an arrow. During message preparation, signers, but not speakers, experienced increasing eye-gaze competition from other spatial configurations. This effect was absent during picture viewing prior to message preparation of relational encoding. Moreover, signers’ visual attention to lateral and/or sagittal relations was predicted by the type of iconicity (i.e., object and space resemblance vs. space resemblance only) in their spatial descriptions. Findings are discussed in relation to how “thinking for speaking” differs from “thinking for signing” and how iconicity can mediate the link between language and human experience and guides signers’ but not speakers’ attention to visual aspects of the world.

    Additional information

    Supplementary materials
  • The ManyBabies Consortium (2020). Quantifying sources of variability in infancy research using the infant-directed speech preference. Advances in Methods and Practices in Psychological Science, 3(1), 24-52. doi:10.1177/2515245919900809.

    Abstract

    Psychological scientists have become increasingly concerned with issues related to methodology and replicability, and infancy researchers in particular face specific challenges related to replicability: For example, high-powered studies are difficult to conduct, testing conditions vary across labs, and different labs have access to different infant populations. Addressing these concerns, we report on a large-scale, multisite study aimed at (a) assessing the overall replicability of a single theoretically important phenomenon and (b) examining methodological, cultural, and developmental moderators. We focus on infants’ preference for infant-directed speech (IDS) over adult-directed speech (ADS). Stimuli of mothers speaking to their infants and to an adult in North American English were created using seminaturalistic laboratory-based audio recordings. Infants’ relative preference for IDS and ADS was assessed across 67 laboratories in North America, Europe, Australia, and Asia using the three common methods for measuring infants’ discrimination (head-turn preference, central fixation, and eye tracking). The overall meta-analytic effect size (Cohen’s d) was 0.35, 95% confidence interval = [0.29, 0.42], which was reliably above zero but smaller than the meta-analytic mean computed from previous literature (0.67). The IDS preference was significantly stronger in older children, in those children for whom the stimuli matched their native language and dialect, and in data from labs using the head-turn preference procedure. Together, these findings replicate the IDS preference but suggest that its magnitude is modulated by development, native-language experience, and testing procedure.

    Additional information

    Open Practices Disclosure Open Data OSF
  • Marecka, M., Fosker, T., Szewczyk, J., Kałamała, P., & Wodniecka, Z. (2020). An ear for language. Studies in Second Language Acquisition, 42, 987-1014. doi:10.1017/S0272263120000157.

    Abstract

    This study tested whether individual sensitivity to an auditory perceptual cue called amplitude rise time (ART) facilitates novel word learning. Forty adult native speakers of Polish performed a perceptual task testing their sensitivity to ART, learned associations between nonwords and pictures of common objects, and were subsequently tested on their knowledge with a picture recognition (PR) task. In the PR task participants heard each nonword, followed either by a congruent or incongruent picture, and had to assess if the picture matched the nonword. Word learning efficiency was measured by accuracy and reaction time on the PR task and modulation of the N300 ERP. As predicted, participants with greater sensitivity to ART showed better performance in PR suggesting that auditory sensitivity indeed facilitates learning of novel words. Contrary to expectations, the N300 was not modulated by sensitivity to ART suggesting that the behavioral and ERP measures reflect different underlying processes.
  • Marlow, A. J., Fisher, S. E., Richardson, A. J., Francks, C., Talcott, J. B., Monaco, A. P., Stein, J. F., & Cardon, L. R. (2002). Investigation of quantitative measures related to reading disability in a large sample of sib-pairs from the UK. Behavior Genetics, 31(2), 219-230. doi:10.1023/A:1010209629021.

    Abstract

    We describe a family-based sample of individuals with reading disability collected as part of a quantitative trait loci (QTL) mapping study. Eighty-nine nuclear families (135 independent sib-pairs) were identified through a single proband using a traditional discrepancy score of predicted/actual reading ability and a known family history. Eight correlated psychometric measures were administered to each sibling, including single word reading, spelling, similarities, matrices, spoonerisms, nonword and irregular word reading, and a pseudohomophone test. Summary statistics for each measure showed a reduced mean for the probands compared to the co-sibs, which in turn was lower than that of the population. This partial co-sib regression back to the mean indicates that the measures are influenced by familial factors and therefore, may be suitable for a mapping study. The variance of each of the measures remained largely unaffected, which is reassuring for the application of a QTL approach. Multivariate genetic analysis carried out to explore the relationship between the measures identified a common factor between the reading measures that accounted for 54% of the variance. Finally the familiality estimates (range 0.32–0.73) obtained for the reading measures including the common factor (0.68) supported their heritability. These findings demonstrate the viability of this sample for QTL mapping, and will assist in the interpretation of any subsequent linkage findings in an ongoing genome scan.
  • Martin, A. E. (2020). A compositional neural architecture for language. Journal of Cognitive Neuroscience, 32(8), 1407-1427. doi:10.1162/jocn_a_01552.

    Abstract

    Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. The model's architecture—a multidimensional coordinate system based on neurophysiological models of sensory processing—proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations and moves toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2020). Eye-tracking the time course of distal and global speech rate effects. Journal of Experimental Psychology: Human Perception and Performance, 46(10), 1148-1163. doi:10.1037/xhp0000838.

    Abstract

    To comprehend speech sounds, listeners tune in to speech rate information in the proximal (immediately adjacent), distal (non-adjacent), and global context (further removed preceding and following sentences). Effects of global contextual speech rate cues on speech perception have been shown to follow constraints not found for proximal and distal speech rate. Therefore, listeners may process such global cues at distinct time points during word recognition. We conducted a printed-word eye-tracking experiment to compare the time courses of distal and global rate effects. Results indicated that the distal rate effect emerged immediately after target sound presentation, in line with a general-auditory account. The global rate effect, however, arose more than 200 ms later than the distal rate effect, indicating that distal and global context effects involve distinct processing mechanisms. Results are interpreted in a two-stage model of acoustic context effects. This model posits that distal context effects involve very early perceptual processes, while global context effects arise at a later stage, involving cognitive adjustments conditioned by higher-level information.
  • Mauner, G., Melinger, A., Koenig, J.-P., & Bienvenue, B. (2002). When is schematic participant information encoded: Evidence from eye-monitoring. Journal of Memory and Language, 47(3), 386-406. doi:10.1016/S0749-596X(02)00009-8.

    Abstract

    Two eye-monitoring studies examined when unexpressed schematic participant information specified by verbs is used during sentence processing. Experiment 1 compared the processing of sentences with passive and intransitive verbs hypothesized to introduce or not introduce, respectively, an agent when their main clauses were preceded by either agent-dependent rationale clauses or adverbial clause controls. While there were no differences in the processing of passive clauses following rationale and control clauses, intransitive verb clauses elicited anomaly effects following agent-dependent rationale clauses. To determine whether the source of this immediately available schematic participant information is lexically specified or instead derived solely from conceptual sources associated with verbs, Experiment 2 compared the processing of clauses with passive and middle verbs following rationale clauses (e.g., To raise money for the charity, the vase was/had sold quickly…). Although both passive and middle verb forms denote situations that logically require an agent, middle verbs, which by hypothesis do not lexically specify an agent, elicited longer processing times than passive verbs in measures of early processing. These results demonstrate that participants access and interpret lexically encoded schematic participant information in the process of recognizing a verb.
  • McCollum, A. G., Baković, E., Mai, A., & Meinhardt, E. (2020). Unbounded circumambient patterns in segmental phonology. Phonology, 37, 215-255. doi:10.1017/S095267572000010X.

    Abstract

    We present an empirical challenge to Jardine's (2016) assertion that only tonal spreading patterns can be unbounded circumambient, meaning that the determination of a phonological value may depend on information that is an unbounded distance away on both sides. We focus on a demonstration that the ATR harmony pattern found in Tutrugbu is unbounded circumambient, and we also cite several other segmental spreading processes with the same general character. We discuss implications for the complexity of phonology and for the relationship between the explanation of typology and the evaluation of phonological theories.

    Additional information

    Supporting Information
  • McQueen, J. M., Norris, D., & Cutler, A. (1999). Lexical influence in phonetic decision-making: Evidence from subcategorical mismatches. Journal of Experimental Psychology: Human Perception and Performance, 25, 1363-1389. doi:10.1037/0096-1523.25.5.1363.

    Abstract

    In 5 experiments, listeners heard words and nonwords, some cross-spliced so that they contained acoustic-phonetic mismatches. Performance was worse on mismatching than on matching items. Words cross-spliced with words and words cross-spliced with nonwords produced parallel results. However, in lexical decision and 1 of 3 phonetic decision experiments, performance on nonwords cross-spliced with words was poorer than on nonwords cross-spliced with nonwords. A gating study confirmed that there were misleading coarticulatory cues in the cross-spliced items; a sixth experiment showed that the earlier results were not due to interitem differences in the strength of these cues. Three models of phonetic decision making (the Race model, the TRACE model, and a postlexical model) did not explain the data. A new bottom-up model is outlined that accounts for the findings in terms of lexical involvement at a dedicated decision-making stage.
  • McQueen, J. M., Eisner, F., Burgering, M. A., & Vroomen, J. (2020). Specialized memory systems for learning spoken words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(1), 189-199. doi:10.1037/xlm0000704.

    Abstract

    Learning new words entails, inter alia, encoding of novel sound patterns and transferring those patterns from short-term to long-term memory. We report a series of 5 experiments that investigated whether the memory systems engaged in word learning are specialized for speech and whether utilization of these systems results in a benefit for word learning. Sine-wave synthesis (SWS) was applied to spoken nonwords, and listeners were or were not informed (through instruction and familiarization) that the SWS stimuli were derived from actual utterances. This allowed us to manipulate whether listeners would process sound sequences as speech or as nonspeech. In a sound–picture association learning task, listeners who processed the SWS stimuli as speech consistently learned faster and remembered more associations than listeners who processed the same stimuli as nonspeech. The advantage of listening in “speech mode” was stable over the course of 7 days. These results provide causal evidence that access to a specialized, phonological short-term memory system is important for word learning. More generally, this study supports the notion that subsystems of auditory short-term memory are specialized for processing different types of acoustic information.

    Additional information

    Supplemental material
  • McQueen, J. M., & Cutler, A. (2001). Spoken word access processes: An introduction. Language and Cognitive Processes, 16, 469-490. doi:10.1080/01690960143000209.

    Abstract

    We introduce the papers in this special issue by summarising the current major issues in spoken word recognition. We argue that a full understanding of the process of lexical access during speech comprehension will depend on resolving several key representational issues: what is the form of the representations used for lexical access; how is phonological information coded in the mental lexicon; and how is the morphological and semantic information about each word stored? We then discuss a number of distinct access processes: competition between lexical hypotheses; the computation of goodness-of-fit between the signal and stored lexical knowledge; segmentation of continuous speech; whether the lexicon influences prelexical processing through feedback; and the relationship of form-based processing to the processes responsible for deriving an interpretation of a complete utterance. We conclude that further progress may well be made by swapping ideas among the different sub-domains of the discipline.
  • McQueen, J. M., Otake, T., & Cutler, A. (2001). Rhythmic cues and possible-word constraints in Japanese speech segmentation. Journal of Memory and Language, 45, 103-132. doi:10.1006/jmla.2000.2763.

    Abstract

    In two word-spotting experiments, Japanese listeners detected Japanese words faster in vowel contexts (e.g., agura, to sit cross-legged, in oagura) than in consonant contexts (e.g., tagura). In the same experiments, however, listeners spotted words in vowel contexts (e.g., saru, monkey, in sarua) no faster than in moraic nasal contexts (e.g., saruN). In a third word-spotting experiment, words like uni, sea urchin, followed contexts consisting of a consonant-consonant-vowel mora (e.g., gya) plus either a moraic nasal (gyaNuni), a vowel (gyaouni) or a consonant (gyabuni). Listeners spotted words as easily in the first as in the second context (where in each case the target words were aligned with mora boundaries), but found it almost impossible to spot words in the third (where there was a single consonant, such as the [b] in gyabuni, between the beginning of the word and the nearest preceding mora boundary). Three control experiments confirmed that these effects reflected the relative ease of segmentation of the words from their contexts. We argue that the listeners showed sensitivity to the viability of sound sequences as possible Japanese words in the way that they parsed the speech into words. Since single consonants are not possible Japanese words, the listeners avoided lexical parses including single consonants and thus had difficulty recognizing words in the consonant contexts. Even though moraic nasals are also impossible words, they were not difficult segmentation contexts because, as with the vowel contexts, the mora boundaries between the contexts and the target words signaled likely word boundaries. Moraic rhythm appears to provide Japanese listeners with important segmentation cues.
  • Melinger, A. (2002). Foot structure and accent in Seneca. International Journal of American Linguistics, 68(3), 287-315.

    Abstract

    Argues that the Seneca accent system can be explained more simply and naturally if the foot structure is reanalyzed as trochaic. The position of the accent is determined by the position and structure of the accented syllable and by the position and structure of the post-tonic syllable; under the trochaic reanalysis, the pair of syllables which interact to predict where accent is assigned falls in different iambic feet.
  • Meyer, L., Sun, Y., & Martin, A. E. (2020). Synchronous, but not entrained: Exogenous and endogenous cortical rhythms of speech and language processing. Language, Cognition and Neuroscience, 35(9), 1089-1099. doi:10.1080/23273798.2019.1693050.

    Abstract

    Research on speech processing is often focused on a phenomenon termed “entrainment”, whereby the cortex shadows rhythmic acoustic information with oscillatory activity. Entrainment has been observed to a range of rhythms present in speech; in addition, synchronicity with abstract information (e.g. syntactic structures) has been observed. Entrainment accounts face two challenges: First, speech is not exactly rhythmic; second, synchronicity with representations that lack a clear acoustic counterpart has been described. We propose that apparent entrainment does not always result from acoustic information. Rather, internal rhythms may have functionalities in the generation of abstract representations and predictions. While acoustics may often provide punctate opportunities for entrainment, internal rhythms may also live a life of their own to infer and predict information, leading to intrinsic synchronicity – not to be counted as entrainment. This possibility may open up new research avenues in the psycho– and neurolinguistic study of language processing and language development.
  • Meyer, L., Sun, Y., & Martin, A. E. (2020). “Entraining” to speech, generating language? Language, Cognition and Neuroscience, 35(9), 1138-1148. doi:10.1080/23273798.2020.1827155.

    Abstract

    Could meaning be read from acoustics, or from the refraction rate of pyramidal cells innervated by the cochlea, everyone would be an omniglot. Speech does not contain sufficient acoustic cues to identify linguistic units such as morphemes, words, and phrases without prior knowledge. Our target article (Meyer, L., Sun, Y., & Martin, A. E. (2019). Synchronous, but not entrained: Exogenous and endogenous cortical rhythms of speech and language processing. Language, Cognition and Neuroscience, 1–11. https://doi.org/10.1080/23273798.2019.1693050) thus questioned the concept of “entrainment” of neural oscillations to such units. We suggested that synchronicity with these points to the existence of endogenous functional “oscillators”—or population rhythmic activity in Giraud’s (2020) terms—that underlie the inference, generation, and prediction of linguistic units. Here, we address a series of inspirational commentaries by our colleagues. As apparent from these, some issues raised by our target article have already been raised in the literature. Psycho- and neurolinguists might still benefit from our reply, as “oscillations are an old concept in vision and motor functions, but a new one in linguistics” (Giraud, A.-L. 2020. Oscillations for all A commentary on Meyer, Sun & Martin (2020). Language, Cognition and Neuroscience, 1–8).
  • Meyer, A. S. (1992). Investigation of phonological encoding through speech error analyses: Achievements, limitations, and alternatives. Cognition, 42, 181-211. doi:10.1016/0010-0277(92)90043-H.

    Abstract

    Phonological encoding in language production can be defined as a set of processes generating utterance forms on the basis of semantic and syntactic information. Most evidence about these processes stems from analyses of sound errors. In section 1 of this paper, certain important results of these analyses are reviewed. Two prominent models of phonological encoding, which are mainly based on speech error evidence, are discussed in section 2. In section 3, limitations of speech error analyses are discussed, and it is argued that detailed and comprehensive models of phonological encoding cannot be derived solely on the basis of error analyses. As is argued in section 4, a new research strategy is required. Instead of using the properties of errors to draw inferences about the generation of correct word forms, future research should directly investigate the normal process of phonological encoding.
  • Meyer, A. S., & Bock, K. (1999). Representations and processes in the production of pronouns: Some perspectives from Dutch. Journal of Memory and Language, 41(2), 281-301. doi:10.1006/jmla.1999.2649.

    Abstract

    The production and interpretation of pronouns involves the identification of a mental referent and, in connected speech or text, a discourse antecedent. One of the few overt signals of the relationship between a pronoun and its antecedent is agreement in features such as number and grammatical gender. To examine how speakers create these signals, two experiments tested conceptual, lexical, and morphophonological accounts of pronoun production in Dutch. The experiments employed sentence completion and continuation tasks with materials containing noun phrases that conflicted or agreed in grammatical gender. The noun phrases served as the antecedents for demonstrative pronouns (in Experiment 1) and relative pronouns (in Experiment 2) that required gender marking. Gender errors were used to assess the nature of the processes that established the link between pronouns and antecedents. There were more gender errors when candidate antecedents conflicted in grammatical gender, counter to the predictions of a pure conceptual hypothesis. Gender marking on candidate antecedents did not change the magnitude of this interference effect, counter to the predictions of an overt-morphology hypothesis. Mirroring previous findings about pronoun comprehension, the results suggest that speakers of gender-marking languages call on specific linguistic information about antecedents in order to select pronouns and that the information consists of specifications of grammatical gender associated with the lemmas of words.
  • Meyer, A. S., & Bock, K. (1992). The tip-of-the-tongue phenomenon: Blocking or partial activation? Memory and Cognition, 20, 715-726.

    Abstract

    Tip-of-the-tongue states may represent the momentary unavailability of an otherwise accessible word or the weak activation of an otherwise inaccessible word. In three experiments designed to address these alternative views, subjects attempted to retrieve rare target words from their definitions. The definitions were followed by cues that were related to the targets in sound, by cues that were related in meaning, and by cues that were not related to the targets. Experiment 1 found that compared with unrelated cues, related cue words that were presented immediately after target definitions helped rather than hindered lexical retrieval, and that sound cues were more effective retrieval aids than meaning cues. Experiment 2 replicated these results when cues were presented after an initial target-retrieval attempt. These findings reverse a previous result (Jones, 1989) that was reproduced in Experiment 3 and shown to stem from a small group of unusually difficult target definitions.
  • Micheli, C., Schepers, I., Ozker, M., Yoshor, D., Beauchamp, M., & Rieger, J. (2020). Electrocorticography reveals continuous auditory and visual speech tracking in temporal and occipital cortex. European Journal of Neuroscience, 51(5), 1364-1376. doi:10.1111/ejn.13992.
  • Mickan, A., McQueen, J. M., & Lemhöfer, K. (2020). Between-language competition as a driving force in foreign language attrition. Cognition, 198: 104218. doi:10.1016/j.cognition.2020.104218.

    Abstract

    Research in the domain of memory suggests that forgetting is primarily driven by interference and competition from other, related memories. Here we ask whether similar dynamics are at play in foreign language (FL) attrition. We tested whether interference from translation equivalents in other, more recently used languages causes subsequent retrieval failure in L3. In Experiment 1, we investigated whether interference from the native language (L1) and/or from another foreign language (L2) affected L3 vocabulary retention. On day 1, Dutch native speakers learned 40 new Spanish (L3) words. On day 2, they performed a number of retrieval tasks in either Dutch (L1) or English (L2) on half of these words, and then memory for all items was tested again in L3 Spanish. Recall in Spanish was slower and less complete for words that received interference than for words that did not. In naming speed, this effect was larger for L2 compared to L1 interference. Experiment 2 replicated the interference effect and asked whether the language difference could be explained by frequency-of-use differences between native and non-native languages. Overall, these findings suggest that competition from more recently used languages, and especially other foreign languages, is a driving force behind FL attrition.

    Additional information

    Supplementary data
  • Mickan, A., & Lemhöfer, K. (2020). Tracking syntactic conflict between languages over the course of L2 acquisition: A cross-sectional event-related potential study. Journal of Cognitive Neuroscience, 32(5), 822-846. doi:10.1162/jocn_a_01528.

    Abstract

    One challenge of learning a foreign language (L2) in adulthood is the mastery of syntactic structures that are implemented differently in L2 and one's native language (L1). Here, we asked how L2 speakers learn to process syntactic constructions that are in direct conflict between L1 and L2, in comparison to structures without such a conflict. To do so, we measured EEG during sentence reading in three groups of German learners of Dutch with different degrees of L2 experience (from 3 to more than 18 months of L2 immersion) as well as a control group of Dutch native speakers. They read grammatical and ungrammatical Dutch sentences that, in the conflict condition, contained a structure with opposing word orders in Dutch and German (sentence-final double infinitives) and, in the no-conflict condition, a structure for which word order is identical in Dutch and German (subordinate clause inversion). Results showed, first, that beginning learners showed N400-like signatures instead of the expected P600 for both types of violations, suggesting that, in the very early stages of learning, different neurocognitive processes are employed compared with native speakers, regardless of L1–L2 similarity. In contrast, both advanced and intermediate learners already showed native-like P600 signatures for the no-conflict sentences. However, their P600 signatures were significantly delayed in processing the conflicting structure, even though behavioral performance was on a native level for both these groups and structures. These findings suggest that L1–L2 word order conflicts clearly remain an obstacle to native-like processing, even for advanced L2 learners.
  • Micklos, A., & Walker, B. (2020). Are people sensitive to problems in communication? Cognitive Science, 44(2): e12816. doi:10.1111/cogs.12816.

    Abstract

    Recent research indicates that interpersonal communication is noisy, and that people exhibit considerable insensitivity to problems in communication. Using a dyadic referential communication task, the goal of which is accurate information transfer, this study examined the extent to which interlocutors are sensitive to problems in communication and use other‐initiated repairs (OIRs) to address them. Participants were randomly assigned to dyads (N = 88 participants, or 44 dyads) and tried to communicate a series of recurring abstract geometric shapes to a partner across a text–chat interface. Participants alternated between directing (describing shapes) and matching (interpreting shape descriptions) roles across 72 trials of the task. Replicating prior research, over repeated social interactions communication success improved and the shape descriptions became increasingly efficient. In addition, confidence in having successfully communicated the different shapes increased over trials. Importantly, matchers were less confident on trials in which communication was unsuccessful, communication success was lower on trials that contained an OIR compared to those that did not contain an OIR, and OIR trials were associated with lower Director Confidence. This pattern of results demonstrates that (a) interlocutors exhibit (a degree of) sensitivity to problems in communication, (b) they appropriately use OIRs to address problems in communication, and (c) OIRs signal problems in communication.

    Additional information

    Open Data OSF
  • Milham, M., Petkov, C. I., Margulies, D. S., Schroeder, C. E., Basso, M. A., Belin, P., Fair, D. A., Fox, A., Kastner, S., Mars, R. B., Messinger, A., Poirier, C., Vanduffel, W., Van Essen, D. C., Alvand, A., Becker, Y., Ben Hamed, S., Benn, A., Bodin, C., Boretius, S., Cagna, B., Coulon, O., El-Gohary, S. H., Evrard, H., Forkel, S. J., Friedrich, P., Froudist-Walsh, S., Garza-Villarreal, E. A., Gao, Y., Gozzi, A., Grigis, A., Hartig, R., Hayashi, T., Heuer, K., Howells, H., Ardesch, D. J., Jarraya, B., Jarrett, W., Jedema, H. P., Kagan, I., Kelly, C., Kennedy, H., Klink, P. C., Kwok, S. C., Leech, R., Liu, X., Madan, C., Madushanka, W., Majka, P., Mallon, A.-M., Marche, K., Meguerditchian, A., Menon, R. S., Merchant, H., Mitchell, A., Nenning, K.-H., Nikolaidis, A., Ortiz-Rios, M., Pagani, M., Pareek, V., Prescott, M., Procyk, E., Rajimehr, R., Rautu, I.-S., Raz, A., Roe, A. W., Rossi-Pool, R., Roumazeilles, L., Sakai, T., Sallet, J., García-Saldivar, P., Sato, C., Sawiak, S., Schiffer, M., Schwiedrzik, C. M., Seidlitz, J., Sein, J., Shen, Z.-m., Shmuel, A., Silva, A. C., Simone, L., Sirmpilatze, N., Sliwa, J., Smallwood, J., Tasserie, J., Thiebaut de Schotten, M., Toro, R., Trapeau, R., Uhrig, L., Vezoli, J., Wang, Z., Wells, S., Williams, B., Xu, T., Xu, A. G., Yacoub, E., Zhan, M., Ai, L., Amiez, C., Balezeau, F., Baxter, M. G., Blezer, E. L., Brochier, T., Chen, A., Croxson, P. L., Damatac, C. G., Dehaene, S., Everling, S., Fleysher, L., Freiwald, W., Griffiths, T. D., Guedj, C., Hadj-Bouziane, F., Harel, N., Hiba, B., Jung, B., Koo, B., Laland, K. N., Leopold, D. A., Lindenfors, P., Meunier, M., Mok, K., Morrison, J. H., Nacef, J., Nagy, J., Pinsk, M., Reader, S. M., Roelfsema, P. 
R., Rudko, D. A., Rushworth, M. F., Russ, B. E., Schmid, M. C., Sullivan, E. L., Thiele, A., Todorov, O. S., Tsao, D., Ungerleider, L., Wilson, C. R., Ye, F. Q., Zarco, W., & Zhou, Y.-d. (2020). Accelerating the Evolution of Nonhuman Primate Neuroimaging. Neuron, 105(4), 600-603. doi:10.1016/j.neuron.2019.12.023.

    Abstract

    Nonhuman primate neuroimaging is on the cusp of a transformation, much in the same way its human counterpart was in 2010, when the Human Connectome Project was launched to accelerate progress. Inspired by an open data-sharing initiative, the global community recently met and, in this article, breaks through obstacles to define its ambitions.

    Additional information

    Supplementary information
  • Montero-Melis, G., & Jaeger, T. F. (2020). Changing expectations mediate adaptation in L2 production. Bilingualism: Language and Cognition, 23(3), 602-617. doi:10.1017/S1366728919000506.

    Abstract

    Native language (L1) processing draws on implicit expectations. An open question is whether non-native learners of a second language (L2) similarly draw on expectations, and whether these expectations are based on learners’ L1 or L2 knowledge. We approach this question by studying inverse preference effects on lexical encoding. L1 and L2 speakers of Spanish described motion events, while they were either primed to express path, manner, or neither. In line with other work, we find that L1 speakers adapted more strongly after primes that are unexpected in their L1. For L2 speakers, adaptation depended on their L2 proficiency: The least proficient speakers exhibited the inverse preference effect on adaptation based on what was unexpected in their L1; but the more proficient speakers were, the more they exhibited inverse preference effects based on what was unexpected in the L2. We discuss implications for L1 transfer and L2 acquisition.
  • Montero-Melis, G., Isaksson, P., Van Paridon, J., & Ostarek, M. (2020). Does using a foreign language reduce mental imagery? Cognition, 196: 104134. doi:10.1016/j.cognition.2019.104134.

    Abstract

    In a recent article, Hayakawa and Keysar (2018) propose that mental imagery is less vivid when evoked in a foreign than in a native language. The authors argue that reduced mental imagery could even account for moral foreign language effects, whereby moral choices become more utilitarian when made in a foreign language. Here we demonstrate that Hayakawa and Keysar's (2018) key results are better explained by reduced language comprehension in a foreign language than by less vivid imagery. We argue that the paradigm used in Hayakawa and Keysar (2018) does not provide a satisfactory test of reduced imagery and we discuss an alternative paradigm based on recent experimental developments.

    Additional information

    Supplementary data and scripts
  • Mudd, K., Lutzenberger, H., De Vos, C., Fikkert, P., Crasborn, O., & De Boer, B. (2020). The effect of sociolinguistic factors on variation in the Kata Kolok lexicon. Asia-Pacific Language Variation, 6(1), 53-88. doi:10.1075/aplv.19009.mud.

    Abstract

    Sign languages can be categorized as shared sign languages or deaf community sign languages, depending on the context in which they emerge. It has been suggested that shared sign languages exhibit more variation in the expression of everyday concepts than deaf community sign languages (Meir, Israel, Sandler, Padden, & Aronoff, 2012). For deaf community sign languages, it has been shown that various sociolinguistic factors condition this variation. This study presents one of the first in-depth investigations of how sociolinguistic factors (deaf status, age, clan, gender and having a deaf family member) affect lexical variation in a shared sign language, using a picture description task in Kata Kolok. To study lexical variation in Kata Kolok, two methodologies are devised: the identification of signs by underlying iconic motivation and mapping, and a way to compare individual repertoires of signs by calculating the lexical distances between participants. Alongside presenting novel methodologies to study this type of sign language, we present preliminary evidence of sociolinguistic factors that may influence variation in the Kata Kolok lexicon.
  • Muhinyi, A., Hesketh, A., Stewart, A. J., & Rowland, C. F. (2020). Story choice matters for caregiver extra-textual talk during shared reading with preschoolers. Journal of Child Language, 47(3), 633-654. doi:10.1017/S0305000919000783.

    Abstract

    This study aimed to examine the influence of the complexity of the story-book on caregiver extra-textual talk (i.e., interactions beyond text reading) during shared reading with preschool-age children. Fifty-three mother–child dyads (3;00–4;11) were video-recorded sharing two ostensibly similar picture-books: a simple story (containing no false belief) and a complex story (containing a false belief central to the plot, which provided content that was more challenging for preschoolers to understand). Book-reading interactions were transcribed and coded. Results showed that the complex stories facilitated more extra-textual talk from mothers, and a higher quality of extra-textual talk (as indexed by linguistic richness and level of abstraction). Although the type of story did not affect the number of questions mothers posed, more elaborative follow-ups on children's responses were provided by mothers when sharing complex stories. Complex stories may facilitate more and linguistically richer caregiver extra-textual talk, having implications for preschoolers’ developing language abilities.
  • Nakamoto, T., Hatsuta, S., Yagi, S., Verdonschot, R. G., Taguchi, A., & Kakimoto, N. (2020). Computer-aided diagnosis system for osteoporosis based on quantitative evaluation of mandibular lower border porosity using panoramic radiographs. Dentomaxillofacial Radiology, 49(4): 20190481. doi:10.1259/dmfr.20190481.

    Abstract

    Objectives: A new computer-aided screening system for osteoporosis using panoramic radiographs was developed. The conventional system could detect porotic changes within the lower border of the mandible, but its severity could not be evaluated. Our aim was to enable the system to measure severity by implementing a linear bone resorption severity index (BRSI) based on the cortical bone shape.
    Methods: The participants were 68 females (>50 years) who underwent panoramic radiography and lumbar spine bone density measurements. The new system was designed to extract the lower border of the mandible as regions of interest and convert them into morphological skeleton line images. The total perimeter length of the skeleton lines was defined as the BRSI. Forty images were visually evaluated for the presence of cortical bone porosity. The correlation between visual evaluation and the BRSI of the participants, and the optimal threshold value of the BRSI for the new system, were investigated through a receiver operating characteristic analysis. The diagnostic performance of the new system was evaluated by comparing the results from the new system with lumbar bone density tests in 28 participants.
    Results: BRSI and lumbar bone density showed a strong negative correlation (p < 0.01). BRSI showed a strong correlation with visual evaluation. The new system showed high diagnostic efficacy with sensitivity of 90.9%, specificity of 64.7%, and accuracy of 75.0%.
    Conclusions: The new screening system is able to quantitatively evaluate mandibular cortical porosity. This allows for preventive screening for osteoporosis, thereby enhancing clinical prospects.
  • Newbury, D. F., Cleak, J. D., Ishikawa-Brush, Y., Marlow, A. J., Fisher, S. E., Monaco, A. P., Stott, C. M., Merricks, M. J., Goodyer, I. M., Bolton, P. F., Jannoun, L., Slonims, V., Baird, G., Pickles, A., Bishop, D. V. M., Helms., P. J., & The SLI Consortium (2002). A genomewide scan identifies two novel loci involved in specific language impairment. American Journal of Human Genetics, 70(2), 384-398. doi:10.1086/338649.

    Abstract

    Approximately 4% of English-speaking children are affected by specific language impairment (SLI), a disorder in the development of language skills despite adequate opportunity and normal intelligence. Several studies have indicated the importance of genetic factors in SLI; a positive family history confers an increased risk of development, and concordance in monozygotic twins consistently exceeds that in dizygotic twins. However, like many behavioral traits, SLI is assumed to be genetically complex, with several loci contributing to the overall risk. We have compiled 98 families drawn from epidemiological and clinical populations, all with probands whose standard language scores fall ⩾1.5 SD below the mean for their age. Systematic genomewide quantitative-trait–locus analysis of three language-related measures (i.e., the Clinical Evaluation of Language Fundamentals–Revised [CELF-R] receptive and expressive scales and the nonword repetition [NWR] test) yielded two regions, one on chromosome 16 and one on 19, that both had maximum LOD scores of 3.55. Simulations suggest that, of these two multipoint results, the NWR linkage to chromosome 16q is the most significant, with empirical P values reaching 10⁻⁵, under both Haseman-Elston (HE) analysis (LOD score 3.55; P=.00003) and variance-components (VC) analysis (LOD score 2.57; P=.00008). Single-point analyses provided further support for involvement of this locus, with three markers, under the peak of linkage, yielding LOD scores >1.9. The 19q locus was linked to the CELF-R expressive-language score and exceeds the threshold for suggestive linkage under all types of analysis performed—multipoint HE analysis (LOD score 3.55; empirical P=.00004) and VC (LOD score 2.84; empirical P=.00027) and single-point HE analysis (LOD score 2.49) and VC (LOD score 2.22). Furthermore, both the clinical and epidemiological samples showed independent evidence of linkage on both chromosome 16q and chromosome 19q, indicating that these may represent universally important loci in SLI and, thus, general risk factors for language impairment.
  • Newbury, D. F., Bonora, E., Lamb, J. A., Fisher, S. E., Lai, C. S. L., Baird, G., Jannoun, L., Slonims, V., Stott, C. M., Merricks, M. J., Bolton, P. F., Bailey, A. J., Monaco, A. P., & International Molecular Genetic Study of Autism Consortium (2002). FOXP2 is not a major susceptibility gene for autism or specific language impairment. American Journal of Human Genetics, 70(5), 1318-1327. doi:10.1086/339931.

    Abstract

    The FOXP2 gene, located on human 7q31 (at the SPCH1 locus), encodes a transcription factor containing a polyglutamine tract and a forkhead domain. FOXP2 is mutated in a severe monogenic form of speech and language impairment, segregating within a single large pedigree, and is also disrupted by a translocation in an isolated case. Several studies of autistic disorder have demonstrated linkage to a similar region of 7q (the AUTS1 locus), leading to the proposal that a single genetic factor on 7q31 contributes to both autism and language disorders. In the present study, we directly evaluate the impact of the FOXP2 gene with regard to both complex language impairments and autism, through use of association and mutation screening analyses. We conclude that coding-region variants in FOXP2 do not underlie the AUTS1 linkage and that the gene is unlikely to play a role in autism or more common forms of language impairment.
  • Nieuwland, M. S., Arkhipova, Y., & Rodríguez-Gómez, P. (2020). Anticipating words during spoken discourse comprehension: A large-scale, pre-registered replication study using brain potentials. Cortex, 133, 1-36. doi:10.1016/j.cortex.2020.09.007.

    Abstract

    Numerous studies report brain potential evidence for the anticipation of specific words during language comprehension. In the most convincing demonstrations, highly predictable nouns exert an influence on processing even before they appear to a reader or listener, as indicated by the brain's neural response to a prenominal adjective or article when it mismatches the expectations about the upcoming noun. However, recent studies suggest that some well-known demonstrations of prediction may be hard to replicate. This could signal the use of data-contingent analysis, but might also mean that readers and listeners do not always use prediction-relevant information in the way that psycholinguistic theories typically suggest. To shed light on this issue, we performed a close replication of one of the best-cited ERP studies on word anticipation (Van Berkum, Brown, Zwitserlood, Kooijman & Hagoort, 2005; Experiment 1), in which participants listened to Dutch spoken mini-stories. In the original study, the marking of grammatical gender on pre-nominal adjectives (‘groot/grote’) elicited an early positivity when mismatching the gender of an unseen, highly predictable noun, compared to matching gender. The current pre-registered study involved that same manipulation, but used a novel set of materials twice the size of the original set, an increased sample size (N = 187), and Bayesian mixed-effects model analyses that better accounted for known sources of variance than the original. In our study, mismatching gender elicited more negative voltage than matching gender at posterior electrodes. However, this N400-like effect was small in size and lacked support from Bayes Factors. In contrast, we successfully replicated the original's noun effects. While our results yielded some support for prediction, they do not support the Van Berkum et al. effect and highlight the risks associated with commonly employed data-contingent analyses and small sample sizes. Our results also raise the question whether Dutch listeners reliably or consistently use adjectival inflection information to inform their noun predictions.
  • Nieuwland, M. S., Barr, D. J., Bartolozzi, F., Busch-Moreno, S., Darley, E., Donaldson, D. I., Ferguson, H. J., Fu, X., Heyselaar, E., Huettig, F., Husband, E. M., Ito, A., Kazanina, N., Kogan, V., Kohút, Z., Kulakova, E., Mézière, D., Politzer-Ahles, S., Rousselet, G., Rueschemeyer, S.-A., Segaert, K., Tuomainen, J., & Von Grebmer Zu Wolfsthurn, S. (2020). Dissociable effects of prediction and integration during language comprehension: Evidence from a large-scale study using brain potentials. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20180522. doi:10.1098/rstb.2018.0522.

    Abstract

    Composing sentence meaning is easier for predictable words than for unpredictable words. Are predictable words genuinely predicted, or simply more plausible and therefore easier to integrate with sentence context? We addressed this persistent and fundamental question using data from a recent, large-scale (N = 334) replication study, by investigating the effects of word predictability and sentence plausibility on the N400, the brain’s electrophysiological index of semantic processing. A spatiotemporally fine-grained mixed-effects multiple regression analysis revealed overlapping effects of predictability and plausibility on the N400, albeit with distinct spatiotemporal profiles. Our results challenge the view that the predictability-dependent N400 reflects the effects of either prediction or integration, and suggest that semantic facilitation of predictable words arises from a cascade of processes that activate and integrate word meaning with context into a sentence-level meaning.
  • Nieuwland, M. S., & Kazanina, N. (2020). The neural basis of linguistic prediction: Introduction to the special issue. Neuropsychologia, 146: 107532. doi:10.1016/j.neuropsychologia.2020.107532.
  • Noble, C., Cameron-Faulkner, T., Jessop, A., Coates, A., Sawyer, H., Taylor-Ims, R., & Rowland, C. F. (2020). The impact of interactive shared book reading on children's language skills: A randomized controlled trial. Journal of Speech, Language, and Hearing Research, 63(6), 1878-1897. doi:10.1044/2020_JSLHR-19-00288.

    Abstract

    Purpose: Research has indicated that interactive shared book reading can support a wide range of early language skills and that children who are read to regularly in the early years learn language faster, enter school with a larger vocabulary, and become more successful readers at school. Despite the large volume of research suggesting interactive shared reading is beneficial for language development, two fundamental issues remain outstanding: whether shared book reading interventions are equally effective (a) for children from all socioeconomic backgrounds and (b) for a range of language skills.
    Method: To address these issues, we conducted a randomized controlled trial to investigate the effects of two 6-week interactive shared reading interventions on a range of language skills in children across the socioeconomic spectrum. One hundred and fifty children aged between 2;6 and 3;0 (years;months) were randomly assigned to one of three conditions: a pause reading, a dialogic reading, or an active shared reading control condition.
    Results: The findings indicated that the interventions were effective at changing caregiver reading behaviors. However, the interventions did not boost children’s language skills over and above the effect of an active reading control condition. There were also no effects of socioeconomic status.
    Conclusion: This randomized controlled trial showed that caregivers from all socioeconomic backgrounds successfully adopted an interactive shared reading style. However, while the interventions were effective at increasing caregivers’ use of interactive shared book reading behaviors, this did not have a significant impact on the children’s language skills. The findings are discussed in terms of practical implications and future research.

    Additional information

    Supplemental Material
  • Norris, D., McQueen, J. M., & Cutler, A. (2002). Bias effects in facilitatory phonological priming. Memory & Cognition, 30(3), 399-411.

    Abstract

    In four experiments, we examined the facilitation that occurs when spoken-word targets rhyme with preceding spoken primes. In Experiment 1, listeners’ lexical decisions were faster to words following rhyming words (e.g., ramp–LAMP) than to words following unrelated primes (e.g., pink–LAMP). No facilitation was observed for nonword targets. Targets that almost rhymed with their primes (foils; e.g., bulk–SULSH) were included in Experiment 2; facilitation for rhyming targets was severely attenuated. Experiments 3 and 4 were single-word shadowing variants of the earlier experiments. There was facilitation for both rhyming words and nonwords; the presence of foils had no significant influence on the priming effect. A major component of the facilitation in lexical decision appears to be strategic: Listeners are biased to say “yes” to targets that rhyme with their primes, unless foils discourage this strategy. The nonstrategic component of phonological facilitation may reflect speech perception processes that operate prior to lexical access.
  • Norris, D., McQueen, J. M., Cutler, A., Butterfield, S., & Kearns, R. (2001). Language-universal constraints on speech segmentation. Language and Cognitive Processes, 16, 637-660. doi:10.1080/01690960143000119.

    Abstract

    Two word-spotting experiments are reported that examine whether the Possible-Word Constraint (PWC) is a language-specific or language-universal strategy for the segmentation of continuous speech. The PWC disfavours parses which leave an impossible residue between the end of a candidate word and any likely location of a word boundary, as cued in the speech signal. The experiments examined cases where the residue was either a CVC syllable with a schwa, or a CV syllable with a lax vowel. Although neither of these syllable contexts is a possible lexical word in English, word-spotting in both contexts was easier than in a context consisting of a single consonant. Two control lexical-decision experiments showed that the word-spotting results reflected the relative segmentation difficulty of the words in different contexts. The PWC appears to be language-universal rather than language-specific.
  • Norris, D., & Cutler, A. (1985). Juncture detection. Linguistics, 23, 689-705.
  • Nyberg, L., Forkstam, C., Petersson, K. M., Cabeza, R., & Ingvar, M. (2002). Brain imaging of human memory systems: Between-systems similarities and within-system differences. Cognitive Brain Research, 13(2), 281-292. doi:10.1016/S0926-6410(02)00052-6.

    Abstract

    There is much evidence for the existence of multiple memory systems. However, it has been argued that tasks assumed to reflect different memory systems share basic processing components and are mediated by overlapping neural systems. Here we used multivariate analysis of PET-data to analyze similarities and differences in brain activity for multiple tests of working memory, semantic memory, and episodic memory. The results from two experiments revealed between-systems differences, but also between-systems similarities and within-system differences. Specifically, support was obtained for a task-general working-memory network that may underlie active maintenance. Premotor and parietal regions were salient components of this network. A common network was also identified for two episodic tasks, cued recall and recognition, but not for a test of autobiographical memory. This network involved regions in right inferior and polar frontal cortex, and lateral and medial parietal cortex. Several of these regions were also engaged during the working-memory tasks, indicating shared processing for episodic and working memory. Fact retrieval and synonym generation were associated with increased activity in left inferior frontal and middle temporal regions and right cerebellum. This network was also associated with the autobiographical task, but not with living/non-living classification, and may reflect elaborate retrieval of semantic information. Implications of the present results for the classification of memory tasks with respect to systems and/or processes are discussed.
  • Nyberg, L., Petersson, K. M., Nilsson, L.-G., Sandblom, J., Åberg, C., & Ingvar, M. (2001). Reactivation of motor brain areas during explicit memory for actions. Neuroimage, 14, 521-528. doi:10.1006/nimg.2001.0801.

    Abstract

    Recent functional brain imaging studies have shown that sensory-specific brain regions that are activated during perception/encoding of sensory-specific information are reactivated during memory retrieval of the same information. Here we used PET to examine whether verbal retrieval of action phrases is associated with reactivation of motor brain regions if the actions were overtly or covertly performed during encoding. Compared to a verbal condition, encoding by means of overt as well as covert activity was associated with differential activity in regions in contralateral somatosensory and motor cortex. Several of these regions were reactivated during retrieval. Common to both the overt and covert conditions was reactivation of regions in left ventral motor cortex and left inferior parietal cortex. A direct comparison of the overt and covert activity conditions showed that activation and reactivation of left dorsal parietal cortex and right cerebellum was specific to the overt condition. These results support the reactivation hypothesis by showing that verbal-explicit memory of actions involves areas that are engaged during overt and covert motor activity.
  • Ohlerth, A.-K., Valentin, A., Vergani, F., Ashkan, K., & Bastiaanse, R. (2020). The verb and noun test for peri-operative testing (VAN-POP): Standardized language tests for navigated transcranial magnetic stimulation and direct electrical stimulation. Acta Neurochirurgica, (2), 397-406. doi:10.1007/s00701-019-04159-x.

    Abstract

    Background

    Protocols for intraoperative language mapping with direct electrical stimulation (DES) often include various language tasks triggering both nouns and verbs in sentences. Such protocols are not readily available for navigated transcranial magnetic stimulation (nTMS), where only single word object naming is generally used. Here, we present the development, norming, and standardization of the verb and noun test for peri-operative testing (VAN-POP) that measures language skills more extensively.
    Methods

    The VAN-POP tests noun and verb retrieval in sentence context. Items are marked and balanced for several linguistic factors known to influence word retrieval. The VAN-POP was administered in English, German, and Dutch under conditions that are used for nTMS and DES paradigms. For each language, 30 speakers were tested.
    Results

    At least 50 items per task per language were named fluently and reached a high naming agreement.
    Conclusion

    The protocol proved to be suitable for pre- and intraoperative language mapping with nTMS and DES.
  • Ortega, G., Ozyurek, A., & Peeters, D. (2020). Iconic gestures serve as manual cognates in hearing second language learners of a sign language: An ERP study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 403-415. doi:10.1037/xlm0000729.

    Abstract

When learning a second spoken language, cognates, words overlapping in form and meaning with one’s native language, help break into the language one wishes to acquire. But what happens when the to-be-acquired second language is a sign language? We tested whether hearing nonsigners rely on their gestural repertoire at first exposure to a sign language. Participants saw iconic signs with high and low overlap with the form of iconic gestures while electrophysiological brain activity was recorded. Upon first exposure, signs with low overlap with gestures elicited enhanced positive amplitude in the P3a component compared to signs with high overlap. This effect disappeared after a training session. We conclude that nonsigners generate expectations about the form of iconic signs never seen before based on their implicit knowledge of gestures, even without having to produce them. Learners thus draw from any available semiotic resources when acquiring a second language, and not only from their linguistic experience.
  • Ortega, G., & Ozyurek, A. (2020). Systematic mappings between semantic categories and types of iconic representations in the manual modality: A normed database of silent gesture. Behavior Research Methods, 52, 51-67. doi:10.3758/s13428-019-01204-6.

    Abstract

    An unprecedented number of empirical studies have shown that iconic gestures—those that mimic the sensorimotor attributes of a referent—contribute significantly to language acquisition, perception, and processing. However, there has been a lack of normed studies describing generalizable principles in gesture production and in comprehension of the mappings of different types of iconic strategies (i.e., modes of representation; Müller, 2013). In Study 1 we elicited silent gestures in order to explore the implementation of different types of iconic representation (i.e., acting, representing, drawing, and personification) to express concepts across five semantic domains. In Study 2 we investigated the degree of meaning transparency (i.e., iconicity ratings) of the gestures elicited in Study 1. We found systematicity in the gestural forms of 109 concepts across all participants, with different types of iconicity aligning with specific semantic domains: Acting was favored for actions and manipulable objects, drawing for nonmanipulable objects, and personification for animate entities. Interpretation of gesture–meaning transparency was modulated by the interaction between mode of representation and semantic domain, with some couplings being more transparent than others: Acting yielded higher ratings for actions, representing for object-related concepts, personification for animate entities, and drawing for nonmanipulable entities. This study provides mapping principles that may extend to all forms of manual communication (gesture and sign). This database includes a list of the most systematic silent gestures in the group of participants, a notation of the form of each gesture based on four features (hand configuration, orientation, placement, and movement), each gesture’s mode of representation, iconicity ratings, and professionally filmed videos that can be used for experimental and clinical endeavors.
  • Ortega, G., & Ozyurek, A. (2020). Types of iconicity and combinatorial strategies distinguish semantic categories in silent gesture. Language and Cognition, 12(1), 84-113. doi:10.1017/langcog.2019.28.

    Abstract

    In this study we explore whether different types of iconic gestures
    (i.e., acting, drawing, representing) and their combinations are used
    systematically to distinguish between different semantic categories in
    production and comprehension. In Study 1, we elicited silent gestures
    from Mexican and Dutch participants to represent concepts from three
    semantic categories: actions, manipulable objects, and non-manipulable
    objects. Both groups favoured the acting strategy to represent actions and
    manipulable objects; while non-manipulable objects were represented
    through the drawing strategy. Actions elicited primarily single gestures
    whereas objects elicited combinations of different types of iconic gestures
    as well as pointing. In Study 2, a different group of participants were
    shown gestures from Study 1 and were asked to guess their meaning.
    Single-gesture depictions for actions were more accurately guessed than
    for objects. Objects represented through two-gesture combinations (e.g.,
    acting + drawing) were more accurately guessed than objects represented
    with a single gesture. We suggest iconicity is exploited to make direct
    links with a referent, but when it lends itself to ambiguity, individuals
    resort to combinatorial structures to clarify the intended referent.
    Iconicity and the need to communicate a clear signal shape the structure
    of silent gestures and this in turn supports comprehension.
  • Osterhout, L., & Hagoort, P. (1999). A superficial resemblance does not necessarily mean you are part of the family: Counterarguments to Coulson, King and Kutas (1998) in the P600/SPS-P300 debate. Language and Cognitive Processes, 14, 1-14. doi:10.1080/016909699386356.

    Abstract

    Two recent studies (Coulson et al., 1998; Osterhout et al., 1996) examined the relationship between the event-related brain potential (ERP) responses to linguistic syntactic anomalies (P600/SPS) and domain-general unexpected events (P300). Coulson et al. concluded that these responses are highly similar, whereas Osterhout et al. concluded that they are distinct. In this comment, we evaluate the relative merits of these claims. We conclude that the available evidence indicates that the ERP response to syntactic anomalies is at least partially distinct from the ERP response to unexpected anomalies that do not involve a grammatical violation.
  • Otake, T., & Cutler, A. (1999). Perception of suprasegmental structure in a nonnative dialect. Journal of Phonetics, 27, 229-253. doi:10.1006/jpho.1999.0095.

    Abstract

    Two experiments examined the processing of Tokyo Japanese pitch-accent distinctions by native speakers of Japanese from two accentless-variety areas. In both experiments, listeners were presented with Tokyo Japanese speech materials used in an earlier study with Tokyo Japanese listeners, who clearly exploited the pitch-accent information in spoken-word recognition. In the first experiment, listeners judged from which of two words, differing in accentual structure, isolated syllables had been extracted. Both new groups were, overall, as successful at this task as Tokyo Japanese speakers had been, but their response patterns differed from those of the Tokyo Japanese, for instance in that a bias towards H judgments in the Tokyo Japanese responses was weakened in the present groups' responses. In a second experiment, listeners heard word fragments and guessed what the words were; in this task, the speakers from accentless areas again performed significantly above chance, but their responses showed less sensitivity to the information in the input, and greater bias towards vocabulary distribution frequencies, than had been observed with the Tokyo Japanese listeners. The results suggest that experience with a local accentless dialect affects the processing of accent for word recognition in Tokyo Japanese, even for listeners with extensive exposure to Tokyo Japanese.
  • Ozyurek, A. (2002). Do speakers design their co-speech gestures for their addressees? The effects of addressee location on representational gestures. Journal of Memory and Language, 46(4), 688-704. doi:10.1006/jmla.2001.2826.

    Abstract

    Do speakers use spontaneous gestures accompanying their speech for themselves or to communicate their message to their addressees? Two experiments show that speakers change the orientation of their gestures depending on the location of shared space, that is, the intersection of the gesture spaces of the speakers and addressees. Gesture orientations change more frequently when they accompany spatial prepositions such as into and out, which describe motion that has a beginning and end point, rather than across, which depicts an unbounded path across space. Speakers change their gestures so that they represent the beginning and end point of motion INTO or OUT by moving into or out of the shared space. Thus, speakers design their gestures for their addressees and therefore use them to communicate. This has implications for the view that gestures are a part of language use as well as for the role of gestures in speech production.
  • Peeters, D. (2020). Bilingual switching between languages and listeners: Insights from immersive virtual reality. Cognition, 195: 104107. doi:10.1016/j.cognition.2019.104107.

    Abstract

    Perhaps the main advantage of being bilingual is the capacity to communicate with interlocutors that have different language backgrounds. In the life of a bilingual, switching interlocutors hence sometimes involves switching languages. We know that the capacity to switch from one language to another is supported by control mechanisms, such as task-set reconfiguration. This study investigates whether similar neurophysiological mechanisms support bilingual switching between different listeners, within and across languages. A group of 48 unbalanced Dutch-English bilinguals named pictures for two monolingual Dutch and two monolingual English life-size virtual listeners in an immersive virtual reality environment. In terms of reaction times, switching languages came at a cost over and above the significant cost of switching from one listener to another. Analysis of event-related potentials showed similar electrophysiological correlates for switching listeners and switching languages. However, it was found that having to switch listeners and languages at the same time delays the onset of lexical processes more than a switch between listeners within the same language. Findings are interpreted in light of the interplay between proactive (sustained inhibition) and reactive (task-set reconfiguration) control in bilingual speech production. It is argued that a possible bilingual advantage in executive control may not be due to the process of switching per se. This study paves the way for the study of bilingual language switching in ecologically valid, naturalistic, experimental settings.

  • Perdue, C., & Klein, W. (1992). Why does the production of some learners not grammaticalize? Studies in Second Language Acquisition, 14, 259-272. doi:10.1017/S0272263100011116.

    Abstract

    In this paper we follow two beginning learners of English, Andrea and Santo, over a period of 2 years as they develop means to structure the declarative utterances they produce in various production tasks, and then we look at the following problem: In the early stages of acquisition, both learners develop a common learner variety; during these stages, we see a picture of two learner varieties developing similar regularities determined by the minimal requirements of the tasks we examine. Andrea subsequently develops further morphosyntactic means to achieve greater cohesion in his discourse. But Santo does not. Although we can identify contexts where the grammaticalization of Andrea's production allows him to go beyond the initial constraints of his variety, it is much more difficult to ascertain why Santo, faced with the same constraints in the same contexts, does not follow this path. Some lines of investigation into this problem are then suggested.
  • Persson, J., Szalisznyó, K., Antoni, G., Wall, A., Fällmar, D., Zora, H., & Bodén, R. (2020). Phosphodiesterase 10A levels are related to striatal function in schizophrenia: a combined positron emission tomography and functional magnetic resonance imaging study. European Archives of Psychiatry and Clinical Neuroscience, 270(4), 451-459. doi:10.1007/s00406-019-01021-0.

    Abstract

    Pharmacological inhibition of phosphodiesterase 10A (PDE10A) is being investigated as a treatment option in schizophrenia. PDE10A acts postsynaptically on striatal dopamine signaling by regulating neuronal excitability through its inhibition of cyclic adenosine monophosphate (cAMP), and we recently found it to be reduced in schizophrenia compared to controls. Here, this finding of reduced PDE10A in schizophrenia was followed up in the same sample to investigate the effect of reduced striatal PDE10A on the neural and behavioral function of striatal and downstream basal ganglia regions. A positron emission tomography (PET) scan with the PDE10A ligand [11C]Lu AE92686 was performed, followed by a 6 min resting-state magnetic resonance imaging (MRI) scan in ten patients with schizophrenia. To assess the relationship between striatal function and neurophysiological and behavioral functioning, salience processing was assessed using a mismatch negativity paradigm, an auditory event-related electroencephalographic measure, episodic memory was assessed using the Rey auditory verbal learning test (RAVLT) and executive functioning using trail-making test B. Reduced striatal PDE10A was associated with increased amplitude of low-frequency fluctuations (ALFF) within the putamen and substantia nigra, respectively. Higher ALFF in the substantia nigra, in turn, was associated with lower episodic memory performance. The findings are in line with a role for PDE10A in striatal functioning, and suggest that reduced striatal PDE10A may contribute to cognitive symptoms in schizophrenia.
  • Petersson, K. M., Elfgren, C., & Ingvar, M. (1999). Dynamic changes in the functional anatomy of the human brain during recall of abstract designs related to practice. Neuropsychologia, 37, 567-587.

    Abstract

    In the present PET study we explore some functional aspects of the interaction between attentional/control processes and learning/memory processes. The network of brain regions supporting recall of abstract designs was studied in a less practiced and in a well practiced state. The results indicate that automaticity, i.e., a decreased dependence on attentional and working memory resources, develops as a consequence of practice. This corresponds to the practice-related decreases of activity in the prefrontal, anterior cingulate, and posterior parietal regions. In addition, the activity of the medial temporal regions decreased as a function of practice. This indicates an inverse relation between the strength of encoding and the activation of the MTL during retrieval. Furthermore, the pattern of practice-related increases in the auditory, posterior insular-opercular extending into perisylvian supramarginal region, and the right mid occipito-temporal region, may reflect a lower degree of inhibitory attentional modulation of task-irrelevant processing and more fully developed representations of the abstract designs, respectively. We also suggest that free recall is dependent on bilateral prefrontal processing, in particular non-automatic free recall. The present results confirm previous functional neuroimaging studies of memory retrieval indicating that recall is subserved by a network of interacting brain regions. Furthermore, the results indicate that some components of the neural network subserving free recall may have a dynamic role and that there is a functional restructuring of the information-processing networks during the learning process.
  • Petersson, K. M., Reis, A., Castro-Caldas, A., & Ingvar, M. (1999). Effective auditory-verbal encoding activates the left prefrontal and the medial temporal lobes: A generalization to illiterate subjects. NeuroImage, 10, 45-54. doi:10.1006/nimg.1999.0446.

    Abstract

    Recent event-related fMRI studies indicate that the prefrontal (PFC) and the medial temporal lobe (MTL) regions are more active during effective encoding than during ineffective encoding. The within-subject design and the use of well-educated young college students in these studies make it important to replicate these results in other study populations. In this PET study, we used an auditory word-pair association cued-recall paradigm and investigated a group of healthy upper middle-aged/older illiterate women. We observed a positive correlation between cued-recall success and the regional cerebral blood flow of the left inferior PFC (BA 47) and the MTLs. Specifically, we used the cued-recall success as a covariate in a general linear model, and the results confirmed that the left inferior PFC and the MTL are more active during effective encoding than during ineffective encoding. These effects were observed during encoding of both semantically and phonologically related word pairs, indicating that these effects are robust in the studied population, that is, reproducible within group. These results generalize the results of Brewer et al. (1998, Science 281, 1185–1187) and Wagner et al. (1998, Science 281, 1188–1191) to an upper middle-aged/older illiterate population. In addition, the present study indicates that effective relational encoding correlates positively with the activity of the anterior medial temporal lobe regions.
  • Petersson, K. M., Reis, A., & Ingvar, M. (2001). Cognitive processing in literate and illiterate subjects: A review of some recent behavioral and functional neuroimaging data. Scandinavian Journal of Psychology, 42, 251-267. doi:10.1111/1467-9450.00235.

    Abstract

    The study of illiterate subjects, who for specific socio-cultural reasons did not have the opportunity to acquire basic reading and writing skills, represents one approach to studying the interaction between neurobiological and cultural factors in cognitive development and the functional organization of the human brain. In addition, naturally occurring illiteracy may serve as a model for studying the influence of alphabetic orthography on auditory-verbal language. In this paper we have reviewed some recent behavioral and functional neuroimaging data indicating that learning an alphabetic written language modulates the auditory-verbal language system in a non-trivial way, and have provided support for the hypothesis that the functional architecture of the brain is modulated by literacy. We have also indicated that the effects of literacy and formal schooling are not limited to language-related skills but appear to affect other cognitive domains as well. In particular, we indicate that formal schooling influences 2D but not 3D visual naming skills. We have also pointed to the importance of using ecologically relevant tasks when comparing literate and illiterate subjects. We also demonstrate the applicability of a network approach in elucidating differences in the functional organization of the brain between groups. The strength of such an approach is the ability to study patterns of interactions between functionally specialized brain regions and the possibility to compare such patterns of brain interactions between groups or functional states. This complements the more commonly used activation approach to functional neuroimaging data, which characterizes functionally specialized regions, and provides important data characterizing the functional interactions between these regions.
  • Petersson, K. M., Sandblom, J., Gisselgard, J., & Ingvar, M. (2001). Learning related modulation of functional retrieval networks in man. Scandinavian Journal of Psychology, 42, 197-216. doi:10.1111/1467-9450.00231.
  • Petersson, K. M., Elfgren, C., & Ingvar, M. (1999). Learning-related effects and functional neuroimaging. Human Brain Mapping, 7, 234-243. doi:10.1002/(SICI)1097-0193(1999)7:4<234:AID-HBM2>3.0.CO;2-O.

    Abstract

    A fundamental problem in the study of learning is that learning-related changes may be confounded by nonspecific time effects. There are several strategies for handling this problem. This problem may be of greater significance in functional magnetic resonance imaging (fMRI) compared to positron emission tomography (PET). Using the general linear model, we describe, compare, and discuss two approaches for separating learning-related from nonspecific time effects. The first approach makes assumptions on the general behavior of nonspecific effects and explicitly models these effects, i.e., nonspecific time effects are incorporated as a linear or nonlinear confounding covariate in the statistical model. The second strategy makes no a priori assumption concerning the form of nonspecific time effects, but implicitly controls for nonspecific effects using an interaction approach, i.e., learning effects are assessed with an interaction contrast. The two approaches depend on specific assumptions and have specific limitations. With certain experimental designs, both approaches may be used and the results compared, lending particular support to effects that are independent of the method used. A third and perhaps better approach that sometimes may be practically unfeasible is to use a completely temporally balanced experimental design. The choice of approach may be of particular importance when learning related effects are studied with fMRI.
  • Petersson, K. M., Nichols, T. E., Poline, J.-B., & Holmes, A. P. (1999). Statistical limitations in functional neuroimaging I: Non-inferential methods and statistical models. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 354, 1239-1260.
  • Petersson, K. M., Nichols, T. E., Poline, J.-B., & Holmes, A. P. (1999). Statistical limitations in functional neuroimaging II: Signal detection and statistical inference. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 354, 1261-1282.
  • Petrovic, P., Kalso, E., Petersson, K. M., & Ingvar, M. (2002). Placebo and opioid analgesia - Imaging a shared neuronal network. Science, 295(5560), 1737-1740. doi:10.1126/science.1067176.

    Abstract

    It has been suggested that placebo analgesia involves both higher order cognitive networks and endogenous opioid systems. The rostral anterior cingulate cortex (rACC) and the brainstem are implicated in opioid analgesia, suggesting a similar role for these structures in placebo analgesia. Using positron emission tomography, we confirmed that both opioid and placebo analgesia are associated with increased activity in the rACC. We also observed a covariation between the activity in the rACC and the brainstem during both opioid and placebo analgesia, but not during the pain-only condition. These findings indicate a related neural mechanism in placebo and opioid analgesia.