Publications

  • Bulut, T. (2022). Meta-analytic connectivity modeling of the left and right inferior frontal gyri. Cortex, 155, 107-131. doi:10.1016/j.cortex.2022.07.003.

    Abstract

    Background

    Neurocognitive models of language processing highlight the role of the left inferior frontal gyrus (IFG) in the functional network underlying language. Furthermore, neuroscience research has shown that IFG is not a uniform region anatomically, cytoarchitectonically or functionally. However, no previous study explored the language-related functional connectivity patterns of IFG subdivisions using a meta-analytic connectivity modeling (MACM) approach.
    Purpose

    The present MACM study aimed to identify language-related coactivation patterns of the left and right IFG subdivisions.
    Method

    Six regions of interest (ROIs) were defined using a probabilistic brain atlas corresponding to pars opercularis, pars triangularis and pars orbitalis of IFG in both hemispheres. The ROIs were used to search the BrainMap functional database to identify neuroimaging experiments with healthy, right-handed participants reporting language-related activations in each ROI. Activation likelihood estimation analyses were then performed on the foci extracted from the identified studies to compute functional convergence for each ROI, which was also contrasted with the other ROIs within the same hemisphere.
    Results

    A primarily left-lateralized functional network was revealed for the left and right IFG subdivisions. The left-hemispheric ROIs exhibited more robust coactivation than the right-hemispheric ROIs. Particularly, the left pars opercularis was associated with the most extensive coactivation pattern involving bilateral frontal, bilateral parietal, left temporal, left subcortical, and right cerebellar regions, while the left pars triangularis and orbitalis revealed a predominantly left-lateralized involvement of frontotemporal regions.
    Conclusion

    The findings align with the neurocognitive models of language processing that propose a division of labor among the left IFG subdivisions and their respective functional networks. Also, the opercular part of left IFG stands out as a major hub in the language network with connections to diverse cortical, subcortical and cerebellar structures.
  • Bulut, T. (2022). Neural correlates of morphological processing: An activation likelihood estimation meta-analysis. Cortex, 151, 49-69. doi:10.1016/j.cortex.2022.02.010.

    Abstract

    Background

    Morphemes are the smallest building blocks of language that convey meaning or function. A controversial issue in psycho- and neurolinguistics is whether morphologically complex words consisting of multiple morphemes are processed in a combinatorial manner and, if so, which brain regions underlie this process. Relatively less is known about the neural underpinnings of morphological processing compared to other aspects of grammatical competence such as syntax.

    Purpose
    The present study aimed to shed light on the neural correlates of morphological processing by examining functional convergence for inflectional morphology reported in previous neuroimaging studies.

    Method
    A systematic literature search was performed on PubMed with search terms related to morphological complexity and neuroimaging. 16 studies (279 subjects) comparing regular inflection with stems or irregular inflection met the inclusion and exclusion criteria and were subjected to a series of activation likelihood estimation meta-analyses.

    Results
    Significant functional convergence was found in several mainly left frontal regions for processing inflectional morphology. Specifically, the left inferior frontal gyrus (LIFG) was found to be consistently involved in morphological complexity. Diagnostic analyses revealed that involvement of posterior LIFG was robust against potential publication bias and over-influence of individual studies. Furthermore, LIFG involvement was maintained in meta-analyses of subsets of experiments that matched phonological complexity between conditions, although diagnostic analyses suggested that this conclusion may be premature.

    Conclusion
    The findings provide evidence for combinatorial processing of morphologically complex words and inform psycholinguistic accounts of complex word processing. Furthermore, they highlight the role of LIFG in processing inflectional morphology, in addition to syntactic processing as has been emphasized in previous research. In particular, posterior LIFG seems to underlie grammatical functions encompassing inflectional morphology and syntax.

    Additional information

    Supplementary information: Open Data

  • Burenhult, N. (2004). Spatial deixis in Jahai. In S. Burusphat (Ed.), Papers from the 11th Annual Meeting of the Southeast Asian Linguistics Society 2001 (pp. 87-100). Arizona State University: Program for Southeast Asian Studies.
  • Burenhult, N. (2004). Landscape terms and toponyms in Jahai: A field report. Lund Working Papers, 51, 17-29.
  • Butterfield, S., & Cutler, A. (1988). Segmentation errors by human listeners: Evidence for a prosodic segmentation strategy. In W. Ainsworth, & J. Holmes (Eds.), Proceedings of SPEECH ’88: Seventh Symposium of the Federation of Acoustic Societies of Europe: Vol. 3 (pp. 827-833). Edinburgh: Institute of Acoustics.
  • Byers-Heinlein, K., Bergmann, C., & Savalei, V. (2022). Six solutions for more reliable infant research. Infant and Child Development, 31(5): e2296. doi:10.1002/icd.2296.

    Abstract

    Infant research is often underpowered, undermining the robustness and replicability of our findings. Improving the reliability of infant studies offers a solution for increasing statistical power independent of sample size. Here, we discuss two senses of the term reliability in the context of infant research: reliable (large) effects and reliable measures. We examine the circumstances under which effects are strongest and measures are most reliable and use synthetic datasets to illustrate the relationship between effect size, measurement reliability, and statistical power. We then present six concrete solutions for more reliable infant research: (a) routinely estimating and reporting the effect size and measurement reliability of infant tasks, (b) selecting the best measurement tool, (c) developing better infant paradigms, (d) collecting more data points per infant, (e) excluding unreliable data from the analysis, and (f) conducting more sophisticated data analyses. Deeper consideration of measurement in infant research will improve our ability to study infant development.
  • Byun, K.-S., Roberts, S. G., De Vos, C., Zeshan, U., & Levinson, S. C. (2022). Distinguishing selection pressures in an evolving communication system: Evidence from colour naming in 'cross signing'. Frontiers in Communication, 7: 1024340. doi:10.3389/fcomm.2022.1024340.

    Abstract

    Cross-signing—the emergence of an interlanguage between users of different sign languages—offers a rare chance to examine the evolution of a natural communication system in real time. To provide an insight into this process, we analyse an annotated video corpus of 340 minutes of interaction between signers of different language backgrounds on their first meeting and after living with each other for several weeks. We focus on the evolution of shared color terms and examine the role of different selectional pressures, including frequency, content, coordination and interactional context. We show that attentional factors in interaction play a crucial role. This suggests that understanding meta-communication is critical for explaining the cultural evolution of linguistic systems.
  • Cambier, N., Miletitch, R., Burraco, A. B., & Raviv, L. (2022). Prosociality in swarm robotics: A model to study self-domestication and language evolution. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 98-100). Nijmegen: Joint Conference on Language Evolution (JCoLE).
  • Cao, Y., Oostenveld, R., Alday, P. M., & Piai, V. (2022). Are alpha and beta oscillations spatially dissociated over the cortex in context‐driven spoken‐word production? Psychophysiology, 59(6): e13999. doi:10.1111/psyp.13999.

    Abstract

    Decreases in oscillatory alpha- and beta-band power have been consistently found in spoken-word production. These have been linked to both motor preparation and conceptual-lexical retrieval processes. However, the observed power decreases have a broad frequency range that spans two “classic” (sensorimotor) bands: alpha and beta. It remains unclear whether alpha- and beta-band power decreases contribute independently when a spoken word is planned. Using a re-analysis of existing magnetoencephalography data, we probed whether the effects in alpha and beta bands are spatially distinct. Participants read a sentence that was either constraining or non-constraining toward the final word, which was presented as a picture. In separate blocks participants had to name the picture or score its predictability via button press. Irregular-resampling auto-spectral analysis (IRASA) was used to isolate the oscillatory activity in the alpha and beta bands from the background 1/f spectrum. The sources of alpha- and beta-band oscillations were localized based on the participants’ individualized peak frequencies. For both tasks, alpha- and beta-power decreases overlapped in left posterior temporal and inferior parietal cortex, regions that have previously been associated with conceptual and lexical processes. The spatial distributions of the alpha and beta power effects were similar in these regions to the extent we could assess it. By contrast, for left frontal regions, the spatial distributions differed between alpha and beta effects. Our results suggest that for conceptual-lexical retrieval, alpha and beta oscillations do not dissociate spatially and, thus, are distinct from the classical sensorimotor alpha and beta oscillations.
  • Carlsson, K., Petersson, K. M., Lundqvist, D., Karlsson, A., Ingvar, M., & Öhman, A. (2004). Fear and the amygdala: manipulation of awareness generates differential cerebral responses to phobic and fear-relevant (but nonfeared) stimuli. Emotion, 4(4), 340-353. doi:10.1037/1528-3542.4.4.340.

    Abstract

    Rapid response to danger holds an evolutionary advantage. In this positron emission tomography study, phobics were exposed to masked visual stimuli with timings that either allowed awareness or not of either phobic, fear-relevant (e.g., spiders to snake phobics), or neutral images. When the timing did not permit awareness, the amygdala responded to both phobic and fear-relevant stimuli. With time for more elaborate processing, phobic stimuli resulted in an addition of an affective processing network to the amygdala activity, whereas no activity was found in response to fear-relevant stimuli. Also, right prefrontal areas appeared deactivated, comparing aware phobic and fear-relevant conditions. Thus, a shift from top-down control to an affectively driven system optimized for speed was observed in phobic relative to fear-relevant aware processing.
  • Carota, F., Schoffelen, J.-M., Oostenveld, R., & Indefrey, P. (2022). The time course of language production as revealed by pattern classification of MEG sensor data. The Journal of Neuroscience, 42(29), 5745-5754. doi:10.1523/JNEUROSCI.1923-21.2022.

    Abstract

    Language production involves a complex set of computations, from conceptualization to articulation, which are thought to engage cascading neural events in the language network. However, recent neuromagnetic evidence suggests simultaneous meaning-to-speech mapping in picture naming tasks, as indexed by early parallel activation of frontotemporal regions to lexical semantic, phonological, and articulatory information. Here we investigate the time course of word production, asking to what extent such “earliness” is a distinctive property of the associated spatiotemporal dynamics. Using MEG, we recorded the neural signals of 34 human subjects (26 males) overtly naming 134 images from four semantic object categories (animals, foods, tools, clothes). Within each category, we covaried word length, as quantified by the number of syllables contained in a word, and phonological neighborhood density to target lexical and post-lexical phonological/phonetic processes. Multivariate pattern analysis searchlights in sensor space distinguished the stimulus-locked spatiotemporal responses to object categories early on, from 150 to 250 ms after picture onset, whereas word length was decoded in left frontotemporal sensors at 250-350 ms, followed by the latency of phonological neighborhood density (350-450 ms). Our results suggest a progression of neural activity from posterior to anterior language regions for the semantic and phonological/phonetic computations preparing overt speech, thus supporting serial cascading models of word production.
  • Carter, G., & Nieuwland, M. S. (2022). Predicting definite and indefinite referents during discourse comprehension: Evidence from event‐related potentials. Cognitive Science, 46(2): e13092. doi:10.1111/cogs.13092.

    Abstract

    Linguistic predictions may be generated from and evaluated against a representation of events and referents described in the discourse. Compatible with this idea, recent work shows that predictions about novel noun phrases include their definiteness. In the current follow-up study, we ask whether people engage similar prediction-related processes for definite and indefinite referents. This question is relevant for linguistic theories that imply a processing difference between definite and indefinite noun phrases, typically because definiteness is thought to require a uniquely identifiable referent in the discourse. We addressed this question in an event-related potential (ERP) study (N = 48) with preregistration of data acquisition, preprocessing, and Bayesian analysis. Participants read Dutch mini-stories with a definite or indefinite novel noun phrase (e.g., “het/een huis,” the/a house), wherein (in)definiteness of the article was either expected or unexpected and the noun was always strongly expected. Unexpected articles elicited enhanced N400s, but unexpectedly indefinite articles also elicited a positive ERP effect at frontal channels compared to expectedly indefinite articles. We tentatively link this effect to an anti-uniqueness violation, which may force people to introduce a new referent over and above the already anticipated one. Interestingly, expectedly definite nouns elicited larger N400s than unexpectedly definite nouns (replicating a previous surprising finding) and indefinite nouns. Although the exact nature of these noun effects remains unknown, expectedly definite nouns may have triggered the strongest semantic activation because they alone refer to specific and concrete referents. In sum, results from both the articles and nouns clearly demonstrate that definiteness marking has a rapid effect on processing, counter to recent claims regarding definiteness processing.
  • Castro-Caldas, A., Petersson, K. M., Reis, A., Stone-Elander, S., & Ingvar, M. (1998). The illiterate brain: Learning to read and write during childhood influences the functional organization of the adult brain. Brain, 121, 1053-1063. doi:10.1093/brain/121.6.1053.

    Abstract

    Learning a specific skill during childhood may partly determine the functional organization of the adult brain. This hypothesis led us to study oral language processing in illiterate subjects who, for social reasons, had never entered school and had no knowledge of reading or writing. In a brain activation study using PET and statistical parametric mapping, we compared word and pseudoword repetition in literate and illiterate subjects. Our study confirms behavioural evidence of different phonological processing in illiterate subjects. During repetition of real words, the two groups performed similarly and activated similar areas of the brain. In contrast, illiterate subjects had more difficulty repeating pseudowords correctly and did not activate the same neural structures as literates. These results are consistent with the hypothesis that learning the written form of language (orthography) interacts with the function of oral language. Our results indicate that learning to read and write during childhood influences the functional organization of the adult human brain.
  • Chen, X., Hartsuiker, R. J., Muylle, M., Slim, M. S., & Zhang, C. (2022). The effect of animacy on structural priming: A replication of Bock, Loebell and Morey (1992). Journal of Memory and Language, 127: 104354. doi:10.1016/j.jml.2022.104354.

    Abstract

    Bock et al. (1992) found that the binding of animacy features onto grammatical roles is susceptible to priming in sentence production. Moreover, this effect did not interact with structural priming. This finding supports an account according to which syntactic representations are insensitive to the consistency of animacy-to-structure mapping. This account has contributed greatly to the development of syntactic processing theories in language production. However, this study has never been directly replicated and the few related studies showed mixed results. A meta-analysis of these studies failed to replicate the findings of Bock et al. (1992). Therefore, we conducted a well-powered replication (n = 496) that followed the original study as closely as possible. We found an effect of structural priming and an animacy priming effect, replicating Bock et al.’s findings. In addition, we replicated Bock et al.’s (1992) observed null interaction between structural priming and animacy binding, which suggests that syntactic representations are indeed independent of semantic information about animacy.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2004). Language specificity in perception of paralinguistic intonational meaning. Language and Speech, 47(4), 311-349.

    Abstract

    This study examines the perception of paralinguistic intonational meanings deriving from Ohala’s Frequency Code (Experiment 1) and Gussenhoven’s Effort Code (Experiment 2) in British English and Dutch. Native speakers of British English and Dutch listened to a number of stimuli in their native language and judged each stimulus on four semantic scales deriving from these two codes: SELF-CONFIDENT versus NOT SELF-CONFIDENT, FRIENDLY versus NOT FRIENDLY (Frequency Code); SURPRISED versus NOT SURPRISED, and EMPHATIC versus NOT EMPHATIC (Effort Code). The stimuli, which were lexically equivalent across the two languages, differed in pitch contour, pitch register and pitch span in Experiment 1, and in pitch register, peak height, peak alignment and end pitch in Experiment 2. Contrary to the traditional view that the paralinguistic usage of intonation is similar across languages, it was found that British English and Dutch listeners differed considerably in the perception of “confident,” “friendly,” “emphatic,” and “surprised.” The present findings support a theory of paralinguistic meaning based on the universality of biological codes, which however acknowledges a language-specific component in the implementation of these codes.
  • Cheung, C.-Y., Yakpo, K., & Coupé, C. (2022). A computational simulation of the genesis and spread of lexical items in situations of abrupt language contact. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 115-122). Nijmegen: Joint Conference on Language Evolution (JCoLE).

    Abstract

    The current study presents an agent-based model which simulates the innovation and competition among lexical items in cases of language contact. It is inspired by relatively recent historical cases in which the linguistic ecology and sociohistorical context are highly complex. Pidgin and creole genesis offers an opportunity to obtain linguistic facts, social dynamics, and historical demography in a highly segregated society. This provides a solid ground for researching the interaction of populations with different pre-existing language systems, and how different factors contribute to the genesis of the lexicon of a newly generated mixed language. We take into consideration the population dynamics and structures, as well as a distribution of word frequencies related to language use, in order to study how social factors may affect the developmental trajectory of languages. Focusing on the case of Sranan in Suriname, our study shows that it is possible to account for the composition of its core lexicon in relation to different social groups, contact patterns, and large population movements.
  • Cho, T., & McQueen, J. M. (2004). Phonotactics vs. phonetic cues in native and non-native listening: Dutch and Korean listeners' perception of Dutch and English. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1301-1304). Seoul: Sunjin Printing Co.

    Abstract

    We investigated how listeners of two unrelated languages, Dutch and Korean, process phonotactically legitimate and illegitimate sounds spoken in Dutch and American English. To Dutch listeners, unreleased word-final stops are phonotactically illegal because word-final stops in Dutch are generally released in isolation, but to Korean listeners, released final stops are illegal because word-final stops are never released in Korean. Two phoneme monitoring experiments showed a phonotactic effect: Dutch listeners detected released stops more rapidly than unreleased stops whereas the reverse was true for Korean listeners. Korean listeners with English stimuli detected released stops more accurately than unreleased stops, however, suggesting that acoustic-phonetic cues associated with released stops improve detection accuracy. We propose that in non-native speech perception, phonotactic legitimacy in the native language speeds up phoneme recognition, the richness of acoustic-phonetic cues improves listening accuracy, and familiarity with the non-native language modulates the relative influence of these two factors.
  • Cho, T. (2004). Prosodically conditioned strengthening and vowel-to-vowel coarticulation in English. Journal of Phonetics, 32(2), 141-176. doi:10.1016/S0095-4470(03)00043-3.

    Abstract

    The goal of this study is to examine how the degree of vowel-to-vowel coarticulation varies as a function of prosodic factors such as nuclear-pitch accent (accented vs. unaccented), level of prosodic boundary (Prosodic Word vs. Intermediate Phrase vs. Intonational Phrase), and position-in-prosodic-domain (initial vs. final). It is hypothesized that vowels in prosodically stronger locations (e.g., in accented syllables and at a higher prosodic boundary) are not only coarticulated less with their neighboring vowels, but they also exert a stronger influence on their neighbors. Measurements of tongue position for English /a i/ over time were obtained with Carsten’s electromagnetic articulography. Results showed that vowels in prosodically stronger locations are coarticulated less with neighboring vowels, but do not exert a stronger influence on the articulation of neighboring vowels. An examination of the relationship between coarticulation and duration revealed that (a) accent-induced coarticulatory variation cannot be attributed to a duration factor and (b) some of the data with respect to boundary effects may be accounted for by the duration factor. This suggests that to the extent that prosodically conditioned coarticulatory variation is duration-independent, there is no absolute causal relationship from duration to coarticulation. It is proposed that prosodically conditioned V-to-V coarticulatory reduction is another type of strengthening that occurs in prosodically strong locations. The prosodically driven coarticulatory patterning is taken to be part of the phonetic signatures of the hierarchically nested structure of prosody.
  • Cho, T., & Johnson, E. K. (2004). Acoustic correlates of phrase-internal lexical boundaries in Dutch. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1297-1300). Seoul: Sunjin Printing Co.

    Abstract

    The aim of this study was to determine if Dutch speakers reliably signal phrase-internal lexical boundaries, and if so, how. Six speakers recorded 4 pairs of phonemically identical strong-weak-strong (SWS) strings with matching syllable boundaries but mismatching intended word boundaries (e.g. reis # pastei versus reispas # tij, or more broadly C1V1(C)#C2V2(C)C3V3(C) vs. C1V1(C)C2V2(C)#C3V3(C)). An Analysis of Variance revealed 3 acoustic parameters that were significantly greater in S#WS items (C2 DURATION, RIME1 DURATION, C3 BURST AMPLITUDE) and 5 parameters that were significantly greater in the SW#S items (C2 VOT, C3 DURATION, RIME2 DURATION, RIME3 DURATION, and V2 AMPLITUDE). Additionally, center of gravity measurements suggested that the [s] to [t] coarticulation was greater in reis # pa[st]ei versus reispa[s] # [t]ij. Finally, a Logistic Regression Analysis revealed that the 3 parameters (RIME1 DURATION, RIME2 DURATION, and C3 DURATION) contributed most reliably to an S#WS versus SW#S classification.
  • Cho, T. (2022). The Phonetics-Prosody Interface and Prosodic Strengthening in Korean. In S. Cho, & J. Whitman (Eds.), Cambridge handbook of Korean linguistics (pp. 248-293). Cambridge: Cambridge University Press.
  • Cholin, J. (2004). Syllables in speech production: Effects of syllable preparation and syllable frequency. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.60589.

    Abstract

    The fluent production of speech is a very complex human skill. It requires the coordination of several articulatory subsystems. The instructions that lead articulatory movements to execution are the result of the interplay of speech production levels that operate above the articulatory network. During the process of word-form encoding, the groundwork for the articulatory programs is prepared which then serve the articulators as basic units. This thesis investigated whether or not syllables form the basis for the articulatory programs and in particular whether or not these syllable programs are stored, separate from the store of the lexical word-forms. It is assumed that syllable units are stored in a so-called 'mental syllabary'. The main goal of this thesis was to find evidence of the syllable playing a functionally important role in speech production and for the assumption that syllables are stored units. In a variant of the implicit priming paradigm, it was investigated whether information about the syllabic structure of a target word facilitates the preparation (advanced planning) of a to-be-produced utterance. These experiments yielded evidence for the functionally important role of syllables in speech production. In a subsequent series of experiments, it was demonstrated that the production of syllables is sensitive to frequency. Syllable frequency effects provide strong evidence for the notion of a mental syllabary because only stored units are likely to exhibit frequency effects. In a final study, effects of syllable preparation and syllable frequency were investigated in a combined study to disentangle the two effects. The results of this last experiment converged with those reported for the other experiments and added further support to the claim that syllables play a core functional role in speech production and are stored in a mental syllabary.

    Additional information

    full text via Radboud Repository
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Chormai, P., Pu, Y., Hu, H., Fisher, S. E., Francks, C., & Kong, X. (2022). Machine learning of large-scale multimodal brain imaging data reveals neural correlates of hand preference. NeuroImage, 262: 119534. doi:10.1016/j.neuroimage.2022.119534.

    Abstract

    Lateralization is a fundamental characteristic of many behaviors and the organization of the brain, and atypical lateralization has been suggested to be linked to various brain-related disorders such as autism and schizophrenia. Right-handedness is one of the most prominent markers of human behavioural lateralization, yet its neurobiological basis remains to be determined. Here, we present a large-scale analysis of handedness, as measured by self-reported direction of hand preference, and its variability related to brain structural and functional organization in the UK Biobank (N = 36,024). A multivariate machine learning approach with multi-modalities of brain imaging data was adopted, to reveal how well brain imaging features could predict an individual's handedness (i.e., right-handedness vs. non-right-handedness) and further identify the top brain signatures that contributed to the prediction. Overall, the results showed a good prediction performance, with an area under the receiver operating characteristic curve (AUROC) score of up to 0.72, driven largely by resting-state functional measures. Virtual lesion analysis and large-scale decoding analysis suggested that the brain networks with the highest importance in the prediction showed functional relevance to hand movement and several higher-level cognitive functions including language, arithmetic, and social interaction. Genetic analyses of contributions of common DNA polymorphisms to the imaging-derived handedness prediction score showed a significant heritability (h2=7.55%, p <0.001) that was similar to and slightly higher than that for the behavioural measure itself (h2=6.74%, p <0.001). The genetic correlation between the two was high (rg=0.71), suggesting that the imaging-derived score could be used as a surrogate in genetic studies where the behavioural measure is not available. This large-scale study using multimodal brain imaging and multivariate machine learning has shed new light on the neural correlates of human handedness.

    Additional information

    supplementary material
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
  • Clough, S., Hilverman, C., Brown-Schmidt, S., & Duff, M. C. (2022). Evidence of audience design in amnesia: Adaptation in gesture but not speech. Brain Sciences, 12(8): 1082. doi:10.3390/brainsci12081082.

    Abstract

    Speakers design communication for their audience, providing more information in both speech and gesture when their listener is naive to the topic. We test whether the hippocampal declarative memory system contributes to multimodal audience design. The hippocampus, while traditionally linked to episodic and relational memory, has also been linked to the ability to imagine the mental states of others and use language flexibly. We examined the speech and gesture use of four patients with hippocampal amnesia when describing how to complete everyday tasks (e.g., how to tie a shoe) to an imagined child listener and an adult listener. Although patients with amnesia did not increase their total number of words and instructional steps for the child listener, they did produce representational gestures at significantly higher rates for the imagined child compared to the adult listener. They also gestured at similar frequencies to neurotypical peers, suggesting that hand gesture can be a meaningful communicative resource, even in the case of severe declarative memory impairment. We discuss the contributions of multiple memory systems to multimodal audience design and the potential of gesture to act as a window into the social cognitive processes of individuals with neurologic disorders.
  • Cooper, N., & Cutler, A. (2004). Perception of non-native phonemes in noise. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 469-472). Seoul: Sunjijn Printing Co.

    Abstract

    We report an investigation of the perception of American English phonemes by Dutch listeners proficient in English. Listeners identified either the consonant or the vowel in most possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (16 dB, 8 dB, and 0 dB). Effects of signal-to-noise ratio on vowel and consonant identification are discussed as a function of syllable position and of relationship to the native phoneme inventory. Comparison of the results with previously reported data from native listeners reveals that noise affected the responding of native and non-native listeners similarly.
  • Coopmans, C. W., De Hoop, H., Kaushik, K., Hagoort, P., & Martin, A. E. (2022). Hierarchy in language interpretation: Evidence from behavioural experiments and computational modelling. Language, Cognition and Neuroscience, 37(4), 420-439. doi:10.1080/23273798.2021.1980595.

    Abstract

    It has long been recognised that phrases and sentences are organised hierarchically, but many computational models of language treat them as sequences of words without computing constituent structure. Against this background, we conducted two experiments which showed that participants interpret ambiguous noun phrases, such as second blue ball, in terms of their abstract hierarchical structure rather than their linear surface order. When a neural network model was tested on this task, it could simulate such “hierarchical” behaviour. However, when we changed the training data such that they were not entirely unambiguous anymore, the model stopped generalising in a human-like way. It did not systematically generalise to novel items, and when it was trained on ambiguous trials, it strongly favoured the linear interpretation. We argue that these models should be endowed with a bias to make generalisations over hierarchical structure in order to be cognitively adequate models of human language.
  • Coopmans, C. W., De Hoop, H., Hagoort, P., & Martin, A. E. (2022). Effects of structure and meaning on cortical tracking of linguistic units in naturalistic speech. Neurobiology of Language, 3(3), 386-412. doi:10.1162/nol_a_00070.

    Abstract

    Recent research has established that cortical activity “tracks” the presentation rate of syntactic phrases in continuous speech, even though phrases are abstract units that do not have direct correlates in the acoustic signal. We investigated whether cortical tracking of phrase structures is modulated by the extent to which these structures compositionally determine meaning. To this end, we recorded electroencephalography (EEG) of 38 native speakers who listened to naturally spoken Dutch stimuli in different conditions, which parametrically modulated the degree to which syntactic structure and lexical semantics determine sentence meaning. Tracking was quantified through mutual information between the EEG data and either the speech envelopes or abstract annotations of syntax, all of which were filtered in the frequency band corresponding to the presentation rate of phrases (1.1–2.1 Hz). Overall, these mutual information analyses showed stronger tracking of phrases in regular sentences than in stimuli whose lexical-syntactic content is reduced, but no consistent differences in tracking between sentences and stimuli that contain a combination of syntactic structure and lexical content. While there were no effects of compositional meaning on the degree of phrase-structure tracking, analyses of event-related potentials elicited by sentence-final words did reveal meaning-induced differences between conditions. Our findings suggest that cortical tracking of structure in sentences indexes the internal generation of this structure, a process that is modulated by the properties of its input, but not by the compositional interpretation of its output.

    Additional information

    supplementary information
  • Coopmans, C. W., & Cohn, N. (2022). An electrophysiological investigation of co-referential processes in visual narrative comprehension. Neuropsychologia, 172: 108253. doi:10.1016/j.neuropsychologia.2022.108253.

    Abstract

    Visual narratives make use of various means to convey referential and co-referential meaning, so comprehenders must recognize that different depictions across sequential images represent the same character(s). In this study, we investigated how the order in which different types of panels in visual sequences are presented affects how the unfolding narrative is comprehended. Participants viewed short comic strips while their electroencephalogram (EEG) was recorded. We analyzed evoked and induced EEG activity elicited by both full panels (showing a full character) and refiner panels (showing only a zoom of that full panel), and took into account whether they preceded or followed the panel to which they were co-referentially related (i.e., were cataphoric or anaphoric). We found that full panels elicited both larger N300 amplitude and increased gamma-band power compared to refiner panels. Anaphoric panels elicited a sustained negativity compared to cataphoric panels, which appeared to be sensitive to the referential status of the anaphoric panel. In the time-frequency domain, anaphoric panels elicited reduced 8–12 Hz alpha power and increased 45–65 Hz gamma-band power compared to cataphoric panels. These findings are consistent with models in which the processes involved in visual narrative comprehension partially overlap with those in language comprehension.
  • Corps, R. E., Brooke, C., & Pickering, M. (2022). Prediction involves two stages: Evidence from visual-world eye-tracking. Journal of Memory and Language, 122: 104298. doi:10.1016/j.jml.2021.104298.

    Abstract

    Comprehenders often predict what they are going to hear. But do they make the best predictions possible? We addressed this question in three visual-world eye-tracking experiments by asking when comprehenders consider perspective. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress, distractor: hairdryer) objects. In all three experiments, participants rapidly predicted semantic associates of the verb. But participants also predicted consistently – that is, consistent with their beliefs about what the speaker would ultimately say. They predicted consistently from the speaker’s perspective in Experiment 1, their own perspective in Experiment 2, and the character’s perspective in Experiment 3. This consistent effect occurred later than the associative effect. We conclude that comprehenders consider perspective when predicting, but not from the earliest moments of prediction, consistent with a two-stage account.

    Additional information

    data and analysis scripts
  • Corps, R. E., Knudsen, B., & Meyer, A. S. (2022). Overrated gaps: Inter-speaker gaps provide limited information about the timing of turns in conversation. Cognition, 223: 105037. doi:10.1016/j.cognition.2022.105037.

    Abstract

    Corpus analyses have shown that turn-taking in conversation is much faster than laboratory studies of speech planning would predict. To explain fast turn-taking, Levinson and Torreira (2015) proposed that speakers are highly proactive: They begin to plan a response to their interlocutor's turn as soon as they have understood its gist, and launch this planned response when the turn-end is imminent. Thus, fast turn-taking is possible because speakers use the time while their partner is talking to plan their own utterance. In the present study, we asked how much time upcoming speakers actually have to plan their utterances. Following earlier psycholinguistic work, we used transcripts of spoken conversations in Dutch, German, and English. These transcripts consisted of segments, which are continuous stretches of speech by one speaker. In the psycholinguistic and phonetic literature, such segments have often been used as proxies for turns. We found that in all three corpora, large proportions of the segments consisted of only one or two words, which on our estimate does not give the next speaker enough time to fully plan a response. Further analyses showed that speakers indeed often did not respond to the immediately preceding segment of their partner, but continued an earlier segment of their own. More generally, our findings suggest that speech segments derived from transcribed corpora do not necessarily correspond to turns, and the gaps between speech segments therefore only provide limited information about the planning and timing of turns.
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Cox, S., Rösler, D., & Skiba, R. (1989). A tailor-made database for language teaching material. Literary & Linguistic Computing, 4(4), 260-264.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Creaghe, N., & Kidd, E. (2022). Symbolic play as a zone of proximal development: An analysis of informational exchange. Social Development, 31(4), 1138-1156. doi:10.1111/sode.12592.

    Abstract

    Symbolic play has long been considered a beneficial context for development. According to Cultural Learning theory, one reason for this is that symbolically-infused dialogical interactions constitute a zone of proximal development. However, the dynamics of caregiver-child interactions during symbolic play are still not fully understood. In the current study, we investigated informational exchange between fifty-two 24-month-old infants and their primary caregivers during symbolic play and a comparable, non-symbolic, functional play context. We coded over 11,000 utterances for whether participants had superior, equivalent, or inferior knowledge concerning the current conversational topic. Results showed that children were significantly more knowledgeable speakers and recipients in symbolic play, whereas the opposite was the case for caregivers, who were more knowledgeable in functional play. The results suggest that, despite its potential conceptual complexity, symbolic play may scaffold development because it facilitates infants’ communicative success by promoting them to ‘co-constructors of meaning’.

    Additional information

    supporting information
  • Creemers, A., & Embick, D. (2022). The role of semantic transparency in the processing of spoken compound words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 48(5), 734-751. doi:10.1037/xlm0001132.

    Abstract

    The question of whether lexical decomposition is driven by semantic transparency in the lexical processing of morphologically complex words, such as compounds, remains controversial. Prior research on compound processing has predominantly examined visual processing. Focusing instead on spoken word recognition, the present study examined the processing of auditorily presented English compounds that were semantically transparent (e.g., farmyard) or partially opaque with an opaque head (e.g., airline) or opaque modifier (e.g., pothole). Three auditory primed lexical decision experiments were run to examine to what extent constituent priming effects are affected by the semantic transparency of a compound and whether semantic transparency affects the processing of heads and modifiers equally. The results showed priming effects for both modifiers and heads regardless of their semantic transparency, indicating that individual constituents are accessed in transparent as well as opaque compounds. In addition, the results showed smaller priming effects for semantically opaque heads compared with matched transparent compounds with the same head. These findings suggest that semantically opaque heads induce an increased processing cost, which may result from the need to suppress the meaning of the head in favor of the meaning of the opaque compound.
  • Creemers, A., & Meyer, A. S. (2022). The processing of ambiguous pronominal reference is sensitive to depth of processing. Glossa Psycholinguistics, 1(1): 3. doi:10.5070/G601166.

    Abstract

    Previous studies on the processing of ambiguous pronominal reference have led to contradictory results: some suggested that ambiguity may hinder processing (Stewart, Holler, & Kidd, 2007), while others showed an ambiguity advantage (Grant, Sloggett, & Dillon, 2020) similar to what has been reported for structural ambiguities. This study provides a conceptual replication of Stewart et al. (2007, Experiment 1), to examine whether the discrepancy in earlier results is caused by the processing depth that participants engage in (cf. Swets, Desmet, Clifton, & Ferreira, 2008). We present the results from a word-by-word self-paced reading experiment with Dutch sentences that contained a personal pronoun in an embedded clause that was either ambiguous or disambiguated through gender features. Depth of processing of the embedded clause was manipulated through offline comprehension questions. The results showed that the difference in reading times for ambiguous versus unambiguous sentences depends on the processing depth: a significant ambiguity penalty was found under deep processing but not under shallow processing. No significant ambiguity advantage was found, regardless of processing depth. This replicates the results in Stewart et al. (2007) using a different methodology and a larger sample size for appropriate statistical power. These findings provide further evidence that ambiguous pronominal reference resolution is a flexible process, such that the way in which ambiguous sentences are processed depends on the depth of processing of the relevant information. Theoretical and methodological implications of these findings are discussed.
  • Cristia, A., Tsuji, S., & Bergmann, C. (2022). A meta-analytic approach to evaluating the explanatory adequacy of theories. Meta-Psychology, 6: MP.2020.2741. doi:10.15626/MP.2020.2741.

    Abstract

    How can data be used to check theories’ explanatory adequacy? The two traditional and most widespread approaches use single studies and non-systematic narrative reviews to evaluate theories’ explanatory adequacy; more recently, large-scale replications entered the picture. We argue here that none of these approaches fits in with cumulative science tenets. We propose instead Community-Augmented Meta-Analyses (CAMAs), which, like meta-analyses and systematic reviews, are built using all available data; like meta-analyses but not systematic reviews, can rely on sound statistical practices to model methodological effects; and like no other approach, are broad-scoped, cumulative and open. We explain how CAMAs entail a conceptual shift from meta-analyses and systematic reviews, a shift that is useful when evaluating theories’ explanatory adequacy. We then provide step-by-step recommendations for how to implement this approach – and what it means when one cannot. This leads us to conclude that CAMAs highlight areas of uncertainty better than alternative approaches that bring data to bear on theory evaluation, and can trigger a much needed shift towards a cumulative mindset with respect to both theory and data, leading us to do and view experiments and narrative reviews differently.

    Additional information

    All data available at OSF
  • Cucchiarini, C., Hubers, F., & Strik, H. (2022). Learning L2 idioms in a CALL environment: The role of practice intensity, modality, and idiom properties. Computer Assisted Language Learning, 35(4), 863-891. doi:10.1080/09588221.2020.1752734.

    Abstract

    Idiomatic expressions like hit the road or turn the tables are known to be problematic for L2 learners, but research indicates that learning L2 idiomatic language is important. Relatively few studies, most of them focusing on English idioms, have investigated how L2 idioms are actually acquired and how this process is affected by important idiom properties like transparency (the degree to which the figurative meaning of an idiom can be inferred from its literal analysis) and cross-language overlap (the degree to which L2 idioms correspond to L1 idioms). The present study employed a specially designed CALL system to investigate the effects of intensity of practice and the reading modality on learning Dutch L2 idioms, as well as the impact of idiom transparency and cross-language overlap. The results show that CALL practice with a focus on meaning and form is effective for learning L2 idioms and that the degree of practice needed depends on the properties of the idioms. L2 learners can achieve or even exceed native-like performance. Practicing reading idioms aloud does not lead to significantly higher performance than reading idioms silently. These findings have theoretical implications as they show that differences between native speakers and L2 learners are due to differences in exposure, rather than to different underlying acquisition mechanisms. For teaching practice, this study indicates that a properly designed CALL system is an effective and an ecologically sound environment for learning L2 idioms, a generally unattended area in L2 classes, and that teaching priorities should be based on degree of transparency and cross-language overlap of L2 idioms.
  • Cutler, A., Norris, D., & Sebastián-Gallés, N. (2004). Phonemic repertoire and similarity within the vocabulary. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 65-68). Seoul: Sunjijn Printing Co.

    Abstract

    Language-specific differences in the size and distribution of the phonemic repertoire can have implications for the task facing listeners in recognising spoken words. A language with more phonemes will allow shorter words and reduced embedding of short words within longer ones, decreasing the potential for spurious lexical competitors to be activated by speech signals. We demonstrate that this is the case via comparative analyses of the vocabularies of English and Spanish. A language which uses suprasegmental as well as segmental contrasts, however, can substantially reduce the extent of spurious embedding.
  • Cutler, A. (2004). Segmentation of spoken language by normal adult listeners. In R. Kent (Ed.), MIT encyclopedia of communication sciences and disorders (pp. 392-395). Cambridge, MA: MIT Press.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A., & Henton, C. G. (2004). There's many a slip 'twixt the cup and the lip. In H. Quené, & V. Van Heuven (Eds.), On speech and Language: Studies for Sieb G. Nooteboom (pp. 37-45). Utrecht: Netherlands Graduate School of Linguistics.

    Abstract

    The retiring academic may look back upon, inter alia, years of conference attendance. Speech error researchers are uniquely fortunate because they can collect data in any situation involving communication; accordingly, the retiring speech error researcher will have collected data at those conferences. We here address the issue of whether error data collected in situations involving conviviality (such as at conferences) is representative of error data in general. Our approach involved a comparison, across three levels of linguistic processing, between a specially constructed Conviviality Sample and the largest existing source of speech error data, the newly available Fromkin Speech Error Database. The results indicate that there are grounds for regarding the data in the Conviviality Sample as a better than average reflection of the true population of all errors committed. These findings encourage us to recommend further data collection in collaboration with like-minded colleagues.
  • Cutler, A. (2004). Twee regels voor academische vorming. In H. Procee (Ed.), Bij die wereld wil ik horen! Zesendertig columns en drie essays over de vorming tot academicus. (pp. 42-45). Amsterdam: Boom.
  • Cutler, A., Ernestus, M., Warner, N., & Weber, A. (2022). Managing speech perception data sets. In B. McDonnell, E. Koller, & L. B. Collister (Eds.), The Open Handbook of Linguistic Data Management (pp. 565-573). Cambridge, MA, USA: MIT Press. doi:10.7551/mitpress/12200.003.0055.
  • Ip, M. H. K., & Cutler, A. (2022). Juncture prosody across languages: Similar production but dissimilar perception. Laboratory Phonology, 13(1): 5. doi:10.16995/labphon.6464.

    Abstract

    How do speakers of languages with different intonation systems produce and perceive prosodic junctures in sentences with identical structural ambiguity? Native speakers of English and of Mandarin produced potentially ambiguous sentences with a prosodic juncture either earlier in the utterance (e.g., “He gave her # dog biscuits,” “他给她#狗饼干 ”), or later (e.g., “He gave her dog # biscuits,” “他给她狗 #饼干 ”). These production data showed that prosodic disambiguation is realised very similarly in the two languages, despite some differences in the degree to which individual juncture cues (e.g., pausing) were favoured. In perception experiments with a new disambiguation task, requiring speeded responses to select the correct meaning for structurally ambiguous sentences, language differences in disambiguation response time appeared: Mandarin speakers correctly disambiguated sentences with earlier juncture faster than those with later juncture, while English speakers showed the reverse. Mandarin speakers with L2 English did not show their native-language response time pattern when they heard the English ambiguous sentences. Thus even with identical structural ambiguity and identically cued production, prosodic juncture perception across languages can differ.

    Additional information

    supplementary files
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: vol. 5 (pp. 1751-1754). Sydney: ICLSP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Cutler, A. (1989). Auditory lexical access: Where do we start? In W. Marslen-Wilson (Ed.), Lexical representation and process (pp. 342-356). Cambridge, MA: MIT Press.

    Abstract

    The lexicon, considered as a component of the process of recognizing speech, is a device that accepts a sound image as input and outputs meaning. Lexical access is the process of formulating an appropriate input and mapping it onto an entry in the lexicon's store of sound images matched with their meanings. This chapter addresses the problems of auditory lexical access from continuous speech. The central argument to be proposed is that utterance prosody plays a crucial role in the access process. Continuous listening faces problems that are not present in visual recognition (reading) or in noncontinuous recognition (understanding isolated words). Aspects of utterance prosody offer a solution to these particular problems.
  • Cutler, A. (1979). Beyond parsing and lexical look-up. In R. J. Wales, & E. C. T. Walker (Eds.), New approaches to language mechanisms: a collection of psycholinguistic studies (pp. 133-149). Amsterdam: North-Holland.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1983). A language-specific comprehension strategy [Letters to Nature]. Nature, 304, 159-160. doi:10.1038/304159a0.

    Abstract

    Infants acquire whatever language is spoken in the environment into which they are born. The mental capability of the newborn child is not biased in any way towards the acquisition of one human language rather than another. Because psychologists who attempt to model the process of language comprehension are interested in the structure of the human mind, rather than in the properties of individual languages, strategies which they incorporate in their models are presumed to be universal, not language-specific. In other words, strategies of comprehension are presumed to be characteristic of the human language processing system, rather than, say, the French, English, or Igbo language processing systems. We report here, however, on a comprehension strategy which appears to be used by native speakers of French but not by native speakers of English.
  • Cutler, A. (1979). Contemporary reaction to Rudolf Meringer’s speech error research. Historiographia Linguistica, 6, 57-76.
  • Cutler, A. (1985). Cross-language psycholinguistics. Linguistics, 23, 659-667.
  • Cutler, A., Mister, E., Norris, D., & Sebastián-Gallés, N. (2004). La perception de la parole en espagnol: Un cas particulier? In L. Ferrand, & J. Grainger (Eds.), Psycholinguistique cognitive: Essais en l'honneur de Juan Segui (pp. 57-74). Brussels: De Boeck.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another. The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A., Howard, D., & Patterson, K. E. (1989). Misplaced stress on prosody: A reply to Black and Byng. Cognitive Neuropsychology, 6, 67-83.

    Abstract

    The recent claim by Black and Byng (1986) that lexical access in reading is subject to prosodic constraints is examined and found to be unsupported. The evidence from impaired reading which Black and Byng report is based on poorly controlled stimulus materials and is inadequately analysed and reported. An alternative explanation of their findings is proposed, and new data are reported for which this alternative explanation can account but their model cannot. Finally, their proposal is shown to be theoretically unmotivated and in conflict with evidence from normal reading.
  • Cutler, A., & Norris, D. (1979). Monitoring sentence comprehension. In W. E. Cooper, & E. C. T. Walker (Eds.), Sentence processing: Psycholinguistic studies presented to Merrill Garrett (pp. 113-134). Hillsdale: Erlbaum.
  • Cutler, A. (1983). Lexical complexity and sentence processing. In G. B. Flores d'Arcais, & R. J. Jarvella (Eds.), The process of language understanding (pp. 43-79). Chichester, Sussex: Wiley.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1989). Limits on bilingualism [Letters to Nature]. Nature, 340, 229-230. doi:10.1038/340229a0.

    Abstract

    SPEECH, in any language, is continuous; speakers provide few reliable cues to the boundaries of words, phrases, or other meaningful units. To understand speech, listeners must divide the continuous speech stream into portions that correspond to such units. This segmentation process is so basic to human language comprehension that psycholinguists long assumed that all speakers would do it in the same way. In previous research [1,2], however, we reported that segmentation routines can be language-specific: speakers of French process spoken words syllable by syllable, but speakers of English do not. French has relatively clear syllable boundaries and syllable-based timing patterns, whereas English has relatively unclear syllable boundaries and stress-based timing; thus syllabic segmentation would work more efficiently in the comprehension of French than in the comprehension of English. Our present study suggests that at this level of language processing, there are limits to bilingualism: a bilingual speaker has one and only one basic language.
  • Cutler, A., & Butterfield, S. (1989). Natural speech cues to word segmentation under difficult listening conditions. In J. Tubach, & J. Mariani (Eds.), Proceedings of Eurospeech 89: European Conference on Speech Communication and Technology: Vol. 2 (pp. 372-375). Edinburgh: CEP Consultants.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In three experiments, we examined how word boundaries are produced in deliberately clear speech. We found that speakers do indeed attempt to mark word boundaries; moreover, they differentiate between word boundaries in a way which suggests they are sensitive to listener needs. Application of heuristic segmentation strategies makes word boundaries before strong syllables easiest for listeners to perceive; but under difficult listening conditions speakers pay more attention to marking word boundaries before weak syllables, i.e. they mark those boundaries which are otherwise particularly hard to perceive.
  • Cutler, A., & Pearson, M. (1985). On the analysis of prosodic turn-taking cues. In C. Johns-Lewis (Ed.), Intonation in discourse (pp. 139-155). London: Croom Helm.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A. (1985). Performance measures of lexical complexity. In G. Hoppenbrouwers, P. A. Seuren, & A. Weijters (Eds.), Meaning and the lexicon (p. 75). Dordrecht: Foris.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A., & Ladd, D. R. (Eds.). (1983). Prosody: Models and measurements. Heidelberg: Springer.
  • Cutler, A. (1983). Semantics, syntax and sentence accent. In M. Van den Broecke, & A. Cohen (Eds.), Proceedings of the Tenth International Congress of Phonetic Sciences (pp. 85-91). Dordrecht: Foris.
  • Cutler, A. (1983). Speakers’ conceptions of the functions of prosody. In A. Cutler, & D. R. Ladd (Eds.), Prosody: Models and measurements (pp. 79-91). Heidelberg: Springer.
  • Cutler, A. (1989). Straw modules [Commentary/Massaro: Speech perception]. Behavioral and Brain Sciences, 12, 760-762.
  • Cutler, A. (1984). Stress and accent in language production and understanding. In D. Gibbon, & H. Richter (Eds.), Intonation, accent and rhythm: Studies in discourse phonology (pp. 77-90). Berlin: de Gruyter.
  • Cutler, A. (1989). The new Victorians. New Scientist, (1663), 66.
  • Cutler, A. (1988). The perfect speech error. In L. Hyman, & C. Li (Eds.), Language, speech and mind: Studies in honor of Victoria A. Fromkin (pp. 209-223). London: Croom Helm.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université de Aix-en-Provence.
  • Cutler, A., & Norris, D. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14, 113-121. doi:10.1037/0096-1523.14.1.113.

    Abstract

    A model of speech segmentation in a stress language is proposed, according to which the occurrence of a strong syllable triggers segmentation of the speech signal, whereas occurrence of a weak syllable does not trigger segmentation. We report experiments in which listeners detected words embedded in nonsense bisyllables more slowly when the bisyllable had two strong syllables than when it had a strong and a weak syllable; mint was detected more slowly in mintayve than in mintesh. According to our proposed model, this result is an effect of segmentation: When the second syllable is strong, it is segmented from the first syllable, and successful detection of the embedded word therefore requires assembly of speech material across a segmentation position. Speech recognition models involving phonemic or syllabic recoding, or based on strictly left-to-right processes, do not predict this result. It is argued that segmentation at strong syllables in continuous speech recognition serves the purpose of detecting the most efficient locations at which to initiate lexical access. (C) 1988 by the American Psychological Association
  • Cutler, A., Hawkins, J. A., & Gilligan, G. (1985). The suffixing preference: A processing explanation. Linguistics, 23, 723-758.
  • Cutler, A., & Clifton, Jr., C. (1984). The use of prosodic information in word recognition. In H. Bouma, & D. G. Bouwhuis (Eds.), Attention and performance X: Control of language processes (pp. 183-196). London: Erlbaum.

    Abstract

    In languages with variable stress placement, lexical stress patterns can convey information about word identity. The experiments reported here address the question of whether lexical stress information can be used in word recognition. The results allow the following conclusions: 1. Prior information as to the number of syllables and lexical stress patterns of words and nonwords does not facilitate lexical decision responses (Experiment 1). 2. The strong correspondences between grammatical category membership and stress pattern in bisyllabic English words (strong-weak stress being associated primarily with nouns, weak-strong with verbs) are not exploited in the recognition of isolated words (Experiment 2). 3. When a change in lexical stress also involves a change in vowel quality, i.e., a segmental as well as a suprasegmental alteration, effects on word recognition are greater than when no segmental correlates of suprasegmental changes are involved (Experiments 2 and 3). 4. Despite the above finding, when all other factors are controlled, lexical stress information per se can indeed be shown to play a part in the word-recognition process (Experiment 3).
  • Dahan, D., & Tanenhaus, M. K. (2004). Continuous mapping from sound to meaning in spoken-language comprehension: Immediate effects of verb-based thematic constraints. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 498-513. doi:10.1037/0278-7393.30.2.498.

    Abstract

    The authors used 2 “visual-world” eye-tracking experiments to examine lexical access using Dutch constructions in which the verb did or did not place semantic constraints on its subsequent subject noun phrase. In Experiment 1, fixations to the picture of a cohort competitor (overlapping with the onset of the referent’s name, the subject) did not differ from fixations to a distractor in the constraining-verb condition. In Experiment 2, cross-splicing introduced phonetic information that temporarily biased the input toward the cohort competitor. Fixations to the cohort competitor temporarily increased in both the neutral and constraining conditions. These results favor models in which mapping from the input onto meaning is continuous over models in which contextual effects follow access of an initial form-based competitor set.
  • Dai, B., McQueen, J. M., Terporten, R., Hagoort, P., & Kösem, A. (2022). Distracting linguistic information impairs neural tracking of attended speech. Current Research in Neurobiology, 3: 100043. doi:10.1016/j.crneur.2022.100043.

    Abstract

    Listening to speech is difficult in noisy environments, and is even harder when the interfering noise consists of intelligible speech as compared to unintelligible sounds. This suggests that the competing linguistic information interferes with the neural processing of target speech. Interference could either arise from a degradation of the neural representation of the target speech, or from increased representation of distracting speech that enters in competition with the target speech. We tested these alternative hypotheses using magnetoencephalography (MEG) while participants listened to clear target speech in the presence of distracting noise-vocoded speech. Crucially, the distractors were initially unintelligible but became more intelligible after a short training session. Results showed that the comprehension of the target speech was poorer after training than before training. The neural tracking of target speech in the delta range (1–4 Hz) was reduced in strength in the presence of a more intelligible distractor. In contrast, the neural tracking of distracting signals was not significantly modulated by intelligibility. These results suggest that the presence of distracting speech signals degrades the linguistic representation of target speech carried by delta oscillations.
  • Dalli, A., Tablan, V., Bontcheva, K., Wilks, Y., Broeder, D., Brugman, H., & Wittenburg, P. (2004). Web services architecture for language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC2004) (pp. 365-368). Paris: ELRA - European Language Resources Association.
  • Damatac, C. G., Soheili-Nezhad, S., Blazquez Freches, G., Zwiers, M. P., De Bruijn, S., Ikde, S., Portengen, C. M., Abelmann, A. C., Dammers, J. T., Van Rooij, D., Akkermans, S. E., Naaijen, J., Franke, B., Buitelaar, J. K., Beckmann, C. F., & Sprooten, E. (2022). Longitudinal changes of ADHD symptoms in association with white matter microstructure: A tract-specific fixel-based analysis. NeuroImage: Clinical, 35: 103057. doi:10.1016/j.nicl.2022.103057.

    Abstract

    Background

    Variation in the longitudinal course of childhood attention deficit/hyperactivity disorder (ADHD) coincides with neurodevelopmental maturation of brain structure and function. Prior work has attempted to determine how alterations in white matter (WM) relate to changes in symptom severity, but much of that work has been done in smaller cross-sectional samples using voxel-based analyses. Using standard diffusion-weighted imaging (DWI) methods, we previously showed WM alterations were associated with ADHD symptom remission over time in a longitudinal sample of probands, siblings, and unaffected individuals. Here, we extend this work by further assessing the nature of these changes in WM microstructure by including an additional follow-up measurement (aged 18–34 years), and using the more physiologically informative fixel-based analysis (FBA).
    Methods

    Data were obtained from 139 participants over 3 clinical and 2 follow-up DWI waves, and analyzed using FBA in regions-of-interest based on prior findings. We replicated previously reported significant models and extended them by adding another time-point, testing whether changes in combined ADHD and hyperactivity-impulsivity (HI) continuous symptom scores are associated with fixel metrics at follow-up.
    Results

    Clinical improvement in HI symptoms over time was associated with more fiber density at follow-up in the left corticospinal tract (lCST) (tmax = 1.092, standardized effect [SE] = 0.044, pFWE = 0.016). Improvement in combined ADHD symptoms over time was associated with more fiber cross-section at follow-up in the lCST (tmax = 3.775, SE = 0.051, pFWE = 0.019).
    Conclusions

    Aberrant white matter development involves both lCST micro- and macrostructural alterations, and its path may be moderated by preceding symptom trajectory.

    Additional information

    supplementary material
  • Den Hoed, J. (2022). Disentangling the molecular landscape of genetic variation of neurodevelopmental and speech disorders. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Den Os, E., & Boves, L. (2004). Natural multimodal interaction for design applications. In P. Cunningham (Ed.), Adoption and the knowledge economy (pp. 1403-1410). Amsterdam: IOS Press.
  • Deutsch, W., & Frauenfelder, U. (1985). Max-Planck-Institute for Psycholinguistics: Annual Report Nr. 6 1985. Nijmegen: MPI for Psycholinguistics.
  • Dieuleveut, A., Van Dooren, A., Cournane, A., & Hacquard, V. (2022). Finding the force: How children discern possibility and necessity modals. Natural Language Semantics, 30(3), 269-310. doi:10.1007/s11050-022-09196-4.

    Abstract

    This paper investigates when and how children figure out the force of modals: that possibility modals (e.g., can/might) express possibility, and necessity modals (e.g., must/have to) express necessity. Modals raise a classic subset problem: given that necessity entails possibility, what prevents learners from hypothesizing possibility meanings for necessity modals? Three solutions to such subset problems can be found in the literature: the first is for learners to rely on downward-entailing (DE) environments (Gualmini and Schwarz in J. Semant. 26(2):185–215, 2009); the second is a bias for strong (here, necessity) meanings; the third is for learners to rely on pragmatic cues stemming from the conversational context (Dieuleveut et al. in Proceedings of the 2019 Amsterdam colloquium, pp. 111–122, 2019a; Rasin and Aravind in Nat. Lang. Semant. 29:339–375, 2020). This paper assesses the viability of each of these solutions by examining the modals used in speech to and by 2-year-old children, through a combination of corpus studies and experiments testing the guessability of modal force based on their context of use. Our results suggest that, given the way modals are used in speech to children, the first solution is not viable and the second is unnecessary. Instead, we argue that the conversational context in which modals occur is highly informative as to their force and sufficient, in principle, to sidestep the subset problem. Our child results further suggest an early mastery of possibility—but not necessity—modals and show no evidence for a necessity bias.
  • Dijkstra, T., & Kempen, G. (1984). Taal in uitvoering: Inleiding tot de psycholinguistiek. Groningen: Wolters-Noordhoff.
  • Dijkstra, T., Peeters, D., Hieselaar, W., & van Geffen, A. (2022). Orthographic and semantic priming effects in neighbour cognates: Experiments and simulations. Bilingualism: Language and Cognition, 26(2), 371-383. doi:10.1017/S1366728922000591.

    Abstract

    To investigate how orthography and semantics interact during bilingual visual word recognition, Dutch–English bilinguals made lexical decisions in two masked priming experiments. Dutch primes and English targets were presented that were either neighbour cognates (boek – BOOK), noncognate translations (kooi – CAGE), orthographically related neighbours (neus – NEWS), or unrelated words (huid – COAT). Prime durations of 50 ms (Experiment 1) and 83 ms (Experiment 2) led to similar result patterns. Both experiments reported a large cognate facilitation effect, a smaller facilitatory noncognate translation effect, and the absence of inhibitory orthographic neighbour effects. These results indicate that cognate facilitation is in large part due to orthographic-semantic resonance. Priming results for each condition were simulated well (all r's > .50) by Multilink+, a recent computational model for word retrieval. Limitations to the role of lateral inhibition in bilingual word recognition are discussed.
  • Dima, D., Modabbernia, A., Papachristou, E., Doucet, G. E., Agartz, I., Aghajani, M., Akudjedu, T. N., Albajes‐Eizagirre, A., Alnæs, D., Alpert, K. I., Andersson, M., Andreasen, N. C., Andreassen, O. A., Asherson, P., Banaschewski, T., Bargallo, N., Baumeister, S., Baur‐Streubel, R., Bertolino, A., Bonvino, A., Boomsma, D. I., Borgwardt, S., Bourque, J., Brandeis, D., Breier, A., Brodaty, H., Brouwer, R. M., Buitelaar, J. K., Busatto, G. F., Buckner, R. L., Calhoun, V., Canales‐Rodríguez, E. J., Cannon, D. M., Caseras, X., Castellanos, F. X., Cervenka, S., Chaim‐Avancini, T. M., Ching, C. R. K., Chubar, V., Clark, V. P., Conrod, P., Conzelmann, A., Crespo‐Facorro, B., Crivello, F., Crone, E. A., Dale, A. M., Davey, C., De Geus, E. J. C., De Haan, L., De Zubicaray, G. I., Den Braber, A., Dickie, E. W., Di Giorgio, A., Doan, N. T., Dørum, E. S., Ehrlich, S., Erk, S., Espeseth, T., Fatouros‐Bergman, H., Fisher, S. E., Fouche, J., Franke, B., Frodl, T., Fuentes‐Claramonte, P., Glahn, D. C., Gotlib, I. H., Grabe, H., Grimm, O., Groenewold, N. A., Grotegerd, D., Gruber, O., Gruner, P., Gur, R. E., Gur, R. C., Harrison, B. J., Hartman, C. A., Hatton, S. N., Heinz, A., Heslenfeld, D. J., Hibar, D. P., Hickie, I. B., Ho, B., Hoekstra, P. J., Hohmann, S., Holmes, A. J., Hoogman, M., Hosten, N., Howells, F. M., Hulshoff Pol, H. E., Huyser, C., Jahanshad, N., James, A., Jernigan, T. L., Jiang, J., Jönsson, E. G., Joska, J. A., Kahn, R., Kalnin, A., Kanai, R., Klein, M., Klyushnik, T. P., Koenders, L., Koops, S., Krämer, B., Kuntsi, J., Lagopoulos, J., Lázaro, L., Lebedeva, I., Lee, W. H., Lesch, K., Lochner, C., Machielsen, M. W. 
J., Maingault, S., Martin, N. G., Martínez‐Zalacaín, I., Mataix‐Cols, D., Mazoyer, B., McDonald, C., McDonald, B. C., McIntosh, A. M., McMahon, K. L., McPhilemy, G., Menchón, J. M., Medland, S. E., Meyer‐Lindenberg, A., Naaijen, J., Najt, P., Nakao, T., Nordvik, J. E., Nyberg, L., Oosterlaan, J., Ortiz‐García de la Foz, V., Paloyelis, Y., Pauli, P., Pergola, G., Pomarol‐Clotet, E., Portella, M. J., Potkin, S. G., Radua, J., Reif, A., Rinker, D. A., Roffman, J. L., Rosa, P. G. P., Sacchet, M. D., Sachdev, P. S., Salvador, R., Sánchez‐Juan, P., Sarró, S., Satterthwaite, T. D., Saykin, A. J., Serpa, M. H., Schmaal, L., Schnell, K., Schumann, G., Sim, K., Smoller, J. W., Sommer, I., Soriano‐Mas, C., Stein, D. J., Strike, L. T., Swagerman, S. C., Tamnes, C. K., Temmingh, H. S., Thomopoulos, S. I., Tomyshev, A. S., Tordesillas‐Gutiérrez, D., Trollor, J. N., Turner, J. A., Uhlmann, A., Van den Heuvel, O. A., Van den Meer, D., Van der Wee, N. J. A., Van Haren, N. E. M., Van't Ent, D., Van Erp, T. G. M., Veer, I. M., Veltman, D. J., Voineskos, A., Völzke, H., Walter, H., Walton, E., Wang, L., Wang, Y., Wassink, T. H., Weber, B., Wen, W., West, J. D., Westlye, L. T., Whalley, H., Wierenga, L. M., Williams, S. C. R., Wittfeld, K., Wolf, D. H., Worker, A., Wright, M. J., Yang, K., Yoncheva, Y., Zanetti, M. V., Ziegler, G. C., Thompson, P. M., Frangou, S., & Karolinska Schizophrenia Project (KaSP) (2022). Subcortical volumes across the lifespan: Data from 18,605 healthy individuals aged 3–90 years. Human Brain Mapping, 43(1), 452-469. doi:10.1002/hbm.25320.

    Abstract

    Age has a major effect on brain volume. However, the normative studies available are constrained by small sample sizes, restricted age coverage and significant methodological variability. These limitations introduce inconsistencies and may obscure or distort the lifespan trajectories of brain morphometry. In response, we capitalized on the resources of the Enhancing Neuroimaging Genetics through Meta‐Analysis (ENIGMA) Consortium to examine age‐related trajectories inferred from cross‐sectional measures of the ventricles, the basal ganglia (caudate, putamen, pallidum, and nucleus accumbens), the thalamus, hippocampus and amygdala using magnetic resonance imaging data obtained from 18,605 individuals aged 3–90 years. All subcortical structure volumes were at their maximum value early in life. The volume of the basal ganglia showed a monotonic negative association with age thereafter; there was no significant association between age and the volumes of the thalamus, amygdala and the hippocampus (with some degree of decline in thalamus) until the sixth decade of life after which they also showed a steep negative association with age. The lateral ventricles showed continuous enlargement throughout the lifespan. Age was positively associated with inter‐individual variability in the hippocampus and amygdala and the lateral ventricles. These results were robust to potential confounders and could be used to examine the functional significance of deviations from typical age‐related morphometric patterns.
  • Dimroth, C. (2004). Fokuspartikeln und Informationsgliederung im Deutschen. Tübingen: Stauffenburg.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dingemanse, M., Liesenfeld, A., & Woensdregt, M. (2022). Convergent cultural evolution of continuers (mhmm). In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 160-167). Nijmegen: Joint Conference on Language Evolution (JCoLE). doi:10.31234/osf.io/65c79.

    Abstract

    Continuers —words like mm, mmhm, uhum and the like— are among the most frequent types of responses in conversation. They play a key role in joint action coordination by showing positive evidence of understanding and scaffolding narrative delivery. Here we investigate the hypothesis that their functional importance along with their conversational ecology places selective pressures on their form and may lead to cross-linguistic similarities through convergent cultural evolution. We compare continuer tokens in linguistically diverse conversational corpora and find languages make available highly similar forms. We then approach the causal mechanism of convergent cultural evolution using exemplar modelling, simulating the process by which a combination of effort minimization and functional specialization may push continuers to a particular region of phonological possibility space. By combining comparative linguistics and computational modelling we shed new light on the question of how language structure is shaped by and for social interaction.
  • Dingemanse, M., & Liesenfeld, A. (2022). From text to talk: Harnessing conversational corpora for humane and diversity-aware language technology. In S. Muresan, P. Nakov, & A. Villavicencio (Eds.), Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022) (pp. 5614 -5633). Dublin, Ireland: Association for Computational Linguistics.

    Abstract

    Informal social interaction is the primordial home of human language. Linguistically diverse conversational corpora are an important and largely untapped resource for computational linguistics and language technology. Through the efforts of a worldwide language documentation movement, such corpora are increasingly becoming available. We show how interactional data from 63 languages (26 families) harbours insights about turn-taking, timing, sequential structure and social action, with implications for language technology, natural language understanding, and the design of conversational interfaces. Harnessing linguistically diverse conversational corpora will provide the empirical foundations for flexible, localizable, humane language technologies of the future.
  • Dona, L., & Schouwstra, M. (2022). The Role of Structural Priming, Semantics and Population Structure in Word Order Conventionalization: A Computational Model. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 171-173). Nijmegen: Joint Conference on Language Evolution (JCoLE).
  • Doronina, L., Hughes, G. M., Moreno-Santillan, D., Lawless, C., Lonergan, T., Ryan, L., Jebb, D., Kirilenko, B. M., Korstian, J. M., Dávalos, L. M., Vernes, S. C., Myers, E. W., Teeling, E. C., Hiller, M., Jermiin, L. S., Schmitz, J., Springer, M. S., & Ray, D. A. (2022). Contradictory phylogenetic signals in the Laurasiatheria anomaly zone. Genes, 13(5): 766. doi:10.3390/genes13050766.

    Abstract

    Relationships among laurasiatherian clades represent one of the most highly disputed topics in mammalian phylogeny. In this study, we attempt to disentangle laurasiatherian interordinal relationships using two independent genome-level approaches: (1) quantifying retrotransposon presence/absence patterns, and (2) comparisons of exon datasets at the levels of nucleotides and amino acids. The two approaches revealed contradictory phylogenetic signals, possibly due to a high level of ancestral incomplete lineage sorting. The positions of Eulipotyphla and Chiroptera as the first and second earliest divergences were consistent across the approaches. However, the phylogenetic relationships of Perissodactyla, Cetartiodactyla, and Ferae, were contradictory. While retrotransposon insertion analyses suggest a clade with Cetartiodactyla and Ferae, the exon dataset favoured Cetartiodactyla and Perissodactyla. Future analyses of hitherto unsampled laurasiatherian lineages and synergistic analyses of retrotransposon insertions, exon and conserved intron/intergenic sequences might unravel the conflicting patterns of relationships in this major mammalian clade.
  • Doumas, L. A. A., Puebla, G., Martin, A. E., & Hummel, J. E. (2022). A theory of relation learning and cross-domain generalization. Psychological Review, 129(5), 999-1041. doi:10.1037/rev0000346.

    Abstract

    People readily generalize knowledge to novel domains and stimuli. We present a theory, instantiated in a computational model, based on the idea that cross-domain generalization in humans is a case of analogical inference over structured (i.e., symbolic) relational representations. The model is an extension of the Learning and Inference with Schemas and Analogy (LISA; Hummel & Holyoak, 1997, 2003) and Discovery of Relations by Analogy (DORA; Doumas et al., 2008) models of relational inference and learning. The resulting model learns both the content and format (i.e., structure) of relational representations from nonrelational inputs without supervision; when augmented with the capacity for reinforcement learning, it leverages these representations to learn about individual domains, and then generalizes to new domains on the first exposure (i.e., zero-shot learning) via analogical inference. We demonstrate the capacity of the model to learn structured relational representations from a variety of simple visual stimuli, and to perform cross-domain generalization between video games (Breakout and Pong) and between several psychological tasks. We demonstrate that the model’s trajectory closely mirrors the trajectory of children as they learn about relations, accounting for phenomena from the literature on the development of children’s reasoning and analogy making. The model’s ability to generalize between domains demonstrates the flexibility afforded by representing domains in terms of their underlying relational structure, rather than simply in terms of the statistical relations between their inputs and outputs.
  • Doust, C., Fontanillas, P., Eising, E., Gordon, S. D., Wang, Z., Alagöz, G., Molz, B., 23andMe Research Team, Quantitative Trait Working Group of the GenLang Consortium, St Pourcain, B., Francks, C., Marioni, R. E., Zhao, J., Paracchini, S., Talcott, J. B., Monaco, A. P., Stein, J. F., Gruen, J. R., Olson, R. K., Willcutt, E. G., DeFries, J. C., Pennington, B. F., Smith, S. D., Wright, M. J., Martin, N. G., Auton, A., Bates, T. C., Fisher, S. E., & Luciano, M. (2022). Discovery of 42 genome-wide significant loci associated with dyslexia. Nature Genetics. doi:10.1038/s41588-022-01192-y.

    Abstract

    Reading and writing are crucial life skills but roughly one in ten children are affected by dyslexia, which can persist into adulthood. Family studies of dyslexia suggest heritability up to 70%, yet few convincing genetic markers have been found. Here we performed a genome-wide association study of 51,800 adults self-reporting a dyslexia diagnosis and 1,087,070 controls and identified 42 independent genome-wide significant loci: 15 in genes linked to cognitive ability/educational attainment, and 27 new and potentially more specific to dyslexia. We validated 23 loci (13 new) in independent cohorts of Chinese and European ancestry. Genetic etiology of dyslexia was similar between sexes, and genetic covariance with many traits was found, including ambidexterity, but not neuroanatomical measures of language-related circuitry. Dyslexia polygenic scores explained up to 6% of variance in reading traits, and might in future contribute to earlier identification and remediation of dyslexia.
  • Drijvers, L., & Holler, J. (2022). Face-to-face spatial orientation fine-tunes the brain for neurocognitive processing in conversation. iScience, 25(11): 105413. doi:10.1016/j.isci.2022.105413.

    Abstract

    We here demonstrate that face-to-face spatial orientation induces a special ‘social mode’ for neurocognitive processing during conversation, even in the absence of visibility. Participants conversed face-to-face, face-to-face but visually occluded, and back-to-back to tease apart effects caused by seeing visual communicative signals and by spatial orientation. Using dual-EEG, we found that 1) listeners’ brains engaged more strongly while conversing in face-to-face than back-to-back, irrespective of the visibility of communicative signals, 2) listeners attended to speech more strongly in a back-to-back compared to a face-to-face spatial orientation without visibility; visual signals further reduced the attention needed; 3) the brains of interlocutors were more in sync in a face-to-face compared to a back-to-back spatial orientation, even when they could not see each other; visual signals further enhanced this pattern. Communicating in face-to-face spatial orientation is thus sufficient to induce a special ‘social mode’ which fine-tunes the brain for neurocognitive processing in conversation.
  • Drolet, M., & Kempen, G. (1985). IPG: A cognitive approach to sentence generation. CCAI: The Journal for the Integrated Study of Artificial Intelligence, Cognitive Science and Applied Epistemology, 2, 37-61.
