Publications

  • Burenhult, N. (2004). Landscape terms and toponyms in Jahai: A field report. Lund Working Papers, 51, 17-29.
  • Burenhult, N., & Levinson, S. C. (2008). Language and landscape: A cross-linguistic perspective. Language Sciences, 30(2/3), 135-150. doi:10.1016/j.langsci.2006.12.028.

    Abstract

    This special issue is the outcome of collaborative work on the relationship between language and landscape, carried out in the Language and Cognition Group at the Max Planck Institute for Psycholinguistics. The contributions explore the linguistic categories of landscape terms and place names in nine genetically, typologically and geographically diverse languages, drawing on data from first-hand fieldwork. The present introductory article lays out the reasons why the domain of landscape is of central interest to the language sciences and beyond, and it outlines some of the major patterns that emerge from the cross-linguistic comparison which the papers invite. The data point to considerable variation within and across languages in how systems of landscape terms and place names are ontologised. This has important implications for practical applications from international law to modern navigation systems.
  • Burenhult, N. (Ed.). (2008). Language and landscape: Geographical ontology in cross-linguistic perspective [Special Issue]. Language Sciences, 30(2/3).

    Abstract

    This special issue is the outcome of collaborative work on the relationship between language and landscape, carried out in the Language and Cognition Group at the Max Planck Institute for Psycholinguistics. The contributions explore the linguistic categories of landscape terms and place names in nine genetically, typologically and geographically diverse languages, drawing on data from first-hand fieldwork. The present introductory article lays out the reasons why the domain of landscape is of central interest to the language sciences and beyond, and it outlines some of the major patterns that emerge from the cross-linguistic comparison which the papers invite. The data point to considerable variation within and across languages in how systems of landscape terms and place names are ontologised. This has important implications for practical applications from international law to modern navigation systems.
  • Burgers, N., Ettema, D. F., Hooimeijer, P., & Barendse, M. T. (2021). The effects of neighbours on sport club membership. European Journal for Sport and Society, 18(4), 310-325. doi:10.1080/16138171.2020.1840710.

    Abstract

    Neighbours have been found to influence each other’s behaviour (contagion effect). However, little is known about this influence on sport club membership, even though interest in the social role of sport clubs has been rising. Sport clubs could bring people from different backgrounds together, and a mixed composition is a key element in this social role. Individual characteristics are strong predictors of sport club membership: Western, highly educated men are more likely to be members, in contrast to people with a non-Western migration background. The neighbourhood is a more fixed meeting place, which provides unique opportunities for people from different backgrounds to interact. This study aims to gain more insight into the influence of neighbours on sport club membership. It looks especially at the composition of neighbours’ migration background, since such groups tend to be more or less likely to be members and therefore could encourage or inhibit each other. A population database covering registry data of all Dutch inhabitants was merged with data from 11 sport unions. The results show a cross-level effect of neighbours on sport club membership. We find a contagion effect of neighbours’ migration background; having a larger proportion of neighbours with a migration background from a non-Western country reduces the odds, as expected. However, this contagion effect was not found for people with a Moroccan or Turkish background.
  • Burkhardt, P., Avrutin, S., Piñango, M. M., & Ruigendijk, E. (2008). Slower-than-normal syntactic processing in agrammatic Broca's aphasia: Evidence from Dutch. Journal of Neurolinguistics, 21(2), 120-137. doi:10.1016/j.jneuroling.2006.10.004.

    Abstract

    Studies of agrammatic Broca's aphasia reveal a diverging pattern of performance in the comprehension of reflexive elements: offline, performance seems unimpaired, whereas online—and in contrast to both matching controls and Wernicke's patients—no antecedent reactivation is observed at the reflexive. Here we propose that this difference characterizes the agrammatic comprehension deficit as a result of slower-than-normal syntactic structure formation. To test this characterization, the comprehension of three Dutch agrammatic patients and matching control participants was investigated utilizing the cross-modal lexical decision (CMLD) interference task. Two types of reflexive-antecedent dependencies were tested, which have already been shown to exert distinct processing demands on the comprehension system as a function of the level at which the dependency was formed. Our hypothesis predicts that if the agrammatic system has a processing limitation such that syntactic structure is built in a protracted manner, this limitation will be reflected in delayed interpretation. Confirming previous findings, the Dutch patients show an effect of distinct processing demands for the two types of reflexive-antecedent dependencies but with a temporal delay. We argue that this delayed syntactic structure formation is the result of limited processing capacity that specifically affects the syntactic system.
  • Burkhardt, P. (2008). Two types of definites: Evidence for presupposition cost. In A. Grønn (Ed.), Proceedings of SuB 12 (pp. 66-80). Oslo: ILOS.

    Abstract

    This paper investigates the notion of definiteness from a psycholinguistic perspective and addresses Löbner’s (1987) distinction between semantic and pragmatic definites. To this end inherently definite noun phrases, proper names, and indexicals are investigated as instances of (relatively) rigid designators (i.e. semantic definites) and contrasted with definite noun phrases and third person pronouns that are contingent on context to unambiguously determine their reference (i.e. pragmatic definites). Electrophysiological data provide support for this distinction and further substantiate the claim that proper names differ from definite descriptions. These findings suggest that certain expressions carry a feature of inherent definiteness, which facilitates their discourse integration (i.e. semantic definites), while others rely on the establishment of a relation with prior information, which results in processing cost.
  • Burkhardt, P. (2008). What inferences can tell us about the given-new distinction. In Proceedings of the 18th International Congress of Linguists (pp. 219-220).
  • Burkhardt, P. (2008). Dependency precedes independence: Online evidence from discourse processing. In A. Benz, & P. Kühnlein (Eds.), Constraints in discourse (pp. 141-158). Amsterdam: Benjamins.

    Abstract

    This paper investigates the integration of definite determiner phrases (DPs) as a function of their contextual salience, which is reflected in the degree of dependency on prior information. DPs depend on previously established discourse referents or introduce a new, independent discourse referent. This paper presents a formal model that explains how discourse referents are represented in the language system and what kind of mechanisms are implemented during DP interpretation. Experimental data from an event-related potential study are discussed that demonstrate how definite DPs are integrated in real-time processing. The data provide evidence for two distinct mechanisms – Specify R and Establish Independent File Card – and substantiate a model that includes various processes and constraints at the level of discourse representation.
  • Byers-Heinlein, K., Bergmann, C., & Savalei, V. (2022). Six solutions for more reliable infant research. Infant and Child Development, 31(5): e2296. doi:10.1002/icd.2296.

    Abstract

    Infant research is often underpowered, undermining the robustness and replicability of our findings. Improving the reliability of infant studies offers a solution for increasing statistical power independent of sample size. Here, we discuss two senses of the term reliability in the context of infant research: reliable (large) effects and reliable measures. We examine the circumstances under which effects are strongest and measures are most reliable and use synthetic datasets to illustrate the relationship between effect size, measurement reliability, and statistical power. We then present six concrete solutions for more reliable infant research: (a) routinely estimating and reporting the effect size and measurement reliability of infant tasks, (b) selecting the best measurement tool, (c) developing better infant paradigms, (d) collecting more data points per infant, (e) excluding unreliable data from the analysis, and (f) conducting more sophisticated data analyses. Deeper consideration of measurement in infant research will improve our ability to study infant development.
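
    As an illustration of the relationship the authors demonstrate with synthetic datasets, the short simulation below (a sketch, not code from the paper) shows how lower measurement reliability attenuates the observed effect size and hence statistical power; the sample size, true effect size, and reliability values are assumed purely for the example.

```python
# Illustrative sketch only: how measurement reliability attenuates observed effects
# and statistical power. All numeric values are assumptions, not values from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_infants, true_effect, n_sims = 40, 0.5, 2000  # assumed sample size and true effect

def power_at_reliability(reliability):
    # Split total variance so the share due to stable individual differences
    # equals the target reliability; the remainder is measurement noise.
    noise_sd = np.sqrt((1 - reliability) / reliability)  # true-score SD fixed at 1
    hits = 0
    for _ in range(n_sims):
        true_scores = rng.normal(true_effect, 1.0, n_infants)
        observed = true_scores + rng.normal(0.0, noise_sd, n_infants)
        if stats.ttest_1samp(observed, 0.0).pvalue < .05:
            hits += 1
    return hits / n_sims

for rel in (0.3, 0.6, 0.9):
    print(f"reliability {rel:.1f}: estimated power {power_at_reliability(rel):.2f}")
```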
  • Byers-Heinlein, K., Tsui, A. S. M., Bergmann, C., Black, A. K., Brown, A., Carbajal, M. J., Durrant, S., Fennell, C. T., Fiévet, A.-C., Frank, M. C., Gampe, A., Gervain, J., Gonzalez-Gomez, N., Hamlin, J. K., Havron, N., Hernik, M., Kerr, S., Killam, H., Klassen, K., Kosie, J., Kovács, Á. M., Lew-Williams, C., Liu, L., Mani, N., Marino, C., Mastroberardino, M., Mateu, V., Noble, C., Orena, A. J., Polka, L., Potter, C. E., Schreiner, M., Singh, L., Soderstrom, M., Sundara, M., Waddell, C., Werker, J. F., & Wermelinger, S. (2021). A multilab study of bilingual infants: Exploring the preference for infant-directed speech. Advances in Methods and Practices in Psychological Science, 4(1), 1-30. doi:10.1177/2515245920974622.

    Abstract

    From the earliest months of life, infants prefer listening to and learn better from infant-directed speech (IDS) than adult-directed speech (ADS). Yet, IDS differs within communities, across languages, and across cultures, both in form and in prevalence. This large-scale, multi-site study used the diversity of bilingual infant experiences to explore the impact of different types of linguistic experience on infants’ IDS preference. As part of the multi-lab ManyBabies project, we compared lab-matched samples of 333 bilingual and 385 monolingual infants’ preference for North-American English IDS (cf. ManyBabies Consortium, in press (MB1)), tested in 17 labs in 7 countries. Those infants were tested in two age groups: 6–9 months (the younger sample) and 12–15 months (the older sample). We found that bilingual and monolingual infants both preferred IDS to ADS, and did not differ in terms of the overall magnitude of this preference. However, amongst bilingual infants who were acquiring North-American English (NAE) as a native language, greater exposure to NAE was associated with a stronger IDS preference, extending the previous finding from MB1 that monolinguals learning NAE as a native language showed a stronger preference than infants unexposed to NAE. Together, our findings indicate that IDS preference likely makes a similar contribution to monolingual and bilingual development, and that infants are exquisitely sensitive to the nature and frequency of different types of language input in their early environments.
  • Byun, K.-S., Roberts, S. G., De Vos, C., Zeshan, U., & Levinson, S. C. (2022). Distinguishing selection pressures in an evolving communication system: Evidence from colour naming in 'cross signing'. Frontiers in Communication, 7: 1024340. doi:10.3389/fcomm.2022.1024340.

    Abstract

    Cross-signing—the emergence of an interlanguage between users of different sign languages—offers a rare chance to examine the evolution of a natural communication system in real time. To provide an insight into this process, we analyse an annotated video corpus of 340 minutes of interaction between signers of different language backgrounds on their first meeting and after living with each other for several weeks. We focus on the evolution of shared color terms and examine the role of different selectional pressures, including frequency, content, coordination and interactional context. We show that attentional factors in interaction play a crucial role. This suggests that understanding meta-communication is critical for explaining the cultural evolution of linguistic systems.
  • Cambier, N., Miletitch, R., Burraco, A. B., & Raviv, L. (2022). Prosociality in swarm robotics: A model to study self-domestication and language evolution. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 98-100). Nijmegen: Joint Conference on Language Evolution (JCoLE).
  • Cao, Y., Oostenveld, R., Alday, P. M., & Piai, V. (2022). Are alpha and beta oscillations spatially dissociated over the cortex in context‐driven spoken‐word production? Psychophysiology, 59(6): e13999. doi:10.1111/psyp.13999.

    Abstract

    Decreases in oscillatory alpha- and beta-band power have been consistently found in spoken-word production. These have been linked to both motor preparation and conceptual-lexical retrieval processes. However, the observed power decreases have a broad frequency range that spans two “classic” (sensorimotor) bands: alpha and beta. It remains unclear whether alpha- and beta-band power decreases contribute independently when a spoken word is planned. Using a re-analysis of existing magnetoencephalography data, we probed whether the effects in alpha and beta bands are spatially distinct. Participants read a sentence that was either constraining or non-constraining toward the final word, which was presented as a picture. In separate blocks participants had to name the picture or score its predictability via button press. Irregular-resampling auto-spectral analysis (IRASA) was used to isolate the oscillatory activity in the alpha and beta bands from the background 1-over-f spectrum. The sources of alpha- and beta-band oscillations were localized based on the participants’ individualized peak frequencies. For both tasks, alpha- and beta-power decreases overlapped in left posterior temporal and inferior parietal cortex, regions that have previously been associated with conceptual and lexical processes. The spatial distributions of the alpha and beta power effects were spatially similar in these regions to the extent we could assess it. By contrast, for left frontal regions, the spatial distributions differed between alpha and beta effects. Our results suggest that for conceptual-lexical retrieval, alpha and beta oscillations do not dissociate spatially and, thus, are distinct from the classical sensorimotor alpha and beta oscillations.
  • Caramazza, A., Miozzo, M., Costa, A., Schiller, N. O., & Alario, F.-X. (2003). Étude comparée de la production des déterminants dans différentes langues. In E. Dupoux (Ed.), Les Langages du cerveau: Textes en l'honneur de Jacques Mehler (pp. 213-229). Paris: Odile Jacob.
  • Carlsson, K., Petersson, K. M., Lundqvist, D., Karlsson, A., Ingvar, M., & Öhman, A. (2004). Fear and the amygdala: manipulation of awareness generates differential cerebral responses to phobic and fear-relevant (but nonfeared) stimuli. Emotion, 4(4), 340-353. doi:10.1037/1528-3542.4.4.340.

    Abstract

    Rapid response to danger holds an evolutionary advantage. In this positron emission tomography study, phobics were exposed to masked visual stimuli with timings that either allowed awareness or not of either phobic, fear-relevant (e.g., spiders to snake phobics), or neutral images. When the timing did not permit awareness, the amygdala responded to both phobic and fear-relevant stimuli. With time for more elaborate processing, phobic stimuli resulted in an addition of an affective processing network to the amygdala activity, whereas no activity was found in response to fear-relevant stimuli. Also, right prefrontal areas appeared deactivated, comparing aware phobic and fear-relevant conditions. Thus, a shift from top-down control to an affectively driven system optimized for speed was observed in phobic relative to fear-relevant aware processing.
  • Carota, F., Schoffelen, J.-M., Oostenveld, R., & Indefrey, P. (2022). The time course of language production as revealed by pattern classification of MEG sensor data. The Journal of Neuroscience, 42(29), 5745-5754. doi:10.1523/JNEUROSCI.1923-21.2022.

    Abstract

    Language production involves a complex set of computations, from conceptualization to articulation, which are thought to engage cascading neural events in the language network. However, recent neuromagnetic evidence suggests simultaneous meaning-to-speech mapping in picture naming tasks, as indexed by early parallel activation of frontotemporal regions to lexical semantic, phonological, and articulatory information. Here we investigate the time course of word production, asking to what extent such “earliness” is a distinctive property of the associated spatiotemporal dynamics. Using MEG, we recorded the neural signals of 34 human subjects (26 males) overtly naming 134 images from four semantic object categories (animals, foods, tools, clothes). Within each category, we covaried word length, as quantified by the number of syllables contained in a word, and phonological neighborhood density to target lexical and post-lexical phonological/phonetic processes. Multivariate pattern analysis searchlights in sensor space distinguished the stimulus-locked spatiotemporal responses to object categories early on, from 150 to 250 ms after picture onset, whereas word length was decoded in left frontotemporal sensors at 250-350 ms, followed by the latency of phonological neighborhood density (350-450 ms). Our results suggest a progression of neural activity from posterior to anterior language regions for the semantic and phonological/phonetic computations preparing overt speech, thus supporting serial cascading models of word production.
  • Carota, F., Nili, H., Pulvermüller, F., & Kriegeskorte, N. (2021). Distinct fronto-temporal substrates of distributional and taxonomic similarity among words: Evidence from RSA of BOLD signals. NeuroImage, 224: 117408. doi:10.1016/j.neuroimage.2020.117408.

    Abstract

    A class of semantic theories defines concepts in terms of statistical distributions of lexical items, basing meaning on vectors of word co-occurrence frequencies. A different approach emphasizes abstract hierarchical taxonomic relationships among concepts. However, the functional relevance of these different accounts and how they capture information-encoding of meaning in the brain still remains elusive.

    We investigated to what extent distributional and taxonomic models explained word-elicited neural responses using cross-validated representational similarity analysis (RSA) of functional magnetic resonance imaging (fMRI) and novel model comparisons.

    Our findings show that the brain encodes both types of semantic similarities, but in distinct cortical regions. Posterior middle temporal regions reflected word links based on hierarchical taxonomies, along with the action-relatedness of the semantic word categories. In contrast, distributional semantics best predicted the representational patterns in left inferior frontal gyrus (LIFG, BA 47). Both representations coexisted in angular gyrus supporting semantic binding and integration. These results reveal that neuronal networks with distinct cortical distributions across higher-order association cortex encode different representational properties of word meanings. Taxonomy may shape long-term lexical-semantic representations in memory consistently with sensorimotor details of semantic categories, whilst distributional knowledge in the LIFG (BA 47) enables semantic combinatorics in the context of language use.

    Our approach helps to elucidate the nature of semantic representations essential for understanding human language.
  • Carota, F., & Sirigu, A. (2008). Neural bases of sequence processing in action and language. Language Learning, 58(1), 179-199. doi:10.1111/j.1467-9922.2008.00470.x.

    Abstract

    Real-time estimation of what we will do next is a crucial prerequisite of purposive behavior. During the planning of goal-oriented actions, for instance, the temporal and causal organization of upcoming subsequent moves needs to be predicted based on our knowledge of events. A forward computation of sequential structure is also essential for planning contiguous discourse segments and syntactic patterns in language. The neural encoding of sequential event knowledge and its domain dependency is a central issue in cognitive neuroscience. Converging evidence shows the involvement of a dedicated neural substrate, including the prefrontal cortex and Broca's area, in the representation and the processing of sequential event structure. After reviewing major representational models of sequential mechanisms in action and language, we discuss relevant neuropsychological and neuroimaging findings on the temporal organization of sequencing and sequence processing in both domains, suggesting that sequential event knowledge may be modularly organized through prefrontal and frontal subregions.
  • Carrion Castillo, A., Estruch, S. B., Maassen, B., Franke, B., Francks, C., & Fisher, S. E. (2021). Whole-genome sequencing identifies functional noncoding variation in SEMA3C that cosegregates with dyslexia in a multigenerational family. Human Genetics, 140, 1183-1200. doi:10.1007/s00439-021-02289-w.

    Abstract

    Dyslexia is a common heritable developmental disorder involving impaired reading abilities. Its genetic underpinnings are thought to be complex and heterogeneous, involving common and rare genetic variation. Multigenerational families segregating apparent monogenic forms of language-related disorders can provide useful entrypoints into biological pathways. In the present study, we performed a genome-wide linkage scan in a three-generational family in which dyslexia affects 14 of its 30 members and seems to be transmitted with an autosomal dominant pattern of inheritance. We identified a locus on chromosome 7q21.11 which cosegregated with dyslexia status, with the exception of two cases of phenocopy (LOD = 2.83). Whole-genome sequencing of key individuals enabled the assessment of coding and noncoding variation in the family. Two rare single-nucleotide variants (rs144517871 and rs143835534) within the first intron of the SEMA3C gene cosegregated with the 7q21.11 risk haplotype. In silico characterization of these two variants predicted effects on gene regulation, which we functionally validated for rs144517871 in human cell lines using luciferase reporter assays. SEMA3C encodes a secreted protein that acts as a guidance cue in several processes, including cortical neuronal migration and cellular polarization. We hypothesize that these intronic variants could have a cis-regulatory effect on SEMA3C expression, making a contribution to dyslexia susceptibility in this family.
  • Carter, G., & Nieuwland, M. S. (2022). Predicting definite and indefinite referents during discourse comprehension: Evidence from event‐related potentials. Cognitive Science, 46(2): e13092. doi:10.1111/cogs.13092.

    Abstract

    Linguistic predictions may be generated from and evaluated against a representation of events and referents described in the discourse. Compatible with this idea, recent work shows that predictions about novel noun phrases include their definiteness. In the current follow-up study, we ask whether people engage similar prediction-related processes for definite and indefinite referents. This question is relevant for linguistic theories that imply a processing difference between definite and indefinite noun phrases, typically because definiteness is thought to require a uniquely identifiable referent in the discourse. We addressed this question in an event-related potential (ERP) study (N = 48) with preregistration of data acquisition, preprocessing, and Bayesian analysis. Participants read Dutch mini-stories with a definite or indefinite novel noun phrase (e.g., “het/een huis,” the/a house), wherein (in)definiteness of the article was either expected or unexpected and the noun was always strongly expected. Unexpected articles elicited enhanced N400s, but unexpectedly indefinite articles also elicited a positive ERP effect at frontal channels compared to expectedly indefinite articles. We tentatively link this effect to an antiuniqueness violation, which may force people to introduce a new referent over and above the already anticipated one. Interestingly, expectedly definite nouns elicited larger N400s than unexpectedly definite nouns (replicating a previous surprising finding) and indefinite nouns. Although the exact nature of these noun effects remains unknown, expectedly definite nouns may have triggered the strongest semantic activation because they alone refer to specific and concrete referents. In sum, results from both the articles and nouns clearly demonstrate that definiteness marking has a rapid effect on processing, counter to recent claims regarding definiteness processing.
  • Casasanto, D. (2008). Similarity and proximity: When does close in space mean close in mind? Memory & Cognition, 36(6), 1047-1056. doi:10.3758/MC.36.6.1047.

    Abstract

    People often describe things that are similar as close and things that are dissimilar as far apart. Does the way people talk about similarity reveal something fundamental about the way they conceptualize it? Three experiments tested the relationship between similarity and spatial proximity that is encoded in metaphors in language. Similarity ratings for pairs of words or pictures varied as a function of how far apart the stimuli appeared on the computer screen, but the influence of distance on similarity differed depending on the type of judgments the participants made. Stimuli presented closer together were rated more similar during conceptual judgments of abstract entities or unseen object properties but were rated less similar during perceptual judgments of visual appearance. These contrasting results underscore the importance of testing predictions based on linguistic metaphors experimentally and suggest that our sense of similarity arises from our ability to combine available perceptual information with stored knowledge of experiential regularities.
  • Casasanto, D. (2008). Who's afraid of the big bad Whorf? Crosslinguistic differences in temporal language and thought. In P. Indefrey, & M. Gullberg (Eds.), Time to speak: Cognitive and neural prerequisites for time in language (pp. 63-79). Oxford: Wiley.

    Abstract

    The idea that language shapes the way we think, often associated with Benjamin Whorf, has long been decried as not only wrong but also fundamentally wrong-headed. Yet, experimental evidence has reopened debate about the extent to which language influences nonlinguistic cognition, particularly in the domain of time. In this article, I will first analyze an influential argument against the Whorfian hypothesis and show that its anti-Whorfian conclusion is in part an artifact of conflating two distinct questions: Do we think in language? and Does language shape thought? Next, I will discuss crosslinguistic differences in spatial metaphors for time and describe experiments that demonstrate corresponding differences in nonlinguistic mental representations. Finally, I will sketch a simple learning mechanism by which some linguistic relativity effects appear to arise. Although people may not think in language, speakers of different languages develop distinctive conceptual repertoires as a consequence of ordinary and presumably universal neural and cognitive processes.
  • Casasanto, D. (2008). Who's afraid of the big bad Whorf? Crosslinguistic differences in temporal language and thought. Language Learning, 58(suppl. 1), 63-79. doi:10.1111/j.1467-9922.2008.00462.x.

    Abstract

    The idea that language shapes the way we think, often associated with Benjamin Whorf, has long been decried as not only wrong but also fundamentally wrong-headed. Yet, experimental evidence has reopened debate about the extent to which language influences nonlinguistic cognition, particularly in the domain of time. In this article, I will first analyze an influential argument against the Whorfian hypothesis and show that its anti-Whorfian conclusion is in part an artifact of conflating two distinct questions: Do we think in language? and Does language shape thought? Next, I will discuss crosslinguistic differences in spatial metaphors for time and describe experiments that demonstrate corresponding differences in nonlinguistic mental representations. Finally, I will sketch a simple learning mechanism by which some linguistic relativity effects appear to arise. Although people may not think in language, speakers of different languages develop distinctive conceptual repertoires as a consequence of ordinary and presumably universal neural and cognitive processes.
  • Casasanto, D., & Boroditsky, L. (2008). Time in the mind: Using space to think about time. Cognition, 106, 579-593. doi:10.1016/j.cognition.2007.03.004.

    Abstract

    How do we construct abstract ideas like justice, mathematics, or time-travel? In this paper we investigate whether mental representations that result from physical experience underlie people’s more abstract mental representations, using the domains of space and time as a testbed. People often talk about time using spatial language (e.g., a long vacation, a short concert). Do people also think about time using spatial representations, even when they are not using language? Results of six psychophysical experiments revealed that people are unable to ignore irrelevant spatial information when making judgments about duration, but not the converse. This pattern, which is predicted by the asymmetry between space and time in linguistic metaphors, was demonstrated here in tasks that do not involve any linguistic stimuli or responses. These findings provide evidence that the metaphorical relationship between space and time observed in language also exists in our more basic representations of distance and duration. Results suggest that our mental representations of things we can never see or touch may be built, in part, out of representations of physical experiences in perception and motor action.
  • Casillas, M., Brown, P., & Levinson, S. C. (2021). Early language experience in a Papuan community. Journal of Child Language, 48(4), 792-814. doi:10.1017/S0305000920000549.

    Abstract

    The rate at which young children are directly spoken to varies due to many factors, including (a) caregiver ideas about children as conversational partners and (b) the organization of everyday life. Prior work suggests cross-cultural variation in rates of child-directed speech is due to the former factor, but has been fraught with confounds in comparing postindustrial and subsistence farming communities. We investigate the daylong language environments of children (0;0–3;0) on Rossel Island, Papua New Guinea, a small-scale traditional community where prior ethnographic study demonstrated contingency-seeking child interaction styles. In fact, children were infrequently directly addressed and linguistic input rate was primarily affected by situational factors, though children’s vocalization maturity showed no developmental delay. We compare the input characteristics between this community and a Tseltal Mayan one in which near-parallel methods produced comparable results, then briefly discuss the models and mechanisms for learning best supported by our findings.
  • Castro-Caldas, A., Petersson, K. M., Reis, A., Stone-Elander, S., & Ingvar, M. (1998). The illiterate brain: Learning to read and write during childhood influences the functional organization of the adult brain. Brain, 121, 1053-1063. doi:10.1093/brain/121.6.1053.

    Abstract

    Learning a specific skill during childhood may partly determine the functional organization of the adult brain. This hypothesis led us to study oral language processing in illiterate subjects who, for social reasons, had never entered school and had no knowledge of reading or writing. In a brain activation study using PET and statistical parametric mapping, we compared word and pseudoword repetition in literate and illiterate subjects. Our study confirms behavioural evidence of different phonological processing in illiterate subjects. During repetition of real words, the two groups performed similarly and activated similar areas of the brain. In contrast, illiterate subjects had more difficulty repeating pseudowords correctly and did not activate the same neural structures as literates. These results are consistent with the hypothesis that learning the written form of language (orthography) interacts with the function of oral language. Our results indicate that learning to read and write during childhood influences the functional organization of the adult human brain.
  • Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2021). Do the eyes have it? A systematic review on the role of eye gaze in infant language development. Frontiers in Psychology, 11: 589096. doi:10.3389/fpsyg.2020.589096.

    Abstract

    Eye gaze is a ubiquitous cue in child-caregiver interactions and infants are highly attentive to eye gaze from very early on. However, the question of why infants show gaze-sensitive behavior, and what role this sensitivity to gaze plays in their language development, is not yet well-understood. To gain a better understanding of the role of eye gaze in infants’ language learning, we conducted a broad systematic review of the developmental literature for all studies that investigate the role of eye gaze in infants’ language development. Across 77 peer-reviewed articles containing data from typically-developing human infants (0-24 months) in the domain of language development we identified two broad themes. The first tracked the effect of eye gaze on four developmental domains: (1) vocabulary development, (2) word-object mapping, (3) object processing, and (4) speech processing. Overall, there is considerable evidence that infants learn more about objects and are more likely to form word-object mappings in the presence of eye gaze cues, both of which are necessary for learning words. In addition, there is good evidence for longitudinal relationships between infants’ gaze following abilities and later receptive and expressive vocabulary. However, many domains (e.g. speech processing) are understudied; further work is needed to decide whether gaze effects are specific to tasks such as word-object mapping, or whether they reflect a general learning enhancement mechanism. The second theme explored the reasons why eye gaze might be facilitative for learning, addressing the question of whether eye gaze is treated by infants as a specialized socio-cognitive cue. We concluded that the balance of evidence supports the idea that eye gaze facilitates infants’ learning by enhancing their arousal, memory and attentional capacities to a greater extent than other low-level attentional cues. However, as yet, there are too few studies that directly compare the effect of eye gaze cues and non-social, attentional cues for strong conclusions to be drawn. We also suggest there might be a developmental effect, with eye gaze, over the course of the first two years of life, developing into a truly ostensive cue that enhances language learning across the board.

  • Chan, A., Matthews, S., Tse, N., Lam, A., Chang, F., & Kidd, E. (2021). Revisiting subject–object asymmetry in the production of Cantonese relative clauses: Evidence from elicited production in 3-year-olds. Frontiers in Psychology, 12: 679008. doi:10.3389/fpsyg.2021.679008.

    Abstract

    Emergentist approaches to language acquisition identify a core role for language-specific experience and give primacy to other factors like function and domain-general learning mechanisms in syntactic development. This directly contrasts with a nativist structurally oriented approach, which predicts that grammatical development is guided by Universal Grammar and that structural factors constrain acquisition. Cantonese relative clauses (RCs) offer a good opportunity to test these perspectives because its typologically rare properties decouple the roles of frequency and complexity in subject- and object-RCs in a way not possible in European languages. Specifically, Cantonese object RCs of the classifier type are frequently attested in children’s linguistic experience and are isomorphic to frequent and early-acquired simple SVO transitive clauses, but according to formal grammatical analyses Cantonese subject RCs are computationally less demanding to process. Thus, the two opposing theories make different predictions: the emergentist approach predicts a specific preference for object RCs of the classifier type, whereas the structurally oriented approach predicts a subject advantage. In the current study we revisited this issue. Eighty-seven monolingual Cantonese children aged between 3;2 and 3;11 (Mage: 3;6) participated in an elicited production task designed to elicit production of subject- and object- RCs. The children were very young and most of them produced only noun phrases when RCs were elicited. Those (nine children) who did produce RCs produced overwhelmingly more object RCs than subject RCs, even when animacy cues were controlled. The majority of object RCs produced were the frequent classifier-type RCs. The findings concur with our hypothesis from the emergentist perspectives that input frequency and formal and functional similarity to known structures guide acquisition.
  • Chen, J. (2008). The acquisition of verb compounding in Mandarin Chinese. PhD Thesis, Vrije Universiteit Amsterdam, Amsterdam.

    Abstract

    Seeing someone breaking a stick in two, an English speaker typically describes the event with the verb break, but a Mandarin speaker has to say bai1-duan4 ‘bend-be.broken’, a verb compound composed of two free verbs, each encoding one aspect of the breaking event. Verb compounding represents a typical and productive way to describe events of motion (e.g., zou3-chu1 ‘walk-exit’) and state change (e.g., bai1-duan4 ‘bend-be.broken’), the most common types of events that children of all languages are exposed to from an early age. Since languages vary in how events are linguistically encoded and categorized, the development of verb compounding provides a window to investigate the acquisition of form–meaning mapping for highly productive but constrained constructions and the interaction between children’s linguistic development and cognitive development. The theoretical analysis of verb compounds has been one of the central issues in Chinese linguistics, but the acquisition of this grammatical system has never been systematically studied. This dissertation constitutes the first in-depth study of this topic. It analyzes speech data from two longitudinal corpora as well as data collected from five experiments on the production and comprehension of verb compounds by children in P. R. China. It provides a description of the developmental process and unravels the complex learning tasks from the perspective of language production, comprehension, event categorization, and the interface of semantics and syntax. In showing how first-language learners acquire the Mandarin-specific way of representing and encoding causal events and motion events, this study has significance both for studies of language acquisition and for studies of cognition and event construal.
  • Chen, X., Hartsuiker, R. J., Muylle, M., Slim, M. S., & Zhang, C. (2022). The effect of animacy on structural priming: A replication of Bock, Loebell and Morey (1992). Journal of Memory and Language, 127: 104354. doi:10.1016/j.jml.2022.104354.

    Abstract

    Bock et al. (1992) found that the binding of animacy features onto grammatical roles is susceptible to priming in sentence production. Moreover, this effect did not interact with structural priming. This finding supports an account according to which syntactic representations are insensitive to the consistency of animacy-to-structure mapping. This account has contributed greatly to the development of syntactic processing theories in language production. However, this study has never been directly replicated and the few related studies showed mixed results. A meta-analysis of these studies failed to replicate the findings of Bock et al. (1992). Therefore, we conducted a well-powered replication (n = 496) that followed the original study as closely as possible. We found an effect of structural priming and an animacy priming effect, replicating Bock et al.’s findings. In addition, we replicated Bock et al.’s (1992) observed null interaction between structural priming and animacy binding, which suggests that syntactic representations are indeed independent of semantic information about animacy.
  • Chen, X. S., White, W. T. J., Collins, L. J., & Penny, D. (2008). Computational identification of four spliceosomal snRNAs from the deep-branch eukaryote Giardia intestinalis. PLoS One, 3(8), e3106. doi:10.1371/journal.pone.0003106.

    Abstract

    RNAs processing other RNAs is very general in eukaryotes, but it is not clear to what extent it is ancestral to eukaryotes. Here we focus on pre-mRNA splicing, one of the most important RNA-processing mechanisms in eukaryotes. In most eukaryotes splicing is predominantly catalysed by the major spliceosome complex, which consists of five uridine-rich small nuclear RNAs (U-snRNAs) and over 200 proteins in humans. Three major spliceosomal introns have been found experimentally in Giardia; one Giardia U-snRNA (U5) and a number of spliceosomal proteins have also been identified. However, because of the low sequence similarity between the Giardia ncRNAs and those of other eukaryotes, the other U-snRNAs of Giardia had not been found. Using two computational methods, candidates for Giardia U1, U2, U4 and U6 snRNAs were identified in this study and shown by RT-PCR to be expressed. We found that identifying a U2 candidate helped identify U6 and U4 based on interactions between them. Secondary structural modelling of the Giardia U-snRNA candidates revealed typical features of eukaryotic U-snRNAs. We demonstrate a successful approach to combine computational and experimental methods to identify expected ncRNAs in a highly divergent protist genome. Our findings reinforce the conclusion that spliceosomal small-nuclear RNAs existed in the last common ancestor of eukaryotes.
  • Chen, A., & Mennen, I. (2008). Encoding interrogativity intonationally in a second language. In P. Barbosa, S. Madureira, & C. Reis (Eds.), Proceedings of the 4th International Conferences on Speech Prosody (pp. 513-516). Campinas: Editora RG/CNPq.

    Abstract

    This study investigated how untutored learners encode interrogativity intonationally in a second language. Questions produced in free conversation were selected from longitudinal data of four untutored Italian learners of English. The questions were mostly wh-questions (WQs) and declarative questions (DQs). We examined the use of three cross-linguistically attested question cues: final rise, high peak and late peak. It was found that across learners the final rise occurred more frequently in DQs than in WQs. This is in line with the Functional Hypothesis whereby less syntactically-marked questions are more intonationally marked. However, the use of peak height and alignment is less consistent. The peak of the nuclear pitch accent was not necessarily higher and later in DQs than in WQs. The difference in learners’ exploitation of these cues can be explained by the relative importance of a question cue in the target language.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2004). Language specificity in perception of paralinguistic intonational meaning. Language and Speech, 47(4), 311-349.

    Abstract

    This study examines the perception of paralinguistic intonational meanings deriving from Ohala’s Frequency Code (Experiment 1) and Gussenhoven’s Effort Code (Experiment 2) in British English and Dutch. Native speakers of British English and Dutch listened to a number of stimuli in their native language and judged each stimulus on four semantic scales deriving from these two codes: SELF-CONFIDENT versus NOT SELF-CONFIDENT, FRIENDLY versus NOT FRIENDLY (Frequency Code); SURPRISED versus NOT SURPRISED, and EMPHATIC versus NOT EMPHATIC (Effort Code). The stimuli, which were lexically equivalent across the two languages, differed in pitch contour, pitch register and pitch span in Experiment 1, and in pitch register, peak height, peak alignment and end pitch in Experiment 2. Contrary to the traditional view that the paralinguistic usage of intonation is similar across languages, it was found that British English and Dutch listeners differed considerably in the perception of “confident,” “friendly,” “emphatic,” and “surprised.” The present findings support a theory of paralinguistic meaning based on the universality of biological codes, which however acknowledges a language-specific component in the implementation of these codes.
  • Chen, A. (2003). Language dependence in continuation intonation. In M. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS.) (pp. 1069-1072). Rundle Mall, SA, Austr.: Causal Productions Pty.
  • Chen, A. (2003). Reaction time as an indicator to discrete intonational contrasts in English. In Proceedings of Eurospeech 2003 (pp. 97-100).

    Abstract

    This paper reports a perceptual study using a semantically motivated identification task in which we investigated the nature of two pairs of intonational contrasts in English: (1) normal High accent vs. emphatic High accent; (2) early peak alignment vs. late peak alignment. Unlike previous inquiries, the present study employs an on-line method using the Reaction Time measurement, in addition to the measurement of response frequencies. Regarding the peak height continuum, the mean RTs are shortest for within-category identification but longest for across-category identification. As for the peak alignment contrast, no identification boundary emerges and the mean RTs only reflect a difference between peaks aligned with the vowel onset and peaks aligned elsewhere. We conclude that the peak height contrast is discrete but the previously claimed discreteness of the peak alignment contrast is not borne out.
  • Cheung, C.-Y., Yakpo, K., & Coupé, C. (2022). A computational simulation of the genesis and spread of lexical items in situations of abrupt language contact. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 115-122). Nijmegen: Joint Conference on Language Evolution (JCoLE).

    Abstract

    The current study presents an agent-based model which simulates the innovation and competition among lexical items in cases of language contact. It is inspired by relatively recent historical cases in which the linguistic ecology and sociohistorical context are highly complex. Pidgin and creole genesis offers an opportunity to obtain linguistic facts, social dynamics, and historical demography in a highly segregated society. This provides a solid ground for researching the interaction of populations with different pre-existing language systems, and how different factors contribute to the genesis of the lexicon of a newly generated mixed language. We take into consideration the population dynamics and structures, as well as a distribution of word frequencies related to language use, in order to study how social factors may affect the developmental trajectory of languages. Focusing on the case of Sranan in Suriname, our study shows that it is possible to account for the composition of its core lexicon in relation to different social groups, contact patterns, and large population movements.
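
    To make the modelling approach concrete, the following toy sketch (not the authors' model) simulates two populations in contact whose members adopt each other's lexical variants during pairwise interactions; the group sizes, contact probability, and adoption rate are invented for illustration.

```python
# Illustrative toy sketch only: competition between two lexical variants in a
# two-group contact situation. Parameters are assumptions, not the authors' settings.
import numpy as np

rng = np.random.default_rng(1)
GROUP_A, GROUP_B = 0, 1
n_a, n_b = 80, 20            # assumed group sizes for the two contact populations
adopt_p = 0.3                # probability the listener adopts the speaker's variant
cross_contact = 0.2          # probability an interaction crosses group lines

group = np.array([GROUP_A] * n_a + [GROUP_B] * n_b)
# One lexical slot: variant 0 originates in group A, variant 1 in group B.
variant = np.where(group == GROUP_A, 0, 1)

def pick_partner(speaker):
    # Choose a within-group or cross-group interlocutor for this interaction.
    same_group = rng.random() > cross_contact
    pool = np.flatnonzero((group == group[speaker]) == same_group)
    pool = pool[pool != speaker]
    return rng.choice(pool)

for step in range(20000):
    speaker = rng.integers(len(group))
    listener = pick_partner(speaker)
    if rng.random() < adopt_p:
        variant[listener] = variant[speaker]

for g, name in ((GROUP_A, "group A"), (GROUP_B, "group B")):
    share = (variant[group == g] == 0).mean()
    print(f"{name}: {share:.2f} now use variant 0")
```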
  • Cho, T., & McQueen, J. M. (2004). Phonotactics vs. phonetic cues in native and non-native listening: Dutch and Korean listeners' perception of Dutch and English. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1301-1304). Seoul: Sunjin Printing Co.

    Abstract

    We investigated how listeners of two unrelated languages, Dutch and Korean, process phonotactically legitimate and illegitimate sounds spoken in Dutch and American English. To Dutch listeners, unreleased word-final stops are phonotactically illegal because word-final stops in Dutch are generally released in isolation, but to Korean listeners, released final stops are illegal because word-final stops are never released in Korean. Two phoneme monitoring experiments showed a phonotactic effect: Dutch listeners detected released stops more rapidly than unreleased stops whereas the reverse was true for Korean listeners. Korean listeners with English stimuli detected released stops more accurately than unreleased stops, however, suggesting that acoustic-phonetic cues associated with released stops improve detection accuracy. We propose that in non-native speech perception, phonotactic legitimacy in the native language speeds up phoneme recognition, the richness of acoustic-phonetic cues improves listening accuracy, and familiarity with the non-native language modulates the relative influence of these two factors.
  • Cho, T. (2004). Prosodically conditioned strengthening and vowel-to-vowel coarticulation in English. Journal of Phonetics, 32(2), 141-176. doi:10.1016/S0095-4470(03)00043-3.

    Abstract

    The goal of this study is to examine how the degree of vowel-to-vowel coarticulation varies as a function of prosodic factors such as nuclear-pitch accent (accented vs. unaccented), level of prosodic boundary (Prosodic Word vs. Intermediate Phrase vs. Intonational Phrase), and position-in-prosodic-domain (initial vs. final). It is hypothesized that vowels in prosodically stronger locations (e.g., in accented syllables and at a higher prosodic boundary) are not only coarticulated less with their neighboring vowels, but they also exert a stronger influence on their neighbors. Measurements of tongue position for English /a i/ over time were obtained with Carsten’s electromagnetic articulography. Results showed that vowels in prosodically stronger locations are coarticulated less with neighboring vowels, but do not exert a stronger influence on the articulation of neighboring vowels. An examination of the relationship between coarticulation and duration revealed that (a) accent-induced coarticulatory variation cannot be attributed to a duration factor and (b) some of the data with respect to boundary effects may be accounted for by the duration factor. This suggests that to the extent that prosodically conditioned coarticulatory variation is duration-independent, there is no absolute causal relationship from duration to coarticulation. It is proposed that prosodically conditioned V-to-V coarticulatory reduction is another type of strengthening that occurs in prosodically strong locations. The prosodically driven coarticulatory patterning is taken to be part of the phonetic signatures of the hierarchically nested structure of prosody.
  • Cho, T., & McQueen, J. M. (2008). Not all sounds in assimilation environments are perceived equally: Evidence from Korean. Journal of Phonetics, 36, 239-249. doi:10.1016/j.wocn.2007.06.001.

    Abstract

    This study tests whether potential differences in the perceptual robustness of speech sounds influence continuous-speech processes. Two phoneme-monitoring experiments examined place assimilation in Korean. In Experiment 1, Koreans monitored for targets which were either labials (/p,m/) or alveolars (/t,n/), and which were either unassimilated or assimilated to a following /k/ in two-word utterances. Listeners detected unaltered (unassimilated) labials faster and more accurately than assimilated labials; there was no such advantage for unaltered alveolars. In Experiment 2, labial–velar differences were tested using conditions in which /k/ and /p/ were illegally assimilated to a following /t/. Unassimilated sounds were detected faster than illegally assimilated sounds, but this difference tended to be larger for /k/ than for /p/. These place-dependent asymmetries suggest that differences in the perceptual robustness of segments play a role in shaping phonological patterns.
  • Cho, T., & Johnson, E. K. (2004). Acoustic correlates of phrase-internal lexical boundaries in Dutch. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1297-1300). Seoul: Sunjin Printing Co.

    Abstract

    The aim of this study was to determine if Dutch speakers reliably signal phrase-internal lexical boundaries, and if so, how. Six speakers recorded 4 pairs of phonemically identical strong-weak-strong (SWS) strings with matching syllable boundaries but mismatching intended word boundaries (e.g. reis # pastei versus reispas # tij, or more broadly C1V1(C)#C2V2(C)C3V3(C) vs. C1V1(C)C2V2(C)#C3V3(C)). An Analysis of Variance revealed 3 acoustic parameters that were significantly greater in S#WS items (C2 DURATION, RIME1 DURATION, C3 BURST AMPLITUDE) and 5 parameters that were significantly greater in the SW#S items (C2 VOT, C3 DURATION, RIME2 DURATION, RIME3 DURATION, and V2 AMPLITUDE). Additionally, center of gravity measurements suggested that the [s] to [t] coarticulation was greater in reis # pa[st]ei versus reispa[s] # [t]ij. Finally, a Logistic Regression Analysis revealed that the 3 parameters (RIME1 DURATION, RIME2 DURATION, and C3 DURATION) contributed most reliably to a S#WS versus SW#S classification.
  • Cho, T. (2003). Lexical stress, phrasal accent and prosodic boundaries in the realization of domain-initial stops in Dutch. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhs 2003) (pp. 2657-2660). Adelaide: Causal Productions.

    Abstract

    This study examines the effects of prosodic boundaries, lexical stress, and phrasal accent on the acoustic realization of stops (/t, d/) in Dutch, with special attention paid to language-specificity in the phonetics-prosody interface. The results obtained from various acoustic measures show systematic phonetic variations in the production of /t d/ as a function of prosodic position, which may be interpreted as being due to prosodically conditioned articulatory strengthening. Shorter VOTs were found for the voiceless stop /t/ in prosodically stronger locations (as opposed to longer VOTs in this position in English). The results suggest that prosodically-driven phonetic realization is bounded by a language-specific phonological feature system.
  • Cho, T. (2022). The phonetics-prosody interface and prosodic strengthening in Korean. In S. Cho, & J. Whitman (Eds.), Cambridge handbook of Korean linguistics (pp. 248-293). Cambridge: Cambridge University Press.
  • Choi, S., & Bowerman, M. (1991). Learning to express motion events in English and Korean: The influence of language-specific lexicalization patterns. Cognition, 41, 83-121. doi:10.1016/0010-0277(91)90033-Z.

    Abstract

    English and Korean differ in how they lexicalize the components of motion events. English characteristically conflates Motion with Manner, Cause, or Deixis, and expresses Path separately. Korean, in contrast, conflates Motion with Path and elements of Figure and Ground in transitive clauses for caused Motion, but conflates motion with Deixis and spells out Path and Manner separately in intransitive clauses for spontaneous motion. Children learning English and Korean show sensitivity to language-specific patterns in the way they talk about motion from as early as 17–20 months. For example, learners of English quickly generalize their earliest spatial words — Path particles like up, down, and in — to both spontaneous and caused changes of location and, for up and down, to posture changes, while learners of Korean keep words for spontaneous and caused motion strictly separate and use different words for vertical changes of location and posture changes. These findings challenge the widespread view that children initially map spatial words directly to nonlinguistic spatial concepts, and suggest that they are influenced by the semantic organization of their language virtually from the beginning. We discuss how input and cognition may interact in the early phases of learning to talk about space.
  • Cholin, J. (2004). Syllables in speech production: Effects of syllable preparation and syllable frequency. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.60589.

    Abstract

    The fluent production of speech is a very complex human skill. It requires the coordination of several articulatory subsystems. The instructions that lead articulatory movements to execution are the result of the interplay of speech production levels that operate above the articulatory network. During the process of word-form encoding, the groundwork for the articulatory programs is prepared; these programs then serve the articulators as basic units. This thesis investigated whether or not syllables form the basis for the articulatory programs and in particular whether or not these syllable programs are stored, separate from the store of the lexical word-forms. It is assumed that syllable units are stored in a so-called 'mental syllabary'. The main goal of this thesis was to find evidence of the syllable playing a functionally important role in speech production and for the assumption that syllables are stored units. In a variant of the implicit priming paradigm, it was investigated whether information about the syllabic structure of a target word facilitates the preparation (advance planning) of a to-be-produced utterance. These experiments yielded evidence for the functionally important role of syllables in speech production. In a subsequent series of experiments, it was demonstrated that the production of syllables is sensitive to frequency. Syllable frequency effects provide strong evidence for the notion of a mental syllabary because only stored units are likely to exhibit frequency effects. In a final study, effects of syllable preparation and syllable frequency were investigated in a combined study to disentangle the two effects. The results of this last experiment converged with those reported for the other experiments and added further support to the claim that syllables play a core functional role in speech production and are stored in a mental syllabary.

    Additional information

    full text via Radboud Repository
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Chormai, P., Pu, Y., Hu, H., Fisher, S. E., Francks, C., & Kong, X. (2022). Machine learning of large-scale multimodal brain imaging data reveals neural correlates of hand preference. NeuroImage, 262: 119534. doi:10.1016/j.neuroimage.2022.119534.

    Abstract

    Lateralization is a fundamental characteristic of many behaviors and the organization of the brain, and atypical lateralization has been suggested to be linked to various brain-related disorders such as autism and schizophrenia. Right-handedness is one of the most prominent markers of human behavioural lateralization, yet its neurobiological basis remains to be determined. Here, we present a large-scale analysis of handedness, as measured by self-reported direction of hand preference, and its variability related to brain structural and functional organization in the UK Biobank (N = 36,024). A multivariate machine learning approach with multi-modalities of brain imaging data was adopted, to reveal how well brain imaging features could predict an individual's handedness (i.e., right-handedness vs. non-right-handedness) and further identify the top brain signatures that contributed to the prediction. Overall, the results showed a good prediction performance, with an area under the receiver operating characteristic curve (AUROC) score of up to 0.72, driven largely by resting-state functional measures. Virtual lesion analysis and large-scale decoding analysis suggested that the brain networks with the highest importance in the prediction showed functional relevance to hand movement and several higher-level cognitive functions including language, arithmetic, and social interaction. Genetic analyses of contributions of common DNA polymorphisms to the imaging-derived handedness prediction score showed a significant heritability (h² = 7.55%, p < 0.001) that was similar to and slightly higher than that for the behavioural measure itself (h² = 6.74%, p < 0.001). The genetic correlation between the two was high (rg = 0.71), suggesting that the imaging-derived score could be used as a surrogate in genetic studies where the behavioural measure is not available. This large-scale study using multimodal brain imaging and multivariate machine learning has shed new light on the neural correlates of human handedness.

    Additional information

    supplementary material
  • Chu, M., & Kita, S. (2008). Spontaneous gestures during mental rotation tasks: Insights into the microdevelopment of the motor strategy. Journal of Experimental Psychology: General, 137, 706-723. doi:10.1037/a0013157.

    Abstract

    This study investigated the motor strategy involved in mental rotation tasks by examining 2 types of spontaneous gestures (hand–object interaction gestures, representing the agentive hand action on an object, vs. object-movement gestures, representing the movement of an object by itself) and different types of verbal descriptions of rotation. Hand–object interaction gestures were produced earlier than object-movement gestures, the rate of both types of gestures decreased, and gestures became more distant from the stimulus object over trials (Experiments 1 and 3). Furthermore, in the first few trials, object-movement gestures increased, whereas hand–object interaction gestures decreased, and this change of motor strategies was also reflected in the type of verbal description of rotation in the concurrent speech (Experiment 2). This change of motor strategies was hampered when gestures were prohibited (Experiment 4). The authors concluded that the motor strategy becomes less dependent on agentive action on the object, and also becomes internalized over the course of the experiment, and that gesture facilitates the former process. When solving a problem regarding the physical world, adults go through developmental processes similar to internalization and symbolic distancing in young children, albeit within a much shorter time span.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Clahsen, H., Sonnenstuhl, I., Hadler, M., & Eisenbeiss, S. (2008). Morphological paradigms in language processing and language disorders. Transactions of the Philological Society, 99(2), 247-277. doi:10.1111/1467-968X.00082.

    Abstract

    We present results from two cross‐modal morphological priming experiments investigating regular person and number inflection on finite verbs in German. We found asymmetries in the priming patterns between different affixes that can be predicted from the structure of the paradigm. We also report data from language disorders which indicate that inflectional errors produced by language‐impaired adults and children tend to occur within a given paradigm dimension, rather than randomly across the paradigm. We conclude that morphological paradigms are used by the human language processor and can be systematically affected in language disorders.
  • Clark, E. V., & Bowerman, M. (1986). On the acquisition of final voiced stops. In J. A. Fishman (Ed.), The Fergusonian impact: in honor of Charles A. Ferguson on the occasion of his 65th birthday. Volume 1: From phonology to society (pp. 51-68). Berlin: Mouton de Gruyter.
  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
  • Clough, S., Hilverman, C., Brown-Schmidt, S., & Duff, M. C. (2022). Evidence of audience design in amnesia: Adaptation in gesture but not speech. Brain Sciences, 12(8): 1082. doi:10.3390/brainsci12081082.

    Abstract

    Speakers design communication for their audience, providing more information in both speech and gesture when their listener is naive to the topic. We test whether the hippocampal declarative memory system contributes to multimodal audience design. The hippocampus, while traditionally linked to episodic and relational memory, has also been linked to the ability to imagine the mental states of others and use language flexibly. We examined the speech and gesture use of four patients with hippocampal amnesia when describing how to complete everyday tasks (e.g., how to tie a shoe) to an imagined child listener and an adult listener. Although patients with amnesia did not increase their total number of words and instructional steps for the child listener, they did produce representational gestures at significantly higher rates for the imagined child compared to the adult listener. They also gestured at similar frequencies to neurotypical peers, suggesting that hand gesture can be a meaningful communicative resource, even in the case of severe declarative memory impairment. We discuss the contributions of multiple memory systems to multimodal audience design and the potential of gesture to act as a window into the social cognitive processes of individuals with neurologic disorders.
  • Cohen, E., Van Leeuwen, E. J. C., Barbosa, A., & Haun, D. B. M. (2021). Does accent trump skin color in guiding children’s social preferences? Evidence from Brazil’s natural lab. Cognitive Development, 60: 101111. doi:10.1016/j.cogdev.2021.101111.

    Abstract

    Previous research has shown significant effects of race and accent on children’s developing social preferences. Accounts of the primacy of accent biases in the evolution and ontogeny of discriminant cooperation have been proposed, but lack systematic cross-cultural investigation. We report three controlled studies conducted with 5- to 10-year-old children across four towns in the Brazilian Amazon, selected for their variation in racial and accent homogeneity/heterogeneity. Study 1 investigated participants’ (N = 289) decisions about friendship and sharing across color-contrasted pairs of target individuals: Black-White, Black-Pardo (Brown), Pardo-White. Study 2 (N = 283) investigated effects of both color and accent (Local vs Non-Local) on friendship and sharing decisions. Overall, there was a significant bias toward the lighter colored individual. A significant preference for local accent mitigates but does not override the color bias, except in the site characterized by both racial and accent heterogeneity. Results also vary by participant age and color. Study 3 (N = 235) reports results of an accent discrimination task that shows an overall increase in accuracy with age. The research suggests that cooperative preferences based on accent and race develop differently in response to locally relevant parameters of racial and linguistic variation.
  • Cook, A. E., & Meyer, A. S. (2008). Capacity demands of phoneme selection in word production: New evidence from dual-task experiments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 886-899. doi:10.1037/0278-7393.34.4.886.

    Abstract

    Three dual-task experiments investigated the capacity demands of phoneme selection in picture naming. On each trial, participants named a target picture (Task 1) and carried out a tone discrimination task (Task 2). To vary the time required for phoneme selection, the authors combined the targets with phonologically related or unrelated distractor pictures (Experiment 1) or words, which were clearly visible (Experiment 2) or masked (Experiment 3). When pictures or masked words were presented, the tone discrimination and picture naming latencies were shorter in the related condition than in the unrelated condition, which indicates that phoneme selection requires central processing capacity. However, when the distractor words were clearly visible, the facilitatory effect was confined to the picture naming latencies. This pattern arose because the visible related distractor words facilitated phoneme selection but slowed down speech monitoring processes that had to be completed before the response to the tone could be selected.
  • Cooke, M., & Scharenborg, O. (2008). The Interspeech 2008 consonant challenge. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1765-1768). ISCA Archive.

    Abstract

    Listeners outperform automatic speech recognition systems at every level, including the very basic level of consonant identification. What is not clear is where the human advantage originates. Does the fault lie in the acoustic representations of speech or in the recognizer architecture, or in a lack of compatibility between the two? Many insights can be gained by carrying out a detailed human-machine comparison. The purpose of the Interspeech 2008 Consonant Challenge is to promote focused comparisons on a task involving intervocalic consonant identification in noise, with all participants using the same training and test data. This paper describes the Challenge, listener results and baseline ASR performance.
  • Cooper, N., & Cutler, A. (2004). Perception of non-native phonemes in noise. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 469-472). Seoul: Sunjijn Printing Co.

    Abstract

    We report an investigation of the perception of American English phonemes by Dutch listeners proficient in English. Listeners identified either the consonant or the vowel in most possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (16 dB, 8 dB, and 0 dB). Effects of signal-to-noise ratio on vowel and consonant identification are discussed as a function of syllable position and of relationship to the native phoneme inventory. Comparison of the results with previously reported data from native listeners reveals that noise affected the responding of native and non-native listeners similarly.
  • Coopmans, C. W., De Hoop, H., Kaushik, K., Hagoort, P., & Martin, A. E. (2022). Hierarchy in language interpretation: Evidence from behavioural experiments and computational modelling. Language, Cognition and Neuroscience, 37(4), 420-439. doi:10.1080/23273798.2021.1980595.

    Abstract

    It has long been recognised that phrases and sentences are organised hierarchically, but many computational models of language treat them as sequences of words without computing constituent structure. Against this background, we conducted two experiments which showed that participants interpret ambiguous noun phrases, such as second blue ball, in terms of their abstract hierarchical structure rather than their linear surface order. When a neural network model was tested on this task, it could simulate such “hierarchical” behaviour. However, when we changed the training data such that they were not entirely unambiguous anymore, the model stopped generalising in a human-like way. It did not systematically generalise to novel items, and when it was trained on ambiguous trials, it strongly favoured the linear interpretation. We argue that these models should be endowed with a bias to make generalisations over hierarchical structure in order to be cognitively adequate models of human language.
  • Coopmans, C. W., De Hoop, H., Hagoort, P., & Martin, A. E. (2022). Effects of structure and meaning on cortical tracking of linguistic units in naturalistic speech. Neurobiology of Language, 3(3), 386-412. doi:10.1162/nol_a_00070.

    Abstract

    Recent research has established that cortical activity “tracks” the presentation rate of syntactic phrases in continuous speech, even though phrases are abstract units that do not have direct correlates in the acoustic signal. We investigated whether cortical tracking of phrase structures is modulated by the extent to which these structures compositionally determine meaning. To this end, we recorded electroencephalography (EEG) of 38 native speakers who listened to naturally spoken Dutch stimuli in different conditions, which parametrically modulated the degree to which syntactic structure and lexical semantics determine sentence meaning. Tracking was quantified through mutual information between the EEG data and either the speech envelopes or abstract annotations of syntax, all of which were filtered in the frequency band corresponding to the presentation rate of phrases (1.1–2.1 Hz). Overall, these mutual information analyses showed stronger tracking of phrases in regular sentences than in stimuli whose lexical-syntactic content is reduced, but no consistent differences in tracking between sentences and stimuli that contain a combination of syntactic structure and lexical content. While there were no effects of compositional meaning on the degree of phrase-structure tracking, analyses of event-related potentials elicited by sentence-final words did reveal meaning-induced differences between conditions. Our findings suggest that cortical tracking of structure in sentences indexes the internal generation of this structure, a process that is modulated by the properties of its input, but not by the compositional interpretation of its output.

    Additional information

    supplementary information
  • Coopmans, C. W., & Cohn, N. (2022). An electrophysiological investigation of co-referential processes in visual narrative comprehension. Neuropsychologia, 172: 108253. doi:10.1016/j.neuropsychologia.2022.108253.

    Abstract

    Visual narratives make use of various means to convey referential and co-referential meaning, so comprehenders must recognize that different depictions across sequential images represent the same character(s). In this study, we investigated how the order in which different types of panels in visual sequences are presented affects how the unfolding narrative is comprehended. Participants viewed short comic strips while their electroencephalogram (EEG) was recorded. We analyzed evoked and induced EEG activity elicited by both full panels (showing a full character) and refiner panels (showing only a zoom of that full panel), and took into account whether they preceded or followed the panel to which they were co-referentially related (i.e., were cataphoric or anaphoric). We found that full panels elicited both larger N300 amplitude and increased gamma-band power compared to refiner panels. Anaphoric panels elicited a sustained negativity compared to cataphoric panels, which appeared to be sensitive to the referential status of the anaphoric panel. In the time-frequency domain, anaphoric panels elicited reduced 8–12 Hz alpha power and increased 45–65 Hz gamma-band power compared to cataphoric panels. These findings are consistent with models in which the processes involved in visual narrative comprehension partially overlap with those in language comprehension.
  • Coopmans, C. W., De Hoop, H., Kaushik, K., Hagoort, P., & Martin, A. E. (2021). Structure-(in)dependent interpretation of phrases in humans and LSTMs. In Proceedings of the Society for Computation in Linguistics (SCiL 2021) (pp. 459-463).

    Abstract

    In this study, we compared the performance of a long short-term memory (LSTM) neural network to the behavior of human participants on a language task that requires hierarchically structured knowledge. We show that humans interpret ambiguous noun phrases, such as second blue ball, in line with their hierarchical constituent structure. LSTMs, instead, only do so after unambiguous training, and they do not systematically generalize to novel items. Overall, the results of our simulations indicate that a model can behave hierarchically without relying on hierarchical constituent structure.
  • Corps, R. E., Brooke, C., & Pickering, M. (2022). Prediction involves two stages: Evidence from visual-world eye-tracking. Journal of Memory and Language, 122: 104298. doi:10.1016/j.jml.2021.104298.

    Abstract

    Comprehenders often predict what they are going to hear. But do they make the best predictions possible? We addressed this question in three visual-world eye-tracking experiments by asking when comprehenders consider perspective. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress, distractor: hairdryer) objects. In all three experiments, participants rapidly predicted semantic associates of the verb. But participants also predicted consistently – that is, consistent with their beliefs about what the speaker would ultimately say. They predicted consistently from the speaker’s perspective in Experiment 1, their own perspective in Experiment 2, and the character’s perspective in Experiment 3. This consistent effect occurred later than the associative effect. We conclude that comprehenders consider perspective when predicting, but not from the earliest moments of prediction, consistent with a two-stage account.

    Additional information

    data and analysis scripts
  • Corps, R. E., Knudsen, B., & Meyer, A. S. (2022). Overrated gaps: Inter-speaker gaps provide limited information about the timing of turns in conversation. Cognition, 223: 105037. doi:10.1016/j.cognition.2022.105037.

    Abstract

    Corpus analyses have shown that turn-taking in conversation is much faster than laboratory studies of speech planning would predict. To explain fast turn-taking, Levinson and Torreira (2015) proposed that speakers are highly proactive: They begin to plan a response to their interlocutor's turn as soon as they have understood its gist, and launch this planned response when the turn-end is imminent. Thus, fast turn-taking is possible because speakers use the time while their partner is talking to plan their own utterance. In the present study, we asked how much time upcoming speakers actually have to plan their utterances. Following earlier psycholinguistic work, we used transcripts of spoken conversations in Dutch, German, and English. These transcripts consisted of segments, which are continuous stretches of speech by one speaker. In the psycholinguistic and phonetic literature, such segments have often been used as proxies for turns. We found that in all three corpora, large proportions of the segments consisted of only one or two words, which on our estimate does not give the next speaker enough time to fully plan a response. Further analyses showed that speakers indeed often did not respond to the immediately preceding segment of their partner, but continued an earlier segment of their own. More generally, our findings suggest that speech segments derived from transcribed corpora do not necessarily correspond to turns, and the gaps between speech segments therefore only provide limited information about the planning and timing of turns.
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Cozijn, R., Vonk, W., & Noordman, L. G. M. (2003). Afleidingen uit oogbewegingen: De invloed van het connectief 'omdat' op het maken van causale inferenties [Inferences from eye movements: The influence of the connective 'omdat' ('because') on making causal inferences]. Gramma/TTT, 9, 141-156.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Crasborn, O. A., Hanke, T., Efthimiou, E., Zwitserlood, I., & Thoutenhooft, E. (Eds.). (2008). Construction and Exploitation of Sign Language Corpora. 3rd Workshop on the Representation and Processing of Sign Languages. Paris: ELDA.
  • Crasborn, O., & Sloetjes, H. (2008). Enhanced ELAN functionality for sign language corpora. In Proceedings of the 3rd Workshop on the Representation and Processing of Sign Languages: Construction and Exploitation of Sign Language Corpora (pp. 39-43).

    Abstract

    The multimedia annotation tool ELAN was enhanced within the Corpus NGT project by a number of new and improved functions. Most of these functions were not specific to working with sign language video data, and can readily be used for other annotation purposes as well. Their direct utility for working with large numbers of annotation files during the development and use of the Corpus NGT project is what unites the various functions, which are described in this paper. In addition, we aim to characterise future developments that will be needed in order to work efficiently with larger numbers of annotation files, for which a closer integration with the use and display of metadata is foreseen.
  • Crasborn, O. A., & Zwitserlood, I. (2008). The Corpus NGT: An online corpus for professionals and laymen. In O. A. Crasborn, T. Hanke, E. Efthimiou, I. Zwitserlood, & E. Thoutenhooft (Eds.), Construction and Exploitation of Sign Language Corpora. (pp. 44-49). Paris: ELDA.

    Abstract

    The Corpus NGT is an ambitious effort to record and archive video data from Sign Language of the Netherlands (Nederlandse Gebarentaal: NGT), guaranteeing online access to all interested parties and long-term availability. Data are collected from 100 native signers of NGT of different ages and from various regions in the country. Parts of these data are annotated and/or translated; the annotations and translations are part of the corpus. The Corpus NGT is accommodated in the Browsable Corpus based at the Max Planck Institute for Psycholinguistics. In this paper we share our experiences in data collection, video processing, annotation/translation and licensing involved in building the corpus.
  • Creaghe, N., Quinn, S., & Kidd, E. (2021). Symbolic play provides a fertile context for language development. Infancy, 26(6), 980-1010. doi:10.1111/infa.12422.

    Abstract

    In this study we test the hypothesis that symbolic play represents a fertile context for language acquisition because its inherent ambiguity elicits communicative behaviours that positively influence development. Infant-caregiver dyads (N = 54) participated in two 20-minute play sessions six months apart (Time 1 = 18 months, Time 2 = 24 months). During each session the dyads played with two sets of toys that elicited either symbolic or functional play. The sessions were transcribed and coded for several features of dyadic interaction and speech; infants’ linguistic proficiency was measured via parental report. The two play contexts resulted in different communicative and linguistic behaviour. Notably, the symbolic play condition resulted in significantly greater conversational turn-taking than functional play, and also resulted in the greater use of questions and mimetics in infant-directed speech (IDS). In contrast, caregivers used more imperative clauses in functional play. Regression analyses showed that unique properties of symbolic play (i.e., turn-taking, yes-no questions, mimetics) positively predicted children’s language proficiency, whereas unique features of functional play (i.e., imperatives in IDS) negatively predicted proficiency. The results provide evidence in support of the hypothesis that symbolic play is a fertile context for language development, driven by the need to negotiate meaning.
  • Creaghe, N., & Kidd, E. (2022). Symbolic play as a zone of proximal development: An analysis of informational exchange. Social Development, 31(4), 1138-1156. doi:10.1111/sode.12592.

    Abstract

    Symbolic play has long been considered a beneficial context for development. According to Cultural Learning theory, one reason for this is that symbolically-infused dialogical interactions constitute a zone of proximal development. However, the dynamics of caregiver-child interactions during symbolic play are still not fully understood. In the current study, we investigated informational exchange between fifty-two 24-month-old infants and their primary caregivers during symbolic play and a comparable, non-symbolic, functional play context. We coded over 11,000 utterances for whether participants had superior, equivalent, or inferior knowledge concerning the current conversational topic. Results showed that children were significantly more knowledgeable speakers and recipients in symbolic play, whereas the opposite was the case for caregivers, who were more knowledgeable in functional play. The results suggest that, despite its potential conceptual complexity, symbolic play may scaffold development because it facilitates infants’ communicative success by promoting them to ‘co-constructors of meaning’.

    Additional information

    supporting information
  • Creemers, A., & Embick, D. (2022). The role of semantic transparency in the processing of spoken compound words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 48(5), 734-751. doi:10.1037/xlm0001132.

    Abstract

    The question of whether lexical decomposition is driven by semantic transparency in the lexical processing of morphologically complex words, such as compounds, remains controversial. Prior research on compound processing has predominantly examined visual processing. Focusing instead on spoken word recognition, the present study examined the processing of auditorily presented English compounds that were semantically transparent (e.g., farmyard) or partially opaque with an opaque head (e.g., airline) or opaque modifier (e.g., pothole). Three auditory primed lexical decision experiments were run to examine to what extent constituent priming effects are affected by the semantic transparency of a compound and whether semantic transparency affects the processing of heads and modifiers equally. The results showed priming effects for both modifiers and heads regardless of their semantic transparency, indicating that individual constituents are accessed in transparent as well as opaque compounds. In addition, the results showed smaller priming effects for semantically opaque heads compared with matched transparent compounds with the same head. These findings suggest that semantically opaque heads induce an increased processing cost, which may result from the need to suppress the meaning of the head in favor of the meaning of the opaque compound.
  • Creemers, A., & Meyer, A. S. (2022). The processing of ambiguous pronominal reference is sensitive to depth of processing. Glossa Psycholinguistics, 1(1): 3. doi:10.5070/G601166.

    Abstract

    Previous studies on the processing of ambiguous pronominal reference have led to contradictory results: some suggested that ambiguity may hinder processing (Stewart, Holler, & Kidd, 2007), while others showed an ambiguity advantage (Grant, Sloggett, & Dillon, 2020) similar to what has been reported for structural ambiguities. This study provides a conceptual replication of Stewart et al. (2007, Experiment 1), to examine whether the discrepancy in earlier results is caused by the processing depth that participants engage in (cf. Swets, Desmet, Clifton, & Ferreira, 2008). We present the results from a word-by-word self-paced reading experiment with Dutch sentences that contained a personal pronoun in an embedded clause that was either ambiguous or disambiguated through gender features. Depth of processing of the embedded clause was manipulated through offline comprehension questions. The results showed that the difference in reading times for ambiguous versus unambiguous sentences depends on the processing depth: a significant ambiguity penalty was found under deep processing but not under shallow processing. No significant ambiguity advantage was found, regardless of processing depth. This replicates the results in Stewart et al. (2007) using a different methodology and a larger sample size for appropriate statistical power. These findings provide further evidence that ambiguous pronominal reference resolution is a flexible process, such that the way in which ambiguous sentences are processed depends on the depth of processing of the relevant information. Theoretical and methodological implications of these findings are discussed.
  • Creemers, A., & Embick, D. (2021). Retrieving stem meanings in opaque words during auditory lexical processing. Language, Cognition and Neuroscience, 36(9), 1107-1122. doi:10.1080/23273798.2021.1909085.

    Abstract

    Recent constituent priming experiments show that Dutch and German prefixed verbs prime their stem, regardless of semantic transparency (e.g. Smolka et al. [(2014). ‘Verstehen’ (‘understand’) primes ‘stehen’ (‘stand’): Morphological structure overrides semantic compositionality in the lexical representation of German complex verbs. Journal of Memory and Language, 72, 16–36. https://doi.org/10.1016/j.jml.2013.12.002]). We examine whether the processing of opaque verbs (e.g. herhalen “repeat”) involves the retrieval of only the whole-word meaning, or whether the lexical-semantic meaning of the stem (halen as “take/get”) is retrieved as well. We report the results of an auditory semantic priming experiment with Dutch prefixed verbs, testing whether the recognition of a semantic associate to the stem (BRENGEN “bring”) is facilitated by the presentation of an opaque prefixed verb. In contrast to prior visual studies, significant facilitation after semantically opaque primes is found, which suggests that the lexical-semantic meaning of stems in opaque words is retrieved. We examine the implications that these findings have for auditory word recognition, and for the way in which different types of meanings are represented and processed.

    Additional information

    supplemental material
  • Cristia, A., Tsuji, S., & Bergmann, C. (2022). A meta-analytic approach to evaluating the explanatory adequacy of theories. Meta-Psychology, 6: MP.2020.2741. doi:10.15626/MP.2020.2741.

    Abstract

    How can data be used to check theories’ explanatory adequacy? The two traditional and most widespread approaches use single studies and non-systematic narrative reviews to evaluate theories’ explanatory adequacy; more recently, large-scale replications entered the picture. We argue here that none of these approaches fits in with cumulative science tenets. We propose instead Community-Augmented Meta-Analyses (CAMAs), which, like meta-analyses and systematic reviews, are built using all available data; like meta-analyses but not systematic reviews, can rely on sound statistical practices to model methodological effects; and like no other approach, are broad-scoped, cumulative and open. We explain how CAMAs entail a conceptual shift from meta-analyses and systematic reviews, a shift that is useful when evaluating theories’ explanatory adequacy. We then provide step-by-step recommendations for how to implement this approach – and what it means when one cannot. This leads us to conclude that CAMAs highlight areas of uncertainty better than alternative approaches that bring data to bear on theory evaluation, and can trigger a much needed shift towards a cumulative mindset with respect to both theory and data, leading us to do and view experiments and narrative reviews differently.

    Additional information

    All data available at OSF
  • Cristia, A., Lavechin, M., Scaff, C., Soderstrom, M., Rowland, C. F., Räsänen, O., Bunce, J., & Bergelson, E. (2021). A thorough evaluation of the Language Environment Analysis (LENA) system. Behavior Research Methods, 53, 467-486. doi:10.3758/s13428-020-01393-5.

    Abstract

    In the previous decade, dozens of studies involving thousands of children across several research disciplines have made use of a combined daylong audio-recorder and automated algorithmic analysis called the LENAⓇ system, which aims to assess children’s language environment. While the system’s prevalence in the language acquisition domain is steadily growing, there are only scattered validation efforts on only some of its key characteristics. Here, we assess the LENAⓇ system’s accuracy across all of its key measures: speaker classification, Child Vocalization Counts (CVC), Conversational Turn Counts (CTC), and Adult Word Counts (AWC). Our assessment is based on manual annotation of clips that have been randomly or periodically sampled out of daylong recordings, collected from (a) populations similar to the system’s original training data (North American English-learning children aged 3-36 months), (b) children learning another dialect of English (UK), and (c) slightly older children growing up in a different linguistic and socio-cultural setting (Tsimane’ learners in rural Bolivia). We find reasonably high accuracy in some measures (AWC, CVC), with more problematic levels of performance in others (CTC, precision of male adults and other children). Statistical analyses do not support the view that performance is worse for children who are dissimilar from the LENAⓇ original training set. Whether LENAⓇ results are accurate enough for a given research, educational, or clinical application depends largely on the specifics at hand. We therefore conclude with a set of recommendations to help researchers make this determination for their goals.
  • Cristia, A. (2008). Cue weighting at different ages. Purdue Linguistics Association Working Papers, 1, 87-105.
  • Cristia, A., & Seidl, A. (2008). Is infants' learning of sound patterns constrained by phonological features? Language Learning and Development, 4, 203-227. doi:10.1080/15475440802143109.

    Abstract

    Phonological patterns in languages often involve groups of sounds rather than individual sounds, which may be explained if phonology operates on the abstract features shared by those groups (Troubetzkoy, 1939/1969; Chomsky & Halle, 1968). Such abstract features may be present in the developing grammar either because they are part of a Universal Grammar included in the genetic endowment of humans (e.g., Hale, Kissock and Reiss, 2006), or plausibly because infants induce features from their linguistic experience (e.g., Mielke, 2004). A first experiment tested 7-month-old infants' learning of an artificial grammar pattern involving either a set of sounds defined by a phonological feature, or a set of sounds that cannot be described with a single feature—an “arbitrary” set. Infants were able to induce the constraint and generalize it to a novel sound only for the set that shared the phonological feature. A second study showed that infants' inability to learn the arbitrary grouping was not due to their inability to encode a constraint on some of the sounds involved.
  • Cristia, A., & Seidl, A. (2008). Why cross-linguistic frequency cannot be equated with ease of acquisition. University of Pennsylvania Working Papers in Linguistics, 14(1), 71-82. Retrieved from http://repository.upenn.edu/pwpl/vol14/iss1/6.
  • Cronin, K. A., & Snowdon, C. T. (2008). The effects of unequal reward distributions on cooperative problem solving by cottontop tamarins, Saguinus oedipus. Animal Behaviour, 75, 245-257. doi:10.1016/j.anbehav.2007.04.032.

    Abstract

    Cooperation among nonhuman animals has been the topic of much theoretical and empirical research, but few studies have examined systematically the effects of various reward payoffs on cooperative behaviour. Here, we presented heterosexual pairs of cooperatively breeding cottontop tamarins with a cooperative problem-solving task. In a series of four experiments, we examined how the tamarins’ cooperative performance changed under conditions in which (1) both actors were mutually rewarded, (2) both actors were rewarded reciprocally across days, (3) both actors competed for a monopolizable reward and (4) one actor repeatedly delivered a single reward to the other actor. The tamarins showed sensitivity to the reward structure, showing the greatest percentage of trials solved and shortest latency to solve the task in the mutual reward experiment and the lowest percentage of trials solved and longest latency to solve the task in the experiment in which one actor was repeatedly rewarded. However, even in the experiment in which the fewest trials were solved, the tamarins still solved 46 ± 12% of trials and little to no aggression was observed among partners following inequitable reward distributions. The tamarins did, however, show selfish motivation in each of the experiments. Nevertheless, in all experiments, unrewarded individuals continued to cooperate and procure rewards for their social partners.
  • Cucchiarini, C., Hubers, F., & Strik, H. (2022). Learning L2 idioms in a CALL environment: The role of practice intensity, modality, and idiom properties. Computer Assisted Language Learning, 35(4), 863-891. doi:10.1080/09588221.2020.1752734.

    Abstract

    Idiomatic expressions like hit the road or turn the tables are known to be problematic for L2 learners, but research indicates that learning L2 idiomatic language is important. Relatively few studies, most of them focusing on English idioms, have investigated how L2 idioms are actually acquired and how this process is affected by important idiom properties like transparency (the degree to which the figurative meaning of an idiom can be inferred from its literal analysis) and cross-language overlap (the degree to which L2 idioms correspond to L1 idioms). The present study employed a specially designed CALL system to investigate the effects of intensity of practice and the reading modality on learning Dutch L2 idioms, as well as the impact of idiom transparency and cross-language overlap. The results show that CALL practice with a focus on meaning and form is effective for learning L2 idioms and that the degree of practice needed depends on the properties of the idioms. L2 learners can achieve or even exceed native-like performance. Practicing reading idioms aloud does not lead to significantly higher performance than reading idioms silently. These findings have theoretical implications as they show that differences between native speakers and L2 learners are due to differences in exposure, rather than to different underlying acquisition mechanisms. For teaching practice, this study indicates that a properly designed CALL system is an effective and ecologically sound environment for learning L2 idioms, a generally unattended area in L2 classes, and that teaching priorities should be based on degree of transparency and cross-language overlap of L2 idioms.
  • Cuellar-Partida, G., Tung, J. Y., Eriksson, N., Albrecht, E., Aliev, F., Andreassen, O. A., Barroso, I., Beckmann, J. S., Boks, M. P., Boomsma, D. I., Boyd, H. A., Breteler, M. M. B., Campbell, H., Chasman, D. I., Cherkas, L. F., Davies, G., De Geus, E. J. C., Deary, I. J., Deloukas, P., Dick, D. M., Duffy, D. L., Eriksson, J. G., Esko, T., Feenstra, B., Geller, F., Gieger, C., Giegling, I., Gordon, S. D., Han, J., Hansen, T. F., Hartmann, A. M., Hayward, C., Heikkilä, K., Hicks, A. A., Hirschhorn, J. N., Hottenga, J.-J., Huffman, J. E., Hwang, L.-D., Ikram, M. A., Kaprio, J., Kemp, J. P., Khaw, K.-T., Klopp, N., Konte, B., Kutalik, Z., Lahti, J., Li, X., Loos, R. J. F., Luciano, M., Magnusson, S. H., Mangino, M., Marques-Vidal, P., Martin, N. G., McArdle, W. L., McCarthy, M. I., Medina-Gomez, C., Melbye, M., Melville, S. A., Metspalu, A., Milani, L., Mooser, V., Nelis, M., Nyholt, D. R., O'Connell, K. S., Ophoff, R. A., Palmer, C., Palotie, A., Palviainen, T., Pare, G., Paternoster, L., Peltonen, L., Penninx, B. W. J. H., Polasek, O., Pramstaller, P. P., Prokopenko, I., Raikkonen, K., Ripatti, S., Rivadeneira, F., Rudan, I., Rujescu, D., Smit, J. H., Smith, G. D., Smoller, J. W., Soranzo, N., Spector, T. D., St Pourcain, B., Starr, J. M., Stefánsson, H., Steinberg, S., Teder-Laving, M., Thorleifsson, G., Stefansson, K., Timpson, N. J., Uitterlinden, A. G., Van Duijn, C. M., Van Rooij, F. J. A., Vink, J. M., Vollenweider, P., Vuoksimaa, E., Waeber, G., Wareham, N. J., Warrington, N., Waterworth, D., Werge, T., Wichmann, H.-E., Widen, E., Willemsen, G., Wright, A. F., Wright, M. J., Xu, M., Zhao, J. H., Kraft, P., Hinds, D. A., Lindgren, C. M., Magi, R., Neale, B. M., Evans, D. M., & Medland, S. E. (2021). Genome-wide association study identifies 48 common genetic variants associated with handedness. Nature Human Behaviour, 5, 59-70. doi:10.1038/s41562-020-00956-y.

    Abstract

    Handedness has been extensively studied because of its relationship with language and the over-representation of left-handers in some neurodevelopmental disorders. Using data from the UK Biobank, 23andMe and the International Handedness Consortium, we conducted a genome-wide association meta-analysis of handedness (N = 1,766,671). We found 41 loci associated (P < 5 × 10⁻⁸) with left-handedness and 7 associated with ambidexterity. Tissue-enrichment analysis implicated the CNS in the aetiology of handedness. Pathways including regulation of microtubules and brain morphology were also highlighted. We found suggestive positive genetic correlations between left-handedness and neuropsychiatric traits, including schizophrenia and bipolar disorder. Furthermore, the genetic correlation between left-handedness and ambidexterity is low (rG = 0.26), which implies that these traits are largely influenced by different genetic mechanisms. Our findings suggest that handedness is highly polygenic and that the genetic variants that predispose to left-handedness may underlie part of the association with some psychiatric disorders.

    Additional information

    supplementary tables
  • Cutler, A. (2008). The abstract representations in speech processing. Quarterly Journal of Experimental Psychology, 61(11), 1601-1619. doi:10.1080/13803390802218542.

    Abstract

    Speech processing by human listeners derives meaning from acoustic input via intermediate steps involving abstract representations of what has been heard. Recent results from several lines of research are here brought together to shed light on the nature and role of these representations. In spoken-word recognition, representations of phonological form and of conceptual content are dissociable. This follows from the independence of patterns of priming for a word's form and its meaning. The nature of the phonological-form representations is determined not only by acoustic-phonetic input but also by other sources of information, including metalinguistic knowledge. This follows from evidence that listeners can store two forms as different without showing any evidence of being able to detect the difference in question when they listen to speech. The lexical representations are in turn separate from prelexical representations, which are also abstract in nature. This follows from evidence that perceptual learning about speaker-specific phoneme realization, induced on the basis of a few words, generalizes across the whole lexicon to inform the recognition of all words containing the same phoneme. The efficiency of human speech processing has its basis in the rapid execution of operations over abstract representations.
  • Cutler, A., Norris, D., & Sebastián-Gallés, N. (2004). Phonemic repertoire and similarity within the vocabulary. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 65-68). Seoul: Sunjijn Printing Co.

    Abstract

    Language-specific differences in the size and distribution of the phonemic repertoire can have implications for the task facing listeners in recognising spoken words. A language with more phonemes will allow shorter words and reduced embedding of short words within longer ones, decreasing the potential for spurious lexical competitors to be activated by speech signals. We demonstrate that this is the case via comparative analyses of the vocabularies of English and Spanish. A language which uses suprasegmental as well as segmental contrasts, however, can substantially reduce the extent of spurious embedding.
  • Cutler, A., & Butterfield, S. (2003). Rhythmic cues to speech segmentation: Evidence from juncture misperception. In J. Field (Ed.), Psycholinguistics: A resource book for students. (pp. 185-189). London: Routledge.
  • Cutler, A., Murty, L., & Otake, T. (2003). Rhythmic similarity effects in non-native listening? In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 329-332). Adelaide: Causal Productions.

    Abstract

    Listeners rely on native-language rhythm in segmenting speech; in different languages, stress-, syllable- or mora-based rhythm is exploited. This language-specificity affects listening to non- native speech, if native procedures are applied even though inefficient for the non-native language. However, speakers of two languages with similar rhythmic interpretation should segment their own and the other language similarly. This was observed to date only for related languages (English-Dutch; French-Spanish). We now report experiments in which Japanese listeners heard Telugu, a Dravidian language unrelated to Japanese, and Telugu listeners heard Japanese. In both cases detection of target sequences in speech was harder when target boundaries mismatched mora boundaries, exactly the pattern that Japanese listeners earlier exhibited with Japanese and other languages. These results suggest that Telugu and Japanese listeners use similar procedures in segmenting speech, and support the idea that languages fall into rhythmic classes, with aspects of phonological structure affecting listeners' speech segmentation.
  • Cutler, A. (2004). Segmentation of spoken language by normal adult listeners. In R. Kent (Ed.), MIT encyclopedia of communication sciences and disorders (pp. 392-395). Cambridge, MA: MIT Press.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A., McQueen, J. M., Butterfield, S., & Norris, D. (2008). Prelexically-driven perceptual retuning of phoneme boundaries. In Proceedings of Interspeech 2008 (pp. 2056-2056).

    Abstract

    Listeners heard an ambiguous /f-s/ in nonword contexts where only one of /f/ or /s/ was legal (e.g., frul/*srul or *fnud/snud). In later categorisation of a phonetic continuum from /f/ to /s/, their category boundaries had shifted; hearing -rul led to expanded /f/ categories, -nud expanded /s/. Thus phonotactic sequence information alone induces perceptual retuning of phoneme category boundaries; lexical access is not required.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A. (2003). The perception of speech: Psycholinguistic aspects. In W. Frawley (Ed.), International encyclopaedia of linguistics (pp. 154-157). Oxford: Oxford University Press.
  • Cutler, A., & Henton, C. G. (2004). There's many a slip 'twixt the cup and the lip. In H. Quené, & V. Van Heuven (Eds.), On speech and Language: Studies for Sieb G. Nooteboom (pp. 37-45). Utrecht: Netherlands Graduate School of Linguistics.

    Abstract

    The retiring academic may look back upon, inter alia, years of conference attendance. Speech error researchers are uniquely fortunate because they can collect data in any situation involving communication; accordingly, the retiring speech error researcher will have collected data at those conferences. We here address the issue of whether error data collected in situations involving conviviality (such as at conferences) is representative of error data in general. Our approach involved a comparison, across three levels of linguistic processing, between a specially constructed Conviviality Sample and the largest existing source of speech error data, the newly available Fromkin Speech Error Database. The results indicate that there are grounds for regarding the data in the Conviviality Sample as a better than average reflection of the true population of all errors committed. These findings encourage us to recommend further data collection in collaboration with like-minded colleagues.
  • Cutler, A. (2004). Twee regels voor academische vorming. In H. Procee (Ed.), Bij die wereld wil ik horen! Zesendertig columns en drie essays over de vorming tot academicus. (pp. 42-45). Amsterdam: Boom.
  • Cutler, A., & Jesse, A. (2021). Word stress in speech perception. In J. S. Pardo, L. C. Nygaard, & D. B. Pisoni (Eds.), The handbook of speech perception (2nd ed., pp. 239-265). Chichester: Wiley.
  • Cutler, A., Aslin, R. N., Gervain, J., & Nespor, M. (Eds.). (2021). Special issue in honor of Jacques Mehler, Cognition's founding editor [Special Issue]. Cognition, 213.
  • Cutler, A., Aslin, R. N., Gervain, J., & Nespor, M. (2021). Special issue in honor of Jacques Mehler, Cognition's founding editor [preface]. Cognition, 213: 104786. doi:10.1016/j.cognition.2021.104786.
  • Cutler, A., Ernestus, M., Warner, N., & Weber, A. (2022). Managing speech perception data sets. In B. McDonnell, E. Koller, & L. B. Collister (Eds.), The Open Handbook of Linguistic Data Management (pp. 565-573). Cambridge, MA: MIT Press. doi:10.7551/mitpress/12200.003.0055.
  • Ip, M. H. K., & Cutler, A. (2022). Juncture prosody across languages: Similar production but dissimilar perception. Laboratory Phonology, 13(1): 5. doi:10.16995/labphon.6464.

    Abstract

    How do speakers of languages with different intonation systems produce and perceive prosodic junctures in sentences with identical structural ambiguity? Native speakers of English and of Mandarin produced potentially ambiguous sentences with a prosodic juncture either earlier in the utterance (e.g., “He gave her # dog biscuits,” “他给她 # 狗饼干”), or later (e.g., “He gave her dog # biscuits,” “他给她狗 # 饼干”). These production data showed that prosodic disambiguation is realised very similarly in the two languages, despite some differences in the degree to which individual juncture cues (e.g., pausing) were favoured. In perception experiments with a new disambiguation task, requiring speeded responses to select the correct meaning for structurally ambiguous sentences, language differences in disambiguation response time appeared: Mandarin speakers correctly disambiguated sentences with earlier juncture faster than those with later juncture, while English speakers showed the reverse. Mandarin speakers with L2 English did not show their native-language response time pattern when they heard the English ambiguous sentences. Thus, even with identical structural ambiguity and identically cued production, prosodic juncture perception across languages can differ.

  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: vol. 5 (pp. 1751-1754). Sydney: ICSLP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
