Publications

  • Beattie, G. W., Cutler, A., & Pearson, M. (1982). Why is Mrs Thatcher interrupted so often? [Letters to Nature]. Nature, 300, 744-747. doi:10.1038/300744a0.

    Abstract

    If a conversation is to proceed smoothly, the participants have to take turns to speak. Studies of conversation have shown that there are signals which speakers give to inform listeners that they are willing to hand over the conversational turn [1-4]. Some of these signals are part of the text (for example, completion of syntactic segments), some are non-verbal (such as completion of a gesture), but most are carried by the pitch, timing and intensity pattern of the speech; for example, both pitch and loudness tend to drop particularly low at the end of a speaker's turn. When one speaker interrupts another, the two can be said to be disputing who has the turn. Interruptions can occur because one participant tries to dominate or disrupt the conversation. But it could also be the case that mistakes occur in the way these subtle turn-yielding signals are transmitted and received. We demonstrate here that many interruptions in an interview with Mrs Margaret Thatcher, the British Prime Minister, occur at points where independent judges agree that her turn appears to have finished. It is suggested that she is unconsciously displaying turn-yielding cues at certain inappropriate points. The turn-yielding cues responsible are identified.
  • Becker, M., Guadalupe, T., Franke, B., Hibar, D. P., Renteria, M. E., Stein, J. L., Thompson, P. M., Francks, C., Vernes, S. C., & Fisher, S. E. (2016). Early developmental gene enhancers affect subcortical volumes in the adult human brain. Human Brain Mapping, 37(5), 1788-1800. doi:10.1002/hbm.23136.

    Abstract

    Genome-wide association screens aim to identify common genetic variants contributing to the phenotypic variability of complex traits, such as human height or brain morphology. The identified genetic variants are mostly within noncoding genomic regions and the biology of the genotype–phenotype association typically remains unclear. In this article, we propose a complementary targeted strategy to reveal the genetic underpinnings of variability in subcortical brain volumes, by specifically selecting genomic loci that are experimentally validated forebrain enhancers, active in early embryonic development. We hypothesized that genetic variation within these enhancers may affect the development and ultimately the structure of subcortical brain regions in adults. We tested whether variants in forebrain enhancer regions showed an overall enrichment of association with volumetric variation in subcortical structures of >13,000 healthy adults. We observed significant enrichment of genomic loci that affect the volume of the hippocampus within forebrain enhancers (empirical P = 0.0015), a finding which robustly passed the adjusted threshold for testing of multiple brain phenotypes (cutoff of P < 0.0083 at an alpha of 0.05). In analyses of individual single nucleotide polymorphisms (SNPs), we identified an association upstream of the ID2 gene with rs7588305 and variation in hippocampal volume. This SNP-based association survived multiple-testing correction for the number of SNPs analyzed but not for the number of subcortical structures. Targeting known regulatory regions offers a way to understand the underlying biology that connects genotypes to phenotypes, particularly in the context of neuroimaging genetics. This biology-driven approach generates testable hypotheses regarding the functional biology of identified associations.
  • Becker, A., & Klein, W. (1984). Notes on the internal organization of a learner variety. In P. Auer, & A. Di Luzio (Eds.), Interpretive sociolinguistics (pp. 215-231). Tübingen: Narr.
  • Becker, M. (2016). On the identification of FOXP2 gene enhancers and their role in brain development. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bekemeier, N., Brenner, D., Klepp, A., Biermann-Ruben, K., & Indefrey, P. (2019). Electrophysiological correlates of concept type shifts. PLoS One, 14(3): e0212624. doi:10.1371/journal.pone.0212624.

    Abstract

    A recent semantic theory of nominal concepts by Löbner [1] posits that–due to their inherent uniqueness and relationality properties–noun concepts can be classified into four concept types (CTs): sortal, individual, relational, functional. For sortal nouns the default determination is indefinite (a stone), for individual nouns it is definite (the sun), for relational and functional nouns it is possessive (his ear, his father). Incongruent determination leads to a concept type shift: his father (functional concept: unique, relational)–a father (sortal concept: non-unique, non-relational). Behavioral studies on CT shifts have demonstrated a CT congruence effect, with congruent determiners triggering faster lexical decision times on the subsequent noun than incongruent ones [2, 3]. The present ERP study investigated electrophysiological correlates of congruent and incongruent determination in German noun phrases, and specifically, whether the CT congruence effect could be indexed by such classic ERP components as N400, LAN or P600. If incongruent determination affects the lexical retrieval or semantic integration of the noun, it should be reflected in the amplitude of the N400 component. If, however, CT congruence is processed by the same neuronal mechanisms that underlie morphosyntactic processing, incongruent determination should trigger LAN or/and P600. These predictions were tested in two ERP studies. In Experiment 1, participants just listened to noun phrases. In Experiment 2, they performed a wellformedness judgment task. The processing of (in)congruent CTs (his sun vs. the sun) was compared to the processing of morphosyntactic and semantic violations in control conditions. Whereas the control conditions elicited classic electrophysiological violation responses (N400, LAN, & P600), CT-incongruences did not. Instead they showed novel concept-type specific response patterns. The absence of the classic ERP components suggests that CT-incongruent determination is not perceived as a violation of the semantic or morphosyntactic structure of the noun phrase.

    Additional information

    dataset
  • Belke, E., Shao, Z., & Meyer, A. S. (2017). Strategic origins of early semantic facilitation in the blocked-cyclic naming paradigm. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(10), 1659-1668. doi:10.1037/xlm0000399.

    Abstract

    In the blocked-cyclic naming paradigm, participants repeatedly name small sets of objects that do or do not belong to the same semantic category. A standard finding is that, after a first presentation cycle where one might find semantic facilitation, naming is slower in related (homogeneous) than in unrelated (heterogeneous) sets. According to competitive theories of lexical selection, this is because the lexical representations of the object names compete more vigorously in homogeneous than in heterogeneous sets. However, Navarrete, del Prato, Peressotti, and Mahon (2014) argued that this pattern of results was not due to increased lexical competition but to weaker repetition priming in homogeneous compared to heterogeneous sets. They demonstrated that when homogeneous sets were not repeated immediately but interleaved with unrelated sets, semantic relatedness induced facilitation rather than interference. We replicate this finding but also show that the facilitation effect has a strategic origin: It is substantial when sets are separated by pauses, making it easy for participants to notice the relatedness within some sets and use it to predict upcoming items. However, the effect is much reduced when these pauses are eliminated. In our view, the semantic facilitation effect does not constitute evidence against competitive theories of lexical selection. It can be accounted for within any framework that acknowledges strategic influences on the speed of object naming in the blocked-cyclic naming paradigm.
  • Benazzo, S., Dimroth, C., Perdue, C., & Watorek, M. (2004). Le rôle des particules additives dans la construction de la cohésion discursive en langue maternelle et en langue étrangère. Langages, 155, 76-106.

    Abstract

    We compare the use of additive particles such as aussi ('also'), encore ('again, still'), and their 'translation equivalents', in a narrative task based on a series of pictures performed by groups of children aged 4 years, 7 years and 10 years using their first language (L1 French, German, Polish), and by adult Polish and German learners of French as a second language (L2). From the cross-sectional analysis we propose developmental patterns which show remarkable similarities for all types of learner, but which stem from different determining factors. For the children, the patterns can best be explained by the development of their capacity to use available items in appropriate discourse contexts; for the adults, the limitations of their linguistic repertoire at different levels of achievement determine the possibility of incorporating these items into their utterance structure. Finally, we discuss to what extent these general tendencies are influenced by the specificities of the different languages used.
  • Benetti, S., Zonca, J., Ferrari, A., Rezk, M., Rabini, G., & Collignon, O. (2021). Visual motion processing recruits regions selective for auditory motion in early deaf individuals. NeuroImage, 230: 117816. doi:10.1016/j.neuroimage.2021.117816.

    Abstract

    In early deaf individuals, the auditory deprived temporal brain regions become engaged in visual processing. In our study we tested further the hypothesis that intrinsic functional specialization guides the expression of cross-modal responses in the deprived auditory cortex. We used functional MRI to characterize the brain response to horizontal, radial and stochastic visual motion in early deaf and hearing individuals matched for the use of oral or sign language. Visual motion showed enhanced response in the ‘deaf’ mid-lateral planum temporale, a region selective to auditory motion as demonstrated by a separate auditory motion localizer in hearing people. Moreover, multivariate pattern analysis revealed that this reorganized temporal region showed enhanced decoding of motion categories in the deaf group, while visual motion-selective region hMT+/V5 showed reduced decoding when compared to hearing people. Dynamic Causal Modelling revealed that the ‘deaf’ motion-selective temporal region shows a specific increase of its functional interactions with hMT+/V5 and is now part of a large-scale visual motion selective network. In addition, we observed preferential responses to radial, compared to horizontal, visual motion in the ‘deaf’ right superior temporal cortex region that also show preferential response to approaching/receding sounds in the hearing brain. Overall, our results suggest that the early experience of auditory deprivation interacts with intrinsic constraints and triggers a large-scale reallocation of computational load between auditory and visual brain regions that typically support the multisensory processing of motion information.

    Additional information

    supplementary materials
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Listening with great expectations: An investigation of word form anticipations in naturalistic speech. In Proceedings of Interspeech 2019 (pp. 2265-2269). doi:10.21437/Interspeech.2019-2741.

    Abstract

    The event-related potential (ERP) component named phonological mismatch negativity (PMN) arises when listeners hear an unexpected word form in a spoken sentence [1]. The PMN is thought to reflect the mismatch between expected and perceived auditory speech input. In this paper, we use the PMN to test a central premise in the predictive coding framework [2], namely that the mismatch between prior expectations and sensory input is an important mechanism of perception. We test this with natural speech materials containing approximately 50,000 word tokens. The corresponding EEG-signal was recorded while participants (n = 48) listened to these materials. Following [3], we quantify the mismatch with two word probability distributions (WPD): a WPD based on preceding context, and a WPD that is additionally updated based on the incoming audio of the current word. We use the between-WPD cross entropy for each word in the utterances and show that a higher cross entropy correlates with a more negative PMN. Our results show that listeners anticipate auditory input while processing each word in naturalistic speech. Moreover, complementing previous research, we show that predictive language processing occurs across the whole probability spectrum.
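    As a reading aid (this formulation is ours, not quoted from the paper), the between-WPD cross entropy for a word can be written as

        H(p, q) = - \sum_{w \in V} p(w) \log q(w)

    where p is the word probability distribution based on the preceding context, q is the distribution after the auditory update for the current word, and V is the vocabulary; larger values indicate a larger mismatch between expected and perceived input.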
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Quantifying expectation modulation in human speech processing. In Proceedings of Interspeech 2019 (pp. 2270-2274). doi:10.21437/Interspeech.2019-2685.

    Abstract

    The mismatch between top-down predicted and bottom-up perceptual input is an important mechanism of perception according to the predictive coding framework (Friston, [1]). In this paper we develop and validate a new information-theoretic measure that quantifies the mismatch between expected and observed auditory input during speech processing. We argue that such a mismatch measure is useful for the study of speech processing. To compute the mismatch measure, we use naturalistic speech materials containing approximately 50,000 word tokens. For each word token we first estimate the prior word probability distribution with the aid of statistical language modelling, and next use automatic speech recognition to update this word probability distribution based on the unfolding speech signal. We validate the mismatch measure with multiple analyses, and show that the auditory-based update improves the probability of the correct word and lowers the uncertainty of the word probability distribution. Based on these results, we argue that it is possible to explicitly estimate the mismatch between predicted and perceived speech input with the cross entropy between word expectations computed before and after an auditory update.
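    A minimal illustrative sketch (toy numbers and function names, not the authors' implementation or data) of how such a cross-entropy mismatch score could be computed from a prior and an auditorily updated word probability distribution:

        import math

        def cross_entropy(prior, posterior, eps=1e-12):
            """Cross entropy H(prior, posterior) over a shared vocabulary, in nats."""
            vocab = set(prior) | set(posterior)
            return -sum(prior.get(w, 0.0) * math.log(posterior.get(w, eps)) for w in vocab)

        # Toy example: expectations for the next word before and after hearing its first sounds.
        prior = {"cat": 0.5, "dog": 0.3, "car": 0.2}         # from the language model (context only)
        posterior = {"cat": 0.10, "dog": 0.05, "car": 0.85}  # after the ASR-based auditory update
        print(cross_entropy(prior, posterior))               # larger value = larger mismatch

    Here a low value means the unfolding audio confirms the contextual expectations, while a high value signals a mismatch between predicted and perceived input.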
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Do speech registers differ in the predictability of words? International Journal of Corpus Linguistics, 24(1), 98-130. doi:10.1075/ijcl.17062.ben.

    Abstract

    Previous research has demonstrated that language use can vary depending on the context of situation. The present paper extends this finding by comparing word predictability differences between 14 speech registers ranging from highly informal conversations to read-aloud books. We trained 14 statistical language models to compute register-specific word predictability and trained a register classifier on the perplexity score vector of the language models. The classifier distinguishes perfectly between samples from all speech registers and this result generalizes to unseen materials. We show that differences in vocabulary and sentence length cannot explain the speech register classifier’s performance. The combined results show that speech registers differ in word predictability.
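    A rough, hypothetical sketch of the classification setup described above (the per-register language models, the .perplexity interface, and the classifier choice are illustrative stand-ins, not the authors' implementation):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def perplexity_vector(tokens, register_lms):
            # One perplexity score per register-specific language model (lower = more typical of that register).
            return np.array([lm.perplexity(tokens) for lm in register_lms])

        def train_register_classifier(samples, labels, register_lms):
            # samples: held-out token sequences; labels: their true register indices (0..13).
            X = np.vstack([perplexity_vector(s, register_lms) for s in samples])
            return LogisticRegression(max_iter=1000).fit(X, labels)

    The key idea is that the feature vector is not the text itself but how surprising the text is to each register's language model, which is what lets the classifier separate the 14 registers.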
  • Bentum, M. (2021). Listening with great expectations: A study of predictive natural speech processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bercelli, F., Viaro, M., & Rossano, F. (2004). Attività in alcuni generi di psicoterapia. Rivista di psicolinguistica applicata, IV (2/3), 111-127. doi:10.1400/19208.

    Abstract

    The main aim of our paper is to contribute to the outline of a general inventory of activities in psychotherapy, as a step towards a description of overall conversational organizations of different therapeutic approaches. From the perspective of Conversation Analysis, we describe some activities commonly occurring in a corpus of sessions conducted by cognitive and relational-systemic therapists. Two activities appear to be basic: (a) inquiry: therapists elicit information from patients on their problems and circumstances; (b) reworking: therapists say something designed as an elaboration of what patients have previously said, or as something that can be grounded on it; and patients are induced to confirm/disprove and contribute to the elaboration. Furthermore, we describe other activities, which turn out to be auxiliary to the basic ones: storytelling, procedural arrangement, recalling, noticing, teaching. We finally show some ways in which these activities can be integrated through conversational interaction.
  • Bergelson*, E., Casillas*, M., Soderstrom, M., Seidl, A., Warlaumont, A. S., & Amatuni, A. (2019). What Do North American Babies Hear? A large-scale cross-corpus analysis. Developmental Science, 22(1): e12724. doi:10.1111/desc.12724.

    Abstract

    (* indicates joint first authorship.) A range of demographic variables influence how much speech young children hear. However, because studies have used vastly different sampling methods, quantitative comparison of interlocking demographic effects has been nearly impossible, across or within studies. We harnessed a unique collection of existing naturalistic, day-long recordings from 61 homes across four North American cities to examine language input as a function of age, gender, and maternal education. We analyzed adult speech heard by 3- to 20-month-olds who wore audio recorders for an entire day. We annotated speaker gender and speech register (child-directed or adult-directed) for 10,861 utterances from female and male adults in these recordings. Examining age, gender, and maternal education collectively in this ecologically-valid dataset, we find several key results. First, the speaker gender imbalance in the input is striking: children heard 2-3 times more speech from females than males. Second, children in higher-maternal-education homes heard more child-directed speech than those in lower-maternal-education homes. Finally, our analyses revealed a previously unreported effect: the proportion of child-directed speech in the input increases with age, due to a decrease in adult-directed speech with age. This large-scale analysis is an important step forward in collectively examining demographic variables that influence early development, made possible by pooled, comparable, day-long recordings of children's language environments. The audio recordings, annotations, and annotation software are readily available for re-use and re-analysis by other researchers.

    Additional information

    desc12724-sup-0001-supinfo.pdf
  • Bergmann, C., & Cristia, A. (2016). Development of infants' segmentation of words from native speech: a meta-analytic approach. Developmental Science, 19(6), 901-917. doi:10.1111/desc.12341.

    Abstract

    Infants start learning words, the building blocks of language, at least by 6 months. To do so, they must be able to extract the phonological form of words from running speech. A rich literature has investigated this process, termed word segmentation. We addressed the fundamental question of how infants of different ages segment words from their native language using a meta-analytic approach. Based on previous popular theoretical and experimental work, we expected infants to display familiarity preferences early on, with a switch to novelty preferences as infants become more proficient at processing and segmenting native speech. We also considered the possibility that this switch may occur at different points in time as a function of infants' native language and took into account the impact of various task- and stimulus-related factors that might affect difficulty. The combined results from 168 experiments reporting on data gathered from 3774 infants revealed a persistent familiarity preference across all ages. There was no significant effect of additional factors, including native language and experiment design. Further analyses revealed no sign of selective data collection or reporting. We conclude that models of infant information processing that are frequently cited in this domain may not, in fact, apply in the case of segmenting words from native speech.
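    For readers unfamiliar with the method, meta-analyses of this kind typically rest on an inverse-variance random-effects model (the general framework; the exact model fitted in the paper may differ), pooling study-level effect sizes g_i as

        \hat{\theta} = \frac{\sum_i w_i g_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i + \tau^2}

    where v_i is the sampling variance of study i and \tau^2 is the estimated between-study variance.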

    Additional information

    desc12341-sup-0001-sup_material.doc
  • Bergmann, C., Cristia, A., & Dupoux, E. (2016). Discriminability of sound contrasts in the face of speaker variation quantified. In Proceedings of the 38th Annual Conference of the Cognitive Science Society (pp. 1331-1336). Austin, TX: Cognitive Science Society.

    Abstract

    How does a naive language learner deal with speaker variation irrelevant to distinguishing word meanings? Experimental data is contradictory, and incompatible models have been proposed. Here, we examine basic assumptions regarding the acoustic signal the learner deals with: Is speaker variability a hurdle in discriminating sounds or can it easily be ignored? To this end, we summarize existing infant data. We then present machine-based discriminability scores of sound pairs obtained without any language knowledge. Our results show that speaker variability decreases sound contrast discriminability, and that some contrasts are affected more than others. However, chance performance is rare; most contrasts remain discriminable in the face of speaker variation. We take our results to mean that speaker variation is not a uniform hurdle to discriminating sound contrasts, and careful examination is necessary when planning and interpreting studies testing whether and to what extent infants (and adults) are sensitive to speaker differences.

    Additional information

    Scripts and data
  • Bergmann, C., Boves, L., & Ten Bosch, L. (2011). Measuring word learning performance in computational models and infants. In Proceedings of the IEEE Conference on Development and Learning, and Epigenetic Robotics. Frankfurt am Main, Germany, 24-27 Aug. 2011.

    Abstract

    In the present paper we investigate the effect of categorising raw behavioural data or computational model responses. In addition, the effect of averaging over stimuli from potentially different populations is assessed. To this end, we replicate studies on word learning and generalisation abilities using the ACORNS models. Our results show that discrete categories may obscure interesting phenomena in the continuous responses. For example, the finding that learning in the model saturates very early at a uniform high recognition accuracy only holds for categorical representations. Additionally, a large difference in the accuracy for individual words is obscured by averaging over all stimuli. Because different words behaved differently for different speakers, we could not identify a phonetic basis for the differences. Implications and new predictions for infant behaviour are discussed.
  • Bergmann, C., Boves, L., & Ten Bosch, L. (2011). Thresholding word activations for response scoring - Modelling psycholinguistic data. In Proceedings of the 12th Annual Conference of the International Speech Communication Association [Interspeech 2011] (pp. 769-772). ISCA.

    Abstract

    In the present paper we investigate the effect of categorising raw behavioural data or computational model responses. In addition, the effect of averaging over stimuli from potentially different populations is assessed. To this end, we replicate studies on word learning and generalisation abilities using the ACORNS models. Our results show that discrete categories may obscure interesting phenomena in the continuous responses. For example, the finding that learning in the model saturates very early at a uniform high recognition accuracy only holds for categorical representations. Additionally, a large difference in the accuracy for individual words is obscured by averaging over all stimuli. Because different words behaved differently for different speakers, we could not identify a phonetic basis for the differences. Implications and new predictions for infant behaviour are discussed.
  • Bergmann, C., Tsuji, S., & Cristia, A. (2017). Top-down versus bottom-up theories of phonological acquisition: A big data approach. In Proceedings of Interspeech 2017 (pp. 2103-2107).

    Abstract

    Recent work has made available a number of standardized meta-analyses bearing on various aspects of infant language processing. We utilize data from two such meta-analyses (discrimination of vowel contrasts and word segmentation, i.e., recognition of word forms extracted from running speech) to assess whether the published body of empirical evidence supports a bottom-up versus a top-down theory of early phonological development by leveraging the power of results from thousands of infants. We predicted that if infants can rely purely on auditory experience to develop their phonological categories, then vowel discrimination and word segmentation should develop in parallel, with the latter being potentially lagged compared to the former. However, if infants crucially rely on word form information to build their phonological categories, then development at the word level must precede the acquisition of native sound categories. Our results do not support the latter prediction. We discuss potential implications and limitations, most saliently that word forms are only one top-down level proposed to affect phonological development, with other proposals suggesting that top-down pressures emerge from lexical (i.e., word-meaning pairs) development. This investigation also highlights general procedures by which standardized meta-analyses may be reused to answer theoretical questions spanning across phenomena.

    Additional information

    Scripts and data
  • Bertamini, M., Rampone, G., Makin, A. D. J., & Jessop, A. (2019). Symmetry preference in shapes, faces, flowers and landscapes. PeerJ, 7: e7078. doi:10.7717/peerj.7078.

    Abstract

    Most people like symmetry, and symmetry has been extensively used in visual art and architecture. In this study, we compared preference for images of abstract and familiar objects in the original format or when containing perfect bilateral symmetry. We created pairs of images for different categories: male faces, female faces, polygons, smoothed version of the polygons, flowers, and landscapes. This design allows us to compare symmetry preference in different domains. Each observer saw all categories randomly interleaved but saw only one of the two images in a pair. After recording preference, we recorded a rating of how salient the symmetry was for each image, and measured how quickly observers could decide which of the two images in a pair was symmetrical. Results reveal a general preference for symmetry in the case of shapes and faces. For landscapes, natural (no perfect symmetry) images were preferred. Correlations with judgments of saliency were present but generally low, and for landscapes the salience of symmetry was negatively related to preference. However, even within the category where symmetry was not liked (landscapes), the separate analysis of original and modified stimuli showed an interesting pattern: Salience of symmetry was correlated positively (artificial) or negatively (original) with preference, suggesting different effects of symmetry within the same class of stimuli based on context and categorization.

    Additional information

    Supplemental Information
  • Besharati, S., Forkel, S. J., Kopelman, M., Solms, M., Jenkinson, P., & Fotopoulou, A. (2016). Mentalizing the body: Spatial and social cognition in anosognosia for hemiplegia. Brain, 139(3), 971-985. doi:10.1093/brain/awv390.

    Abstract

    Following right-hemisphere damage, a specific disorder of motor awareness can occur called anosognosia for hemiplegia, i.e. the denial of motor deficits contralateral to a brain lesion. The study of anosognosia can offer unique insights into the neurocognitive basis of awareness. Typically, however, awareness is assessed as a first person judgement and the ability of patients to think about their bodies in more ‘objective’ (third person) terms is not directly assessed. This may be important as right-hemisphere spatial abilities may underlie our ability to take third person perspectives. This possibility was assessed for the first time in the present study. We investigated third person perspective taking using both visuospatial and verbal tasks in right-hemisphere stroke patients with anosognosia (n = 15) and without anosognosia (n = 15), as well as neurologically healthy control subjects (n = 15). The anosognosic group performed worse than both control groups when having to perform the tasks from a third versus a first person perspective. Individual analysis further revealed a classical dissociation between most anosognosic patients and control subjects in mental (but not visuospatial) third person perspective taking abilities. Finally, the severity of unawareness in anosognosia patients was correlated to greater impairments in such third person, mental perspective taking abilities (but not visuospatial perspective taking). In voxel-based lesion mapping we also identified the lesion sites linked with such deficits, including some brain areas previously associated with inhibition, perspective taking and mentalizing, such as the inferior and middle frontal gyri, as well as the supramarginal and superior temporal gyri. These results suggest that neurocognitive deficits in mental perspective taking may contribute to anosognosia and provide novel insights regarding the relation between self-awareness and social cognition.
  • Bianco, R., Zuk, N. J., Bigand, F., Quarta, E., Grasso, S., Arnese, F., Ravignani, A., Battaglia-Mayer, A., & Novembre, G. (2024). Neural encoding of musical expectations in a non-human primate. Current Biology, 34(2), 444-450. doi:10.1016/j.cub.2023.12.019.

    Abstract

    The appreciation of music is a universal trait of humankind [1-3]. Evidence supporting this notion includes the ubiquity of music across cultures [4-7] and the natural predisposition toward music that humans display early in development [8-10]. Are we musical animals because of species-specific predispositions? This question cannot be answered by relying on cross-cultural or developmental studies alone, as these cannot rule out enculturation [11]. Instead, it calls for cross-species experiments testing whether homologous neural mechanisms underlying music perception are present in non-human primates. We present music to two rhesus monkeys, reared without musical exposure, while recording electroencephalography (EEG) and pupillometry. Monkeys exhibit higher engagement and neural encoding of expectations based on the previously seeded musical context when passively listening to real music as opposed to shuffled controls. We then compare human and monkey neural responses to the same stimuli and find a species-dependent contribution of two fundamental musical features—pitch and timing [12]—in generating expectations: while timing- and pitch-based expectations [13] are similarly weighted in humans, monkeys rely on timing rather than pitch. Together, these results shed light on the phylogeny of music perception. They highlight monkeys’ capacity for processing temporal structures beyond plain acoustic processing, and they identify a species-dependent contribution of time- and pitch-related features to the neural encoding of musical expectations.
  • Bickel, B. (1991). Der Hang zur Exzentrik - Annäherungen an das kognitive Modell der Relativkonstruktion. In W. Bisang, & P. Rinderknecht (Eds.), Von Europa bis Ozeanien - von der Antinomie zum Relativsatz (pp. 15-37). Zurich, Switzerland: Seminar für Allgemeine Sprachwissenschaft der Universität.
  • Bidgood, A., Pine, J., Rowland, C. F., Sala, G., Freudenthal, D., & Ambridge, B. (2021). Verb argument structure overgeneralisations for the English intransitive and transitive constructions: Grammaticality judgments and production priming. Language and Cognition, 13(3), 397-437. doi:10.1017/langcog.2021.8.

    Abstract

    We used a multi-method approach to investigate how children avoid (or retreat from) argument structure overgeneralisation errors (e.g., *You giggled me). Experiment 1 investigated how semantic and statistical constraints (preemption and entrenchment) influence children’s and adults’ judgments of the grammatical acceptability of 120 verbs in transitive and intransitive sentences. Experiment 2 used syntactic priming to elicit overgeneralisation errors from children (aged 5–6) to investigate whether the same constraints operate in production. For judgments, the data showed effects of preemption, entrenchment, and semantics for all ages. For production, only an effect of preemption was observed, and only for transitivisation errors with intransitive-only verbs (e.g., *The man laughed the girl). We conclude that preemption, entrenchment, and semantic effects are real, but are obscured by particular features of the present production task.

    Additional information

    supplementary material
  • Bielczyk, N. Z., Piskała, K., Płomecka, M., Radziński, P., Todorova, L., & Foryś, U. (2019). Time-delay model of perceptual decision making in cortical networks. PLoS One, 14: e0211885. doi:10.1371/journal.pone.0211885.

    Abstract

    It is known that cortical networks operate on the edge of instability, in which oscillations can appear. However, the influence of this dynamic regime on performance in decision making is not well understood. In this work, we propose a population model of decision making based on a winner-take-all mechanism. Using this model, we demonstrate that local slow inhibition within the competing neuronal populations can lead to Hopf bifurcation. At the edge of instability, the system exhibits ambiguity in the decision making, which can account for the perceptual switches observed in human experiments. We further validate this model with fMRI datasets from an experiment on semantic priming in perception of ambivalent (male versus female) faces. We demonstrate that the model can correctly predict the drop in the variance of the BOLD signal within the Superior Parietal Area and Inferior Parietal Area while watching ambiguous visual stimuli.

    Additional information

    supporting information
  • Bien, H., Baayen, H. R., & Levelt, W. J. M. (2011). Frequency effects in the production of Dutch deverbal adjectives and inflected verbs. Language and Cognitive Processes, 26, 683-715. doi:10.1080/01690965.2010.511475.

    Abstract

    In two experiments, we studied the role of frequency information in the production of deverbal adjectives and inflected verbs in Dutch. Naming latencies were triggered in a position-response association task and analysed using stepwise mixed-effects modelling, with subject and word as crossed random effects. The production latency of deverbal adjectives was affected by the cumulative frequencies of their verbal stems, arguing for decomposition and against full listing. However, for the inflected verbs, there was an inhibitory effect of Inflectional Entropy, and a nonlinear effect of Lemma Frequency. Additional effects of Position-specific Neighbourhood Density and Cohort Entropy in both types of words underline the importance of paradigmatic relations in the mental lexicon. Taken together, the data suggest that the word-form level contains neither full forms nor strictly separated morphemes, but rather morphemes with links to phonologically and, in the case of inflected verbs, morphologically related word forms.
  • Bignardi, G., Smit, D. J. A., Vessel, E. A., Trupp, M. D., Ticini, L. F., Fisher, S. E., & Polderman, T. J. C. (2024). Genetic effects on variability in visual aesthetic evaluations are partially shared across visual domains. Communications Biology, 7: 55. doi:10.1038/s42003-023-05710-4.

    Abstract

    The aesthetic values that individuals place on visual images are formed and shaped over a lifetime. However, whether the formation of visual aesthetic value is solely influenced by environmental exposure is still a matter of debate. Here, we considered differences in aesthetic value emerging across three visual domains: abstract images, scenes, and faces. We examined variability in two major dimensions of ordinary aesthetic experiences: taste-typicality and evaluation-bias. We build on two samples from the Australian Twin Registry where 1547 and 1231 monozygotic and dizygotic twins originally rated visual images belonging to the three domains. Genetic influences explained 26% to 41% of the variance in taste-typicality and evaluation-bias. Multivariate analyses showed that genetic effects were partially shared across visual domains. Results indicate that the heritability of major dimensions of aesthetic evaluations is comparable to that of other complex social traits, albeit lower than for other complex cognitive traits. The exception was taste-typicality for abstract images, for which we found only shared and unique environmental influences. Our study reveals that diverse sources of genetic and environmental variation influence the formation of aesthetic value across distinct visual domains and provides improved metrics to assess inter-individual differences in aesthetic value.
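    As background (the classical twin-design logic in its simplest form, not the exact multivariate model fitted in the paper), phenotypic variance is decomposed into additive genetic (A), shared environmental (C), and unique environmental (E) components, and heritability can be approximated from monozygotic and dizygotic twin correlations:

        \sigma^2_P = \sigma^2_A + \sigma^2_C + \sigma^2_E, \qquad h^2 \approx 2\,(r_{MZ} - r_{DZ})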

    Additional information

    supplementary information
  • Birchall, J., Dunn, M., & Greenhill, S. J. (2016). A combined comparative and phylogenetic analysis of the Chapacuran language family. International Journal of American Linguistics, 82(3), 255-284. doi:10.1086/687383.

    Abstract

    The Chapacuran language family, with three extant members and nine historically attested lects, has yet to be classified following modern standards in historical linguistics. This paper presents an internal classification of these languages by combining both the traditional comparative method (CM) and Bayesian phylogenetic inference (BPI). We identify multiple systematic sound correspondences and 285 cognate sets of basic vocabulary using the available documentation. These allow us to reconstruct a large portion of the Proto-Chapacuran phonemic inventory and identify tentative major subgroupings. The cognate sets form the input for the BPI analysis, which uses a stochastic Continuous-Time Markov Chain to model the change of these cognate sets over time. We test various models of lexical substitution and evolutionary clocks, and use ethnohistorical information and data collection dates to calibrate the resulting trees. The CM and BPI analyses produce largely congruent results, suggesting a division of the family into three different clades.
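    As a reading aid (a generic formulation, not necessarily the exact substitution model the authors selected), such analyses treat each cognate set as a binary character (0 = absent, 1 = present) evolving along the tree under a continuous-time Markov chain with gain rate \lambda and loss rate \mu, so that transition probabilities over a branch of length t are

        Q = \begin{pmatrix} -\lambda & \lambda \\ \mu & -\mu \end{pmatrix}, \qquad P(t) = e^{Qt}

    The tree topology, branch lengths, and rates are then sampled from their posterior distribution given the cognate data and the calibrations.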

    Additional information

    Appendix
  • Birhane, A., & Guest, O. (2021). Towards decolonising computational sciences. Kvinder, Køn & Forskning, 29(2), 60-73. doi:10.7146/kkf.v29i2.124899.

    Abstract

    This article sets out our perspective on how to begin the journey of decolonising computational fields, such as data and cognitive sciences. We see this struggle as requiring two basic steps: a) realisation that the present-day system has inherited, and still enacts, hostile, conservative, and oppressive behaviours and principles towards women of colour; and b) rejection of the idea that centring individual people is a solution to system-level problems. The longer we ignore these two steps, the more “our” academic system maintains its toxic structure, excludes, and harms Black women and other minoritised groups. This also keeps the door open to discredited pseudoscience, like eugenics and physiognomy. We propose that grappling with our fields’ histories and heritage holds the key to avoiding mistakes of the past. In contrast to, for example, initiatives such as “diversity boards”, which can be harmful because they superficially appear reformatory but nonetheless center whiteness and maintain the status quo. Building on the work of many women of colour, we hope to advance the dialogue required to build both a grass-roots and a top-down re-imagining of computational sciences — including but not limited to psychology, neuroscience, cognitive science, computer science, data science, statistics, machine learning, and artificial intelligence. We aspire to progress away from these fields’ stagnant, sexist, and racist shared past into an ecosystem that welcomes and nurtures demographically diverse researchers and ideas that critically challenge the status quo.
  • Black, A., & Bergmann, C. (2017). Quantifying infants' statistical word segmentation: A meta-analysis. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Meeting of the Cognitive Science Society (pp. 124-129). Austin, TX: Cognitive Science Society.

    Abstract

    Theories of language acquisition and perceptual learning increasingly rely on statistical learning mechanisms. The current meta-analysis aims to clarify the robustness of this capacity in infancy within the word segmentation literature. Our analysis reveals a significant, small effect size for conceptual replications of Saffran, Aslin, & Newport (1996), and a nonsignificant effect across all studies that incorporate transitional probabilities to segment words. In both conceptual replications and the broader literature, however, statistical learning is moderated by whether stimuli are naturally produced or synthesized. These findings invite deeper questions about the complex factors that influence statistical learning, and the role of statistical learning in language acquisition.
  • Blasi, A., Mercure, E., Lloyd-Fox, S., Thomson, A., Brammer, M., Sauter, D., Deeley, Q., Barker, G. J., Renvall, V., Deoni, S., Gasston, D., Williams, S. C., Johnson, M. H., Simmons, A., & Murphy, D. G. (2011). Early specialization for voice and emotion processing in the infant brain. Current Biology, 21, 1220-1224. doi:10.1016/j.cub.2011.06.009.

    Abstract

    Human voices play a fundamental role in social communication, and areas of the adult ‘social brain’ show specialization for processing voices and their emotional content (superior temporal sulcus - STS, inferior prefrontal cortex, premotor cortical regions, amygdala and insula) [1-8]. However, it is unclear when this specialization develops. Functional magnetic resonance imaging (fMRI) studies suggest the infant temporal cortex does not differentiate speech from music or backward speech [10, 11], but a prior study with functional near infrared spectroscopy revealed preferential activation for human voices in 7-month-olds, in a more posterior location of the temporal cortex than in adults [12]. Yet, the brain networks involved in processing non-speech human vocalizations in early development are still unknown. For this purpose, in the present fMRI study, 3- to 7-month-olds were presented with adult non-speech vocalizations (emotionally neutral, emotionally positive and emotionally negative), and non-vocal environmental sounds. Infants displayed significant activation in the anterior portion of the temporal cortex, similarly to adults [1]. Moreover, sad vocalizations modulated the activity of brain regions known to be involved in processing affective stimuli such as the orbitofrontal cortex [13] and insula [7, 8]. These results suggest remarkably early functional specialization for processing human voice and negative emotions.
  • Blasi, D. E., Moran, S., Moisik, S. R., Widmer, P., Dediu, D., & Bickel, B. (2019). Human sound systems are shaped by post-Neolithic changes in bite configuration. Science, 363(6432): eaav3218. doi:10.1126/science.aav3218.

    Abstract

    Linguistic diversity, now and in the past, is widely regarded to be independent of biological changes that took place after the emergence of Homo sapiens. We show converging evidence from paleoanthropology, speech biomechanics, ethnography, and historical linguistics that labiodental sounds (such as “f” and “v”) were innovated after the Neolithic. Changes in diet attributable to food-processing technologies modified the human bite from an edge-to-edge configuration to one that preserves adolescent overbite and overjet into adulthood. This change favored the emergence and maintenance of labiodentals. Our findings suggest that language is shaped not only by the contingencies of its history, but also by culturally induced changes in human biology.

  • De Bleser, R., Willmes, K., Graetz, P., & Hagoort, P. (1991). De Akense Afasie Test. Logopedie en Foniatrie, 63, 207-217.
  • Bluijs, S., Dera, J., & Peeters, D. (2021). Waarom digitale literatuur in het literatuuronderwijs thuishoort. Tijdschrift voor Nederlandse Taal- en Letterkunde, 137(2), 150-163. doi:10.5117/TNTL2021.2.003.BLUI.
  • Blythe, J. (2011). Laughter is the best medicine: Roles for prosody in a Murriny Patha conversational narrative. In B. Baker, I. Mushin, M. Harvey, & R. Gardner (Eds.), Indigenous Language and Social Identity: Papers in Honour of Michael Walsh (pp. 223-236). Canberra: Pacific Linguistics.
  • Bobb, S., Huettig, F., & Mani, N. (2016). Predicting visual information during sentence processing: Toddlers activate an object's shape before it is mentioned. Journal of Experimental Child Psychology, 151, 51-64. doi:10.1016/j.jecp.2015.11.002.

    Abstract

    We examined the contents of language-mediated prediction in toddlers by investigating the extent to which toddlers are sensitive to visual-shape representations of upcoming words. Previous studies with adults suggest limits to the degree to which information about the visual form of a referent is predicted during language comprehension in low constraint sentences. 30-month-old toddlers heard either contextually constraining sentences or contextually neutral sentences as they viewed images that were either identical or shape related to the heard target label. We observed that toddlers activate shape information of upcoming linguistic input in contextually constraining semantic contexts: Hearing a sentence context that was predictive of the target word activated perceptual information that subsequently influenced visual attention toward shape-related targets. Our findings suggest that visual shape is central to predictive language processing in toddlers.
  • Bocanegra, B. R., Poletiek, F. H., Ftitache, B., & Clark, A. (2019). Intelligent problem-solvers externalize cognitive operations. Nature Human Behaviour, 3, 136-142. doi:10.1038/s41562-018-0509-y.

    Abstract

    Humans are nature’s most intelligent and prolific users of external props and aids (such as written texts, slide-rules and software packages). Here we introduce a method for investigating how people make active use of their task environment during problem-solving and apply this approach to the non-verbal Raven Advanced Progressive Matrices test for fluid intelligence. We designed a click-and-drag version of the Raven test in which participants could create different external spatial configurations while solving the puzzles. In our first study, we observed that the click-and-drag test was better than the conventional static test at predicting academic achievement of university students. This pattern of results was partially replicated in a novel sample. Importantly, environment-altering actions were clustered in between periods of apparent inactivity, suggesting that problem-solvers were delicately balancing the execution of internal and external cognitive operations. We observed a systematic relationship between this critical phasic temporal signature and improved test performance. Our approach is widely applicable and offers an opportunity to quantitatively assess a powerful, although understudied, feature of human intelligence: our ability to use external objects, props and aids to solve complex problems.
  • Bode, S., Feuerriegel, D., Bennett, D., & Alday, P. M. (2019). The Decision Decoding ToolBOX (DDTBOX) -- A Multivariate Pattern Analysis Toolbox for Event-Related Potentials. Neuroinformatics, 17(1), 27-42. doi:10.1007/s12021-018-9375-z.

    Abstract

    In recent years, neuroimaging research in cognitive neuroscience has increasingly used multivariate pattern analysis (MVPA) to investigate higher cognitive functions. Here we present DDTBOX, an open-source MVPA toolbox for electroencephalography (EEG) data. DDTBOX runs under MATLAB and is well integrated with the EEGLAB/ERPLAB and Fieldtrip toolboxes (Delorme and Makeig 2004; Lopez-Calderon and Luck 2014; Oostenveld et al. 2011). It trains support vector machines (SVMs) on patterns of event-related potential (ERP) amplitude data, following or preceding an event of interest, for classification or regression of experimental variables. These amplitude patterns can be extracted across space/electrodes (spatial decoding), time (temporal decoding), or both (spatiotemporal decoding). DDTBOX can also extract SVM feature weights, generate empirical chance distributions based on shuffled-labels decoding for group-level statistical testing, provide estimates of the prevalence of decodable information in the population, and perform a variety of corrections for multiple comparisons. It also includes plotting functions for single subject and group results. DDTBOX complements conventional analyses of ERP components, as subtle multivariate patterns can be detected that would be overlooked in standard analyses. It further allows for a more explorative search for information when no ERP component is known to be specifically linked to a cognitive process of interest. In summary, DDTBOX is an easy-to-use and open-source toolbox that allows for characterising the time-course of information related to various perceptual and cognitive processes. It can be applied to data from a large number of experimental paradigms and could therefore be a valuable tool for the neuroimaging community.
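    To make the decoding logic concrete, here is a minimal Python sketch of spatiotemporal ERP decoding with a linear SVM and cross-validation (DDTBOX itself runs under MATLAB; this is an illustration of the approach, not the toolbox's interface):

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        def decode_time_window(erps, labels, t_start, t_stop):
            # erps: (n_trials, n_electrodes, n_timepoints) amplitude data; labels: condition per trial.
            X = erps[:, :, t_start:t_stop].reshape(len(erps), -1)   # spatiotemporal feature vectors
            clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
            return cross_val_score(clf, X, labels, cv=5).mean()     # mean decoding accuracy

    Running this in a sliding window over time yields the time course of decodable information; comparing accuracies against shuffled-label runs gives the kind of empirical chance distribution mentioned above.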
  • Bodur, K., Branje, S., Peirolo, M., Tiscareno, I., & German, J. S. (2021). Domain-initial strengthening in Turkish: Acoustic cues to prosodic hierarchy in stop consonants. In Proceedings of Interspeech 2021 (pp. 1459-1463). doi:10.21437/Interspeech.2021-2230.

    Abstract

    Studies have shown that cross-linguistically, consonants at the left edge of higher-level prosodic boundaries tend to be more forcefully articulated than those at lower-level boundaries, a phenomenon known as domain-initial strengthening. This study tests whether similar effects occur in Turkish, using the Autosegmental-Metrical model proposed by Ipek & Jun [1, 2] as the basis for assessing boundary strength. Productions of /t/ and /d/ were elicited in four domain-initial prosodic positions corresponding to progressively higher-level boundaries: syllable, word, intermediate phrase, and Intonational Phrase. A fifth position, nuclear word, was included in order to better situate it within the prosodic hierarchy. Acoustic correlates of articulatory strength were measured, including closure duration for /d/ and /t/, as well as voice onset time and burst energy for /t/. Our results show that closure duration increases cumulatively from syllable to intermediate phrase, while voice onset time and burst energy are not influenced by boundary strength. These findings provide corroborating evidence for Ipek & Jun’s model, particularly for the distinction between word and intermediate phrase boundaries. Additionally, articulatory strength at the left edge of the nuclear word patterned closely with word-initial position, supporting the view that the nuclear word is not associated with a distinct phrasing domain
  • Boen, R., Kaufmann, T., Van der Meer, D., Frei, O., Agartz, I., Ames, D., Andersson, M., Armstrong, N. J., Artiges, E., Atkins, J. R., Bauer, J., Benedetti, F., Boomsma, D. I., Brodaty, H., Brosch, K., Buckner, R. L., Cairns, M. J., Calhoun, V., Caspers, S., Cichon, S., Corvin, A. P., Crespo Facorro, B., Dannlowski, U., David, F. S., De Geus, E. J., De Zubicaray, G. I., Desrivières, S., Doherty, J. L., Donohoe, G., Ehrlich, S., Eising, E., Espeseth, T., Fisher, S. E., Forstner, A. J., Fortaner Uyà, L., Frouin, V., Fukunaga, M., Ge, T., Glahn, D. C., Goltermann, J., Grabe, H. J., Green, M. J., Groenewold, N. A., Grotegerd, D., Hahn, T., Hashimoto, R., Hehir-Kwa, J. Y., Henskens, F. A., Holmes, A. J., Haberg, A. K., Haavik, J., Jacquemont, S., Jansen, A., Jockwitz, C., Jonsson, E. G., Kikuchi, M., Kircher, T., Kumar, K., Le Hellard, S., Leu, C., Linden, D. E., Liu, J., Loughnan, R., Mather, K. A., McMahon, K. L., McRae, A. F., Medland, S. E., Meinert, S., Moreau, C. A., Morris, D. W., Mowry, B. J., Muhleisen, T. W., Nenadić, I., Nöthen, M. M., Nyberg, L., Owen, M. J., Paolini, M., Paus, T., Pausova, Z., Persson, K., Quidé, Y., Reis Marques, T., Sachdev, P. S., Sando, S. B., Schall, U., Scott, R. J., Selbæk, G., Shumskaya, E., Silva, A. I., Sisodiya, S. M., Stein, F., Stein, D. J., Straube, B., Streit, F., Strike, L. T., Teumer, A., Teutenberg, L., Thalamuthu, A., Tooney, P. A., Tordesillas-Gutierrez, D., Trollor, J. N., Van 't Ent, D., Van den Bree, M. B. M., Van Haren, N. E. M., Vazquez-Bourgon, J., Volzke, H., Wen, W., Wittfeld, K., Ching, C. R., Westlye, L. T., Thompson, P. M., Bearden, C. E., Selmer, K. K., Alnæs, D., Andreassen, O. A., & Sonderby, I. E. (2024). Beyond the global brain differences: Intra-individual variability differences in 1q21.1 distal and 15q11.2 BP1-BP2 deletion carriers. Biological Psychiatry, 95(2), 147-160. doi:10.1016/j.biopsych.2023.08.018.

    Abstract

    Background

    The 1q21.1 distal and 15q11.2 BP1-BP2 CNVs exhibit regional and global brain differences compared to non-carriers. However, interpreting regional differences is challenging if a global difference drives the regional brain differences. Intra-individual variability measures can be used to test for regional differences beyond global differences in brain structure.

    Methods

    Magnetic resonance imaging data were used to obtain regional brain values for 1q21.1 distal deletion (n=30) and duplication (n=27), and 15q11.2 BP1-BP2 deletion (n=170) and duplication (n=243) carriers and matched non-carriers (n=2,350). Regional intra-deviation (RID) scores i.e., the standardized difference between an individual’s regional difference and global difference, were used to test for regional differences that diverge from the global difference.

    Results

    For the 1q21.1 distal deletion carriers, cortical surface area for regions in the medial visual cortex, posterior cingulate and temporal pole differed less, and regions in the prefrontal and superior temporal cortex differed more than the global difference in cortical surface area. For the 15q11.2 BP1-BP2 deletion carriers, cortical thickness in regions in the medial visual cortex, auditory cortex and temporal pole differed less, and the prefrontal and somatosensory cortex differed more than the global difference in cortical thickness.

    Conclusion

    We find evidence for regional effects beyond differences in global brain measures in 1q21.1 distal and 15q11.2 BP1-BP2 CNVs. The results provide new insight into brain profiling of the 1q21.1 distal and 15q11.2 BP1-BP2 CNVs, with the potential to increase our understanding of mechanisms involved in altered neurodevelopment.

    Additional information

    supplementary material
  • De Boer, M., Kokal, I., Blokpoel, M., Liu, R., Stolk, A., Roelofs, K., Van Rooij, I., & Toni, I. (2017). Oxytocin modulates human communication by enhancing cognitive exploration. Psychoneuroendocrinology, 86, 64-72. doi:10.1016/j.psyneuen.2017.09.010.

    Abstract

    Oxytocin is a neuropeptide known to influence how humans share material resources. Here we explore whether oxytocin influences how we share knowledge. We focus on two distinguishing features of human communication, namely the ability to select communicative signals that disambiguate the many-to-many mappings that exist between a signal’s form and meaning, and adjustments of those signals to the presumed cognitive characteristics of the addressee (“audience design”). Fifty-five males participated in a randomized, double-blind, placebo controlled experiment involving the intranasal administration of oxytocin. The participants produced novel non-verbal communicative signals towards two different addressees, an adult or a child, in an experimentally-controlled live interactive setting. We found that oxytocin administration drives participants to generate signals of higher referential quality, i.e. signals that disambiguate more communicative problems; and to rapidly adjust those communicative signals to what the addressee understands. The combined effects of oxytocin on referential quality and audience design fit with the notion that oxytocin administration leads participants to explore more pervasively behaviors that can convey their intention, and diverse models of the addressees. These findings suggest that, besides affecting prosocial drive and salience of social cues, oxytocin influences how we share knowledge by promoting cognitive exploration
  • Bögels, S., & Torreira, F. (2021). Turn-end estimation in conversational turn-taking: The roles of context and prosody. Discourse Processes, 58(10), 903-924. doi:10.1080/0163853X.2021.1986664.

    Abstract

    This study investigated the role of contextual and prosodic information in turn-end estimation by means of a button-press task. We presented participants with turns extracted from a corpus of telephone calls visually (i.e., in transcribed form, word-by-word) and auditorily, and asked them to anticipate turn ends by pressing a button. The availability of the previous conversational context was generally helpful for turn-end estimation in short turns only, and more clearly so in the visual task than in the auditory task. To investigate the role of prosody, we examined whether participants in the auditory task pressed the button close to turn-medial points likely to constitute turn ends based on lexico-syntactic information alone. We observed that the vast majority of such button presses occurred in the presence of an intonational boundary rather than in its absence. These results are consistent with the view that prosodic cues in the proximity of turn ends play a relevant role in turn-end estimation.
  • Bögels, S., Schriefers, H. J., Vonk, W., & Chwilla, D. (2011). Prosodic breaks in sentence processing investigated by event-related potentials. Language and Linguistics Compass, 5, 424-440. doi:10.1111/j.1749-818X.2011.00291.x.

    Abstract

    Prosodic breaks (PBs) can indicate a sentence’s syntactic structure. Event-related brain potentials (ERPs) are an excellent way to study auditory sentence processing, since they provide an on-line measure across a complete sentence, in contrast to other on- and off-line methods. ERPs for the first time allowed investigating the processing of a PB itself. PBs reliably elicit a closure positive shift (CPS). We first review several studies on the CPS, leading to the conclusion that it is elicited by abstract structuring or phrasing of the input. Then we review ERP findings concerning the role of PBs in sentence processing as indicated by ERP components like the N400, P600 and LAN. We focus on whether and how PBs can (help to) disambiguate locally ambiguous sentences. Differences in results between different studies can be related to differences in items, initial parsing preferences and tasks. Finally, directions for future research are discussed.
  • Bögels, S., & Levinson, S. C. (2017). The brain behind the response: Insights into turn-taking in conversation from neuroimaging. Research on Language and Social Interaction, 50, 71-89. doi:10.1080/08351813.2017.1262118.

    Abstract

    This paper reviews the prospects for the cross-fertilization of conversation-analytic (CA) and neurocognitive studies of conversation, focusing on turn-taking. Although conversation is the primary ecological niche for language use, relatively little brain research has focused on interactive language use, partly due to the challenges of using brain-imaging methods that are controlled enough to perform sound experiments, but still reflect the rich and spontaneous nature of conversation. Recently, though, brain researchers have started to investigate conversational phenomena, for example by using 'overhearer' or controlled interaction paradigms. We review neuroimaging studies related to turn-taking and sequence organization, phenomena historically described by CA. These studies for example show early action recognition and immediate planning of responses midway during an incoming turn. The review discusses studies with an eye to a fruitful interchange between CA and neuroimaging research on conversation and an indication of how these disciplines can benefit from each other.
  • Bögels, S., Schriefers, H., Vonk, W., & Chwilla, D. (2011). Pitch accents in context: How listeners process accentuation in referential communication. Neuropsychologia, 49, 2022-2036. doi:10.1016/j.neuropsychologia.2011.03.032.

    Abstract

    We investigated whether listeners are sensitive to (mis)matching accentuation patterns with respect to contrasts in the linguistic and visual context, using Event-Related Potentials. We presented participants with displays of two pictures followed by a spoken reference to one of these pictures (e.g., “the red ball”). The referent was contrastive with respect to the linguistic context (utterance in the previous trial: e.g., “the blue ball”) or with respect to the visual context (other picture in the display; e.g., a display with a red ball and a blue ball). The spoken reference carried a pitch accent on the noun (“the red BALL”) or on the adjective (“the RED ball”), or an intermediate (‘neutral’) accentuation. For the linguistic context, we found evidence for the Missing Accent Hypothesis: Listeners showed processing difficulties, in the form of increased negativities in the ERPs, for missing accents, but not for superfluous accents. ‘Neutral’ or intermediate accents were interpreted as ‘missing’ accents when they occurred late in the referential utterance, but not when they occurred early. For the visual context, we found evidence for the Missing Accent Hypothesis for a missing accent on the adjective (an increase in negativity in the ERPs) and a superfluous accent on the noun (no effect). However, a redundant color adjective (e.g., in the case of a display with a red ball and a red hat) led to fewer processing problems when the adjective carried a pitch accent.

  • Bögels, S., Schriefers, H., Vonk, W., & Chwilla, D. J. (2011). The role of prosodic breaks and pitch accents in grouping words during on-line sentence processing. Journal of Cognitive Neuroscience, 23, 2447-2467. doi:10.1162/jocn.2010.21587.

    Abstract

    The present study addresses the question whether accentuation and prosodic phrasing can have a similar function, namely, to group words in a sentence together. Participants listened to locally ambiguous sentences containing object- and subject-control verbs while ERPs were measured. In Experiment 1, these sentences contained a prosodic break, which can create a certain syntactic grouping of words, or no prosodic break. At the disambiguation, an N400 effect occurred when the disambiguation was in conflict with the syntactic grouping created by the break. We found a similar N400 effect without the break, indicating that the break did not strengthen an already existing preference. This pattern held for both object- and subject-control items. In Experiment 2, the same sentences contained a break and a pitch accent on the noun following the break. We argue that the pitch accent indicates a broad focus covering two words [see Gussenhoven, C. On the limits of focus projection in English. In P. Bosch & R. van der Sandt (Eds.), Focus: Linguistic, cognitive, and computational perspectives. Cambridge: University Press, 1999], thus grouping these words together. For object-control items, this was semantically possible, which led to a “good-enough” interpretation of the sentence. Therefore, both sentences were interpreted equally well and the N400 effect found in Experiment 1 was absent. In contrast, for subject-control items, a corresponding grouping of the words was impossible, both semantically and syntactically, leading to processing difficulty in the form of an N400 effect and a late positivity. In conclusion, accentuation can group words together on the level of information structure, leading to either a semantically “good-enough” interpretation or a processing problem when such a semantic interpretation is not possible.
  • Bohnemeyer, J., Enfield, N. J., Essegbey, J., Majid, A., & van Staden, M. (2011). Configuraciones temáticas atípicas y el uso de predicados complejos en perspectiva tipológica [Atypical thematic configurations and the use of complex predicates in typological perspective]. In A. L. Munguía (Ed.), Colección Estudios Lingüísticos. Vol. I: Fonología, morfología, y tipología semántico-sintáctica [Collection Linguistic Studies. Vol 1: Phonology, morphology, and semantico-syntactic typology] (pp. 173-194). Hermosillo, Mexico: Universidad de Sonora.
  • Bohnemeyer, J. (2004). Argument and event structure in Yukatek verb classes. In J.-Y. Kim, & A. Werle (Eds.), Proceedings of The Semantics of Under-Represented Languages in the Americas. Amherst, Mass: GLSA.

    Abstract

    In Yukatek Maya, event types are lexicalized in verb roots and stems that fall into a number of different form classes on the basis of (a) patterns of aspect-mood marking and (b) privileges of undergoing valence-changing operations. Of particular interest are the intransitive classes in the light of Perlmutter’s (1978) Unaccusativity hypothesis. In the spirit of Levin & Rappaport Hovav (1995) [L&RH], Van Valin (1990), Zaenen (1993), and others, this paper investigates whether (and to what extent) the association between formal predicate classes and event types is determined by argument structure features such as ‘agentivity’ and ‘control’ or features of lexical aspect such as ‘telicity’ and ‘durativity’. It is shown that mismatches between agentivity/control and telicity/durativity are even more extensive in Yukatek than they are in English (Abusch 1985; L&RH, Van Valin & LaPolla 1997), providing new evidence against Dowty’s (1979) reconstruction of Vendler’s (1967) ‘time schemata of verbs’ in terms of argument structure configurations. Moreover, contrary to what has been claimed in earlier studies of Yukatek (Krämer & Wunderlich 1999, Lucy 1994), neither agentivity/control nor telicity/durativity turn out to be good predictors of verb class membership. Instead, the patterns of aspect-mood marking prove to be sensitive only to the presence or absence of state change, in a way that supports the unified analysis of all verbs of gradual change proposed by Kennedy & Levin (2001). The presence or absence of ‘internal causation’ (L&RH) may motivate the semantic interpretation of transitivization operations. An explicit semantics for the valence-changing operations is proposed, based on Parsons’s (1990) Neo-Davidsonian approach.
  • Bohnemeyer, J., Burenhult, N., Enfield, N. J., & Levinson, S. C. (2004). Landscape terms and place names elicitation guide. In A. Majid (Ed.), Field Manual Volume 9 (pp. 75-79). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492904.

    Abstract

    Landscape terms reflect the relationship between geographic reality and human cognition. Are ‘mountains’, ‘rivers, ‘lakes’ and the like universally recognised in languages as naturally salient objects to be named? The landscape subproject is concerned with the interrelation between language, cognition and geography. Specifically, it investigates issues relating to how landforms are categorised cross-linguistically as well as the characteristics of place naming.
  • Bohnemeyer, J., Burenhult, N., Enfield, N. J., & Levinson, S. C. (2011). Landscape terms and place names questionnaire. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 19-23). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1005606.
  • Bohnemeyer, J., Enfield, N. J., Essegbey, J., & Kita, S. (2011). The macro-event property: The segmentation of causal chains. In J. Bohnemeyer, & E. Pederson (Eds.), Event representation in language and cognition (pp. 43-67). New York: Cambridge University Press.
  • Borgwaldt, S. R., Hellwig, F. M., & De Groot, A. M. B. (2004). Word-initial entropy in five languages: Letter to sound, and sound to letter. Written Language & Literacy, 7(2), 165-184.

    Abstract

    Alphabetic orthographies show more or less ambiguous relations between spelling and sound patterns. In transparent orthographies, like Italian, the pronunciation can be predicted from the spelling and vice versa. Opaque orthographies, like English, often display unpredictable spelling–sound correspondences. In this paper we present a computational analysis of word-initial bi-directional spelling–sound correspondences for Dutch, English, French, German, and Hungarian, stated in entropy values for various grain sizes. This allows us to position the five languages on the continuum from opaque to transparent orthographies, both in spelling-to-sound and sound-to-spelling directions. The analysis is based on metrics derived from information theory, and is therefore independent of any specific theory of visual word recognition as well as of any specific theoretical approach to orthography.
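
    For readers unfamiliar with the entropy metric referred to above, the short sketch below (Python; the counts are a made-up toy example, not data from the paper) shows how the Shannon entropy of a single word-initial spelling-to-sound correspondence can be computed from the relative frequencies of its attested pronunciations. Higher entropy corresponds to a more ambiguous, more opaque mapping.

        import math
        from collections import Counter

        def correspondence_entropy(pronunciation_counts):
            """Shannon entropy (bits) of the pronunciations observed for one spelling unit."""
            total = sum(pronunciation_counts.values())
            entropy = 0.0
            for count in pronunciation_counts.values():
                p = count / total
                entropy -= p * math.log2(p)
            return entropy

        # Toy example: word-initial <c> pronounced /k/ in 70 words and /s/ in 30 words.
        print(correspondence_entropy(Counter({"/k/": 70, "/s/": 30})))  # about 0.88 bits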
  • Bornkessel-Schlesewsky, I., Alday, P. M., & Schlesewsky, M. (2016). A modality-independent, neurobiological grounding for the combinatory capacity of the language-ready brain: Comment on “Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain” by Michael A. Arbib. Physics of Life Reviews, 16, 55-57. doi:10.1016/j.plrev.2016.01.003.
  • Bosker, H. R. (2021). Using fuzzy string matching for automated assessment of listener transcripts in speech intelligibility studies. Behavior Research Methods, 53(5), 1945-1953. doi:10.3758/s13428-021-01542-4.

    Abstract

    Many studies of speech perception assess the intelligibility of spoken sentence stimuli by means of transcription tasks (‘type out what you hear’). The intelligibility of a given stimulus is then often expressed in terms of percentage of words correctly reported from the target sentence. Yet scoring the participants’ raw responses for words correctly identified from the target sentence is a time-consuming task, and hence resource-intensive. Moreover, there is no consensus among speech scientists about what specific protocol to use for the human scoring, limiting the reliability of human scores. The present paper evaluates various forms of fuzzy string matching between participants’ responses and target sentences, as automated metrics of listener transcript accuracy. We demonstrate that one particular metric, the Token Sort Ratio, is a consistent, highly efficient, and accurate metric for automated assessment of listener transcripts, as evidenced by high correlations with human-generated scores (best correlation: r = 0.940) and a strong relationship to acoustic markers of speech intelligibility. Thus, fuzzy string matching provides a practical tool for assessment of listener transcript accuracy in large-scale speech intelligibility studies. See https://tokensortratio.netlify.app for an online implementation.
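
    The Token Sort Ratio evaluated in this paper is available in standard fuzzy string matching libraries. The snippet below is a minimal sketch using the rapidfuzz Python package (one possible implementation, not necessarily the one behind the online tool linked above); the target sentence and listener response are invented examples.

        from rapidfuzz import fuzz

        target = "the boy kicked the ball over the fence"
        response = "the ball was kicked over the fence by the boy"  # hypothetical listener transcript

        # Token Sort Ratio: tokenize both strings, sort the tokens alphabetically,
        # then compute a normalized edit-distance similarity between 0 and 100.
        print(fuzz.token_sort_ratio(target, response))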
  • Bosker, H. R., Badaya, E., & Corley, M. (2021). Discourse markers activate their, like, cohort competitors. Discourse Processes, 58(9), 837-851. doi:10.1080/0163853X.2021.1924000.

    Abstract

    Speech in everyday conversations is riddled with discourse markers (DMs), such as well, you know, and like. However, in many lab-based studies of speech comprehension, such DMs are typically absent from the carefully articulated and highly controlled speech stimuli. As such, little is known about how these DMs influence online word recognition. The present study specifically investigated the online processing of DM like and how it influences the activation of words in the mental lexicon. We specifically targeted the cohort competitor (CC) effect in the Visual World Paradigm: Upon hearing spoken instructions to “pick up the beaker,” human listeners also typically fixate—next to the target object—referents that overlap phonologically with the target word (cohort competitors such as beetle; CCs). However, several studies have argued that CC effects are constrained by syntactic, semantic, pragmatic, and discourse constraints. Therefore, the present study investigated whether DM like influences online word recognition by activating its cohort competitors (e.g., lightbulb). In an eye-tracking experiment using the Visual World Paradigm, we demonstrate that when participants heard spoken instructions such as “Now press the button for the, like … unicycle,” they showed anticipatory looks to the CC referent (lightbulb) well before hearing the target. This CC effect was sustained for a relatively long period of time, even despite hearing disambiguating information (i.e., the /k/ in like). Analysis of the reaction times also showed that participants were significantly faster to select CC targets (lightbulb) when preceded by DM like. These findings suggest that seemingly trivial DMs, such as like, activate their CCs, impacting online word recognition. Thus, we advocate a more holistic perspective on spoken language comprehension in naturalistic communication, including the processing of DMs.
  • Bosker, H. R., & Peeters, D. (2021). Beat gestures influence which speech sounds you hear. Proceedings of the Royal Society B: Biological Sciences, 288: 20202419. doi:10.1098/rspb.2020.2419.

    Abstract

    Beat gestures—spontaneously produced biphasic movements of the hand—are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world’s languages, how beat gestures impact spoken word recognition is unclear. Can these simple ‘flicks of the hand’ influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g. distinguishing OBject from obJECT), and in turn can influence what vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.

    Additional information

    example stimuli and experimental data
  • Bosker, H. R. (2017). Accounting for rate-dependent category boundary shifts in speech perception. Attention, Perception & Psychophysics, 79, 333-343. doi:10.3758/s13414-016-1206-4.

    Abstract

    The perception of temporal contrasts in speech is known to be influenced by the speech rate in the surrounding context. This rate-dependent perception is suggested to involve general auditory processes since it is also elicited by non-speech contexts, such as pure tone sequences. Two general auditory mechanisms have been proposed to underlie rate-dependent perception: durational contrast and neural entrainment. The present study compares the predictions of these two accounts of rate-dependent speech perception by means of four experiments in which participants heard tone sequences followed by Dutch target words ambiguous between /ɑs/ “ash” and /a:s/ “bait”. Tone sequences varied in the duration of tones (short vs. long) and in the presentation rate of the tones (fast vs. slow). Results show that the duration of preceding tones did not influence target perception in any of the experiments, thus challenging durational contrast as explanatory mechanism behind rate-dependent perception. Instead, the presentation rate consistently elicited a category boundary shift, with faster presentation rates inducing more /a:s/ responses, but only if the tone sequence was isochronous. Therefore, this study proposes an alternative, neurobiologically plausible, account of rate-dependent perception involving neural entrainment of endogenous oscillations to the rate of a rhythmic stimulus.
  • Bosker, H. R., Van Os, M., Does, R., & Van Bergen, G. (2019). Counting 'uhm's: how tracking the distribution of native and non-native disfluencies influences online language comprehension. Journal of Memory and Language, 106, 189-202. doi:10.1016/j.jml.2019.02.006.

    Abstract

    Disfluencies, like 'uh', have been shown to help listeners anticipate reference to low-frequency words. The associative account of this 'disfluency bias' proposes that listeners learn to associate disfluency with low-frequency referents based on prior exposure to non-arbitrary disfluency distributions (i.e., greater probability of low-frequency words after disfluencies). However, there is limited evidence for listeners actually tracking disfluency distributions online. The present experiments are the first to show that adult listeners, exposed to a typical or more atypical disfluency distribution (i.e., hearing a talker unexpectedly say uh before high-frequency words), flexibly adjust their predictive strategies to the disfluency distribution at hand (e.g., learn to predict high-frequency referents after disfluency). However, when listeners were presented with the same atypical disfluency distribution but produced by a non-native speaker, no adjustment was observed. This suggests pragmatic inferences can modulate distributional learning, revealing the flexibility of, and constraints on, distributional learning in incremental language comprehension.
  • Bosker, H. R., Reinisch, E., & Sjerps, M. J. (2017). Cognitive load makes speech sound fast, but does not modulate acoustic context effects. Journal of Memory and Language, 94, 166-176. doi:10.1016/j.jml.2016.12.002.

    Abstract

    In natural situations, speech perception often takes place during the concurrent execution of other cognitive tasks, such as listening while viewing a visual scene. The execution of a dual task typically has detrimental effects on concurrent speech perception, but how exactly cognitive load disrupts speech encoding is still unclear. The detrimental effect on speech representations may consist of either a general reduction in the robustness of processing of the speech signal (‘noisy encoding’), or, alternatively it may specifically influence the temporal sampling of the sensory input, with listeners missing temporal pulses, thus underestimating segmental durations (‘shrinking of time’). The present study investigated whether and how spectral and temporal cues in a precursor sentence that has been processed under high vs. low cognitive load influence the perception of a subsequent target word. If cognitive load effects are implemented through ‘noisy encoding’, increasing cognitive load during the precursor should attenuate the encoding of both its temporal and spectral cues, and hence reduce the contextual effect that these cues can have on subsequent target sound perception. However, if cognitive load effects are expressed as ‘shrinking of time’, context effects should not be modulated by load, but a main effect would be expected on the perceived duration of the speech signal. Results from two experiments indicate that increasing cognitive load (manipulated through a secondary visual search task) did not modulate temporal (Experiment 1) or spectral context effects (Experiment 2). However, a consistent main effect of cognitive load was found: increasing cognitive load during the precursor induced a perceptual increase in its perceived speech rate, biasing the perception of a following target word towards longer durations. This finding suggests that cognitive load effects in speech perception are implemented via ‘shrinking of time’, in line with a temporal sampling framework. In addition, we argue that our results align with a model in which early (spectral and temporal) normalization is unaffected by attention but later adjustments may be attention-dependent.
  • Bosker, H. R., & Kösem, A. (2017). An entrained rhythm's frequency, not phase, influences temporal sampling of speech. In Proceedings of Interspeech 2017 (pp. 2416-2420). doi:10.21437/Interspeech.2017-73.

    Abstract

    Brain oscillations have been shown to track the slow amplitude fluctuations in speech during comprehension. Moreover, there is evidence that these stimulus-induced cortical rhythms may persist even after the driving stimulus has ceased. However, how exactly this neural entrainment shapes speech perception remains debated. This behavioral study investigated whether and how the frequency and phase of an entrained rhythm would influence the temporal sampling of subsequent speech. In two behavioral experiments, participants were presented with slow and fast isochronous tone sequences, followed by Dutch target words ambiguous between as /ɑs/ “ash” (with a short vowel) and aas /a:s/ “bait” (with a long vowel). Target words were presented at various phases of the entrained rhythm. Both experiments revealed effects of the frequency of the tone sequence on target word perception: fast sequences biased listeners to more long /a:s/ responses. However, no evidence for phase effects could be discerned. These findings show that an entrained rhythm’s frequency, but not phase, influences the temporal sampling of subsequent speech. These outcomes are compatible with theories suggesting that sensory timing is evaluated relative to entrained frequency. Furthermore, they suggest that phase tracking of (syllabic) rhythms by theta oscillations plays a limited role in speech parsing.
  • Bosker, H. R., & Reinisch, E. (2017). Foreign languages sound fast: evidence from implicit rate normalization. Frontiers in Psychology, 8: 1063. doi:10.3389/fpsyg.2017.01063.

    Abstract

    Anecdotal evidence suggests that unfamiliar languages sound faster than one’s native language. Empirical evidence for this impression has, so far, come from explicit rate judgments. The aim of the present study was to test whether such perceived rate differences between native and foreign languages have effects on implicit speech processing. Our measure of implicit rate perception was “normalization for speaking rate”: an ambiguous vowel between short /a/ and long /a:/ is interpreted as /a:/ following a fast but as /a/ following a slow carrier sentence. That is, listeners did not judge speech rate itself; instead, they categorized ambiguous vowels whose perception was implicitly affected by the rate of the context. We asked whether a bias towards long /a:/ might be observed when the context is not actually faster but simply spoken in a foreign language. A fully symmetrical experimental design was used: Dutch and German participants listened to rate matched (fast and slow) sentences in both languages spoken by the same bilingual speaker. Sentences were followed by nonwords that contained vowels from an /a-a:/ duration continuum. Results from Experiments 1 and 2 showed a consistent effect of rate normalization for both listener groups. Moreover, for German listeners, across the two experiments, foreign sentences triggered more /a:/ responses than (rate matched) native sentences, suggesting that foreign sentences were indeed perceived as faster. Moreover, this Foreign Language effect was modulated by participants’ ability to understand the foreign language: those participants that scored higher on a foreign language translation task showed less of a Foreign Language effect. However, opposite effects were found for the Dutch listeners. For them, their native rather than the foreign language induced more /a:/ responses. Nevertheless, this reversed effect could be reduced when additional spectral properties of the context were controlled for. Experiment 3, using explicit rate judgments, replicated the effect for German but not Dutch listeners. We therefore conclude that the subjective impression that foreign languages sound fast may have an effect on implicit speech processing, with implications for how language learners perceive spoken segments in a foreign language.

    Additional information

    data sheet 1.docx
  • Bosker, H. R. (2017). How our own speech rate influences our perception of others. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(8), 1225-1238. doi:10.1037/xlm0000381.

    Abstract

    In conversation, our own speech and that of others follow each other in rapid succession. Effects of the surrounding context on speech perception are well documented but, despite the ubiquity of the sound of our own voice, it is unknown whether our own speech also influences our perception of other talkers. This study investigated context effects induced by our own speech through six experiments, specifically targeting rate normalization (i.e., perceiving phonetic segments relative to surrounding speech rate). Experiment 1 revealed that hearing pre-recorded fast or slow context sentences altered the perception of ambiguous vowels, replicating earlier work. Experiment 2 demonstrated that talking at a fast or slow rate prior to target presentation also altered target perception, though the effect of preceding speech rate was reduced. Experiment 3 showed that silent talking (i.e., inner speech) at fast or slow rates did not modulate the perception of others, suggesting that the effect of self-produced speech rate in Experiment 2 arose through monitoring of the external speech signal. Experiment 4 demonstrated that, when participants were played back their own (fast/slow) speech, no reduction of the effect of preceding speech rate was observed, suggesting that the additional task of speech production may be responsible for the reduced effect in Experiment 2. Finally, Experiments 5 and 6 replicate Experiments 2 and 3 with new participant samples. Taken together, these results suggest that variation in speech production may induce variation in speech perception, thus carrying implications for our understanding of spoken communication in dialogue settings.
  • Bosker, H. R., Reinisch, E., & Sjerps, M. J. (2016). Listening under cognitive load makes speech sound fast. In H. van den Heuvel, B. Cranen, & S. Mattys (Eds.), Proceedings of the Speech Processing in Realistic Environments [SPIRE] Workshop (pp. 23-24). Groningen.
  • Bosker, H. R. (2016). Our own speech rate influences speech perception. In J. Barnes, A. Brugos, S. Stattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 227-231).

    Abstract

    During conversation, spoken utterances occur in rich acoustic contexts, including speech produced by our interlocutor(s) and speech we produced ourselves. Prosodic characteristics of the acoustic context have been known to influence speech perception in a contrastive fashion: for instance, a vowel presented in a fast context is perceived to have a longer duration than the same vowel in a slow context. Given the ubiquity of the sound of our own voice, it may be that our own speech rate - a common source of acoustic context - also influences our perception of the speech of others. Two experiments were designed to test this hypothesis. Experiment 1 replicated earlier contextual rate effects by showing that hearing pre-recorded fast or slow context sentences alters the perception of ambiguous Dutch target words. Experiment 2 then extended this finding by showing that talking at a fast or slow rate prior to the presentation of the target words also altered the perception of those words. These results suggest that between-talker variation in speech rate production may induce between-talker variation in speech perception, thus potentially explaining why interlocutors tend to converge on speech rate in dialogue settings.

    Additional information

    pdf via conference website
  • Bosker, H. R. (2021). The contribution of amplitude modulations in speech to perceived charisma. In B. Weiss, J. Trouvain, M. Barkat-Defradas, & J. J. Ohala (Eds.), Voice attractiveness: Prosody, phonology and phonetics (pp. 165-181). Singapore: Springer. doi:10.1007/978-981-15-6627-1_10.

    Abstract

    Speech contains pronounced amplitude modulations in the 1–9 Hz range, correlating with the syllabic rate of speech. Recent models of speech perception propose that this rhythmic nature of speech is central to speech recognition and has beneficial effects on language processing. Here, we investigated the contribution of amplitude modulations to the subjective impression listeners have of public speakers. The speech from US presidential candidates Hillary Clinton and Donald Trump in the three TV debates of 2016 was acoustically analyzed by means of modulation spectra. These indicated that Clinton’s speech had more pronounced amplitude modulations than Trump’s speech, particularly in the 1–9 Hz range. A subsequent perception experiment, with listeners rating the perceived charisma of (low-pass filtered versions of) Clinton’s and Trump’s speech, showed that more pronounced amplitude modulations (i.e., more ‘rhythmic’ speech) increased perceived charisma ratings. These outcomes highlight the important contribution of speech rhythm to charisma perception.
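
    As a rough illustration of how a modulation spectrum of the kind described above can be obtained, the sketch below (Python with NumPy/SciPy; the file name, the lack of envelope downsampling, and other parameter choices are simplifying assumptions rather than the paper’s exact analysis pipeline) extracts the amplitude envelope of a recording via the Hilbert transform and measures the envelope’s spectral power in the 1–9 Hz range.

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import hilbert

        # Load a mono speech recording (placeholder file name).
        fs, signal = wavfile.read("speech.wav")
        signal = signal.astype(float)

        # Amplitude envelope: magnitude of the analytic signal.
        envelope = np.abs(hilbert(signal))
        envelope -= envelope.mean()  # remove the DC component so slow modulations dominate

        # The spectrum of the envelope is the modulation spectrum.
        spectrum = np.abs(np.fft.rfft(envelope))
        freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)

        # Power in the 1-9 Hz band, roughly the syllabic rate of speech.
        band = (freqs >= 1) & (freqs <= 9)
        print(spectrum[band].sum())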
  • Bosker, H. R. (2017). The role of temporal amplitude modulations in the political arena: Hillary Clinton vs. Donald Trump. In Proceedings of Interspeech 2017 (pp. 2228-2232). doi:10.21437/Interspeech.2017-142.

    Abstract

    Speech is an acoustic signal with inherent amplitude modulations in the 1-9 Hz range. Recent models of speech perception propose that this rhythmic nature of speech is central to speech recognition. Moreover, rhythmic amplitude modulations have been shown to have beneficial effects on language processing and the subjective impression listeners have of the speaker. This study investigated the role of amplitude modulations in the political arena by comparing the speech produced by Hillary Clinton and Donald Trump in the three presidential debates of 2016. Inspection of the modulation spectra, revealing the spectral content of the two speakers’ amplitude envelopes after matching for overall intensity, showed considerably greater power in Clinton’s modulation spectra (compared to Trump’s) across the three debates, particularly in the 1-9 Hz range. The findings suggest that Clinton’s speech had a more pronounced temporal envelope with rhythmic amplitude modulations below 9 Hz, with a preference for modulations around 3 Hz. This may be taken as evidence for a more structured temporal organization of syllables in Clinton’s speech, potentially due to more frequent use of preplanned utterances. Outcomes are interpreted in light of the potential beneficial effects of a rhythmic temporal envelope on intelligibility and speaker perception.
  • Bosking, W. H., Sun, P., Ozker, M., Pei, X., Foster, B. L., Beauchamp, M. S., & Yoshor, D. (2017). Saturation in phosphene size with increasing current levels delivered to human visual cortex. The Journal of Neuroscience, 37(30), 7188-7197. doi:10.1523/JNEUROSCI.2896-16.2017.

    Abstract

    Electrically stimulating early visual cortex results in a visual percept known as a phosphene. Although phosphenes can be evoked by a wide range of electrode sizes and current amplitudes, they are invariably described as small. To better understand this observation, we electrically stimulated 93 electrodes implanted in the visual cortex of 13 human subjects who reported phosphene size while stimulation current was varied. Phosphene size increased as the stimulation current was initially raised above threshold, but then rapidly reached saturation. Phosphene size also depended on the location of the stimulated site, with size increasing with distance from the foveal representation. We developed a model relating phosphene size to the amount of activated cortex and its location within the retinotopic map. First, a sigmoidal curve was used to predict the amount of activated cortex at a given current. Second, the amount of active cortex was converted to degrees of visual angle by multiplying by the inverse cortical magnification factor for that retinotopic location. This simple model accurately predicted phosphene size for a broad range of stimulation currents and cortical locations. The unexpected saturation in phosphene sizes suggests that the functional architecture of cerebral cortex may impose fundamental restrictions on the spread of artificially evoked activity and this may be an important consideration in the design of cortical prosthetic devices.
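
    The two-stage model summarized in this abstract can be sketched as follows (Python; the sigmoid parameters and the cortical magnification constants are illustrative assumptions, not the values fitted in the study): stimulation current is first mapped to an extent of activated cortex through a saturating sigmoid, and that cortical extent is then converted to degrees of visual angle by dividing by the cortical magnification at the stimulated site’s eccentricity.

        import numpy as np

        def activated_cortex_mm(current_ma, threshold_ma=1.0, slope=2.0, max_extent_mm=3.0):
            """Assumed sigmoidal mapping from stimulation current to activated cortical extent."""
            return max_extent_mm / (1.0 + np.exp(-slope * (current_ma - threshold_ma)))

        def cortical_magnification_mm_per_deg(eccentricity_deg, k=17.3, e2=0.75):
            """Inverse-linear cortical magnification; the constants are commonly cited
            human V1 estimates, used here purely for illustration."""
            return k / (eccentricity_deg + e2)

        def phosphene_size_deg(current_ma, eccentricity_deg):
            # Activated cortical extent divided by magnification gives size in visual degrees,
            # so equal cortical extents yield larger phosphenes at greater eccentricities.
            return activated_cortex_mm(current_ma) / cortical_magnification_mm_per_deg(eccentricity_deg)

        print(phosphene_size_deg(current_ma=2.0, eccentricity_deg=5.0))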
  • Bosman, A., Moisik, S. R., Dediu, D., & Waters-Rist, A. (2017). Talking heads: Morphological variation in the human mandible over the last 500 years in the Netherlands. HOMO - Journal of Comparative Human Biology, 68(5), 329-342. doi:10.1016/j.jchb.2017.08.002.

    Abstract

    The primary aim of this paper is to assess patterns of morphological variation in the mandible to investigate changes during the last 500 years in the Netherlands. Three-dimensional geometric morphometrics is used on data collected from adults from three populations living in the Netherlands during three time-periods. Two of these samples come from Dutch archaeological sites (Alkmaar, 1484-1574, n = 37; and Middenbeemster, 1829-1866, n = 51) and were digitized using a 3D laser scanner. The third is a modern sample obtained from MRI scans of 34 modern Dutch individuals. Differences between mandibles are dominated by size. Significant differences in size are found among samples, with, on average, males from Alkmaar having the largest mandibles and females from Middenbeemster having the smallest. The results are possibly linked to a softening of the diet, due to a combination of differences in food types and food processing that occurred between these time-periods. Differences in shape are most noticeable between males from Alkmaar and Middenbeemster. Shape differences between males and females are concentrated in the symphysis and ramus, which is mostly the consequence of sexual dimorphism. The relevance of this research is a better understanding of the anatomical variation of the mandible that can occur over an evolutionarily short time, as well as supporting research that has shown plasticity of the mandibular form related to diet and food processing. This plasticity of form must be taken into account in phylogenetic research and when the mandible is used in sex estimation of skeletons.
  • Bottini, R., & Casasanto, D. (2011). Space and time in the child’s mind: Further evidence for a cross-dimensional asymmetry [Abstract]. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 3010). Austin, TX: Cognitive Science Society.

    Abstract

    Space and time appear to be related asymmetrically in the child’s mind: temporal representations depend on spatial representations more than vice versa, as predicted by space-time metaphors in language. In a study supporting this conclusion, spatial information interfered with children’s temporal judgments more than vice versa (Casasanto, Fotakopoulou, & Boroditsky, 2010, Cognitive Science). In this earlier study, however, spatial information was available to participants for more time than temporal information was (as is often the case when people observe natural events), suggesting a skeptical explanation for the observed effect. Here we conducted a stronger test of the hypothesized space-time asymmetry, controlling spatial and temporal aspects of the stimuli even more stringently than they are generally ‘controlled’ in the natural world. Results replicated those of Casasanto and colleagues, validating their finding of a robust representational asymmetry between space and time, and extending it to children (4-10 y.o.) who speak Dutch and Brazilian Portuguese.
  • Bouhali, F., Mongelli, V., & Cohen, L. (2017). Musical literacy shifts asymmetries in the ventral visual cortex. NeuroImage, 156, 445-455. doi:10.1016/j.neuroimage.2017.04.027.

    Abstract

    The acquisition of literacy has a profound impact on the functional specialization and lateralization of the visual cortex. Due to the overall lateralization of the language network, specialization for printed words develops in the left occipitotemporal cortex, allegedly inducing a secondary shift of visual face processing to the right, in literate as compared to illiterate subjects. Applying the same logic to the acquisition of high-level musical literacy, we predicted that, in musicians as compared to non-musicians, occipitotemporal activations should show a leftward shift for music reading, and an additional rightward push for face perception. To test these predictions, professional musicians and non-musicians viewed pictures of musical notation, faces, words, tools and houses in the MRI, and laterality was assessed in the ventral stream combining ROI and voxel-based approaches. The results supported both predictions, and allowed us to locate the leftward shift to the inferior temporal gyrus and the rightward shift to the fusiform cortex. Moreover, these laterality shifts generalized to categories other than music and faces. Finally, correlation measures across subjects did not support a causal link between the leftward and rightward shifts. Thus the acquisition of an additional perceptual expertise extensively modifies the laterality pattern in the visual system.

    Additional information

    1-s2.0-S1053811917303208-mmc1.docx

  • Bowerman, M., & Pederson, E. (1992). Topological relations picture series. In S. C. Levinson (Ed.), Space stimuli kit 1.2 (pp. 51). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883589.

    Abstract

    This task is designed to elicit expressions of spatial relations. It was originally designed by Melissa Bowerman for use with young children, but was then developed further by Bowerman in collaboration with Pederson for crosslinguistic comparison. It has been used in fieldsites all over the world and is commonly known as “BowPed” or “TPRS”. Older incarnations did not always come with instructions. This entry includes a one-page instruction sheet and high quality versions of the original pictures.
  • Bowerman, M. (1992). Topological Relations Pictures: Topological Paths. In S. C. Levinson (Ed.), Space stimuli kit 1.2 (pp. 18-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3512508.

    Abstract

    This entry suggests ways to elicit descriptions of caused motion involving topological relations (the domain of English put IN/ON/TOGETHER, take OUT/OFF/APART, etc.). There is a large amount of cross-linguistic variation in this domain. The tasks outlined here address matters such as the division of labor between the various elements of spatial semantics in the sentence. For example, is most of the work of expressing PATH done in a locative marker, or in the verb, or both?
  • Bowerman, M. (1992). Topological Relations Pictures: Static Relations. In S. C. Levinson (Ed.), Space stimuli kit 1.2 (pp. 25-28). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3512672.

    Abstract

    The precursor to the Bowped stimuli, this entry suggests various spatial configurations to explore using real objects, rather than the line drawings used in Bowped.
  • Bowerman, M. (2004). From universal to language-specific in early grammatical development [Reprint]. In K. Trott, S. Dobbinson, & P. Griffiths (Eds.), The child language reader (pp. 131-146). London: Routledge.

    Abstract

    Attempts to explain children's grammatical development often assume a close initial match between units of meaning and units of form; for example, agents are said to map to sentence-subjects and actions to verbs. The meanings themselves, according to this view, are not influenced by language, but reflect children's universal non-linguistic way of understanding the world. This paper argues that, contrary to this position, meaning as it is expressed in children's early sentences is, from the beginning, organized on the basis of experience with the grammar and lexicon of a particular language. As a case in point, children learning English and Korean are shown to express meanings having to do with directed motion according to language-specific principles of semantic and grammatical structuring from the earliest stages of word combination.
  • Bowerman, M., & Meyer, A. (1991). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.12 1991. Nijmegen: MPI for Psycholinguistics.
  • Bowerman, M. (1989). Learning a semantic system: What role do cognitive predispositions play? In M. L. Rice, & R. L. Schiefelbusch (Eds.), The teachability of language (pp. 133-169). Baltimore: Paul H. Brookes.
  • Bowerman, M. (2011). Linguistic typology and first language acquisition. In J. J. Song (Ed.), The Oxford handbook of linguistic typology (pp. 591-617). Oxford: Oxford University Press.
  • Bowerman, M. (1982). Evaluating competing linguistic models with language acquisition data: Implications of developmental errors with causative verbs. Quaderni di semantica, 3, 5-66.
  • Bowerman, M., Gullberg, M., Majid, A., & Narasimhan, B. (2004). Put project: The cross-linguistic encoding of placement events. In A. Majid (Ed.), Field Manual Volume 9 (pp. 10-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492916.

    Abstract

    How similar are the event concepts encoded by different languages? So far, few event domains have been investigated in any detail. The PUT project extends the systematic cross-linguistic exploration of event categorisation to a new domain, that of placement events (putting things in places and removing them from places). The goal of this task is to explore cross-linguistic universality and variability in the semantic categorisation of placement events (e.g., ‘putting a cup on the table’).

    Additional information

    2004_Put_project_video_stimuli.zip
  • Bowerman, M. (1982). Reorganizational processes in lexical and syntactic development. In E. Wanner, & L. Gleitman (Eds.), Language acquisition: The state of the art (pp. 319-346). New York: Academic Press.
  • Bowerman, M. (1982). Starting to talk worse: Clues to language acquisition from children's late speech errors. In S. Strauss (Ed.), U shaped behavioral growth (pp. 101-145). New York: Academic Press.
  • Braden, R. O., Amor, D. J., Fisher, S. E., Mei, C., Myers, C. T., Mefford, H., Gill, D., Srivastava, S., Swanson, L. C., Goel, H., Scheffer, I. E., & Morgan, A. T. (2021). Severe speech impairment is a distinguishing feature of FOXP1-related disorder. Developmental Medicine & Child Neurology, 63(12), 1417-1426. doi:10.1111/dmcn.14955.

    Abstract

    Aim
    To delineate the speech and language phenotype of a cohort of individuals with FOXP1-related disorder.

    Method
    We administered a standardized test battery to examine speech and oral motor function, receptive and expressive language, non-verbal cognition, and adaptive behaviour. Clinical history and cognitive assessments were analysed together with speech and language findings.

    Results
    Twenty-nine patients (17 females, 12 males; mean age 9y 6mo; median age 8y [range 2y 7mo–33y]; SD 6y 5mo) with pathogenic FOXP1 variants (14 truncating, three missense, three splice site, one in-frame deletion, eight cytogenic deletions; 28 out of 29 were de novo variants) were studied. All had atypical speech, with 21 being verbal and eight minimally verbal. All verbal patients had dysarthric and apraxic features, with phonological deficits in most (14 out of 16). Language scores were low overall. In the 21 individuals who carried truncating or splice site variants and small deletions, expressive abilities were relatively preserved compared with comprehension.

    Interpretation
    FOXP1-related disorder is characterized by a complex speech and language phenotype with prominent dysarthria, broader motor planning and programming deficits, and linguistic-based phonological errors. Diagnosis of the speech phenotype associated with FOXP1-related dysfunction will inform early targeted therapy.

    Additional information

    figure S1 table S1
  • Bramão, I., Reis, A., Petersson, K. M., & Faísca, L. (2016). Knowing that strawberries are red and seeing red strawberries: The interaction between surface colour and colour knowledge information. Journal of Cognitive Psychology, 28(6), 641-657. doi:10.1080/20445911.2016.1182171.

    Abstract

    This study investigates the interaction between surface and colour knowledge information during object recognition. In two different experiments, participants were instructed to decide whether two presented stimuli belonged to the same object identity. On the non-matching trials, we manipulated the shape and colour knowledge information activated by the two stimuli by creating four different stimulus pairs: (1) similar in shape and colour (e.g. TOMATO–APPLE); (2) similar in shape and dissimilar in colour (e.g. TOMATO–COCONUT); (3) dissimilar in shape and similar in colour (e.g. TOMATO–CHILI PEPPER) and (4) dissimilar in both shape and colour (e.g. TOMATO–PEANUT). The object pictures were presented in typical and atypical colours and also in black-and-white. The interaction between surface and colour knowledge was shown to be contingent upon shape information: while colour knowledge is more important for recognising structurally similar shaped objects, surface colour is more prominent for recognising structurally dissimilar shaped objects.
  • Bramão, I., Reis, A., Petersson, K. M., & Faísca, L. (2011). The role of color in object recognition: A review and meta-analysis. Acta Psychologica, 138, 244-253. doi:10.1016/j.actpsy.2011.06.010.

    Abstract

    In this study, we systematically review the scientific literature on the effect of color on object recognition. Thirty-five independent experiments, comprising 1535 participants, were included in a meta-analysis. We found a moderate effect of color on object recognition (d = 0.28). Specific effects of moderator variables were analyzed and we found that color diagnosticity is the factor with the greatest moderator effect on the influence of color in object recognition; studies using color diagnostic objects showed a significant color effect (d = 0.43), whereas a marginal color effect was found in studies that used non-color diagnostic objects (d = 0.18). The present study did not permit the drawing of specific conclusions about the moderator effect of the object recognition task; while the meta-analytic review showed that color information improves object recognition mainly in studies using naming tasks (d = 0.36), the literature review revealed a large body of evidence showing positive effects of color information on object recognition in studies using a large variety of visual recognition tasks. We also found that color is important for the ability to recognize artifacts and natural objects, to recognize objects presented as types (line-drawings) or as tokens (photographs), and to recognize objects that are presented without surface details, such as texture or shadow. Taken together, the results of the meta-analysis strongly support the contention that color plays a role in object recognition. This suggests that the role of color should be taken into account in models of visual object recognition.

  • Bramão, I., Inácio, F., Faísca, L., Reis, A., & Petersson, K. M. (2011). The influence of color information on the recognition of color diagnostic and noncolor diagnostic objects. The Journal of General Psychology, 138(1), 49-65. doi:10.1080/00221309.2010.533718.

    Abstract

    In the present study, the authors explore in detail the level of visual object recognition at which perceptual color information improves the recognition of color diagnostic and noncolor diagnostic objects. To address this issue, 3 object recognition tasks, with different cognitive demands, were designed: (a) an object verification task; (b) a category verification task; and (c) a name verification task. They found that perceptual color information improved color diagnostic object recognition mainly in tasks for which access to the semantic knowledge about the object was necessary to perform the task; that is, in category and name verification. In contrast, the authors found that perceptual color information facilitates noncolor diagnostic object recognition when access to the object’s structural description from long-term memory was necessary—that is, object verification. In summary, the present study shows that the role of perceptual color information in object recognition is dependent on color diagnosticity.
  • Brand, S., & Ernestus, M. (2021). Reduction of word-final obstruent-liquid-schwa clusters in Parisian French. Corpus Linguistics and Linguistic Theory, 17(1), 249-285. doi:10.1515/cllt-2017-0067.

    Abstract

    This corpus study investigated pronunciation variants of word-final obstruent-liquid-schwa (OLS) clusters in nouns in casual Parisian French. Results showed that at least one phoneme was absent in 80.7% of the 291 noun tokens in the dataset, and that the whole cluster was absent (e.g., [mis] for ministre) in no less than 15.5% of the tokens. We demonstrate that phonemes are not always completely absent, but that they may leave traces on neighbouring phonemes. Further, the clusters display undocumented voice assimilation patterns. Statistical modelling showed that a phoneme is most likely to be absent if the following phoneme is also absent. The durations of the phonemes are conditioned particularly by the position of the word in the prosodic phrase. We argue, on the basis of three different types of evidence, that in French word-final OLS clusters, the absence of obstruents is mainly due to gradient reduction processes, whereas the absence of schwa and liquids may also be due to categorical deletion processes.
  • Brand, S. (2017). The processing of reduced word pronunciation variants by natives and learners: Evidence from French casual speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Brandmeyer, A., Sadakata, M., Timmers, R., & Desain, P. (2011). Learning expressive percussion performance under different visual feedback conditions. Psychological Research, 75, 107-121. doi:10.1007/s00426-010-0291-6.

    Abstract

    A study was conducted to test the effect of two different forms of real-time visual feedback on expressive percussion performance. Conservatory percussion students performed imitations of recorded teacher performances while receiving either high-level feedback on the expressive style of their performances, low-level feedback on the timing and dynamics of the performed notes, or no feedback. The high-level feedback was based on a Bayesian analysis of the performances, while the low-level feedback was based on the raw participant timing and dynamics data. Results indicated that neither form of feedback led to significantly smaller timing and dynamics errors. However, high-level feedback did lead to a higher proficiency in imitating the expressive style of the target performances, as indicated by a probabilistic measure of expressive style. We conclude that, while potentially disruptive to timing processes involved in music performance due to extraneous cognitive load, high-level visual feedback can improve participant imitations of expressive performance features.
  • Brandt, S., Nitschke, S., & Kidd, E. (2017). Priming the comprehension of German object relative clauses. Language Learning and Development, 13(3), 241-261. doi:10.1080/15475441.2016.1235500.

    Abstract

    Structural priming is a useful laboratory-based technique for investigating how children respond to temporary changes in the distribution of structures in their input. In the current study we investigated whether increasing the number of object relative clauses (RCs) in German-speaking children’s input changes their processing preferences for ambiguous RCs. Fifty-one 6-year-olds and 54 9-year-olds participated in a priming task that (i) gauged their baseline interpretations for ambiguous RC structures, (ii) primed an object-RC interpretation of ambiguous RCs, and (iii) determined whether priming persevered beyond immediate prime-target pairs. The 6-year-old children showed no priming effect, whereas the 9-year-old group showed robust priming that was long lasting. Unlike in studies of priming in production, priming did not increase in magnitude when there was lexical overlap between prime and target. Overall, the results suggest that increased exposure to object RCs facilitates children’s interpretation of this otherwise infrequent structure, but only in older children. The implications for acquisition theory are discussed.
  • Braun, B., Dainora, A., & Ernestus, M. (2011). An unfamiliar intonation contour slows down online speech comprehension. Language and Cognitive Processes, 26(3), 350-375. doi:10.1080/01690965.2010.492641.

    Abstract

    This study investigates whether listeners' familiarity with an intonation contour affects speech processing. In three experiments, Dutch participants heard Dutch sentences with normal intonation contours and with unfamiliar ones and performed word-monitoring, lexical decision, or semantic categorisation tasks (the latter two with cross-modal identity priming). The unfamiliar intonation contour slowed down participants on all tasks, which demonstrates that an unfamiliar intonation contour has a robust detrimental effect on speech processing. Since cross-modal identity priming with a lexical decision task taps into lexical access, this effect obtained in this task suggests that an unfamiliar intonation contour hinders lexical access. Furthermore, results from the semantic categorisation task show that the effect of an uncommon intonation contour is long-lasting and hinders subsequent processing. Hence, intonation not only contributes to utterance meaning (emotion, sentence type, and focus), but also affects crucial aspects of the speech comprehension process and is more important than previously thought.
  • Braun, B., & Tagliapietra, L. (2011). On-line interpretation of intonational meaning in L2. Language and Cognitive Processes, 26(2), 224-235. doi:10.1080/01690965.2010.486209.

    Abstract

    Despite their relatedness, Dutch and German differ in the interpretation of a particular intonation contour, the hat pattern. In the literature, this contour has been described as neutral for Dutch, and as contrastive for German. A recent study supports the idea that Dutch listeners interpret this contour neutrally, compared to the contrastive interpretation of a lexically identical utterance realised with a double peak pattern. In particular, this study showed shorter lexical decision latencies to visual targets (e.g., PELIKAAN, “pelican”) following a contrastively related prime (e.g., flamingo, “flamingo”) only when the primes were embedded in sentences with a contrastive double peak contour, not in sentences with a neutral hat pattern. The present study replicates Experiment 1a of Braun and Tagliapietra (2009) with German learners of Dutch. Highly proficient learners of Dutch differed from Dutch natives in that they showed reliable priming effects for both intonation contours. Thus, the interpretation of intonational meaning in L2 appears to be fast, automatic, and driven by the associations learned in the native language.
  • Braun, B., Lemhöfer, K., & Mani, N. (2011). Perceiving unstressed vowels in foreign-accented English. Journal of the Acoustical Society of America, 129, 376-387. doi:10.1121/1.3500688.

    Abstract

    This paper investigated how foreign-accented stress cues affect on-line speech comprehension in British speakers of English. While unstressed English vowels are usually reduced to /ə/, Dutch speakers of English only slightly centralize them. Speakers of both languages differentiate stress by suprasegmentals (duration and intensity). In a cross-modal priming experiment, English listeners heard sentences ending in monosyllabic prime fragments—produced by either an English or a Dutch speaker of English—and performed lexical decisions on visual targets. Primes were either stress-matching (“ab” excised from absurd), stress-mismatching (“ab” from absence), or unrelated (“pro” from profound) with respect to the target (e.g., ABSURD). Results showed a priming effect for stress-matching primes only when produced by the English speaker, suggesting that vowel quality is a more important cue to word stress than suprasegmental information. Furthermore, for visual targets with word-initial secondary stress that do not require vowel reduction (e.g., CAMPAIGN), resembling the Dutch way of realizing stress, there was a priming effect for both speakers. Hence, our data suggest that Dutch-accented English is not harder to understand in general, but it is in instances where the language-specific implementation of lexical stress differs across languages.
  • Brehm, L., & Meyer, A. S. (2021). Planning when to say: Dissociating cue use in utterance initiation using cross-validation. Journal of Experimental Psychology: General, 150(9), 1772-1799. doi:10.1037/xge0001012.

    Abstract

    In conversation, turns follow each other with minimal gaps. To achieve this, speakers must launch their utterances shortly before the predicted end of the partner’s turn. We examined the relative importance of cues to partner utterance content and partner utterance length for launching coordinated speech. In three experiments, Dutch adult participants had to produce prepared utterances (e.g., vier, “four”) immediately after a recording of a confederate’s utterance (zeven, “seven”). To assess the role of corepresenting content versus attending to speech cues in launching coordinated utterances, we varied whether the participant could see the stimulus being named by the confederate, the confederate prompt’s length, and whether within a block of trials, the confederate prompt’s length was predictable. We measured how these factors affected the gap between turns and the participants’ allocation of visual attention while preparing to speak. Using a machine-learning technique, model selection by k-fold cross-validation, we found that gaps were most strongly predicted by cues from the confederate speech signal, though some benefit was also conferred by seeing the confederate’s stimulus. This shows that, at least in a simple laboratory task, speakers rely more on cues in the partner’s speech than corepresentation of their utterance content.
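    The model selection by k-fold cross-validation mentioned in this abstract can be illustrated with a small sketch: candidate predictor sets for the gap between turns are compared on cross-validated error, and the best-scoring set is retained. Everything below runs on synthetic data; the predictor names (speech cues, content cues), the linear model, and the 10-fold split are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors: acoustic cues from the partner's speech vs. cues
# tied to the content of the partner's utterance. Purely illustrative data.
speech_cues = rng.normal(size=(n, 2))   # e.g. prompt duration, final-syllable length
content_cues = rng.normal(size=(n, 1))  # e.g. visibility of the named stimulus
gap = 0.8 * speech_cues[:, 0] + 0.1 * content_cues[:, 0] + rng.normal(scale=0.5, size=n)

def cv_mse(X, y, k=10):
    """Mean squared error of a linear model estimated by k-fold cross-validation."""
    scores = cross_val_score(LinearRegression(), X, y,
                             cv=k, scoring="neg_mean_squared_error")
    return -scores.mean()

models = {
    "speech cues only": speech_cues,
    "content cues only": content_cues,
    "both": np.hstack([speech_cues, content_cues]),
}
for name, X in models.items():
    print(f"{name}: CV MSE = {cv_mse(X, gap):.3f}")
# The predictor set with the lowest cross-validated error is the one retained.
```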
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2021). Probabilistic online processing of sentence anomalies. Language, Cognition and Neuroscience, 36(8), 959-983. doi:10.1080/23273798.2021.1900579.

    Abstract

    Listeners can successfully interpret the intended meaning of an utterance even when it contains errors or other unexpected anomalies. The present work combines an online measure of attention to sentence referents (visual world eye-tracking) with offline judgments of sentence meaning to disclose how the interpretation of anomalous sentences unfolds over time in order to explore mechanisms of non-literal processing. We use a metalinguistic judgment in Experiment 1 and an elicited imitation task in Experiment 2. In both experiments, we focus on one morphosyntactic anomaly (Subject-verb agreement; The key to the cabinets literally *were … ) and one semantic anomaly (Without; Lulu went to the gym without her hat ?off) and show that non-literal referents to each are considered upon hearing the anomalous region of the sentence. This shows that listeners understand anomalies by overwriting or adding to an initial interpretation and that this occurs incrementally and adaptively as the sentence unfolds.
  • Brehm, L., & Goldrick, M. (2016). Empirical and conceptual challenges for neurocognitive theories of language production. Language, Cognition and Neuroscience, 31(4), 504-507. doi:10.1080/23273798.2015.1110604.
  • Brehm, L., & Goldrick, M. (2017). Distinguishing discrete and gradient category structure in language: Insights from verb-particle constructions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(10), 1537-1556. doi:10.1037/xlm0000390.

    Abstract

    The current work uses memory errors to examine the mental representation of verb-particle constructions (VPCs; e.g., make up the story, cut up the meat). Some evidence suggests that VPCs are represented by a cline in which the relationship between the VPC and its component elements ranges from highly transparent (cut up) to highly idiosyncratic (make up). Other evidence supports a multiple class representation, characterizing VPCs as belonging to discretely separated classes differing in semantic and syntactic structure. We outline a novel paradigm to investigate the representation of VPCs in which we elicit illusory conjunctions, or memory errors sensitive to syntactic structure. We then use a novel application of piecewise regression to demonstrate that the resulting error pattern follows a cline rather than discrete classes. A preregistered replication verifies these findings, and a final preregistered study verifies that these errors reflect syntactic structure. This provides evidence for gradient rather than discrete representations across levels of representation in language processing.
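    A minimal sketch of how piecewise regression can separate a cline from discrete classes: fit a continuous piecewise-linear (hinge) model and a two-class step model over candidate breakpoints and compare their fit. The variable names, the toy data, and the use of residual sum of squares as the comparison criterion are assumptions for illustration, not the authors' preregistered analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 120))                        # e.g. semantic transparency of the VPC
y = 0.1 + 0.4 * x + rng.normal(scale=0.05, size=x.size)    # toy outcome: a smooth cline

def rss_cline(x, y, c):
    """Residual sum of squares for a continuous piecewise-linear (hinge) fit at breakpoint c."""
    X = np.column_stack([np.ones_like(x), x, np.maximum(0, x - c)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

def rss_step(x, y, c):
    """Residual sum of squares for a two-class (step) fit with class boundary c."""
    lo, hi = y[x < c], y[x >= c]
    return np.sum((lo - lo.mean()) ** 2) + np.sum((hi - hi.mean()) ** 2)

breaks = np.linspace(0.2, 0.8, 25)
best_cline = min(rss_cline(x, y, c) for c in breaks)
best_step = min(rss_step(x, y, c) for c in breaks)
print(f"cline RSS = {best_cline:.3f}, step RSS = {best_step:.3f}")
# A clearly better fit for the cline model (here by construction) would favour
# a gradient rather than a discretely classed representation.
```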
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2019). Incremental interpretation in the first and second language. In M. Brown, & B. Dailey (Eds.), BUCLD 43: Proceedings of the 43rd annual Boston University Conference on Language Development (pp. 109-122). Somerville, MA: Cascadilla Press.
  • Brehm, L., Taschenberger, L., & Meyer, A. S. (2019). Mental representations of partner task cause interference in picture naming. Acta Psychologica, 199: 102888. doi:10.1016/j.actpsy.2019.102888.

    Abstract

    Interference in picture naming occurs from representing a partner's preparations to speak (Gambi, van de Cavey, & Pickering, 2015). We tested the origins of this interference using a simple non-communicative joint naming task based on Gambi et al. (2015), where response latencies indexed interference from partner task and partner speech content, and eye fixations to partner objects indexed overt attention. Experiment 1 contrasted a partner-present condition with a control partner-absent condition to establish the role of the partner in eliciting interference. For latencies, we observed interference from the partner's task and speech content, with interference increasing due to partner task in the partner-present condition. Eye-tracking measures showed that interference in naming was not due to overt attention to partner stimuli but to broad expectations about likely utterances. Experiment 2 examined whether an equivalent non-verbal task also elicited interference, as predicted from a language as joint action framework. We replicated the finding of interference due to partner task and again found no relationship between overt attention and interference. These results support Gambi et al. (2015). Individuals co-represent a partner's task while speaking, and doing so does not require overt attention to partner stimuli.
  • Brehm, L., & Bock, K. (2017). Referential and lexical forces in number agreement. Language, Cognition and Neuroscience, 32(2), 129-146. doi:10.1080/23273798.2016.1234060.

    Abstract

    In work on grammatical agreement in sentence production, there are accounts of verb number formulation that emphasise the role of whole-structure properties and accounts that emphasise the role of word-driven properties. To evaluate these alternatives, we carried out two experiments that examined a referential (wholistic) contributor to agreement along with two lexical-semantic (local) factors. Both experiments gauged the accuracy and latency of inflected-verb production in order to assess how variations in grammatical number interacted with the other factors. The accuracy of verb production was modulated both by the referential effect of notional number and by the lexical-semantic effects of relatedness and category membership. As an index of agreement difficulty, latencies were little affected by either factor. The findings suggest that agreement is sensitive to referential as well as lexical forces and highlight the importance of lexical-structural integration in the process of sentence production.
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2019). Speaker-specific processing of anomalous utterances. Quarterly Journal of Experimental Psychology, 72(4), 764-778. doi:10.1177/1747021818765547.

    Abstract

    Existing work shows that readers often interpret grammatical errors (e.g., The key to the cabinets *were shiny) and sentence-level blends (“without-blend”: Claudia left without her headphones *off) in a non-literal fashion, inferring that a more frequent or more canonical utterance was intended instead. This work examines how interlocutor identity affects the processing and interpretation of anomalous sentences. We presented anomalies in the context of “emails” attributed to various writers in a self-paced reading paradigm and used comprehension questions to probe how sentence interpretation changed based upon properties of the item and properties of the “speaker.” Experiment 1 compared Standardised American English speakers to L2 English speakers; Experiment 2 compared the same Standardised American English speakers to speakers of a non-Standardised American English dialect. Agreement errors and without-blends both led to more non-literal responses than comparable canonical items. For agreement errors, more non-literal interpretations also occurred when sentences were attributed to speakers of Standardised American English than to either non-Standardised group. These data suggest that understanding sentences relies on expectations and heuristics about which utterances are likely. These are based upon experience with language, with speaker-specific differences, and upon more general cognitive biases.

