Publications

  • Bergmann, C., & Cristia, A. (2018). Environmental influences on infants’ native vowel discrimination: The case of talker number in daily life. Infancy, 23(4), 484-501. doi:10.1111/infa.12232.

    Abstract

    Both quality and quantity of speech from the primary caregiver have been found to impact language development. A third aspect of the input has been largely ignored: the number of talkers who provide input. Some infants spend most of their waking time with only one person; others hear many different talkers. Even if the very same words are spoken the same number of times, the pronunciations can be more variable when several talkers pronounce them. Is language acquisition affected by the number of people who provide input? To shed light on the possible link between how many people provide input in daily life and infants’ native vowel discrimination, three age groups were tested: 4-month-olds (before attunement to native vowels), 6-month-olds (at the cusp of native vowel attunement) and 12-month-olds (well attuned to the native vowel system). No relationship was found between talker number and native vowel discrimination skills in 4- and 6-month-olds, who are overall able to discriminate the vowel contrast. At 12 months, we observe a small positive relationship, but further analyses reveal that the data are also compatible with the null hypothesis of no relationship. Implications in the context of infant language acquisition and cognitive development are discussed.
  • Bergmann, C., & Cristia, A. (2016). Development of infants' segmentation of words from native speech: a meta-analytic approach. Developmental Science, 19(6), 901-917. doi:10.1111/desc.12341.

    Abstract

    Infants start learning words, the building blocks of language, at least by 6 months. To do so, they must be able to extract the phonological form of words from running speech. A rich literature has investigated this process, termed word segmentation. We addressed the fundamental question of how infants of different ages segment words from their native language using a meta-analytic approach. Based on previous popular theoretical and experimental work, we expected infants to display familiarity preferences early on, with a switch to novelty preferences as infants become more proficient at processing and segmenting native speech. We also considered the possibility that this switch may occur at different points in time as a function of infants' native language and took into account the impact of various task- and stimulus-related factors that might affect difficulty. The combined results from 168 experiments reporting on data gathered from 3774 infants revealed a persistent familiarity preference across all ages. There was no significant effect of additional factors, including native language and experiment design. Further analyses revealed no sign of selective data collection or reporting. We conclude that models of infant information processing that are frequently cited in this domain may not, in fact, apply in the case of segmenting words from native speech.

    Additional information

    desc12341-sup-0001-sup_material.doc
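
    Code sketch

    The abstract above describes pooling effect sizes from 168 word-segmentation experiments. As a rough illustration of how such pooling works in general, the sketch below applies a DerSimonian-Laird random-effects model to a handful of invented effect sizes and variances; it is not the authors' actual analysis pipeline, and all numbers are made up.

    ```python
    # Minimal sketch of random-effects meta-analytic pooling (DerSimonian-Laird),
    # illustrating how effect sizes from many experiments can be combined.
    # Effect sizes and variances are invented for illustration; this is NOT the
    # authors' actual analysis, which used more elaborate meta-analytic models.
    import numpy as np

    def random_effects_pool(effects, variances):
        """Pool per-experiment effect sizes with DerSimonian-Laird random effects."""
        effects = np.asarray(effects, dtype=float)
        variances = np.asarray(variances, dtype=float)
        w_fixed = 1.0 / variances                            # inverse-variance weights
        theta_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
        q = np.sum(w_fixed * (effects - theta_fixed) ** 2)   # Cochran's Q (heterogeneity)
        df = len(effects) - 1
        c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
        tau2 = max(0.0, (q - df) / c)                        # between-study variance
        w_random = 1.0 / (variances + tau2)
        theta = np.sum(w_random * effects) / np.sum(w_random)
        se = np.sqrt(1.0 / np.sum(w_random))
        return theta, se, tau2

    # Toy data: Cohen's d and sampling variance for a handful of hypothetical experiments
    d = [0.30, 0.15, 0.45, 0.10, 0.25]
    v = [0.04, 0.06, 0.05, 0.03, 0.07]
    theta, se, tau2 = random_effects_pool(d, v)
    print(f"pooled d = {theta:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")
    ```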
  • Bergmann, C., Cristia, A., & Dupoux, E. (2016). Discriminability of sound contrasts in the face of speaker variation quantified. In Proceedings of the 38th Annual Conference of the Cognitive Science Society. (pp. 1331-1336). Austin, TX: Cognitive Science Society.

    Abstract

    How does a naive language learner deal with speaker variation irrelevant to distinguishing word meanings? Experimental data is contradictory, and incompatible models have been proposed. Here, we examine basic assumptions regarding the acoustic signal the learner deals with: Is speaker variability a hurdle in discriminating sounds or can it easily be ignored? To this end, we summarize existing infant data. We then present machine-based discriminability scores of sound pairs obtained without any language knowledge. Our results show that speaker variability decreases sound contrast discriminability, and that some contrasts are affected more than others. However, chance performance is rare; most contrasts remain discriminable in the face of speaker variation. We take our results to mean that speaker variation is not a uniform hurdle to discriminating sound contrasts, and careful examination is necessary when planning and interpreting studies testing whether and to what extent infants (and adults) are sensitive to speaker differences.

    Additional information

    Scripts and data
  • Bergmann, C., Tsuji, S., Piccinini, P. E., Lewis, M. L., Braginsky, M. B., Frank, M. C., & Cristia, A. (2018). Promoting replicability in developmental research through meta-analyses: Insights from language acquisition research. Child Development, 89(6), 1996-2009. doi:10.1111/cdev.13079.

    Abstract

    Previous work suggests key factors for replicability, a necessary feature for theory building, include statistical power and appropriate research planning. These factors are examined by analyzing a collection of 12 standardized meta-analyses on language development between birth and 5 years. With a median effect size of Cohen's d = 0.45 and typical sample size of 18 participants, most research is underpowered (range: 6%-99%; median 44%), and calculating power based on seminal publications is not a suitable strategy. Method choice can be improved, as shown in analyses on exclusion rates and effect size as a function of method. The article ends with a discussion on how to increase replicability in both language acquisition studies specifically and developmental research more generally.
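
    Code sketch

    The headline numbers in the abstract (median Cohen's d = 0.45, typical n = 18, median power around 44%) can be approximated with a standard power calculation. The sketch below assumes a two-sided one-sample or paired t-test at alpha = .05, which is an illustrative simplification; the meta-analysed studies themselves used a variety of designs.

    ```python
    # Sketch of the power calculation behind the abstract's headline numbers:
    # a two-sided one-sample (or paired) t-test with Cohen's d = 0.45 and n = 18.
    # The test type and alpha = .05 are assumptions for illustration only.
    from scipy import stats
    import numpy as np

    def t_test_power(d, n, alpha=0.05):
        """Power of a two-sided one-sample/paired t-test for effect size d and sample size n."""
        df = n - 1
        nc = d * np.sqrt(n)                      # noncentrality parameter
        t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-sided critical value
        # Probability of falling beyond either critical value under the noncentral t
        return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

    # Roughly 0.4, in line with the reported median power
    print(f"power at d = 0.45, n = 18: {t_test_power(0.45, 18):.2f}")
    ```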
  • Bergmann, C., Tsuji, S., & Cristia, A. (2017). Top-down versus bottom-up theories of phonological acquisition: A big data approach. In Proceedings of Interspeech 2017 (pp. 2103-2107).

    Abstract

    Recent work has made available a number of standardized meta- analyses bearing on various aspects of infant language processing. We utilize data from two such meta-analyses (discrimination of vowel contrasts and word segmentation, i.e., recognition of word forms extracted from running speech) to assess whether the published body of empirical evidence supports a bottom-up versus a top-down theory of early phonological development by leveling the power of results from thousands of infants. We predicted that if infants can rely purely on auditory experience to develop their phonological categories, then vowel discrimination and word segmentation should develop in parallel, with the latter being potentially lagged compared to the former. However, if infants crucially rely on word form information to build their phonological categories, then development at the word level must precede the acquisition of native sound categories. Our results do not support the latter prediction. We discuss potential implications and limitations, most saliently that word forms are only one top-down level proposed to affect phonological development, with other proposals suggesting that top-down pressures emerge from lexical (i.e., word-meaning pairs) development. This investigation also highlights general procedures by which standardized meta-analyses may be reused to answer theoretical questions spanning across phenomena.

    Additional information

    Scripts and data
  • Berkers, R. M. W. J., Ekman, M., van Dongen, E. V., Takashima, A., Barth, M., Paller, K. A., & Fernández, G. (2018). Cued reactivation during slow-wave sleep induces brain connectivity changes related to memory stabilization. Scientific Reports, 8: 16958. doi:10.1038/s41598-018-35287-6.

    Abstract

    Memory reprocessing following acquisition enhances memory consolidation. Specifically, neural activity during encoding is thought to be ‘replayed’ during subsequent slow-wave sleep. Such memory replay is thought to contribute to the functional reorganization of neural memory traces. In particular, memory replay may facilitate the exchange of information across brain regions by inducing a reconfiguration of connectivity across the brain. Memory reactivation can be induced by external cues through a procedure known as “targeted memory reactivation”. Here, we analysed data from a published study with auditory cues used to reactivate visual object-location memories during slow-wave sleep. We characterized effects of memory reactivation on brain network connectivity using graph-theory. We found that cue presentation during slow-wave sleep increased global network integration of occipital cortex, a visual region that was also active during retrieval of object locations. Although cueing did not have an overall beneficial effect on the retention of cued versus uncued associations, individual differences in overnight memory stabilization were related to enhanced network integration of occipital cortex. Furthermore, occipital cortex displayed enhanced connectivity with mnemonic regions, namely the hippocampus, parahippocampal gyrus, thalamus and medial prefrontal cortex during cue sound presentation. Together, these results suggest a neural mechanism where cue-induced replay during sleep increases integration of task-relevant perceptual regions with mnemonic regions. This cross-regional integration may be instrumental for the consolidation and long-term storage of enduring memories.

    Additional information

    41598_2018_35287_MOESM1_ESM.doc
  • Besharati, S., Forkel, S. J., Kopelman, M., Solms, M., Jenkinson, P., & Fotopoulou, A. (2016). Mentalizing the body: Spatial and social cognition in anosognosia for hemiplegia. Brain, 139(3), 971-985. doi:10.1093/brain/awv390.

    Abstract

    Following right-hemisphere damage, a specific disorder of motor awareness can occur called anosognosia for hemiplegia, i.e. the denial of motor deficits contralateral to a brain lesion. The study of anosognosia can offer unique insights into the neurocognitive basis of awareness. Typically, however, awareness is assessed as a first person judgement and the ability of patients to think about their bodies in more ‘objective’ (third person) terms is not directly assessed. This may be important as right-hemisphere spatial abilities may underlie our ability to take third person perspectives. This possibility was assessed for the first time in the present study. We investigated third person perspective taking using both visuospatial and verbal tasks in right-hemisphere stroke patients with anosognosia (n = 15) and without anosognosia (n = 15), as well as neurologically healthy control subjects (n = 15). The anosognosic group performed worse than both control groups when having to perform the tasks from a third versus a first person perspective. Individual analysis further revealed a classical dissociation between most anosognosic patients and control subjects in mental (but not visuospatial) third person perspective taking abilities. Finally, the severity of unawareness in anosognosia patients was correlated to greater impairments in such third person, mental perspective taking abilities (but not visuospatial perspective taking). In voxel-based lesion mapping we also identified the lesion sites linked with such deficits, including some brain areas previously associated with inhibition, perspective taking and mentalizing, such as the inferior and middle frontal gyri, as well as the supramarginal and superior temporal gyri. These results suggest that neurocognitive deficits in mental perspective taking may contribute to anosognosia and provide novel insights regarding the relation between self-awareness and social cognition.
  • Bickel, B. (1991). Der Hang zur Exzentrik - Annäherungen an das kognitive Modell der Relativkonstruktion. In W. Bisang, & P. Rinderknecht (Eds.), Von Europa bis Ozeanien - von der Antinomie zum Relativsatz (pp. 15-37). Zurich, Switzerland: Seminar für Allgemeine Sprachwissenschaft der Universität.
  • Bickel, B. (1994). In the vestibule of meaning: Transitivity inversion as a morphological phenomenon. Studies in Language, 19(1), 73-127.
  • Birchall, J., Dunn, M., & Greenhill, S. J. (2016). A combined comparative and phylogenetic analysis of the Chapacuran language family. International Journal of American Linguistics, 82(3), 255-284. doi:10.1086/687383.

    Abstract

    The Chapacuran language family, with three extant members and nine historically attested lects, has yet to be classified following modern standards in historical linguistics. This paper presents an internal classification of these languages by combining both the traditional comparative method (CM) and Bayesian phylogenetic inference (BPI). We identify multiple systematic sound correspondences and 285 cognate sets of basic vocabulary using the available documentation. These allow us to reconstruct a large portion of the Proto-Chapacuran phonemic inventory and identify tentative major subgroupings. The cognate sets form the input for the BPI analysis, which uses a stochastic Continuous-Time Markov Chain to model the change of these cognate sets over time. We test various models of lexical substitution and evolutionary clocks, and use ethnohistorical information and data collection dates to calibrate the resulting trees. The CM and BPI analyses produce largely congruent results, suggesting a division of the family into three different clades.

    Additional information

    Appendix
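
    Code sketch

    The Bayesian phylogenetic analysis described above models cognate change with a stochastic Continuous-Time Markov Chain. The minimal sketch below shows the core ingredient of such binary-trait models: turning a 2x2 rate matrix into branch transition probabilities via the matrix exponential. The gain and loss rates are arbitrary illustrative values, not estimates from the Chapacuran data.

    ```python
    # Minimal sketch of the two-state continuous-time Markov chain underlying
    # binary cognate gain/loss models in Bayesian phylogenetics: transition
    # probabilities over a branch of length t are given by P(t) = expm(Q * t).
    # The gain/loss rates below are arbitrary illustrative values.
    import numpy as np
    from scipy.linalg import expm

    gain, loss = 0.3, 1.0           # instantaneous rates: absent -> present, present -> absent
    Q = np.array([[-gain,  gain],   # state 0 = cognate absent
                  [ loss, -loss]])  # state 1 = cognate present

    for t in (0.1, 0.5, 2.0):       # branch lengths in expected substitutions
        P = expm(Q * t)
        print(f"t = {t}: P(present stays present) = {P[1, 1]:.3f}, "
              f"P(absent -> present) = {P[0, 1]:.3f}")
    ```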
  • Black, A., & Bergmann, C. (2017). Quantifying infants' statistical word segmentation: A meta-analysis. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Meeting of the Cognitive Science Society (pp. 124-129). Austin, TX: Cognitive Science Society.

    Abstract

    Theories of language acquisition and perceptual learning increasingly rely on statistical learning mechanisms. The current meta-analysis aims to clarify the robustness of this capacity in infancy within the word segmentation literature. Our analysis reveals a significant, small effect size for conceptual replications of Saffran, Aslin, & Newport (1996), and a nonsignificant effect across all studies that incorporate transitional probabilities to segment words. In both conceptual replications and the broader literature, however, statistical learning is moderated by whether stimuli are naturally produced or synthesized. These findings invite deeper questions about the complex factors that influence statistical learning, and the role of statistical learning in language acquisition.
  • De Bleser, R., Willmes, K., Graetz, P., & Hagoort, P. (1991). De Akense Afasie Test. Logopedie en Foniatrie, 63, 207-217.
  • Blumstein, S., & Cutler, A. (2003). Speech perception: Phonetic aspects. In W. Frawley (Ed.), International encyclopaedia of linguistics (pp. 151-154). Oxford: Oxford University Press.
  • Blythe, J. (2018). Genesis of the trinity: The convergent evolution of trirelational kinterms. In P. McConvell, & P. Kelly (Eds.), Skin, kin and clan: The dynamics of social categories in Indigenous Australia (pp. 431-471). Canberra: ANU EPress.
  • Blythe, J. (2013). Preference organization driving structuration: Evidence from Australian Aboriginal interaction for pragmatically motivated grammaticalization. Language, 89(4), 883-919.
  • Bobb, S., Huettig, F., & Mani, N. (2016). Predicting visual information during sentence processing: Toddlers activate an object's shape before it is mentioned. Journal of Experimental Child Psychology, 151, 51-64. doi:10.1016/j.jecp.2015.11.002.

    Abstract

    We examined the contents of language-mediated prediction in toddlers by investigating the extent to which toddlers are sensitive to visual-shape representations of upcoming words. Previous studies with adults suggest limits to the degree to which information about the visual form of a referent is predicted during language comprehension in low constraint sentences. 30-month-old toddlers heard either contextually constraining sentences or contextually neutral sentences as they viewed images that were either identical or shape related to the heard target label. We observed that toddlers activate shape information of upcoming linguistic input in contextually constraining semantic contexts: Hearing a sentence context that was predictive of the target word activated perceptual information that subsequently influenced visual attention toward shape-related targets. Our findings suggest that visual shape is central to predictive language processing in toddlers.
  • Bock, K., Irwin, D. E., Davidson, D. J., & Levelt, W. J. M. (2003). Minding the clock. Journal of Memory and Language, 48, 653-685. doi:10.1016/S0749-596X(03)00007-X.

    Abstract

    Telling time is an exercise in coordinating language production with visual perception. By coupling different ways of saying times with different ways of seeing them, the performance of time-telling can be used to track cognitive transformations from visual to verbal information in connected speech. To accomplish this, we used eyetracking measures along with measures of speech timing during the production of time expressions. Our findings suggest that an effective interface between what has been seen and what is to be said can be constructed within 300 ms. This interface underpins a preverbal plan or message that appears to guide a comparatively slow, strongly incremental formulation of phrases. The results begin to trace the divide between seeing and saying (or thinking and speaking) that must be bridged during the creation of even the most prosaic utterances of a language.
  • Bock, K., & Levelt, W. J. M. (1994). Language production: Grammatical encoding. In M. A. Gernsbacher (Ed.), Handbook of Psycholinguistics (pp. 945-984). San Diego: Academic Press.
  • De Boer, M., Kokal, I., Blokpoel, M., Liu, R., Stolk, A., Roelofs, K., Van Rooij, I., & Toni, I. (2017). Oxytocin modulates human communication by enhancing cognitive exploration. Psychoneuroendocrinology, 86, 64-72. doi:10.1016/j.psyneuen.2017.09.010.

    Abstract

    Oxytocin is a neuropeptide known to influence how humans share material resources. Here we explore whether oxytocin influences how we share knowledge. We focus on two distinguishing features of human communication, namely the ability to select communicative signals that disambiguate the many-to-many mappings that exist between a signal’s form and meaning, and adjustments of those signals to the presumed cognitive characteristics of the addressee (“audience design”). Fifty-five males participated in a randomized, double-blind, placebo-controlled experiment involving the intranasal administration of oxytocin. The participants produced novel non-verbal communicative signals towards two different addressees, an adult or a child, in an experimentally-controlled live interactive setting. We found that oxytocin administration drives participants to generate signals of higher referential quality, i.e. signals that disambiguate more communicative problems; and to rapidly adjust those communicative signals to what the addressee understands. The combined effects of oxytocin on referential quality and audience design fit with the notion that oxytocin administration leads participants to explore more pervasively behaviors that can convey their intention, and diverse models of the addressees. These findings suggest that, besides affecting prosocial drive and salience of social cues, oxytocin influences how we share knowledge by promoting cognitive exploration.
  • De Boer, B., & Thompson, B. (2018). Biology-culture co-evolution in finite populations. Scientific Reports, 8: 1209. doi:10.1038/s41598-017-18928-0.

    Abstract

    Language is the result of two concurrent evolutionary processes: Biological and cultural inheritance. An influential evolutionary hypothesis known as the moving target problem implies inherent limitations on the interactions between our two inheritance streams that result from a difference in pace: The speed of cultural evolution is thought to rule out cognitive adaptation to culturally evolving aspects of language. We examine this hypothesis formally by casting it as a problem of adaptation in time-varying environments. We present a mathematical model of biology-culture co-evolution in finite populations: A generalisation of the Moran process, treating co-evolution as coupled non-independent Markov processes, providing a general formulation of the moving target hypothesis in precise probabilistic terms. Rapidly varying culture decreases the probability of biological adaptation. However, we show that this effect declines with population size and with stronger links between biology and culture: In realistically sized finite populations, stochastic effects can carry cognitive specialisations to fixation in the face of variable culture, especially if the effects of those specialisations are amplified through cultural evolution. These results support the view that language arises from interactions between our two major inheritance streams, rather than from one primary evolutionary process that dominates another.

    Additional information

    41598_2017_18928_MOESM1_ESM.pdf
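
    Code sketch

    The model described above generalises the Moran process to coupled biology-culture evolution. As background, the sketch below simulates only the standard single-trait Moran process in a finite population and estimates the fixation probability of a mutant with a small fitness advantage; the paper's coupled cultural process is not reproduced, and the population size and selection strength are arbitrary illustrative choices.

    ```python
    # Sketch of a basic Moran process in a finite population of size N: at each
    # step one individual reproduces (proportional to fitness) and one dies at
    # random. This illustrates the fixation dynamics the abstract builds on; the
    # fitness advantage s and population size N are arbitrary illustrative values.
    import random

    def moran_fixation(N=100, s=0.02, start=1, trials=2000, seed=1):
        """Estimate the fixation probability of a mutant with relative fitness 1 + s."""
        rng = random.Random(seed)
        fixed = 0
        for _ in range(trials):
            k = start                                   # current number of mutants
            while 0 < k < N:
                w_mut, w_res = k * (1 + s), (N - k)     # total fitness of each type
                birth_is_mutant = rng.random() < w_mut / (w_mut + w_res)
                death_is_mutant = rng.random() < k / N
                k += int(birth_is_mutant) - int(death_is_mutant)
            fixed += (k == N)
        return fixed / trials

    # Neutral expectation is 1/N = 0.01; a small advantage raises it noticeably.
    print("estimated fixation probability:", moran_fixation())
    ```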
  • De Boer, M., Toni, I., & Willems, R. M. (2013). What drives successful verbal communication? Frontiers in Human Neuroscience, 7: 622. doi:10.3389/fnhum.2013.00622.

    Abstract

    There is a vast amount of potential mappings between behaviors and intentions in communication: a behavior can indicate a multitude of different intentions, and the same intention can be communicated with a variety of behaviors. Humans routinely solve these many-to-many referential problems when producing utterances for an Addressee. This ability might rely on social cognitive skills, for instance, the ability to manipulate unobservable summary variables to disambiguate ambiguous behavior of other agents (“mentalizing”) and the drive to invest resources into changing and understanding the mental state of other agents (“communicative motivation”). Alternatively, the ambiguities of verbal communicative interactions might be solved by general-purpose cognitive abilities that process cues that are incidentally associated with the communicative interaction. In this study, we assess these possibilities by testing which cognitive traits account for communicative success during a verbal referential task. Cognitive traits were assessed with psychometric scores quantifying motivation, mentalizing abilities, and general-purpose cognitive abilities, taxing abstract visuo-spatial abilities. Communicative abilities of participants were assessed by using an on-line interactive task that required a speaker to verbally convey a concept to an Addressee. The communicative success of the utterances was quantified by measuring how frequently a number of Evaluators would infer the correct concept. Speakers with high motivational and general-purpose cognitive abilities generated utterances that were more easily interpreted. These findings extend to the domain of verbal communication the notion that motivational and cognitive factors influence the human ability to rapidly converge on shared communicative innovations.
  • Boersma, M., Kemner, C., de Reus, M. A., Collin, G., Snijders, T. M., Hofman, D., Buitelaar, J. K., Stam, C. J., & van den Heuvel, M. P. (2013). Disrupted functional brain networks in autistic toddlers. Brain Connectivity, 3(1), 41-49. doi:10.1089/brain.2012.0127.

    Abstract

    Communication and integration of information between brain regions plays a key role in healthy brain function. Conversely, disruption in brain communication may lead to cognitive and behavioral problems. Autism is a neurodevelopmental disorder that is characterized by impaired social interactions and aberrant basic information processing. Aberrant brain connectivity patterns have indeed been hypothesized to be a key neural underpinning of autism. In this study, graph analytical tools are used to explore the possible deviant functional brain network organization in autism at a very early stage of brain development. Electroencephalography (EEG) recordings in 12 toddlers with autism (mean age 3.5 years) and 19 control subjects were used to assess interregional functional brain connectivity, with functional brain networks constructed at the level of temporal synchronization between brain regions underlying the EEG electrodes. Children with autism showed a significantly increased normalized path length and reduced normalized clustering, suggesting a reduced global communication capacity already during early brain development. In addition, whole brain connectivity was found to be significantly reduced in these young patients suggesting an overall under-connectivity of functional brain networks in autism. Our findings support the hypothesis of abnormal neural communication in autism, with deviating effects already present at the early stages of brain development.
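
    Code sketch

    The abstract reports increased normalized path length and reduced normalized clustering in the toddlers with autism. The sketch below shows one common way such normalised small-world metrics are computed with networkx: comparing a connectivity graph against degree-preserving random surrogates. The input graph here is a synthetic small-world network standing in for a thresholded EEG connectivity matrix, and the number of surrogates is an arbitrary choice.

    ```python
    # Sketch of the normalised graph metrics mentioned in the abstract: clustering
    # coefficient and characteristic path length of a connectivity graph, each
    # normalised against degree-preserving random surrogate networks.
    # The graph is a synthetic small-world network, not EEG data.
    import networkx as nx
    import numpy as np

    G = nx.connected_watts_strogatz_graph(n=30, k=4, p=0.1, seed=0)  # stand-in for a thresholded EEG network

    def normalised_metrics(G, n_surrogates=20):
        """Clustering and path length of G, normalised by randomised surrogates."""
        C = nx.average_clustering(G)
        L = nx.average_shortest_path_length(G)
        C_rand, L_rand = [], []
        for i in range(n_surrogates):
            R = nx.random_reference(G, niter=5, seed=i)   # degree-preserving, connected rewiring
            C_rand.append(nx.average_clustering(R))
            L_rand.append(nx.average_shortest_path_length(R))
        return C / np.mean(C_rand), L / np.mean(L_rand)

    gamma, lam = normalised_metrics(G)
    print(f"normalised clustering = {gamma:.2f}, normalised path length = {lam:.2f}")
    ```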
  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2013). "Are we still talking about the same thing?" MEG reveals perspective-taking in response to pragmatic violations, but not in anticipation. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 215-220). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0066/index.html.

    Abstract

    The current study investigates whether mentalizing, or taking the perspective of your interlocutor, plays an essential role throughout a conversation or whether it is mostly used in reaction to misunderstandings. This study is the first to use a brain-imaging method, MEG, to answer this question. In a first phase of the experiment, MEG participants interacted "live" with a confederate who set naming precedents for certain pictures. In a later phase, these precedents were sometimes broken by a speaker who named the same picture in a different way. This could be done by the same speaker, who set the precedent, or by a different speaker. Source analysis of MEG data showed that in the 800 ms before the naming, when the picture was already on the screen, episodic memory and language areas were activated, but no mentalizing areas, suggesting that the speaker's naming intentions were not anticipated by the listener on the basis of shared experiences. Mentalizing areas only became activated after the same speaker had broken a precedent, which we interpret as a reaction to the violation of conversational pragmatics.
  • Bögels, S., Schriefers, H., Vonk, W., Chwilla, D., & Kerkhofs, R. (2013). Processing consequences of superfluous and missing prosodic breaks in auditory sentence comprehension. Neuropsychologia, 51, 2715-2728. doi:10.1016/j.neuropsychologia.2013.09.008.

    Abstract

    This ERP study investigates whether a superfluous prosodic break (i.e., a prosodic break that does not coincide with a syntactic break) has more severe processing consequences during auditory sentence comprehension than a missing prosodic break (i.e., the absence of a prosodic break at the position of a syntactic break). Participants listened to temporarily ambiguous sentences involving a prosody-syntax match or mismatch. The disambiguation of these sentences was always lexical in nature in the present experiment. This contrasts with a related study by Pauker, Itzhak, Baum, and Steinhauer (2011), where the disambiguation was of a lexical type for missing PBs and of a prosodic type for superfluous PBs. Our results converge with those of Pauker et al.: superfluous prosodic breaks lead to more severe processing problems than missing prosodic breaks. Importantly, the present results extend those of Pauker et al. showing that this holds when the disambiguation is always lexical in nature. Furthermore, our results show that the way listeners use prosody can change over the course of the experiment which bears consequences for future studies.
  • Bögels, S., & Levinson, S. C. (2017). The brain behind the response: Insights into turn-taking in conversation from neuroimaging. Research on Language and Social Interaction, 50, 71-89. doi:10.1080/08351813.2017.1262118.

    Abstract

    This paper reviews the prospects for the cross-fertilization of conversation-analytic (CA) and neurocognitive studies of conversation, focusing on turn-taking. Although conversation is the primary ecological niche for language use, relatively little brain research has focused on interactive language use, partly due to the challenges of using brain-imaging methods that are controlled enough to perform sound experiments, but still reflect the rich and spontaneous nature of conversation. Recently, though, brain researchers have started to investigate conversational phenomena, for example by using 'overhearer' or controlled interaction paradigms. We review neuroimaging studies related to turn-taking and sequence organization, phenomena historically described by CA. These studies for example show early action recognition and immediate planning of responses midway during an incoming turn. The review discusses studies with an eye to a fruitful interchange between CA and neuroimaging research on conversation and an indication of how these disciplines can benefit from each other.
  • Bögels, S., Casillas, M., & Levinson, S. C. (2018). Planning versus comprehension in turn-taking: Fast responders show reduced anticipatory processing of the question. Neuropsychologia, 109, 295-310. doi:10.1016/j.neuropsychologia.2017.12.028.

    Abstract

    Rapid response latencies in conversation suggest that responders start planning before the ongoing turn is finished. Indeed, an earlier EEG study suggests that listeners start planning their responses to questions as soon as they can (Bögels, S., Magyari, L., & Levinson, S. C. (2015). Neural signatures of response planning occur midway through an incoming question in conversation. Scientific Reports, 5, 12881). The present study aimed to (1) replicate this early planning effect and (2) investigate whether such early response planning incurs a cost on participants’ concurrent comprehension of the ongoing turn. During the experiment participants answered questions from a confederate partner. To address aim (1), the questions were designed such that response planning could start either early or late in the turn. Our results largely replicate Bögels et al. (2015) showing a large positive ERP effect and an oscillatory alpha/beta reduction right after participants could have first started planning their verbal response, again suggesting an early start of response planning. To address aim (2), the confederate's questions also contained either an expected word or an unexpected one to elicit a differential N400 effect, either before or after the start of response planning. We hypothesized an attenuated N400 effect after response planning had started. In contrast, the N400 effects before and after planning did not differ. There was, however, a positive correlation between participants' response time and their N400 effect size after planning had started; quick responders showed a smaller N400 effect, suggesting reduced attention to comprehension and possibly reduced anticipatory processing. We conclude that early response planning can indeed impact comprehension processing.

    Additional information

    mmc1.pdf
  • Bohnemeyer, J. (2003). The unique vector constraint: The impact of direction changes on the linguistic segmentation of motion events. In E. v. d. Zee, & J. Slack (Eds.), Axes and vectors in language and space (pp. 86-110). Oxford: Oxford University Press.
  • Bohnemeyer, J. (2004). Argument and event structure in Yukatek verb classes. In J.-Y. Kim, & A. Werle (Eds.), Proceedings of The Semantics of Under-Represented Languages in the Americas. Amherst, Mass: GLSA.

    Abstract

    In Yukatek Maya, event types are lexicalized in verb roots and stems that fall into a number of different form classes on the basis of (a) patterns of aspect-mood marking and (b) privileges of undergoing valence-changing operations. Of particular interest are the intransitive classes in the light of Perlmutter’s (1978) Unaccusativity hypothesis. In the spirit of Levin & Rappaport Hovav (1995) [L&RH], Van Valin (1990), Zaenen (1993), and others, this paper investigates whether (and to what extent) the association between formal predicate classes and event types is determined by argument structure features such as ‘agentivity’ and ‘control’ or features of lexical aspect such as ‘telicity’ and ‘durativity’. It is shown that mismatches between agentivity/control and telicity/durativity are even more extensive in Yukatek than they are in English (Abusch 1985; L&RH, Van Valin & LaPolla 1997), providing new evidence against Dowty’s (1979) reconstruction of Vendler’s (1967) ‘time schemata of verbs’ in terms of argument structure configurations. Moreover, contrary to what has been claimed in earlier studies of Yukatek (Krämer & Wunderlich 1999, Lucy 1994), neither agentivity/control nor telicity/durativity turn out to be good predictors of verb class membership. Instead, the patterns of aspect-mood marking prove to be sensitive only to the presence or absence of state change, in a way that supports the unified analysis of all verbs of gradual change proposed by Kennedy & Levin (2001). The presence or absence of ‘internal causation’ (L&RH) may motivate the semantic interpretation of transitivization operations. An explicit semantics for the valence-changing operations is proposed, based on Parsons’s (1990) Neo-Davidsonian approach.
  • Bohnemeyer, J. (2003). Invisible time lines in the fabric of events: Temporal coherence in Yukatek narratives. Journal of Linguistic Anthropology, 13(2), 139-162. doi:10.1525/jlin.2003.13.2.139.

    Abstract

    This article examines how narratives are structured in a language in which event order is largely not coded. Yucatec Maya lacks both tense inflections and temporal connectives corresponding to English after and before. It is shown that the coding of events in Yucatec narratives is subject to a strict iconicity constraint within paragraph boundaries. Aspectual viewpoint shifting is used to reconcile iconicity preservation with the requirements of a more flexible narrative structure.
  • Bohnemeyer, J. (2003). Fictive motion questionnaire. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 81-85). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877601.

    Abstract

    Fictive Motion is the metaphoric use of path relators in the expression of spatial relations or configurations that are static, or at any rate do not in any obvious way involve physical entities moving in real space. The goal is to study the expression of such relations or configurations in the target language, with an eye particularly on whether these expressions exclusively/preferably/possibly involve motion verbs and/or path relators, i.e., Fictive Motion. Section 2 gives Talmy’s (2000: ch. 2) phenomenology of Fictive Motion construals. The researcher’s task is to “distill” the intended spatial relations/configurations from Talmy’s description of the particular Fictive Motion metaphors and elicit as many different examples of the relations/configurations as (s)he deems necessary to obtain a basic sense of whether and how much Fictive Motion the target language offers or prescribes for the encoding of the particular type of relation/configuration. As a first stab, the researcher may try to elicit natural translations of culturally appropriate adaptations of the examples Talmy provides with each type of Fictive Motion metaphor.
  • Bohnemeyer, J., Burenhult, N., Enfield, N. J., & Levinson, S. C. (2004). Landscape terms and place names elicitation guide. In A. Majid (Ed.), Field Manual Volume 9 (pp. 75-79). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492904.

    Abstract

    Landscape terms reflect the relationship between geographic reality and human cognition. Are ‘mountains’, ‘rivers’, ‘lakes’ and the like universally recognised in languages as naturally salient objects to be named? The landscape subproject is concerned with the interrelation between language, cognition and geography. Specifically, it investigates issues relating to how landforms are categorised cross-linguistically as well as the characteristics of place naming.
  • Bohnemeyer, J., Burenhult, N., Levinson, S. C., & Enfield, N. J. (2003). Landscape terms and place names questionnaire. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 60-63). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877604.

    Abstract

    Landscape terms reflect the relationship between geographic reality and human cognition. Are ‘mountains’, ‘rivers’, ‘lakes’ and the like universally recognised in languages as naturally salient objects to be named? The landscape subproject is concerned with the interrelation between language, cognition and geography. Specifically, it investigates issues relating to how landforms are categorised cross-linguistically as well as the characteristics of place naming.
  • Bone, D., Ramanarayanan, V., Narayanan, S., Hoedemaker, R. S., & Gordon, P. C. (2013). Analyzing eye-voice coordination in rapid automatized naming. In F. Bimbot, C. Cerisara, C. Fougeron, G. Gravier, L. Lamel, F. Pellegrino, & P. Perrier (Eds.), INTERSPEECH-2013: 14th Annual Conference of the International Speech Communication Association (pp. 2425-2429). ISCA Archive. Retrieved from http://www.isca-speech.org/archive/interspeech_2013/i13_2425.html.

    Abstract

    Rapid Automatized Naming (RAN) is a powerful tool for predicting future reading skill. A person’s ability to quickly name symbols as they scan a table is related to higher-level reading proficiency in adults and is predictive of future literacy gains in children. However, noticeable differences are present in the strategies or patterns within groups having similar task completion times. Thus, a further stratification of RAN dynamics may lead to better characterization and later intervention to support reading skill acquisition. In this work, we analyze the dynamics of the eyes, voice, and the coordination between the two during performance. It is shown that fast performers are more similar to each other than to slow performers in their patterns, but not vice versa. Further insights are provided about the patterns of more proficient subjects. For instance, fast performers tended to exhibit smoother behavior contours, suggesting a more stable perception-production process.
  • Bønnelykke, K., Matheson, M. C., Pers, T. H., Granell, R., Strachan, D. P., Alves, A. C., Linneberg, A., Curtin, J. A., Warrington, N. M., Standl, M., Kerkhof, M., Jonsdottir, I., Bukvic, B. K., Kaakinen, M., Sleimann, P., Thorleifsson, G., Thorsteinsdottir, U., Schramm, K., Baltic, S., Kreiner-Møller, E., Simpson, A., St Pourcain, B., Coin, L., Hui, J., Walters, E. H., Tiesler, C. M. T., Duffy, D. L., Jones, G., Ring, S. M., McArdle, W. L., Price, L., Robertson, C. F., Pekkanen, J., Tang, C. S., Thiering, E., Montgomery, G. W., Hartikainen, A.-L., Dharmage, S. C., Husemoen, L. L., Herder, C., Kemp, J. P., Elliot, P., James, A., Waldenberger, M., Abramson, M. J., Fairfax, B. P., Knight, J. C., Gupta, R., Thompson, P. J., Holt, P., Sly, P., Hirschhorn, J. N., Blekic, M., Weidinger, S., Hakonarsson, H., Stefansson, K., Heinrich, J., Postma, D. S., Custovic, A., Pennell, C. E., Jarvelin, M.-R., Koppelman, G. H., Timpson, N., Ferreira, M. A., Bisgaard, H., Henderson, A. J., Australian Asthma Genetics Consortium (AAGC), & EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium (2013). Meta-analysis of genome-wide association studies identifies ten loci influencing allergic sensitization. Nature Genetics, 45(8), 902-906. doi:10.1038/ng.2694.

    Abstract

    Allergen-specific immunoglobulin E (present in allergic sensitization) has a central role in the pathogenesis of allergic disease. We performed the first large-scale genome-wide association study (GWAS) of allergic sensitization in 5,789 affected individuals and 10,056 controls and followed up the top SNP at each of 26 loci in 6,114 affected individuals and 9,920 controls. We increased the number of susceptibility loci with genome-wide significant association with allergic sensitization from three to ten, including SNPs in or near TLR6, C11orf30, STAT6, SLC25A46, HLA-DQB1, IL1RL1, LPP, MYC, IL2 and HLA-B. All the top SNPs were associated with allergic symptoms in an independent study. Risk-associated variants at these ten loci were estimated to account for at least 25% of allergic sensitization and allergic rhinitis. Understanding the molecular mechanisms underlying these associations may provide new insights into the etiology of allergic disease.
  • Borgwaldt, S. R., Hellwig, F. M., & De Groot, A. M. B. (2004). Word-initial entropy in five languages: Letter to sound, and sound to letter. Written Language & Literacy, 7(2), 165-184.

    Abstract

    Alphabetic orthographies show more or less ambiguous relations between spelling and sound patterns. In transparent orthographies, like Italian, the pronunciation can be predicted from the spelling and vice versa. Opaque orthographies, like English, often display unpredictable spelling–sound correspondences. In this paper we present a computational analysis of word-initial bi-directional spelling–sound correspondences for Dutch, English, French, German, and Hungarian, stated in entropy values for various grain sizes. This allows us to position the five languages on the continuum from opaque to transparent orthographies, both in spelling-to-sound and sound-to-spelling directions. The analysis is based on metrics derived from information theory, and therefore independent of any specific theory of visual word recognition as well as of any specific theoretical approach of orthography.
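
    Code sketch

    The entropy measure described above quantifies how ambiguously word-initial letters map onto word-initial sounds. The sketch below computes the letter-to-sound direction at the single-segment grain size for a tiny invented English-like lexicon; the study itself used large lexical databases for five languages and several grain sizes.

    ```python
    # Sketch of the entropy measure described in the abstract: for each word-initial
    # letter, the Shannon entropy (in bits) of the distribution of word-initial
    # phonemes it maps onto. The tiny lexicon below is invented for illustration.
    from collections import Counter, defaultdict
    from math import log2

    lexicon = [  # (spelling, broad phonemic transcription)
        ("cat", "kat"), ("city", "sIti"), ("cell", "sEl"), ("cold", "kold"),
        ("ship", "SIp"), ("sun", "sVn"), ("sure", "SUr"), ("sing", "sIN"),
    ]

    def initial_letter_to_sound_entropy(pairs):
        """Entropy of initial-phoneme distributions, per initial letter."""
        mappings = defaultdict(Counter)
        for spelling, phones in pairs:
            mappings[spelling[0]][phones[0]] += 1
        entropies = {}
        for letter, counts in mappings.items():
            total = sum(counts.values())
            entropies[letter] = -sum((n / total) * log2(n / total) for n in counts.values())
        return entropies

    for letter, h in sorted(initial_letter_to_sound_entropy(lexicon).items()):
        print(f"initial '{letter}': H = {h:.2f} bits")   # 0 bits = fully predictable mapping
    ```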
  • Bornkessel-Schlesewsky, I., Alday, P. M., & Schlesewsky, M. (2016). A modality-independent, neurobiological grounding for the combinatory capacity of the language-ready brain: Comment on “Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain” by Michael A. Arbib. Physics of Life Reviews, 16, 55-57. doi:10.1016/j.plrev.2016.01.003.
  • Bosker, H. R. (2017). Accounting for rate-dependent category boundary shifts in speech perception. Attention, Perception & Psychophysics, 79, 333-343. doi:10.3758/s13414-016-1206-4.

    Abstract

    The perception of temporal contrasts in speech is known to be influenced by the speech rate in the surrounding context. This rate-dependent perception is suggested to involve general auditory processes since it is also elicited by non-speech contexts, such as pure tone sequences. Two general auditory mechanisms have been proposed to underlie rate-dependent perception: durational contrast and neural entrainment. The present study compares the predictions of these two accounts of rate-dependent speech perception by means of four experiments in which participants heard tone sequences followed by Dutch target words ambiguous between /ɑs/ “ash” and /a:s/ “bait”. Tone sequences varied in the duration of tones (short vs. long) and in the presentation rate of the tones (fast vs. slow). Results show that the duration of preceding tones did not influence target perception in any of the experiments, thus challenging durational contrast as explanatory mechanism behind rate-dependent perception. Instead, the presentation rate consistently elicited a category boundary shift, with faster presentation rates inducing more /a:s/ responses, but only if the tone sequence was isochronous. Therefore, this study proposes an alternative, neurobiologically plausible, account of rate-dependent perception involving neural entrainment of endogenous oscillations to the rate of a rhythmic stimulus.
  • Bosker, H. R., & Ghitza, O. (2018). Entrained theta oscillations guide perception of subsequent speech: Behavioral evidence from rate normalization. Language, Cognition and Neuroscience, 33(8), 955-967. doi:10.1080/23273798.2018.1439179.

    Abstract

    This psychoacoustic study provides behavioral evidence that neural entrainment in the theta range (3-9 Hz) causally shapes speech perception. Adopting the ‘rate normalization’ paradigm (presenting compressed carrier sentences followed by uncompressed target words), we show that uniform compression of a speech carrier to syllable rates inside the theta range influences perception of subsequent uncompressed targets, but compression outside theta range does not. However, the influence of carriers – compressed outside theta range – on target perception is salvaged when carriers are ‘repackaged’ to have a packet rate inside theta. This suggests that the brain can only successfully entrain to syllable/packet rates within theta range, with a causal influence on the perception of subsequent speech, in line with recent neuroimaging data. Thus, this study points to a central role for sustained theta entrainment in rate normalization and contributes to our understanding of the functional role of brain oscillations in speech perception.
  • Bosker, H. R., Reinisch, E., & Sjerps, M. J. (2017). Cognitive load makes speech sound fast, but does not modulate acoustic context effects. Journal of Memory and Language, 94, 166-176. doi:10.1016/j.jml.2016.12.002.

    Abstract

    In natural situations, speech perception often takes place during the concurrent execution of other cognitive tasks, such as listening while viewing a visual scene. The execution of a dual task typically has detrimental effects on concurrent speech perception, but how exactly cognitive load disrupts speech encoding is still unclear. The detrimental effect on speech representations may consist of either a general reduction in the robustness of processing of the speech signal (‘noisy encoding’), or, alternatively it may specifically influence the temporal sampling of the sensory input, with listeners missing temporal pulses, thus underestimating segmental durations (‘shrinking of time’). The present study investigated whether and how spectral and temporal cues in a precursor sentence that has been processed under high vs. low cognitive load influence the perception of a subsequent target word. If cognitive load effects are implemented through ‘noisy encoding’, increasing cognitive load during the precursor should attenuate the encoding of both its temporal and spectral cues, and hence reduce the contextual effect that these cues can have on subsequent target sound perception. However, if cognitive load effects are expressed as ‘shrinking of time’, context effects should not be modulated by load, but a main effect would be expected on the perceived duration of the speech signal. Results from two experiments indicate that increasing cognitive load (manipulated through a secondary visual search task) did not modulate temporal (Experiment 1) or spectral context effects (Experiment 2). However, a consistent main effect of cognitive load was found: increasing cognitive load during the precursor induced a perceptual increase in its perceived speech rate, biasing the perception of a following target word towards longer durations. This finding suggests that cognitive load effects in speech perception are implemented via ‘shrinking of time’, in line with a temporal sampling framework. In addition, we argue that our results align with a model in which early (spectral and temporal) normalization is unaffected by attention but later adjustments may be attention-dependent.
  • Bosker, H. R., & Kösem, A. (2017). An entrained rhythm's frequency, not phase, influences temporal sampling of speech. In Proceedings of Interspeech 2017 (pp. 2416-2420). doi:10.21437/Interspeech.2017-73.

    Abstract

    Brain oscillations have been shown to track the slow amplitude fluctuations in speech during comprehension. Moreover, there is evidence that these stimulus-induced cortical rhythms may persist even after the driving stimulus has ceased. However, how exactly this neural entrainment shapes speech perception remains debated. This behavioral study investigated whether and how the frequency and phase of an entrained rhythm would influence the temporal sampling of subsequent speech. In two behavioral experiments, participants were presented with slow and fast isochronous tone sequences, followed by Dutch target words ambiguous between as /ɑs/ “ash” (with a short vowel) and aas /a:s/ “bait” (with a long vowel). Target words were presented at various phases of the entrained rhythm. Both experiments revealed effects of the frequency of the tone sequence on target word perception: fast sequences biased listeners to more long /a:s/ responses. However, no evidence for phase effects could be discerned. These findings show that an entrained rhythm’s frequency, but not phase, influences the temporal sampling of subsequent speech. These outcomes are compatible with theories suggesting that sensory timing is evaluated relative to entrained frequency. Furthermore, they suggest that phase tracking of (syllabic) rhythms by theta oscillations plays a limited role in speech parsing.
  • Bosker, H. R., & Reinisch, E. (2017). Foreign languages sound fast: evidence from implicit rate normalization. Frontiers in Psychology, 8: 1063. doi:10.3389/fpsyg.2017.01063.

    Abstract

    Anecdotal evidence suggests that unfamiliar languages sound faster than one’s native language. Empirical evidence for this impression has, so far, come from explicit rate judgments. The aim of the present study was to test whether such perceived rate differences between native and foreign languages have effects on implicit speech processing. Our measure of implicit rate perception was “normalization for speaking rate”: an ambiguous vowel between short /a/ and long /a:/ is interpreted as /a:/ following a fast but as /a/ following a slow carrier sentence. That is, listeners did not judge speech rate itself; instead, they categorized ambiguous vowels whose perception was implicitly affected by the rate of the context. We asked whether a bias towards long /a:/ might be observed when the context is not actually faster but simply spoken in a foreign language. A fully symmetrical experimental design was used: Dutch and German participants listened to rate matched (fast and slow) sentences in both languages spoken by the same bilingual speaker. Sentences were followed by nonwords that contained vowels from an /a-a:/ duration continuum. Results from Experiments 1 and 2 showed a consistent effect of rate normalization for both listener groups. Moreover, for German listeners, across the two experiments, foreign sentences triggered more /a:/ responses than (rate matched) native sentences, suggesting that foreign sentences were indeed perceived as faster. Moreover, this Foreign Language effect was modulated by participants’ ability to understand the foreign language: those participants that scored higher on a foreign language translation task showed less of a Foreign Language effect. However, opposite effects were found for the Dutch listeners. For them, their native rather than the foreign language induced more /a:/ responses. Nevertheless, this reversed effect could be reduced when additional spectral properties of the context were controlled for. Experiment 3, using explicit rate judgments, replicated the effect for German but not Dutch listeners. We therefore conclude that the subjective impression that foreign languages sound fast may have an effect on implicit speech processing, with implications for how language learners perceive spoken segments in a foreign language.

    Additional information

    data sheet 1.docx
  • Bosker, H. R. (2017). How our own speech rate influences our perception of others. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(8), 1225-1238. doi:10.1037/xlm0000381.

    Abstract

    In conversation, our own speech and that of others follow each other in rapid succession. Effects of the surrounding context on speech perception are well documented but, despite the ubiquity of the sound of our own voice, it is unknown whether our own speech also influences our perception of other talkers. This study investigated context effects induced by our own speech through six experiments, specifically targeting rate normalization (i.e., perceiving phonetic segments relative to surrounding speech rate). Experiment 1 revealed that hearing pre-recorded fast or slow context sentences altered the perception of ambiguous vowels, replicating earlier work. Experiment 2 demonstrated that talking at a fast or slow rate prior to target presentation also altered target perception, though the effect of preceding speech rate was reduced. Experiment 3 showed that silent talking (i.e., inner speech) at fast or slow rates did not modulate the perception of others, suggesting that the effect of self-produced speech rate in Experiment 2 arose through monitoring of the external speech signal. Experiment 4 demonstrated that, when participants were played back their own (fast/slow) speech, no reduction of the effect of preceding speech rate was observed, suggesting that the additional task of speech production may be responsible for the reduced effect in Experiment 2. Finally, Experiments 5 and 6 replicate Experiments 2 and 3 with new participant samples. Taken together, these results suggest that variation in speech production may induce variation in speech perception, thus carrying implications for our understanding of spoken communication in dialogue settings.
  • Bosker, H. R., Reinisch, E., & Sjerps, M. J. (2016). Listening under cognitive load makes speech sound fast. In H. van den Heuvel, B. Cranen, & S. Mattys (Eds.), Proceedings of the Speech Processing in Realistic Environments [SPIRE] Workshop (pp. 23-24). Groningen.
  • Bosker, H. R. (2013). Juncture (prosodic). In G. Khan (Ed.), Encyclopedia of Hebrew Language and Linguistics (pp. 432-434). Leiden: Brill.

    Abstract

    Prosodic juncture concerns the compartmentalization and partitioning of syntactic entities in spoken discourse by means of prosody. It has been argued that the Intonation Unit, defined by internal criteria and prosodic boundary phenomena (e.g., final lengthening, pitch reset, pauses), encapsulates the basic structural unit of spoken Modern Hebrew.
  • Bosker, H. R. (2016). Our own speech rate influences speech perception. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 227-231).

    Abstract

    During conversation, spoken utterances occur in rich acoustic contexts, including speech produced by our interlocutor(s) and speech we produced ourselves. Prosodic characteristics of the acoustic context have been known to influence speech perception in a contrastive fashion: for instance, a vowel presented in a fast context is perceived to have a longer duration than the same vowel in a slow context. Given the ubiquity of the sound of our own voice, it may be that our own speech rate - a common source of acoustic context - also influences our perception of the speech of others. Two experiments were designed to test this hypothesis. Experiment 1 replicated earlier contextual rate effects by showing that hearing pre-recorded fast or slow context sentences alters the perception of ambiguous Dutch target words. Experiment 2 then extended this finding by showing that talking at a fast or slow rate prior to the presentation of the target words also altered the perception of those words. These results suggest that between-talker variation in speech rate production may induce between-talker variation in speech perception, thus potentially explaining why interlocutors tend to converge on speech rate in dialogue settings.

    Additional information

    pdf via conference website
  • Bosker, H. R. (2018). Putting Laurel and Yanny in context. The Journal of the Acoustical Society of America, 144(6), EL503-EL508. doi:10.1121/1.5070144.

    Abstract

    Recently, the world’s attention was caught by an audio clip that was perceived as “Laurel” or “Yanny”. Opinions were sharply split: many could not believe others heard something different from their perception. However, a crowd-source experiment with >500 participants shows that it is possible to make people hear Laurel, where they previously heard Yanny, by manipulating preceding acoustic context. This study is not only the first to reveal within-listener variation in Laurel/Yanny percepts, but also to demonstrate contrast effects for global spectral information in larger frequency regions. Thus, it highlights the intricacies of human perception underlying these social media phenomena.
  • Bosker, H. R., & Cooke, M. (2018). Talkers produce more pronounced amplitude modulations when speaking in noise. The Journal of the Acoustical Society of America, 143(2), EL121-EL126. doi:10.1121/1.5024404.

    Abstract

    Speakers adjust their voice when talking in noise (known as Lombard speech), facilitating speech comprehension. Recent neurobiological models of speech perception emphasize the role of amplitude modulations in speech-in-noise comprehension, helping neural oscillators to ‘track’ the attended speech. This study tested whether talkers produce more pronounced amplitude modulations in noise. Across four different corpora, modulation spectra showed greater power in amplitude modulations below 4 Hz in Lombard speech compared to matching plain speech. This suggests that noise-induced speech contains more pronounced amplitude modulations, potentially helping the listening brain to entrain to the attended talker, aiding comprehension.
  • Bosker, H. R. (2013). Sibilant consonants. In G. Khan (Ed.), Encyclopedia of Hebrew Language and Linguistics (pp. 557-561). Leiden: Brill.

    Abstract

    Fricative consonants in Hebrew can be divided into bgdkpt and sibilants (ז, ס, צ, שׁ, שׂ). Hebrew sibilants have been argued to stem from Proto-Semitic affricates, laterals, interdentals and /s/. In standard Israeli Hebrew the sibilants are pronounced as [s] (ס and שׂ), [ʃ] (שׁ), [z] (ז), [ʦ] (צ).
  • Bosker, H. R. (2017). The role of temporal amplitude modulations in the political arena: Hillary Clinton vs. Donald Trump. In Proceedings of Interspeech 2017 (pp. 2228-2232). doi:10.21437/Interspeech.2017-142.

    Abstract

    Speech is an acoustic signal with inherent amplitude modulations in the 1-9 Hz range. Recent models of speech perception propose that this rhythmic nature of speech is central to speech recognition. Moreover, rhythmic amplitude modulations have been shown to have beneficial effects on language processing and the subjective impression listeners have of the speaker. This study investigated the role of amplitude modulations in the political arena by comparing the speech produced by Hillary Clinton and Donald Trump in the three presidential debates of 2016. Inspection of the modulation spectra, revealing the spectral content of the two speakers’ amplitude envelopes after matching for overall intensity, showed considerably greater power in Clinton’s modulation spectra (compared to Trump’s) across the three debates, particularly in the 1-9 Hz range. The findings suggest that Clinton’s speech had a more pronounced temporal envelope with rhythmic amplitude modulations below 9 Hz, with a preference for modulations around 3 Hz. This may be taken as evidence for a more structured temporal organization of syllables in Clinton’s speech, potentially due to more frequent use of preplanned utterances. Outcomes are interpreted in light of the potential beneficial effects of a rhythmic temporal envelope on intelligibility and speaker perception.
  • Bosker, H. R., Pinget, A.-F., Quené, H., Sanders, T., & De Jong, N. H. (2013). What makes speech sound fluent? The contributions of pauses, speed and repairs. Language Testing, 30(2), 159-175. doi:10.1177/0265532212455394.

    Abstract

    The oral fluency level of an L2 speaker is often used as a measure in assessing language proficiency. The present study reports on four experiments investigating the contributions of three fluency aspects (pauses, speed and repairs) to perceived fluency. In Experiment 1 untrained raters evaluated the oral fluency of L2 Dutch speakers. Using specific acoustic measures of pause, speed and repair phenomena, linear regression analyses revealed that pause and speed measures best predicted the subjective fluency ratings, and that repair measures contributed only very little. A second research question sought to account for these results by investigating perceptual sensitivity to acoustic pause, speed and repair phenomena, possibly accounting for the results from Experiment 1. In Experiments 2–4 three new groups of untrained raters rated the same L2 speech materials from Experiment 1 on the use of pauses, speed and repairs. A comparison of the results from perceptual sensitivity (Experiments 2–4) with fluency perception (Experiment 1) showed that perceptual sensitivity alone could not account for the contributions of the three aspects to perceived fluency. We conclude that listeners weigh the importance of the perceived aspects of fluency to come to an overall judgment.
  • Bosking, W. H., Sun, P., Ozker, M., Pei, X., Foster, B. L., Beauchamp, M. S., & Yoshor, D. (2017). Saturation in phosphene size with increasing current levels delivered to human visual cortex. The Journal of Neuroscience, 37(30), 7188-7197. doi:10.1523/JNEUROSCI.2896-16.2017.

    Abstract

    Electrically stimulating early visual cortex results in a visual percept known as a phosphene. Although phosphenes can be evoked by a wide range of electrode sizes and current amplitudes, they are invariably described as small. To better understand this observation, we electrically stimulated 93 electrodes implanted in the visual cortex of 13 human subjects who reported phosphene size while stimulation current was varied. Phosphene size increased as the stimulation current was initially raised above threshold, but then rapidly reached saturation. Phosphene size also depended on the location of the stimulated site, with size increasing with distance from the foveal representation. We developed a model relating phosphene size to the amount of activated cortex and its location within the retinotopic map. First, a sigmoidal curve was used to predict the amount of activated cortex at a given current. Second, the amount of active cortex was converted to degrees of visual angle by multiplying by the inverse cortical magnification factor for that retinotopic location. This simple model accurately predicted phosphene size for a broad range of stimulation currents and cortical locations. The unexpected saturation in phosphene sizes suggests that the functional architecture of cerebral cortex may impose fundamental restrictions on the spread of artificially evoked activity and this may be an important consideration in the design of cortical prosthetic devices.
  • Bosman, A., Moisik, S. R., Dediu, D., & Waters-Rist, A. (2017). Talking heads: Morphological variation in the human mandible over the last 500 years in the Netherlands. HOMO - Journal of Comparative Human Biology, 68(5), 329-342. doi:10.1016/j.jchb.2017.08.002.

    Abstract

    The primary aim of this paper is to assess patterns of morphological variation in the mandible to investigate changes during the last 500 years in the Netherlands. Three-dimensional geometric morphometrics is used on data collected from adults from three populations living in the Netherlands during three time-periods. Two of these samples come from Dutch archaeological sites (Alkmaar, 1484-1574, n = 37; and Middenbeemster, 1829-1866, n = 51) and were digitized using a 3D laser scanner. The third is a modern sample obtained from MRI scans of 34 modern Dutch individuals. Differences between mandibles are dominated by size. Significant differences in size are found among samples, with, on average, males from Alkmaar having the largest mandibles and females from Middenbeemster having the smallest. The results are possibly linked to a softening of the diet, due to a combination of differences in food types and food processing that occurred between these time-periods. Differences in shape are most noticeable between males from Alkmaar and Middenbeemster. Shape differences between males and females are concentrated in the symphysis and ramus, which is mostly the consequence of sexual dimorphism. The relevance of this research is a better understanding of the anatomical variation of the mandible that can occur over an evolutionarily short time, as well as supporting research that has shown plasticity of the mandibular form related to diet and food processing. This plasticity of form must be taken into account in phylogenetic research and when the mandible is used in sex estimation of skeletons.
  • Bouhali, F., Mongelli, V., & Cohen, L. (2017). Musical literacy shifts asymmetries in the ventral visual cortex. NeuroImage, 156, 445-455. doi:10.1016/j.neuroimage.2017.04.027.

    Abstract

    The acquisition of literacy has a profound impact on the functional specialization and lateralization of the visual cortex. Due to the overall lateralization of the language network, specialization for printed words develops in the left occipitotemporal cortex, allegedly inducing a secondary shift of visual face processing to the right, in literate as compared to illiterate subjects. Applying the same logic to the acquisition of high-level musical literacy, we predicted that, in musicians as compared to non-musicians, occipitotemporal activations should show a leftward shift for music reading, and an additional rightward push for face perception. To test these predictions, professional musicians and non-musicians viewed pictures of musical notation, faces, words, tools and houses in the MRI, and laterality was assessed in the ventral stream combining ROI and voxel-based approaches. The results supported both predictions, and allowed us to locate the leftward shift to the inferior temporal gyrus and the rightward shift to the fusiform cortex. Moreover, these laterality shifts generalized to categories other than music and faces. Finally, correlation measures across subjects did not support a causal link between the leftward and rightward shifts. Thus the acquisition of an additional perceptual expertise extensively modifies the laterality pattern in the visual system.

    Additional information

    1-s2.0-S1053811917303208-mmc1.docx

  • Bouman, M. A., & Levelt, W. J. M. (1994). Werner E. Reichardt: Levensbericht. In H. W. Pleket (Ed.), Levensberichten en herdenkingen 1993 (pp. 75-80). Amsterdam: Koninklijke Nederlandse Akademie van Wetenschappen.
  • Bowerman, M. (2003). Rola predyspozycji kognitywnych w przyswajaniu systemu semantycznego [Reprint]. In E. Dabrowska, & W. Kubiński (Eds.), Akwizycja języka w świetle językoznawstwa kognitywnego [Language acquisition from a cognitive linguistic perspective]. Kraków: Uniwersitas.

    Abstract

    Reprinted from: Bowerman, M. (1989). Learning a semantic system: What role do cognitive predispositions play? In M. L. Rice & R. L. Schiefelbusch (Eds.), The teachability of language (pp. 133-169). Baltimore: Paul H. Brookes.
  • Bowerman, M., & Choi, S. (2003). Space under construction: Language-specific spatial categorization in first language acquisition. In D. Gentner, & S. Goldin-Meadow (Eds.), Language in mind: Advances in the study of language and thought (pp. 387-427). Cambridge: MIT Press.
  • Bowerman, M. (1994). From universal to language-specific in early grammatical development. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 346, 34-45. doi:10.1098/rstb.1994.0126.

    Abstract

    Attempts to explain children's grammatical development often assume a close initial match between units of meaning and units of form; for example, agents are said to map to sentence-subjects and actions to verbs. The meanings themselves, according to this view, are not influenced by language, but reflect children's universal non-linguistic way of understanding the world. This paper argues that, contrary to this position, meaning as it is expressed in children's early sentences is, from the beginning, organized on the basis of experience with the grammar and lexicon of a particular language. As a case in point, children learning English and Korean are shown to express meanings having to do with directed motion according to language-specific principles of semantic and grammatical structuring from the earliest stages of word combination.
  • Bowerman, M. (2004). From universal to language-specific in early grammatical development [Reprint]. In K. Trott, S. Dobbinson, & P. Griffiths (Eds.), The child language reader (pp. 131-146). London: Routledge.

    Abstract

    Attempts to explain children's grammatical development often assume a close initial match between units of meaning and units of form; for example, agents are said to map to sentence-subjects and actions to verbs. The meanings themselves, according to this view, are not influenced by language, but reflect children's universal non-linguistic way of understanding the world. This paper argues that, contrary to this position, meaning as it is expressed in children's early sentences is, from the beginning, organized on the basis of experience with the grammar and lexicon of a particular language. As a case in point, children learning English and Korean are shown to express meanings having to do with directed motion according to language-specific principles of semantic and grammatical structuring from the earliest stages of word combination.
  • Bowerman, M., & Meyer, A. (1991). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.12 1991. Nijmegen: MPI for Psycholinguistics.
  • Bowerman, M., & Majid, A. (2003). Kids’ cut & break. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 70-71). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877607.

    Abstract

    Kids’ Cut & Break is a task inspired by the original Cut & Break task (see MPI L&C Group Field Manual 2001), but designed for use with children as well as adults. There are fewer videoclips to be described (34 as opposed to 61), and they are “friendlier” and more interesting: the actors wear colorful clothes, smile, and act cheerfully. The first 2 items are warm-ups and 4 more items are fillers (interspersed with test items), so only 28 of the items are actually “test items”. In the original Cut & Break, each clip is in a separate file. In Kids’ Cut & Break, all 34 clips are edited into a single file, which plays the clips successively with 5 seconds of black screen between each clip.

    Additional information

    2003_1_Kids_cut_and_break_films.zip
  • Bowerman, M. (1994). Learning a semantic system: What role do cognitive predispositions play? [Reprint]. In P. Bloom (Ed.), Language acquisition: Core readings (pp. 329-363). Cambridge, MA: MIT Press.

    Abstract

    Reprinted from: Bowerman, M. (1989). Learning a semantic system: What role do cognitive predispositions play? In M. L. Rice & R. L. Schiefelbusch (Eds.), The teachability of language (pp. 133-169). Baltimore: Paul H. Brookes.
  • Bowerman, M. (1982). Evaluating competing linguistic models with language acquisition data: Implications of developmental errors with causative verbs. Quaderni di semantica, 3, 5-66.
  • Bowerman, M., Gullberg, M., Majid, A., & Narasimhan, B. (2004). Put project: The cross-linguistic encoding of placement events. In A. Majid (Ed.), Field Manual Volume 9 (pp. 10-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492916.

    Abstract

    How similar are the event concepts encoded by different languages? So far, few event domains have been investigated in any detail. The PUT project extends the systematic cross-linguistic exploration of event categorisation to a new domain, that of placement events (putting things in places and removing them from places). The goal of this task is to explore cross-linguistic universality and variability in the semantic categorisation of placement events (e.g., ‘putting a cup on the table’).

    Additional information

    2004_Put_project_video_stimuli.zip
  • Bowerman, M. (1982). Reorganizational processes in lexical and syntactic development. In E. Wanner, & L. Gleitman (Eds.), Language acquisition: The state of the art (pp. 319-346). New York: Academic Press.
  • Bowerman, M. (1982). Starting to talk worse: Clues to language acquisition from children's late speech errors. In S. Strauss (Ed.), U-shaped behavioral growth (pp. 101-145). New York: Academic Press.
  • Boyle, W., Lindell, A. K., & Kidd, E. (2013). Investigating the role of verbal working memory in young children's sentence comprehension. Language Learning, 63(2), 211-242. doi:10.1111/lang.12003.

    Abstract

    This study considers the role of verbal working memory in sentence comprehension in typically developing English-speaking children. Fifty-six (N = 56) children aged 4;0–6;6 completed a test of language comprehension that contained sentences which varied in complexity, standardized tests of vocabulary and nonverbal intelligence, and three tests of memory that measured the three verbal components of Baddeley's model of Working Memory (WM): the phonological loop, the episodic buffer, and the central executive. The results showed that children experienced most difficulty comprehending sentences that contained noncanonical word order (passives and object relative clauses). A series of linear mixed effects models were run to analyze the contribution of each component of WM to sentence comprehension. In contrast to most previous studies, the measure of the central executive did not predict comprehension accuracy. A canonicity by episodic buffer interaction showed that the episodic buffer measure was positively associated with better performance on the noncanonical sentences. The results are discussed with reference to capacity-limit and experience-dependent approaches to language comprehension.
  • Bramão, I., Reis, A., Petersson, K. M., & Faísca, L. (2016). Knowing that strawberries are red and seeing red strawberries: The interaction between surface colour and colour knowledge information. Journal of Cognitive Psychology, 28(6), 641-657. doi:10.1080/20445911.2016.1182171.

    Abstract

    This study investigates the interaction between surface and colour knowledge information during object recognition. In two different experiments, participants were instructed to decide whether two presented stimuli belonged to the same object identity. On the non-matching trials, we manipulated the shape and colour knowledge information activated by the two stimuli by creating four different stimulus pairs: (1) similar in shape and colour (e.g. TOMATO–APPLE); (2) similar in shape and dissimilar in colour (e.g. TOMATO–COCONUT); (3) dissimilar in shape and similar in colour (e.g. TOMATO–CHILI PEPPER) and (4) dissimilar in both shape and colour (e.g. TOMATO–PEANUT). The object pictures were presented in typical and atypical colours and also in black-and-white. The interaction between surface and colour knowledge was shown to be contingent upon shape information: while colour knowledge is more important for recognising structurally similar shaped objects, surface colour is more prominent for recognising structurally dissimilar shaped objects.
  • Brand, J., Monaghan, P., & Walker, P. (2018). Changing Signs: Testing How Sound-Symbolism Supports Early Word Learning. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 1398-1403). Austin, TX: Cognitive Science Society.

    Abstract

    Learning a language involves learning how to map specific forms onto their associated meanings. Such mappings can utilise arbitrariness and non-arbitrariness, yet how these two systems operate at different stages of vocabulary development is still not fully understood. The Sound-Symbolism Bootstrapping Hypothesis (SSBH) proposes that sound-symbolism is essential for word learning to commence, but empirical evidence of exactly how sound-symbolism influences language learning is still sparse. It may be the case that sound-symbolism supports acquisition of categories of meaning, or that it enables acquisition of individualized word meanings. In two experiments where participants learned form-meaning mappings from either sound-symbolic or arbitrary languages, we demonstrate the changing roles of sound-symbolism and arbitrariness for different vocabulary sizes, showing that sound-symbolism provides an advantage for learning broad categories, which may then transfer to support learning individual words, whereas an arbitrary language impedes acquisition of sound-to-meaning categories.
  • Brand, S., & Ernestus, M. (2018). Listeners’ processing of a given reduced word pronunciation variant directly reflects their exposure to this variant: evidence from native listeners and learners of French. Quarterly Journal of Experimental Psychology, 71(5), 1240-1259. doi:10.1080/17470218.2017.1313282.

    Abstract

    In casual conversations, words often lack segments. This study investigates whether listeners rely on their experience with reduced word pronunciation variants during the processing of single segment reduction. We tested three groups of listeners in a lexical decision experiment with French words produced either with or without word-medial schwa (e.g., /ʀəvy/ and /ʀvy/ for revue). Participants also rated the relative frequencies of the two pronunciation variants of the words. If the recognition accuracy and reaction times for a given listener group correlate best with the frequencies of occurrence holding for that given listener group, recognition is influenced by listeners’ exposure to these variants. Native listeners' relative frequency ratings correlated well with their accuracy scores and RTs. Dutch advanced learners' accuracy scores and RTs were best predicted by their own ratings. In contrast, the accuracy and RTs from Dutch beginner learners of French could not be predicted by any relative frequency rating; the rating task was probably too difficult for them. The participant groups showed behaviour reflecting their differences in experience with the pronunciation variants. Our results strongly suggest that listeners store the frequencies of occurrence of pronunciation variants, and consequently the variants themselves.
  • Brand, J., Monaghan, P., & Walker, P. (2018). The changing role of sound‐symbolism for small versus large vocabularies. Cognitive Science, 42(S2), 578-590. doi:10.1111/cogs.12565.

    Abstract

    Natural language contains many examples of sound‐symbolism, where the form of the word carries information about its meaning. Such systematicity is more prevalent in the words children acquire first, but arbitrariness dominates during later vocabulary development. Furthermore, systematicity appears to promote learning category distinctions, which may become more important as the vocabulary grows. In this study, we tested the relative costs and benefits of sound‐symbolism for word learning as vocabulary size varies. Participants learned form‐meaning mappings for words which were either congruent or incongruent with regard to sound‐symbolic relations. For the smaller vocabulary, sound‐symbolism facilitated learning individual words, whereas for larger vocabularies sound‐symbolism supported learning category distinctions. The changing properties of form‐meaning mappings according to vocabulary size may reflect the different ways in which language is learned at different stages of development.

    Additional information

    https://git.io/v5BXJ
  • Brand, S. (2017). The processing of reduced word pronunciation variants by natives and learners: Evidence from French casual speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Brandler, W. M., Morris, A. P., Evans, D. M., Scerri, T. S., Kemp, J. P., Timpson, N. J., St Pourcain, B., Davey Smith, G., Ring, S. M., Stein, J., Monaco, A. P., Talcott, J. B., Fisher, S. E., Webber, C., & Paracchini, S. (2013). Common variants in left/right asymmetry genes and pathways are associated with relative hand skill. PLoS Genetics, 9(9): e1003751. doi:10.1371/journal.pgen.1003751.

    Abstract

    Humans display structural and functional asymmetries in brain organization, strikingly with respect to language and handedness. The molecular basis of these asymmetries is unknown. We report a genome-wide association study meta-analysis for a quantitative measure of relative hand skill in individuals with dyslexia [reading disability (RD)] (n = 728). The most strongly associated variant, rs7182874 (P = 8.68×10−9), is located in PCSK6, further supporting an association we previously reported. We also confirmed the specificity of this association in individuals with RD; the same locus was not associated with relative hand skill in a general population cohort (n = 2,666). As PCSK6 is known to regulate NODAL in the development of left/right (LR) asymmetry in mice, we developed a novel approach to GWAS pathway analysis, using gene-set enrichment to test for an over-representation of highly associated variants within the orthologs of genes whose disruption in mice yields LR asymmetry phenotypes. Four out of 15 LR asymmetry phenotypes showed an over-representation (FDR≤5%). We replicated three of these phenotypes; situs inversus, heterotaxia, and double outlet right ventricle, in the general population cohort (FDR≤5%). Our findings lead us to propose that handedness is a polygenic trait controlled in part by the molecular mechanisms that establish LR body asymmetry early in development.
  • Brandmeyer, A., Sadakata, M., Spyrou, L., McQueen, J. M., & Desain, P. (2013). Decoding of single-trial auditory mismatch responses for online perceptual monitoring and neurofeedback. Frontiers in Neuroscience, 7: 265. doi:10.3389/fnins.2013.00265.

    Abstract

    Multivariate pattern classification methods are increasingly applied to neuroimaging data in the context of both fundamental research and brain-computer interfacing approaches. Such methods provide a framework for interpreting measurements made at the single-trial level with respect to a set of two or more distinct mental states. Here, we define an approach in which the output of a binary classifier trained on data from an auditory mismatch paradigm can be used for online tracking of perception and as a neurofeedback signal. The auditory mismatch paradigm is known to induce distinct perceptual states related to the presentation of high- and low-probability stimuli, which are reflected in event-related potential (ERP) components such as the mismatch negativity (MMN). The first part of this paper illustrates how pattern classification methods can be applied to data collected in an MMN paradigm, including discussion of the optimization of preprocessing steps, the interpretation of features and how the performance of these methods generalizes across individual participants and measurement sessions. We then go on to show that the output of these decoding methods can be used in online settings as a continuous index of single-trial brain activation underlying perceptual discrimination. We conclude by discussing several potential domains of application, including neurofeedback, cognitive monitoring and passive brain-computer interfaces.

    Additional information

    Brandmeyer_etal_2013a.pdf
  • Brandmeyer, A., Farquhar, J., McQueen, J. M., & Desain, P. (2013). Decoding speech perception by native and non-native speakers using single-trial electrophysiological data. PLoS One, 8: e68261. doi:10.1371/journal.pone.0068261.

    Abstract

    Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: 1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? 2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition
  • Brandt, S., Nitschke, S., & Kidd, E. (2017). Priming the comprehension of German object relative clauses. Language Learning and Development, 13(3), 241-261. doi:10.1080/15475441.2016.1235500.

    Abstract

    Structural priming is a useful laboratory-based technique for investigating how children respond to temporary changes in the distribution of structures in their input. In the current study we investigated whether increasing the number of object relative clauses (RCs) in German-speaking children’s input changes their processing preferences for ambiguous RCs. Fifty-one 6-year-olds and 54 9-year-olds participated in a priming task that (i) gauged their baseline interpretations for ambiguous RC structures, (ii) primed an object-RC interpretation of ambiguous RCs, and (iii) determined whether priming persevered beyond immediate prime-target pairs. The 6-year old children showed no priming effect, whereas the 9-year-old group showed robust priming that was long lasting. Unlike in studies of priming in production, priming did not increase in magnitude when there was lexical overlap between prime and target. Overall, the results suggest that increased exposure to object RCs facilitates children’s interpretation of this otherwise infrequent structure, but only in older children. The implications for acquisition theory are discussed.
  • Brehm, L., & Goldrick, M. (2018). Connectionist principles in theories of speech production. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 372-397). Oxford: Oxford University Press.

    Abstract

    This chapter focuses on connectionist modeling in language production, highlighting how core principles of connectionism provide coverage for empirical observations about representation and selection at the phonological, lexical, and sentence levels. The first section focuses on the connectionist principles of localist representations and spreading activation. It discusses how these two principles have motivated classic models of speech production and shows how they cover results of the picture-word interference paradigm, the mixed error effect, and aphasic naming errors. The second section focuses on how newer connectionist models incorporate the principles of learning and distributed representations through discussion of syntactic priming, cumulative semantic interference, sequencing errors, phonological blends, and code-switching.
  • Brehm, L., & Goldrick, M. (2016). Empirical and conceptual challenges for neurocognitive theories of language production. Language, Cognition and Neuroscience, 31(4), 504-507. doi:10.1080/23273798.2015.1110604.
  • Brehm, L., & Goldrick, M. (2017). Distinguishing discrete and gradient category structure in language: Insights from verb-particle constructions. Journal of Experimental Psychology: Learning, Memory, and Cognition., 43(10), 1537-1556. doi:10.1037/xlm0000390.

    Abstract

    The current work uses memory errors to examine the mental representation of verb-particle constructions (VPCs; e.g., make up the story, cut up the meat). Some evidence suggests that VPCs are represented by a cline in which the relationship between the VPC and its component elements ranges from highly transparent (cut up) to highly idiosyncratic (make up). Other evidence supports a multiple class representation, characterizing VPCs as belonging to discretely separated classes differing in semantic and syntactic structure. We outline a novel paradigm to investigate the representation of VPCs in which we elicit illusory conjunctions, or memory errors sensitive to syntactic structure. We then use a novel application of piecewise regression to demonstrate that the resulting error pattern follows a cline rather than discrete classes. A preregistered replication verifies these findings, and a final preregistered study verifies that these errors reflect syntactic structure. This provides evidence for gradient rather than discrete representations across levels of representation in language processing.
  • Brehm, L., & Bock, K. (2017). Referential and lexical forces in number agreement. Language, Cognition and Neuroscience, 32(2), 129-146. doi:10.1080/23273798.2016.1234060.

    Abstract

    In work on grammatical agreement in sentence production, there are accounts of verb number formulation that emphasise the role of whole-structure properties and accounts that emphasise the role of word-driven properties. To evaluate these alternatives, we carried out two experiments that examined a referential (wholistic) contributor to agreement along with two lexical-semantic (local) factors. Both experiments gauged the accuracy and latency of inflected-verb production in order to assess how variations in grammatical number interacted with the other factors. The accuracy of verb production was modulated both by the referential effect of notional number and by the lexical-semantic effects of relatedness and category membership. As an index of agreement difficulty, latencies were little affected by either factor. The findings suggest that agreement is sensitive to referential as well as lexical forces and highlight the importance of lexical-structural integration in the process of sentence production.
  • Brehm, L., & Bock, K. (2013). What counts in grammatical number agreement? Cognition, 128(2), 149-169. doi:10.1016/j.cognition.2013.03.009.

    Abstract

    Both notional and grammatical number affect agreement during language production. To explore their workings, we investigated how semantic integration, a type of conceptual relatedness, produces variations in agreement (Solomon & Pearlmutter, 2004). These agreement variations are open to competing notional and lexical–grammatical number accounts. The notional hypothesis is that changes in number agreement reflect differences in referential coherence: More coherence yields more singularity. The lexical–grammatical hypothesis is that changes in agreement arise from competition between nouns differing in grammatical number: More competition yields more plurality. These hypotheses make opposing predictions about semantic integration. On the notional hypothesis, semantic integration promotes singular agreement. On the lexical–grammatical hypothesis, semantic integration promotes plural agreement. We tested these hypotheses with agreement elicitation tasks in two experiments. Both experiments supported the notional hypothesis, with semantic integration creating faster and more frequent singular agreement. This implies that referential coherence mediates the effect of semantic integration on number agreement.
  • Broeder, D., Brugman, H., Oostdijk, N., & Wittenburg, P. (2004). Towards Dynamic Corpora: Workshop on compiling and processing spoken corpora. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 59-62). Paris: European Language Resources Association.
  • Broeder, D., Wittenburg, P., & Crasborn, O. (2004). Using Profiles for IMDI Metadata Creation. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 1317-1320). Paris: European Language Resources Association.
  • Broeder, D., Declerck, T., Romary, L., Uneson, M., Strömqvist, S., & Wittenburg, P. (2004). A large metadata domain of language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D. (2004). 40,000 IMDI sessions. Language Archive Newsletter, 1(4), 12-12.
  • Broeder, D., Nava, M., & Declerck, T. (2004). INTERA - a Distributed Domain of Metadata Resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D., & Offenga, F. (2004). IMDI Metadata Set 3.0. Language Archive Newsletter, 1(2), 3-3.
  • Broersma, M., Carter, D., & Acheson, D. J. (2016). Cognate costs in bilingual speech production: Evidence from language switching. Frontiers in Psychology, 7: 1461. doi:10.3389/fpsyg.2016.01461.

    Abstract

    This study investigates cross-language lexical competition in the bilingual mental lexicon. It provides evidence for the occurrence of inhibition as well as the commonly reported facilitation during the production of cognates (words with similar phonological form and meaning in two languages) in a mixed picture naming task by highly proficient Welsh-English bilinguals. Previous studies have typically found cognate facilitation. It has previously been proposed (with respect to non-cognates) that cross-language inhibition is limited to low-proficient bilinguals; therefore, we tested highly proficient, early bilinguals. In a mixed naming experiment (i.e., picture naming with language switching), 48 highly proficient, early Welsh-English bilinguals named pictures in Welsh and English, including cognate and non-cognate targets. Participants were English-dominant, Welsh-dominant, or had equal language dominance. The results showed evidence for cognate inhibition in two ways. First, both facilitation and inhibition were found on the cognate trials themselves, compared to non-cognate controls, modulated by the participants' language dominance. The English-dominant group showed cognate inhibition when naming in Welsh (and no difference between cognates and controls when naming in English), and the Welsh-dominant and equal dominance groups generally showed cognate facilitation. Second, cognate inhibition was found as a behavioral adaptation effect, with slower naming for non-cognate filler words in trials after cognates than after non-cognate controls. This effect was consistent across all language dominance groups and both target languages, suggesting that cognate production involved cognitive control even if this was not measurable in the cognate trials themselves. Finally, the results replicated patterns of symmetrical switch costs, as commonly reported for balanced bilinguals. We propose that cognate processing might be affected by two different processes, namely competition at the lexical-semantic level and facilitation at the word form level, and that facilitation at the word form level might (sometimes) outweigh any effects of inhibition at the lemma level. In sum, this study provides evidence that cognate naming can cause costs in addition to benefits. The finding of cognate inhibition, particularly for the highly proficient bilinguals tested, provides strong evidence for the occurrence of lexical competition across languages in the bilingual mental lexicon.
  • Broersma, M., & Kolkman, K. M. (2004). Lexical representation of non-native phonemes. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1241-1244). Seoul: Sunjijn Printing Co.
  • Brouwer, S. (2013). Continuous recognition memory for spoken words in noise. Proceedings of Meetings on Acoustics, 19: 060117. doi:10.1121/1.4798781.

    Abstract

    Previous research has shown that talker variability affects recognition memory for spoken words (Palmeri et al., 1993). This study examines whether additive noise is similarly retained in memory for spoken words. In a continuous recognition memory task, participants listened to a list of spoken words mixed with noise consisting of a pure tone or of high-pass filtered white noise. The noise and speech were in non-overlapping frequency bands. In Experiment 1, listeners indicated whether each spoken word in the list was OLD (heard before in the list) or NEW. Results showed that listeners were as accurate and as fast at recognizing a word as old if it was repeated with the same or different noise. In Experiment 2, listeners also indicated whether words judged as OLD were repeated with the same or with a different type of noise. Results showed that listeners benefitted from hearing words presented with the same versus different noise. These data suggest that spoken words and temporally-overlapping but spectrally non-overlapping noise are retained or reconstructed together for explicit, but not for implicit recognition memory. This indicates that the extent to which noise variability is retained seems to depend on the depth of processing
  • Brouwer, S., Mitterer, H., & Huettig, F. (2013). Discourse context and the recognition of reduced and canonical spoken words. Applied Psycholinguistics, 34, 519-539. doi:10.1017/S0142716411000853.

    Abstract

    In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" competitor, a "canonical form" competitor and an unrelated distractor. Target sentences were presented in isolation or with a wider discourse context. Experiment 1 revealed that target recognition was facilitated by wider discourse information. Importantly, the recognition of reduced forms improved significantly when preceded by strongly rather than by weakly supportive discourse contexts. This was not the case for canonical forms: listeners' target word recognition was not dependent on the degree of supportive context. Experiment 2 showed that the differential context effects in Experiment 1 were not due to an additional amount of speaker information. Thus, these data suggest that in natural settings a strongly supportive discourse context is more important for the recognition of reduced forms than the recognition of canonical forms.
  • Brown, P. (2004). Position and motion in Tzeltal frog stories: The acquisition of narrative style. In S. Strömqvist, & L. Verhoeven (Eds.), Relating events in narrative: Typological and contextual perspectives (pp. 37-57). Mahwah: Erlbaum.

    Abstract

    How are events framed in narrative? Speakers of English (a 'satellite-framed' language), when 'reading' Mercer Mayer's wordless picture book 'Frog, Where Are You?', find the story self-evident: a boy has a dog and a pet frog; the frog escapes and runs away; the boy and dog look for it across hill and dale, through woods and over a cliff, until they find it and return home with a baby frog child of the original pet frog. In Tzeltal, as spoken in a Mayan community in southern Mexico, the story is somewhat different, because the language structures event descriptions differently. Tzeltal is in part a 'verb-framed' language with a set of Path-encoding motion verbs, so that the bare bones of the Frog story can consist of verbs translating as 'go'/'pass by'/'ascend'/'descend'/'arrive'/'return'. But Tzeltal also has satellite-framing adverbials, grammaticized from the same set of motion verbs, which encode the direction of motion or the orientation of static arrays. Furthermore, motion is not generally encoded barebones, but vivid pictorial detail is provided by positional verbs which can describe the position of the Figure as an outcome of a motion event; motion and stasis are thereby combined in a single event description. (For example: jipot jawal "he has been thrown (by the deer) lying_face_upwards_spread-eagled".) This paper compares the use of these three linguistic resources in frog narratives from 14 Tzeltal adults and 21 children, looks at their development in the narratives of children between the ages of 4-12, and considers the results in relation to those from Berman and Slobin's (1996) comparative study of adult and child Frog stories.
  • Brown, P., & Levinson, S. C. (2004). Frames of spatial reference and their acquisition in Tenejapan Tzeltal. In A. Assmann, U. Gaier, & G. Trommsdorff (Eds.), Zwischen Literatur und Anthropologie: Diskurse, Medien, Performanzen (pp. 285-314). Tübingen: Gunter Narr.

    Abstract

    This is a reprint of the Brown and Levinson 2000 article.
  • Brown, P., Levinson, S. C., & Senft, G. (2004). Initial references to persons and places. In A. Majid (Ed.), Field Manual Volume 9 (pp. 37-44). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492929.

    Abstract

    This task has two parts: (i) video-taped elicitation of the range of possibilities for referring to persons and places, and (ii) observations of (first) references to persons and places in video-taped natural interaction. The goal of this task is to establish the repertoires of referential terms (and other practices) used for referring to persons and to places in particular languages and cultures, and provide examples of situated use of these kinds of referential practices in natural conversation. This data will form the basis for cross-language comparison, and for formulating hypotheses about general principles underlying the deployment of such referential terms in natural language usage.
  • Brown, P., Gaskins, S., Lieven, E., Striano, T., & Liszkowski, U. (2004). Multimodal multiperson interaction with infants aged 9 to 15 months. In A. Majid (Ed.), Field Manual Volume 9 (pp. 56-63). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492925.

    Abstract

    Interaction, for all that it has an ethological base, is culturally constituted, and how new social members are enculturated into the interactional practices of the society is of critical interest to our understanding of interaction – how much is learned, how variable is it across cultures – as well as to our understanding of the role of culture in children’s social-cognitive development. The goal of this task is to document the nature of caregiver infant interaction in different cultures, especially during the critical age of 9-15 months when children come to have an understanding of others’ intentions. This is of interest to all students of interaction; it does not require specialist knowledge of children.
  • Brown, P. (2003). Multimodal multiperson interaction with infants aged 9 to 15 months. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 22-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877610.

    Abstract

    Interaction, for all that it has an ethological base, is culturally constituted, and how new social members are enculturated into the interactional practices of the society is of critical interest to our understanding of interaction – how much is learned, how variable is it across cultures – as well as to our understanding of the role of culture in children’s social-cognitive development. The goal of this task is to document the nature of caregiver infant interaction in different cultures, especially during the critical age of 9-15 months when children come to have an understanding of others’ intentions. This is of interest to all students of interaction; it does not require specialist knowledge of children.
  • Brown, A., & Gullberg, M. (2013). L1–L2 convergence in clausal packaging in Japanese and English. Bilingualism: Language and Cognition, 16, 477-494. doi:10.1017/S1366728912000491.

    Abstract

    This research received technical and financial support from Syracuse University, the Max Planck Institute for Psycholinguistics, and the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO; MPI 56-384, The Dynamics of Multilingual Processing, awarded to Marianne Gullberg and Peter Indefrey).
  • Brown, P. (2013). La estructura conversacional y la adquisición del lenguaje: El papel de la repetición en el habla de los adultos y niños tzeltales. In L. de León Pasquel (Ed.), Nuevos senderos en el studio de la adquisición de lenguas mesoamericanas: Estructura, narrativa y socialización (pp. 35-82). Mexico: CIESAS-UNAM.

    Abstract

    This is a translation of the Brown 1998 article in Journal of Linguistic Anthropology, 'Conversational structure and language acquisition: The role of repetition in Tzeltal adult and child speech'.

  • Brown, P., Pfeiler, B., de León, L., & Pye, C. (2013). The acquisition of agreement in four Mayan languages. In E. Bavin, & S. Stoll (Eds.), The acquisition of ergativity (pp. 271-306). Amsterdam: Benjamins.

    Abstract

    This paper presents results of a comparative project documenting the development of verbal agreement inflections in children learning four different Mayan languages: K’iche’, Tzeltal, Tzotzil, and Yukatek. These languages have similar inflectional paradigms: they have a generally agglutinative morphology, with transitive verbs obligatorily marked with separate cross-referencing inflections for the two core arguments (‘ergative’ and ‘absolutive’). Verbs are also inflected for aspect and mood, and they carry a ‘status suffix’ which generally marks verb transitivity and mood. At a more detailed level, the four languages differ strikingly in the realization of cross-reference marking. For each language, we examined longitudinal language production data from two children at around 2;0, 2;6, 3;0, and 3;6 years of age. We relate differences in the acquisition patterns of verbal morphology in the languages to 1) the placement of affixes, 2) phonological and prosodic prominence, 3) language-specific constraints on the various forms of the affixes, and 4) consistent vs. split ergativity, and conclude that prosodic salience accounts provide the best explanation for the acquisition patterns in these four languages.

  • Brown, P. (1991). Sind Frauen höflicher? Befunde aus einer Maya-Gemeinde. In S. Günther, & H. Kotthoff (Eds.), Von fremden Stimmen: Weibliches und männliches Sprechen im Kulturvergleich. Frankfurt am Main: Suhrkamp.

    Abstract

    This is a German translation of Brown 1980, How and why are women more polite: Some evidence from a Mayan community.
  • Brown, P. (2017). Politeness and impoliteness. In Y. Huang (Ed.), Oxford handbook of pragmatics (pp. 383-399). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199697960.013.16.

    Abstract

    This article selectively reviews the literature on politeness across different disciplines—linguistics, anthropology, communications, conversation analysis, social psychology, and sociology—and critically assesses how both theoretical approaches to politeness and research on linguistic politeness phenomena have evolved over the past forty years. Major new developments include a shift from predominantly linguistic approaches to those examining politeness and impoliteness as processes that are embedded and negotiated in interactional and cultural contexts, as well as a greater focus on how both politeness and interactional confrontation and conflict fit into our developing understanding of human cooperation and universal aspects of human social interaction.

