Publications

  • Bien, H., Levelt, W. J. M., & Baayen, R. H. (2005). Frequency effects in compound production. Proceedings of the National Academy of Sciences of the United States of America, 102(49), 17876-17881.

    Abstract

    Four experiments investigated the role of frequency information in compound production by independently varying the frequencies of the first and second constituent as well as the frequency of the compound itself. Pairs of Dutch noun-noun compounds were selected such that there was a maximal contrast for one frequency while matching the other two frequencies. In a position-response association task, participants first learned to associate a compound with a visually marked position on a computer screen. In the test phase, participants had to produce the associated compound in response to the appearance of the position mark, and we measured speech onset latencies. The compound production latencies varied significantly according to factorial contrasts in the frequencies of both constituting morphemes but not according to a factorial contrast in compound frequency, providing further evidence for decompositional models of speech production. In a stepwise regression analysis of the joint data of Experiments 1-4, however, compound frequency was a significant nonlinear predictor, with facilitation in the low-frequency range and a trend toward inhibition in the high-frequency range. Furthermore, a combination of structural measures of constituent frequencies and entropies explained significantly more variance than a strict decompositional model, including cumulative root frequency as the only measure of constituent frequency, suggesting a role for paradigmatic relations in the mental lexicon.
  • Birhane, A., & Guest, O. (2021). Towards decolonising computational sciences. Kvinder, Køn & Forskning, 29(2), 60-73. doi:10.7146/kkf.v29i2.124899.

    Abstract

    This article sets out our perspective on how to begin the journey of decolonising computational fields, such as data and cognitive sciences. We see this struggle as requiring two basic steps: a) realisation that the present-day system has inherited, and still enacts, hostile, conservative, and oppressive behaviours and principles towards women of colour; and b) rejection of the idea that centring individual people is a solution to system-level problems. The longer we ignore these two steps, the more “our” academic system maintains its toxic structure, excludes, and harms Black women and other minoritised groups. This also keeps the door open to discredited pseudoscience, like eugenics and physiognomy. We propose that grappling with our fields’ histories and heritage holds the key to avoiding mistakes of the past. In contrast to, for example, initiatives such as “diversity boards”, which can be harmful because they superficially appear reformatory but nonetheless center whiteness and maintain the status quo. Building on the work of many women of colour, we hope to advance the dialogue required to build both a grass-roots and a top-down re-imagining of computational sciences — including but not limited to psychology, neuroscience, cognitive science, computer science, data science, statistics, machine learning, and artificial intelligence. We aspire to progress away from these fields’ stagnant, sexist, and racist shared past into an ecosystem that welcomes and nurtures demographically diverse researchers and ideas that critically challenge the status quo.
  • Blomert, L., & Hagoort, P. (1987). Neurobiologische en neuropsychologische aspecten van dyslexie. In J. Hamers, & A. Van der Leij (Eds.), Dyslexie 87 (pp. 35-44). Lisse: Swets and Zeitlinger.
  • Bluijs, S., Dera, J., & Peeters, D. (2021). Waarom digitale literatuur in het literatuuronderwijs thuishoort. Tijdschrift voor Nederlandse Taal- en Letterkunde, 137(2), 150-163. doi:10.5117/TNTL2021.2.003.BLUI.
  • Blythe, J. (2013). Preference organization driving structuration: Evidence from Australian Aboriginal interaction for pragmatically motivated grammaticalization. Language, 89(4), 883-919.
  • Bock, K., & Levelt, W. J. M. (2002). Language production: Grammatical encoding. In G. T. Altmann (Ed.), Psycholinguistics: Critical concepts in psychology (pp. 405-452). London: Routledge.
  • Bodur, K., Branje, S., Peirolo, M., Tiscareno, I., & German, J. S. (2021). Domain-initial strengthening in Turkish: Acoustic cues to prosodic hierarchy in stop consonants. In Proceedings of Interspeech 2021 (pp. 1459-1463). doi:10.21437/Interspeech.2021-2230.

    Abstract

    Studies have shown that cross-linguistically, consonants at the left edge of higher-level prosodic boundaries tend to be more forcefully articulated than those at lower-level boundaries, a phenomenon known as domain-initial strengthening. This study tests whether similar effects occur in Turkish, using the Autosegmental-Metrical model proposed by Ipek & Jun [1, 2] as the basis for assessing boundary strength. Productions of /t/ and /d/ were elicited in four domain-initial prosodic positions corresponding to progressively higher-level boundaries: syllable, word, intermediate phrase, and Intonational Phrase. A fifth position, nuclear word, was included in order to better situate it within the prosodic hierarchy. Acoustic correlates of articulatory strength were measured, including closure duration for /d/ and /t/, as well as voice onset time and burst energy for /t/. Our results show that closure duration increases cumulatively from syllable to intermediate phrase, while voice onset time and burst energy are not influenced by boundary strength. These findings provide corroborating evidence for Ipek & Jun’s model, particularly for the distinction between word and intermediate phrase boundaries. Additionally, articulatory strength at the left edge of the nuclear word patterned closely with word-initial position, supporting the view that the nuclear word is not associated with a distinct phrasing domain.
  • De Boer, M., Toni, I., & Willems, R. M. (2013). What drives successful verbal communication? Frontiers in Human Neuroscience, 7: 622. doi:10.3389/fnhum.2013.00622.

    Abstract

    There is a vast amount of potential mappings between behaviors and intentions in communication: a behavior can indicate a multitude of different intentions, and the same intention can be communicated with a variety of behaviors. Humans routinely solve these many-to-many referential problems when producing utterances for an Addressee. This ability might rely on social cognitive skills, for instance, the ability to manipulate unobservable summary variables to disambiguate ambiguous behavior of other agents (“mentalizing”) and the drive to invest resources into changing and understanding the mental state of other agents (“communicative motivation”). Alternatively, the ambiguities of verbal communicative interactions might be solved by general-purpose cognitive abilities that process cues that are incidentally associated with the communicative interaction. In this study, we assess these possibilities by testing which cognitive traits account for communicative success during a verbal referential task. Cognitive traits were assessed with psychometric scores quantifying motivation, mentalizing abilities, and general-purpose cognitive abilities, taxing abstract visuo-spatial abilities. Communicative abilities of participants were assessed by using an on-line interactive task that required a speaker to verbally convey a concept to an Addressee. The communicative success of the utterances was quantified by measuring how frequently a number of Evaluators would infer the correct concept. Speakers with high motivational and general-purpose cognitive abilities generated utterances that were more easily interpreted. These findings extend to the domain of verbal communication the notion that motivational and cognitive factors influence the human ability to rapidly converge on shared communicative innovations.
  • Boersma, M., Kemner, C., de Reus, M. A., Collin, G., Snijders, T. M., Hofman, D., Buitelaar, J. K., Stam, C. J., & van den Heuvel, M. P. (2013). Disrupted functional brain networks in autistic toddlers. Brain Connectivity, 3(1), 41-49. doi:10.1089/brain.2012.0127.

    Abstract

    Communication and integration of information between brain regions plays a key role in healthy brain function. Conversely, disruption in brain communication may lead to cognitive and behavioral problems. Autism is a neurodevelopmental disorder that is characterized by impaired social interactions and aberrant basic information processing. Aberrant brain connectivity patterns have indeed been hypothesized to be a key neural underpinning of autism. In this study, graph analytical tools are used to explore the possible deviant functional brain network organization in autism at a very early stage of brain development. Electroencephalography (EEG) recordings in 12 toddlers with autism (mean age 3.5 years) and 19 control subjects were used to assess interregional functional brain connectivity, with functional brain networks constructed at the level of temporal synchronization between brain regions underlying the EEG electrodes. Children with autism showed a significantly increased normalized path length and reduced normalized clustering, suggesting a reduced global communication capacity already during early brain development. In addition, whole brain connectivity was found to be significantly reduced in these young patients suggesting an overall under-connectivity of functional brain networks in autism. Our findings support the hypothesis of abnormal neural communication in autism, with deviating effects already present at the early stages of brain development.
  • Bögels, S., & Torreira, F. (2021). Turn-end estimation in conversational turn-taking: The roles of context and prosody. Discourse Processes, 58(10), 903-924. doi:10.1080/0163853X.2021.1986664.

    Abstract

    This study investigated the role of contextual and prosodic information in turn-end estimation by means of a button-press task. We presented participants with turns extracted from a corpus of telephone calls visually (i.e., in transcribed form, word-by-word) and auditorily, and asked them to anticipate turn ends by pressing a button. The availability of the previous conversational context was generally helpful for turn-end estimation in short turns only, and more clearly so in the visual task than in the auditory task. To investigate the role of prosody, we examined whether participants in the auditory task pressed the button close to turn-medial points likely to constitute turn ends based on lexico-syntactic information alone. We observed that the vast majority of such button presses occurred in the presence of an intonational boundary rather than in its absence. These results are consistent with the view that prosodic cues in the proximity of turn ends play a relevant role in turn-end estimation.
  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2013). "Are we still talking about the same thing?" MEG reveals perspective-taking in response to pragmatic violations, but not in anticipation. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 215-220). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0066/index.html.

    Abstract

    The current study investigates whether mentalizing, or taking the perspective of your interlocutor, plays an essential role throughout a conversation or whether it is mostly used in reaction to misunderstandings. This study is the first to use a brain-imaging method, MEG, to answer this question. In a first phase of the experiment, MEG participants interacted "live" with a confederate who set naming precedents for certain pictures. In a later phase, these precedents were sometimes broken by a speaker who named the same picture in a different way. This could be done by the same speaker, who set the precedent, or by a different speaker. Source analysis of MEG data showed that in the 800 ms before the naming, when the picture was already on the screen, episodic memory and language areas were activated, but no mentalizing areas, suggesting that the speaker's naming intentions were not anticipated by the listener on the basis of shared experiences. Mentalizing areas only became activated after the same speaker had broken a precedent, which we interpret as a reaction to the violation of conversational pragmatics.
  • Bögels, S., Schriefers, H., Vonk, W., Chwilla, D., & Kerkhofs, R. (2013). Processing consequences of superfluous and missing prosodic breaks in auditory sentence comprehension. Neuropsychologia, 51, 2715-2728. doi:10.1016/j.neuropsychologia.2013.09.008.

    Abstract

    This ERP study investigates whether a superfluous prosodic break (i.e., a prosodic break that does not coincide with a syntactic break) has more severe processing consequences during auditory sentence comprehension than a missing prosodic break (i.e., the absence of a prosodic break at the position of a syntactic break). Participants listened to temporarily ambiguous sentences involving a prosody-syntax match or mismatch. The disambiguation of these sentences was always lexical in nature in the present experiment. This contrasts with a related study by Pauker, Itzhak, Baum, and Steinhauer (2011), where the disambiguation was of a lexical type for missing PBs and of a prosodic type for superfluous PBs. Our results converge with those of Pauker et al.: superfluous prosodic breaks lead to more severe processing problems than missing prosodic breaks. Importantly, the present results extend those of Pauker et al. showing that this holds when the disambiguation is always lexical in nature. Furthermore, our results show that the way listeners use prosody can change over the course of the experiment which bears consequences for future studies.
  • Bohnemeyer, J. (2002). The grammar of time reference in Yukatek Maya. Munich: LINCOM.
  • Bohnemeyer, J., & Majid, A. (2002). ECOM causality revisited version 4. In S. Kita (Ed.), 2002 Supplement (version 3) for the “Manual” for the field season 2001 (pp. 35-38). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Bohnemeyer, J. (2004). Argument and event structure in Yukatek verb classes. In J.-Y. Kim, & A. Werle (Eds.), Proceedings of The Semantics of Under-Represented Languages in the Americas. Amherst, Mass: GLSA.

    Abstract

    In Yukatek Maya, event types are lexicalized in verb roots and stems that fall into a number of different form classes on the basis of (a) patterns of aspect-mood marking and (b) privileges of undergoing valence-changing operations. Of particular interest are the intransitive classes in the light of Perlmutter’s (1978) Unaccusativity hypothesis. In the spirit of Levin & Rappaport Hovav (1995) [L&RH], Van Valin (1990), Zaenen (1993), and others, this paper investigates whether (and to what extent) the association between formal predicate classes and event types is determined by argument structure features such as ‘agentivity’ and ‘control’ or features of lexical aspect such as ‘telicity’ and ‘durativity’. It is shown that mismatches between agentivity/control and telicity/durativity are even more extensive in Yukatek than they are in English (Abusch 1985; L&RH, Van Valin & LaPolla 1997), providing new evidence against Dowty’s (1979) reconstruction of Vendler’s (1967) ‘time schemata of verbs’ in terms of argument structure configurations. Moreover, contrary to what has been claimed in earlier studies of Yukatek (Krämer & Wunderlich 1999, Lucy 1994), neither agentivity/control nor telicity/durativity turn out to be good predictors of verb class membership. Instead, the patterns of aspect-mood marking prove to be sensitive only to the presence or absence of state change, in a way that supports the unified analysis of all verbs of gradual change proposed by Kennedy & Levin (2001). The presence or absence of ‘internal causation’ (L&RH) may motivate the semantic interpretation of transitivization operations. An explicit semantics for the valence-changing operations is proposed, based on Parsons’s (1990) Neo-Davidsonian approach.
  • Bohnemeyer, J. (2002). [Review of the book Explorations in linguistic relativity ed. by Martin Pütz and Marjolijn H. Verspoor]. Language in Society, 31(3), 452-456. doi:10.1017/S0047404502020316.
  • Bohnemeyer, J., Kelly, A., & Abdel Rahman, R. (2002). Max-Planck-Institute for Psycholinguistics: Annual Report 2002. Nijmegen: MPI for Psycholinguistics.
  • Bohnemeyer, J., Burenhult, N., Enfield, N. J., & Levinson, S. C. (2004). Landscape terms and place names elicitation guide. In A. Majid (Ed.), Field Manual Volume 9 (pp. 75-79). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492904.

    Abstract

    Landscape terms reflect the relationship between geographic reality and human cognition. Are ‘mountains’, ‘rivers’, ‘lakes’ and the like universally recognised in languages as naturally salient objects to be named? The landscape subproject is concerned with the interrelation between language, cognition and geography. Specifically, it investigates issues relating to how landforms are categorised cross-linguistically as well as the characteristics of place naming.
  • Bohnemeyer, J. (1998). Temporale Relatoren im Hispano-Yukatekischen Sprachkontakt. In A. Koechert, & T. Stolz (Eds.), Convergencia e Individualidad - Las lenguas Mayas entre hispanización e indigenismo (pp. 195-241). Hannover, Germany: Verlag für Ethnologie.
  • Bohnemeyer, J. (1998). Sententiale Topics im Yukatekischen. In Z. Dietmar (Ed.), Deskriptive Grammatik und allgemeiner Sprachvergleich (pp. 55-85). Tübingen, Germany: Max-Niemeyer-Verlag.
  • Bohnemeyer, J. (2008). The pitfalls of getting from here to there. In M. Bowerman, & P. Brown (Eds.), Crosslinguistic Perspectives on Argument Structure: Implications for Learnability (pp. 49-68). New York City, NY, USA: Lawrence Erlbaum Associates.
  • Bone, D., Ramanarayanan, V., Narayanan, S., Hoedemaker, R. S., & Gordon, P. C. (2013). Analyzing eye-voice coordination in rapid automatized naming. In F. Bimbot, C. Cerisara, C. Fougeron, G. Gravier, L. Lamel, F. Pellegrino, & P. Perrier (Eds.), INTERSPEECH-2013: 14th Annual Conference of the International Speech Communication Association (pp. 2425-2429). ISCA Archive. Retrieved from http://www.isca-speech.org/archive/interspeech_2013/i13_2425.html.

    Abstract

    Rapid Automatized Naming (RAN) is a powerful tool for predicting future reading skill. A person’s ability to quickly name symbols as they scan a table is related to higher-level reading proficiency in adults and is predictive of future literacy gains in children. However, noticeable differences are present in the strategies or patterns within groups having similar task completion times. Thus, a further stratification of RAN dynamics may lead to better characterization and later intervention to support reading skill acquisition. In this work, we analyze the dynamics of the eyes, voice, and the coordination between the two during performance. It is shown that fast performers are more similar to each other than to slow performers in their patterns, but not vice versa. Further insights are provided about the patterns of more proficient subjects. For instance, fast performers tended to exhibit smoother behavior contours, suggesting a more stable perception-production process.
  • Bønnelykke, K., Matheson, M. C., Pers, T. H., Granell, R., Strachan, D. P., Alves, A. C., Linneberg, A., Curtin, J. A., Warrington, N. M., Standl, M., Kerkhof, M., Jonsdottir, I., Bukvic, B. K., Kaakinen, M., Sleimann, P., Thorleifsson, G., Thorsteinsdottir, U., Schramm, K., Baltic, S., Kreiner-Møller, E., Simpson, A., St Pourcain, B., Coin, L., Hui, J., Walters, E. H., Tiesler, C. M. T., Duffy, D. L., Jones, G., Ring, S. M., McArdle, W. L., Price, L., Robertson, C. F., Pekkanen, J., Tang, C. S., Thiering, E., Montgomery, G. W., Hartikainen, A.-L., Dharmage, S. C., Husemoen, L. L., Herder, C., Kemp, J. P., Elliot, P., James, A., Waldenberger, M., Abramson, M. J., Fairfax, B. P., Knight, J. C., Gupta, R., Thompson, P. J., Holt, P., Sly, P., Hirschhorn, J. N., Blekic, M., Weidinger, S., Hakonarsson, H., Stefansson, K., Heinrich, J., Postma, D. S., Custovic, A., Pennell, C. E., Jarvelin, M.-R., Koppelman, G. H., Timpson, N., Ferreira, M. A., Bisgaard, H., Henderson, A. J., Australian Asthma Genetics Consortium (AAGC), & EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium (2013). Meta-analysis of genome-wide association studies identifies ten loci influencing allergic sensitization. Nature Genetics, 45(8), 902-906. doi:10.1038/ng.2694.

    Abstract

    Allergen-specific immunoglobulin E (present in allergic sensitization) has a central role in the pathogenesis of allergic disease. We performed the first large-scale genome-wide association study (GWAS) of allergic sensitization in 5,789 affected individuals and 10,056 controls and followed up the top SNP at each of 26 loci in 6,114 affected individuals and 9,920 controls. We increased the number of susceptibility loci with genome-wide significant association with allergic sensitization from three to ten, including SNPs in or near TLR6, C11orf30, STAT6, SLC25A46, HLA-DQB1, IL1RL1, LPP, MYC, IL2 and HLA-B. All the top SNPs were associated with allergic symptoms in an independent study. Risk-associated variants at these ten loci were estimated to account for at least 25% of allergic sensitization and allergic rhinitis. Understanding the molecular mechanisms underlying these associations may provide new insights into the etiology of allergic disease.
  • Bonte, M. L., Mitterer, H., Zellagui, N., Poelmans, H., & Blomert, L. (2005). Auditory cortical tuning to statistical regularities in phonology. Clinical Neurophysiology, 116(12), 2765-2774. doi:10.1016/j.clinph.2005.08.012.

    Abstract

    Objective: Ample behavioral evidence suggests that distributional properties of the language environment influence the processing of speech. Yet, how these characteristics are reflected in neural processes remains largely unknown. The present ERP study investigates neurophysiological correlates of phonotactic probability: the distributional frequency of phoneme combinations. Methods: We employed an ERP measure indicative of experience-dependent auditory memory traces, the mismatch negativity (MMN). We presented pairs of non-words that differed by the degree of phonotactic probability in a codified passive oddball design that minimizes the contribution of acoustic processes. Results: In Experiment 1 the non-word with high phonotactic probability (notsel) elicited a significantly enhanced MMN as compared to the non-word with low phonotactic probability (notkel). In Experiment 2 this finding was replicated with a non-word pair with a smaller acoustic difference (notsel–notfel). An MMN enhancement was not observed in a third acoustic control experiment with stimuli having comparable phonotactic probability (so–fo). Conclusions: Our data suggest that auditory cortical responses to phoneme clusters are modulated by statistical regularities of phoneme combinations. Significance: This study indicates that the language environment is relevant in shaping the neural processing of speech. Furthermore, it provides a potentially useful design for investigating implicit phonological processing in children with anomalous language functions like dyslexia.
  • Borgwaldt, S. R., Hellwig, F. M., & De Groot, A. M. B. (2005). Onset entropy matters: Letter-to-phoneme mappings in seven languages. Reading and Writing, 18, 211-229. doi:10.1007/s11145-005-3001-9.
  • Borgwaldt, S. R., Hellwig, F. M., & De Groot, A. M. B. (2004). Word-initial entropy in five languages: Letter to sound, and sound to letter. Written Language & Literacy, 7(2), 165-184.

    Abstract

    Alphabetic orthographies show more or less ambiguous relations between spelling and sound patterns. In transparent orthographies, like Italian, the pronunciation can be predicted from the spelling and vice versa. Opaque orthographies, like English, often display unpredictable spelling–sound correspondences. In this paper we present a computational analysis of word-initial bi-directional spelling–sound correspondences for Dutch, English, French, German, and Hungarian, stated in entropy values for various grain sizes. This allows us to position the five languages on the continuum from opaque to transparent orthographies, both in spelling-to-sound and sound-to-spelling directions. The analysis is based on metrics derived from information theory, and therefore independent of any specific theory of visual word recognition as well as of any specific theoretical approach of orthography.
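
    The entropy values referred to here are standard Shannon entropy over the distribution of mappings at a given grain size. As a rough sketch of the word-initial, letter-to-sound direction (the toy counts below are invented for illustration, not taken from the paper's corpora):

```python
import math
from collections import Counter

def onset_entropy(pronunciations):
    """Shannon entropy (in bits) of the distribution of pronunciations
    attested for one word-initial spelling unit: H = sum of -p * log2(p).
    0 bits = a fully predictable mapping; higher values = more ambiguity."""
    counts = Counter(pronunciations)
    total = sum(counts.values())
    return sum(-(n / total) * math.log2(n / total) for n in counts.values())

# Invented example: word-initial <c> pronounced /k/ three times, /s/ once
print(onset_entropy(["k", "k", "k", "s"]))  # ≈ 0.811 bits
# A perfectly transparent mapping has zero entropy
print(onset_entropy(["k", "k", "k", "k"]))  # 0.0
```

    Averaging such values over all word-initial units of a corpus, in both the spelling-to-sound and sound-to-spelling directions, yields per-language transparency measures of the kind the paper compares.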
  • Boroditsky, L., Gaby, A., & Levinson, S. C. (2008). Time in space. In A. Majid (Ed.), Field Manual Volume 11 (pp. 52-76). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492932.

    Abstract

    How do different languages and cultures conceptualise time? This question is part of a broader set of questions about how humans come to represent and reason about abstract entities – things we cannot see or touch. For example, how do we come to represent and reason about abstract domains like justice, ideas, kinship, morality, or politics? There are two aspects of this project: (1) Time arrangement tasks to assess the way people arrange time either as temporal progressions expressed in picture cards or done using small tokens or points in space. (2) A time & space language inventory to discover and document the linguistic coding of time and its relation to space, as well as the cultural knowledge structures related to time.

    Additional information

    2008_Time_in_space_stimuli.zip
  • Bosker, H. R. (2021). Using fuzzy string matching for automated assessment of listener transcripts in speech intelligibility studies. Behavior Research Methods, 53(5), 1945-1953. doi:10.3758/s13428-021-01542-4.

    Abstract

    Many studies of speech perception assess the intelligibility of spoken sentence stimuli by means of transcription tasks (‘type out what you hear’). The intelligibility of a given stimulus is then often expressed in terms of percentage of words correctly reported from the target sentence. Yet scoring the participants’ raw responses for words correctly identified from the target sentence is a time-consuming task, and hence resource-intensive. Moreover, there is no consensus among speech scientists about what specific protocol to use for the human scoring, limiting the reliability of human scores. The present paper evaluates various forms of fuzzy string matching between participants’ responses and target sentences, as automated metrics of listener transcript accuracy. We demonstrate that one particular metric, the Token Sort Ratio, is a consistent, highly efficient, and accurate metric for automated assessment of listener transcripts, as evidenced by high correlations with human-generated scores (best correlation: r = 0.940) and a strong relationship to acoustic markers of speech intelligibility. Thus, fuzzy string matching provides a practical tool for assessment of listener transcript accuracy in large-scale speech intelligibility studies. See https://tokensortratio.netlify.app for an online implementation.
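
    The Token Sort Ratio itself is simple to sketch: sort the words of each string alphabetically, rejoin them, and compute a normalised string similarity between the results, so that word order in the transcript does not affect the score. A minimal standard-library approximation follows (fuzzy-matching packages typically use a Levenshtein-based ratio, so exact scores may differ slightly from difflib's):

```python
from difflib import SequenceMatcher

def token_sort_ratio(response: str, target: str) -> float:
    """Approximate Token Sort Ratio: lowercase both strings, sort their
    tokens alphabetically, rejoin, then score similarity on a 0-100 scale."""
    norm = lambda s: " ".join(sorted(s.lower().split()))
    return 100.0 * SequenceMatcher(None, norm(response), norm(target)).ratio()

# The same words in a different order still score a perfect 100
print(token_sort_ratio("the beaker pick up", "Pick up the beaker"))  # 100.0
```

    Because word-order differences are neutralised before matching, a transcript that reports all target words in a scrambled order is not penalised, which is what makes the metric robust for scoring free-form listener responses.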
  • Bosker, H. R., Badaya, E., & Corley, M. (2021). Discourse markers activate their, like, cohort competitors. Discourse Processes, 58(9), 837-851. doi:10.1080/0163853X.2021.1924000.

    Abstract

    Speech in everyday conversations is riddled with discourse markers (DMs), such as well, you know, and like. However, in many lab-based studies of speech comprehension, such DMs are typically absent from the carefully articulated and highly controlled speech stimuli. As such, little is known about how these DMs influence online word recognition. The present study specifically investigated the online processing of DM like and how it influences the activation of words in the mental lexicon. We specifically targeted the cohort competitor (CC) effect in the Visual World Paradigm: Upon hearing spoken instructions to “pick up the beaker,” human listeners also typically fixate—next to the target object—referents that overlap phonologically with the target word (cohort competitors such as beetle; CCs). However, several studies have argued that CC effects are constrained by syntactic, semantic, pragmatic, and discourse constraints. Therefore, the present study investigated whether DM like influences online word recognition by activating its cohort competitors (e.g., lightbulb). In an eye-tracking experiment using the Visual World Paradigm, we demonstrate that when participants heard spoken instructions such as “Now press the button for the, like … unicycle,” they showed anticipatory looks to the CC referent (lightbulb) well before hearing the target. This CC effect was sustained for a relatively long period of time, even despite hearing disambiguating information (i.e., the /k/ in like). Analysis of the reaction times also showed that participants were significantly faster to select CC targets (lightbulb) when preceded by DM like. These findings suggest that seemingly trivial DMs, such as like, activate their CCs, impacting online word recognition. Thus, we advocate a more holistic perspective on spoken language comprehension in naturalistic communication, including the processing of DMs.
  • Bosker, H. R., & Peeters, D. (2021). Beat gestures influence which speech sounds you hear. Proceedings of the Royal Society B: Biological Sciences, 288: 20202419. doi:10.1098/rspb.2020.2419.

    Abstract

    Beat gestures—spontaneously produced biphasic movements of the hand—are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world’s languages, how beat gestures impact spoken word recognition is unclear. Can these simple ‘flicks of the hand’ influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g. distinguishing OBject from obJECT), and in turn can influence what vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.

    Additional information

    example stimuli and experimental data
  • Bosker, H. R. (2013). Juncture (prosodic). In G. Khan (Ed.), Encyclopedia of Hebrew Language and Linguistics (pp. 432-434). Leiden: Brill.

    Abstract

    Prosodic juncture concerns the compartmentalization and partitioning of syntactic entities in spoken discourse by means of prosody. It has been argued that the Intonation Unit, defined by internal criteria and prosodic boundary phenomena (e.g., final lengthening, pitch reset, pauses), encapsulates the basic structural unit of spoken Modern Hebrew.
  • Bosker, H. R. (2021). The contribution of amplitude modulations in speech to perceived charisma. In B. Weiss, J. Trouvain, M. Barkat-Defradas, & J. J. Ohala (Eds.), Voice attractiveness: Prosody, phonology and phonetics (pp. 165-181). Singapore: Springer. doi:10.1007/978-981-15-6627-1_10.

    Abstract

    Speech contains pronounced amplitude modulations in the 1–9 Hz range, correlating with the syllabic rate of speech. Recent models of speech perception propose that this rhythmic nature of speech is central to speech recognition and has beneficial effects on language processing. Here, we investigated the contribution of amplitude modulations to the subjective impression listeners have of public speakers. The speech from US presidential candidates Hillary Clinton and Donald Trump in the three TV debates of 2016 was acoustically analyzed by means of modulation spectra. These indicated that Clinton’s speech had more pronounced amplitude modulations than Trump’s speech, particularly in the 1–9 Hz range. A subsequent perception experiment, with listeners rating the perceived charisma of (low-pass filtered versions of) Clinton’s and Trump’s speech, showed that more pronounced amplitude modulations (i.e., more ‘rhythmic’ speech) increased perceived charisma ratings. These outcomes highlight the important contribution of speech rhythm to charisma perception.
  • Bosker, H. R. (2013). Sibilant consonants. In G. Khan (Ed.), Encyclopedia of Hebrew Language and Linguistics (pp. 557-561). Leiden: Brill.

    Abstract

    Fricative consonants in Hebrew can be divided into bgdkpt and sibilants (ז, ס, צ, שׁ, שׂ). Hebrew sibilants have been argued to stem from Proto-Semitic affricates, laterals, interdentals and /s/. In standard Israeli Hebrew the sibilants are pronounced as [s] (ס and שׂ), [ʃ] (שׁ), [z] (ז), [ʦ] (צ).
  • Bosker, H. R., Pinget, A.-F., Quené, H., Sanders, T., & De Jong, N. H. (2013). What makes speech sound fluent? The contributions of pauses, speed and repairs. Language testing, 30(2), 159-175. doi:10.1177/0265532212455394.

    Abstract

    The oral fluency level of an L2 speaker is often used as a measure in assessing language proficiency. The present study reports on four experiments investigating the contributions of three fluency aspects (pauses, speed and repairs) to perceived fluency. In Experiment 1 untrained raters evaluated the oral fluency of L2 Dutch speakers. Using specific acoustic measures of pause, speed and repair phenomena, linear regression analyses revealed that pause and speed measures best predicted the subjective fluency ratings, and that repair measures contributed only very little. A second research question sought to account for these results by investigating perceptual sensitivity to acoustic pause, speed and repair phenomena, possibly accounting for the results from Experiment 1. In Experiments 2–4 three new groups of untrained raters rated the same L2 speech materials from Experiment 1 on the use of pauses, speed and repairs. A comparison of the results from perceptual sensitivity (Experiments 2–4) with fluency perception (Experiment 1) showed that perceptual sensitivity alone could not account for the contributions of the three aspects to perceived fluency. We conclude that listeners weigh the importance of the perceived aspects of fluency to come to an overall judgment.
  • Böttner, M. (1998). A collective extension of relational grammar. Logic Journal of the IGPL, 6(2), 175-193. doi:10.1093/jigpal/6.2.175.

    Abstract

    Relational grammar was proposed in Suppes (1976) as a semantical grammar for natural language. Fragments considered so far are restricted to distributive notions. In this article, relational grammar is extended to collective notions.
  • Bowerman, M. (2002). Taalverwerving, cognitie en cultuur. In T. Janssen (Ed.), Taal in gebruik: Een inleiding in de taalwetenschap (pp. 27-44). The Hague: Sdu.
  • Bowerman, M., & Croft, W. (2008). The acquisition of the English causative alternation. In M. Bowerman, & P. Brown (Eds.), Crosslinguistic perspectives on argument structure: Implications for learnability (pp. 279-306). Mahwah, NJ: Erlbaum.
  • Bowerman, M., Brown, P., Eisenbeiss, S., Narasimhan, B., & Slobin, D. I. (2002). Putting things in places: Developmental consequences of linguistic typology. In E. V. Clark (Ed.), Proceedings of the 31st Stanford Child Language Research Forum. Space in language location, motion, path, and manner (pp. 1-29). Stanford: Center for the Study of Language & Information.

    Abstract

    This study explores how adults and children describe placement events (e.g., putting a book on a table) in a range of different languages (Finnish, English, German, Russian, Hindi, Tzeltal Maya, Spanish, and Turkish). Results show that the eight languages grammatically encode placement events in two main ways (Talmy, 1985, 1991), but further investigation reveals fine-grained crosslinguistic variation within each of the two groups. Children are sensitive to these finer-grained characteristics of the input language at an early age, but only when such features are perceptually salient. Our study demonstrates that a unitary notion of 'event' does not suffice to characterize complex but systematic patterns of event encoding crosslinguistically, and that children are sensitive to multiple influences, including the distributional properties of the target language, in constructing these patterns in their own speech.
  • Bowerman, M. (2005). Why can't you "open" a nut or "break" a cooked noodle? Learning covert object categories in action word meanings. In L. Gershkoff-Stowe, & D. H. Rakison (Eds.), Building object categories in developmental time (pp. 209-243). Mahwah, NJ: Erlbaum.
  • Bowerman, M. (1975). Cross-linguistic similarities at two stages of syntactic development. In E. H. Lenneberg, & E. Lenneberg (Eds.), Foundations of language development: A multidisciplinary approach (pp. 267-282). New York: Academic Press.
  • Bowerman, M. (1975). Commentary on L. Bloom, P. Lightbown, & L. Hood, “Structure and variation in child language”. Monographs of the Society for Research in Child Development, 40(2), 80-90. Retrieved from http://www.jstor.org/stable/1165986.
  • Bowerman, M. (1987). Commentary: Mechanisms of language acquisition. In B. MacWhinney (Ed.), Mechanisms of language acquisition (pp. 443-466). Hillsdale, N.J.: Lawrence Erlbaum.
  • Bowerman, M. (1971). [Review of A. Bar Adon & W.F. Leopold (Eds.), Child language: A book of readings (Prentice Hall, 1971)]. Contemporary Psychology: APA Review of Books, 16, 808-809.
  • Bowerman, M., & Brown, P. (Eds.). (2008). Crosslinguistic perspectives on argument structure: Implications for learnability. Mahwah, NJ: Erlbaum.

    Abstract

    This book offers an interdisciplinary perspective on verb argument structure and its role in language acquisition. Much contemporary work in linguistics and psychology assumes that argument structure is strongly constrained by a set of universal principles, and that these principles are innate, providing children with certain “bootstrapping” strategies that help them home in on basic aspects of the syntax and lexicon of their language. Drawing on a broad range of crosslinguistic data, this volume shows that languages are much more diverse in their argument structure properties than has been realized. This diversity raises challenges for many existing proposals about language acquisition, affects the range of solutions that can be considered plausible, and highlights new acquisition puzzles that until now have passed unnoticed. The volume is the outcome of an integrated research project and comprises chapters by both specialists in first language acquisition and field linguists working on a variety of lesser-known languages. The research draws on original fieldwork and on adult data, child data, or both from thirteen languages from nine different language families. Some chapters offer typological perspectives, examining the basic structures of a given language with language-learnability issues in mind. Other chapters investigate specific problems of language acquisition in one or more languages. Taken as a whole, the volume illustrates how detailed work on crosslinguistic variation is critical to the development of insightful theories of language acquisition.
  • Bowerman, M., & Brown, P. (2008). Introduction. In M. Bowerman, & P. Brown (Eds.), Crosslinguistic perspectives on argument structure: Implications for learnability (pp. 1-26). Mahwah, NJ: Erlbaum.

    Abstract

    This chapter outlines two influential "bootstrapping" proposals that draw on presumed universals of argument structure to account for young children's acquisition of grammar (semantic bootstrapping) and verb meaning (syntactic bootstrapping), discusses controversial issues raised by these proposals, and summarizes the new insights contributed to the debate by each of the chapters in this volume.
  • Bowerman, M. (2005). Linguistics. In B. Hopkins (Ed.), The Cambridge encyclopedia of child development (pp. 497-501). Cambridge: Cambridge University Press.
  • Bowerman, M. (1986). First steps in acquiring conditionals. In E. C. Traugott, A. ter Meulen, J. S. Reilly, & C. A. Ferguson (Eds.), On conditionals (pp. 285-308). Cambridge: Cambridge University Press.

    Abstract

    This chapter is about the initial flowering of conditionals, if-(then) constructions, in children's spontaneous speech. It is motivated by two major theoretical interests. The first and most immediate is to understand the acquisition process itself. Conditionals are conceptually, and in many languages morphosyntactically, complex. What aspects of cognitive and grammatical development are implicated in their acquisition? Does learning take place in the context of particular interactions with other speakers? Where do conditionals fit in with the acquisition of other complex sentences? What are the semantic, syntactic and pragmatic properties of the first conditionals? Underlying this first interest is a second, more strictly linguistic one. Research of recent years has found increasing evidence that natural languages are constrained in certain ways. The source of these constraints is not yet clearly understood, but it is widely assumed that some of them derive ultimately from properties of children's capacity for language acquisition.

  • Bowerman, M. (2004). From universal to language-specific in early grammatical development [Reprint]. In K. Trott, S. Dobbinson, & P. Griffiths (Eds.), The child language reader (pp. 131-146). London: Routledge.

    Abstract

    Attempts to explain children's grammatical development often assume a close initial match between units of meaning and units of form; for example, agents are said to map to sentence-subjects and actions to verbs. The meanings themselves, according to this view, are not influenced by language, but reflect children's universal non-linguistic way of understanding the world. This paper argues that, contrary to this position, meaning as it is expressed in children's early sentences is, from the beginning, organized on the basis of experience with the grammar and lexicon of a particular language. As a case in point, children learning English and Korean are shown to express meanings having to do with directed motion according to language-specific principles of semantic and grammatical structuring from the earliest stages of word combination.
  • Bowerman, M. (1988). Inducing the latent structure of language. In F. Kessel (Ed.), The development of language and language researchers: Essays presented to Roger Brown (pp. 23-49). Hillsdale, N.J.: Lawrence Erlbaum.
  • Bowerman, M. (2002). Mapping thematic roles onto syntactic functions: Are children helped by innate linking rules? [Reprint]. In Mouton Classics: From syntax to cognition, from phonology to text (vol.2) (pp. 495-531). Berlin: Mouton de Gruyter.

    Abstract

    Reprinted from: Bowerman, M. (1990). Mapping thematic roles onto syntactic functions: Are children helped by innate linking rules? Linguistics, 28, 1253-1289.
  • Bowerman, M., Gullberg, M., Majid, A., & Narasimhan, B. (2004). Put project: The cross-linguistic encoding of placement events. In A. Majid (Ed.), Field Manual Volume 9 (pp. 10-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492916.

    Abstract

    How similar are the event concepts encoded by different languages? So far, few event domains have been investigated in any detail. The PUT project extends the systematic cross-linguistic exploration of event categorisation to a new domain, that of placement events (putting things in places and removing them from places). The goal of this task is to explore cross-linguistic universality and variability in the semantic categorisation of placement events (e.g., ‘putting a cup on the table’).

    Additional information

    2004_Put_project_video_stimuli.zip
  • Bowerman, M. (1988). The 'no negative evidence' problem: How do children avoid constructing an overly general grammar? In J. Hawkins (Ed.), Explaining language universals (pp. 73-101). Oxford: Basil Blackwell.
  • Li, P., & Bowerman, M. (1998). The acquisition of lexical and grammatical aspect in Chinese. First Language, 18, 311-350. doi:10.1177/014272379801805404.

    Abstract

    This study reports three experiments on how children learning Mandarin Chinese comprehend and use aspect markers. These experiments examine the role of lexical aspect in children's acquisition of grammatical aspect. Results provide converging evidence for children's early sensitivity to (1) the association between atelic verbs and the imperfective aspect markers zai, -zhe, and -ne, and (2) the association between telic verbs and the perfective aspect marker -le. Children did not show a sensitivity in their use or understanding of aspect markers to the difference between stative and activity verbs or between semelfactive and activity verbs. These results are consistent with Slobin's (1985) basic child grammar hypothesis that the contrast between process and result is important in children's early acquisition of temporal morphology. In contrast, they are inconsistent with Bickerton's (1981, 1984) language bioprogram hypothesis that the distinctions between state and process and between punctual and nonpunctual are preprogrammed into language learners. We suggest new ways of looking at the results in the light of recent probabilistic hypotheses that emphasize the role of input, prototypes and connectionist representations.
  • Bowerman, M. (1988). The child's expression of meaning: Expanding relationships among lexicon, syntax, and morphology [Reprint]. In M. B. Franklin, & S. S. Barten (Eds.), Child language: A reader (pp. 106-117). Oxford: Oxford University Press.

    Abstract

    Reprinted from: Bowerman, M. (1981). The child's expression of meaning: Expanding relationships among lexicon, syntax, and morphology. In H. Winitz (Ed.), Native language and foreign language acquisition (pp. 172-189). New York: New York Academy of Sciences.
  • Boyle, W., Lindell, A. K., & Kidd, E. (2013). Investigating the role of verbal working memory in young children's sentence comprehension. Language Learning, 63(2), 211-242. doi:10.1111/lang.12003.

    Abstract

    This study considers the role of verbal working memory in sentence comprehension in typically developing English-speaking children. Fifty-six (N = 56) children aged 4;0–6;6 completed a test of language comprehension that contained sentences which varied in complexity, standardized tests of vocabulary and nonverbal intelligence, and three tests of memory that measured the three verbal components of Baddeley's model of Working Memory (WM): the phonological loop, the episodic buffer, and the central executive. The results showed that children experienced most difficulty comprehending sentences that contained noncanonical word order (passives and object relative clauses). A series of linear mixed effects models were run to analyze the contribution of each component of WM to sentence comprehension. In contrast to most previous studies, the measure of the central executive did not predict comprehension accuracy. A canonicity by episodic buffer interaction showed that the episodic buffer measure was positively associated with better performance on the noncanonical sentences. The results are discussed with reference to capacity-limit and experience-dependent approaches to language comprehension.
  • Braden, R. O., Amor, D. J., Fisher, S. E., Mei, C., Myers, C. T., Mefford, H., Gill, D., Srivastava, S., Swanson, L. C., Goel, H., Scheffer, I. E., & Morgan, A. T. (2021). Severe speech impairment is a distinguishing feature of FOXP1-related disorder. Developmental Medicine & Child Neurology, 63(12), 1417-1426. doi:10.1111/dmcn.14955.

    Abstract

    Aim
    To delineate the speech and language phenotype of a cohort of individuals with FOXP1-related disorder.

    Method
    We administered a standardized test battery to examine speech and oral motor function, receptive and expressive language, non-verbal cognition, and adaptive behaviour. Clinical history and cognitive assessments were analysed together with speech and language findings.

    Results
    Twenty-nine patients (17 females, 12 males; mean age 9y 6mo; median age 8y [range 2y 7mo–33y]; SD 6y 5mo) with pathogenic FOXP1 variants (14 truncating, three missense, three splice site, one in-frame deletion, eight cytogenetic deletions; 28 out of 29 were de novo variants) were studied. All had atypical speech, with 21 being verbal and eight minimally verbal. All verbal patients had dysarthric and apraxic features, with phonological deficits in most (14 out of 16). Language scores were low overall. In the 21 individuals who carried truncating or splice site variants and small deletions, expressive abilities were relatively preserved compared with comprehension.

    Interpretation
    FOXP1-related disorder is characterized by a complex speech and language phenotype with prominent dysarthria, broader motor planning and programming deficits, and linguistic-based phonological errors. Diagnosis of the speech phenotype associated with FOXP1-related dysfunction will inform early targeted therapy.

    Additional information

    figure S1 table S1
  • Brand, S., & Ernestus, M. (2021). Reduction of word-final obstruent-liquid-schwa clusters in Parisian French. Corpus Linguistics and Linguistic Theory, 17(1), 249-285. doi:10.1515/cllt-2017-0067.

    Abstract

    This corpus study investigated pronunciation variants of word-final obstruent-liquid-schwa (OLS) clusters in nouns in casual Parisian French. Results showed that at least one phoneme was absent in 80.7% of the 291 noun tokens in the dataset, and that the whole cluster was absent (e.g., [mis] for ministre) in no less than 15.5% of the tokens. We demonstrate that phonemes are not always completely absent, but that they may leave traces on neighbouring phonemes. Further, the clusters display undocumented voice assimilation patterns. Statistical modelling showed that a phoneme is most likely to be absent if the following phoneme is also absent. The durations of the phonemes are conditioned particularly by the position of the word in the prosodic phrase. We argue, on the basis of three different types of evidence, that in French word-final OLS clusters, the absence of obstruents is mainly due to gradient reduction processes, whereas the absence of schwa and liquids may also be due to categorical deletion processes.
  • Brandler, W. M., Morris, A. P., Evans, D. M., Scerri, T. S., Kemp, J. P., Timpson, N. J., St Pourcain, B., Davey Smith, G., Ring, S. M., Stein, J., Monaco, A. P., Talcott, J. B., Fisher, S. E., Webber, C., & Paracchini, S. (2013). Common variants in left/right asymmetry genes and pathways are associated with relative hand skill. PLoS Genetics, 9(9): e1003751. doi:10.1371/journal.pgen.1003751.

    Abstract

    Humans display structural and functional asymmetries in brain organization, strikingly with respect to language and handedness. The molecular basis of these asymmetries is unknown. We report a genome-wide association study meta-analysis for a quantitative measure of relative hand skill in individuals with dyslexia [reading disability (RD)] (n = 728). The most strongly associated variant, rs7182874 (P = 8.68×10−9), is located in PCSK6, further supporting an association we previously reported. We also confirmed the specificity of this association in individuals with RD; the same locus was not associated with relative hand skill in a general population cohort (n = 2,666). As PCSK6 is known to regulate NODAL in the development of left/right (LR) asymmetry in mice, we developed a novel approach to GWAS pathway analysis, using gene-set enrichment to test for an over-representation of highly associated variants within the orthologs of genes whose disruption in mice yields LR asymmetry phenotypes. Four out of 15 LR asymmetry phenotypes showed an over-representation (FDR≤5%). We replicated three of these phenotypes; situs inversus, heterotaxia, and double outlet right ventricle, in the general population cohort (FDR≤5%). Our findings lead us to propose that handedness is a polygenic trait controlled in part by the molecular mechanisms that establish LR body asymmetry early in development.
  • Brandmeyer, A., Sadakata, M., Spyrou, L., McQueen, J. M., & Desain, P. (2013). Decoding of single-trial auditory mismatch responses for online perceptual monitoring and neurofeedback. Frontiers in Neuroscience, 7: 265. doi:10.3389/fnins.2013.00265.

    Abstract

    Multivariate pattern classification methods are increasingly applied to neuroimaging data in the context of both fundamental research and in brain-computer interfacing approaches. Such methods provide a framework for interpreting measurements made at the single-trial level with respect to a set of two or more distinct mental states. Here, we define an approach in which the output of a binary classifier trained on data from an auditory mismatch paradigm can be used for online tracking of perception and as a neurofeedback signal. The auditory mismatch paradigm is known to induce distinct perceptual states related to the presentation of high- and low-probability stimuli, which are reflected in event-related potential (ERP) components such as the mismatch negativity (MMN). The first part of this paper illustrates how pattern classification methods can be applied to data collected in an MMN paradigm, including discussion of the optimization of preprocessing steps, the interpretation of features and how the performance of these methods generalizes across individual participants and measurement sessions. We then go on to show that the output of these decoding methods can be used in online settings as a continuous index of single-trial brain activation underlying perceptual discrimination. We conclude by discussing several potential domains of application, including neurofeedback, cognitive monitoring and passive brain-computer interfaces.

    Additional information

    Brandmeyer_etal_2013a.pdf
  • Brandmeyer, A., Farquhar, J., McQueen, J. M., & Desain, P. (2013). Decoding speech perception by native and non-native speakers using single-trial electrophysiological data. PLoS One, 8: e68261. doi:10.1371/journal.pone.0068261.

    Abstract

    Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: 1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? 2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition.
  • Braun, B. (2005). Production and perception of thematic contrast in German. Oxford: Lang.
  • Braun, B., & Chen, A. (2008). Now move X into cell Y: intonation of 'now' in on-line reference resolution. In P. Barbosa, S. Madureira, & C. Reis (Eds.), Proceedings of the 4th International Conferences on Speech Prosody (pp. 477-480). Campinas: Editora RG/CNPq.

    Abstract

    Prior work has shown that listeners efficiently exploit prosodic information both in the discourse referent and in the preceding modifier to identify the referent. This study investigated whether listeners make use of prosodic information prior to the ENTIRE referential expression, i.e. the intonational realization of the adverb 'now', to identify the upcoming referent. The adverb 'now' can be used to draw attention to contrasting information in the sentence (e.g., 'Put the book on the bookshelf. Now put the pen on the bookshelf.'). It has been shown for Dutch that nu ('now') is realized prosodically differently in different information structural contexts, though certain realizations occur across information structural contexts. In an eye-tracking experiment we tested two hypotheses regarding the role of the intonation of nu in online reference resolution in Dutch: the “irrelevant intonation” hypothesis, whereby listeners make no use of the intonation of nu, vs. the “linguistic intonation” hypothesis, whereby listeners are sensitive to the conditional probabilities between different intonational realizations of nu and the referent. Our findings show that listeners employ the intonation of nu to identify the upcoming referent. They are misled by an accented nu but correctly interpret an unaccented nu as referring to a new, unmentioned entity.
  • Braun, B., Weber, A., & Crocker, M. (2005). Does narrow focus activate alternative referents? In Proceedings of the 9th European Conference on Speech Communication and Technology (pp. 1709-1712).

    Abstract

    Narrow focus refers to accent placement that forces one interpretation of a sentence, which is then often perceived contrastively. Narrow focus is formalised in terms of alternative sets, i.e. contextually or situationally salient alternatives. In this paper, we investigate whether this model is valid also in human utterance processing. We present an eye-tracking experiment to study listeners’ expectations (i.e. eye-movements) with respect to upcoming referents. Some of the objects contrast in colour with objects that were previously referred to, others do not; the objects are referred to with either a narrow focus on the colour adjective or with broad focus on the noun. Results show that narrow focus on the adjective increases early fixations to contrastive referents. Narrow focus hence activates alternative referents in human utterance processing.
  • Braun, B., Lemhöfer, K., & Cutler, A. (2008). English word stress as produced by English and Dutch speakers: The role of segmental and suprasegmental differences. In Proceedings of Interspeech 2008 (pp. 1953-1953).

    Abstract

    It has been claimed that Dutch listeners use suprasegmental cues (duration, spectral tilt) more than English listeners in distinguishing English word stress. We tested whether this asymmetry also holds in production, comparing the realization of English word stress by native English speakers and Dutch speakers. Results confirmed that English speakers centralize unstressed vowels more, while Dutch speakers of English make more use of suprasegmental differences.
  • Braun, B., Tagliapietra, L., & Cutler, A. (2008). Contrastive utterances make alternatives salient: Cross-modal priming evidence. In Proceedings of Interspeech 2008 (pp. 69-69).

    Abstract

    Sentences with contrastive intonation are assumed to presuppose contextual alternatives to the accented elements. Two cross-modal priming experiments tested in Dutch whether such contextual alternatives are automatically available to listeners. Contrastive associates – but not non-contrastive associates – were facilitated only when primes were produced in sentences with contrastive intonation, indicating that contrastive intonation makes unmentioned contextual alternatives immediately available. Possibly, contrastive contours trigger a “presupposition resolution mechanism” by which these alternatives become salient.
  • De Bree, E., Van Alphen, P. M., Fikkert, P., & Wijnen, F. (2008). Metrical stress in comprehension and production of Dutch children at risk of dyslexia. In H. Chan, H. Jacob, & E. Kapia (Eds.), Proceedings of the 32nd Annual Boston University Conference on Language Development (pp. 60-71). Somerville, Mass: Cascadilla Press.

    Abstract

    The present study compared the role of metrical stress in comprehension and production of three-year-old children with a familial risk of dyslexia with that of normally developing children to further explore the phonological deficit in dyslexia. A visual fixation task with stress (mis-)matches in bisyllabic words, as well as a non-word repetition task with bisyllabic targets were presented to the control and at-risk children. Results show that the at-risk group was less sensitive to stress mismatches in word recognition than the control group. Correct production of metrical stress patterns did not differ significantly between the groups, but the percentages of phonemes produced correctly were lower for the at-risk than the control group. These findings suggest that processing of metrical stress is not impaired in at-risk children, but that this group cannot exploit metrical stress for speech in word recognition. This study demonstrates the importance of including suprasegmental skills in dyslexia research.
  • Brehm, L., & Meyer, A. S. (2021). Planning when to say: Dissociating cue use in utterance initiation using cross-validation. Journal of Experimental Psychology: General, 150(9), 1772-1799. doi:10.1037/xge0001012.

    Abstract

    In conversation, turns follow each other with minimal gaps. To achieve this, speakers must launch their utterances shortly before the predicted end of the partner’s turn. We examined the relative importance of cues to partner utterance content and partner utterance length for launching coordinated speech. In three experiments, Dutch adult participants had to produce prepared utterances (e.g., vier, “four”) immediately after a recording of a confederate’s utterance (zeven, “seven”). To assess the role of corepresenting content versus attending to speech cues in launching coordinated utterances, we varied whether the participant could see the stimulus being named by the confederate, the confederate prompt’s length, and whether within a block of trials, the confederate prompt’s length was predictable. We measured how these factors affected the gap between turns and the participants’ allocation of visual attention while preparing to speak. Using a machine-learning technique, model selection by k-fold cross-validation, we found that gaps were most strongly predicted by cues from the confederate speech signal, though some benefit was also conferred by seeing the confederate’s stimulus. This shows that, at least in a simple laboratory task, speakers rely more on cues in the partner’s speech than corepresentation of their utterance content.
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2021). Probabilistic online processing of sentence anomalies. Language, Cognition and Neuroscience, 36(8), 959-983. doi:10.1080/23273798.2021.1900579.

    Abstract

    Listeners can successfully interpret the intended meaning of an utterance even when it contains errors or other unexpected anomalies. The present work combines an online measure of attention to sentence referents (visual world eye-tracking) with offline judgments of sentence meaning to disclose how the interpretation of anomalous sentences unfolds over time in order to explore mechanisms of non-literal processing. We use a metalinguistic judgment in Experiment 1 and an elicited imitation task in Experiment 2. In both experiments, we focus on one morphosyntactic anomaly (Subject-verb agreement; The key to the cabinets literally *were … ) and one semantic anomaly (Without; Lulu went to the gym without her hat ?off) and show that non-literal referents to each are considered upon hearing the anomalous region of the sentence. This shows that listeners understand anomalies by overwriting or adding to an initial interpretation and that this occurs incrementally and adaptively as the sentence unfolds.
  • Brehm, L., & Bock, K. (2013). What counts in grammatical number agreement? Cognition, 128(2), 149-169. doi:10.1016/j.cognition.2013.03.009.

    Abstract

    Both notional and grammatical number affect agreement during language production. To explore their workings, we investigated how semantic integration, a type of conceptual relatedness, produces variations in agreement (Solomon & Pearlmutter, 2004). These agreement variations are open to competing notional and lexical–grammatical number accounts. The notional hypothesis is that changes in number agreement reflect differences in referential coherence: More coherence yields more singularity. The lexical–grammatical hypothesis is that changes in agreement arise from competition between nouns differing in grammatical number: More competition yields more plurality. These hypotheses make opposing predictions about semantic integration. On the notional hypothesis, semantic integration promotes singular agreement. On the lexical–grammatical hypothesis, semantic integration promotes plural agreement. We tested these hypotheses with agreement elicitation tasks in two experiments. Both experiments supported the notional hypothesis, with semantic integration creating faster and more frequent singular agreement. This implies that referential coherence mediates the effect of semantic integration on number agreement.
  • Broeder, D., Brugman, H., Oostdijk, N., & Wittenburg, P. (2004). Towards Dynamic Corpora: Workshop on compiling and processing spoken corpora. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 59-62). Paris: European Language Resource Association.
  • Broeder, D., Wittenburg, P., & Crasborn, O. (2004). Using Profiles for IMDI Metadata Creation. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 1317-1320). Paris: European Language Resources Association.
  • Broeder, D., Brugman, H., & Senft, G. (2005). Documentation of languages and archiving of language data at the Max Planck Institute for Psycholinguistics in Nijmegen. Linguistische Berichte, no. 201, 89-103.
  • Broeder, D., Declerck, T., Romary, L., Uneson, M., Strömqvist, S., & Wittenburg, P. (2004). A large metadata domain of language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D., Nathan, D., Strömqvist, S., & Van Veenendaal, R. (2008). Building a federation of Language Resource Repositories: The DAM-LR project and its continuation within CLARIN. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    The DAM-LR project aims at virtually integrating various European language resource archives that allow users to navigate and operate in a single unified domain of language resources. This type of integration introduces Grid technology to the humanities disciplines and forms a federation of archives. The complete architecture is designed based on a few well-known components. This is considered the basis for building a research infrastructure for Language Resources as is planned within the CLARIN project. The DAM-LR project was purposefully started with only a small number of participants for flexibility and to avoid complex contract negotiations with respect to legal issues. Now that we have gained insights into the basic technology issues and organizational issues, it is foreseen that the federation will be expanded considerably within the CLARIN project, which will also address the associated legal issues.
  • Broeder, D. (2004). 40,000 IMDI sessions. Language Archive Newsletter, 1(4), 12-12.
  • Broeder, D., Nava, M., & Declerck, T. (2004). INTERA - a Distributed Domain of Metadata Resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D., & Offenga, F. (2004). IMDI Metadata Set 3.0. Language Archive Newsletter, 1(2), 3-3.
  • Broeder, D., Offenga, F., & Willems, D. (2002). Metadata tools supporting controlled vocabulary services. In M. Rodriguez González, & C. Paz Suárez Araujo (Eds.), Third international conference on language resources and evaluation (pp. 1055-1059). Paris: European Language Resources Association.

    Abstract

    Within the ISLE Metadata Initiative (IMDI) project, a user-friendly editor to enter metadata descriptions and a browser operating on the linked metadata descriptions were developed. Both tools support the usage of Controlled Vocabulary (CV) repositories by means of the specification of a URL where the formal CV definition data is available.
  • Broeder, D., Wittenburg, P., Declerck, T., & Romary, L. (2002). LREP: A language repository exchange protocol. In M. Rodriguez González, & C. Paz Suárez Araujo (Eds.), Third international conference on language resources and evaluation (pp. 1302-1305). Paris: European Language Resources Association.

    Abstract

    The recent increase in the number and complexity of the language resources available on the Internet is followed by a similar increase of available tools for linguistic analysis. Ideally the user does not need to be confronted with the question of how to match tools with resources. If resource repositories and tool repositories offer adequate metadata information and a suitable exchange protocol is developed, this matching process could be performed (semi-)automatically.
  • Broeder, D., Declerck, T., Hinrichs, E., Piperidis, S., Romary, L., Calzolari, N., & Wittenburg, P. (2008). Foundation of a component-based flexible registry for language resources and technology. In N. Calzorali (Ed.), Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008) (pp. 1433-1436). European Language Resources Association (ELRA).

    Abstract

    Within the CLARIN e-science infrastructure project it is foreseen to develop a component-based registry for metadata for Language Resources and Language Technology. With this registry it is hoped to overcome the problems of the currently available systems with respect to inflexible fixed schemas, unsuitable terminology and interoperability problems. The registry will address interoperability needs by referring to a shared vocabulary registered in data category registries as they are suggested by ISO.
  • Broeder, D., Auer, E., Kemps-Snijders, M., Sloetjes, H., Wittenburg, P., & Zinn, C. (2008). Managing very large multimedia archives and their integration into federations. In P. Manghi, P. Pagano, & P. Zezula (Eds.), First Workshop in Very Large Digital Libraries (VLDL 2008).
  • Broersma, M., & Cutler, A. (2008). Phantom word activation in L2. System, 36(1), 22-34. doi:10.1016/j.system.2007.11.003.

    Abstract

    L2 listening can involve the phantom activation of words which are not actually in the input. All spoken-word recognition involves multiple concurrent activation of word candidates, with selection of the correct words achieved by a process of competition between them. L2 listening involves more such activation than L1 listening, and we report two studies illustrating this. First, in a lexical decision study, L2 listeners accepted (but L1 listeners did not accept) spoken non-words such as groof or flide as real English words. Second, a priming study demonstrated that the same spoken non-words made recognition of the real words groove, flight easier for L2 (but not L1) listeners, suggesting that, for the L2 listeners only, these real words had been activated by the spoken non-word input. We propose that further understanding of the activation and competition process in L2 lexical processing could lead to new understanding of L2 listening difficulty.
  • Broersma, M. (2005). Phonetic and lexical processing in a second language. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.58294.
  • Broersma, M. (2005). Perception of familiar contrasts in unfamiliar positions. Journal of the Acoustical Society of America, 117(6), 3890-3901. doi:10.1121/1.1906060.
  • Broersma, M. (2002). Comprehension of non-native speech: Inaccurate phoneme processing and activation of lexical competitors. In ICSLP-2002 (pp. 261-264). Denver: Center for Spoken Language Research, U. of Colorado Boulder.

    Abstract

    Native speakers of Dutch with English as a second language and native speakers of English participated in an English lexical decision experiment. Phonemes in real words were replaced by others from which they are hard to distinguish for Dutch listeners. Non-native listeners judged the resulting near-words as words more often than native listeners did. This not only happened when the phonemes that were exchanged did not exist as separate phonemes in the native language Dutch, but also when phoneme pairs that do exist in Dutch were used in word-final position, where they are not distinctive in Dutch. In an English bimodal priming experiment with similar groups of participants, word pairs were used which differed in one phoneme. These phonemes were hard to distinguish for the non-native listeners. Whereas in native listening both words inhibited each other, in non-native listening presentation of one word led to unresolved competition between both words. The results suggest that inaccurate phoneme processing by non-native listeners leads to the activation of spurious lexical competitors.
  • Broersma, M. (2008). Flexible cue use in nonnative phonetic categorization (L). Journal of the Acoustical Society of America, 124(2), 712-715. doi:10.1121/1.2940578.

    Abstract

    Native and nonnative listeners categorized final /v/ versus /f/ in English nonwords. Fricatives followed phonetically long originally /v/-preceding or short originally /f/-preceding vowels. Vowel duration was constant for each participant and sometimes mismatched other voicing cues. Previous results showed that English but not Dutch listeners whose L1 has no final voicing contrast nevertheless used the misleading vowel duration for /v/-/f/ categorization. New analyses showed that Dutch listeners did use vowel duration initially, but quickly reduced its use, whereas the English listeners used it consistently throughout the experiment. Thus, nonnative listeners adapted to the stimuli more flexibly than native listeners did.
  • Broersma, M., & Kolkman, K. M. (2004). Lexical representation of non-native phonemes. In S. Kim, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1241-1244). Seoul: Sunjin Printing Co.
  • Brouwer, S. (2013). Continuous recognition memory for spoken words in noise. Proceedings of Meetings on Acoustics, 19: 060117. doi:10.1121/1.4798781.

    Abstract

    Previous research has shown that talker variability affects recognition memory for spoken words (Palmeri et al., 1993). This study examines whether additive noise is similarly retained in memory for spoken words. In a continuous recognition memory task, participants listened to a list of spoken words mixed with noise consisting of a pure tone or of high-pass filtered white noise. The noise and speech were in non-overlapping frequency bands. In Experiment 1, listeners indicated whether each spoken word in the list was OLD (heard before in the list) or NEW. Results showed that listeners were as accurate and as fast at recognizing a word as old if it was repeated with the same or different noise. In Experiment 2, listeners also indicated whether words judged as OLD were repeated with the same or with a different type of noise. Results showed that listeners benefitted from hearing words presented with the same versus different noise. These data suggest that spoken words and temporally-overlapping but spectrally non-overlapping noise are retained or reconstructed together for explicit, but not for implicit, recognition memory. This indicates that the extent to which noise variability is retained seems to depend on the depth of processing.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2013). Discourse context and the recognition of reduced and canonical spoken words. Applied Psycholinguistics, 34, 519-539. doi:10.1017/S0142716411000853.

    Abstract

    In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" competitor, a "canonical form" competitor and an unrelated distractor. Target sentences were presented in isolation or with a wider discourse context. Experiment 1 revealed that target recognition was facilitated by wider discourse information. Importantly, the recognition of reduced forms improved significantly when preceded by strongly rather than by weakly supportive discourse contexts. This was not the case for canonical forms: listeners' target word recognition was not dependent on the degree of supportive context. Experiment 2 showed that the differential context effects in Experiment 1 were not due to an additional amount of speaker information. Thus, these data suggest that in natural settings a strongly supportive discourse context is more important for the recognition of reduced forms than the recognition of canonical forms.
  • Brouwer, S., Cornips, L., & Hulk, A. (2008). Misrepresentation of Dutch neuter gender in older bilingual children? In B. Hazdenar, & E. Gavruseva (Eds.), Current trends in child second language acquisition: A generative perspective (pp. 83-96). Amsterdam: Benjamins.
  • Brown, P. (2004). Position and motion in Tzeltal frog stories: The acquisition of narrative style. In S. Strömqvist, & L. Verhoeven (Eds.), Relating events in narrative: Typological and contextual perspectives (pp. 37-57). Mahwah: Erlbaum.

    Abstract

    How are events framed in narrative? Speakers of English (a 'satellite-framed' language), when 'reading' Mercer Mayer's wordless picture book 'Frog, Where Are You?', find the story self-evident: a boy has a dog and a pet frog; the frog escapes and runs away; the boy and dog look for it across hill and dale, through woods and over a cliff, until they find it and return home with a baby frog child of the original pet frog. In Tzeltal, as spoken in a Mayan community in southern Mexico, the story is somewhat different, because the language structures event descriptions differently. Tzeltal is in part a 'verb-framed' language with a set of Path-encoding motion verbs, so that the bare bones of the Frog story can consist of verbs translating as 'go'/'pass by'/'ascend'/'descend'/'arrive'/'return'. But Tzeltal also has satellite-framing adverbials, grammaticized from the same set of motion verbs, which encode the direction of motion or the orientation of static arrays. Furthermore, motion is not generally encoded barebones, but vivid pictorial detail is provided by positional verbs which can describe the position of the Figure as an outcome of a motion event; motion and stasis are thereby combined in a single event description. (For example: jipot jawal "he has been thrown (by the deer) lying_face_upwards_spread-eagled".) This paper compares the use of these three linguistic resources in frog narratives from 14 Tzeltal adults and 21 children, looks at their development in the narratives of children between the ages of 4-12, and considers the results in relation to those from Berman and Slobin's (1996) comparative study of adult and child Frog stories.
  • Brown, P. (2008). Up, down, and across the land: Landscape terms and place names in Tzeltal. Language Sciences, 30(2/3), 151-181. doi:10.1016/j.langsci.2006.12.003.

    Abstract

    The Tzeltal language is spoken in a mountainous region of southern Mexico by some 280,000 Mayan corn farmers. This paper focuses on landscape and place vocabulary in the Tzeltal municipio of Tenejapa, where speakers use an absolute system of spatial reckoning based on the overall uphill (southward)/downhill (northward) slope of the land. The paper examines the formal and functional properties of the Tenejapa Tzeltal vocabulary labelling features of the local landscape and relates it to spatial vocabulary for describing locative relations, including the uphill/downhill axis for spatial reckoning as well as body part terms for specifying parts of locative grounds. I then examine the local place names, discuss their semantic and morphosyntactic properties, and relate them to the landscape vocabulary, to spatial vocabulary, and also to cultural narratives about events associated with particular places. I conclude with some observations on the determinants of landscape and place terminology in Tzeltal, and what this vocabulary and how it is used reveal about the conceptualization of landscape and places.
  • Brown, P. (2008). Verb specificity and argument realization in Tzeltal child language. In M. Bowerman, & P. Brown (Eds.), Crosslinguistic perspectives on argument structure: Implications for learnability (pp. 167-189). Mahwah, NJ: Erlbaum.

    Abstract

    How do children learn a language whose arguments are freely ellipsed? The Mayan language Tzeltal, spoken in southern Mexico, is such a language. The acquisition pattern for Tzeltal is distinctive, in at least two ways: verbs predominate even in children’s very early production vocabulary, and these verbs are often very specific in meaning. This runs counter to the patterns found in most Indo-European languages, where nouns tend to predominate in early vocabulary and children’s first verbs tend to be ‘light’ or semantically general. Here I explore the idea that noun ellipsis and ‘heavy’ verbs are related: the ‘heavy’ verbs restrict the nominal reference and so allow recovery of the ‘missing’ nouns. Using data drawn from videotaped interaction of four Tzeltal children and their caregivers, I examined transitive clauses in an adult input sample and in child speech, and tested the hypothesis that direct object arguments are less likely to be realized overtly with semantically specific verbs than with general verbs. This hypothesis was confirmed, both for the adult input and for the speech of the children (aged 3;4-3;9). It is therefore possible that argument ellipsis could provide a clue to verb semantics (specific vs. general) for the Tzeltal child.
  • Brown, P. (2005). What does it mean to learn the meaning of words? [Review of the book How children learn the meanings of words by Paul Bloom]. Journal of the Learning Sciences, 14(2), 293-300. doi:10.1207/s15327809jls1402_6.
  • Brown, P., Sicoli, M. A., & Le Guen, O. (2021). Cross-speaker repetition and epistemic stance in Tzeltal, Yucatec, and Zapotec conversations. Journal of Pragmatics, 183, 256-272. doi:10.1016/j.pragma.2021.07.005.

    Abstract

    As a turn-design strategy, repeating another has been described for English as a fairly restricted way of constructing a response, which, through re-saying what another speaker just said, is exploitable for claiming epistemic primacy, and thus avoided when a second speaker has no direct experience. Conversations in Mesoamerican languages present a challenge to the generality of this claim. This paper examines the epistemics of dialogic repetition in video-recordings of conversations in three Indigenous languages of Mexico: Tzeltal and Yucatec Maya, both spoken in southeastern Mexico, and Lachixío Zapotec, spoken in Oaxaca. We develop a typology of repetition in different sequential environments. We show that while the functions of repeats in Mesoamerica overlap with the range of repeat functions described for English, there is an additional epistemic environment in the Mesoamerican routine of repeating for affirmation: a responding speaker can repeat to affirm something introduced by another speaker of which s/he has no prior knowledge. We argue that, while dialogic repetition is a universally available turn-design strategy that makes epistemics potentially relevant, cross-cultural comparison reveals that cultural preferences intervene such that, in Mesoamerican conversations, repetition co-constructs knowledge as collective process over which no individual participant has final authority or ownership.

  • Brown, A. R., Pouw, W., Brentari, D., & Goldin-Meadow, S. (2021). People are less susceptible to illusion when they use their hands to communicate rather than estimate. Psychological Science, 32, 1227-1237. doi:10.1177/0956797621991552.

    Abstract

    When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.

  • Brown, P. (1998). Children's first verbs in Tzeltal: Evidence for an early verb category. Linguistics, 36(4), 713-753.

    Abstract

    A major finding in studies of early vocabulary acquisition has been that children tend to learn a lot of nouns early but make do with relatively few verbs, among which semantically general-purpose verbs like do, make, get, have, give, come, go, and be play a prominent role. The preponderance of nouns is explained in terms of nouns labelling concrete objects being “easier” to learn than verbs, which label relational categories. Nouns label “natural categories” observable in the world, verbs label more linguistically and culturally specific categories of events linking objects belonging to such natural categories (Gentner 1978, 1982; Clark 1993). This view has been challenged recently by data from children learning certain non-Indo-European languages like Korean, where children have an early verb explosion and verbs dominate in early child utterances. Children learning the Mayan language Tzeltal also acquire verbs early, prior to any noun explosion as measured by production. Verb types are roughly equivalent to noun types in children’s beginning production vocabulary and soon outnumber them. At the one-word stage children’s verbs mostly have the form of a root stripped of affixes, correctly segmented despite structural difficulties. Quite early (before the MLU 2.0 point) there is evidence of productivity of some grammatical markers (although they are not always present): the person-marking affixes cross-referencing core arguments, and the completive/incompletive aspectual distinctions. The Tzeltal facts argue against a natural-categories explanation for children’s early vocabulary, in favor of a view emphasizing the early effects of language-specific properties of the input. They suggest that when and how a child acquires a “verb” category is centrally influenced by the structural properties of the input, and that the semantic structure of the language - where the referential load is concentrated - plays a fundamental role in addition to distributional facts.
  • Brown, P. (1998). Conversational structure and language acquisition: The role of repetition in Tzeltal adult and child speech. Journal of Linguistic Anthropology, 8(2), 197-221. doi:10.1525/jlin.1998.8.2.197.

    Abstract

    When Tzeltal children in the Mayan community of Tenejapa, in southern Mexico, begin speaking, their production vocabulary consists predominantly of verb roots, in contrast to the dominance of nouns in the initial vocabulary of first‐language learners of Indo‐European languages. This article proposes that a particular Tzeltal conversational feature—known in the Mayanist literature as "dialogic repetition"—provides a context that facilitates the early analysis and use of verbs. Although Tzeltal babies are not treated by adults as genuine interlocutors worthy of sustained interaction, dialogic repetition in the speech the children are exposed to may have an important role in revealing to them the structural properties of the language, as well as in socializing the collaborative style of verbal interaction adults favor in this community.
  • Brown, P. (1998). Early Tzeltal verbs: Argument structure and argument representation. In E. Clark (Ed.), Proceedings of the 29th Annual Stanford Child Language Research Forum (pp. 129-140). Stanford: CSLI Publications.

    Abstract

    The surge of research activity focussing on children's acquisition of verbs (e.g., Tomasello and Merriman 1996) addresses some fundamental questions: Just how variable across languages, and across individual children, is the process of verb learning? How specific are arguments to particular verbs in early child language? How does the grammatical category 'Verb' develop? The position of Universal Grammar, that a verb category is early, contrasts with that of Tomasello (1992), Pine and Lieven and their colleagues (1996, in press), and many others, that children develop a verb category slowly, gradually building up subcategorizations of verbs around pragmatic, syntactic, and semantic properties of the language they are exposed to. On this latter view, one would expect the language which the child is learning, the cultural milieu and the nature of the interactions in which the child is engaged, to influence the process of acquiring verb argument structures. This paper explores these issues by examining the development of argument representation in the Mayan language Tzeltal, in both its lexical and verbal cross-referencing forms, and analyzing the semantic and pragmatic factors influencing the form argument representation takes. Certain facts about Tzeltal (the ergative/absolutive marking, the semantic specificity of transitive and positional verbs) are proposed to affect the representation of arguments. The first 500 multimorpheme combinations of 3 children (aged between 1;8 and 2;4) are examined. It is argued that there is no evidence of semantically light 'pathbreaking' verbs (Ninio 1996) leading the way into word combinations. There is early productivity of cross-referencing affixes marking A, S, and O arguments (although there are systematic omissions). The paper assesses the respective contributions of three kinds of factors to these results - structural (regular morphology), semantic (verb specificity) and pragmatic (the nature of Tzeltal conversational interaction).
  • Brown, P. (2002). Everyone has to lie in Tzeltal. In S. Blum-Kulka, & C. E. Snow (Eds.), Talking to adults: The contribution of multiparty discourse to language acquisition (pp. 241-275). Mahwah, NJ: Erlbaum.

    Abstract

    In a famous paper Harvey Sacks (1974) argued that the sequential properties of greeting conventions, as well as those governing the flow of information, mean that 'everyone has to lie'. In this paper I show this dictum to be equally true in the Tzeltal Mayan community of Tenejapa, in southern Mexico, but for somewhat different reasons. The phenomenon of interest is the practice of routine fearsome threats to small children. Based on a longitudinal corpus of videotaped and tape-recorded naturally-occurring interaction between caregivers and children in five Tzeltal families, the study examines sequences of Tzeltal caregivers' speech aimed at controlling the children's behaviour and analyzes the children's developing pragmatic skills in handling such controlling utterances, from prelinguistic infants to age five and over. Infants in this society are considered to be vulnerable, easily scared or shocked into losing their 'souls', and therefore at all costs to be protected and hidden from outsiders and other dangers. Nonetheless, the chief form of control (aside from physically removing a child from danger) is to threaten, saying things like "Don't do that, or I'll take you to the clinic for an injection." These overt scare-threats - rarely actually realized - lead Tzeltal children by the age of 2;6 to 3;0 to the understanding that speech does not necessarily convey true propositions, and to a sensitivity to the underlying motivations for utterances distinct from their literal meaning. By age 4;0 children perform the same role to their younger siblings; they also begin to use more subtle non-true (e.g. ironic) utterances. The caretaker practice described here is related to adult norms of social lying, to the sociocultural context of constraints on information flow, social control through gossip, and the different notion of 'truth' that arises in the context of non-verifiability characteristic of a small-scale nonliterate society.
