Publications

  • Bluijs, S., Dera, J., & Peeters, D. (2021). Waarom digitale literatuur in het literatuuronderwijs thuishoort [Why digital literature belongs in literature education]. Tijdschrift voor Nederlandse Taal- en Letterkunde, 137(2), 150-163. doi:10.5117/TNTL2021.2.003.BLUI.
  • Blythe, J. (2015). Other-initiated repair in Murrinh-Patha. Open Linguistics, 1, 283-308. doi:10.1515/opli-2015-0003.

    Abstract

    The range of linguistic structures and interactional practices associated with other-initiated repair (OIR) is surveyed for the Northern Australian language Murrinh-Patha. By drawing on a video corpus of informal Murrinh-Patha conversation, the OIR formats are compared in terms of their utility and versatility. Certain “restricted” formats have semantic properties that point to prior trouble source items. While these make the restricted repair initiators more specialised, the “open” formats are less well resourced semantically, which makes them more versatile. They tend to be used when the prior talk is potentially problematic in more ways than one. The open formats (especially thangku, “what?”) tend to solicit repair operations on each potential source of trouble, such that the resultant repair solution improves upon the trouble-source turn in several ways.
  • Bock, K., & Levelt, W. J. M. (1994). Language production: Grammatical encoding. In M. A. Gernsbacher (Ed.), Handbook of Psycholinguistics (pp. 945-984). San Diego: Academic Press.
  • Bodur, K., Branje, S., Peirolo, M., Tiscareno, I., & German, J. S. (2021). Domain-initial strengthening in Turkish: Acoustic cues to prosodic hierarchy in stop consonants. In Proceedings of Interspeech 2021 (pp. 1459-1463). doi:10.21437/Interspeech.2021-2230.

    Abstract

    Studies have shown that cross-linguistically, consonants at the left edge of higher-level prosodic boundaries tend to be more forcefully articulated than those at lower-level boundaries, a phenomenon known as domain-initial strengthening. This study tests whether similar effects occur in Turkish, using the Autosegmental-Metrical model proposed by Ipek & Jun [1, 2] as the basis for assessing boundary strength. Productions of /t/ and /d/ were elicited in four domain-initial prosodic positions corresponding to progressively higher-level boundaries: syllable, word, intermediate phrase, and Intonational Phrase. A fifth position, nuclear word, was included in order to better situate it within the prosodic hierarchy. Acoustic correlates of articulatory strength were measured, including closure duration for /d/ and /t/, as well as voice onset time and burst energy for /t/. Our results show that closure duration increases cumulatively from syllable to intermediate phrase, while voice onset time and burst energy are not influenced by boundary strength. These findings provide corroborating evidence for Ipek & Jun’s model, particularly for the distinction between word and intermediate phrase boundaries. Additionally, articulatory strength at the left edge of the nuclear word patterned closely with word-initial position, supporting the view that the nuclear word is not associated with a distinct phrasing domain.
  • De Boer, M., Kokal, I., Blokpoel, M., Liu, R., Stolk, A., Roelofs, K., Van Rooij, I., & Toni, I. (2017). Oxytocin modulates human communication by enhancing cognitive exploration. Psychoneuroendocrinology, 86, 64-72. doi:10.1016/j.psyneuen.2017.09.010.

    Abstract

    Oxytocin is a neuropeptide known to influence how humans share material resources. Here we explore whether oxytocin influences how we share knowledge. We focus on two distinguishing features of human communication, namely the ability to select communicative signals that disambiguate the many-to-many mappings that exist between a signal’s form and meaning, and adjustments of those signals to the presumed cognitive characteristics of the addressee (“audience design”). Fifty-five males participated in a randomized, double-blind, placebo-controlled experiment involving the intranasal administration of oxytocin. The participants produced novel non-verbal communicative signals towards two different addressees, an adult or a child, in an experimentally-controlled live interactive setting. We found that oxytocin administration drives participants to generate signals of higher referential quality, i.e. signals that disambiguate more communicative problems; and to rapidly adjust those communicative signals to what the addressee understands. The combined effects of oxytocin on referential quality and audience design fit with the notion that oxytocin administration leads participants to explore more pervasively behaviors that can convey their intention, and diverse models of the addressees. These findings suggest that, besides affecting prosocial drive and salience of social cues, oxytocin influences how we share knowledge by promoting cognitive exploration.
  • Bögels, S., & Torreira, F. (2021). Turn-end estimation in conversational turn-taking: The roles of context and prosody. Discourse Processes, 58(10), 903-924. doi:10.1080/0163853X.2021.1986664.

    Abstract

    This study investigated the role of contextual and prosodic information in turn-end estimation by means of a button-press task. We presented participants with turns extracted from a corpus of telephone calls visually (i.e., in transcribed form, word-by-word) and auditorily, and asked them to anticipate turn ends by pressing a button. The availability of the previous conversational context was generally helpful for turn-end estimation in short turns only, and more clearly so in the visual task than in the auditory task. To investigate the role of prosody, we examined whether participants in the auditory task pressed the button close to turn-medial points likely to constitute turn ends based on lexico-syntactic information alone. We observed that the vast majority of such button presses occurred in the presence of an intonational boundary rather than in its absence. These results are consistent with the view that prosodic cues in the proximity of turn ends play a relevant role in turn-end estimation.
  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2015). Conversational interaction in the scanner: Mentalizing during language processing as revealed by MEG. Cerebral Cortex, 25(9), 3219-3234. doi:10.1093/cercor/bhu116.

    Abstract

    Humans are especially good at taking another’s perspective — representing what others might be thinking or experiencing. This “mentalizing” capacity is apparent in everyday human interactions and conversations. We investigated its neural basis using magnetoencephalography. We focused on whether mentalizing was engaged spontaneously and routinely to understand an utterance’s meaning or largely on-demand, to restore "common ground" when expectations were violated. Participants conversed with 1 of 2 confederate speakers and established tacit agreements about objects’ names. In a subsequent “test” phase, some of these agreements were violated by either the same or a different speaker. Our analysis of the neural processing of test phase utterances revealed recruitment of neural circuits associated with language (temporal cortex), episodic memory (e.g., medial temporal lobe), and mentalizing (temporo-parietal junction and ventro-medial prefrontal cortex). Theta oscillations (3 - 7 Hz) were modulated most prominently, and we observed phase coupling between functionally distinct neural circuits. The episodic memory and language circuits were recruited in anticipation of upcoming referring expressions, suggesting that context-sensitive predictions were spontaneously generated. In contrast, the mentalizing areas were recruited on-demand, as a means for detecting and resolving perceived pragmatic anomalies, with little evidence they were activated to make partner-specific predictions about upcoming linguistic utterances.
  • Bögels, S., & Torreira, F. (2015). Listeners use intonational phrase boundaries to project turn ends in spoken interaction. Journal of Phonetics, 52, 46-57. doi:10.1016/j.wocn.2015.04.004.

    Abstract

    In conversation, turn transitions between speakers often occur smoothly, usually within a time window of a few hundred milliseconds. It has been argued, on the basis of a button-press experiment [De Ruiter, J. P., Mitterer, H., & Enfield, N. J. (2006). Projecting the end of a speaker's turn: A cognitive cornerstone of conversation. Language, 82(3):515–535], that participants in conversation rely mainly on lexico-syntactic information when timing and producing their turns, and that they do not need to make use of intonational cues to achieve smooth transitions and avoid overlaps. In contrast to this view, but in line with previous observational studies, our results from a dialogue task and a button-press task involving questions and answers indicate that the identification of the end of intonational phrases is necessary for smooth turn-taking. In both tasks, participants never responded to questions (i.e., gave an answer or pressed a button to indicate a turn end) at turn-internal points of syntactic completion in the absence of an intonational phrase boundary. Moreover, in the button-press task, they often pressed the button at the same point of syntactic completion when the final word of an intonational phrase was cross-spliced at that location. Furthermore, truncated stimuli ending in a syntactic completion point but lacking an intonational phrase boundary led to significantly delayed button presses. In light of these results, we argue that earlier claims that intonation is not necessary for correct turn-end projection are misguided, and that research on turn-taking should continue to consider intonation as a source of turn-end cues along with other linguistic and communicative phenomena.
  • Bögels, S., Magyari, L., & Levinson, S. C. (2015). Neural signatures of response planning occur midway through an incoming question in conversation. Scientific Reports, 5: 12881. doi:10.1038/srep12881.

    Abstract

    A striking puzzle about language use in everyday conversation is that turn-taking latencies are usually very short, whereas planning language production takes much longer. This implies overlap between language comprehension and production processes, but the nature and extent of such overlap has never been studied directly. Combining an interactive quiz paradigm with EEG measurements in an innovative way, we show that production planning processes start as soon as possible, that is, within half a second after the answer to a question can be retrieved (up to several seconds before the end of the question). Localization of ERP data shows early activation even of brain areas related to late stages of production planning (e.g., syllabification). Finally, oscillation results suggest an attention switch from comprehension to production around the same time frame. This perspective from interactive language use throws new light on the performance characteristics that language competence involves.
  • Bögels, S., Kendrick, K. H., & Levinson, S. C. (2015). Never say no… How the brain interprets the pregnant pause in conversation. PLoS One, 10(12): e0145474. doi:10.1371/journal.pone.0145474.

    Abstract

    In conversation, negative responses to invitations, requests, offers, and the like are more likely to occur with a delay – conversation analysts talk of them as dispreferred. Here we examine the contrastive cognitive load ‘yes’ and ‘no’ responses make, either when relatively fast (300 ms after question offset) or delayed (1000 ms). Participants heard short dialogues contrasting in speed and valence of response while having their EEG recorded. We found that a fast ‘no’ evokes an N400-effect relative to a fast ‘yes’; however, this contrast disappeared in the delayed responses. ‘No’ responses, however, elicited a late frontal positivity whether they were fast or delayed. We interpret these results as follows: a fast ‘no’ evoked an N400 because an immediate response is expected to be positive – this effect disappears as the response time lengthens because now in ordinary conversation the probability of a ‘no’ has increased. However, regardless of the latency of response, a ‘no’ response is associated with a late positivity, since a negative response is always dispreferred. Together these results show that negative responses to social actions exact a higher cognitive load, but especially when least expected, in immediate responses.
  • Bögels, S., & Levinson, S. C. (2017). The brain behind the response: Insights into turn-taking in conversation from neuroimaging. Research on Language and Social Interaction, 50, 71-89. doi:10.1080/08351813.2017.1262118.

    Abstract

    This paper reviews the prospects for the cross-fertilization of conversation-analytic (CA) and neurocognitive studies of conversation, focusing on turn-taking. Although conversation is the primary ecological niche for language use, relatively little brain research has focused on interactive language use, partly due to the challenges of using brain-imaging methods that are controlled enough to perform sound experiments, but still reflect the rich and spontaneous nature of conversation. Recently, though, brain researchers have started to investigate conversational phenomena, for example by using 'overhearer' or controlled interaction paradigms. We review neuroimaging studies related to turn-taking and sequence organization, phenomena historically described by CA. These studies show, for example, early action recognition and immediate planning of responses midway through an incoming turn. The review discusses studies with an eye to a fruitful interchange between CA and neuroimaging research on conversation and an indication of how these disciplines can benefit from each other.
  • Bohnemeyer, J. (2004). Argument and event structure in Yukatek verb classes. In J.-Y. Kim, & A. Werle (Eds.), Proceedings of The Semantics of Under-Represented Languages in the Americas. Amherst, Mass: GLSA.

    Abstract

    In Yukatek Maya, event types are lexicalized in verb roots and stems that fall into a number of different form classes on the basis of (a) patterns of aspect-mood marking and (b) privileges of undergoing valence-changing operations. Of particular interest are the intransitive classes in the light of Perlmutter’s (1978) Unaccusativity hypothesis. In the spirit of Levin & Rappaport Hovav (1995) [L&RH], Van Valin (1990), Zaenen (1993), and others, this paper investigates whether (and to what extent) the association between formal predicate classes and event types is determined by argument structure features such as ‘agentivity’ and ‘control’ or features of lexical aspect such as ‘telicity’ and ‘durativity’. It is shown that mismatches between agentivity/control and telicity/durativity are even more extensive in Yukatek than they are in English (Abusch 1985; L&RH, Van Valin & LaPolla 1997), providing new evidence against Dowty’s (1979) reconstruction of Vendler’s (1967) ‘time schemata of verbs’ in terms of argument structure configurations. Moreover, contrary to what has been claimed in earlier studies of Yukatek (Krämer & Wunderlich 1999, Lucy 1994), neither agentivity/control nor telicity/durativity turn out to be good predictors of verb class membership. Instead, the patterns of aspect-mood marking prove to be sensitive only to the presence or absence of state change, in a way that supports the unified analysis of all verbs of gradual change proposed by Kennedy & Levin (2001). The presence or absence of ‘internal causation’ (L&RH) may motivate the semantic interpretation of transitivization operations. An explicit semantics for the valence-changing operations is proposed, based on Parsons’s (1990) Neo-Davidsonian approach.
  • Bohnemeyer, J., Burenhult, N., Enfield, N. J., & Levinson, S. C. (2004). Landscape terms and place names elicitation guide. In A. Majid (Ed.), Field Manual Volume 9 (pp. 75-79). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492904.

    Abstract

    Landscape terms reflect the relationship between geographic reality and human cognition. Are ‘mountains’, ‘rivers’, ‘lakes’ and the like universally recognised in languages as naturally salient objects to be named? The landscape subproject is concerned with the interrelation between language, cognition and geography. Specifically, it investigates issues relating to how landforms are categorised cross-linguistically as well as the characteristics of place naming.
  • Bohnemeyer, J. (1998). Temporale Relatoren im Hispano-Yukatekischen Sprachkontakt [Temporal relators in Hispano-Yucatec language contact]. In A. Koechert, & T. Stolz (Eds.), Convergencia e Individualidad - Las lenguas Mayas entre hispanización e indigenismo (pp. 195-241). Hannover, Germany: Verlag für Ethnologie.
  • Bohnemeyer, J. (1998). Sententiale Topics im Yukatekischen [Sentential topics in Yucatec]. In Z. Dietmar (Ed.), Deskriptive Grammatik und allgemeiner Sprachvergleich (pp. 55-85). Tübingen, Germany: Max-Niemeyer-Verlag.
  • Bohnemeyer, J. (1997). Yucatec Mayan Lexicalization Patterns in Time and Space. In M. Biemans, & J. van de Weijer (Eds.), Proceedings of the CLS opening of the academic year '97-'98. Tilburg, The Netherlands: University Center for Language Studies.
  • Boland, J. E., & Cutler, A. (1995). Interaction with autonomy: Defining multiple output models in psycholinguistic theory. Working Papers in Linguistics, 45, 1-10. Retrieved from http://hdl.handle.net/2066/15768.

    Abstract

    There are currently a number of psycholinguistic models in which processing at a particular level of representation is characterized by the generation of multiple outputs, with resolution involving the use of information from higher levels of processing. Surprisingly, models with this architecture have been characterized as autonomous within the domain of word recognition and as interactive within the domain of sentence processing. We suggest that the apparent internal confusion is not, as might be assumed, due to fundamental differences between lexical and syntactic processing. Rather, we believe that the labels in each domain were chosen in order to obtain maximal contrast between a new model and the model or models that were currently dominating the field.
  • Boland, J. E., & Cutler, A. (1995). Interaction with autonomy: Multiple Output models and the inadequacy of the Great Divide. Cognition, 58, 309-320. doi:10.1016/0010-0277(95)00684-2.

    Abstract

    There are currently a number of psycholinguistic models in which processing at a particular level of representation is characterized by the generation of multiple outputs, with resolution - but not generation - involving the use of information from higher levels of processing. Surprisingly, models with this architecture have been characterized as autonomous within the domain of word recognition but as interactive within the domain of sentence processing. We suggest that the apparent confusion is not, as might be assumed, due to fundamental differences between lexical and syntactic processing. Rather, we believe that the labels in each domain were chosen in order to obtain maximal contrast between a new model and the model or models that were currently dominating the field. The contradiction serves to highlight the inadequacy of a simple autonomy/interaction dichotomy for characterizing the architectures of current processing models.
  • Borgwaldt, S. R., Hellwig, F. M., & De Groot, A. M. B. (2004). Word-initial entropy in five languages: Letter to sound, and sound to letter. Written Language & Literacy, 7(2), 165-184.

    Abstract

    Alphabetic orthographies show more or less ambiguous relations between spelling and sound patterns. In transparent orthographies, like Italian, the pronunciation can be predicted from the spelling and vice versa. Opaque orthographies, like English, often display unpredictable spelling–sound correspondences. In this paper we present a computational analysis of word-initial bi-directional spelling–sound correspondences for Dutch, English, French, German, and Hungarian, stated in entropy values for various grain sizes. This allows us to position the five languages on the continuum from opaque to transparent orthographies, both in spelling-to-sound and sound-to-spelling directions. The analysis is based on metrics derived from information theory, and is therefore independent of any specific theory of visual word recognition and of any specific theoretical approach to orthography.
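The entropy values described in this abstract are standard Shannon entropies computed over correspondence frequencies. A minimal sketch, using hypothetical toy counts rather than the study's actual corpus data or grain sizes: for each word-initial letter, the entropy over its possible pronunciations is H = -Σ p·log₂(p).

```python
from collections import Counter
from math import log2

def onset_entropy(pairs):
    """Shannon entropy (in bits) of sound outcomes per word-initial letter.

    `pairs` is an iterable of (initial_letter, initial_sound) tuples, one
    per word type. Higher entropy = less predictable letter-to-sound mapping.
    """
    by_letter = {}
    for letter, sound in pairs:
        by_letter.setdefault(letter, Counter())[sound] += 1
    entropies = {}
    for letter, counts in by_letter.items():
        total = sum(counts.values())
        # H = sum over outcomes of p * log2(1/p)
        entropies[letter] = sum(
            (n / total) * log2(total / n) for n in counts.values()
        )
    return entropies

# Hypothetical English-like data: word-initial <c> maps to /k/ (cat, cold)
# or /s/ (city), so its entropy is non-zero; <b> is fully predictable.
toy = [("c", "k"), ("c", "k"), ("c", "s"), ("b", "b"), ("b", "b")]
```

Running `onset_entropy(toy)` gives roughly 0.92 bits for ⟨c⟩ and 0 bits for ⟨b⟩; the sound-to-spelling direction is the same computation with the tuple elements swapped.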
  • Bornkessel-Schlesewsky, I., Alday, P. M., Kretzschmar, F., Grewe, T., Gumpert, M., Schumacher, P. B., & Schlesewsky, M. (2015). Age-related changes in predictive capacity versus internal model adaptability: Electrophysiological evidence that individual differences outweigh effects of age. Frontiers in Aging Neuroscience, 7: 217. doi:10.3389/fnagi.2015.00217.

    Abstract

    Hierarchical predictive coding has been identified as a possible unifying principle of brain function, and recent work in cognitive neuroscience has examined how it may be affected by age-related changes. Using language comprehension as a test case, the present study aimed to dissociate age-related changes in prediction generation versus internal model adaptation following a prediction error. Event-related brain potentials (ERPs) were measured in a group of older adults (60–81 years; n = 40) as they read sentences of the form “The opposite of black is white/yellow/nice.” Replicating previous work in young adults, results showed a target-related P300 for the expected antonym (“white”; an effect assumed to reflect a prediction match), and a graded N400 effect for the two incongruous conditions (i.e. a larger N400 amplitude for the incongruous continuation not related to the expected antonym, “nice,” versus the incongruous associated condition, “yellow”). These effects were followed by a late positivity, again with a larger amplitude in the incongruous non-associated versus incongruous associated condition. Analyses using linear mixed-effects models showed that the target-related P300 effect and the N400 effect for the incongruous non-associated condition were both modulated by age, thus suggesting that age-related changes affect both prediction generation and model adaptation. However, effects of age were outweighed by the interindividual variability of ERP responses, as reflected in the high proportion of variance captured by the inclusion of by-condition random slopes for participants and items. We thus argue that – at both a neurophysiological and a functional level – the notion of general differences between language processing in young and older adults may only be of limited use, and that future research should seek to better understand the causes of interindividual variability in the ERP responses of older adults and its relation to cognitive performance.
  • Bosker, H. R. (2021). Using fuzzy string matching for automated assessment of listener transcripts in speech intelligibility studies. Behavior Research Methods, 53(5), 1945-1953. doi:10.3758/s13428-021-01542-4.

    Abstract

    Many studies of speech perception assess the intelligibility of spoken sentence stimuli by means of transcription tasks (‘type out what you hear’). The intelligibility of a given stimulus is then often expressed in terms of percentage of words correctly reported from the target sentence. Yet scoring the participants’ raw responses for words correctly identified from the target sentence is a time-consuming task, and hence resource-intensive. Moreover, there is no consensus among speech scientists about what specific protocol to use for the human scoring, limiting the reliability of human scores. The present paper evaluates various forms of fuzzy string matching between participants’ responses and target sentences, as automated metrics of listener transcript accuracy. We demonstrate that one particular metric, the Token Sort Ratio, is a consistent, highly efficient, and accurate metric for automated assessment of listener transcripts, as evidenced by high correlations with human-generated scores (best correlation: r = 0.940) and a strong relationship to acoustic markers of speech intelligibility. Thus, fuzzy string matching provides a practical tool for assessment of listener transcript accuracy in large-scale speech intelligibility studies. See https://tokensortratio.netlify.app for an online implementation.
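The core idea of the Token Sort Ratio can be sketched in a few lines of standard-library Python. This is an illustrative reimplementation, not the paper's own code: the published analyses relied on an existing fuzzy-matching library, and details such as punctuation stripping and the exact edit-distance ratio may differ.

```python
from difflib import SequenceMatcher

def token_sort_ratio(response: str, target: str) -> float:
    """Order-insensitive fuzzy similarity between two transcripts (0-100).

    Both strings are lowercased, split into tokens, and the tokens sorted
    alphabetically before comparison, so word-order differences between a
    listener's response and the target sentence do not lower the score.
    """
    def normalize(s: str) -> str:
        return " ".join(sorted(s.lower().split()))
    # Ratio of matching characters between the normalized strings, scaled to 0-100.
    return 100 * SequenceMatcher(None, normalize(response), normalize(target)).ratio()
```

For example, a response containing all target words in a different order (e.g. `token_sort_ratio("the boat sailed away", "away sailed the boat")`) scores 100, while missing or misheard words lower the score gradually rather than counting as all-or-nothing errors.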
  • Bosker, H. R., Badaya, E., & Corley, M. (2021). Discourse markers activate their, like, cohort competitors. Discourse Processes, 58(9), 837-851. doi:10.1080/0163853X.2021.1924000.

    Abstract

    Speech in everyday conversations is riddled with discourse markers (DMs), such as well, you know, and like. However, in many lab-based studies of speech comprehension, such DMs are typically absent from the carefully articulated and highly controlled speech stimuli. As such, little is known about how these DMs influence online word recognition. The present study specifically investigated the online processing of DM like and how it influences the activation of words in the mental lexicon. We specifically targeted the cohort competitor (CC) effect in the Visual World Paradigm: Upon hearing spoken instructions to “pick up the beaker,” human listeners also typically fixate—next to the target object—referents that overlap phonologically with the target word (cohort competitors such as beetle; CCs). However, several studies have argued that CC effects are constrained by syntactic, semantic, pragmatic, and discourse constraints. Therefore, the present study investigated whether DM like influences online word recognition by activating its cohort competitors (e.g., lightbulb). In an eye-tracking experiment using the Visual World Paradigm, we demonstrate that when participants heard spoken instructions such as “Now press the button for the, like … unicycle,” they showed anticipatory looks to the CC referent (lightbulb) well before hearing the target. This CC effect was sustained for a relatively long period of time, even after hearing disambiguating information (i.e., the /k/ in like). Analysis of the reaction times also showed that participants were significantly faster to select CC targets (lightbulb) when preceded by DM like. These findings suggest that seemingly trivial DMs, such as like, activate their CCs, impacting online word recognition. Thus, we advocate a more holistic perspective on spoken language comprehension in naturalistic communication, including the processing of DMs.
  • Bosker, H. R., & Peeters, D. (2021). Beat gestures influence which speech sounds you hear. Proceedings of the Royal Society B: Biological Sciences, 288: 20202419. doi:10.1098/rspb.2020.2419.

    Abstract

    Beat gestures—spontaneously produced biphasic movements of the hand—are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world’s languages, how beat gestures impact spoken word recognition is unclear. Can these simple ‘flicks of the hand’ influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g. distinguishing OBject from obJECT), and in turn can influence what vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.
  • Bosker, H. R., Tjiong, V., Quené, H., Sanders, T., & De Jong, N. H. (2015). Both native and non-native disfluencies trigger listeners' attention. In Disfluency in Spontaneous Speech: DISS 2015: An ICPhS Satellite Meeting. Edinburgh: DISS2015.

    Abstract

    Disfluencies, such as uh and uhm, are known to help the listener in speech comprehension. For instance, disfluencies may elicit prediction of less accessible referents and may trigger listeners’ attention to the following word. However, recent work suggests differential processing of disfluencies in native and non-native speech. The current study investigated whether the beneficial effects of disfluencies on listeners’ attention are modulated by the (non-)native identity of the speaker. Using the Change Detection Paradigm, we investigated listeners’ recall accuracy for words presented in disfluent and fluent contexts, in native and non-native speech. We observed beneficial effects of both native and non-native disfluencies on listeners’ recall accuracy, suggesting that native and non-native disfluencies trigger listeners’ attention in a similar fashion.
  • Bosker, H. R. (2017). Accounting for rate-dependent category boundary shifts in speech perception. Attention, Perception & Psychophysics, 79, 333-343. doi:10.3758/s13414-016-1206-4.

    Abstract

    The perception of temporal contrasts in speech is known to be influenced by the speech rate in the surrounding context. This rate-dependent perception is suggested to involve general auditory processes since it is also elicited by non-speech contexts, such as pure tone sequences. Two general auditory mechanisms have been proposed to underlie rate-dependent perception: durational contrast and neural entrainment. The present study compares the predictions of these two accounts of rate-dependent speech perception by means of four experiments in which participants heard tone sequences followed by Dutch target words ambiguous between /ɑs/ “ash” and /a:s/ “bait”. Tone sequences varied in the duration of tones (short vs. long) and in the presentation rate of the tones (fast vs. slow). Results show that the duration of preceding tones did not influence target perception in any of the experiments, thus challenging durational contrast as explanatory mechanism behind rate-dependent perception. Instead, the presentation rate consistently elicited a category boundary shift, with faster presentation rates inducing more /a:s/ responses, but only if the tone sequence was isochronous. Therefore, this study proposes an alternative, neurobiologically plausible, account of rate-dependent perception involving neural entrainment of endogenous oscillations to the rate of a rhythmic stimulus.
  • Bosker, H. R., Reinisch, E., & Sjerps, M. J. (2017). Cognitive load makes speech sound fast, but does not modulate acoustic context effects. Journal of Memory and Language, 94, 166-176. doi:10.1016/j.jml.2016.12.002.

    Abstract

    In natural situations, speech perception often takes place during the concurrent execution of other cognitive tasks, such as listening while viewing a visual scene. The execution of a dual task typically has detrimental effects on concurrent speech perception, but how exactly cognitive load disrupts speech encoding is still unclear. The detrimental effect on speech representations may consist of either a general reduction in the robustness of processing of the speech signal (‘noisy encoding’), or, alternatively, it may specifically influence the temporal sampling of the sensory input, with listeners missing temporal pulses, thus underestimating segmental durations (‘shrinking of time’). The present study investigated whether and how spectral and temporal cues in a precursor sentence that has been processed under high vs. low cognitive load influence the perception of a subsequent target word. If cognitive load effects are implemented through ‘noisy encoding’, increasing cognitive load during the precursor should attenuate the encoding of both its temporal and spectral cues, and hence reduce the contextual effect that these cues can have on subsequent target sound perception. However, if cognitive load effects are expressed as ‘shrinking of time’, context effects should not be modulated by load, but a main effect would be expected on the perceived duration of the speech signal. Results from two experiments indicate that increasing cognitive load (manipulated through a secondary visual search task) did not modulate temporal (Experiment 1) or spectral context effects (Experiment 2). However, a consistent main effect of cognitive load was found: increasing cognitive load during the precursor induced a perceptual increase in its perceived speech rate, biasing the perception of a following target word towards longer durations. This finding suggests that cognitive load effects in speech perception are implemented via ‘shrinking of time’, in line with a temporal sampling framework. In addition, we argue that our results align with a model in which early (spectral and temporal) normalization is unaffected by attention but later adjustments may be attention-dependent.
  • Bosker, H. R., & Kösem, A. (2017). An entrained rhythm's frequency, not phase, influences temporal sampling of speech. In Proceedings of Interspeech 2017 (pp. 2416-2420). doi:10.21437/Interspeech.2017-73.

    Abstract

    Brain oscillations have been shown to track the slow amplitude fluctuations in speech during comprehension. Moreover, there is evidence that these stimulus-induced cortical rhythms may persist even after the driving stimulus has ceased. However, how exactly this neural entrainment shapes speech perception remains debated. This behavioral study investigated whether and how the frequency and phase of an entrained rhythm would influence the temporal sampling of subsequent speech. In two behavioral experiments, participants were presented with slow and fast isochronous tone sequences, followed by Dutch target words ambiguous between as /ɑs/ “ash” (with a short vowel) and aas /a:s/ “bait” (with a long vowel). Target words were presented at various phases of the entrained rhythm. Both experiments revealed effects of the frequency of the tone sequence on target word perception: fast sequences biased listeners to more long /a:s/ responses. However, no evidence for phase effects could be discerned. These findings show that an entrained rhythm’s frequency, but not phase, influences the temporal sampling of subsequent speech. These outcomes are compatible with theories suggesting that sensory timing is evaluated relative to entrained frequency. Furthermore, they suggest that phase tracking of (syllabic) rhythms by theta oscillations plays a limited role in speech parsing.
  • Bosker, H. R., & Reinisch, E. (2017). Foreign languages sound fast: evidence from implicit rate normalization. Frontiers in Psychology, 8: 1063. doi:10.3389/fpsyg.2017.01063.

    Abstract

    Anecdotal evidence suggests that unfamiliar languages sound faster than one’s native language. Empirical evidence for this impression has, so far, come from explicit rate judgments. The aim of the present study was to test whether such perceived rate differences between native and foreign languages have effects on implicit speech processing. Our measure of implicit rate perception was “normalization for speaking rate”: an ambiguous vowel between short /a/ and long /a:/ is interpreted as /a:/ following a fast but as /a/ following a slow carrier sentence. That is, listeners did not judge speech rate itself; instead, they categorized ambiguous vowels whose perception was implicitly affected by the rate of the context. We asked whether a bias towards long /a:/ might be observed when the context is not actually faster but simply spoken in a foreign language. A fully symmetrical experimental design was used: Dutch and German participants listened to rate matched (fast and slow) sentences in both languages spoken by the same bilingual speaker. Sentences were followed by nonwords that contained vowels from an /a-a:/ duration continuum. Results from Experiments 1 and 2 showed a consistent effect of rate normalization for both listener groups. Moreover, for German listeners, across the two experiments, foreign sentences triggered more /a:/ responses than (rate matched) native sentences, suggesting that foreign sentences were indeed perceived as faster. Moreover, this Foreign Language effect was modulated by participants’ ability to understand the foreign language: those participants that scored higher on a foreign language translation task showed less of a Foreign Language effect. However, opposite effects were found for the Dutch listeners. For them, their native rather than the foreign language induced more /a:/ responses. Nevertheless, this reversed effect could be reduced when additional spectral properties of the context were controlled for. Experiment 3, using explicit rate judgments, replicated the effect for German but not Dutch listeners. We therefore conclude that the subjective impression that foreign languages sound fast may have an effect on implicit speech processing, with implications for how language learners perceive spoken segments in a foreign language.

    Additional information

    data sheet 1.docx
  • Bosker, H. R. (2017). How our own speech rate influences our perception of others. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(8), 1225-1238. doi:10.1037/xlm0000381.

    Abstract

    In conversation, our own speech and that of others follow each other in rapid succession. Effects of the surrounding context on speech perception are well documented but, despite the ubiquity of the sound of our own voice, it is unknown whether our own speech also influences our perception of other talkers. This study investigated context effects induced by our own speech through six experiments, specifically targeting rate normalization (i.e., perceiving phonetic segments relative to surrounding speech rate). Experiment 1 revealed that hearing pre-recorded fast or slow context sentences altered the perception of ambiguous vowels, replicating earlier work. Experiment 2 demonstrated that talking at a fast or slow rate prior to target presentation also altered target perception, though the effect of preceding speech rate was reduced. Experiment 3 showed that silent talking (i.e., inner speech) at fast or slow rates did not modulate the perception of others, suggesting that the effect of self-produced speech rate in Experiment 2 arose through monitoring of the external speech signal. Experiment 4 demonstrated that, when participants were played back their own (fast/slow) speech, no reduction of the effect of preceding speech rate was observed, suggesting that the additional task of speech production may be responsible for the reduced effect in Experiment 2. Finally, Experiments 5 and 6 replicate Experiments 2 and 3 with new participant samples. Taken together, these results suggest that variation in speech production may induce variation in speech perception, thus carrying implications for our understanding of spoken communication in dialogue settings.
  • Bosker, H. R., & Reinisch, E. (2015). Normalization for speechrate in native and nonnative speech. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Speech perception involves a number of processes that deal with variation in the speech signal. One such process is normalization for speechrate: local temporal cues are perceived relative to the rate in the surrounding context. It is as yet unclear whether and how this perceptual effect interacts with higher level impressions of rate, such as a speaker’s nonnative identity. Nonnative speakers typically speak more slowly than natives, an experience that listeners take into account when explicitly judging the rate of nonnative speech. The present study investigated whether this is also reflected in implicit rate normalization. Results indicate that nonnative speech is implicitly perceived as faster than temporally-matched native speech, suggesting that the additional cognitive load of listening to an accent speeds up rate perception. Therefore, rate perception in speech is not dependent on syllable durations alone but also on the ease of processing of the temporal signal.
  • Bosker, H. R. (2021). The contribution of amplitude modulations in speech to perceived charisma. In B. Weiss, J. Trouvain, M. Barkat-Defradas, & J. J. Ohala (Eds.), Voice attractiveness: Prosody, phonology and phonetics (pp. 165-181). Singapore: Springer. doi:10.1007/978-981-15-6627-1_10.

    Abstract

    Speech contains pronounced amplitude modulations in the 1–9 Hz range, correlating with the syllabic rate of speech. Recent models of speech perception propose that this rhythmic nature of speech is central to speech recognition and has beneficial effects on language processing. Here, we investigated the contribution of amplitude modulations to the subjective impression listeners have of public speakers. The speech from US presidential candidates Hillary Clinton and Donald Trump in the three TV debates of 2016 was acoustically analyzed by means of modulation spectra. These indicated that Clinton’s speech had more pronounced amplitude modulations than Trump’s speech, particularly in the 1–9 Hz range. A subsequent perception experiment, with listeners rating the perceived charisma of (low-pass filtered versions of) Clinton’s and Trump’s speech, showed that more pronounced amplitude modulations (i.e., more ‘rhythmic’ speech) increased perceived charisma ratings. These outcomes highlight the important contribution of speech rhythm to charisma perception.
  • Bosker, H. R. (2017). The role of temporal amplitude modulations in the political arena: Hillary Clinton vs. Donald Trump. In Proceedings of Interspeech 2017 (pp. 2228-2232). doi:10.21437/Interspeech.2017-142.

    Abstract

    Speech is an acoustic signal with inherent amplitude modulations in the 1-9 Hz range. Recent models of speech perception propose that this rhythmic nature of speech is central to speech recognition. Moreover, rhythmic amplitude modulations have been shown to have beneficial effects on language processing and the subjective impression listeners have of the speaker. This study investigated the role of amplitude modulations in the political arena by comparing the speech produced by Hillary Clinton and Donald Trump in the three presidential debates of 2016. Inspection of the modulation spectra, revealing the spectral content of the two speakers’ amplitude envelopes after matching for overall intensity, showed considerably greater power in Clinton’s modulation spectra (compared to Trump’s) across the three debates, particularly in the 1-9 Hz range. The findings suggest that Clinton’s speech had a more pronounced temporal envelope with rhythmic amplitude modulations below 9 Hz, with a preference for modulations around 3 Hz. This may be taken as evidence for a more structured temporal organization of syllables in Clinton’s speech, potentially due to more frequent use of preplanned utterances. Outcomes are interpreted in light of the potential beneficial effects of a rhythmic temporal envelope on intelligibility and speaker perception.
  • Bosking, W. H., Sun, P., Ozker, M., Pei, X., Foster, B. L., Beauchamp, M. S., & Yoshor, D. (2017). Saturation in phosphene size with increasing current levels delivered to human visual cortex. The Journal of Neuroscience, 37(30), 7188-7197. doi:10.1523/JNEUROSCI.2896-16.2017.

    Abstract

    Electrically stimulating early visual cortex results in a visual percept known as a phosphene. Although phosphenes can be evoked by a wide range of electrode sizes and current amplitudes, they are invariably described as small. To better understand this observation, we electrically stimulated 93 electrodes implanted in the visual cortex of 13 human subjects who reported phosphene size while stimulation current was varied. Phosphene size increased as the stimulation current was initially raised above threshold, but then rapidly reached saturation. Phosphene size also depended on the location of the stimulated site, with size increasing with distance from the foveal representation. We developed a model relating phosphene size to the amount of activated cortex and its location within the retinotopic map. First, a sigmoidal curve was used to predict the amount of activated cortex at a given current. Second, the amount of active cortex was converted to degrees of visual angle by multiplying by the inverse cortical magnification factor for that retinotopic location. This simple model accurately predicted phosphene size for a broad range of stimulation currents and cortical locations. The unexpected saturation in phosphene sizes suggests that the functional architecture of cerebral cortex may impose fundamental restrictions on the spread of artificially evoked activity and this may be an important consideration in the design of cortical prosthetic devices.
  • Bosman, A., Moisik, S. R., Dediu, D., & Waters-Rist, A. (2017). Talking heads: Morphological variation in the human mandible over the last 500 years in the Netherlands. HOMO - Journal of Comparative Human Biology, 68(5), 329-342. doi:10.1016/j.jchb.2017.08.002.

    Abstract

    The primary aim of this paper is to assess patterns of morphological variation in the mandible to investigate changes during the last 500 years in the Netherlands. Three-dimensional geometric morphometrics is used on data collected from adults from three populations living in the Netherlands during three time-periods. Two of these samples come from Dutch archaeological sites (Alkmaar, 1484-1574, n = 37; and Middenbeemster, 1829-1866, n = 51) and were digitized using a 3D laser scanner. The third is a modern sample obtained from MRI scans of 34 modern Dutch individuals. Differences between mandibles are dominated by size. Significant differences in size are found among samples, with on average, males from Alkmaar having the largest mandibles and females from Middenbeemster having the smallest. The results are possibly linked to a softening of the diet, due to a combination of differences in food types and food processing that occurred between these time-periods. Differences in shape are most noticeable between males from Alkmaar and Middenbeemster. Shape differences between males and females are concentrated in the symphysis and ramus, which is mostly the consequence of sexual dimorphism. The relevance of this research is a better understanding of the anatomical variation of the mandible that can occur over an evolutionarily short time, as well as supporting research that has shown plasticity of the mandibular form related to diet and food processing. This plasticity of form must be taken into account in phylogenetic research and when the mandible is used in sex estimation of skeletons.
  • Böttner, M. (1998). A collective extension of relational grammar. Logic Journal of the IGPL, 6(2), 175-193. doi:10.1093/jigpal/6.2.175.

    Abstract

    Relational grammar was proposed in Suppes (1976) as a semantical grammar for natural language. Fragments considered so far are restricted to distributive notions. In this article, relational grammar is extended to collective notions.
  • Böttner, M. (1997). Natural Language. In C. Brink, W. Kahl, & G. Schmidt (Eds.), Relational Methods in computer science (pp. 229-249). Vienna, Austria: Springer-Verlag.
  • Böttner, M. (1997). Visiting some relatives of Peirce's. In 3rd International Seminar on The use of Relational Methods in Computer Science.

    Abstract

    The notion of relational grammar is extended to ternary relations and illustrated by a fragment of English. Some of Peirce's terms for ternary relations are shown to be incorrect and are corrected.
  • Bouhali, F., Mongelli, V., & Cohen, L. (2017). Musical literacy shifts asymmetries in the ventral visual cortex. NeuroImage, 156, 445-455. doi:10.1016/j.neuroimage.2017.04.027.

    Abstract

    The acquisition of literacy has a profound impact on the functional specialization and lateralization of the visual cortex. Due to the overall lateralization of the language network, specialization for printed words develops in the left occipitotemporal cortex, allegedly inducing a secondary shift of visual face processing to the right, in literate as compared to illiterate subjects. Applying the same logic to the acquisition of high-level musical literacy, we predicted that, in musicians as compared to non-musicians, occipitotemporal activations should show a leftward shift for music reading, and an additional rightward push for face perception. To test these predictions, professional musicians and non-musicians viewed pictures of musical notation, faces, words, tools and houses in the MRI, and laterality was assessed in the ventral stream combining ROI and voxel-based approaches. The results supported both predictions, and allowed us to locate the leftward shift to the inferior temporal gyrus and the rightward shift to the fusiform cortex. Moreover, these laterality shifts generalized to categories other than music and faces. Finally, correlation measures across subjects did not support a causal link between the leftward and rightward shifts. Thus the acquisition of an additional perceptual expertise extensively modifies the laterality pattern in the visual system.

    Additional information

    1-s2.0-S1053811917303208-mmc1.docx

  • Bouman, M. A., & Levelt, W. J. M. (1994). Werner E. Reichardt: Levensbericht. In H. W. Pleket (Ed.), Levensberichten en herdenkingen 1993 (pp. 75-80). Amsterdam: Koninklijke Nederlandse Akademie van Wetenschappen.
  • Bowden, J. (1997). The meanings of Directionals in Taba. In G. Senft (Ed.), Referring to Space: Studies in Austronesian and Papuan Languages (pp. 251-268). New York, NY: Oxford University Press.
  • Bowerman, M. (1985). Beyond communicative adequacy: From piecemeal knowledge to an integrated system in the child's acquisition of language. In K. Nelson (Ed.), Children's language (pp. 369-398). Hillsdale, N.J.: Lawrence Erlbaum.

    Abstract

    (From the chapter) The first section considers very briefly the kinds of processes that can be inferred to underlie errors that do not set in until after a period of correct usage; acquisition often seems to be a more extended process than we have envisioned. The chapter summarizes a currently influential model of how linguistic forms, meaning, and communication are interrelated in the acquisition of language, points out some challenging problems for this model, and suggests that the notion of "meaning" in language must be reconceptualized before we can hope to solve these problems. Evidence from several types of late errors is marshalled in support of these arguments. (From the preface) The chapter provides many examples of new errors that children introduce at relatively advanced stages of mastery of semantics and syntax. Bowerman views these seemingly backwards steps as indications of definite steps forward by the child achieving reflective, flexible and integrated systems of semantics and syntax.
  • Bowerman, M. (1976). Commentary on M.D.S. Braine, “Children's first word combinations”. Monographs of the Society for Research in Child Development, 41(1), 98-104. Retrieved from http://www.jstor.org/stable/1165959.
  • Bowerman, M. (1994). From universal to language-specific in early grammatical development. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 346, 34-45. doi:10.1098/rstb.1994.0126.

    Abstract

    Attempts to explain children's grammatical development often assume a close initial match between units of meaning and units of form; for example, agents are said to map to sentence-subjects and actions to verbs. The meanings themselves, according to this view, are not influenced by language, but reflect children's universal non-linguistic way of understanding the world. This paper argues that, contrary to this position, meaning as it is expressed in children's early sentences is, from the beginning, organized on the basis of experience with the grammar and lexicon of a particular language. As a case in point, children learning English and Korean are shown to express meanings having to do with directed motion according to language-specific principles of semantic and grammatical structuring from the earliest stages of word combination.
  • Bowerman, M. (2004). From universal to language-specific in early grammatical development [Reprint]. In K. Trott, S. Dobbinson, & P. Griffiths (Eds.), The child language reader (pp. 131-146). London: Routledge.

    Abstract

    Attempts to explain children's grammatical development often assume a close initial match between units of meaning and units of form; for example, agents are said to map to sentence-subjects and actions to verbs. The meanings themselves, according to this view, are not influenced by language, but reflect children's universal non-linguistic way of understanding the world. This paper argues that, contrary to this position, meaning as it is expressed in children's early sentences is, from the beginning, organized on the basis of experience with the grammar and lexicon of a particular language. As a case in point, children learning English and Korean are shown to express meanings having to do with directed motion according to language-specific principles of semantic and grammatical structuring from the earliest stages of word combination.
  • Bowerman, M. (1976). Le relazioni strutturali nel linguaggio infantile: sintattiche o semantiche? [Reprint]. In F. Antinucci, & C. Castelfranchi (Eds.), Psicolinguistica: Percezione, memoria e apprendimento del linguaggio (pp. 303-321). Bologna: Il Mulino.

    Abstract

    Reprinted from Bowerman, M. (1973). Structural relationships in children's utterances: Semantic or syntactic? In T. Moore (Ed.), Cognitive development and the acquisition of language (pp. 197-213). New York: Academic Press.
  • Bowerman, M. (1994). Learning a semantic system: What role do cognitive predispositions play? [Reprint]. In P. Bloom (Ed.), Language acquisition: Core readings (pp. 329-363). Cambridge, MA: MIT Press.

    Abstract

    Reprint from: Bowerman, M. (1989). Learning a semantic system: What role do cognitive predispositions play? In M. L. Rice, & R. L. Schiefelbusch (Eds.), The teachability of language (pp. 133-169). Baltimore: Paul H. Brookes.
  • Bowerman, M. (1982). Evaluating competing linguistic models with language acquisition data: Implications of developmental errors with causative verbs. Quaderni di semantica, 3, 5-66.
  • Bowerman, M., Gullberg, M., Majid, A., & Narasimhan, B. (2004). Put project: The cross-linguistic encoding of placement events. In A. Majid (Ed.), Field Manual Volume 9 (pp. 10-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492916.

    Abstract

    How similar are the event concepts encoded by different languages? So far, few event domains have been investigated in any detail. The PUT project extends the systematic cross-linguistic exploration of event categorisation to a new domain, that of placement events (putting things in places and removing them from places). The goal of this task is to explore cross-linguistic universality and variability in the semantic categorisation of placement events (e.g., ‘putting a cup on the table’).

    Additional information

    2004_Put_project_video_stimuli.zip
  • Bowerman, M. (1982). Reorganizational processes in lexical and syntactic development. In E. Wanner, & L. Gleitman (Eds.), Language acquisition: The state of the art (pp. 319-346). New York: Academic Press.
  • Li, P., & Bowerman, M. (1998). The acquisition of lexical and grammatical aspect in Chinese. First Language, 18, 311-350. doi:10.1177/014272379801805404.

    Abstract

    This study reports three experiments on how children learning Mandarin Chinese comprehend and use aspect markers. These experiments examine the role of lexical aspect in children's acquisition of grammatical aspect. Results provide converging evidence for children's early sensitivity to (1) the association between atelic verbs and the imperfective aspect markers zai, -zhe, and -ne, and (2) the association between telic verbs and the perfective aspect marker -le. Children did not show a sensitivity in their use or understanding of aspect markers to the difference between stative and activity verbs or between semelfactive and activity verbs. These results are consistent with Slobin's (1985) basic child grammar hypothesis that the contrast between process and result is important in children's early acquisition of temporal morphology. In contrast, they are inconsistent with Bickerton's (1981, 1984) language bioprogram hypothesis that the distinctions between state and process and between punctual and nonpunctual are preprogrammed into language learners. We suggest new ways of looking at the results in the light of recent probabilistic hypotheses that emphasize the role of input, prototypes and connectionist representations.
  • Bowerman, M. (1982). Starting to talk worse: Clues to language acquisition from children's late speech errors. In S. Strauss (Ed.), U shaped behavioral growth (pp. 101-145). New York: Academic Press.
  • Bowerman, M. (1976). Semantic factors in the acquisition of rules for word use and sentence construction. In D. Morehead, & A. Morehead (Eds.), Directions in normal and deficient language development (pp. 99-179). Baltimore: University Park Press.
  • Bowerman, M. (1985). What shapes children's grammars? In D. Slobin (Ed.), The crosslinguistic study of language acquisition (pp. 1257-1319). Hillsdale, N.J.: Lawrence Erlbaum.
  • Bowerman, M., de León, L., & Choi, S. (1995). Verbs, particles, and spatial semantics: Learning to talk about spatial actions in typologically different languages. In E. V. Clark (Ed.), Proceedings of the Twenty-seventh Annual Child Language Research Forum (pp. 101-110). Stanford, CA: Center for the Study of Language and Information.
  • Braden, R. O., Amor, D. J., Fisher, S. E., Mei, C., Myers, C. T., Mefford, H., Gill, D., Srivastava, S., Swanson, L. C., Goel, H., Scheffer, I. E., & Morgan, A. T. (2021). Severe speech impairment is a distinguishing feature of FOXP1-related disorder. Developmental Medicine & Child Neurology, 63(12), 1417-1426. doi:10.1111/dmcn.14955.

    Abstract

    Aim
    To delineate the speech and language phenotype of a cohort of individuals with FOXP1-related disorder.

    Method
    We administered a standardized test battery to examine speech and oral motor function, receptive and expressive language, non-verbal cognition, and adaptive behaviour. Clinical history and cognitive assessments were analysed together with speech and language findings.

    Results
    Twenty-nine patients (17 females, 12 males; mean age 9y 6mo; median age 8y [range 2y 7mo–33y]; SD 6y 5mo) with pathogenic FOXP1 variants (14 truncating, three missense, three splice site, one in-frame deletion, eight cytogenic deletions; 28 out of 29 were de novo variants) were studied. All had atypical speech, with 21 being verbal and eight minimally verbal. All verbal patients had dysarthric and apraxic features, with phonological deficits in most (14 out of 16). Language scores were low overall. In the 21 individuals who carried truncating or splice site variants and small deletions, expressive abilities were relatively preserved compared with comprehension.

    Interpretation
    FOXP1-related disorder is characterized by a complex speech and language phenotype with prominent dysarthria, broader motor planning and programming deficits, and linguistic-based phonological errors. Diagnosis of the speech phenotype associated with FOXP1-related dysfunction will inform early targeted therapy.

    Additional information

    figure S1 table S1
  • Brand, S., & Ernestus, M. (2021). Reduction of word-final obstruent-liquid-schwa clusters in Parisian French. Corpus Linguistics and Linguistic Theory, 17(1), 249-285. doi:10.1515/cllt-2017-0067.

    Abstract

    This corpus study investigated pronunciation variants of word-final obstruent-liquid-schwa (OLS) clusters in nouns in casual Parisian French. Results showed that at least one phoneme was absent in 80.7% of the 291 noun tokens in the dataset, and that the whole cluster was absent (e.g., [mis] for ministre) in no less than 15.5% of the tokens. We demonstrate that phonemes are not always completely absent, but that they may leave traces on neighbouring phonemes. Further, the clusters display undocumented voice assimilation patterns. Statistical modelling showed that a phoneme is most likely to be absent if the following phoneme is also absent. The durations of the phonemes are conditioned particularly by the position of the word in the prosodic phrase. We argue, on the basis of three different types of evidence, that in French word-final OLS clusters, the absence of obstruents is mainly due to gradient reduction processes, whereas the absence of schwa and liquids may also be due to categorical deletion processes.
  • Brand, S., & Ernestus, M. (2015). Reduction of obstruent-liquid-schwa clusters in casual French. In The Scottish Consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This study investigated pronunciation variants of word-final obstruent-liquid-schwa (OLS) clusters in casual French and the variables predicting the absence of the phonemes in these clusters. In a dataset of 291 noun tokens extracted from a corpus of casual conversations, we observed that in 80.7% of the tokens, at least one phoneme was absent and that in no less than 15.5% the whole cluster was absent (e.g., /mis/ for ministre). Importantly, the probability of a phoneme being absent was higher if the following phoneme was absent as well. These data show that reduction can affect several phonemes at once and is not restricted to just a handful of (function) words. Moreover, our results demonstrate that the absence of each single phoneme is affected by the speaker's tendency to increase ease of articulation and to adapt a word's pronunciation variant to the time available.
  • Brand, S. (2017). The processing of reduced word pronunciation variants by natives and learners: Evidence from French casual speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Brandt, S., Nitschke, S., & Kidd, E. (2017). Priming the comprehension of German object relative clauses. Language Learning and Development, 13(3), 241-261. doi:10.1080/15475441.2016.1235500.

    Abstract

    Structural priming is a useful laboratory-based technique for investigating how children respond to temporary changes in the distribution of structures in their input. In the current study we investigated whether increasing the number of object relative clauses (RCs) in German-speaking children’s input changes their processing preferences for ambiguous RCs. Fifty-one 6-year-olds and 54 9-year-olds participated in a priming task that (i) gauged their baseline interpretations for ambiguous RC structures, (ii) primed an object-RC interpretation of ambiguous RCs, and (iii) determined whether priming persevered beyond immediate prime-target pairs. The 6-year-old children showed no priming effect, whereas the 9-year-old group showed robust priming that was long-lasting. Unlike in studies of priming in production, priming did not increase in magnitude when there was lexical overlap between prime and target. Overall, the results suggest that increased exposure to object RCs facilitates children’s interpretation of this otherwise infrequent structure, but only in older children. The implications for acquisition theory are discussed.
  • Brascamp, J., Klink, P., & Levelt, W. J. M. (2015). The ‘laws’ of binocular rivalry: 50 years of Levelt’s propositions. Vision Research, 109, 20-37. doi:10.1016/j.visres.2015.02.019.

    Abstract

    It has been fifty years since Levelt’s monograph On Binocular Rivalry (1965) was published, but its four propositions that describe the relation between stimulus strength and the phenomenology of binocular rivalry remain a benchmark for theorists and experimentalists even today. In this review, we will revisit the original conception of the four propositions and the scientific landscape in which this happened. We will also provide a brief update concerning distributions of dominance durations, another aspect of Levelt’s monograph that has maintained a prominent presence in the field. In a critical evaluation of Levelt’s propositions against current knowledge of binocular rivalry we will then demonstrate that the original propositions are not completely compatible with what is known today, but that they can, in a straightforward way, be modified to encapsulate the progress that has been made over the past fifty years. The resulting modified propositions are shown to apply to a broad range of bistable perceptual phenomena, not just binocular rivalry, and they allow important inferences about the underlying neural systems. We argue that these inferences reflect canonical neural properties that play a role in visual perception in general, and we discuss ways in which future research can build on the work reviewed here to attain a better understanding of these properties.
  • Brehm, L., & Meyer, A. S. (2021). Planning when to say: Dissociating cue use in utterance initiation using cross-validation. Journal of Experimental Psychology: General, 150(9), 1772-1799. doi:10.1037/xge0001012.

    Abstract

    In conversation, turns follow each other with minimal gaps. To achieve this, speakers must launch their utterances shortly before the predicted end of the partner’s turn. We examined the relative importance of cues to partner utterance content and partner utterance length for launching coordinated speech. In three experiments, Dutch adult participants had to produce prepared utterances (e.g., vier, “four”) immediately after a recording of a confederate’s utterance (zeven, “seven”). To assess the role of corepresenting content versus attending to speech cues in launching coordinated utterances, we varied whether the participant could see the stimulus being named by the confederate, the confederate prompt’s length, and whether within a block of trials, the confederate prompt’s length was predictable. We measured how these factors affected the gap between turns and the participants’ allocation of visual attention while preparing to speak. Using a machine-learning technique, model selection by k-fold cross-validation, we found that gaps were most strongly predicted by cues from the confederate speech signal, though some benefit was also conferred by seeing the confederate’s stimulus. This shows that, at least in a simple laboratory task, speakers rely more on cues in the partner’s speech than corepresentation of their utterance content.
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2021). Probabilistic online processing of sentence anomalies. Language, Cognition and Neuroscience, 36(8), 959-983. doi:10.1080/23273798.2021.1900579.

    Abstract

    Listeners can successfully interpret the intended meaning of an utterance even when it contains errors or other unexpected anomalies. The present work combines an online measure of attention to sentence referents (visual world eye-tracking) with offline judgments of sentence meaning to disclose how the interpretation of anomalous sentences unfolds over time in order to explore mechanisms of non-literal processing. We use a metalinguistic judgment in Experiment 1 and an elicited imitation task in Experiment 2. In both experiments, we focus on one morphosyntactic anomaly (Subject-verb agreement; The key to the cabinets literally *were … ) and one semantic anomaly (Without; Lulu went to the gym without her hat ?off) and show that non-literal referents to each are considered upon hearing the anomalous region of the sentence. This shows that listeners understand anomalies by overwriting or adding to an initial interpretation and that this occurs incrementally and adaptively as the sentence unfolds.
  • Brehm, L., & Goldrick, M. (2017). Distinguishing discrete and gradient category structure in language: Insights from verb-particle constructions. Journal of Experimental Psychology: Learning, Memory, and Cognition., 43(10), 1537-1556. doi:10.1037/xlm0000390.

    Abstract

    The current work uses memory errors to examine the mental representation of verb-particle constructions (VPCs; e.g., make up the story, cut up the meat). Some evidence suggests that VPCs are represented by a cline in which the relationship between the VPC and its component elements ranges from highly transparent (cut up) to highly idiosyncratic (make up). Other evidence supports a multiple class representation, characterizing VPCs as belonging to discretely separated classes differing in semantic and syntactic structure. We outline a novel paradigm to investigate the representation of VPCs in which we elicit illusory conjunctions, or memory errors sensitive to syntactic structure. We then use a novel application of piecewise regression to demonstrate that the resulting error pattern follows a cline rather than discrete classes. A preregistered replication verifies these findings, and a final preregistered study verifies that these errors reflect syntactic structure. This provides evidence for gradient rather than discrete representations across levels of representation in language processing.
  • Brehm, L., & Bock, K. (2017). Referential and lexical forces in number agreement. Language, Cognition and Neuroscience, 32(2), 129-146. doi:10.1080/23273798.2016.1234060.

    Abstract

    In work on grammatical agreement in sentence production, there are accounts of verb number formulation that emphasise the role of whole-structure properties and accounts that emphasise the role of word-driven properties. To evaluate these alternatives, we carried out two experiments that examined a referential (wholistic) contributor to agreement along with two lexical-semantic (local) factors. Both experiments gauged the accuracy and latency of inflected-verb production in order to assess how variations in grammatical number interacted with the other factors. The accuracy of verb production was modulated both by the referential effect of notional number and by the lexical-semantic effects of relatedness and category membership. As an index of agreement difficulty, latencies were little affected by either factor. The findings suggest that agreement is sensitive to referential as well as lexical forces and highlight the importance of lexical-structural integration in the process of sentence production.
  • Broeder, D., Brugman, H., Oostdijk, N., & Wittenburg, P. (2004). Towards Dynamic Corpora: Workshop on compiling and processing spoken corpora. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 59-62). Paris: European Language Resources Association.
  • Broeder, D., Wittenburg, P., & Crasborn, O. (2004). Using Profiles for IMDI Metadata Creation. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 1317-1320). Paris: European Language Resources Association.
  • Broeder, D., Declerck, T., Romary, L., Uneson, M., Strömqvist, S., & Wittenburg, P. (2004). A large metadata domain of language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D. (2004). 40,000 IMDI sessions. Language Archive Newsletter, 1(4), 12-12.
  • Broeder, D., Nava, M., & Declerck, T. (2004). INTERA - a Distributed Domain of Metadata Resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D., & Offenga, F. (2004). IMDI Metadata Set 3.0. Language Archive Newsletter, 1(2), 3-3.
  • Broersma, M., & Kolkman, K. M. (2004). Lexical representation of non-native phonemes. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1241-1244). Seoul: Sunjijn Printing Co.
  • Brouwer, S., & Bradlow, A. R. (2015). The effect of target-background synchronicity on speech-in-speech recognition. In Scottish consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    The aim of the present study was to investigate whether speech-in-speech recognition is affected by variation in the target-background timing relationship. Specifically, we examined whether within trial synchronous or asynchronous onset and offset of the target and background speech influenced speech-in-speech recognition. Native English listeners were presented with English target sentences in the presence of English or Dutch background speech. Importantly, only the short-term temporal context (in terms of onset and offset synchrony or asynchrony of the target and background speech) varied across conditions. Participants’ task was to repeat back the English target sentences. The results showed an effect of synchronicity for English-in-English but not for English-in-Dutch recognition, indicating that familiarity with the English background’s lead in the asynchronous English-in-English condition might have attracted attention towards the English background. Overall, this study demonstrated that speech-in-speech recognition is sensitive to the target-background timing relationship, revealing an important role for variation in the local context of the target-background relationship as it extends beyond the limits of the time-frame of the to-be-recognized target sentence.
  • Brouwer, S., & Bradlow, A. R. (2015). The temporal dynamics of spoken word recognition in adverse listening conditions. Journal of Psycholinguistic Research. Advance online publication. doi:10.1007/s10936-015-9396-9.

    Abstract

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. candle), an onset competitor (e.g. candy), a rhyme competitor (e.g. sandal), and an unrelated distractor (e.g. lemon). Target words were presented in quiet, mixed with broadband noise, or mixed with background speech. Results showed that lexical competition changes throughout the observation window as a function of what is presented in the background. These findings suggest that, rather than being strictly sequential, stream segregation and lexical competition interact during spoken word recognition.
  • Brown, P. (2004). Position and motion in Tzeltal frog stories: The acquisition of narrative style. In S. Strömqvist, & L. Verhoeven (Eds.), Relating events in narrative: Typological and contextual perspectives (pp. 37-57). Mahwah: Erlbaum.

    Abstract

    How are events framed in narrative? Speakers of English (a 'satellite-framed' language), when 'reading' Mercer Mayer's wordless picture book 'Frog, Where Are You?', find the story self-evident: a boy has a dog and a pet frog; the frog escapes and runs away; the boy and dog look for it across hill and dale, through woods and over a cliff, until they find it and return home with a baby frog child of the original pet frog. In Tzeltal, as spoken in a Mayan community in southern Mexico, the story is somewhat different, because the language structures event descriptions differently. Tzeltal is in part a 'verb-framed' language with a set of Path-encoding motion verbs, so that the bare bones of the Frog story can consist of verbs translating as 'go'/'pass by'/'ascend'/ 'descend'/ 'arrive'/'return'. But Tzeltal also has satellite-framing adverbials, grammaticized from the same set of motion verbs, which encode the direction of motion or the orientation of static arrays. Furthermore, motion is not generally encoded barebones, but vivid pictorial detail is provided by positional verbs which can describe the position of the Figure as an outcome of a motion event; motion and stasis are thereby combined in a single event description. (For example: jipot jawal "he has been thrown (by the deer) lying_face_upwards_spread-eagled".) This paper compares the use of these three linguistic resources in frog narratives from 14 Tzeltal adults and 21 children, looks at their development in the narratives of children between the ages of 4-12, and considers the results in relation to those from Berman and Slobin's (1996) comparative study of adult and child Frog stories.
  • Brown, P., Sicoli, M. A., & Le Guen, O. (2021). Cross-speaker repetition and epistemic stance in Tzeltal, Yucatec, and Zapotec conversations. Journal of Pragmatics, 183, 256-272. doi:10.1016/j.pragma.2021.07.005.

    Abstract

    As a turn-design strategy, repeating another has been described for English as a fairly restricted way of constructing a response, which, through re-saying what another speaker just said, is exploitable for claiming epistemic primacy, and thus avoided when a second speaker has no direct experience. Conversations in Mesoamerican languages present a challenge to the generality of this claim. This paper examines the epistemics of dialogic repetition in video-recordings of conversations in three Indigenous languages of Mexico: Tzeltal and Yucatec Maya, both spoken in southeastern Mexico, and Lachixío Zapotec, spoken in Oaxaca. We develop a typology of repetition in different sequential environments. We show that while the functions of repeats in Mesoamerica overlap with the range of repeat functions described for English, there is an additional epistemic environment in the Mesoamerican routine of repeating for affirmation: a responding speaker can repeat to affirm something introduced by another speaker of which s/he has no prior knowledge. We argue that, while dialogic repetition is a universally available turn-design strategy that makes epistemics potentially relevant, cross-cultural comparison reveals that cultural preferences intervene such that, in Mesoamerican conversations, repetition co-constructs knowledge as a collective process over which no individual participant has final authority or ownership.
  • Brown, A. R., Pouw, W., Brentari, D., & Goldin-Meadow, S. (2021). People are less susceptible to illusion when they use their hands to communicate rather than estimate. Psychological Science, 32, 1227-1237. doi:10.1177/0956797621991552.

    Abstract

    When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.
  • Brown, P. (1998). Children's first verbs in Tzeltal: Evidence for an early verb category. Linguistics, 36(4), 713-753.

    Abstract

    A major finding in studies of early vocabulary acquisition has been that children tend to learn a lot of nouns early but make do with relatively few verbs, among which semantically general-purpose verbs like do, make, get, have, give, come, go, and be play a prominent role. The preponderance of nouns is explained in terms of nouns labelling concrete objects being “easier” to learn than verbs, which label relational categories. Nouns label “natural categories” observable in the world, verbs label more linguistically and culturally specific categories of events linking objects belonging to such natural categories (Gentner 1978, 1982; Clark 1993). This view has been challenged recently by data from children learning certain non-Indo-European languages like Korean, where children have an early verb explosion and verbs dominate in early child utterances. Children learning the Mayan language Tzeltal also acquire verbs early, prior to any noun explosion as measured by production. Verb types are roughly equivalent to noun types in children’s beginning production vocabulary and soon outnumber them. At the one-word stage children’s verbs mostly have the form of a root stripped of affixes, correctly segmented despite structural difficulties. Quite early (before the MLU 2.0 point) there is evidence of productivity of some grammatical markers (although they are not always present): the person-marking affixes cross-referencing core arguments, and the completive/incompletive aspectual distinctions. The Tzeltal facts argue against a natural-categories explanation for children’s early vocabulary, in favor of a view emphasizing the early effects of language-specific properties of the input. They suggest that when and how a child acquires a “verb” category is centrally influenced by the structural properties of the input, and that the semantic structure of the language - where the referential load is concentrated - plays a fundamental role in addition to distributional facts.
  • Brown, P. (1998). Conversational structure and language acquisition: The role of repetition in Tzeltal adult and child speech. Journal of Linguistic Anthropology, 8(2), 197-221. doi:10.1525/jlin.1998.8.2.197.

    Abstract

    When Tzeltal children in the Mayan community of Tenejapa, in southern Mexico, begin speaking, their production vocabulary consists predominantly of verb roots, in contrast to the dominance of nouns in the initial vocabulary of first‐language learners of Indo‐European languages. This article proposes that a particular Tzeltal conversational feature—known in the Mayanist literature as "dialogic repetition"—provides a context that facilitates the early analysis and use of verbs. Although Tzeltal babies are not treated by adults as genuine interlocutors worthy of sustained interaction, dialogic repetition in the speech the children are exposed to may have an important role in revealing to them the structural properties of the language, as well as in socializing the collaborative style of verbal interaction adults favor in this community.
  • Brown, P. (1998). Early Tzeltal verbs: Argument structure and argument representation. In E. Clark (Ed.), Proceedings of the 29th Annual Stanford Child Language Research Forum (pp. 129-140). Stanford: CSLI Publications.

    Abstract

    The surge of research activity focussing on children's acquisition of verbs (e.g., Tomasello and Merriman 1996) addresses some fundamental questions: Just how variable across languages, and across individual children, is the process of verb learning? How specific are arguments to particular verbs in early child language? How does the grammatical category 'Verb' develop? The position of Universal Grammar, that a verb category is early, contrasts with that of Tomasello (1992), Pine and Lieven and their colleagues (1996, in press), and many others, that children develop a verb category slowly, gradually building up subcategorizations of verbs around pragmatic, syntactic, and semantic properties of the language they are exposed to. On this latter view, one would expect the language which the child is learning, the cultural milieu and the nature of the interactions in which the child is engaged, to influence the process of acquiring verb argument structures. This paper explores these issues by examining the development of argument representation in the Mayan language Tzeltal, in both its lexical and verbal cross-referencing forms, and analyzing the semantic and pragmatic factors influencing the form argument representation takes. Certain facts about Tzeltal (the ergative/ absolutive marking, the semantic specificity of transitive and positional verbs) are proposed to affect the representation of arguments. The first 500 multimorpheme combinations of 3 children (aged between 1;8 and 2;4) are examined. It is argued that there is no evidence of semantically light 'pathbreaking' verbs (Ninio 1996) leading the way into word combinations. There is early productivity of cross-referencing affixes marking A, S, and O arguments (although there are systematic omissions). 
The paper assesses the respective contributions of three kinds of factors to these results - structural (regular morphology), semantic (verb specificity) and pragmatic (the nature of Tzeltal conversational interaction).
  • Brown, P. (1998). [Review of the book by A.J. Wootton, Interaction and the development of mind]. Journal of the Royal Anthropological Institute, 4(4), 816-817.
  • Brown, P., & Levinson, S. C. (2004). Frames of spatial reference and their acquisition in Tenejapan Tzeltal. In A. Assmann, U. Gaier, & G. Trommsdorff (Eds.), Zwischen Literatur und Anthropologie: Diskurse, Medien, Performanzen (pp. 285-314). Tübingen: Gunter Narr.

    Abstract

    This is a reprint of the Brown and Levinson 2000 article.
  • Brown, P., Levinson, S. C., & Senft, G. (2004). Initial references to persons and places. In A. Majid (Ed.), Field Manual Volume 9 (pp. 37-44). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492929.

    Abstract

    This task has two parts: (i) video-taped elicitation of the range of possibilities for referring to persons and places, and (ii) observations of (first) references to persons and places in video-taped natural interaction. The goal of this task is to establish the repertoires of referential terms (and other practices) used for referring to persons and to places in particular languages and cultures, and provide examples of situated use of these kinds of referential practices in natural conversation. This data will form the basis for cross-language comparison, and for formulating hypotheses about general principles underlying the deployment of such referential terms in natural language usage.
  • Brown, P., Gaskins, S., Lieven, E., Striano, T., & Liszkowski, U. (2004). Multimodal multiperson interaction with infants aged 9 to 15 months. In A. Majid (Ed.), Field Manual Volume 9 (pp. 56-63). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492925.

    Abstract

    Interaction, for all that it has an ethological base, is culturally constituted, and how new social members are enculturated into the interactional practices of the society is of critical interest to our understanding of interaction – how much is learned, how variable is it across cultures – as well as to our understanding of the role of culture in children’s social-cognitive development. The goal of this task is to document the nature of caregiver infant interaction in different cultures, especially during the critical age of 9-15 months when children come to have an understanding of others’ intentions. This is of interest to all students of interaction; it does not require specialist knowledge of children.
  • Brown, P. (1998). La identificación de las raíces verbales en Tzeltal (Maya): Cómo lo hacen los niños? Función, 17-18, 121-146.

    Abstract

    This is a Spanish translation of Brown 1997.
  • Brown, P. (2015). Language, culture, and spatial cognition. In F. Sharifian (Ed.), Routledge Handbook on Language and Culture (pp. 294-309). London: Routledge.
  • Brown, P. (1998). How and why are women more polite: Some evidence from a Mayan community. In J. Coates (Ed.), Language and gender (pp. 81-99). Oxford: Blackwell.
  • Brown, P. (1997). Isolating the CVC root in Tzeltal Mayan: A study of children's first verbs. In E. V. Clark (Ed.), Proceedings of the 28th Annual Child Language Research Forum (pp. 41-52). Stanford, CA: CSLI/University of Chicago Press.

    Abstract

    How do children isolate the semantic package contained in verb roots in the Mayan language Tzeltal? One might imagine that the canonical CVC shape of roots characteristic of Mayan languages would make the job simple, but the root is normally preceded and followed by affixes which mask its identity. Pye (1983) demonstrated that, in Kiche' Mayan, prosodic salience overrides semantic salience, and children's first words in Kiche' are often composed of only the final (stressed) syllable constituted by the final consonant of the CVC root and a 'meaningless' termination suffix. Intonation thus plays a crucial role in early Kiche' morphological development. Tzeltal presents a rather different picture: The first words of children around the age of 1;6 are bare roots, children strip off all prefixes and suffixes which are obligatory in adult speech. They gradually add them, starting with the suffixes (which receive the main stress), but person prefixes are omitted in some contexts past a child's third birthday, and one obligatory aspectual prefix (x-) is systematically omitted by the four children in my longitudinal study even after they are four years old. Tzeltal children's first verbs generally show faultless isolation of the root. An account in terms of intonation or stress cannot explain this ability (the prefixes are not all syllables; the roots are not always stressed). This paper suggests that probable clues include the fact that the CVC root stays constant across contexts (with some exceptions) whereas the affixes vary, that there are some linguistic contexts where the root occurs without any prefixes (relatively frequent in the input), and that the Tzeltal discourse convention of responding by repeating with appropriate deictic alternation (e.g., "I see it." "Oh, you see it.") highlights the root.
  • Brown, P. (2015). Space: Linguistic expression of. In J. D. Wright (Ed.), International Encyclopedia of the Social and Behavioral Sciences (2nd ed.) Vol. 23 (pp. 89-93). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.57017-2.
  • Brown, P. (2017). Politeness and impoliteness. In Y. Huang (Ed.), Oxford handbook of pragmatics (pp. 383-399). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199697960.013.16.

    Abstract

    This article selectively reviews the literature on politeness across different disciplines—linguistics, anthropology, communications, conversation analysis, social psychology, and sociology—and critically assesses how both theoretical approaches to politeness and research on linguistic politeness phenomena have evolved over the past forty years. Major new developments include a shift from predominantly linguistic approaches to those examining politeness and impoliteness as processes that are embedded and negotiated in interactional and cultural contexts, as well as a greater focus on how both politeness and interactional confrontation and conflict fit into our developing understanding of human cooperation and universal aspects of human social interaction.
  • Brown, P. (2015). Politeness and language. In J. D. Wright (Ed.), The International Encyclopedia of the Social and Behavioural Sciences (IESBS), (2nd ed.) (pp. 326-330). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.53072-4.
  • Brown, P. (1995). Politeness strategies and the attribution of intentions: The case of Tzeltal irony. In E. Goody (Ed.), Social intelligence and interaction (pp. 153-174). Cambridge: Cambridge University Press.

    Abstract

    In this paper I take up the idea that human thinking is systematically biased in the direction of interactive thinking (E. Goody's anticipatory interactive planning), that is, that humans are peculiarly good at, and inordinately prone to, attributing intentions and goals to one another (as well as to non-humans), and that they routinely orient to presumptions about each other's intentions in what they say and do. I explore the implications of that idea for an understanding of politeness in interaction, taking as a starting point the Brown and Levinson (1987) model of politeness, which assumes interactive thinking, a notion implicit in the formulation of politeness as strategic orientation to face. Drawing on an analysis of the phenomenon of conventionalized ‘irony’ in Tzeltal, I emphasize that politeness does not inhere in linguistic form per se but is a matter of conveying a polite intention, and argue that Tzeltal irony provides a prime example of one way in which humans' highly-developed intellectual machinery for inferring alter's intentions is put to the service of social relationships.
  • Brown, P., & Levinson, S. C. (1998). Politeness, introduction to the reissue: A review of recent work. In A. Kasher (Ed.), Pragmatics: Vol. 6 Grammar, psychology and sociology (pp. 488-554). London: Routledge.

    Abstract

    This article is a reprint of chapter 1, the introduction to Brown and Levinson, 1987, Politeness: Some universals in language usage (Cambridge University Press).
  • Brown, P. (1994). The INs and ONs of Tzeltal locative expressions: The semantics of static descriptions of location. Linguistics, 32, 743-790.

    Abstract

    This paper explores how static topological spatial relations such as contiguity, contact, containment, and support are expressed in the Mayan language Tzeltal. Three distinct Tzeltal systems for describing spatial relationships - geographically anchored (place names, geographical coordinates), viewer-centered (deictic), and object-centered (body parts, relational nouns, and dispositional adjectives) - are presented, but the focus here is on the object-centered system of dispositional adjectives in static locative expressions. Tzeltal encodes shape/position/configuration gestalts in verb roots; predicates formed from these are an essential element in locative descriptions. Specificity of shape in the predicate allows spatial relations between figure and ground objects to be understood by implication. Tzeltal illustrates an alternative strategy to that of prepositional languages like English: rather than elaborating shape distinctions in the nouns and minimizing them in the locatives, Tzeltal encodes shape and configuration very precisely in verb roots, leaving many object nouns unspecified for shape. The Tzeltal case thus presents a direct challenge to cognitive science claims that, in both language and cognition, WHAT is kept distinct from WHERE.
  • Brown, P. (1976). Women and politeness: A new perspective on language and society. Reviews in Anthropology, 3, 240-249.
  • Brown-Schmidt, S., & Konopka, A. E. (2015). Processes of incremental message planning during conversation. Psychonomic Bulletin & Review, 22, 833-843. doi:10.3758/s13423-014-0714-2.

    Abstract

    Speaking begins with the formulation of an intended preverbal message and linguistic encoding of this information. The transition from thought to speech occurs incrementally, with cascading planning at subsequent levels of production. In this article, we aim to specify the mechanisms that support incremental message preparation. We contrast two hypotheses about the mechanisms responsible for incorporating message-level information into a linguistic plan. According to the Initial Preparation view, messages can be encoded as fluent utterances if all information is ready before speaking begins. By contrast, on the Continuous Incrementality view, messages can be continually prepared and updated throughout the production process, allowing for fluent production even if new information is added to the message while speaking is underway. Testing these hypotheses, eye-tracked speakers in two experiments produced unscripted, conjoined noun phrases with modifiers. Both experiments showed that new message elements can be incrementally incorporated into the utterance even after articulation begins, consistent with a Continuous Incrementality view of message planning, in which messages percolate to linguistic encoding immediately as that information becomes available in the mind of the speaker. We conclude by discussing the functional role of incremental message planning in conversational speech and the situations in which this continuous incremental planning would be most likely to be observed.
  • Brucato, N., Guadalupe, T., Franke, B., Fisher, S. E., & Francks, C. (2015). A schizophrenia-associated HLA locus affects thalamus volume and asymmetry. Brain, Behavior, and Immunity, 46, 311-318. doi:10.1016/j.bbi.2015.02.021.

    Abstract

    Genes of the Major Histocompatibility Complex (MHC) have recently been shown to have neuronal functions in the thalamus and hippocampus. Common genetic variants in the Human Leukocyte Antigens (HLA) region, the human homologue of the MHC locus, are associated with small effects on susceptibility to schizophrenia, while volumetric changes of the thalamus and hippocampus have also been linked to schizophrenia. We therefore investigated whether common variants of the HLA would affect volumetric variation of the thalamus and hippocampus. We analyzed thalamus and hippocampus volumes, as measured using structural magnetic resonance imaging, in 1,265 healthy participants. These participants had also been genotyped using genome-wide single nucleotide polymorphism (SNP) arrays. We imputed genotypes for single nucleotide polymorphisms at high density across the HLA locus, as well as HLA allotypes and HLA amino acids, by use of a reference population dataset that was specifically targeted to the HLA region. We detected a significant association of the SNP rs17194174 with thalamus volume (nominal P=0.0000017, corrected P=0.0039), as well as additional SNPs within the same region of linkage disequilibrium. This effect was largely lateralized to the left thalamus and is localized within a genomic region previously associated with schizophrenia. The associated SNPs are also clustered within a potential regulatory element, and a region of linkage disequilibrium that spans genes expressed in the thalamus, including HLA-A. Our data indicate that genetic variation within the HLA region influences the volume and asymmetry of the human thalamus. The molecular mechanisms underlying this association may relate to HLA influences on susceptibility to schizophrenia.
  • Bruggeman, L., & Janse, E. (2015). Older listeners' decreased flexibility in adjusting to changes in speech signal reliability. In M. Wolters, J. Livingstone, B. Beattie, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Under noise or speech reductions, young adult listeners flexibly adjust the parameters of lexical activation and competition to allow for speech signal unreliability. Consequently, mismatches in the input are treated more leniently such that lexical candidates are not immediately deactivated. Using eyetracking, we assessed whether this modulation of recognition dynamics also occurs for older listeners. Dutch participants (aged 60+) heard Dutch sentences containing a critical word while viewing displays of four line drawings. The name of one picture shared either onset or rhyme with the critical word (i.e., was a phonological competitor). Sentences were either clear and noise-free, or had several phonemes replaced by bursts of noise. A larger preference for onset competitors than for rhyme competitors was observed in both clear and noise conditions; performance did not differ across conditions. This suggests that dynamic adjustment of spoken-word recognition parameters in response to noise is less available to older listeners.
  • Brugman, H. (2004). ELAN 2.2 now available. Language Archive Newsletter, 1(3), 13-14.
  • Brugman, H., Sloetjes, H., Russel, A., & Klassmann, A. (2004). ELAN 2.3 available. Language Archive Newsletter, 1(4), 13-13.
  • Brugman, H. (2004). ELAN Releases 2.0.2 and 2.1. Language Archive Newsletter, 1(2), 4-4.