Publications

  • Bauer, B. L. M. (2009). Strategies of definiteness in Latin: Implications for early Indo-European. In V. Bubenik, J. Hewson, & S. Rose (Eds.), Grammatical change in Indo-European languages: Papers presented at the workshop on Indo-European Linguistics at the XVIIIth International Conference on Historical Linguistics, Montreal, 2007 (pp. 71-87). Amsterdam: Benjamins.
  • Bauer, B. L. M. (2009). Word order. In P. Baldi, & P. Cuzzolin (Eds.), New Perspectives on Historical Latin Syntax: Vol 1: Syntax of the Sentence (pp. 241-316). Berlin: Mouton de Gruyter.
  • Becker, M., Devanna, P., Fisher, S. E., & Vernes, S. C. (2015). A chromosomal rearrangement in a child with severe speech and language disorder separates FOXP2 from a functional enhancer. Molecular Cytogenetics, 8: 69. doi:10.1186/s13039-015-0173-0.

    Abstract

    Mutations of FOXP2 in 7q31 cause a rare disorder involving speech apraxia, accompanied by expressive and receptive language impairments. A recent report described a child with speech and language deficits, and a genomic rearrangement affecting chromosomes 7 and 11. One breakpoint mapped to 7q31 and, although outside its coding region, was hypothesised to disrupt FOXP2 expression. We identified an element 2 kb downstream of this breakpoint with epigenetic characteristics of an enhancer. We show that this element drives reporter gene expression in human cell-lines. Thus, displacement of this element by translocation may disturb gene expression, contributing to the observed language phenotype.
  • Becker, R., Pefkou, M., Michel, C. M., & Hervais-Adelman, A. (2013). Left temporal alpha-band activity reflects single word intelligibility. Frontiers in Systems Neuroscience, 7: 121. doi:10.3389/fnsys.2013.00121.

    Abstract

    The electroencephalographic (EEG) correlates of degraded speech perception have been explored in a number of recent studies. However, such investigations have often been inconclusive as to whether observed differences in brain responses between conditions result from different acoustic properties of more or less intelligible stimuli or whether they relate to cognitive processes implicated in comprehending challenging stimuli. In this study we used noise vocoding to spectrally degrade monosyllabic words in order to manipulate their intelligibility. We used spectral rotation to generate incomprehensible control conditions matched in terms of spectral detail. We recorded EEG from 14 volunteers who listened to a series of noise vocoded (NV) and noise-vocoded spectrally-rotated (rNV) words, while they carried out a detection task. We specifically sought components of the EEG response that showed an interaction between spectral rotation and spectral degradation. This reflects those aspects of the brain electrical response that are related to the intelligibility of acoustically degraded monosyllabic words, while controlling for spectral detail. An interaction between spectral complexity and rotation was apparent in both evoked and induced activity. Analyses of event-related potentials showed an interaction effect for a P300-like component at several centro-parietal electrodes. Time-frequency analysis of the EEG signal in the alpha-band revealed a monotonic increase in event-related desynchronization (ERD) for the NV but not the rNV stimuli in the alpha band at a left temporo-central electrode cluster from 420-560 ms reflecting a direct relationship between the strength of alpha-band ERD and intelligibility. By matching NV words with their incomprehensible rNV homologues, we reveal the spatiotemporal pattern of evoked and induced processes involved in degraded speech perception, largely uncontaminated by purely acoustic effects.
  • Begeer, S., Malle, B. F., Nieuwland, M. S., & Keysar, B. (2010). Using theory of mind to represent and take part in social interactions: Comparing individuals with high-functioning autism and typically developing controls. European Journal of Developmental Psychology, 7(1), 104-122. doi:10.1080/17405620903024263.

    Abstract

    The literature suggests that individuals with autism spectrum disorders (ASD) are deficient in their Theory of Mind (ToM) abilities. They sometimes do not seem to appreciate that behaviour is motivated by underlying mental states. If this is true, then individuals with ASD should also be deficient when they use their ToM to represent and take part in dyadic interactions. In the current study we compared the performance of normally intelligent adolescents and adults with ASD to typically developing controls. In one task they heard a narrative about an interaction and then retold it. In a second task they played a communication game that required them to take into account another person's perspective. We found that when they described people's behaviour the ASD individuals used fewer mental terms in their story narration, suggesting a lower tendency to represent interactions in mentalistic terms. Surprisingly, ASD individuals and control participants showed the same level of performance in the communication game that required them to distinguish between their beliefs and the other's beliefs. Given that ASD individuals show no deficiency in using their ToM in real interaction, it is unlikely that they have a systematically deficient ToM.
  • Behrens, B., Flecken, M., & Carroll, M. (2013). Progressive Attraction: On the Use and Grammaticalization of Progressive Aspect in Dutch, Norwegian, and German. Journal of Germanic Linguistics, 25(2), 95-136. doi:10.1017/S1470542713000020.

    Abstract

    This paper investigates the use of aspectual constructions in Dutch, Norwegian, and German, languages in which aspect marking that presents events explicitly as ongoing is optional. Data were elicited under similar conditions with native speakers in the three countries. We show that while German speakers make insignificant use of aspectual constructions, usage patterns in Norwegian and Dutch present an interesting case of overlap, as well as differences, with respect to a set of factors that attract or constrain the use of different constructions. The results indicate that aspect marking is grammaticalizing in Dutch, but there are no clear signs of a similar process in Norwegian.
  • Belke, E., Brysbaert, M., Meyer, A. S., & Ghyselinck, M. (2005). Age of acquisition effects in picture naming: Evidence for a lexical-semantic competition hypothesis. Cognition, 96, B45-B54. doi:10.1016/j.cognition.2004.11.006.

    Abstract

    In many tasks the effects of frequency and age of acquisition (AoA) on reaction latencies are similar in size. However, in picture naming the AoA-effect is often significantly larger than expected on the basis of the frequency-effect. Previous explanations of this frequency-independent AoA-effect have attributed it to the organisation of the semantic system or to the way phonological word forms are stored in the mental lexicon. Using a semantic blocking paradigm, we show that semantic context effects on naming latencies are more pronounced for late-acquired than for early-acquired words. This interaction between AoA and naming context is likely to arise during lexical-semantic encoding, which we put forward as the locus for the frequency-independent AoA-effect.
  • Belke, E., Meyer, A. S., & Damian, M. F. (2005). Refractory effects in picture naming as assessed in a semantic blocking paradigm. The Quarterly Journal of Experimental Psychology Section A, 58, 667-692. doi:10.1080/02724980443000142.

    Abstract

    In the cyclic semantic blocking paradigm participants repeatedly name sets of objects with semantically related names (homogeneous sets) or unrelated names (heterogeneous sets). The naming latencies are typically longer in related than in unrelated sets. In Experiment 1 we replicated this semantic blocking effect and demonstrated that the effect only arose after all objects of a set had been shown and named once. In Experiment 2, the objects of a set were presented simultaneously (instead of on successive trials). Evidence for semantic blocking was found in the naming latencies and in the gaze durations for the objects, which were longer in homogeneous than in heterogeneous sets. For the gaze-to-speech lag between the offset of gaze on an object and the onset of the articulation of its name, a repetition priming effect was obtained but no blocking effect. Experiment 3 showed that the blocking effect for speech onset latencies generalized to new, previously unnamed lexical items. We propose that the blocking effect is due to refractory behaviour in the semantic system.
  • Berck, P., Bibiko, H.-J., Kemps-Snijders, M., Russel, A., & Wittenburg, P. (2006). Ontology-based language archive utilization. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 2295-2298).
  • Berends, S., Veenstra, A., & Van Hout, A. (2010). 'Nee, ze heeft er twee': Acquisition of the Dutch quantitative 'er'. Groninger Arbeiten zur Germanistischen Linguistik, 51, 1-7. Retrieved from http://irs.ub.rug.nl/dbi/4ef4a0b3eafcb.

    Abstract

    We present the first study on the acquisition of the Dutch quantitative pronoun er in sentences such as de vrouw draagt er drie ‘the woman is carrying three.’ There is a large literature on Dutch children’s interpretation of pronouns and a few recent production studies, all specifically looking at 3rd person singular pronouns and the so-called Delay of Principle B effect (Coopmans & Philip, 1996; Koster, 1993; Spenader, Smits and Hendriks, 2009). However, no one has studied children’s use of quantitative er. Dutch is the only Germanic language with such a pronoun.
  • Berghuis, B., De Kovel, C. G. F., van Iterson, L., Lamberts, R. J., Sander, J. W., Lindhout, D., & Koeleman, B. P. C. (2015). Complex SCN8A DNA-abnormalities in an individual with therapy resistant absence epilepsy. Epilepsy Research, 115, 141-144. doi:10.1016/j.eplepsyres.2015.06.007.

    Abstract

    Background: De novo SCN8A missense mutations have been identified as a rare dominant cause of epileptic encephalopathy. We describe a person with epileptic encephalopathy associated with a mosaic deletion of the SCN8A gene. Methods: Array comparative genome hybridization was used to identify chromosomal abnormalities. Next Generation Sequencing was used to screen for variants in known and candidate epilepsy genes. A single nucleotide polymorphism array was used to test whether the SCN8A variants were in cis or in trans. Results: We identified a de novo mosaic deletion of exons 2–14 of SCN8A, and a rare maternally inherited missense variant on the other allele in a woman presenting with absence seizures, challenging behavior, intellectual disability and QRS-fragmentation on the ECG. We also found a variant in SCN5A. Conclusions: The combination of a rare missense variant with a de novo mosaic deletion of a large part of the SCN8A gene suggests that other possible mechanisms for SCN8A mutations may cause epilepsy; loss of function, genetic modifiers and cellular interference may play a role. This case expands the phenotype associated with SCN8A mutations, with absence epilepsy and regression in language and memory skills.
  • Bergmann, C., Ten Bosch, L., Fikkert, P., & Boves, L. (2013). A computational model to investigate assumptions in the headturn preference procedure. Frontiers in Psychology, 4: 676. doi:10.3389/fpsyg.2013.00676.

    Abstract

    In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioral differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects in the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarization and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximize cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first two assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviors observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.
  • Bergmann, C., Paulus, M., & Fikkert, P. (2010). A closer look at pronoun comprehension: Comparing different methods. In J. Costa, A. Castro, M. Lobo, & F. Pratas (Eds.), Language Acquisition and Development: Proceedings of GALA 2009 (pp. 53-61). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    External input is necessary to acquire language. Consequently, the comprehension of various constituents of language, such as lexical items or syntactic and semantic structures, should emerge at the same time as or even precede their production. However, in the case of pronouns this general assumption does not seem to hold. On the contrary, while children at the age of four use pronouns and reflexives appropriately during production (de Villiers et al., 2006), a number of comprehension studies across different languages found chance performance in pronoun trials up to the age of seven, which co-occurs with a high level of accuracy in reflexive trials (for an overview see e.g. Conroy et al., 2009; Elbourne 2005).
  • Bergmann, C., Gubian, M., & Boves, L. (2010). Modelling the effect of speaker familiarity and noise on infant word recognition. In Proceedings of the 11th Annual Conference of the International Speech Communication Association [Interspeech 2010] (pp. 2910-2913). ISCA.

    Abstract

    In the present paper we show that a general-purpose word learning model can simulate several important findings from recent experiments in language acquisition. Both the addition of background noise and varying the speaker have been found to influence infants’ performance during word recognition experiments. We were able to replicate this behaviour in our artificial word learning agent. We use the results to discuss both advantages and limitations of computational models of language acquisition.
  • Bergmann, C., Ten Bosch, L., Fikkert, P., & Boves, L. (2015). Modelling the Noise-Robustness of Infants’ Word Representations: The Impact of Previous Experience. PLoS One, 10(7): e0132245. doi:10.1371/journal.pone.0132245.

    Abstract

    During language acquisition, infants frequently encounter ambient noise. We present a computational model to address whether specific acoustic processing abilities are necessary to detect known words in moderate noise—an ability attested experimentally in infants. The model implements a general purpose speech encoding and word detection procedure. Importantly, the model contains no dedicated processes for removing or cancelling out ambient noise, and it can replicate the patterns of results obtained in several infant experiments. In addition to noise, we also addressed the role of previous experience with particular target words: does the frequency of a word matter, and does it play a role whether that word has been spoken by one or multiple speakers? The simulation results show that both factors affect noise robustness. We also investigated how robust word detection is to changes in speaker identity by comparing words spoken by known versus unknown speakers during the simulated test. This factor interacted with both noise level and past experience, showing that an increase in exposure is only helpful when a familiar speaker provides the test material. Added variability proved helpful only when encountering an unknown speaker. Finally, we addressed whether infants need to recognise specific words, or whether a more parsimonious explanation of infant behaviour, which we refer to as matching, is sufficient. Recognition involves a focus of attention on a specific target word, while matching only requires finding the best correspondence of acoustic input to a known pattern in the memory. Attending to a specific target word proves to be more noise robust, but a general word matching procedure can be sufficient to simulate experimental data stemming from young infants. A change from acoustic matching to targeted recognition provides an explanation of the improvements observed in infants around their first birthday. In summary, we present a computational model incorporating only the processes infants might employ when hearing words in noise. Our findings show that a parsimonious interpretation of behaviour is sufficient and we offer a formal account of emerging abilities.
  • Bethard, S., Lai, V. T., & Martin, J. (2009). Topic model analysis of metaphor frequency for psycholinguistic stimuli. In Proceedings of the NAACL HLT Workshop on Computational Approaches to Linguistic Creativity, Boulder, Colorado, June 4, 2009 (pp. 9-16). Stroudsburg, PA: Association for Computational Linguistics.

    Abstract

    Psycholinguistic studies of metaphor processing must control their stimuli not just for word frequency but also for the frequency with which a term is used metaphorically. Thus, we consider the task of metaphor frequency estimation, which predicts how often target words will be used metaphorically. We develop metaphor classifiers which represent metaphorical domains through Latent Dirichlet Allocation, and apply these classifiers to the target words, aggregating their decisions to estimate the metaphorical frequencies. Training on only 400 sentences, our models are able to achieve 61.3% accuracy on metaphor classification and 77.8% accuracy on HIGH vs. LOW metaphorical frequency estimation.
  • Bien, H., Levelt, W. J. M., & Baayen, R. H. (2005). Frequency effects in compound production. Proceedings of the National Academy of Sciences of the United States of America, 102(49), 17876-17881.

    Abstract

    Four experiments investigated the role of frequency information in compound production by independently varying the frequencies of the first and second constituent as well as the frequency of the compound itself. Pairs of Dutch noun-noun compounds were selected such that there was a maximal contrast for one frequency while matching the other two frequencies. In a position-response association task, participants first learned to associate a compound with a visually marked position on a computer screen. In the test phase, participants had to produce the associated compound in response to the appearance of the position mark, and we measured speech onset latencies. The compound production latencies varied significantly according to factorial contrasts in the frequencies of both constituting morphemes but not according to a factorial contrast in compound frequency, providing further evidence for decompositional models of speech production. In a stepwise regression analysis of the joint data of Experiments 1-4, however, compound frequency was a significant nonlinear predictor, with facilitation in the low-frequency range and a trend toward inhibition in the high-frequency range. Furthermore, a combination of structural measures of constituent frequencies and entropies explained significantly more variance than a strict decompositional model, including cumulative root frequency as the only measure of constituent frequency, suggesting a role for paradigmatic relations in the mental lexicon.
  • Blackwell, N. L., Perlman, M., & Fox Tree, J. E. (2015). Quotation as a multimodal construction. Journal of Pragmatics, 81, 1-7. doi:10.1016/j.pragma.2015.03.004.

    Abstract

    Quotations are a means to report a broad range of events in addition to speech, and often involve both vocal and bodily demonstration. The present study examined the use of quotation to report a variety of multisensory events (i.e., containing salient visible and audible elements) as participants watched and then described a set of video clips including human speech and animal vocalizations. We examined the relationship between demonstrations conveyed through the vocal versus bodily modality, comparing them across four common quotation devices (be like, go, say, and zero quotatives), as well as across direct and non-direct quotations and retellings. We found that direct quotations involved high levels of both vocal and bodily demonstration, while non-direct quotations involved lower levels in both these channels. In addition, there was a strong positive correlation between vocal and bodily demonstration for direct quotation. This result supports a Multimodal Hypothesis where information from the two channels arises from one central concept.
  • Blythe, J. (2010). From ethical datives to number markers in Murriny Patha. In R. Hendery, & J. Hendriks (Eds.), Grammatical change: Theory and description (pp. 157-187). Canberra: Pacific Linguistics.
  • Blythe, J. (2015). Other-initiated repair in Murrinh-Patha. Open Linguistics, 1, 283-308. doi:10.1515/opli-2015-0003.

    Abstract

    The range of linguistic structures and interactional practices associated with other-initiated repair (OIR) is surveyed for the Northern Australian language Murrinh-Patha. By drawing on a video corpus of informal Murrinh-Patha conversation, the OIR formats are compared in terms of their utility and versatility. Certain “restricted” formats have semantic properties that point to prior trouble source items. While these make the restricted repair initiators more specialised, the “open” formats are less well resourced semantically, which makes them more versatile. They tend to be used when the prior talk is potentially problematic in more ways than one. The open formats (especially thangku, “what?”) tend to solicit repair operations on each potential source of trouble, such that the resultant repair solution improves upon the trouble-source turn in several ways.
  • Blythe, J. (2010). Self-association in Murriny Patha talk-in-interaction. In I. Mushin, & R. Gardner (Eds.), Studies in Australian Indigenous Conversation [Special issue] (pp. 447-469). Australian Journal of Linguistics. doi:10.1080/07268602.2010.518555.

    Abstract

    When referring to persons in talk-in-interaction, interlocutors recruit the particular referential expressions that best satisfy both cultural and interactional contingencies, as well as the speaker’s own personal objectives. Regular referring practices reveal cultural preferences for choosing particular classes of reference forms for engaging in particular types of activities. When speakers of the northern Australian language Murriny Patha refer to each other, they display a clear preference for associating the referent to the current conversation’s participants. This preference for Association is normally achieved through the use of triangular reference forms such as kinterms. Triangulations are reference forms that link the person being spoken about to another specified person (e.g. Bill’s doctor). Triangulations are frequently used to associate the referent to the current speaker (e.g. my father), to an addressed recipient (your uncle) or co-present other (this bloke’s cousin). Murriny Patha speakers regularly associate key persons to themselves when making authoritative claims about items of business and important events. They frequently draw on kinship links when attempting to bolster their epistemic position. When speakers demonstrate their relatedness to the event’s protagonists, they ground their contribution to the discussion as being informed by appropriate genealogical connections (effectively, ‘I happen to know something about that. He was after all my own uncle’).
  • Blythe, J. (2013). Preference organization driving structuration: Evidence from Australian Aboriginal interaction for pragmatically motivated grammaticalization. Language, 89(4), 883-919.
  • Bock, K., Butterfield, S., Cutler, A., Cutting, J. C., Eberhard, K. M., & Humphreys, K. R. (2006). Number agreement in British and American English: Disagreeing to agree collectively. Language, 82(1), 64-113.

    Abstract

    British and American speakers exhibit different verb number agreement patterns when sentence subjects have collective head nouns. From linguistic and psycholinguistic accounts of how agreement is implemented, three alternative hypotheses can be derived to explain these differences. The hypotheses involve variations in the representation of notional number, disparities in how notional and grammatical number are used, and inequalities in the grammatical number specifications of collective nouns. We carried out a series of corpus analyses, production experiments, and norming studies to test these hypotheses. The results converge to suggest that British and American speakers are equally sensitive to variations in notional number and implement subject-verb agreement in much the same way, but are likely to differ in the lexical specifications of number for collectives. The findings support a psycholinguistic theory that explains verb and pronoun agreement within a parallel architecture of lexical and syntactic formulation.
  • Bod, R., Fitz, H., & Zuidema, W. (2006). On the structural ambiguity in natural language that the neural architecture cannot deal with [Commentary]. Behavioral and Brain Sciences, 29, 71-72. doi:10.1017/S0140525X06239025.

    Abstract

    We argue that van der Velde and de Kamps's model does not solve the binding problem but merely shifts the burden of constructing appropriate neural representations of sentence structure to unexplained preprocessing of the linguistic input. As a consequence, their model is not able to explain how various neural representations can be assigned to sentences that are structurally ambiguous.
  • De Boer, M., Toni, I., & Willems, R. M. (2013). What drives successful verbal communication? Frontiers in Human Neuroscience, 7: 622. doi:10.3389/fnhum.2013.00622.

    Abstract

    There is a vast amount of potential mappings between behaviors and intentions in communication: a behavior can indicate a multitude of different intentions, and the same intention can be communicated with a variety of behaviors. Humans routinely solve these many-to-many referential problems when producing utterances for an Addressee. This ability might rely on social cognitive skills, for instance, the ability to manipulate unobservable summary variables to disambiguate ambiguous behavior of other agents (“mentalizing”) and the drive to invest resources into changing and understanding the mental state of other agents (“communicative motivation”). Alternatively, the ambiguities of verbal communicative interactions might be solved by general-purpose cognitive abilities that process cues that are incidentally associated with the communicative interaction. In this study, we assess these possibilities by testing which cognitive traits account for communicative success during a verbal referential task. Cognitive traits were assessed with psychometric scores quantifying motivation, mentalizing abilities, and general-purpose cognitive abilities, taxing abstract visuo-spatial abilities. Communicative abilities of participants were assessed by using an on-line interactive task that required a speaker to verbally convey a concept to an Addressee. The communicative success of the utterances was quantified by measuring how frequently a number of Evaluators would infer the correct concept. Speakers with high motivational and general-purpose cognitive abilities generated utterances that were more easily interpreted. These findings extend to the domain of verbal communication the notion that motivational and cognitive factors influence the human ability to rapidly converge on shared communicative innovations.
  • Boersma, M., Kemner, C., de Reus, M. A., Collin, G., Snijders, T. M., Hofman, D., Buitelaar, J. K., Stam, C. J., & van den Heuvel, M. P. (2013). Disrupted functional brain networks in autistic toddlers. Brain Connectivity, 3(1), 41-49. doi:10.1089/brain.2012.0127.

    Abstract

    Communication and integration of information between brain regions play a key role in healthy brain function. Conversely, disruption in brain communication may lead to cognitive and behavioral problems. Autism is a neurodevelopmental disorder that is characterized by impaired social interactions and aberrant basic information processing. Aberrant brain connectivity patterns have indeed been hypothesized to be a key neural underpinning of autism. In this study, graph analytical tools are used to explore the possible deviant functional brain network organization in autism at a very early stage of brain development. Electroencephalography (EEG) recordings in 12 toddlers with autism (mean age 3.5 years) and 19 control subjects were used to assess interregional functional brain connectivity, with functional brain networks constructed at the level of temporal synchronization between brain regions underlying the EEG electrodes. Children with autism showed a significantly increased normalized path length and reduced normalized clustering, suggesting a reduced global communication capacity already during early brain development. In addition, whole brain connectivity was found to be significantly reduced in these young patients, suggesting an overall under-connectivity of functional brain networks in autism. Our findings support the hypothesis of abnormal neural communication in autism, with deviating effects already present at the early stages of brain development.
  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2015). Conversational interaction in the scanner: Mentalizing during language processing as revealed by MEG. Cerebral Cortex, 25(9), 3219-3234. doi:10.1093/cercor/bhu116.

    Abstract

    Humans are especially good at taking another’s perspective — representing what others might be thinking or experiencing. This “mentalizing” capacity is apparent in everyday human interactions and conversations. We investigated its neural basis using magnetoencephalography. We focused on whether mentalizing was engaged spontaneously and routinely to understand an utterance’s meaning or largely on-demand, to restore "common ground" when expectations were violated. Participants conversed with 1 of 2 confederate speakers and established tacit agreements about objects’ names. In a subsequent “test” phase, some of these agreements were violated by either the same or a different speaker. Our analysis of the neural processing of test phase utterances revealed recruitment of neural circuits associated with language (temporal cortex), episodic memory (e.g., medial temporal lobe), and mentalizing (temporo-parietal junction and ventro-medial prefrontal cortex). Theta oscillations (3 - 7 Hz) were modulated most prominently, and we observed phase coupling between functionally distinct neural circuits. The episodic memory and language circuits were recruited in anticipation of upcoming referring expressions, suggesting that context-sensitive predictions were spontaneously generated. In contrast, the mentalizing areas were recruited on-demand, as a means for detecting and resolving perceived pragmatic anomalies, with little evidence they were activated to make partner-specific predictions about upcoming linguistic utterances.
  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2013). "Are we still talking about the same thing?" MEG reveals perspective-taking in response to pragmatic violations, but not in anticipation. In M. Knauff, N. Pauen, I. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 215-220). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0066/index.html.

    Abstract

    The current study investigates whether mentalizing, or taking the perspective of your interlocutor, plays an essential role throughout a conversation or whether it is mostly used in reaction to misunderstandings. This study is the first to use a brain-imaging method, MEG, to answer this question. In a first phase of the experiment, MEG participants interacted "live" with a confederate who set naming precedents for certain pictures. In a later phase, these precedents were sometimes broken by a speaker who named the same picture in a different way. This could be done by the same speaker, who set the precedent, or by a different speaker. Source analysis of MEG data showed that in the 800 ms before the naming, when the picture was already on the screen, episodic memory and language areas were activated, but no mentalizing areas, suggesting that the speaker's naming intentions were not anticipated by the listener on the basis of shared experiences. Mentalizing areas only became activated after the same speaker had broken a precedent, which we interpret as a reaction to the violation of conversational pragmatics.
  • Bögels, S., & Torreira, F. (2015). Listeners use intonational phrase boundaries to project turn ends in spoken interaction. Journal of Phonetics, 52, 46-57. doi:10.1016/j.wocn.2015.04.004.

    Abstract

    In conversation, turn transitions between speakers often occur smoothly, usually within a time window of a few hundred milliseconds. It has been argued, on the basis of a button-press experiment [De Ruiter, J. P., Mitterer, H., & Enfield, N. J. (2006). Projecting the end of a speaker's turn: A cognitive cornerstone of conversation. Language, 82(3):515–535], that participants in conversation rely mainly on lexico-syntactic information when timing and producing their turns, and that they do not need to make use of intonational cues to achieve smooth transitions and avoid overlaps. In contrast to this view, but in line with previous observational studies, our results from a dialogue task and a button-press task involving questions and answers indicate that the identification of the end of intonational phrases is necessary for smooth turn-taking. In both tasks, participants never responded to questions (i.e., gave an answer or pressed a button to indicate a turn end) at turn-internal points of syntactic completion in the absence of an intonational phrase boundary. Moreover, in the button-press task, they often pressed the button at the same point of syntactic completion when the final word of an intonational phrase was cross-spliced at that location. Furthermore, truncated stimuli ending in a syntactic completion point but lacking an intonational phrase boundary led to significantly delayed button presses. In light of these results, we argue that earlier claims that intonation is not necessary for correct turn-end projection are misguided, and that research on turn-taking should continue to consider intonation as a source of turn-end cues along with other linguistic and communicative phenomena.
  • Bögels, S., Magyari, L., & Levinson, S. C. (2015). Neural signatures of response planning occur midway through an incoming question in conversation. Scientific Reports, 5: 12881. doi:10.1038/srep12881.

    Abstract

    A striking puzzle about language use in everyday conversation is that turn-taking latencies are usually very short, whereas planning language production takes much longer. This implies overlap between language comprehension and production processes, but the nature and extent of such overlap has never been studied directly. Combining an interactive quiz paradigm with EEG measurements in an innovative way, we show that production planning processes start as soon as possible, that is, within half a second after the answer to a question can be retrieved (up to several seconds before the end of the question). Localization of ERP data shows early activation even of brain areas related to late stages of production planning (e.g., syllabification). Finally, oscillation results suggest an attention switch from comprehension to production around the same time frame. This perspective from interactive language use throws new light on the performance characteristics that language competence involves.
  • Bögels, S., Kendrick, K. H., & Levinson, S. C. (2015). Never say no… How the brain interprets the pregnant pause in conversation. PLoS One, 10(12): e0145474. doi:10.1371/journal.pone.0145474.

    Abstract

    In conversation, negative responses to invitations, requests, offers, and the like are more likely to occur with a delay – conversation analysts talk of them as dispreferred. Here we examine the contrastive cognitive load ‘yes’ and ‘no’ responses make, either when relatively fast (300 ms after question offset) or delayed (1000 ms). Participants heard short dialogues contrasting in speed and valence of response while having their EEG recorded. We found that a fast ‘no’ evokes an N400-effect relative to a fast ‘yes’; however, this contrast disappeared in the delayed responses. ‘No’ responses, however, elicited a late frontal positivity both if they were fast and if they were delayed. We interpret these results as follows: a fast ‘no’ evoked an N400 because an immediate response is expected to be positive – this effect disappears as the response time lengthens because now in ordinary conversation the probability of a ‘no’ has increased. However, regardless of the latency of response, a ‘no’ response is associated with a late positivity, since a negative response is always dispreferred. Together these results show that negative responses to social actions exact a higher cognitive load, but especially when least expected, in immediate response.

  • Bögels, S., Schriefers, H., Vonk, W., Chwilla, D., & Kerkhofs, R. (2013). Processing consequences of superfluous and missing prosodic breaks in auditory sentence comprehension. Neuropsychologia, 51, 2715-2728. doi:10.1016/j.neuropsychologia.2013.09.008.

    Abstract

    This ERP study investigates whether a superfluous prosodic break (i.e., a prosodic break that does not coincide with a syntactic break) has more severe processing consequences during auditory sentence comprehension than a missing prosodic break (i.e., the absence of a prosodic break at the position of a syntactic break). Participants listened to temporarily ambiguous sentences involving a prosody-syntax match or mismatch. The disambiguation of these sentences was always lexical in nature in the present experiment. This contrasts with a related study by Pauker, Itzhak, Baum, and Steinhauer (2011), where the disambiguation was of a lexical type for missing PBs and of a prosodic type for superfluous PBs. Our results converge with those of Pauker et al.: superfluous prosodic breaks lead to more severe processing problems than missing prosodic breaks. Importantly, the present results extend those of Pauker et al. showing that this holds when the disambiguation is always lexical in nature. Furthermore, our results show that the way listeners use prosody can change over the course of the experiment which bears consequences for future studies.
  • Bögels, S., Schriefers, H., Vonk, W., Chwilla, D. J., & Kerkhofs, R. (2010). The interplay between prosody and syntax in sentence processing: The case of subject- and object-control verbs. Journal of Cognitive Neuroscience, 22(5), 1036-1053. doi:10.1162/jocn.2009.21269.

    Abstract

    This study addresses the question whether prosodic information can affect the choice for a syntactic analysis in auditory sentence processing. We manipulated the prosody (in the form of a prosodic break; PB) of locally ambiguous Dutch sentences to favor one of two interpretations. The experimental items contained two different types of so-called control verbs (subject and object control) in the matrix clause and were syntactically disambiguated by a transitive or by an intransitive verb. In Experiment 1, we established the default off-line preference of the items for a transitive or an intransitive disambiguating verb with a visual and an auditory fragment completion test. The results suggested that subject- and object-control verbs differently affect the syntactic structure that listeners expect. In Experiment 2, we investigated these two types of verbs separately in an on-line ERP study. Consistent with the literature, the PB elicited a closure positive shift. Furthermore, in subject-control items, an N400 effect for intransitive relative to transitive disambiguating verbs was found, both for sentences with and for sentences without a PB. This result suggests that the default preference for subject-control verbs goes in the same direction as the effect of the PB. In object-control items, an N400 effect for intransitive relative to transitive disambiguating verbs was found for sentences with a PB but no effect in the absence of a PB. This indicates that a PB can affect the syntactic analysis that listeners pursue.
  • Bone, D., Ramanarayanan, V., Narayanan, S., Hoedemaker, R. S., & Gordon, P. C. (2013). Analyzing eye-voice coordination in rapid automatized naming. In F. Bimbot, C. Cerisara, C. Fougeron, G. Gravier, L. Lamel, F. Pellegrino, & P. Perrier (Eds.), INTERSPEECH-2013: 14th Annual Conference of the International Speech Communication Association (pp. 2425-2429). ISCA Archive. Retrieved from http://www.isca-speech.org/archive/interspeech_2013/i13_2425.html.

    Abstract

    Rapid Automatized Naming (RAN) is a powerful tool for predicting future reading skill. A person’s ability to quickly name symbols as they scan a table is related to higher-level reading proficiency in adults and is predictive of future literacy gains in children. However, noticeable differences are present in the strategies or patterns within groups having similar task completion times. Thus, a further stratification of RAN dynamics may lead to better characterization and later intervention to support reading skill acquisition. In this work, we analyze the dynamics of the eyes, voice, and the coordination between the two during performance. It is shown that fast performers are more similar to each other than to slow performers in their patterns, but not vice versa. Further insights are provided about the patterns of more proficient subjects. For instance, fast performers tended to exhibit smoother behavior contours, suggesting a more stable perception-production process.
  • Bønnelykke, K., Matheson, M. C., Pers, T. H., Granell, R., Strachan, D. P., Alves, A. C., Linneberg, A., Curtin, J. A., Warrington, N. M., Standl, M., Kerkhof, M., Jonsdottir, I., Bukvic, B. K., Kaakinen, M., Sleimann, P., Thorleifsson, G., Thorsteinsdottir, U., Schramm, K., Baltic, S., Kreiner-Møller, E., Simpson, A., St Pourcain, B., Coin, L., Hui, J., Walters, E. H., Tiesler, C. M. T., Duffy, D. L., Jones, G., Ring, S. M., McArdle, W. L., Price, L., Robertson, C. F., Pekkanen, J., Tang, C. S., Thiering, E., Montgomery, G. W., Hartikainen, A.-L., Dharmage, S. C., Husemoen, L. L., Herder, C., Kemp, J. P., Elliot, P., James, A., Waldenberger, M., Abramson, M. J., Fairfax, B. P., Knight, J. C., Gupta, R., Thompson, P. J., Holt, P., Sly, P., Hirschhorn, J. N., Blekic, M., Weidinger, S., Hakonarsson, H., Stefansson, K., Heinrich, J., Postma, D. S., Custovic, A., Pennell, C. E., Jarvelin, M.-R., Koppelman, G. H., Timpson, N., Ferreira, M. A., Bisgaard, H., Henderson, A. J., Australian Asthma Genetics Consortium (AAGC), & EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium (2013). Meta-analysis of genome-wide association studies identifies ten loci influencing allergic sensitization. Nature Genetics, 45(8), 902-906. doi:10.1038/ng.2694.

    Abstract

    Allergen-specific immunoglobulin E (present in allergic sensitization) has a central role in the pathogenesis of allergic disease. We performed the first large-scale genome-wide association study (GWAS) of allergic sensitization in 5,789 affected individuals and 10,056 controls and followed up the top SNP at each of 26 loci in 6,114 affected individuals and 9,920 controls. We increased the number of susceptibility loci with genome-wide significant association with allergic sensitization from three to ten, including SNPs in or near TLR6, C11orf30, STAT6, SLC25A46, HLA-DQB1, IL1RL1, LPP, MYC, IL2 and HLA-B. All the top SNPs were associated with allergic symptoms in an independent study. Risk-associated variants at these ten loci were estimated to account for at least 25% of allergic sensitization and allergic rhinitis. Understanding the molecular mechanisms underlying these associations may provide new insights into the etiology of allergic disease.
  • Bonte, M. L., Mitterer, H., Zellagui, N., Poelmans, H., & Blomert, L. (2005). Auditory cortical tuning to statistical regularities in phonology. Clinical Neurophysiology, 116(12), 2765-2774. doi:10.1016/j.clinph.2005.08.012.

    Abstract

    Objective: Ample behavioral evidence suggests that distributional properties of the language environment influence the processing of speech. Yet, how these characteristics are reflected in neural processes remains largely unknown. The present ERP study investigates neurophysiological correlates of phonotactic probability: the distributional frequency of phoneme combinations. Methods: We employed an ERP measure indicative of experience-dependent auditory memory traces, the mismatch negativity (MMN). We presented pairs of non-words that differed by the degree of phonotactic probability in a codified passive oddball design that minimizes the contribution of acoustic processes. Results: In Experiment 1 the non-word with high phonotactic probability (notsel) elicited a significantly enhanced MMN as compared to the non-word with low phonotactic probability (notkel). In Experiment 2 this finding was replicated with a non-word pair with a smaller acoustic difference (notsel–notfel). An MMN enhancement was not observed in a third acoustic control experiment with stimuli having comparable phonotactic probability (so–fo). Conclusions: Our data suggest that auditory cortical responses to phoneme clusters are modulated by statistical regularities of phoneme combinations. Significance: This study indicates that the language environment is relevant in shaping the neural processing of speech. Furthermore, it provides a potentially useful design for investigating implicit phonological processing in children with anomalous language functions like dyslexia.
  • Borgwaldt, S. R., Hellwig, F. M., & De Groot, A. M. B. (2005). Onset entropy matters: Letter-to-phoneme mappings in seven languages. Reading and Writing, 18, 211-229. doi:10.1007/s11145-005-3001-9.
  • Bornkessel-Schlesewsky, I., Alday, P. M., Kretzschmar, F., Grewe, T., Gumpert, M., Schumacher, P. B., & Schlesewsky, M. (2015). Age-related changes in predictive capacity versus internal model adaptability: Electrophysiological evidence that individual differences outweigh effects of age. Frontiers in Aging Neuroscience, 7: 217. doi:10.3389/fnagi.2015.00217.

    Abstract

    Hierarchical predictive coding has been identified as a possible unifying principle of brain function, and recent work in cognitive neuroscience has examined how it may be affected by age-related changes. Using language comprehension as a test case, the present study aimed to dissociate age-related changes in prediction generation versus internal model adaptation following a prediction error. Event-related brain potentials (ERPs) were measured in a group of older adults (60–81 years; n = 40) as they read sentences of the form “The opposite of black is white/yellow/nice.” Replicating previous work in young adults, results showed a target-related P300 for the expected antonym (“white”; an effect assumed to reflect a prediction match), and a graded N400 effect for the two incongruous conditions (i.e. a larger N400 amplitude for the incongruous continuation not related to the expected antonym, “nice,” versus the incongruous associated condition, “yellow”). These effects were followed by a late positivity, again with a larger amplitude in the incongruous non-associated versus incongruous associated condition. Analyses using linear mixed-effects models showed that the target-related P300 effect and the N400 effect for the incongruous non-associated condition were both modulated by age, thus suggesting that age-related changes affect both prediction generation and model adaptation. However, effects of age were outweighed by the interindividual variability of ERP responses, as reflected in the high proportion of variance captured by the inclusion of by-condition random slopes for participants and items. We thus argue that – at both a neurophysiological and a functional level – the notion of general differences between language processing in young and older adults may only be of limited use, and that future research should seek to better understand the causes of interindividual variability in the ERP responses of older adults and its relation to cognitive performance.
  • Bosker, H. R., Tjiong, V., Quené, H., Sanders, T., & De Jong, N. H. (2015). Both native and non-native disfluencies trigger listeners' attention. In Disfluency in Spontaneous Speech: DISS 2015: An ICPhS Satellite Meeting. Edinburgh: DISS2015.

    Abstract

    Disfluencies, such as uh and uhm, are known to help the listener in speech comprehension. For instance, disfluencies may elicit prediction of less accessible referents and may trigger listeners’ attention to the following word. However, recent work suggests differential processing of disfluencies in native and non-native speech. The current study investigated whether the beneficial effects of disfluencies on listeners’ attention are modulated by the (non-)native identity of the speaker. Using the Change Detection Paradigm, we investigated listeners’ recall accuracy for words presented in disfluent and fluent contexts, in native and non-native speech. We observed beneficial effects of both native and non-native disfluencies on listeners’ recall accuracy, suggesting that native and non-native disfluencies trigger listeners’ attention in a similar fashion.
  • Bosker, H. R. (2013). Juncture (prosodic). In G. Khan (Ed.), Encyclopedia of Hebrew Language and Linguistics (pp. 432-434). Leiden: Brill.

    Abstract

    Prosodic juncture concerns the compartmentalization and partitioning of syntactic entities in spoken discourse by means of prosody. It has been argued that the Intonation Unit, defined by internal criteria and prosodic boundary phenomena (e.g., final lengthening, pitch reset, pauses), encapsulates the basic structural unit of spoken Modern Hebrew.
  • Bosker, H. R., & Reinisch, E. (2015). Normalization for speechrate in native and nonnative speech. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Speech perception involves a number of processes that deal with variation in the speech signal. One such process is normalization for speechrate: local temporal cues are perceived relative to the rate in the surrounding context. It is as yet unclear whether and how this perceptual effect interacts with higher level impressions of rate, such as a speaker’s nonnative identity. Nonnative speakers typically speak more slowly than natives, an experience that listeners take into account when explicitly judging the rate of nonnative speech. The present study investigated whether this is also reflected in implicit rate normalization. Results indicate that nonnative speech is implicitly perceived as faster than temporally-matched native speech, suggesting that the additional cognitive load of listening to an accent speeds up rate perception. Therefore, rate perception in speech is not dependent on syllable durations alone but also on the ease of processing of the temporal signal.
  • Bosker, H. R. (2013). Sibilant consonants. In G. Khan (Ed.), Encyclopedia of Hebrew Language and Linguistics (pp. 557-561). Leiden: Brill.

    Abstract

    Fricative consonants in Hebrew can be divided into bgdkpt and sibilants (ז, ס, צ, שׁ, שׂ). Hebrew sibilants have been argued to stem from Proto-Semitic affricates, laterals, interdentals and /s/. In standard Israeli Hebrew the sibilants are pronounced as [s] (ס and שׂ), [ʃ] (שׁ), [z] (ז), [ʦ] (צ).
  • Bosker, H. R., Pinget, A.-F., Quené, H., Sanders, T., & De Jong, N. H. (2013). What makes speech sound fluent? The contributions of pauses, speed and repairs. Language Testing, 30(2), 159-175. doi:10.1177/0265532212455394.

    Abstract

    The oral fluency level of an L2 speaker is often used as a measure in assessing language proficiency. The present study reports on four experiments investigating the contributions of three fluency aspects (pauses, speed and repairs) to perceived fluency. In Experiment 1 untrained raters evaluated the oral fluency of L2 Dutch speakers. Using specific acoustic measures of pause, speed and repair phenomena, linear regression analyses revealed that pause and speed measures best predicted the subjective fluency ratings, and that repair measures contributed only very little. A second research question sought to account for these results by investigating perceptual sensitivity to acoustic pause, speed and repair phenomena, possibly accounting for the results from Experiment 1. In Experiments 2–4 three new groups of untrained raters rated the same L2 speech materials from Experiment 1 on the use of pauses, speed and repairs. A comparison of the results from perceptual sensitivity (Experiments 2–4) with fluency perception (Experiment 1) showed that perceptual sensitivity alone could not account for the contributions of the three aspects to perceived fluency. We conclude that listeners weigh the importance of the perceived aspects of fluency to come to an overall judgment.
  • Bosker, H. R., Briaire, J., Heeren, W., van Heuven, V. J., & Jongman, S. R. (2010). Whispered speech as input for cochlear implants. In J. Van Kampen, & R. Nouwen (Eds.), Linguistics in the Netherlands 2010 (pp. 1-14).
  • De Bot, K., Broersma, M., & Isurin, L. (2009). Sources of triggering in code-switching. In L. Isurin, D. Winford, & K. De Bot (Eds.), Multidisciplinary approaches to code switching (pp. 103-128). Amsterdam: Benjamins.
  • Bottini, R., & Casasanto, D. (2010). Implicit spatial length modulates time estimates, but not vice versa. In C. Hölscher, T. F. Shipley, M. Olivetti Belardinelli, J. A. Bateman, & N. Newcombe (Eds.), Spatial Cognition VII. International Conference, Spatial Cognition 2010, Mt. Hood/Portland, OR, USA, August 15-19, 2010. Proceedings (pp. 152-162). Berlin Heidelberg: Springer.

    Abstract

    How are space and time represented in the human mind? Here we evaluate two theoretical proposals, one suggesting a symmetric relationship between space and time (ATOM theory) and the other an asymmetric relationship (metaphor theory). In Experiment 1, Dutch-speakers saw 7-letter nouns that named concrete objects of various spatial lengths (tr. pencil, bench, footpath) and estimated how much time they remained on the screen. In Experiment 2, participants saw nouns naming temporal events of various durations (tr. blink, party, season) and estimated the words’ spatial length. Nouns that named short objects were judged to remain on the screen for a shorter time, and nouns that named longer objects to remain for a longer time. By contrast, variations in the duration of the event nouns’ referents had no effect on judgments of the words’ spatial length. This asymmetric pattern of cross-dimensional interference supports metaphor theory and challenges ATOM.
  • Bottini, R., & Casasanto, D. (2010). Implicit spatial length modulates time estimates, but not vice versa. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 1348-1353). Austin, TX: Cognitive Science Society.

    Abstract

    Why do people accommodate to each other’s linguistic behavior? Studies of natural interactions (Giles, Taylor & Bourhis, 1973) suggest that speakers accommodate to achieve interactional goals, influencing what their interlocutor thinks or feels about them. But is this the only reason speakers accommodate? In real-world conversations, interactional motivations are ubiquitous, making it difficult to assess the extent to which they drive accommodation. Do speakers still accommodate even when interactional goals cannot be achieved, for instance, when their interlocutor cannot interpret their accommodation behavior? To find out, we asked participants to enter an immersive virtual reality (VR) environment and to converse with a virtual interlocutor. Participants accommodated to the speech rate of their virtual interlocutor even though he could not interpret their linguistic behavior, and thus accommodation could not possibly help them to achieve interactional goals. Results show that accommodation does not require explicit interactional goals, and suggest other social motivations for accommodation.
  • Boves, L., Carlson, R., Hinrichs, E., House, D., Krauwer, S., Lemnitzer, L., Vainio, M., & Wittenburg, P. (2009). Resources for speech research: Present and future infrastructure needs. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 1803-1806).

    Abstract

    This paper introduces the EU-FP7 project CLARIN, a joint effort of over 150 institutions in Europe, aimed at the creation of a sustainable language resources and technology infrastructure for the humanities and social sciences research community. The paper briefly introduces the vision behind the project and how it relates to speech research with a focus on the contributions that CLARIN can and will make to research in spoken language processing.
  • Bowerman, M. (2005). Why can't you "open" a nut or "break" a cooked noodle? Learning covert object categories in action word meanings. In L. Gershkoff-Stowe, & D. H. Rakison (Eds.), Building object categories in developmental time (pp. 209-243). Mahwah, NJ: Erlbaum.
  • Bowerman, M. (2005). Linguistics. In B. Hopkins (Ed.), The Cambridge encyclopedia of child development (pp. 497-501). Cambridge: Cambridge University Press.
  • Bowerman, M. (1989). Learning a semantic system: What role do cognitive predispositions play? In M. L. Rice, & R. L. Schiefelbusch (Eds.), The teachability of language (pp. 133-169). Baltimore: Paul H. Brookes.
  • Bowerman, M. (2009). Introduction (Part IV: Language and cognition: Universals and typological comparisons). In J. Guo, E. Lieven, N. Budwig, S. Ervin-Tripp, K. Nakamura, & S. Ozcaliskan (Eds.), Crosslinguistic approaches to the psychology of language: Research in the tradition of Dan Isaac Slobin (pp. 443-449).
  • Boyle, W., Lindell, A. K., & Kidd, E. (2013). Investigating the role of verbal working memory in young children's sentence comprehension. Language Learning, 63(2), 211-242. doi:10.1111/lang.12003.

    Abstract

    This study considers the role of verbal working memory in sentence comprehension in typically developing English-speaking children. Fifty-six (N = 56) children aged 4;0–6;6 completed a test of language comprehension that contained sentences which varied in complexity, standardized tests of vocabulary and nonverbal intelligence, and three tests of memory that measured the three verbal components of Baddeley's model of Working Memory (WM): the phonological loop, the episodic buffer, and the central executive. The results showed that children experienced most difficulty comprehending sentences that contained noncanonical word order (passives and object relative clauses). A series of linear mixed effects models were run to analyze the contribution of each component of WM to sentence comprehension. In contrast to most previous studies, the measure of the central executive did not predict comprehension accuracy. A canonicity by episodic buffer interaction showed that the episodic buffer measure was positively associated with better performance on the noncanonical sentences. The results are discussed with reference to capacity-limit and experience-dependent approaches to language comprehension.
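    As a purely illustrative sketch of the kind of analysis mentioned above (not the authors' model or data): a linear mixed-effects model with a canonicity-by-episodic-buffer interaction and random intercepts per child, using invented variable names and simulated values.

```python
# Hypothetical sketch: linear mixed-effects model with random intercepts per child,
# testing a canonicity-by-episodic-buffer interaction on simulated accuracy scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_children, n_items = 56, 20
data = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), n_items),
    "canonical": np.tile(np.repeat([1, 0], n_items // 2), n_children),
    "episodic_buffer": np.repeat(rng.normal(0, 1, n_children), n_items),
})
# Simulated pattern: noncanonical sentences are harder, and the episodic buffer
# measure helps mainly on noncanonical sentences.
data["accuracy"] = (0.70 + 0.15 * data["canonical"]
                    + 0.10 * data["episodic_buffer"] * (1 - data["canonical"])
                    + rng.normal(0, 0.10, len(data)))

model = smf.mixedlm("accuracy ~ canonical * episodic_buffer", data, groups=data["child"])
print(model.fit().summary())
```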
  • Bramão, I., Faísca, L., Forkstam, C., Reis, A., & Petersson, K. M. (2010). Cortical brain regions associated with color processing: An FMRI study. The Open Neuroimaging Journal, 4, 164-173. doi:10.2174/1874440001004010164.

    Abstract

    To clarify whether the neural pathways underlying color processing are the same for natural objects, artifact objects and non-sense objects, we examined functional magnetic resonance imaging (FMRI) responses during a covert naming task that included the factors color (color vs. black-and-white (B&W)) and stimulus type (natural vs. artifact vs. non-sense objects). Our results indicate that the superior parietal lobule and precuneus (BA 7) bilaterally, the right hippocampus and the right fusiform gyrus (V4) form part of a network responsible for color processing for both natural and artifact objects, but not for non-sense objects. The recognition of non-sense colored objects, compared to the recognition of colored objects, activated the posterior cingulate/precuneus (BA 7/23/31), suggesting that the color attribute induces the mental operation of trying to associate a non-sense composition with a familiar object. When colored objects (both natural and artifact) were contrasted with colored non-objects, we observed activations in the right parahippocampal gyrus (BA 35/36), the superior parietal lobule (BA 7) bilaterally, the left inferior middle temporal region (BA 20/21) and the inferior and superior frontal regions (BA 10/11/47). These additional activations suggest that colored objects recruit brain regions related to visual semantic information/retrieval as well as regions related to visuo-spatial processing. Overall, the results suggest that color information is an attribute that improves object recognition (based on behavioral results) and activates a specific neural network, related to visual semantic information, that is more extensive than for B&W objects during object recognition.
  • Bramão, I., Faísca, L., Forkstam, C., Inácio, K., Petersson, K. M., & Reis, A. (2009). Interaction between perceptual color and color knowledge information in object recognition: Behavioral and electrophysiological evidence. In Abstracts presented at the International Neuropsychological Society, Finnish Neuropsychological Society, Joint Mid-Year Meeting July 29-August 1, 2009. Helsinki, Finland & Tallinn, Estonia (pp. 39). Retrieved from http://www.neuropsykologia.fi/ins2009/INS_MY09_Abstract.pdf.
  • Bramão, I., Faísca, L., Petersson, K. M., & Reis, A. (2010). The influence of surface color information and color knowledge information in object recognition. American Journal of Psychology, 123, 437-466. Retrieved from http://www.jstor.org/stable/10.5406/amerjpsyc.123.4.0437.

    Abstract

    In order to clarify whether the influence of color knowledge information in object recognition depends on the presence of the appropriate surface color, we designed a name-object verification task. The relationship between color and shape information provided by the name and by the object photo was manipulated in order to assess color interference independently of shape interference. We tested three different versions for each object: typically colored, black and white, and nontypically colored. The response times on the nonmatching trials were used to measure the interference between the name and the photo. We predicted that the more similar the name and the photo are, the longer it would take to respond. Overall, the color similarity effect disappeared in the black-and-white and nontypical color conditions, suggesting that the influence of color knowledge on object recognition depends on the presence of the appropriate surface color information.
  • Brand, S., & Ernestus, M. (2015). Reduction of obstruent-liquid-schwa clusters in casual French. In Scottish consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This study investigated pronunciation variants of word-final obstruent-liquid-schwa (OLS) clusters in casual French and the variables predicting the absence of the phonemes in these clusters. In a dataset of 291 noun tokens extracted from a corpus of casual conversations, we observed that in 80.7% of the tokens, at least one phoneme was absent and that in no less than 15.5% the whole cluster was absent (e.g., /mis/ for ministre). Importantly, the probability of a phoneme being absent was higher if the following phoneme was absent as well. These data show that reduction can affect several phonemes at once and is not restricted to just a handful of (function) words. Moreover, our results demonstrate that the absence of each single phoneme is affected by the speaker's tendency to increase ease of articulation and to adapt a word's pronunciation variant to the time available.
  • Brandler, W. M., Morris, A. P., Evans, D. M., Scerri, T. S., Kemp, J. P., Timpson, N. J., St Pourcain, B., Davey Smith, G., Ring, S. M., Stein, J., Monaco, A. P., Talcott, J. B., Fisher, S. E., Webber, C., & Paracchini, S. (2013). Common variants in left/right asymmetry genes and pathways are associated with relative hand skill. PLoS Genetics, 9(9): e1003751. doi:10.1371/journal.pgen.1003751.

    Abstract

    Humans display structural and functional asymmetries in brain organization, strikingly with respect to language and handedness. The molecular basis of these asymmetries is unknown. We report a genome-wide association study meta-analysis for a quantitative measure of relative hand skill in individuals with dyslexia [reading disability (RD)] (n = 728). The most strongly associated variant, rs7182874 (P = 8.68×10−9), is located in PCSK6, further supporting an association we previously reported. We also confirmed the specificity of this association in individuals with RD; the same locus was not associated with relative hand skill in a general population cohort (n = 2,666). As PCSK6 is known to regulate NODAL in the development of left/right (LR) asymmetry in mice, we developed a novel approach to GWAS pathway analysis, using gene-set enrichment to test for an over-representation of highly associated variants within the orthologs of genes whose disruption in mice yields LR asymmetry phenotypes. Four out of 15 LR asymmetry phenotypes showed an over-representation (FDR≤5%). We replicated three of these phenotypes; situs inversus, heterotaxia, and double outlet right ventricle, in the general population cohort (FDR≤5%). Our findings lead us to propose that handedness is a polygenic trait controlled in part by the molecular mechanisms that establish LR body asymmetry early in development.
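    The over-representation logic can be loosely illustrated with a simplified stand-in (not the authors' ranked gene-set enrichment method): a hypergeometric test asking whether a candidate gene set contains more top-associated genes than expected by chance. All counts below are invented.

```python
# Hypothetical sketch: hypergeometric over-representation test for a gene set.
from scipy.stats import hypergeom

total_genes = 18000   # invented genome-wide background size
top_genes = 500       # invented number of genes tagged by highly associated variants
set_size = 60         # invented size of an LR-asymmetry gene set
overlap = 8           # invented number of genes in both

# P(X >= overlap) when drawing set_size genes without replacement from the background
p_value = hypergeom.sf(overlap - 1, total_genes, top_genes, set_size)
print(f"over-representation p = {p_value:.3g}")
```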
  • Brandmeyer, A., Sadakata, M., Spyrou, L., McQueen, J. M., & Desain, P. (2013). Decoding of single-trial auditory mismatch responses for online perceptual monitoring and neurofeedback. Frontiers in Neuroscience, 7: 265. doi:10.3389/fnins.2013.00265.

    Abstract

    Multivariate pattern classification methods are increasingly applied to neuroimaging data in the context of both fundamental research and brain-computer interfacing approaches. Such methods provide a framework for interpreting measurements made at the single-trial level with respect to a set of two or more distinct mental states. Here, we define an approach in which the output of a binary classifier trained on data from an auditory mismatch paradigm can be used for online tracking of perception and as a neurofeedback signal. The auditory mismatch paradigm is known to induce distinct perceptual states related to the presentation of high- and low-probability stimuli, which are reflected in event-related potential (ERP) components such as the mismatch negativity (MMN). The first part of this paper illustrates how pattern classification methods can be applied to data collected in an MMN paradigm, including discussion of the optimization of preprocessing steps, the interpretation of features and how the performance of these methods generalizes across individual participants and measurement sessions. We then go on to show that the output of these decoding methods can be used in online settings as a continuous index of single-trial brain activation underlying perceptual discrimination. We conclude by discussing several potential domains of application, including neurofeedback, cognitive monitoring and passive brain-computer interfaces.

  • Brandmeyer, A., Farquhar, J., McQueen, J. M., & Desain, P. (2013). Decoding speech perception by native and non-native speakers using single-trial electrophysiological data. PLoS One, 8: e68261. doi:10.1371/journal.pone.0068261.

    Abstract

    Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: 1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? 2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition.
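    To make the decoding approach concrete, here is a minimal hypothetical sketch (not the authors' pipeline or data): a regularized linear classifier is cross-validated on simulated single-trial EEG epochs labelled with two phoneme categories.

```python
# Hypothetical sketch: cross-validated single-trial decoding of two stimulus
# categories from simulated EEG epochs (trials x channels x time samples).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_trials, n_channels, n_samples = 200, 32, 50   # invented epoch dimensions
X = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)                # phoneme-category labels (0/1)
X[y == 1, :, 20:30] += 0.3                      # toy "evoked" difference between categories

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X.reshape(n_trials, -1), y, cv=5)
print("mean decoding accuracy:", scores.mean().round(2))
```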
  • Brandt, S., Kidd, E., Lieven, E., & Tomasello, M. (2009). The discourse bases of relativization: An investigation of young German and English-speaking children's comprehension of relative clauses. Cognitive Linguistics, 20(3), 539-570. doi:10.1515/COGL.2009.024.

    Abstract

    In numerous comprehension studies, across different languages, children have performed worse on object relatives (e.g., the dog that the cat chased) than on subject relatives (e.g., the dog that chased the cat). One possible reason for this is that the test sentences did not exactly match the kinds of object relatives that children typically experience. Adults and children usually hear and produce object relatives with inanimate heads and pronominal subjects (e.g., the car that we bought last year) (cf. Kidd et al., Language and Cognitive Processes 22: 860–897, 2007). We tested young 3-year-old German- and English-speaking children with a referential selection task. Children from both language groups performed best in the condition where the experimenter described inanimate referents with object relatives that contained pronominal subjects (e.g., Can you give me the sweater that he bought?). Importantly, when the object relatives met the constraints identified in spoken discourse, children understood them as well as subject relatives, or even better. These results speak against a purely structural explanation for children's difficulty with object relatives as observed in previous studies, but rather support the usage-based account, according to which discourse function and experience with language shape the representation of linguistic structures.
  • Brascamp, J., Klink, P., & Levelt, W. J. M. (2015). The ‘laws’ of binocular rivalry: 50 years of Levelt’s propositions. Vision Research, 109, 20-37. doi:10.1016/j.visres.2015.02.019.

    Abstract

    It has been fifty years since Levelt’s monograph On Binocular Rivalry (1965) was published, but its four propositions that describe the relation between stimulus strength and the phenomenology of binocular rivalry remain a benchmark for theorists and experimentalists even today. In this review, we will revisit the original conception of the four propositions and the scientific landscape in which this happened. We will also provide a brief update concerning distributions of dominance durations, another aspect of Levelt’s monograph that has maintained a prominent presence in the field. In a critical evaluation of Levelt’s propositions against current knowledge of binocular rivalry we will then demonstrate that the original propositions are not completely compatible with what is known today, but that they can, in a straightforward way, be modified to encapsulate the progress that has been made over the past fifty years. The resulting modified propositions are shown to apply to a broad range of bistable perceptual phenomena, not just binocular rivalry, and they allow important inferences about the underlying neural systems. We argue that these inferences reflect canonical neural properties that play a role in visual perception in general, and we discuss ways in which future research can build on the work reviewed here to attain a better understanding of these properties.
  • Braun, B. (2006). Phonetics and phonology of thematic contrast in German. Language and Speech, 49(4), 451-493.

    Abstract

    It is acknowledged that contrast plays an important role in understanding discourse and information structure. While it is commonly assumed that contrast can be marked by intonation only, our understanding of the intonational realization of contrast is limited. For German there is mainly introspective evidence that the rising theme accent (or topic accent) is realized differently when signaling contrast than when not. In this article, the acoustic basis for the reported impressionistic differences is investigated in terms of the scaling (height) and alignment (positioning) of tonal targets.

    Subjects read target sentences in a contrastive and a noncontrastive context (Experiment 1). Prosodic annotation revealed that thematic accents were not realized with different accent types in the two contexts but acoustic comparison showed that themes in contrastive context exhibited a higher and later peak. The alignment and scaling of accents can hence be controlled in a linguistically meaningful way, which has implications for intonational phonology. In Experiment 2, nonlinguists' perception of a subset of the production data was assessed. They had to choose whether, in a contrastive context, the presumed contrastive or noncontrastive realization of a sentence was more appropriate. For some sentence pairs only, subjects had a clear preference. For Experiment 3, a group of linguists annotated the thematic accents of the contrastive and noncontrastive versions of the same data as used in Experiment 2. There was considerable disagreement in labels, but different accent types were consistently used when the two versions differed strongly in F0 excursion. Although themes in contrastive contexts were clearly produced differently than themes in noncontrastive contexts, this difference is not easily perceived or annotated.
  • Braun, B. (2005). Production and perception of thematic contrast in German. Oxford: Lang.
  • Braun, B., Kochanski, G., Grabe, E., & Rosner, B. S. (2006). Evidence for attractors in English intonation. Journal of the Acoustical Society of America, 119(6), 4006-4015. doi:10.1121/1.2195267.

    Abstract

    Although the pitch of the human voice is continuously variable, some linguists contend that intonation in speech is restricted to a small, limited set of patterns. This claim is tested by asking subjects to mimic a block of 100 randomly generated intonation contours and then to imitate themselves in several successive sessions. The produced f0 contours gradually converge towards a limited set of distinct, previously recognized basic English intonation patterns. These patterns are "attractors" in the space of possible English intonation contours. The convergence does not occur immediately. Seven of the ten participants show continued convergence toward their attractors after the first iteration. Subjects retain and use information beyond phonological contrasts, suggesting that intonational phonology is not a complete description of their mental representation of intonation.
  • Braun, B., Weber, A., & Crocker, M. (2005). Does narrow focus activate alternative referents? In Proceedings of the 9th European Conference on Speech Communication and Technology (pp. 1709-1712).

    Abstract

    Narrow focus refers to accent placement that forces one interpretation of a sentence, which is then often perceived contrastively. Narrow focus is formalised in terms of alternative sets, i.e. contextually or situationally salient alternatives. In this paper, we investigate whether this model is also valid in human utterance processing. We present an eye-tracking experiment to study listeners’ expectations (i.e. eye-movements) with respect to upcoming referents. Some of the objects contrast in colour with objects that were previously referred to, others do not; the objects are referred to with either a narrow focus on the colour adjective or with broad focus on the noun. Results show that narrow focus on the adjective increases early fixations to contrastive referents. Narrow focus hence activates alternative referents in human utterance processing.
  • Braun, B., & Chen, A. (2010). Intonation of 'now' in resolving scope ambiguity in English and Dutch. Journal of Phonetics, 38, 431-444. doi:10.1016/j.wocn.2010.04.002.

    Abstract

    The adverb now in English (nu in Dutch) can draw listeners’ attention to an upcoming contrast (e.g., ‘Put X in Y. Now put X in Z’). In Dutch, but not English, the position of this sequential adverb may disambiguate which constituent is contrasted. We investigated whether and how the intonational realization of now/nu is varied to signal different scopes and whether it interacts with word order. Three contrast conditions (contrast in object, location, or both) were produced by eight Dutch and eight English speakers. Results showed no consistent use of word order for scope disambiguation in Dutch. Importantly, independent of language, an unaccented now/nu signaled a contrasting object while an accented now/nu signaled a contrast in the location. Since these intonational patterns were independent of word order, we interpreted the results in the framework of grammatical saliency: now/nu appears to be unmarked when the contrast lies in a salient constituent (the object) but marked with a prominent rise when a less salient constituent is contrasted (the location).

  • Braun, B., & Tagliapietra, L. (2010). The role of contrastive intonation contours in the retrieval of contextual alternatives. Language and Cognitive Processes, 25, 1024-1043. doi:10.1080/01690960903036836.

    Abstract

    Sentences with a contrastive intonation contour are usually produced when the speaker entertains alternatives to the accented words. However, such contrastive sentences are frequently produced without making the alternatives explicit for the listener. In two cross-modal associative priming experiments we tested in Dutch whether such contextual alternatives become available to listeners upon hearing a sentence with a contrastive intonation contour compared with a sentence with a non-contrastive one. The first experiment tested the recognition of contrastive associates (contextual alternatives to the sentence-final primes), the second one the recognition of non-contrastive associates (generic associates which are not alternatives). Results showed that contrastive associates were facilitated when the primes occurred in sentences with a contrastive intonation contour but not in sentences with a non-contrastive intonation. Non-contrastive associates were weakly facilitated independent of intonation. Possibly, contrastive contours trigger an accommodation mechanism by which listeners retrieve the contrast available for the speaker.
  • Braun, B., & Tagliapietra, L. (2010). The role of contrastive intonation contours in the retrieval of contextual alternatives. In D. G. Watson, M. Wagner, & E. Gibson (Eds.), Experimental and theoretical advances in prosody (pp. 1024-1043). Hove: Psychology Press.

    Abstract

    Sentences with a contrastive intonation contour are usually produced when the speaker entertains alternatives to the accented words. However, such contrastive sentences are frequently produced without making the alternatives explicit for the listener. In two cross-modal associative priming experiments we tested in Dutch whether such contextual alternatives become available to listeners upon hearing a sentence with a contrastive intonation contour compared with a sentence with a non-contrastive one. The first experiment tested the recognition of contrastive associates (contextual alternatives to the sentence-final primes), the second one the recognition of non-contrastive associates (generic associates which are not alternatives). Results showed that contrastive associates were facilitated when the primes occurred in sentences with a contrastive intonation contour but not in sentences with a non-contrastive intonation. Non-contrastive associates were weakly facilitated independent of intonation. Possibly, contrastive contours trigger an accommodation mechanism by which listeners retrieve the contrast available for the speaker.
  • Brehm, L., & Bock, K. (2013). What counts in grammatical number agreement? Cognition, 128(2), 149-169. doi:10.1016/j.cognition.2013.03.009.

    Abstract

    Both notional and grammatical number affect agreement during language production. To explore their workings, we investigated how semantic integration, a type of conceptual relatedness, produces variations in agreement (Solomon & Pearlmutter, 2004). These agreement variations are open to competing notional and lexical–grammatical number accounts. The notional hypothesis is that changes in number agreement reflect differences in referential coherence: More coherence yields more singularity. The lexical–grammatical hypothesis is that changes in agreement arise from competition between nouns differing in grammatical number: More competition yields more plurality. These hypotheses make opposing predictions about semantic integration. On the notional hypothesis, semantic integration promotes singular agreement. On the lexical–grammatical hypothesis, semantic integration promotes plural agreement. We tested these hypotheses with agreement elicitation tasks in two experiments. Both experiments supported the notional hypothesis, with semantic integration creating faster and more frequent singular agreement. This implies that referential coherence mediates the effect of semantic integration on number agreement.
  • Broeder, D., Offenga, F., Wittenburg, P., Van de Kamp, P., Nathan, D., & Strömqvist, S. (2006). Technologies for a federation of language resource archive. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 2291-2294).
  • Broeder, D., & Wittenburg, P. (2006). The IMDI metadata framework, its current application and future direction. International Journal of Metadata, Semantics and Ontologies, 1(2), 119-132. doi:10.1504/IJMSO.2006.011008.

    Abstract

    The IMDI Framework offers next to a suitable set of metadata descriptors for language resources, a set of tools and an infrastructure to use these. This paper gives an overview of all these aspects and at the end describes the intentions and hopes for ensuring the interoperability of the IMDI framework within more general ones in development. An evaluation of the current state of the IMDI Framework is presented with an analysis of the benefits and more problematic issues. Finally we describe work on issues of long-term stability for IMDI by linking up to the work done within the ISO TC37/SC4 subcommittee (TC37/SC4).
  • Broeder, D., Auer, E., & Wittenburg, P. (2006). Unique resource identifiers. Language Archive Newsletter, no. 8, 8-9.
  • Broeder, D., Kemps-Snijders, M., Van Uytvanck, D., Windhouwer, M., Withers, P., Wittenburg, P., & Zinn, C. (2010). A data category registry- and component-based metadata framework. In N. Calzolari, B. Maegaard, J. Mariani, J. Odjik, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 43-47). European Language Resources Association (ELRA).

    Abstract

    We describe our computer-supported framework to overcome the rule of metadata schism. It combines the use of controlled vocabularies, managed by a data category registry, with a component-based approach, where the categories can be combined to yield complex metadata structures. A metadata scheme devised in this way will thus be grounded in its use of categories. Schema designers will profit from existing prefabricated larger building blocks, motivating re-use at a larger scale. The common base of any two metadata schemes within this framework will solve, at least to a good extent, the semantic interoperability problem, and consequently, further promote systematic use of metadata for existing resources and tools to be shared.
  • Broeder, D., Brugman, H., & Senft, G. (2005). Documentation of languages and archiving of language data at the Max Planck Institute for Psycholinguistics in Nijmegen. Linguistische Berichte, no. 201, 89-103.
  • Broeder, D., Van Veenendaal, R., Nathan, D., & Strömqvist, S. (2006). A grid of language resource repositories. In Proceedings of the 2nd IEEE International Conference on e-Science and Grid Computing.
  • Broeder, D., Claus, A., Offenga, F., Skiba, R., Trilsbeek, P., & Wittenburg, P. (2006). LAMUS: The Language Archive Management and Upload System. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 2291-2294).
  • Broersma, M. (2005). Phonetic and lexical processing in a second language. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.58294.
  • Broersma, M. (2005). Perception of familiar contrasts in unfamiliar positions. Journal of the Acoustical Society of America, 117(6), 3890-3901. doi:10.1121/1.1906060.
  • Broersma, M. (2006). Nonnative listeners rely less on phonetic information for phonetic categorization than native listeners. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 109-110).
  • Broersma, M., & De Bot, K. (2006). Triggered codeswitching: A corpus-based evaluation of the original triggering hypothesis and a new alternative. Bilingualism: Language and Cognition, 9(1), 1-13. doi:10.1017/S1366728905002348.

    Abstract

    In this article the triggering hypothesis for codeswitching proposed by Michael Clyne is discussed and tested. According to this hypothesis, cognates can facilitate codeswitching of directly preceding or following words. It is argued that the triggering hypothesis in its original form is incompatible with language production models, as it assumes that language choice takes place at the surface structure of utterances, while in bilingual production models language choice takes place along with lemma selection. An adjusted version of the triggering hypothesis is proposed in which triggering takes place during lemma selection and the scope of triggering is extended to basic units in language production. Data from a Dutch–Moroccan Arabic corpus are used for a statistical test of the original and the adjusted triggering theory. The codeswitching patterns found in the data support part of the original triggering hypothesis, but they are best explained by the adjusted triggering theory.
  • Broersma, M., Aoyagi, M., & Weber, A. (2010). Cross-linguistic production and perception of Japanese- and Dutch-accented English. Journal of the Phonetic Society of Japan, 14(1), 60-75.
  • Broersma, M. (2010). Dutch listener's perception of Korean fortis, lenis, and aspirated stops: First exposure. In K. Dziubalska-Kołaczyk, M. Wrembel, & M. Kul (Eds.), Proceedings of the 6th International Symposium on the Acquisition of Second Language Speech, New Sounds 2010, Poznań, Poland, 1-3 May 2010 (pp. 49-54).
  • Broersma, M. (2006). Accident - execute: Increased activation in nonnative listening. In Proceedings of Interspeech 2006 (pp. 1519-1522).

    Abstract

    Dutch and English listeners’ perception of English words with partially overlapping onsets (e.g., accident- execute) was investigated. Partially overlapping words remained active longer for nonnative listeners, causing an increase of lexical competition in nonnative compared with native listening.
  • Broersma, M. (2010). Korean lenis, fortis, and aspirated stops: Effect of place of articulation on acoustic realization. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan. (pp. 941-944).

    Abstract

    Unlike most of the world's languages, Korean distinguishes three types of voiceless stops, namely lenis, fortis, and aspirated stops. All occur at three places of articulation. In previous work, acoustic measurements are mostly collapsed over the three places of articulation. This study therefore provides acoustic measurements of Korean lenis, fortis, and aspirated stops at all three places of articulation separately. Clear differences are found among the acoustic characteristics of the stops at the different places of articulation.
  • Broersma, M., & Scharenborg, O. (2010). Native and non-native listeners’ perception of English consonants in different types of noise. Speech Communication, 52, 980-995. doi:10.1016/j.specom.2010.08.010.

    Abstract

    This paper shows that the effect of different types of noise on recognition of different phonemes by native versus non-native listeners is highly variable, even within classes of phonemes with the same manner or place of articulation. In a phoneme identification experiment, English and Dutch listeners heard all 24 English consonants in VCV stimuli in quiet and in three types of noise: competing talker, speech-shaped noise, and modulated speech-shaped noise (all with SNRs of −6 dB). Differential effects of noise type for English and Dutch listeners were found for eight consonants (/p t k g m n ŋ r/) but not for the other 16 consonants. For those eight consonants, effects were again highly variable: each noise type hindered non-native listeners more than native listeners for some of the target sounds, but none of the noise types did so for all of the target sounds, not even for phonemes with the same manner or place of articulation. The results imply that the noise types employed will strongly affect the outcomes of any study of native and non-native speech perception in noise.
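    The -6 dB SNR manipulation can be illustrated with a small sketch (assumed, not the authors' stimulus-preparation code): the noise is rescaled so that the speech-to-noise RMS ratio corresponds to the target SNR before the two signals are added.

```python
# Hypothetical sketch: mixing a speech signal with noise at a target SNR (in dB)
# by rescaling the noise relative to the RMS level of the speech.
import numpy as np

def mix_at_snr(speech, noise, snr_db=-6.0):
    """Return speech + noise, with the noise rescaled so the mix has the given SNR."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    noise = noise[: len(speech)]
    target_noise_rms = rms(speech) / (10.0 ** (snr_db / 20.0))
    return speech + noise * (target_noise_rms / rms(noise))

# Toy signals standing in for a VCV token and speech-shaped noise
rng = np.random.default_rng(3)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)
noise = rng.normal(0.0, 1.0, 16000)
mixed = mix_at_snr(speech, noise, snr_db=-6.0)
```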
  • Broersma, M. (2010). Perception of final fricative voicing: Native and nonnative listeners’ use of vowel duration. Journal of the Acoustical Society of America, 127, 1636-1644. doi:10.1121/1.3292996.
  • Broersma, M., Isurin, L., Bultena, S., & De Bot, K. (2009). Triggered code-switching: Evidence from Dutch-English and Russian-English bilinguals. In L. Isurin, D. Winford, & K. De Bot (Eds.), Multidisciplinary approaches to code switching (pp. 85-102). Amsterdam: Benjamins.
  • Broersma, M. (2009). Triggered codeswitching between cognate languages. Bilingualism: Language and Cognition, 12(4), 447-462. doi:10.1017/S1366728909990204.
  • Brookshire, G., Casasanto, D., & Ivry, R. (2010). Modulation of motor-meaning congruity effects for valenced words. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Meeting of the Cognitive Science Society (CogSci 2010) (pp. 1940-1945). Austin, TX: Cognitive Science Society.

    Abstract

    We investigated the extent to which emotionally valenced words automatically cue spatio-motor representations. Participants made speeded button presses, moving their hand upward or downward while viewing words with positive or negative valence. Only the color of the words was relevant to the response; on target trials, there was no requirement to read the words or process their meaning. In Experiment 1, upward responses were faster for positive words, and downward for negative words. This effect was extinguished, however, when words were repeated. In Experiment 2, participants performed the same primary task with the addition of distractor trials. Distractors either oriented attention toward the words’ meaning or toward their color. Congruity effects were increased with orientation to meaning, but eliminated with orientation to color. When people read words with emotional valence, vertical spatio-motor representations are activated highly automatically, but this automaticity is modulated by repetition and by attentional orientation to the words’ form or meaning.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2010). Shadowing reduced speech and alignment. Journal of the Acoustical Society of America, 128(1), EL32-EL37. doi:10.1121/1.3448022.

    Abstract

    This study examined whether listeners align to reduced speech. Participants were asked to shadow sentences from a casual speech corpus containing canonical and reduced targets. Participants' productions showed alignment: durations of canonical targets were longer than durations of reduced targets; and participants often imitated the segment types (canonical versus reduced) in both targets. The effect sizes were similar to previous work on alignment. In addition, shadowed productions were overall longer in duration than the original stimuli and this effect was larger for reduced than canonical targets. A possible explanation for this finding is that listeners reconstruct canonical forms from reduced forms.
  • Brouwer, S. (2010). Processing strongly reduced forms in casual speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Brouwer, S. (2013). Continuous recognition memory for spoken words in noise. Proceedings of Meetings on Acoustics, 19: 060117. doi:10.1121/1.4798781.

    Abstract

    Previous research has shown that talker variability affects recognition memory for spoken words (Palmeri et al., 1993). This study examines whether additive noise is similarly retained in memory for spoken words. In a continuous recognition memory task, participants listened to a list of spoken words mixed with noise consisting of a pure tone or of high-pass filtered white noise. The noise and speech were in non-overlapping frequency bands. In Experiment 1, listeners indicated whether each spoken word in the list was OLD (heard before in the list) or NEW. Results showed that listeners were as accurate and as fast at recognizing a word as old if it was repeated with the same or different noise. In Experiment 2, listeners also indicated whether words judged as OLD were repeated with the same or with a different type of noise. Results showed that listeners benefitted from hearing words presented with the same versus different noise. These data suggest that spoken words and temporally-overlapping but spectrally non-overlapping noise are retained or reconstructed together for explicit, but not for implicit recognition memory. This indicates that the extent to which noise variability is retained seems to depend on the depth of processing.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2013). Discourse context and the recognition of reduced and canonical spoken words. Applied Psycholinguistics, 34, 519-539. doi:10.1017/S0142716411000853.

    Abstract

    In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" competitor, a "canonical form" competitor and an unrelated distractor. Target sentences were presented in isolation or with a wider discourse context. Experiment 1 revealed that target recognition was facilitated by wider discourse information. Importantly, the recognition of reduced forms improved significantly when preceded by strongly rather than by weakly supportive discourse contexts. This was not the case for canonical forms: listeners' target word recognition was not dependent on the degree of supportive context. Experiment 2 showed that the differential context effects in Experiment 1 were not due to an additional amount of speaker information. Thus, these data suggest that in natural settings a strongly supportive discourse context is more important for the recognition of reduced forms than the recognition of canonical forms.
  • Brouwer, H., Fitz, H., & Hoeks, J. C. (2010). Modeling the noun phrase versus sentence coordination ambiguity in Dutch: Evidence from Surprisal Theory. In Proceedings of the 2010 Workshop on Cognitive Modeling and Computational Linguistics, ACL 2010 (pp. 72-80). Association for Computational Linguistics.

    Abstract

    This paper investigates whether surprisal theory can account for differential processing difficulty in the NP-/S-coordination ambiguity in Dutch. Surprisal is estimated using a Probabilistic Context-Free Grammar (PCFG), which is induced from an automatically annotated corpus. We find that our lexicalized surprisal model can account for the reading time data from a classic experiment on this ambiguity by Frazier (1987). We argue that syntactic and lexical probabilities, as specified in a PCFG, are sufficient to account for what is commonly referred to as an NP-coordination preference.
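    The surprisal measure referred to above can be made concrete with a toy sketch (not the paper's induced PCFG): word-by-word surprisal, -log2 P(word | preceding words), is computed under a tiny hand-written PCFG by enumerating all sentences the grammar generates and marginalizing over continuations.

```python
# Hypothetical sketch: word-by-word surprisal under a tiny, hand-written PCFG.
import math
from collections import defaultdict

# Toy grammar: each nonterminal maps to (right-hand side, probability) pairs.
GRAMMAR = {
    "S":  [(["NP", "VP"], 1.0)],
    "NP": [(["dogs"], 0.6), (["cats"], 0.4)],
    "VP": [(["V", "NP"], 0.7), (["V"], 0.3)],
    "V":  [(["see"], 0.5), (["sleep"], 0.5)],
}

def expand(symbols, prob, sentences):
    """Expand the leftmost nonterminal; collect probabilities of complete sentences."""
    for i, sym in enumerate(symbols):
        if sym in GRAMMAR:
            for rhs, p in GRAMMAR[sym]:
                expand(symbols[:i] + rhs + symbols[i + 1:], prob * p, sentences)
            return
    sentences[tuple(symbols)] += prob

def surprisal(sentence):
    sentences = defaultdict(float)
    expand(["S"], 1.0, sentences)
    prefix_prob = lambda prefix: sum(
        p for s, p in sentences.items() if list(s[:len(prefix)]) == prefix)
    return [(w, -math.log2(prefix_prob(sentence[:i + 1]) / prefix_prob(sentence[:i])))
            for i, w in enumerate(sentence)]

print(surprisal(["dogs", "see", "cats"]))
```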
  • Brouwer, G. J., Tong, F., Hagoort, P., & Van Ee, R. (2009). Perceptual incongruence influences bistability and cortical activation. PLoS One, 4(3): e5056. doi:10.1371/journal.pone.0005056.

    Abstract

    We employed a parametric psychophysical design in combination with functional imaging to examine the influence of metric changes in perceptual incongruence on perceptual alternation rates and cortical responses. Subjects viewed a bistable stimulus defined by incongruent depth cues; bistability resulted from incongruence between binocular disparity and monocular perspective cues that specify different slants (slant rivalry). Psychophysical results revealed that perceptual alternation rates were positively correlated with the degree of perceived incongruence. Functional imaging revealed systematic increases in activity that paralleled the psychophysical results within anterior intraparietal sulcus, prior to the onset of perceptual alternations. We suggest that this cortical activity predicts the frequency of subsequent alternations, implying a putative causal role for these areas in initiating bistable perception. In contrast, areas implicated in form and depth processing (LOC and V3A) were sensitive to the degree of slant, but failed to show increases in activity when these cues were in conflict.
  • Brouwer, S., & Bradlow, A. R. (2015). The effect of target-background synchronicity on speech-in-speech recognition. In Scottish consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    The aim of the present study was to investigate whether speech-in-speech recognition is affected by variation in the target-background timing relationship. Specifically, we examined whether within trial synchronous or asynchronous onset and offset of the target and background speech influenced speech-in-speech recognition. Native English listeners were presented with English target sentences in the presence of English or Dutch background speech. Importantly, only the short-term temporal context –in terms of onset and offset synchrony or asynchrony of the target and background speech– varied across conditions. Participants’ task was to repeat back the English target sentences. The results showed an effect of synchronicity for English-in-English but not for English-in-Dutch recognition, indicating that familiarity with the English background lead in the asynchronous English-in-English condition might have attracted attention towards the English background. Overall, this study demonstrated that speech-in-speech recognition is sensitive to the target-background timing relationship, revealing an important role for variation in the local context of the target-background relationship as it extends beyond the limits of the time-frame of the to-be-recognized target sentence.
  • Brouwer, S., & Bradlow, A. R. (2015). The temporal dynamics of spoken word recognition in adverse listening conditions. Journal of Psycholinguistic Research. Advance online publication. doi:10.1007/s10936-015-9396-9.

    Abstract

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. candle), an onset competitor (e.g. candy), a rhyme competitor (e.g. sandal), and an unrelated distractor (e.g. lemon). Target words were presented in quiet, mixed with broadband noise, or mixed with background speech. Results showed that lexical competition changes throughout the observation window as a function of what is presented in the background. These findings suggest that, rather than being strictly sequential, stream segregation and lexical competition interact during spoken word recognition.
  • Brown, P. (2005). What does it mean to learn the meaning of words? [Review of the book How children learn the meanings of words by Paul Bloom]. Journal of the Learning Sciences, 14(2), 293-300. doi:10.1207/s15327809jls1402_6.
  • Brown, A., & Gullberg, M. (2010). Changes in encoding of path of motion after acquisition of a second language. Cognitive Linguistics, 21(2), 263-286. doi:10.1515/COGL.2010.010.

    Abstract

    Languages vary typologically in their lexicalization of Path of motion (Talmy 1991). Furthermore, lexicalization patterns are argued to affect syntactic packaging at the level of the clause (e.g. Slobin 1996b) and tend to transfer from a first (L1) to a second language (L2) in second language acquisition (e.g. Cadierno 2004). From this crosslinguistic and developmental evidence, typological preferences for Path expression appear highly robust features of a first language. The current study examines the extent to which preferences for Path encoding really are as enduring as they seem by investigating (1) whether Japanese follows patterns identified for other verb-framed languages like Spanish, and (2) whether patterns established in one’s first language can change after acquisition of a second language. L1 performance of native speakers of Japanese with intermediate-level knowledge of English was compared to that of monolingual speakers of Japanese and English. Results showed that monolingual Japanese speakers followed basic lexicalization patterns typical of other verb-framed languages, but with different realizations of Path packaging within the clause. Moreover, non-monolingual Japanese speakers displayed both English- and Japanese-like patterns for lexicalization with significantly more Path information per clause than either group of monolinguals. Implications for typology and second language acquisition are discussed.
