Publications

  • Blasi, D. E., Christiansen, M. H., Wichmann, S., Hammarström, H., & Stadler, P. F. (2014). Sound symbolism and the origins of language. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Cornish (Eds.), The evolution of language: Proceedings of the 10th International Conference (EVOLANG 10) (pp. 391-392). Singapore: World Scientific.
  • De Bleser, R., Willmes, K., Graetz, P., & Hagoort, P. (1991). De Akense Afasie Test. Logopedie en Foniatrie, 63, 207-217.
  • Blokland, A., Ten Oever, S., Van Gorp, D., Van Draanen, M., Schmidt, T., Nguyen, E., Krugliak, A., Napoletano, A., Keuter, S., & Klinkenberg, I. (2012). The use of a test battery assessing affective behavior in rats: Order effects. Behavioural Brain Research, 228(1), 16-21. doi:10.1016/j.bbr.2011.11.042.

    Abstract

    Many studies have used test batteries for the evaluation of affective behavior in rodents. This has the advantage that treatment effects can be examined on different aspects of the affective domain. However, the behavior in one test may affect the behavior in a following test. The present study examined possible order effects in rats that were tested in three different tests: Open Field (OF), Zero Maze (ZM) and Forced Swim Test (FST). The data of the present study indicated that the behavior in the ZM was the least affected by the order of testing. In contrast, the behavior in the FST (and to a lesser extent the OF) was dependent on the order of the test in the test battery. Repeated testing in the same test did not change the behavior in the ZM. However, the behavior in the OF and FST changed with repeated testing. The present study indicates that the performance of rats in a test can be dependent on the order in a test battery. Consequently, these data caution the interpretation of treatment effects in studies in which test batteries are used.
  • Blythe, J. (2010). From ethical datives to number markers in Murriny Patha. In R. Hendery, & J. Hendriks (Eds.), Grammatical change: Theory and description (pp. 157-187). Canberra: Pacific Linguistics.
  • Blythe, J. (2012). From passing-gesture to ‘true’ romance: Kin-based teasing in Murriny Patha conversation. Journal of Pragmatics, 44, 508-528. doi:10.1016/j.pragma.2011.11.005.

    Abstract

    Just as interlocutors can manipulate physical objects for performing certain types of social action, they can also perform different social actions by manipulating symbolic objects. A kinship system can be thought of as an abstract collection of lexical mappings and associated cultural conventions. It is a sort of cognitive object that can be readily manipulated for special purposes. For example, the relationship between pairs of individuals can be momentarily re-construed in constructing jokes or teases. Murriny Patha speakers associate certain parts of the body with particular classes of kin. When a group of Murriny Patha women witness a cultural outsider performing a forearm-holding gesture that is characteristically associated with brothers-in-law, they re-associate the gesture to the husband–wife relationship, thus setting up an extended teasing episode. Many of these teases call on gestural resources. Although the teasing is at times repetitive, and the episode is only thinly populated with the telltale “off-record” markers that characterize teasing proposals as non-serious, the proposal is sufficiently far-fetched as to ensure that the teases come off as more bonding than biting.
  • Blythe, J. (2010). Self-association in Murriny Patha talk-in-interaction. In I. Mushin, & R. Gardner (Eds.), Studies in Australian Indigenous Conversation [Special issue] (pp. 447-469). Australian Journal of Linguistics. doi:10.1080/07268602.2010.518555.

    Abstract

    When referring to persons in talk-in-interaction, interlocutors recruit the particular referential expressions that best satisfy both cultural and interactional contingencies, as well as the speaker’s own personal objectives. Regular referring practices reveal cultural preferences for choosing particular classes of reference forms for engaging in particular types of activities. When speakers of the northern Australian language Murriny Patha refer to each other, they display a clear preference for associating the referent to the current conversation’s participants. This preference for Association is normally achieved through the use of triangular reference forms such as kinterms. Triangulations are reference forms that link the person being spoken about to another specified person (e.g. Bill’s doctor). Triangulations are frequently used to associate the referent to the current speaker (e.g.my father), to an addressed recipient (your uncle) or co-present other (this bloke’s cousin). Murriny Patha speakers regularly associate key persons to themselves when making authoritative claims about items of business and important events. They frequently draw on kinship links when attempting to bolster their epistemic position. When speakers demonstrate their relatedness to the event’s protagonists, they ground their contribution to the discussion as being informed by appropriate genealogical connections (effectively, ‘I happen to know something about that. He was after all my own uncle’).
  • Blythe, J. (2013). Preference organization driving structuration: Evidence from Australian Aboriginal interaction for pragmatically motivated grammaticalization. Language, 89(4), 883-919.
  • Bocanegra, B. R., Poletiek, F. H., & Zwaan, R. A. (2014). Asymmetrical feature binding across language and perception. In Proceedings of the 7th annual Conference on Embodied and Situated Language Processing (ESLP 2014).
  • Böcker, K. B. E., Bastiaansen, M. C. M., Vroomen, J., Brunia, C. H. M., & de Gelder, B. (1999). An ERP correlate of metrical stress in spoken word recognition. Psychophysiology, 36, 706-720. doi:10.1111/1469-8986.3660706.

    Abstract

    Rhythmic properties of spoken language such as metrical stress, that is, the alternation of strong and weak syllables, are important in speech recognition of stress-timed languages such as Dutch and English. Nineteen subjects listened passively to or discriminated actively between sequences of bisyllabic Dutch words, which started with either a weak or a strong syllable. Weak-initial words, which constitute 12% of the Dutch lexicon, evoked more negativity than strong-initial words in the interval between the P2 and N400 components of the auditory event-related potential. This negativity was denoted as N325. The N325 was larger during stress discrimination than during passive listening. N325 was also larger when a weak-initial word followed a sequence of strong-initial words than when it followed words with the same stress pattern. The latter difference was larger for listeners who performed well on stress discrimination. It was concluded that the N325 is probably a manifestation of the extraction of metrical stress from the acoustic signal and its transformation into task requirements.
  • Böckler, A., Hömke, P., & Sebanz, N. (2014). Invisible Man: Exclusion from shared attention affects gaze behavior and self-reports. Social Psychological and Personality Science, 5(2), 140-148. doi:10.1177/1948550613488951.

    Abstract

    Social exclusion results in lowered satisfaction of basic needs and shapes behavior in subsequent social situations. We investigated participants' immediate behavioral response during exclusion from an interaction that consisted of establishing eye contact. A newly developed eye-tracker-based "looking game" was employed; participants exchanged looks with two virtual partners in an exchange where the player who had just been looked at chose whom to look at next. While some participants received as many looks as the virtual players (included), others were ignored after two initial looks (excluded). Excluded participants reported lower basic need satisfaction, lower evaluation of the interaction, and devalued their interaction partners more than included participants, demonstrating that people are sensitive to epistemic ostracism. In line with Williams's need-threat model, eye-tracking results revealed that excluded participants did not withdraw from the unfavorable interaction, but increased the number of looks to the player who could potentially reintegrate them.
  • De Boer, B., & Perlman, M. (2014). Physical mechanisms may be as important as brain mechanisms in evolution of speech [Commentary on Ackermann, Hage, & Ziegler. Brain mechanisms of acoustic communication in humans and nonhuman primates: An evolutionary perspective]. Behavioral and Brain Sciences, 37(6), 552-553. doi:10.1017/S0140525X13004007.

    Abstract

    We present two arguments why physical adaptations for vocalization may be as important as neural adaptations. First, fine control over vocalization is not easy for physical reasons, and modern humans may be exceptional. Second, we present an example of a gorilla that shows rudimentary voluntary control over vocalization, indicating that some neural control is already shared with great apes.
  • De Boer, M., Toni, I., & Willems, R. M. (2013). What drives successful verbal communication? Frontiers in Human Neuroscience, 7: 622. doi:10.3389/fnhum.2013.00622.

    Abstract

    There is a vast amount of potential mappings between behaviors and intentions in communication: a behavior can indicate a multitude of different intentions, and the same intention can be communicated with a variety of behaviors. Humans routinely solve these many-to-many referential problems when producing utterances for an Addressee. This ability might rely on social cognitive skills, for instance, the ability to manipulate unobservable summary variables to disambiguate ambiguous behavior of other agents (“mentalizing”) and the drive to invest resources into changing and understanding the mental state of other agents (“communicative motivation”). Alternatively, the ambiguities of verbal communicative interactions might be solved by general-purpose cognitive abilities that process cues that are incidentally associated with the communicative interaction. In this study, we assess these possibilities by testing which cognitive traits account for communicative success during a verbal referential task. Cognitive traits were assessed with psychometric scores quantifying motivation, mentalizing abilities, and general-purpose cognitive abilities, taxing abstract visuo-spatial abilities. Communicative abilities of participants were assessed by using an on-line interactive task that required a speaker to verbally convey a concept to an Addressee. The communicative success of the utterances was quantified by measuring how frequently a number of Evaluators would infer the correct concept. Speakers with high motivational and general-purpose cognitive abilities generated utterances that were more easily interpreted. These findings extend to the domain of verbal communication the notion that motivational and cognitive factors influence the human ability to rapidly converge on shared communicative innovations.
  • Boersma, M., Kemner, C., de Reus, M. A., Collin, G., Snijders, T. M., Hofman, D., Buitelaar, J. K., Stam, C. J., & van den Heuvel, M. P. (2013). Disrupted functional brain networks in autistic toddlers. Brain Connectivity, 3(1), 41-49. doi:10.1089/brain.2012.0127.

    Abstract

    Communication and integration of information between brain regions plays a key role in healthy brain function. Conversely, disruption in brain communication may lead to cognitive and behavioral problems. Autism is a neurodevelopmental disorder that is characterized by impaired social interactions and aberrant basic information processing. Aberrant brain connectivity patterns have indeed been hypothesized to be a key neural underpinning of autism. In this study, graph analytical tools are used to explore the possible deviant functional brain network organization in autism at a very early stage of brain development. Electroencephalography (EEG) recordings in 12 toddlers with autism (mean age 3.5 years) and 19 control subjects were used to assess interregional functional brain connectivity, with functional brain networks constructed at the level of temporal synchronization between brain regions underlying the EEG electrodes. Children with autism showed a significantly increased normalized path length and reduced normalized clustering, suggesting a reduced global communication capacity already during early brain development. In addition, whole brain connectivity was found to be significantly reduced in these young patients suggesting an overall under-connectivity of functional brain networks in autism. Our findings support the hypothesis of abnormal neural communication in autism, with deviating effects already present at the early stages of brain development
  • Bögels, S., Schriefers, H., Vonk, W., Chwilla, D., & Kerkhofs, R. (2012). Are superfluous prosodic breaks harder to process than missing ones? ERP data on auditory sentence comprehension [Abstract]. International Journal of Psychophysiology, 85(3), 352. doi:10.1016/j.ijpsycho.2012.06.167.

    Abstract

    Proceedings of the 16th World Congress of Psychophysiology of the International Organization of Psychophysiology (IOP), Pisa, Italy, September 13-17, 2012.
  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2013). "Are we still talking about the same thing?" MEG reveals perspective-taking in response to pragmatic violations, but not in anticipation. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 215-220). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0066/index.html.

    Abstract

    The current study investigates whether mentalizing, or taking the perspective of your interlocutor, plays an essential role throughout a conversation or whether it is mostly used in reaction to misunderstandings. This study is the first to use a brain-imaging method, MEG, to answer this question. In a first phase of the experiment, MEG participants interacted "live" with a confederate who set naming precedents for certain pictures. In a later phase, these precedents were sometimes broken by a speaker who named the same picture in a different way. This could be done by the same speaker, who set the precedent, or by a different speaker. Source analysis of MEG data showed that in the 800 ms before the naming, when the picture was already on the screen, episodic memory and language areas were activated, but no mentalizing areas, suggesting that the speaker's naming intentions were not anticipated by the listener on the basis of shared experiences. Mentalizing areas only became activated after the same speaker had broken a precedent, which we interpret as a reaction to the violation of conversational pragmatics.
  • Bögels, S., Schriefers, H., Vonk, W., Chwilla, D., & Kerkhofs, R. (2013). Processing consequences of superfluous and missing prosodic breaks in auditory sentence comprehension. Neuropsychologia, 51, 2715-2728. doi:10.1016/j.neuropsychologia.2013.09.008.

    Abstract

    This ERP study investigates whether a superfluous prosodic break (i.e., a prosodic break that does not coincide with a syntactic break) has more severe processing consequences during auditory sentence comprehension than a missing prosodic break (i.e., the absence of a prosodic break at the position of a syntactic break). Participants listened to temporarily ambiguous sentences involving a prosody-syntax match or mismatch. The disambiguation of these sentences was always lexical in nature in the present experiment. This contrasts with a related study by Pauker, Itzhak, Baum, and Steinhauer (2011), where the disambiguation was of a lexical type for missing PBs and of a prosodic type for superfluous PBs. Our results converge with those of Pauker et al.: superfluous prosodic breaks lead to more severe processing problems than missing prosodic breaks. Importantly, the present results extend those of Pauker et al. showing that this holds when the disambiguation is always lexical in nature. Furthermore, our results show that the way listeners use prosody can change over the course of the experiment which bears consequences for future studies.
  • Bögels, S., Schriefers, H., Vonk, W., Chwilla, D. J., & Kerkhofs, R. (2010). The interplay between prosody and syntax in sentence processing: The case of subject- and object-control verbs. Journal of Cognitive Neuroscience, 22(5), 1036-1053. doi:10.1162/jocn.2009.21269.

    Abstract

    This study addresses the question whether prosodic information can affect the choice for a syntactic analysis in auditory sentence processing. We manipulated the prosody (in the form of a prosodic break; PB) of locally ambiguous Dutch sentences to favor one of two interpretations. The experimental items contained two different types of so-called control verbs (subject and object control) in the matrix clause and were syntactically disambiguated by a transitive or by an intransitive verb. In Experiment 1, we established the default off-line preference of the items for a transitive or an intransitive disambiguating verb with a visual and an auditory fragment completion test. The results suggested that subject- and object-control verbs differently affect the syntactic structure that listeners expect. In Experiment 2, we investigated these two types of verbs separately in an on-line ERP study. Consistent with the literature, the PB elicited a closure positive shift. Furthermore, in subject-control items, an N400 effect for intransitive relative to transitive disambiguating verbs was found, both for sentences with and for sentences without a PB. This result suggests that the default preference for subject-control verbs goes in the same direction as the effect of the PB. In object-control items, an N400 effect for intransitive relative to transitive disambiguating verbs was found for sentences with a PB but no effect in the absence of a PB. This indicates that a PB can affect the syntactic analysis that listeners pursue.
  • Bohnemeyer, J. (1999). A questionnaire on event integration. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 87-95). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3002691.

    Abstract

    How do we decide where events begin and end? Like the ECOM clips, this questionnaire is designed to investigate how a language divides and/or integrates complex scenarios into sub-events and macro-events. The questionnaire focuses on events of motion, caused state change (e.g., breaking), and transfer (e.g., giving). It provides a checklist of scenarios that give insight into where a language “draws the line” in event integration, based on known cross-linguistic differences.
  • Bohnemeyer, J. (1999). Event representation and event complexity: General introduction. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 69-73). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3002741.

    Abstract

    How do we decide where events begin and end? In some languages it makes sense to say something like Dan broke the plate, but in other languages it is necessary to treat this action as a complex scenario composed of separate stages (Dan dropped the plate and then the plate broke). This document introduces issues concerning the linguistic and cognitive representations of event complexity and integration, and provides an overview of tasks that are relevant to this topic, including the ECOM clips, the Questionnaire on Event integration, and the Questionnaire on motion lexicalisation and motion description.
  • Bohnemeyer, J., & Caelen, M. (1999). The ECOM clips: A stimulus for the linguistic coding of event complexity. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 74-86). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874627.

    Abstract

    How do we decide where events begin and end? In some languages it makes sense to say something like Dan broke the plate, but in other languages it is necessary to treat this action as a complex scenario composed of separate stages (Dan dropped the plate and then the plate broke). The “Event Complexity” (ECOM) clips are designed to explore how languages differ in dividing and/or integrating complex scenarios into sub-events and macro-events. The stimuli consist of animated clips of geometric shapes that participate in different scenarios (e.g., a circle “hits” a triangle and “breaks” it). Consultants are asked to describe the scenes, and then to comment on possible alternative descriptions.

    Additional information

    1999_The_ECOM_clips.zip
  • Bolton, J. L., Hayward, C., Direk, N., Lewis, J. G., Hammond, G. L., Hill, L. A., Anderson, A., Huffman, J., Wilson, J. F., Campbell, H., Rudan, I., Wright, A., Hastie, N., Wild, S. H., Velders, F. P., Hofman, A., Uitterlinden, A. G., Lahti, J., Räikkönen, K., Kajantie, E., Widen, E., Palotie, A., Eriksson, J. G., Kaakinen, M., Järvelin, M.-R., Timpson, N. J., Davey Smith, G., Ring, S. M., Evans, D. M., St Pourcain, B., Tanaka, T., Milaneschi, Y., Bandinelli, S., Ferrucci, L., van der Harst, P., Rosmalen, J. G. M., Bakker, S. J. L., Verweij, N., Dullaart, R. P. F., Mahajan, A., Lindgren, C. M., Morris, A., Lind, L., Ingelsson, E., Anderson, L. N., Pennell, C. E., Lye, S. J., Matthews, S. G., Eriksson, J., Mellstrom, D., Ohlsson, C., Price, J. F., Strachan, M. W. J., Reynolds, R. M., Tiemeier, H., Walker, B. R., & CORtisol NETwork (CORNET) Consortium (2014). Genome Wide Association Identifies Common Variants at the SERPINA6/SERPINA1 Locus Influencing Plasma Cortisol and Corticosteroid Binding Globulin. PLoS Genetics, 10(7): e1004474. doi:10.1371/journal.pgen.1004474.

    Abstract

    Variation in plasma levels of cortisol, an essential hormone in the stress response, is associated in population-based studies with cardio-metabolic, inflammatory and neuro-cognitive traits and diseases. Heritability of plasma cortisol is estimated at 30-60% but no common genetic contribution has been identified. The CORtisol NETwork (CORNET) consortium undertook genome wide association meta-analysis for plasma cortisol in 12,597 Caucasian participants, replicated in 2,795 participants. The results indicate that <1% of variance in plasma cortisol is accounted for by genetic variation in a single region of chromosome 14. This locus spans SERPINA6, encoding corticosteroid binding globulin (CBG, the major cortisol-binding protein in plasma), and SERPINA1, encoding α1-antitrypsin (which inhibits cleavage of the reactive centre loop that releases cortisol from CBG). Three partially independent signals were identified within the region, represented by common SNPs; detailed biochemical investigation in a nested sub-cohort showed all these SNPs were associated with variation in total cortisol binding activity in plasma, but some variants influenced total CBG concentrations while the top hit (rs12589136) influenced the immunoreactivity of the reactive centre loop of CBG. Exome chip and 1000 Genomes imputation analysis of this locus in the CROATIA-Korcula cohort identified missense mutations in SERPINA6 and SERPINA1 that did not account for the effects of common variants. These findings reveal a novel common genetic source of variation in binding of cortisol by CBG, and reinforce the key role of CBG in determining plasma cortisol levels. In turn this genetic variation may contribute to cortisol-associated degenerative diseases.
  • Bone, D., Ramanarayanan, V., Narayanan, S., Hoedemaker, R. S., & Gordon, P. C. (2013). Analyzing eye-voice coordination in rapid automatized naming. In F. Bimbot, C. Cerisara, C. Fougeron, G. Gravier, L. Lamel, F. Pellegrino, & P. Perrier (Eds.), INTERSPEECH-2013: 14th Annual Conference of the International Speech Communication Association (pp. 2425-2429). ISCA Archive. Retrieved from http://www.isca-speech.org/archive/interspeech_2013/i13_2425.html.

    Abstract

    Rapid Automatized Naming (RAN) is a powerful tool for predicting future reading skill. A person's ability to quickly name symbols as they scan a table is related to higher-level reading proficiency in adults and is predictive of future literacy gains in children. However, noticeable differences are present in the strategies or patterns within groups having similar task completion times. Thus, a further stratification of RAN dynamics may lead to better characterization and later intervention to support reading skill acquisition. In this work, we analyze the dynamics of the eyes, voice, and the coordination between the two during performance. It is shown that fast performers are more similar to each other than to slow performers in their patterns, but not vice versa. Further insights are provided about the patterns of more proficient subjects. For instance, fast performers tended to exhibit smoother behavior contours, suggesting a more stable perception-production process.
  • Bønnelykke, K., Matheson, M. C., Pers, T. H., Granell, R., Strachan, D. P., Alves, A. C., Linneberg, A., Curtin, J. A., Warrington, N. M., Standl, M., Kerkhof, M., Jonsdottir, I., Bukvic, B. K., Kaakinen, M., Sleimann, P., Thorleifsson, G., Thorsteinsdottir, U., Schramm, K., Baltic, S., Kreiner-Møller, E., Simpson, A., St Pourcain, B., Coin, L., Hui, J., Walters, E. H., Tiesler, C. M. T., Duffy, D. L., Jones, G., Ring, S. M., McArdle, W. L., Price, L., Robertson, C. F., Pekkanen, J., Tang, C. S., Thiering, E., Montgomery, G. W., Hartikainen, A.-L., Dharmage, S. C., Husemoen, L. L., Herder, C., Kemp, J. P., Elliot, P., James, A., Waldenberger, M., Abramson, M. J., Fairfax, B. P., Knight, J. C., Gupta, R., Thompson, P. J., Holt, P., Sly, P., Hirschhorn, J. N., Blekic, M., Weidinger, S., Hakonarsson, H., Stefansson, K., Heinrich, J., Postma, D. S., Custovic, A., Pennell, C. E., Jarvelin, M.-R., Koppelman, G. H., Timpson, N., Ferreira, M. A., Bisgaard, H., Henderson, A. J., Australian Asthma Genetics Consortium (AAGC), & EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium (2013). Meta-analysis of genome-wide association studies identifies ten loci influencing allergic sensitization. Nature Genetics, 45(8), 902-906. doi:10.1038/ng.2694.

    Abstract

    Allergen-specific immunoglobulin E (present in allergic sensitization) has a central role in the pathogenesis of allergic disease. We performed the first large-scale genome-wide association study (GWAS) of allergic sensitization in 5,789 affected individuals and 10,056 controls and followed up the top SNP at each of 26 loci in 6,114 affected individuals and 9,920 controls. We increased the number of susceptibility loci with genome-wide significant association with allergic sensitization from three to ten, including SNPs in or near TLR6, C11orf30, STAT6, SLC25A46, HLA-DQB1, IL1RL1, LPP, MYC, IL2 and HLA-B. All the top SNPs were associated with allergic symptoms in an independent study. Risk-associated variants at these ten loci were estimated to account for at least 25% of allergic sensitization and allergic rhinitis. Understanding the molecular mechanisms underlying these associations may provide new insights into the etiology of allergic disease.
  • Bordulk, D., Dalak, N., Tukumba, M., Bennett, L., Bordro Tingey, R., Katherine, M., Cutfield, S., Pamkal, M., & Wightman, G. (2012). Dalabon plants and animals: Aboriginal biocultural knowledge from southern Arnhem Land, north Australia. Palmerston, NT, Australia: Department of Land and Resource Management, Northern Territory Government.
  • Bosker, H. R. (2013). Juncture (prosodic). In G. Khan (Ed.), Encyclopedia of Hebrew Language and Linguistics (pp. 432-434). Leiden: Brill.

    Abstract

    Prosodic juncture concerns the compartmentalization and partitioning of syntactic entities in spoken discourse by means of prosody. It has been argued that the Intonation Unit, defined by internal criteria and prosodic boundary phenomena (e.g., final lengthening, pitch reset, pauses), encapsulates the basic structural unit of spoken Modern Hebrew.
  • Bosker, H. R., Quené, H., Sanders, T. J. M., & de Jong, N. H. (2014). Native 'um's elicit prediction of low-frequency referents, but non-native 'um's do not. Journal of Memory and Language, 75, 104-116. doi:10.1016/j.jml.2014.05.004.

    Abstract

    Speech comprehension involves extensive use of prediction. Linguistic prediction may be guided by the semantics or syntax, but also by the performance characteristics of the speech signal, such as disfluency. Previous studies have shown that listeners, when presented with the filler uh, exhibit a disfluency bias for discourse-new or unknown referents, drawing inferences about the source of the disfluency. The goal of the present study is to study the contrast between native and non-native disfluencies in speech comprehension. Experiment 1 presented listeners with pictures of high-frequency (e.g., a hand) and low-frequency objects (e.g., a sewing machine) and with fluent and disfluent instructions. Listeners were found to anticipate reference to low-frequency objects when encountering disfluency, thus attributing disfluency to speaker trouble in lexical retrieval. Experiment 2 showed that, when participants listened to disfluent non-native speech, no anticipation of low-frequency referents was observed. We conclude that listeners can adapt their predictive strategies to the (non-native) speaker at hand, extending our understanding of the role of speaker identity in speech comprehension.
  • Bosker, H. R. (2013). Sibilant consonants. In G. Khan (Ed.), Encyclopedia of Hebrew Language and Linguistics (pp. 557-561). Leiden: Brill.

    Abstract

    Fricative consonants in Hebrew can be divided into bgdkpt and sibilants (ז, ס, צ, שׁ, שׂ). Hebrew sibilants have been argued to stem from Proto-Semitic affricates, laterals, interdentals and /s/. In standard Israeli Hebrew the sibilants are pronounced as [s] (ס and שׂ), [ʃ] (שׁ), [z] (ז), [ʦ] (צ).
  • Bosker, H. R., Quené, H., Sanders, T. J. M., & de Jong, N. H. (2014). The perception of fluency in native and non-native speech. Language Learning, 64, 579-614. doi:10.1111/lang.12067.

    Abstract

    Where native speakers supposedly are fluent by default, non-native speakers often have to strive hard to achieve a native-like fluency level. However, disfluencies (such as pauses, fillers, repairs, etc.) occur in both native and non-native speech and it is as yet unclear how fluency raters weigh the fluency characteristics of native and non-native speech. Two rating experiments compared the way raters assess the fluency of native and non-native speech. The fluency characteristics of native and non-native speech were controlled by using phonetic manipulations in pause (Experiment 1) and speed characteristics (Experiment 2). The results show that the ratings on manipulated native and non-native speech were affected in a similar fashion. This suggests that there is no difference in the way listeners weigh the fluency characteristics of native and non-native speakers.
  • Bosker, H. R. (2014). The processing and evaluation of fluency in native and non-native speech. PhD Thesis, Utrecht University, Utrecht.

    Abstract

    Disfluency is a common characteristic of spontaneously produced speech. Disfluencies (e.g., silent pauses, filled pauses [uh’s and uhm’s], corrections, repetitions, etc.) occur in both native and non-native speech. There appears to be an apparent contradiction between claims from the evaluative and cognitive approach to fluency. On the one hand, the evaluative approach shows that non-native disfluencies have a negative effect on listeners’ subjective fluency impressions. On the other hand, the cognitive approach reports beneficial effects of native disfluencies on cognitive processes involved in speech comprehension, such as prediction and attention.

    This dissertation aims to resolve this apparent contradiction by combining the evaluative and cognitive approach. The reported studies target both the evaluation (Chapters 2 and 3) and the processing of fluency (Chapters 4 and 5) in native and non-native speech. Thus, it provides an integrative account of native and non-native fluency perception, informative to both language testing practice and cognitive psycholinguists. The proposed account of fluency perception testifies to the notion that speech performance matters: communication through spoken language does not only depend on what is said, but also on how it is said and by whom.
  • Bosker, H. R., Pinget, A.-F., Quené, H., Sanders, T., & De Jong, N. H. (2013). What makes speech sound fluent? The contributions of pauses, speed and repairs. Language testing, 30(2), 159-175. doi:10.1177/0265532212455394.

    Abstract

    The oral fluency level of an L2 speaker is often used as a measure in assessing language proficiency. The present study reports on four experiments investigating the contributions of three fluency aspects (pauses, speed and repairs) to perceived fluency. In Experiment 1 untrained raters evaluated the oral fluency of L2 Dutch speakers. Using specific acoustic measures of pause, speed and repair phenomena, linear regression analyses revealed that pause and speed measures best predicted the subjective fluency ratings, and that repair measures contributed only very little. A second research question sought to account for these results by investigating perceptual sensitivity to acoustic pause, speed and repair phenomena, possibly accounting for the results from Experiment 1. In Experiments 2–4 three new groups of untrained raters rated the same L2 speech materials from Experiment 1 on the use of pauses, speed and repairs. A comparison of the results from perceptual sensitivity (Experiments 2–4) with fluency perception (Experiment 1) showed that perceptual sensitivity alone could not account for the contributions of the three aspects to perceived fluency. We conclude that listeners weigh the importance of the perceived aspects of fluency to come to an overall judgment.
  • Bosker, H. R., Briaire, J., Heeren, W., van Heuven, V. J., & Jongman, S. R. (2010). Whispered speech as input for cochlear implants. In J. Van Kampen, & R. Nouwen (Eds.), Linguistics in the Netherlands 2010 (pp. 1-14).
  • Bosman, C., Schoffelen, J.-M., Brunet, N., Oostenveld, R., Bastos, A., Womelsdorf, T., Rubehn, B., Stieglitz, T., De Weerd, P., & Fries, P. (2012). Attentional stimulus selection through selective synchronization between monkey visual areas. Neuron, 75(5), 875-888. doi:10.1016/j.neuron.2012.06.037.

    Abstract

    A central motif in neuronal networks is convergence, linking several input neurons to one target neuron. In visual cortex, convergence renders target neurons responsive to complex stimuli. Yet, convergence typically sends multiple stimuli to a target, and the behaviorally relevant stimulus must be selected. We used two stimuli, activating separate electrocorticographic V1 sites, and both activating an electrocorticographic V4 site equally strongly. When one of those stimuli activated one V1 site, it gamma synchronized (60-80 Hz) to V4. When the two stimuli activated two V1 sites, primarily the relevant one gamma synchronized to V4. Frequency bands of gamma activities showed substantial overlap containing the band of interareal coherence. The relevant V1 site had its gamma peak frequency 2-3 Hz higher than the irrelevant V1 site and 4-6 Hz higher than V4. Gamma-mediated interareal influences were predominantly directed from V1 to V4. We propose that selective synchronization renders relevant input effective, thereby modulating effective connectivity.
  • Bottini, R., & Casasanto, D. (2010). Implicit spatial length modulates time estimates, but not vice versa. In C. Hölscher, T. F. Shipley, M. Olivetti Belardinelli, J. A. Bateman, & N. Newcombe (Eds.), Spatial Cognition VII. International Conference, Spatial Cognition 2010, Mt. Hood/Portland, OR, USA, August 15-19, 2010. Proceedings (pp. 152-162). Berlin Heidelberg: Springer.

    Abstract

    How are space and time represented in the human mind? Here we evaluate two theoretical proposals, one suggesting a symmetric relationship between space and time (ATOM theory) and the other an asymmetric relationship (metaphor theory). In Experiment 1, Dutch-speakers saw 7-letter nouns that named concrete objects of various spatial lengths (tr. pencil, bench, footpath) and estimated how much time they remained on the screen. In Experiment 2, participants saw nouns naming temporal events of various durations (tr. blink, party, season) and estimated the words’ spatial length. Nouns that named short objects were judged to remain on the screen for a shorter time, and nouns that named longer objects to remain for a longer time. By contrast, variations in the duration of the event nouns’ referents had no effect on judgments of the words’ spatial length. This asymmetric pattern of cross-dimensional interference supports metaphor theory and challenges ATOM.
  • Bottini, R., & Casasanto, D. (2010). Implicit spatial length modulates time estimates, but not vice versa. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 1348-1353). Austin, TX: Cognitive Science Society.

  • Bouckaert, R., Lemey, P., Dunn, M., Greenhill, S. J., Alekseyenko, A. V., Drummond, A. J., Gray, R. D., Suchard, M. A., & Atkinson, Q. D. (2012). Mapping the origins and expansion of the Indo-European language family. Science, 337(6097), 957-960. doi:10.1126/science.1219669.

    Abstract

    There are two competing hypotheses for the origin of the Indo-European language family. The conventional view places the homeland in the Pontic steppes about 6000 years ago. An alternative hypothesis claims that the languages spread from Anatolia with the expansion of farming 8000 to 9500 years ago. We used Bayesian phylogeographic approaches, together with basic vocabulary data from 103 ancient and contemporary Indo-European languages, to explicitly model the expansion of the family and test these hypotheses. We found decisive support for an Anatolian origin over a steppe origin. Both the inferred timing and root location of the Indo-European language trees fit with an agricultural expansion from Anatolia beginning 8000 to 9500 years ago. These results highlight the critical role that phylogeographic inference can play in resolving debates about human prehistory.
  • Bowerman, M. (1975). Cross linguistic similarities at two stages of syntactic development. In E. Lenneberg, & E. Lenneberg (Eds.), Foundations of language development: A multidisciplinary approach (pp. 267-282). New York: Academic Press.
  • Bowerman, M. (1975). Commentary on L. Bloom, P. Lightbown, & L. Hood, “Structure and variation in child language”. Monographs of the Society for Research in Child Development, 40(2), 80-90. Retrieved from http://www.jstor.org/stable/1165986.
  • Bowerman, M. (1976). Commentary on M.D.S. Braine, “Children's first word combinations”. Monographs of the Society for Research in Child Development, 41(1), 98-104. Retrieved from http://www.jstor.org/stable/1165959.
  • Bowerman, M. (1971). [Review of A. Bar Adon & W.F. Leopold (Eds.), Child language: A book of readings (Prentice Hall, 1971)]. Contemporary Psychology: APA Review of Books, 16, 808-809.
  • Bowerman, M., & Meyer, A. (1991). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.12 1991. Nijmegen: MPI for Psycholinguistics.
  • Bowerman, M. (1976). Le relazioni strutturali nel linguaggio infantile: sintattiche o semantiche? [Reprint]. In F. Antinucci, & C. Castelfranchi (Eds.), Psicolinguistica: Percezione, memoria e apprendimento del linguaggio (pp. 303-321). Bologna: Il Mulino.

    Abstract

    Reprinted from Bowerman, M. (1973). Structural relationships in children's utterances: Semantic or syntactic? In T. Moore (Ed.), Cognitive development and the acquisition of language (pp. 197-213). New York: Academic Press.
  • Bowerman, M. (1982). Evaluating competing linguistic models with language acquisition data: Implications of developmental errors with causative verbs. Quaderni di semantica, 3, 5-66.
  • Bowerman, M. (1982). Reorganizational processes in lexical and syntactic development. In E. Wanner, & L. Gleitman (Eds.), Language acquisition: The state of the art (pp. 319-346). New York: Academic Press.
  • Bowerman, M. (1982). Starting to talk worse: Clues to language acquisition from children's late speech errors. In S. Strauss (Ed.), U-shaped behavioral growth (pp. 101-145). New York: Academic Press.
  • Bowerman, M. (1976). Semantic factors in the acquisition of rules for word use and sentence construction. In D. Morehead, & A. Morehead (Eds.), Directions in normal and deficient language development (pp. 99-179). Baltimore: University Park Press.
  • Boyle, W., Lindell, A. K., & Kidd, E. (2013). Investigating the role of verbal working memory in young children's sentence comprehension. Language Learning, 63(2), 211-242. doi:10.1111/lang.12003.

    Abstract

    This study considers the role of verbal working memory in sentence comprehension in typically developing English-speaking children. Fifty-six (N = 56) children aged 4;0–6;6 completed a test of language comprehension that contained sentences which varied in complexity, standardized tests of vocabulary and nonverbal intelligence, and three tests of memory that measured the three verbal components of Baddeley's model of Working Memory (WM): the phonological loop, the episodic buffer, and the central executive. The results showed that children experienced most difficulty comprehending sentences that contained noncanonical word order (passives and object relative clauses). A series of linear mixed effects models were run to analyze the contribution of each component of WM to sentence comprehension. In contrast to most previous studies, the measure of the central executive did not predict comprehension accuracy. A canonicity by episodic buffer interaction showed that the episodic buffer measure was positively associated with better performance on the noncanonical sentences. The results are discussed with reference to capacity-limit and experience-dependent approaches to language comprehension.
  • Bramão, I., Faísca, L., Forkstam, C., Reis, A., & Petersson, K. M. (2010). Cortical brain regions associated with color processing: An FMRI study. The Open Neuroimaging Journal, 4, 164-173. doi:10.2174/1874440001004010164.

    Abstract

    To clarify whether the neural pathways concerning color processing are the same for natural objects, for artifact objects, and for non-sense objects, we examined functional magnetic resonance imaging (FMRI) responses during a covert naming task including the factors color (color vs. black&white (B&W)) and stimulus type (natural vs. artifact vs. non-sense objects). Our results indicate that the superior parietal lobule and precuneus (BA 7) bilaterally, the right hippocampus and the right fusiform gyrus (V4) form part of a network responsible for color processing both for natural and artifact objects, but not for non-sense objects. The recognition of non-sense colored objects compared to the recognition of color objects activated the posterior cingulate/precuneus (BA 7/23/31), suggesting that the color attribute induces the mental operation of trying to associate a non-sense composition with a familiar object. When color objects (both natural and artifact) were contrasted with color nonobjects, we observed activations in the right parahippocampal gyrus (BA 35/36), the superior parietal lobule (BA 7) bilaterally, the left inferior middle temporal region (BA 20/21) and the inferior and superior frontal regions (BA 10/11/47). These additional activations suggest that colored objects recruit brain regions that are related to visual semantic information/retrieval and brain regions related to visuo-spatial processing. Overall, the results suggest that color information is an attribute that improves object recognition (based on behavioral results) and activates a specific neural network related to visual semantic information that is more extensive than for B&W objects during object recognition.
  • Bramão, I., Francisco, A., Inácio, F., Faísca, L., Reis, A., & Petersson, K. M. (2012). Electrophysiological evidence for colour effects on the naming of colour diagnostic and noncolour diagnostic objects. Visual Cognition, 20, 1164-1185. doi:10.1080/13506285.2012.739215.

    Abstract

    In this study, we investigated the level of visual processing at which surface colour information improves the naming of colour diagnostic and noncolour diagnostic objects. Continuous electroencephalograms were recorded while participants performed a visual object naming task in which coloured and black-and-white versions of both types of objects were presented. The black-and-white and the colour presentations were compared in two groups of event-related potentials (ERPs): (1) The P1 and N1 components, indexing early visual processing; and (2) the N300 and N400 components, which index late visual processing. A colour effect was observed in the P1 and N1 components, for both colour and noncolour diagnostic objects. In addition, for colour diagnostic objects, a colour effect was observed in the N400 component. These results suggest that colour information is important for the naming of colour and noncolour diagnostic objects at different levels of visual processing. It thus appears that the visual system uses colour information, during naming of both object types, at early visual stages; however, for the colour diagnostic objects naming, colour information is also recruited during the late visual processing stages.
  • Bramão, I., Faísca, L., Petersson, K. M., & Reis, A. (2012). The contribution of color to object recognition. In I. Kypraios (Ed.), Advances in object recognition systems (pp. 73-88). Rijeka, Croatia: InTech. Retrieved from http://www.intechopen.com/books/advances-in-object-recognition-systems/the-contribution-of-color-in-object-recognition.

    Abstract

    The cognitive processes involved in object recognition remain a mystery to the cognitive sciences. We know that the visual system recognizes objects via multiple features, including shape, color, texture, and motion characteristics. However, the way these features are combined to recognize objects is still an open question. The purpose of this contribution is to review the research about the specific role of color information in object recognition. Given that the human brain incorporates specialized mechanisms to handle color perception in the visual environment, it is a fair question to ask what functional role color might play in everyday vision.
  • Bramão, I., Faísca, L., Petersson, K. M., & Reis, A. (2010). The influence of surface color information and color knowledge information in object recognition. American Journal of Psychology, 123, 437-466. Retrieved from http://www.jstor.org/stable/10.5406/amerjpsyc.123.4.0437.

    Abstract

    In order to clarify whether the influence of color knowledge information in object recognition depends on the presence of the appropriate surface color, we designed a name—object verification task. The relationship between color and shape information provided by the name and by the object photo was manipulated in order to assess color interference independently of shape interference. We tested three different versions for each object: typically colored, black and white, and nontypically colored. The response times on the nonmatching trials were used to measure the interference between the name and the photo. We predicted that the more similar the name and the photo are, the longer it would take to respond. Overall, the color similarity effect disappeared in the black-and-white and nontypical color conditions, suggesting that the influence of color knowledge on object recognition depends on the presence of the appropriate surface color information.
  • Bramão, I., Faísca, L., Forkstam, C., Inácio, F., Araújo, S., Petersson, K. M., & Reis, A. (2012). The interaction between surface color and color knowledge: Behavioral and electrophysiological evidence. Brain and Cognition, 78, 28-37. doi:10.1016/j.bandc.2011.10.004.

    Abstract

    In this study, we used event-related potentials (ERPs) to evaluate the contribution of surface color and color knowledge information in object identification. We constructed two color-object verification tasks – a surface and a knowledge verification task – using high color diagnostic objects; both typical and atypical color versions of the same object were presented. Continuous electroencephalogram was recorded from 26 subjects. A cluster randomization procedure was used to explore the differences between typical and atypical color objects in each task. In the color knowledge task, we found two significant clusters that were consistent with the N350 and late positive complex (LPC) effects. Atypical color objects elicited more negative ERPs compared to typical color objects. The color effect found in the N350 time window suggests that surface color is an important cue that facilitates the selection of a stored object representation from long-term memory. Moreover, the observed LPC effect suggests that surface color activates associated semantic knowledge about the object, including color knowledge representations. We did not find any significant differences between typical and atypical color objects in the surface color verification task, which indicates that there is little contribution of color knowledge to resolve the surface color verification. Our main results suggest that surface color is an important visual cue that triggers color knowledge, thereby facilitating object identification.
  • Brandler, W. M., Morris, A. P., Evans, D. M., Scerri, T. S., Kemp, J. P., Timpson, N. J., St Pourcain, B., Davey Smith, G., Ring, S. M., Stein, J., Monaco, A. P., Talcott, J. B., Fisher, S. E., Webber, C., & Paracchini, S. (2013). Common variants in left/right asymmetry genes and pathways are associated with relative hand skill. PLoS Genetics, 9(9): e1003751. doi:10.1371/journal.pgen.1003751.

    Abstract

    Humans display structural and functional asymmetries in brain organization, strikingly with respect to language and handedness. The molecular basis of these asymmetries is unknown. We report a genome-wide association study meta-analysis for a quantitative measure of relative hand skill in individuals with dyslexia [reading disability (RD)] (n = 728). The most strongly associated variant, rs7182874 (P = 8.68×10−9), is located in PCSK6, further supporting an association we previously reported. We also confirmed the specificity of this association in individuals with RD; the same locus was not associated with relative hand skill in a general population cohort (n = 2,666). As PCSK6 is known to regulate NODAL in the development of left/right (LR) asymmetry in mice, we developed a novel approach to GWAS pathway analysis, using gene-set enrichment to test for an over-representation of highly associated variants within the orthologs of genes whose disruption in mice yields LR asymmetry phenotypes. Four out of 15 LR asymmetry phenotypes showed an over-representation (FDR≤5%). We replicated three of these phenotypes; situs inversus, heterotaxia, and double outlet right ventricle, in the general population cohort (FDR≤5%). Our findings lead us to propose that handedness is a polygenic trait controlled in part by the molecular mechanisms that establish LR body asymmetry early in development.
  • Brandmeyer, A., Sadakata, M., Spyrou, L., McQueen, J. M., & Desain, P. (2013). Decoding of single-trial auditory mismatch responses for online perceptual monitoring and neurofeedback. Frontiers in Neuroscience, 7: 265. doi:10.3389/fnins.2013.00265.

    Abstract

    Multivariate pattern classification methods are increasingly applied to neuroimaging data in the context of both fundamental research and in brain-computer interfacing approaches. Such methods provide a framework for interpreting measurements made at the single-trial level with respect to a set of two or more distinct mental states. Here, we define an approach in which the output of a binary classifier trained on data from an auditory mismatch paradigm can be used for online tracking of perception and as a neurofeedback signal. The auditory mismatch paradigm is known to induce distinct perceptual states related to the presentation of high- and low-probability stimuli, which are reflected in event-related potential (ERP) components such as the mismatch negativity (MMN). The first part of this paper illustrates how pattern classification methods can be applied to data collected in an MMN paradigm, including discussion of the optimization of preprocessing steps, the interpretation of features and how the performance of these methods generalizes across individual participants and measurement sessions. We then go on to show that the output of these decoding methods can be used in online settings as a continuous index of single-trial brain activation underlying perceptual discrimination. We conclude by discussing several potential domains of application, including neurofeedback, cognitive monitoring and passive brain-computer interfaces

    Additional information

    Brandmeyer_etal_2013a.pdf
  • Brandmeyer, A., Farquhar, J., McQueen, J. M., & Desain, P. (2013). Decoding speech perception by native and non-native speakers using single-trial electrophysiological data. PLoS One, 8: e68261. doi:10.1371/journal.pone.0068261.

    Abstract

    Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: 1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? 2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition
  • Brandmeyer, A., Desain, P. W., & McQueen, J. M. (2012). Effects of native language on perceptual sensitivity to phonetic cues. Neuroreport, 23, 653-657. doi:10.1097/WNR.0b013e32835542cd.

    Abstract

    The present study used electrophysiological and behavioral measures to investigate the perception of an English stop consonant contrast by native English listeners and by native Dutch listeners who were highly proficient in English. A /ba/-/pa/ continuum was created from a naturally produced /pa/ token by removing successive periods of aspiration, thus reducing the voice onset time. Although aspiration is a relevant cue for distinguishing voiced and unvoiced labial stop consonants (/b/ and /p/) in English, prevoicing is the primary cue used to distinguish between these categories in Dutch. In the electrophysiological experiment, participants listened to oddball sequences containing the standard /pa/ stimulus and one of three deviant stimuli while the mismatch-negativity response was measured. Participants then completed an identification task on the same stimuli. The results showed that native English participants were more sensitive to reductions in aspiration than native Dutch participants, as indicated by shifts in the category boundary, by differing within-group patterns of mismatch-negativity responses, and by larger mean evoked potential amplitudes in the native English group for two of the three deviant stimuli. This between-group difference in the sensorineural processing of aspiration cues indicates that native language experience alters the way in which the acoustic features of speech are processed in the auditory brain, even following extensive second-language training.

  • Brandt, M., Nitschke, S., & Kidd, E. (2012). Experience and processing of relative clauses in German. In A. K. Biller, E. Y. Chung, & A. E. Kimball (Eds.), Proceedings of the 36th annual Boston University Conference on Language Development (BUCLD 36) (pp. 87-100). Boston, MA: Cascadilla Press.
  • Braun, B., & Chen, A. (2010). Intonation of 'now' in resolving scope ambiguity in English and Dutch. Journal of Phonetics, 38, 431-444. doi:10.1016/j.wocn.2010.04.002.

    Abstract

    The adverb now in English (nu in Dutch) can draw listeners’ attention to an upcoming contrast (e.g., ‘Put X in Y. Now put X in Z’). In Dutch, but not English, the position of this sequential adverb may disambiguate which constituent is contrasted. We investigated whether and how the intonational realization of now/nu is varied to signal different scopes and whether it interacts with word order. Three contrast conditions (contrast in object, location, or both) were produced by eight Dutch and eight English speakers. Results showed no consistent use of word order for scope disambiguation in Dutch. Importantly, independent of language, an unaccented now/nu signaled a contrasting object while an accented now/nu signaled a contrast in the location. Since these intonational patterns were independent of word order, we interpreted the results in the framework of grammatical saliency: now/nu appears to be unmarked when the contrast lies in a salient constituent (the object) but marked with a prominent rise when a less salient constituent is contrasted (the location).

  • Braun, B., & Chen, A. (2012). Now for something completely different: Anticipatory effects of intonation. In O. Niebuhr (Ed.), Understanding prosody: The role of context, function and communication (pp. 289-311). Berlin: de Gruyter.

    Abstract

    INTRODUCTION It is nowadays well established that spoken sentence processing is achieved in an incremental manner. As a sentence unfolds over time, listeners rapidly process incoming information to eliminate local ambiguity and make predictions about the most plausible interpretation of the sentence. Previous research has shown that these predictions are based on all kinds of linguistic information, explicitly or implicitly in combination with world knowledge. A substantial amount of evidence comes from studies on online referential processing conducted in the visual-world paradigm (Cooper 1974; Eberhard, Spivey-Knowlton, Sedivy, and Tanenhaus 1995; Tanenhaus, Spivey-Knowlton, Eberhard, and Sedivy 1995; Sedivy, Tanenhaus, Chambers, and Carlson 1999).
  • Braun, B., & Tagliapietra, L. (2010). The role of contrastive intonation contours in the retrieval of contextual alternatives. Language and Cognitive Processes, 25, 1024 -1043. doi:10.1080/01690960903036836.

    Abstract

    Sentences with a contrastive intonation contour are usually produced when the speaker entertains alternatives to the accented words. However, such contrastive sentences are frequently produced without making the alternatives explicit for the listener. In two cross-modal associative priming experiments we tested in Dutch whether such contextual alternatives become available to listeners upon hearing a sentence with a contrastive intonation contour compared with a sentence with a non-contrastive one. The first experiment tested the recognition of contrastive associates (contextual alternatives to the sentence-final primes), the second one the recognition of non-contrastive associates (generic associates which are not alternatives). Results showed that contrastive associates were facilitated when the primes occurred in sentences with a contrastive intonation contour but not in sentences with a non-contrastive intonation. Non-contrastive associates were weakly facilitated independent of intonation. Possibly, contrastive contours trigger an accommodation mechanism by which listeners retrieve the contrast available for the speaker.
  • Braun, B., & Tagliapietra, L. (2010). The role of contrastive intonation contours in the retrieval of contextual alternatives. In D. G. Watson, M. Wagner, & E. Gibson (Eds.), Experimental and theoretical advances in prosody (pp. 1024-1043). Hove: Psychology Press.

    Abstract

    Sentences with a contrastive intonation contour are usually produced when the speaker entertains alternatives to the accented words. However, such contrastive sentences are frequently produced without making the alternatives explicit for the listener. In two cross-modal associative priming experiments we tested in Dutch whether such contextual alternatives become available to listeners upon hearing a sentence with a contrastive intonation contour compared with a sentence with a non-contrastive one. The first experiment tested the recognition of contrastive associates (contextual alternatives to the sentence-final primes), the second one the recognition of non-contrastive associates (generic associates which are not alternatives). Results showed that contrastive associates were facilitated when the primes occurred in sentences with a contrastive intonation contour but not in sentences with a non-contrastive intonation. Non-contrastive associates were weakly facilitated independent of intonation. Possibly, contrastive contours trigger an accommodation mechanism by which listeners retrieve the contrast available for the speaker.
  • Brehm, L. (2014). Speed limits and red flags: Why number agreement accidents happen. PhD Thesis, University of Illinois at Urbana-Champaign, Urbana-Champaign, Il.
  • Brehm, L., & Bock, K. (2013). What counts in grammatical number agreement? Cognition, 128(2), 149-169. doi:10.1016/j.cognition.2013.03.009.

    Abstract

    Both notional and grammatical number affect agreement during language production. To explore their workings, we investigated how semantic integration, a type of conceptual relatedness, produces variations in agreement (Solomon & Pearlmutter, 2004). These agreement variations are open to competing notional and lexical–grammatical number accounts. The notional hypothesis is that changes in number agreement reflect differences in referential coherence: More coherence yields more singularity. The lexical–grammatical hypothesis is that changes in agreement arise from competition between nouns differing in grammatical number: More competition yields more plurality. These hypotheses make opposing predictions about semantic integration. On the notional hypothesis, semantic integration promotes singular agreement. On the lexical–grammatical hypothesis, semantic integration promotes plural agreement. We tested these hypotheses with agreement elicitation tasks in two experiments. Both experiments supported the notional hypothesis, with semantic integration creating faster and more frequent singular agreement. This implies that referential coherence mediates the effect of semantic integration on number agreement.
  • Broeder, D., Kemps-Snijders, M., Van Uytvanck, D., Windhouwer, M., Withers, P., Wittenburg, P., & Zinn, C. (2010). A data category registry- and component-based metadata framework. In N. Calzolari, B. Maegaard, J. Mariani, J. Odjik, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 43-47). European Language Resources Association (ELRA).

    Abstract

    We describe our computer-supported framework to overcome the rule of metadata schism. It combines the use of controlled vocabularies, managed by a data category registry, with a component-based approach, where the categories can be combined to yield complex metadata structures. A metadata scheme devised in this way will thus be grounded in its use of categories. Schema designers will profit from existing prefabricated larger building blocks, motivating re-use at a larger scale. The common base of any two metadata schemes within this framework will solve, at least to a good extent, the semantic interoperability problem, and consequently, further promote systematic use of metadata for existing resources and tools to be shared.
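
    As a purely schematic illustration of the idea described above (small components whose fields are grounded in registered data categories, combined into larger metadata structures), the following Python sketch uses invented component names and category identifiers; it does not reproduce the framework's actual schema or registry interfaces.

        # Schematic illustration only: metadata components grounded in data-category
        # references and composed into larger structures. All names and URIs are invented.
        actor_component = {
            "name": "Actor",
            "fields": {
                "fullName": "datcat:example/fullName",   # hypothetical category reference
                "role": "datcat:example/actorRole",
            },
        }

        session_component = {
            "name": "Session",
            "fields": {"date": "datcat:example/sessionDate"},
            "components": [actor_component],             # re-use of a prefabricated building block
        }

        def referenced_categories(component):
            """Collect every data-category reference used in a component tree."""
            refs = set(component.get("fields", {}).values())
            for sub in component.get("components", []):
                refs |= referenced_categories(sub)
            return refs

        print(sorted(referenced_categories(session_component)))

    Grounding every field in a shared category reference is what gives two independently designed schemes a common base for semantic interoperability, which is the point the abstract makes about re-use of prefabricated building blocks.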
  • Broeder, D., & Lannom, L. (2014). Data Type Registries: A Research Data Alliance Working Group. D-Lib Magazine, 20, 1. doi:10.1045/january2014-broeder.

    Abstract

    Automated processing of large amounts of scientific data, especially across domains, requires that the data can be selected and parsed without human intervention. Precise characterization of that data, as in typing, is needed once the processing goes beyond the realm of domain specific or local research group assumptions. The Research Data Alliance (RDA) Data Type Registries Working Group (DTR-WG) was assembled to address this issue through the creation of a Data Type Registry methodology, data model, and prototype. The WG was approved by the RDA Council during March of 2013 and will complete its work in mid-2014, in between the third and fourth RDA Plenaries.
  • Broeder, D., Van Uytvanck, D., & Senft, G. (2012). Citing on-line language resources. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 1391-1394). European Language Resources Association (ELRA).

    Abstract

    Although the possibility of referring or citing on-line data from publications is seen at least theoretically as an important means to provide immediate testable proof or simple illustration of a line of reasoning, the practice has not been wide-spread yet and no extensive experience has been gained about the possibilities and problems of referring to raw data-sets. This paper makes a case to investigate the possibility and need of persistent data visualization services that facilitate the inspection and evaluation of the cited data.
  • Broeder, D., & Van Uytvanck, D. (2014). Metadata formats. In J. Durand, U. Gut, & G. Kristoffersen (Eds.), The Oxford Handbook of Corpus Phonology (pp. 150-165). Oxford: Oxford University Press.
  • Broeder, D., Schuurman, I., & Windhouwer, M. (2014). Experiences with the ISOcat Data Category Registry. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 4565-4568).
  • Broeder, D., Van Uytvanck, D., Gavrilidou, M., Trippel, T., & Windhouwer, M. (2012). Standardizing a component metadata infrastructure. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 1387-1390). European Language Resources Association (ELRA).

    Abstract

    This paper describes the status of the standardization efforts for a Component Metadata approach to describing Language Resources with metadata. Different linguistic and Language & Technology communities such as CLARIN, META-SHARE and NaLiDa use this component approach and see its standardization as a matter for cooperation that has the potential to create a large interoperable domain of joint metadata. Starting with an overview of the component metadata approach, together with the related semantic interoperability tools and services such as the ISOcat data category registry and the relation registry, we explain the standardization plan and efforts for component metadata within ISO TC37/SC4. Finally, we present information about uptake and plans for the use of component metadata within the three mentioned linguistic and L&T communities.
  • Broersma, M., Aoyagi, M., & Weber, A. (2010). Cross-linguistic production and perception of Japanese- and Dutch-accented English. Journal of the Phonetic Society of Japan, 14(1), 60-75.
  • Broersma, M. (2010). Dutch listener's perception of Korean fortis, lenis, and aspirated stops: First exposure. In K. Dziubalska-Kołaczyk, M. Wrembel, & M. Kul (Eds.), Proceedings of the 6th International Symposium on the Acquisition of Second Language Speech, New Sounds 2010, Poznań, Poland, 1-3 May 2010 (pp. 49-54).
  • Broersma, M. (2012). Increased lexical activation and reduced competition in second-language listening. Language and Cognitive Processes, 27(7-8), 1205-1224. doi:10.1080/01690965.2012.660170.

    Abstract

    This study investigates how inaccurate phoneme processing affects recognition of partially onset-overlapping pairs like DAFFOdil-DEFIcit and of minimal pairs like flash-flesh in second-language listening. Two cross-modal priming experiments examined differences between native (L1) and second-language (L2) listeners at two stages of lexical processing: first, the activation of intended and mismatching lexical representations and second, the competition between those lexical representations. Experiment 1 shows that truncated primes like daffo- and defi- activated lexical representations of mismatching words (either deficit or daffodil) more for L2 listeners than for L1 listeners. Experiment 2 shows that for minimal pairs, matching primes (prime: flash, target: FLASH) facilitated recognition of visual targets for L1 and L2 listeners alike, whereas mismatching primes (flesh, FLASH) inhibited recognition consistently for L1 listeners but only in a minority of cases for L2 listeners; in most cases, for them, primes facilitated recognition of both words equally strongly. Thus, L1 and L2 listeners' results differed both at the stages of lexical activation and competition. First, perceptually difficult phonemes activated mismatching words more for L2 listeners than for L1 listeners, and second, lexical competition led to efficient inhibition of mismatching competitors for L1 listeners but in most cases not for L2 listeners.
  • Broersma, M. (2010). Korean lenis, fortis, and aspirated stops: Effect of place of articulation on acoustic realization. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan. (pp. 941-944).

    Abstract

    Unlike most of the world's languages, Korean distinguishes three types of voiceless stops, namely lenis, fortis, and aspirated stops. All occur at three places of articulation. In previous work, acoustic measurements are mostly collapsed over the three places of articulation. This study therefore provides acoustic measurements of Korean lenis, fortis, and aspirated stops at all three places of articulation separately. Clear differences are found among the acoustic characteristics of the stops at the different places of articulation.
  • Broersma, M. (2012). Lexical representation of perceptually difficult second-language words [Abstract]. Program abstracts from the 164th Meeting of the Acoustical Society of America published in the Journal of the Acoustical Society of America, 132(3), 2053.

    Abstract

    This study investigates the lexical representation of second-language words that contain difficult-to-distinguish phonemes. Dutch and English listeners' perception of partially onset-overlapping word pairs like DAFFOdil-DEFIcit and minimal pairs like flash-flesh was assessed with two cross-modal priming experiments, examining two stages of lexical processing: activation of intended and mismatching lexical representations (Exp.1) and competition between those lexical representations (Exp.2). Exp.1 shows that truncated primes like daffo- and defi- activated lexical representations of mismatching words (either deficit or daffodil) more for L2 than L1 listeners. Exp.2 shows that for minimal pairs, matching primes (prime: flash, target: FLASH) facilitated recognition of visual targets for L1 and L2 listeners alike, whereas mismatching primes (flesh, FLASH) inhibited recognition consistently for L1 listeners but only in a minority of cases for L2 listeners; in most cases, for them, primes facilitated recognition of both words equally strongly. Importantly, all listeners experienced a combination of facilitation and inhibition (and all items sometimes caused facilitation and sometimes inhibition). These results suggest that for all participants, some of the minimal pairs were represented with separate, native-like lexical representations, whereas other pairs were stored as homophones. The nature of the L2 lexical representations thus varied strongly even within listeners.
  • Broersma, M., & Scharenborg, O. (2010). Native and non-native listeners’ perception of English consonants in different types of noise. Speech Communication, 52, 980-995. doi:10.1016/j.specom.2010.08.010.

    Abstract

    This paper shows that the effect of different types of noise on recognition of different phonemes by native versus non-native listeners is highly variable, even within classes of phonemes with the same manner or place of articulation. In a phoneme identification experiment, English and Dutch listeners heard all 24 English consonants in VCV stimuli in quiet and in three types of noise: competing talker, speech-shaped noise, and modulated speech-shaped noise (all with SNRs of −6 dB). Differential effects of noise type for English and Dutch listeners were found for eight consonants (/p t k g m n ŋ r/) but not for the other 16 consonants. For those eight consonants, effects were again highly variable: each noise type hindered non-native listeners more than native listeners for some of the target sounds, but none of the noise types did so for all of the target sounds, not even for phonemes with the same manner or place of articulation. The results imply that the noise types employed will strongly affect the outcomes of any study of native and non-native speech perception in noise.
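
    The stimulus construction described above implies a standard computation: scaling the masker so that the speech-to-noise power ratio matches the target SNR (−6 dB in the study). A minimal Python sketch of that scaling is given below with synthetic signals; it is not the authors' stimulus-preparation code.

        # Illustrative sketch: mix speech and noise at a requested SNR (in dB).
        # Synthetic signals; not the stimulus-preparation code used in the study.
        import numpy as np

        def mix_at_snr(speech, noise, snr_db):
            """Scale the noise so that 10*log10(P_speech / P_noise) equals snr_db, then mix."""
            p_speech = np.mean(speech ** 2)
            p_noise = np.mean(noise ** 2)
            target_noise_power = p_speech / (10 ** (snr_db / 10.0))
            return speech + noise * np.sqrt(target_noise_power / p_noise)

        fs = 16000
        t = np.arange(fs) / fs
        speech = np.sin(2 * np.pi * 220 * t)                  # toy 220 Hz "speech" signal
        noise = np.random.default_rng(1).standard_normal(fs)  # toy masker
        mixture = mix_at_snr(speech, noise, snr_db=-6.0)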
  • Broersma, M. (2010). Perception of final fricative voicing: Native and nonnative listeners’ use of vowel duration. Journal of the Acoustical Society of America, 127, 1636-1644. doi:10.1121/1.3292996.
  • Brookshire, G., Casasanto, D., & Ivry, R. (2010). Modulation of motor-meaning congruity effects for valenced words. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Meeting of the Cognitive Science Society (CogSci 2010) (pp. 1940-1945). Austin, TX: Cognitive Science Society.

    Abstract

    We investigated the extent to which emotionally valenced words automatically cue spatio-motor representations. Participants made speeded button presses, moving their hand upward or downward while viewing words with positive or negative valence. Only the color of the words was relevant to the response; on target trials, there was no requirement to read the words or process their meaning. In Experiment 1, upward responses were faster for positive words, and downward for negative words. This effect was extinguished, however, when words were repeated. In Experiment 2, participants performed the same primary task with the addition of distractor trials. Distractors either oriented attention toward the words’ meaning or toward their color. Congruity effects were increased with orientation to meaning, but eliminated with orientation to color. When people read words with emotional valence, vertical spatio-motor representations are activated highly automatically, but this automaticity is modulated by repetition and by attentional orientation to the words’ form or meaning.
  • Brookshire, G., & Casasanto, D. (2012). Motivation and motor control: Hemispheric specialization for approach motivation reverses with handedness. PLoS One, 7(4), e36036. doi:10.1371/journal.pone.0036036.

    Abstract

    Background: According to decades of research on affective motivation in the human brain, approach motivational states are supported primarily by the left hemisphere and avoidance states by the right hemisphere. The underlying cause of this specialization, however, has remained unknown. Here we conducted a first test of the Sword and Shield Hypothesis (SSH), according to which the hemispheric laterality of affective motivation depends on the laterality of motor control for the dominant hand (i.e., the "sword hand," used preferentially to perform approach actions) and the nondominant hand (i.e., the "shield hand," used preferentially to perform avoidance actions). Methodology/Principal Findings: To determine whether the laterality of approach motivation varies with handedness, we measured alpha-band power (an inverse index of neural activity) in right- and left-handers during resting-state electroencephalography and analyzed hemispheric alpha-power asymmetries as a function of the participants' trait approach motivational tendencies. Stronger approach motivation was associated with more left-hemisphere activity in right-handers, but with more right-hemisphere activity in left-handers. Conclusions: The hemispheric correlates of approach motivation reversed between right- and left-handers, consistent with the way they typically use their dominant and nondominant hands to perform approach and avoidance actions. In both right- and left-handers, approach motivation was lateralized to the same hemisphere that controls the dominant hand. This covariation between neural systems for action and emotion provides initial support for the SSH.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2010). Shadowing reduced speech and alignment. Journal of the Acoustical Society of America, 128(1), EL32-EL37. doi:10.1121/1.3448022.

    Abstract

    This study examined whether listeners align to reduced speech. Participants were asked to shadow sentences from a casual speech corpus containing canonical and reduced targets. Participants' productions showed alignment: durations of canonical targets were longer than durations of reduced targets; and participants often imitated the segment types (canonical versus reduced) in both targets. The effect sizes were similar to previous work on alignment. In addition, shadowed productions were overall longer in duration than the original stimuli and this effect was larger for reduced than canonical targets. A possible explanation for this finding is that listeners reconstruct canonical forms from reduced forms.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2012). Speech reductions change the dynamics of competition during spoken word recognition. Language and Cognitive Processes, 27(4), 539-571. doi:10.1080/01690965.2011.555268.

    Abstract

    Three eye-tracking experiments investigated how phonological reductions (e.g., "puter" for "computer") modulate phonological competition. Participants listened to sentences extracted from a spontaneous speech corpus and saw four printed words: a target (e.g., "computer"), a competitor similar to the canonical form (e.g., "companion"), one similar to the reduced form (e.g., "pupil"), and an unrelated distractor. In Experiment 1, we presented canonical and reduced forms in a syllabic and in a sentence context. Listeners directed their attention to a similar degree to both competitors independent of the target's spoken form. In Experiment 2, we excluded reduced forms and presented canonical forms only. In such a listening situation, participants showed a clear preference for the "canonical form" competitor. In Experiment 3, we presented canonical forms intermixed with reduced forms in a sentence context and replicated the competition pattern of Experiment 1. These data suggest that listeners penalize acoustic mismatches less strongly when listening to reduced speech than when listening to fully articulated speech. We conclude that flexibility to adjust to speech-intrinsic factors is a key feature of the spoken word recognition system.
  • Brouwer, S. (2010). Processing strongly reduced forms in casual speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Brouwer, S., & Bradlow, A. R. (2014). Contextual variability during speech-in-speech recognition. The Journal of the Acoustical Society of America, 136(1), EL26-EL32. doi:10.1121/1.4881322.

    Abstract

    This study examined the influence of background language variation on speech recognition. English listeners performed an English sentence recognition task in either “pure” background conditions in which all trials had either English or Dutch background babble or in mixed background conditions in which the background language varied across trials (i.e., a mix of English and Dutch or one of these background languages mixed with quiet trials). This design allowed the authors to compare performance on identical trials across pure and mixed conditions. The data reveal that speech-in-speech recognition is sensitive to contextual variation in terms of the target-background language (mis)match depending on the relative ease/difficulty of the test trials in relation to the surrounding trials.
  • Brouwer, S. (2013). Continuous recognition memory for spoken words in noise. Proceedings of Meetings on Acoustics, 19: 060117. doi:10.1121/1.4798781.

    Abstract

    Previous research has shown that talker variability affects recognition memory for spoken words (Palmeri et al., 1993). This study examines whether additive noise is similarly retained in memory for spoken words. In a continuous recognition memory task, participants listened to a list of spoken words mixed with noise consisting of a pure tone or of high-pass filtered white noise. The noise and speech were in non-overlapping frequency bands. In Experiment 1, listeners indicated whether each spoken word in the list was OLD (heard before in the list) or NEW. Results showed that listeners were as accurate and as fast at recognizing a word as old if it was repeated with the same or different noise. In Experiment 2, listeners also indicated whether words judged as OLD were repeated with the same or with a different type of noise. Results showed that listeners benefitted from hearing words presented with the same versus different noise. These data suggest that spoken words and temporally-overlapping but spectrally non-overlapping noise are retained or reconstructed together for explicit, but not for implicit recognition memory. This indicates that the extent to which noise variability is retained seems to depend on the depth of processing.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2013). Discourse context and the recognition of reduced and canonical spoken words. Applied Psycholinguistics, 34, 519-539. doi:10.1017/S0142716411000853.

    Abstract

    In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" competitor, a "canonical form" competitor and an unrelated distractor. Target sentences were presented in isolation or with a wider discourse context. Experiment 1 revealed that target recognition was facilitated by wider discourse information. Importantly, the recognition of reduced forms improved significantly when preceded by strongly rather than by weakly supportive discourse contexts. This was not the case for canonical forms: listeners' target word recognition was not dependent on the degree of supportive context. Experiment 2 showed that the differential context effects in Experiment 1 were not due to an additional amount of speaker information. Thus, these data suggest that in natural settings a strongly supportive discourse context is more important for the recognition of reduced forms than the recognition of canonical forms.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2012). Can hearing puter activate pupil? Phonological competition and the processing of reduced spoken words in spontaneous conversations. Quarterly Journal of Experimental Psychology, 65, 2193-2220. doi:10.1080/17470218.2012.693109.

    Abstract

    In listeners' daily communicative exchanges, they most often hear casual speech, in which words are often produced with fewer segments, rather than the careful speech used in most psycholinguistic experiments. Three experiments examined phonological competition during the recognition of reduced forms such as [pjutər] for computer using a target-absent variant of the visual world paradigm. Listeners' eye movements were tracked upon hearing canonical and reduced forms as they looked at displays of four printed words. One of the words was phonologically similar to the canonical pronunciation of the target word, one word was similar to the reduced pronunciation, and two words served as unrelated distractors. When spoken targets were presented in isolation (Experiment 1) and in sentential contexts (Experiment 2), competition was modulated as a function of the target word form. When reduced targets were presented in sentential contexts, listeners were probabilistically more likely to first fixate reduced-form competitors before shifting their eye gaze to canonical-form competitors. Experiment 3, in which the original /p/ from [pjutər] was replaced with a “real” onset /p/, showed an effect of cross-splicing in the late time window. We conjecture that these results fit best with the notion that speech reductions initially activate competitors that are similar to the phonological surface form of the reduction, but that listeners nevertheless can exploit fine phonetic detail to reconstruct strongly reduced forms to their canonical counterparts.
  • Brouwer, H., Fitz, H., & Hoeks, J. (2012). Getting real about semantic illusions: Rethinking the functional role of the P600 in language comprehension. Brain Research, 1446, 127-143. doi:10.1016/j.brainres.2012.01.055.

    Abstract

    In traditional theories of language comprehension, syntactic and semantic processing are inextricably linked. This assumption has been challenged by the ‘Semantic Illusion Effect’ found in studies using Event Related brain Potentials. Semantically anomalous sentences did not produce the expected increase in N400 amplitude but rather one in P600 amplitude. To explain these findings, complex models have been devised in which an independent semantic processing stream can arrive at a sentence interpretation that may differ from the interpretation prescribed by the syntactic structure of the sentence. We review five such multi-stream models and argue that they do not account for the full range of relevant results because they assume that the amplitude of the N400 indexes some form of semantic integration. Based on recent evidence we argue that N400 amplitude might reflect the retrieval of lexical information from memory. On this view, the absence of an N400-effect in Semantic Illusion sentences can be explained in terms of priming. Furthermore, we suggest that semantic integration, which has previously been linked to the N400 component, might be reflected in the P600 instead. When combined, these functional interpretations result in a single-stream account of language processing that can explain all of the Semantic Illusion data.
  • Brouwer, H., Fitz, H., & Hoeks, J. C. (2010). Modeling the noun phrase versus sentence coordination ambiguity in Dutch: Evidence from Surprisal Theory. In Proceedings of the 2010 Workshop on Cognitive Modeling and Computational Linguistics, ACL 2010 (pp. 72-80). Association for Computational Linguistics.

    Abstract

    This paper investigates whether surprisal theory can account for differential processing difficulty in the NP-/S-coordination ambiguity in Dutch. Surprisal is estimated using a Probabilistic Context-Free Grammar (PCFG), which is induced from an automatically annotated corpus. We find that our lexicalized surprisal model can account for the reading time data from a classic experiment on this ambiguity by Frazier (1987). We argue that syntactic and lexical probabilities, as specified in a PCFG, are sufficient to account for what is commonly referred to as an NP-coordination preference.
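
    Surprisal itself is a simple quantity: the negative log probability of a word given the sentence so far. The sketch below computes it from a toy conditional probability table in Python; the probabilities are invented for illustration, whereas the paper estimates them from a PCFG induced from an annotated corpus.

        # Illustrative sketch: word-by-word surprisal as -log2 P(word | preceding context).
        # The probabilities below are invented; the paper derives them from a PCFG.
        import math

        cond_prob = {                                    # hypothetical P(next word | context)
            ("ik", "zag"): {"de": 0.6, "een": 0.4},
            ("zag", "de"): {"man": 0.5, "vrouw": 0.3, "hond": 0.2},
        }

        def surprisal(context, word):
            return -math.log2(cond_prob[context][word])

        print(f"{surprisal(('zag', 'de'), 'hond'):.2f} bits")  # rarer continuation -> higher surprisal

    Higher surprisal is taken to predict greater processing difficulty (longer reading times), which is how such estimates are compared with the reading-time data mentioned in the abstract.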
  • Brouwer, S., Van Engen, K. J., Calandruccio, L., & Bradlow, A. R. (2012). Linguistic contributions to speech-on-speech masking for native and non-native listeners: Language familiarity and semantic content. The Journal of the Acoustical Society of America, 131(2), 1449-1464. doi:10.1121/1.3675943.

    Abstract

    This study examined whether speech-on-speech masking is sensitive to variation in the degree of similarity between the target and the masker speech. Three experiments investigated whether speech-in-speech recognition varies across different background speech languages (English vs Dutch) for both English and Dutch targets, as well as across variation in the semantic content of the background speech (meaningful vs semantically anomalous sentences), and across variation in listener status vis-à-vis the target and masker languages (native, non-native, or unfamiliar). The results showed that the more similar the target speech is to the masker speech (e.g., same vs different language, same vs different levels of semantic content), the greater the interference on speech recognition accuracy. Moreover, the listener's knowledge of the target and the background language modulate the size of the release from masking. These factors had an especially strong effect on masking effectiveness in highly unfavorable listening conditions. Overall this research provided evidence that the degree of target-masker similarity plays a significant role in speech-in-speech recognition. The results also give insight into how listeners assign their resources differently depending on whether they are listening to their first or second language.
  • Brown, A., & Gullberg, M. (2010). Changes in encoding of path of motion after acquisition of a second language. Cognitive Linguistics, 21(2), 263-286. doi:10.1515/COGL.2010.010.

    Abstract

    Languages vary typologically in their lexicalization of Path of motion (Talmy 1991). Furthermore, lexicalization patterns are argued to affect syntactic packaging at the level of the clause (e.g. Slobin 1996b) and tend to transfer from a first (L1) to a second language (L2) in second language acquisition (e.g. Cadierno 2004). From this crosslinguistic and developmental evidence, typological preferences for Path expression appear highly robust features of a first language. The current study examines the extent to which preferences for Path encoding really are as enduring as they seem by investigating (1) whether Japanese follows patterns identified for other verb-framed languages like Spanish, and (2) whether patterns established in one’s first language can change after acquisition of a second language. L1 performance of native speakers of Japanese with intermediate-level knowledge of English was compared to that of monolingual speakers of Japanese and English. Results showed that monolingual Japanese speakers followed basic lexicalization patterns typical of other verb-framed languages, but with different realizations of Path packaging within the clause. Moreover, non-monolingual Japanese speakers displayed both English- and Japanese-like patterns for lexicalization with significantly more Path information per clause than either group of monolinguals. Implications for typology and second language acquisition are discussed.
  • Brown, C. M., Hagoort, P., & Ter Keurs, M. (1999). Electrophysiological signatures of visual lexical processing: Open- and closed-class words. Journal of Cognitive Neuroscience, 11(3), 261-281.

    Abstract

    This paper presents evidence on the disputed existence of an electrophysiological marker for the lexical-categorical distinction between open- and closed-class words. Event-related brain potentials were recorded from the scalp while subjects read a story. Separate waveforms were computed for open- and closed-class words. Two aspects of the waveforms could be reliably related to vocabulary class. The first was an early negativity in the 230- to 350-msec epoch, with a bilateral anterior predominance. This negativity was elicited by open- and closed-class words alike, was not affected by word frequency or word length, and had an earlier peak latency for closed-class words. The second was a frontal slow negative shift in the 350- to 500-msec epoch, largest over the left side of the scalp. This late negativity was only elicited by closed-class words. Although the early negativity cannot serve as a qualitative marker of the open- and closed-class distinction, it does reflect the earliest electrophysiological manifestation of the availability of categorical information from the mental lexicon. These results suggest that the brain honors the distinction between open- and closed-class words, in relation to the different roles that they play in on-line sentence processing.
  • Brown, P. (2010). Cognitive anthropology. In L. Cummings (Ed.), The pragmatics encyclopedia (pp. 43-46). London: Routledge.

    Abstract

    This is an encyclopedia entry surveying anthropological approaches to cognition and culture.
  • Brown, P. (1999). Anthropologie cognitive. Anthropologie et Sociétés, 23(3), 91-119.

    Abstract

    In reaction to the dominance of universalism in the 1970s and '80s, there have recently been a number of reappraisals of the relation between language and cognition, and the field of cognitive anthropology is flourishing in several new directions in both America and Europe. This is partly due to a renewal and re-evaluation of approaches to the question of linguistic relativity associated with Whorf, and partly to the inspiration of modern developments in cognitive science. This review briefly sketches the history of cognitive anthropology and surveys current research on both sides of the Atlantic. The focus is on assessing current directions, considering in particular, by way of illustration, recent work in cultural models and on spatial language and cognition. The review concludes with an assessment of how cognitive anthropology could contribute directly both to the broader project of cognitive science and to the anthropological study of how cultural ideas and practices relate to structures and processes of human cognition.
  • Brown, P. (2014). Gestures in native Mexico and Central America. In C. Müller, A. Cienki, E. Fricke, S. Ladewig, D. McNeill, & J. Bressem (Eds.), Body -language – communication: An international handbook on multimodality in human interaction. Volume 2 (pp. 1206-1215). Berlin: Mouton de Gruyter.

    Abstract

    The systematic study of kinesics, gaze, and gestural aspects of communication in Central American cultures is a recent phenomenon, most of it focussing on the Mayan cultures of southern Mexico, Guatemala, and Belize. This article surveys ethnographic observations and research reports on bodily aspects of speaking in three domains: gaze and kinesics in social interaction, indexical pointing in adult and caregiver-child interactions, and co-speech gestures associated with “absolute” (geographically-based) systems of spatial reference. In addition, it reports how the indigenous co-speech gesture repertoire has provided the basis for developing village sign languages in the region. It is argued that studies of the embodied aspects of speech in the Mayan areas of Mexico and Central America have contributed to the typology of gestures and of spatial frames of reference. They have refined our understanding of how spatial frames of reference are invoked, communicated, and switched in conversational interaction and of the importance of co-speech gestures in understanding language use, language acquisition, and the transmission of culture-specific cognitive styles.
  • Brown, A., & Gullberg, M. (2012). Multicompetence and native speaker variation in clausal packaging in Japanese. Second Language Research, 28, 415-442. doi:10.1177/0267658312455822.

    Abstract

    This work was supported by the Max Planck Institute for Psycholinguistics and the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO; MPI 56-384, The Dynamics of Multilingual Processing, awarded to M Gullberg and P Indefrey).
  • Brown, A., & Gullberg, M. (2013). L1–L2 convergence in clausal packaging in Japanese and English. Bilingualism: Language and Cognition, 16, 477-494. doi:10.1017/S1366728912000491.

    Abstract

    This research received technical and financial support from Syracuse University, the Max Planck Institute for Psycholinguistics, and the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO; MPI 56-384, The Dynamics of Multilingual Processing, awarded to Marianne Gullberg and Peter Indefrey).
  • Brown, P. (2013). La estructura conversacional y la adquisición del lenguaje: El papel de la repetición en el habla de los adultos y niños tzeltales. In L. de León Pasquel (Ed.), Nuevos senderos en el studio de la adquisición de lenguas mesoamericanas: Estructura, narrativa y socialización (pp. 35-82). Mexico: CIESAS-UNAM.

    Abstract

    This is a translation of the Brown 1998 article in Journal of Linguistic Anthropology, 'Conversational structure and language acquisition: The role of repetition in Tzeltal adult and child speech'.

  • Brown, P., & Gaskins, S. (2014). Language acquisition and language socialization. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), Cambridge handbook of linguistic anthropology (pp. 187-226). Cambridge: Cambridge University Press.
  • Brown, P. (2010). Questions and their responses in Tzeltal. Journal of Pragmatics, 42, 2627-2648. doi:10.1016/j.pragma.2010.04.003.

    Abstract

    This paper reports the results of a study of Tzeltal questions and their responses, based on a collection of 419 question/response sequences drawn from video recordings of ‘maximally casual’ naturally occurring face-to-face interactions in a Tzeltal (Mayan) community. I describe the lexical and grammatical resources for formulating content and polar questions in Tzeltal, the different kinds of social actions that questions can be used to perform and their relative frequency in the data, and the characteristic properties of responses to questions. This is part of a large-scale comparative study of questions in 10 different languages, and we find that Tzeltal is like most others in making much more use of polar than of content questions, and in the strong tendency for confirming answers to polar questions. Tzeltal is however unusual in three respects: in the comparatively minimal use of gaze to select next speaker, in the frequency with which answers take the form of repeats, and in the complete absence of visible-only responses (e.g., nods or head-shakes). There are also some language-specific properties of question–answer sequences that reveal cultural shaping of sequencing in conversation.
  • Brown, P. (1999). Repetition [Encyclopedia entry for 'Lexicon for the New Millenium', ed. Alessandro Duranti]. Journal of Linguistic Anthropology, 9(2), 223-226. doi:10.1525/jlin.1999.9.1-2.223.

    Abstract

    This is an encyclopedia entry describing conversational and interactional uses of linguistic repetition.
  • Brown, P., Pfeiler, B., de León, L., & Pye, C. (2013). The acquisition of agreement in four Mayan languages. In E. Bavin, & S. Stoll (Eds.), The acquisition of ergativity (pp. 271-306). Amsterdam: Benjamins.

    Abstract

    This paper presents results of a comparative project documenting the development of verbal agreement inflections in children learning four different Mayan languages: K'iche', Tzeltal, Tzotzil, and Yukatek. These languages have similar inflectional paradigms: they have a generally agglutinative morphology, with transitive verbs obligatorily marked with separate cross-referencing inflections for the two core arguments ('ergative' and 'absolutive'). Verbs are also inflected for aspect and mood, and they carry a 'status suffix' which generally marks verb transitivity and mood. At a more detailed level, the four languages differ strikingly in the realization of cross-reference marking. For each language, we examined longitudinal language production data from two children at around 2;0, 2;6, 3;0, and 3;6 years of age. We relate differences in the acquisition patterns of verbal morphology in the languages to 1) the placement of affixes, 2) phonological and prosodic prominence, 3) language-specific constraints on the various forms of the affixes, and 4) consistent vs. split ergativity, and conclude that prosodic salience accounts provide the best explanation for the acquisition patterns in these four languages.

  • Brown, C. M., & Hagoort, P. (1999). The cognitive neuroscience of language: Challenges and future directions. In C. M. Brown, & P. Hagoort (Eds.), The neurocognition of language (pp. 3-14). Oxford: Oxford University Press.
