Publications

  • Bergmann, C., Ten Bosch, L., Fikkert, P., & Boves, L. (2013). A computational model to investigate assumptions in the headturn preference procedure. Frontiers in Psychology, 4: 676. doi:10.3389/fpsyg.2013.00676.

    Abstract

    In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioral differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects in the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarization and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximize cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first two assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviors observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.
  • Bergmann, C., & Cristia, A. (2018). Environmental influences on infants’ native vowel discrimination: The case of talker number in daily life. Infancy, 23(4), 484-501. doi:10.1111/infa.12232.

    Abstract

    Both quality and quantity of speech from the primary caregiver have been found to impact language development. A third aspect of the input has been largely ignored: the number of talkers who provide input. Some infants spend most of their waking time with only one person; others hear many different talkers. Even if the very same words are spoken the same number of times, the pronunciations can be more variable when several talkers pronounce them. Is language acquisition affected by the number of people who provide input? To shed light on the possible link between how many people provide input in daily life and infants’ native vowel discrimination, three age groups were tested: 4-month-olds (before attunement to native vowels), 6-month-olds (at the cusp of native vowel attunement) and 12-month-olds (well attuned to the native vowel system). No relationship was found between talker number and native vowel discrimination skills in 4- and 6-month-olds, who are overall able to discriminate the vowel contrast. At 12 months, we observe a small positive relationship, but further analyses reveal that the data are also compatible with the null hypothesis of no relationship. Implications in the context of infant language acquisition and cognitive development are discussed.
  • Bergmann, C., Ten Bosch, L., Fikkert, P., & Boves, L. (2015). Modelling the Noise-Robustness of Infants’ Word Representations: The Impact of Previous Experience. PLoS One, 10(7): e0132245. doi:10.1371/journal.pone.0132245.

    Abstract

    During language acquisition, infants frequently encounter ambient noise. We present a computational model to address whether specific acoustic processing abilities are necessary to detect known words in moderate noise—an ability attested experimentally in infants. The model implements a general purpose speech encoding and word detection procedure. Importantly, the model contains no dedicated processes for removing or cancelling out ambient noise, and it can replicate the patterns of results obtained in several infant experiments. In addition to noise, we also addressed the role of previous experience with particular target words: does the frequency of a word matter, and does it play a role whether that word has been spoken by one or multiple speakers? The simulation results show that both factors affect noise robustness. We also investigated how robust word detection is to changes in speaker identity by comparing words spoken by known versus unknown speakers during the simulated test. This factor interacted with both noise level and past experience, showing that an increase in exposure is only helpful when a familiar speaker provides the test material. Added variability proved helpful only when encountering an unknown speaker. Finally, we addressed whether infants need to recognise specific words, or whether a more parsimonious explanation of infant behaviour, which we refer to as matching, is sufficient. Recognition involves a focus of attention on a specific target word, while matching only requires finding the best correspondence of acoustic input to a known pattern in the memory. Attending to a specific target word proves to be more noise robust, but a general word matching procedure can be sufficient to simulate experimental data stemming from young infants. A change from acoustic matching to targeted recognition provides an explanation of the improvements observed in infants around their first birthday. 
In summary, we present a computational model incorporating only the processes infants might employ when hearing words in noise. Our findings show that a parsimonious interpretation of behaviour is sufficient and we offer a formal account of emerging abilities.
  • Bergmann, C., Tsuji, S., Piccinini, P. E., Lewis, M. L., Braginsky, M. B., Frank, M. C., & Cristia, A. (2018). Promoting replicability in developmental research through meta-analyses: Insights from language acquisition research. Child Development, 89(6), 1996-2009. doi:10.1111/cdev.13079.

    Abstract

    Previous work suggests key factors for replicability, a necessary feature for theory building, include statistical power and appropriate research planning. These factors are examined by analyzing a collection of 12 standardized meta-analyses on language development between birth and 5 years. With a median effect size of Cohen's d = 0.45 and typical sample size of 18 participants, most research is underpowered (range: 6%-99%; median 44%), and calculating power based on seminal publications is not a suitable strategy. Method choice can be improved, as shown in analyses on exclusion rates and effect size as a function of method. The article ends with a discussion on how to increase replicability in both language acquisition studies specifically and developmental research more generally.
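The power figures above follow from a standard noncentral-t calculation. As a rough check, the sketch below computes post-hoc power for the reported median effect size (Cohen's d = 0.45) and typical sample size (n = 18), assuming a two-sided one-sample/paired t-test; the test design is an assumption here, since the meta-analyses pool several paradigms.

```python
import numpy as np
from scipy import stats

def t_test_power(d, n, alpha=0.05):
    """Post-hoc power of a two-sided one-sample (or paired) t-test
    for effect size d (Cohen's d) and sample size n."""
    df = n - 1
    ncp = d * np.sqrt(n)                      # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value
    # Power = probability that the noncentral t lands in the rejection region.
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

# Median effect size and typical sample size from the meta-analyses:
print(round(t_test_power(d=0.45, n=18), 2))
```

Under these assumptions the result comes out close to the 44% median power reported in the abstract.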
  • Berkers, R. M. W. J., Ekman, M., van Dongen, E. V., Takashima, A., Barth, M., Paller, K. A., & Fernández, G. (2018). Cued reactivation during slow-wave sleep induces brain connectivity changes related to memory stabilization. Scientific Reports, 8: 16958. doi:10.1038/s41598-018-35287-6.

    Abstract

    Memory reprocessing following acquisition enhances memory consolidation. Specifically, neural activity during encoding is thought to be ‘replayed’ during subsequent slow-wave sleep. Such memory replay is thought to contribute to the functional reorganization of neural memory traces. In particular, memory replay may facilitate the exchange of information across brain regions by inducing a reconfiguration of connectivity across the brain. Memory reactivation can be induced by external cues through a procedure known as “targeted memory reactivation”. Here, we analysed data from a published study with auditory cues used to reactivate visual object-location memories during slow-wave sleep. We characterized effects of memory reactivation on brain network connectivity using graph-theory. We found that cue presentation during slow-wave sleep increased global network integration of occipital cortex, a visual region that was also active during retrieval of object locations. Although cueing did not have an overall beneficial effect on the retention of cued versus uncued associations, individual differences in overnight memory stabilization were related to enhanced network integration of occipital cortex. Furthermore, occipital cortex displayed enhanced connectivity with mnemonic regions, namely the hippocampus, parahippocampal gyrus, thalamus and medial prefrontal cortex during cue sound presentation. Together, these results suggest a neural mechanism where cue-induced replay during sleep increases integration of task-relevant perceptual regions with mnemonic regions. This cross-regional integration may be instrumental for the consolidation and long-term storage of enduring memories.

  • Bertamini, M., Rampone, G., Makin, A. D. J., & Jessop, A. (2019). Symmetry preference in shapes, faces, flowers and landscapes. PeerJ, 7: e7078. doi:10.7717/peerj.7078.

    Abstract

    Most people like symmetry, and symmetry has been extensively used in visual art and architecture. In this study, we compared preference for images of abstract and familiar objects in the original format or when containing perfect bilateral symmetry. We created pairs of images for different categories: male faces, female faces, polygons, smoothed versions of the polygons, flowers, and landscapes. This design allows us to compare symmetry preference in different domains. Each observer saw all categories randomly interleaved but saw only one of the two images in a pair. After recording preference, we recorded a rating of how salient the symmetry was for each image, and measured how quickly observers could decide which of the two images in a pair was symmetrical. Results reveal a general preference for symmetry in the case of shapes and faces. For landscapes, natural (no perfect symmetry) images were preferred. Correlations with judgments of saliency were present but generally low, and for landscapes the salience of symmetry was negatively related to preference. However, even within the category where symmetry was not liked (landscapes), the separate analysis of original and modified stimuli showed an interesting pattern: Salience of symmetry was correlated positively (artificial) or negatively (original) with preference, suggesting different effects of symmetry within the same class of stimuli based on context and categorization.

  • Bielczyk, N. Z., Piskała, K., Płomecka, M., Radziński, P., Todorova, L., & Foryś, U. (2019). Time-delay model of perceptual decision making in cortical networks. PLoS One, 14: e0211885. doi:10.1371/journal.pone.0211885.

    Abstract

    It is known that cortical networks operate on the edge of instability, in which oscillations can appear. However, the influence of this dynamic regime on performance in decision making is not well understood. In this work, we propose a population model of decision making based on a winner-take-all mechanism. Using this model, we demonstrate that local slow inhibition within the competing neuronal populations can lead to a Hopf bifurcation. At the edge of instability, the system exhibits ambiguity in the decision making, which can account for the perceptual switches observed in human experiments. We further validate this model with fMRI datasets from an experiment on semantic priming in perception of ambivalent (male versus female) faces. We demonstrate that the model can correctly predict the drop in the variance of the BOLD signal within the Superior Parietal Area and Inferior Parietal Area while watching ambiguous visual stimuli.

  • Blackwell, N. L., Perlman, M., & Fox Tree, J. E. (2015). Quotation as a multimodal construction. Journal of Pragmatics, 81, 1-7. doi:10.1016/j.pragma.2015.03.004.

    Abstract

    Quotations are a means to report a broad range of events in addition to speech, and often involve both vocal and bodily demonstration. The present study examined the use of quotation to report a variety of multisensory events (i.e., containing salient visible and audible elements) as participants watched and then described a set of video clips including human speech and animal vocalizations. We examined the relationship between demonstrations conveyed through the vocal versus bodily modality, comparing them across four common quotation devices (be like, go, say, and zero quotatives), as well as across direct and non-direct quotations and retellings. We found that direct quotations involved high levels of both vocal and bodily demonstration, while non-direct quotations involved lower levels in both these channels. In addition, there was a strong positive correlation between vocal and bodily demonstration for direct quotation. This result supports a Multimodal Hypothesis where information from the two channels arises from one central concept.
  • Blasi, D. E., Moran, S., Moisik, S. R., Widmer, P., Dediu, D., & Bickel, B. (2019). Human sound systems are shaped by post-Neolithic changes in bite configuration. Science, 363(6432): eaav3218. doi:10.1126/science.aav3218.

    Abstract

    Linguistic diversity, now and in the past, is widely regarded to be independent of biological changes that took place after the emergence of Homo sapiens. We show converging evidence from paleoanthropology, speech biomechanics, ethnography, and historical linguistics that labiodental sounds (such as “f” and “v”) were innovated after the Neolithic. Changes in diet attributable to food-processing technologies modified the human bite from an edge-to-edge configuration to one that preserves adolescent overbite and overjet into adulthood. This change favored the emergence and maintenance of labiodentals. Our findings suggest that language is shaped not only by the contingencies of its history, but also by culturally induced changes in human biology.

  • Blumstein, S., & Cutler, A. (2003). Speech perception: Phonetic aspects. In W. Frawley (Ed.), International encyclopaedia of linguistics (pp. 151-154). Oxford: Oxford University Press.
  • Blythe, J. (2018). Genesis of the trinity: The convergent evolution of trirelational kinterms. In P. McConvell, & P. Kelly (Eds.), Skin, kin and clan: The dynamics of social categories in Indigenous Australia (pp. 431-471). Canberra: ANU EPress.
  • Blythe, J. (2015). Other-initiated repair in Murrinh-Patha. Open Linguistics, 1, 283-308. doi:10.1515/opli-2015-0003.

    Abstract

    The range of linguistic structures and interactional practices associated with other-initiated repair (OIR) is surveyed for the Northern Australian language Murrinh-Patha. By drawing on a video corpus of informal Murrinh-Patha conversation, the OIR formats are compared in terms of their utility and versatility. Certain “restricted” formats have semantic properties that point to prior trouble source items. While these make the restricted repair initiators more specialised, the “open” formats are less well resourced semantically, which makes them more versatile. They tend to be used when the prior talk is potentially problematic in more ways than one. The open formats (especially thangku, “what?”) tend to solicit repair operations on each potential source of trouble, such that the resultant repair solution improves upon the trouble-source turn in several ways.
  • Blythe, J. (2013). Preference organization driving structuration: Evidence from Australian Aboriginal interaction for pragmatically motivated grammaticalization. Language, 89(4), 883-919.
  • Bocanegra, B. R., Poletiek, F. H., Ftitache, B., & Clark, A. (2019). Intelligent problem-solvers externalize cognitive operations. Nature Human Behaviour, 3, 136-142. doi:10.1038/s41562-018-0509-y.

    Abstract

    Humans are nature’s most intelligent and prolific users of external props and aids (such as written texts, slide-rules and software packages). Here we introduce a method for investigating how people make active use of their task environment during problem-solving and apply this approach to the non-verbal Raven Advanced Progressive Matrices test for fluid intelligence. We designed a click-and-drag version of the Raven test in which participants could create different external spatial configurations while solving the puzzles. In our first study, we observed that the click-and-drag test was better than the conventional static test at predicting academic achievement of university students. This pattern of results was partially replicated in a novel sample. Importantly, environment-altering actions were clustered in between periods of apparent inactivity, suggesting that problem-solvers were delicately balancing the execution of internal and external cognitive operations. We observed a systematic relationship between this critical phasic temporal signature and improved test performance. Our approach is widely applicable and offers an opportunity to quantitatively assess a powerful, although understudied, feature of human intelligence: our ability to use external objects, props and aids to solve complex problems.
  • Bock, K., Irwin, D. E., Davidson, D. J., & Levelt, W. J. M. (2003). Minding the clock. Journal of Memory and Language, 48, 653-685. doi:10.1016/S0749-596X(03)00007-X.

    Abstract

    Telling time is an exercise in coordinating language production with visual perception. By coupling different ways of saying times with different ways of seeing them, the performance of time-telling can be used to track cognitive transformations from visual to verbal information in connected speech. To accomplish this, we used eyetracking measures along with measures of speech timing during the production of time expressions. Our findings suggest that an effective interface between what has been seen and what is to be said can be constructed within 300 ms. This interface underpins a preverbal plan or message that appears to guide a comparatively slow, strongly incremental formulation of phrases. The results begin to trace the divide between seeing and saying, or thinking and speaking, that must be bridged during the creation of even the most prosaic utterances of a language.
  • Bode, S., Feuerriegel, D., Bennett, D., & Alday, P. M. (2019). The Decision Decoding ToolBOX (DDTBOX) -- A Multivariate Pattern Analysis Toolbox for Event-Related Potentials. Neuroinformatics, 17(1), 27-42. doi:10.1007/s12021-018-9375-z.

    Abstract

    In recent years, neuroimaging research in cognitive neuroscience has increasingly used multivariate pattern analysis (MVPA) to investigate higher cognitive functions. Here we present DDTBOX, an open-source MVPA toolbox for electroencephalography (EEG) data. DDTBOX runs under MATLAB and is well integrated with the EEGLAB/ERPLAB and Fieldtrip toolboxes (Delorme and Makeig 2004; Lopez-Calderon and Luck 2014; Oostenveld et al. 2011). It trains support vector machines (SVMs) on patterns of event-related potential (ERP) amplitude data, following or preceding an event of interest, for classification or regression of experimental variables. These amplitude patterns can be extracted across space/electrodes (spatial decoding), time (temporal decoding), or both (spatiotemporal decoding). DDTBOX can also extract SVM feature weights, generate empirical chance distributions based on shuffled-labels decoding for group-level statistical testing, provide estimates of the prevalence of decodable information in the population, and perform a variety of corrections for multiple comparisons. It also includes plotting functions for single subject and group results. DDTBOX complements conventional analyses of ERP components, as subtle multivariate patterns can be detected that would be overlooked in standard analyses. It further allows for a more explorative search for information when no ERP component is known to be specifically linked to a cognitive process of interest. In summary, DDTBOX is an easy-to-use and open-source toolbox that allows for characterising the time-course of information related to various perceptual and cognitive processes. It can be applied to data from a large number of experimental paradigms and could therefore be a valuable tool for the neuroimaging community.
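DDTBOX itself runs under MATLAB, but the spatial-decoding idea it implements can be illustrated with a short Python analogue on synthetic data (scikit-learn stands in for the toolbox; the data, effect size, and dimensions below are invented for illustration):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic "ERP" data: trials x electrodes x time samples, with a weak
# condition effect injected into a few channels and time points.
n_trials, n_channels, n_times = 100, 32, 50
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)   # two experimental conditions
X[y == 1, :8, 20:30] += 0.5             # the injected effect

# Temporal decoding: at each time point, train a linear SVM on the spatial
# pattern across electrodes and score it with cross-validation.
scores = [
    cross_val_score(SVC(kernel="linear"), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
]
print(max(scores))
```

Accuracy hovers near chance (0.5) outside the effect window and rises within it, which is the kind of time-resolved information profile DDTBOX reports for real ERP data.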
  • De Boer, B., & Thompson, B. (2018). Biology-culture co-evolution in finite populations. Scientific Reports, 8: 1209. doi:10.1038/s41598-017-18928-0.

    Abstract

    Language is the result of two concurrent evolutionary processes: Biological and cultural inheritance. An influential evolutionary hypothesis known as the moving target problem implies inherent limitations on the interactions between our two inheritance streams that result from a difference in pace: The speed of cultural evolution is thought to rule out cognitive adaptation to culturally evolving aspects of language. We examine this hypothesis formally by casting it as a problem of adaptation in time-varying environments. We present a mathematical model of biology-culture co-evolution in finite populations: A generalisation of the Moran process, treating co-evolution as coupled non-independent Markov processes, providing a general formulation of the moving target hypothesis in precise probabilistic terms. Rapidly varying culture decreases the probability of biological adaptation. However, we show that this effect declines with population size and with stronger links between biology and culture: In realistically sized finite populations, stochastic effects can carry cognitive specialisations to fixation in the face of variable culture, especially if the effects of those specialisations are amplified through cultural evolution. These results support the view that language arises from interactions between our two major inheritance streams, rather than from one primary evolutionary process that dominates another.

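The baseline dynamics that the paper generalises can be sketched in a few lines. Below is the classic single-type Moran process (constant relative fitness, no cultural coupling, which is where the paper's contribution lies); the simulated fixation probability of a lone mutant is checked against the closed form ρ = (1 − 1/r)/(1 − r^(−N)).

```python
import numpy as np

rng = np.random.default_rng(1)

def moran_fixation(N, r, n_runs=2000):
    """Fraction of runs in which a single mutant with relative fitness r
    reaches fixation in a population of size N (classic Moran process)."""
    fixed = 0
    for _ in range(n_runs):
        i = 1                         # current number of mutants
        while 0 < i < N:
            # One individual reproduces (fitness-proportional choice)...
            birth_is_mutant = rng.random() < i * r / (i * r + (N - i))
            # ...and the offspring replaces a uniformly chosen individual.
            death_is_mutant = rng.random() < i / N
            i += birth_is_mutant - death_is_mutant
        fixed += (i == N)
    return fixed / n_runs

N, r = 20, 1.1
est = moran_fixation(N, r)
exact = (1 - 1 / r) / (1 - r ** -N)
print(round(est, 3), round(exact, 3))
```

The paper's model couples such a process to a concurrently evolving cultural state, making the mutant's effective fitness time-varying; this fixed-fitness version is only the baseline against which that coupling is compared.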
  • De Boer, M., Toni, I., & Willems, R. M. (2013). What drives successful verbal communication? Frontiers in Human Neuroscience, 7: 622. doi:10.3389/fnhum.2013.00622.

    Abstract

    There is a vast amount of potential mappings between behaviors and intentions in communication: a behavior can indicate a multitude of different intentions, and the same intention can be communicated with a variety of behaviors. Humans routinely solve these many-to-many referential problems when producing utterances for an Addressee. This ability might rely on social cognitive skills, for instance, the ability to manipulate unobservable summary variables to disambiguate ambiguous behavior of other agents (“mentalizing”) and the drive to invest resources into changing and understanding the mental state of other agents (“communicative motivation”). Alternatively, the ambiguities of verbal communicative interactions might be solved by general-purpose cognitive abilities that process cues that are incidentally associated with the communicative interaction. In this study, we assess these possibilities by testing which cognitive traits account for communicative success during a verbal referential task. Cognitive traits were assessed with psychometric scores quantifying motivation, mentalizing abilities, and general-purpose cognitive abilities, taxing abstract visuo-spatial abilities. Communicative abilities of participants were assessed by using an on-line interactive task that required a speaker to verbally convey a concept to an Addressee. The communicative success of the utterances was quantified by measuring how frequently a number of Evaluators would infer the correct concept. Speakers with high motivational and general-purpose cognitive abilities generated utterances that were more easily interpreted. These findings extend to the domain of verbal communication the notion that motivational and cognitive factors influence the human ability to rapidly converge on shared communicative innovations.
  • Boersma, M., Kemner, C., de Reus, M. A., Collin, G., Snijders, T. M., Hofman, D., Buitelaar, J. K., Stam, C. J., & van den Heuvel, M. P. (2013). Disrupted functional brain networks in autistic toddlers. Brain Connectivity, 3(1), 41-49. doi:10.1089/brain.2012.0127.

    Abstract

    Communication and integration of information between brain regions plays a key role in healthy brain function. Conversely, disruption in brain communication may lead to cognitive and behavioral problems. Autism is a neurodevelopmental disorder that is characterized by impaired social interactions and aberrant basic information processing. Aberrant brain connectivity patterns have indeed been hypothesized to be a key neural underpinning of autism. In this study, graph analytical tools are used to explore the possible deviant functional brain network organization in autism at a very early stage of brain development. Electroencephalography (EEG) recordings in 12 toddlers with autism (mean age 3.5 years) and 19 control subjects were used to assess interregional functional brain connectivity, with functional brain networks constructed at the level of temporal synchronization between brain regions underlying the EEG electrodes. Children with autism showed a significantly increased normalized path length and reduced normalized clustering, suggesting a reduced global communication capacity already during early brain development. In addition, whole brain connectivity was found to be significantly reduced in these young patients suggesting an overall under-connectivity of functional brain networks in autism. Our findings support the hypothesis of abnormal neural communication in autism, with deviating effects already present at the early stages of brain development
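The two graph metrics at the core of this study, normalized clustering and normalized path length, can be computed with networkx; the toy small-world graph below merely stands in for the EEG synchronization networks (the graph, its size, and the reference procedure are illustrative assumptions, not the study's pipeline):

```python
import numpy as np
import networkx as nx

# Toy stand-in for a functional connectivity graph.
G = nx.connected_watts_strogatz_graph(n=30, k=4, p=0.1, seed=2)

C = nx.average_clustering(G)              # raw clustering coefficient
L = nx.average_shortest_path_length(G)    # raw characteristic path length

# Normalize against degree-preserving randomized reference graphs,
# analogous to the normalized metrics reported in the paper.
rand_C, rand_L = [], []
for s in range(10):
    R = nx.random_reference(G, niter=5, seed=s)
    rand_C.append(nx.average_clustering(R))
    rand_L.append(nx.average_shortest_path_length(R))

gamma = C / np.mean(rand_C)   # normalized clustering (>1: locally clustered)
lam = L / np.mean(rand_L)     # normalized path length (~1: efficient routing)
print(round(gamma, 2), round(lam, 2))
```

In these terms, the autism group's pattern of increased normalized path length and reduced normalized clustering corresponds to lam rising and gamma falling relative to controls.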
  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2015). Conversational interaction in the scanner: Mentalizing during language processing as revealed by MEG. Cerebral Cortex, 25(9), 3219-3234. doi:10.1093/cercor/bhu116.

    Abstract

    Humans are especially good at taking another’s perspective — representing what others might be thinking or experiencing. This “mentalizing” capacity is apparent in everyday human interactions and conversations. We investigated its neural basis using magnetoencephalography. We focused on whether mentalizing was engaged spontaneously and routinely to understand an utterance’s meaning or largely on-demand, to restore "common ground" when expectations were violated. Participants conversed with one of two confederate speakers and established tacit agreements about objects’ names. In a subsequent “test” phase, some of these agreements were violated by either the same or a different speaker. Our analysis of the neural processing of test phase utterances revealed recruitment of neural circuits associated with language (temporal cortex), episodic memory (e.g., medial temporal lobe), and mentalizing (temporo-parietal junction and ventro-medial prefrontal cortex). Theta oscillations (3–7 Hz) were modulated most prominently, and we observed phase coupling between functionally distinct neural circuits. The episodic memory and language circuits were recruited in anticipation of upcoming referring expressions, suggesting that context-sensitive predictions were spontaneously generated. In contrast, the mentalizing areas were recruited on-demand, as a means for detecting and resolving perceived pragmatic anomalies, with little evidence they were activated to make partner-specific predictions about upcoming linguistic utterances.
  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2013). "Are we still talking about the same thing?" MEG reveals perspective-taking in response to pragmatic violations, but not in anticipation. In M. Knauff, N. Pauen, I. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 215-220). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0066/index.html.

    Abstract

    The current study investigates whether mentalizing, or taking the perspective of your interlocutor, plays an essential role throughout a conversation or whether it is mostly used in reaction to misunderstandings. This study is the first to use a brain-imaging method, MEG, to answer this question. In a first phase of the experiment, MEG participants interacted "live" with a confederate who set naming precedents for certain pictures. In a later phase, these precedents were sometimes broken by a speaker who named the same picture in a different way. This could be done by the same speaker, who set the precedent, or by a different speaker. Source analysis of MEG data showed that in the 800 ms before the naming, when the picture was already on the screen, episodic memory and language areas were activated, but no mentalizing areas, suggesting that the speaker's naming intentions were not anticipated by the listener on the basis of shared experiences. Mentalizing areas only became activated after the same speaker had broken a precedent, which we interpret as a reaction to the violation of conversational pragmatics.
  • Bögels, S., & Torreira, F. (2015). Listeners use intonational phrase boundaries to project turn ends in spoken interaction. Journal of phonetics, 52, 46-57. doi:10.1016/j.wocn.2015.04.004.

    Abstract

    In conversation, turn transitions between speakers often occur smoothly, usually within a time window of a few hundred milliseconds. It has been argued, on the basis of a button-press experiment [De Ruiter, J. P., Mitterer, H., & Enfield, N. J. (2006). Projecting the end of a speaker's turn: A cognitive cornerstone of conversation. Language, 82(3):515–535], that participants in conversation rely mainly on lexico-syntactic information when timing and producing their turns, and that they do not need to make use of intonational cues to achieve smooth transitions and avoid overlaps. In contrast to this view, but in line with previous observational studies, our results from a dialogue task and a button-press task involving questions and answers indicate that the identification of the end of intonational phrases is necessary for smooth turn-taking. In both tasks, participants never responded to questions (i.e., gave an answer or pressed a button to indicate a turn end) at turn-internal points of syntactic completion in the absence of an intonational phrase boundary. Moreover, in the button-press task, they often pressed the button at the same point of syntactic completion when the final word of an intonational phrase was cross-spliced at that location. Furthermore, truncated stimuli ending in a syntactic completion point but lacking an intonational phrase boundary led to significantly delayed button presses. In light of these results, we argue that earlier claims that intonation is not necessary for correct turn-end projection are misguided, and that research on turn-taking should continue to consider intonation as a source of turn-end cues along with other linguistic and communicative phenomena.
  • Bögels, S., Magyari, L., & Levinson, S. C. (2015). Neural signatures of response planning occur midway through an incoming question in conversation. Scientific Reports, 5: 12881. doi:10.1038/srep12881.

    Abstract

    A striking puzzle about language use in everyday conversation is that turn-taking latencies are usually very short, whereas planning language production takes much longer. This implies overlap between language comprehension and production processes, but the nature and extent of such overlap has never been studied directly. Combining an interactive quiz paradigm with EEG measurements in an innovative way, we show that production planning processes start as soon as possible, that is, within half a second after the answer to a question can be retrieved (up to several seconds before the end of the question). Localization of ERP data shows early activation even of brain areas related to late stages of production planning (e.g., syllabification). Finally, oscillation results suggest an attention switch from comprehension to production around the same time frame. This perspective from interactive language use throws new light on the performance characteristics that language competence involves.
  • Bögels, S., Kendrick, K. H., & Levinson, S. C. (2015). Never say no… How the brain interprets the pregnant pause in conversation. PLoS One, 10(12): e0145474. doi:10.1371/journal.pone.0145474.

    Abstract

    In conversation, negative responses to invitations, requests, offers, and the like are more likely to occur with a delay – conversation analysts talk of them as dispreferred. Here we examine the contrastive cognitive load ‘yes’ and ‘no’ responses make, either when relatively fast (300 ms after question offset) or delayed (1000 ms). Participants heard short dialogues contrasting in speed and valence of response while having their EEG recorded. We found that a fast ‘no’ evoked an N400-effect relative to a fast ‘yes’; however, this contrast disappeared in the delayed responses. ‘No’ responses, however, elicited a late frontal positivity both when they were fast and when they were delayed. We interpret these results as follows: a fast ‘no’ evoked an N400 because an immediate response is expected to be positive – this effect disappears as the response time lengthens because now in ordinary conversation the probability of a ‘no’ has increased. However, regardless of the latency of response, a ‘no’ response is associated with a late positivity, since a negative response is always dispreferred. Together these results show that negative responses to social actions exact a higher cognitive load, especially when least expected, in immediate responses.

  • Bögels, S., Schriefers, H., Vonk, W., Chwilla, D., & Kerkhofs, R. (2013). Processing consequences of superfluous and missing prosodic breaks in auditory sentence comprehension. Neuropsychologia, 51, 2715-2728. doi:10.1016/j.neuropsychologia.2013.09.008.

    Abstract

    This ERP study investigates whether a superfluous prosodic break (i.e., a prosodic break that does not coincide with a syntactic break) has more severe processing consequences during auditory sentence comprehension than a missing prosodic break (i.e., the absence of a prosodic break at the position of a syntactic break). Participants listened to temporarily ambiguous sentences involving a prosody-syntax match or mismatch. The disambiguation of these sentences was always lexical in nature in the present experiment. This contrasts with a related study by Pauker, Itzhak, Baum, and Steinhauer (2011), where the disambiguation was of a lexical type for missing prosodic breaks and of a prosodic type for superfluous prosodic breaks. Our results converge with those of Pauker et al.: superfluous prosodic breaks lead to more severe processing problems than missing prosodic breaks. Importantly, the present results extend those of Pauker et al. by showing that this holds when the disambiguation is always lexical in nature. Furthermore, our results show that the way listeners use prosody can change over the course of the experiment, which has consequences for future studies.
  • Bögels, S., Casillas, M., & Levinson, S. C. (2018). Planning versus comprehension in turn-taking: Fast responders show reduced anticipatory processing of the question. Neuropsychologia, 109, 295-310. doi:10.1016/j.neuropsychologia.2017.12.028.

    Abstract

    Rapid response latencies in conversation suggest that responders start planning before the ongoing turn is finished. Indeed, an earlier EEG study suggests that listeners start planning their responses to questions as soon as they can (Bögels, S., Magyari, L., & Levinson, S. C. (2015). Neural signatures of response planning occur midway through an incoming question in conversation. Scientific Reports, 5, 12881). The present study aimed to (1) replicate this early planning effect and (2) investigate whether such early response planning incurs a cost on participants’ concurrent comprehension of the ongoing turn. During the experiment participants answered questions from a confederate partner. To address aim (1), the questions were designed such that response planning could start either early or late in the turn. Our results largely replicate Bögels et al. (2015) showing a large positive ERP effect and an oscillatory alpha/beta reduction right after participants could have first started planning their verbal response, again suggesting an early start of response planning. To address aim (2), the confederate's questions also contained either an expected word or an unexpected one to elicit a differential N400 effect, either before or after the start of response planning. We hypothesized an attenuated N400 effect after response planning had started. In contrast, the N400 effects before and after planning did not differ. There was, however, a positive correlation between participants' response time and their N400 effect size after planning had started; quick responders showed a smaller N400 effect, suggesting reduced attention to comprehension and possibly reduced anticipatory processing. We conclude that early response planning can indeed impact comprehension processing.

  • Bohnemeyer, J. (2003). The unique vector constraint: The impact of direction changes on the linguistic segmentation of motion events. In E. v. d. Zee, & J. Slack (Eds.), Axes and vectors in language and space (pp. 86-110). Oxford: Oxford University Press.
  • Bohnemeyer, J. (2003). Invisible time lines in the fabric of events: Temporal coherence in Yukatek narratives. Journal of Linguistic Anthropology, 13(2), 139-162. doi:10.1525/jlin.2003.13.2.139.

    Abstract

    This article examines how narratives are structured in a language in which event order is largely not coded. Yucatec Maya lacks both tense inflections and temporal connectives corresponding to English after and before. It is shown that the coding of events in Yucatec narratives is subject to a strict iconicity constraint within paragraph boundaries. Aspectual viewpoint shifting is used to reconcile iconicity preservation with the requirements of a more flexible narrative structure.
  • Bohnemeyer, J. (2003). Fictive motion questionnaire. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 81-85). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877601.

    Abstract

    Fictive Motion is the metaphoric use of path relators in the expression of spatial relations or configurations that are static, or at any rate do not in any obvious way involve physical entities moving in real space. The goal is to study the expression of such relations or configurations in the target language, with an eye particularly on whether these expressions exclusively/preferably/possibly involve motion verbs and/or path relators, i.e., Fictive Motion. Section 2 gives Talmy’s (2000: ch. 2) phenomenology of Fictive Motion construals. The researcher’s task is to “distill” the intended spatial relations/configurations from Talmy’s description of the particular Fictive Motion metaphors and elicit as many different examples of the relations/configurations as (s)he deems necessary to obtain a basic sense of whether and how much Fictive Motion the target language offers or prescribes for the encoding of the particular type of relation/configuration. As a first stab, the researcher may try to elicit natural translations of culturally appropriate adaptations of the examples Talmy provides with each type of Fictive Motion metaphor.
  • Bohnemeyer, J., Burenhult, N., Levinson, S. C., & Enfield, N. J. (2003). Landscape terms and place names questionnaire. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 60-63). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877604.

    Abstract

    Landscape terms reflect the relationship between geographic reality and human cognition. Are ‘mountains’, ‘rivers’, ‘lakes’ and the like universally recognised in languages as naturally salient objects to be named? The landscape subproject is concerned with the interrelation between language, cognition and geography. Specifically, it investigates issues relating to how landforms are categorised cross-linguistically as well as the characteristics of place naming.
  • Bone, D., Ramanarayanan, V., Narayanan, S., Hoedemaker, R. S., & Gordon, P. C. (2013). Analyzing eye-voice coordination in rapid automatized naming. In F. Bimbot, C. Cerisara, G. Fougeron, L. Gravier, L. Lamel, F. Pelligrino, & P. Perrier (Eds.), INTERSPEECH-2013: 14th Annual Conference of the International Speech Communication Association (pp. 2425-2429). ISCA Archive. Retrieved from http://www.isca-speech.org/archive/interspeech_2013/i13_2425.html.

    Abstract

    Rapid Automatized Naming (RAN) is a powerful tool for predicting future reading skill. A person’s ability to quickly name symbols as they scan a table is related to higher-level reading proficiency in adults and is predictive of future literacy gains in children. However, noticeable differences are present in the strategies or patterns within groups having similar task completion times. Thus, a further stratification of RAN dynamics may lead to better characterization and later intervention to support reading skill acquisition. In this work, we analyze the dynamics of the eyes, voice, and the coordination between the two during performance. It is shown that fast performers are more similar to each other than to slow performers in their patterns, but not vice versa. Further insights are provided about the patterns of more proficient subjects. For instance, fast performers tended to exhibit smoother behavior contours, suggesting a more stable perception-production process.
  • Bønnelykke, K., Matheson, M. C., Pers, T. H., Granell, R., Strachan, D. P., Alves, A. C., Linneberg, A., Curtin, J. A., Warrington, N. M., Standl, M., Kerkhof, M., Jonsdottir, I., Bukvic, B. K., Kaakinen, M., Sleimann, P., Thorleifsson, G., Thorsteinsdottir, U., Schramm, K., Baltic, S., Kreiner-Møller, E., Simpson, A., St Pourcain, B., Coin, L., Hui, J., Walters, E. H., Tiesler, C. M. T., Duffy, D. L., Jones, G., Ring, S. M., McArdle, W. L., Price, L., Robertson, C. F., Pekkanen, J., Tang, C. S., Thiering, E., Montgomery, G. W., Hartikainen, A.-L., Dharmage, S. C., Husemoen, L. L., Herder, C., Kemp, J. P., Elliot, P., James, A., Waldenberger, M., Abramson, M. J., Fairfax, B. P., Knight, J. C., Gupta, R., Thompson, P. J., Holt, P., Sly, P., Hirschhorn, J. N., Blekic, M., Weidinger, S., Hakonarsson, H., Stefansson, K., Heinrich, J., Postma, D. S., Custovic, A., Pennell, C. E., Jarvelin, M.-R., Koppelman, G. H., Timpson, N., Ferreira, M. A., Bisgaard, H., Henderson, A. J., Australian Asthma Genetics Consortium (AAGC), & EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium (2013). Meta-analysis of genome-wide association studies identifies ten loci influencing allergic sensitization. Nature Genetics, 45(8), 902-906. doi:10.1038/ng.2694.

    Abstract

    Allergen-specific immunoglobulin E (present in allergic sensitization) has a central role in the pathogenesis of allergic disease. We performed the first large-scale genome-wide association study (GWAS) of allergic sensitization in 5,789 affected individuals and 10,056 controls and followed up the top SNP at each of 26 loci in 6,114 affected individuals and 9,920 controls. We increased the number of susceptibility loci with genome-wide significant association with allergic sensitization from three to ten, including SNPs in or near TLR6, C11orf30, STAT6, SLC25A46, HLA-DQB1, IL1RL1, LPP, MYC, IL2 and HLA-B. All the top SNPs were associated with allergic symptoms in an independent study. Risk-associated variants at these ten loci were estimated to account for at least 25% of allergic sensitization and allergic rhinitis. Understanding the molecular mechanisms underlying these associations may provide new insights into the etiology of allergic disease.
  • Bornkessel-Schlesewsky, I., Alday, P. M., Kretzschmar, F., Grewe, T., Gumpert, M., Schumacher, P. B., & Schlesewsky, M. (2015). Age-related changes in predictive capacity versus internal model adaptability: Electrophysiological evidence that individual differences outweigh effects of age. Frontiers in Aging Neuroscience, 7: 217. doi:10.3389/fnagi.2015.00217.

    Abstract

    Hierarchical predictive coding has been identified as a possible unifying principle of brain function, and recent work in cognitive neuroscience has examined how it may be affected by age-related changes. Using language comprehension as a test case, the present study aimed to dissociate age-related changes in prediction generation versus internal model adaptation following a prediction error. Event-related brain potentials (ERPs) were measured in a group of older adults (60–81 years; n = 40) as they read sentences of the form “The opposite of black is white/yellow/nice.” Replicating previous work in young adults, results showed a target-related P300 for the expected antonym (“white”; an effect assumed to reflect a prediction match), and a graded N400 effect for the two incongruous conditions (i.e., a larger N400 amplitude for the incongruous continuation not related to the expected antonym, “nice,” versus the incongruous associated condition, “yellow”). These effects were followed by a late positivity, again with a larger amplitude in the incongruous non-associated versus incongruous associated condition. Analyses using linear mixed-effects models showed that the target-related P300 effect and the N400 effect for the incongruous non-associated condition were both modulated by age, thus suggesting that age-related changes affect both prediction generation and model adaptation. However, effects of age were outweighed by the interindividual variability of ERP responses, as reflected in the high proportion of variance captured by the inclusion of by-condition random slopes for participants and items. We thus argue that – at both a neurophysiological and a functional level – the notion of general differences between language processing in young and older adults may only be of limited use, and that future research should seek to better understand the causes of interindividual variability in the ERP responses of older adults and its relation to cognitive performance.
  • Bosker, H. R., Tjiong, V., Quené, H., Sanders, T., & De Jong, N. H. (2015). Both native and non-native disfluencies trigger listeners' attention. In Disfluency in Spontaneous Speech: DISS 2015: An ICPhS Satellite Meeting. Edinburgh: DISS2015.

    Abstract

    Disfluencies, such as uh and uhm, are known to help the listener in speech comprehension. For instance, disfluencies may elicit prediction of less accessible referents and may trigger listeners’ attention to the following word. However, recent work suggests differential processing of disfluencies in native and non-native speech. The current study investigated whether the beneficial effects of disfluencies on listeners’ attention are modulated by the (non-)native identity of the speaker. Using the Change Detection Paradigm, we investigated listeners’ recall accuracy for words presented in disfluent and fluent contexts, in native and non-native speech. We observed beneficial effects of both native and non-native disfluencies on listeners’ recall accuracy, suggesting that native and non-native disfluencies trigger listeners’ attention in a similar fashion.
  • Bosker, H. R., Van Os, M., Does, R., & Van Bergen, G. (2019). Counting 'uhm's: how tracking the distribution of native and non-native disfluencies influences online language comprehension. Journal of Memory and Language, 106, 189-202. doi:10.1016/j.jml.2019.02.006.

    Abstract

    Disfluencies, like 'uh', have been shown to help listeners anticipate reference to low-frequency words. The associative account of this 'disfluency bias' proposes that listeners learn to associate disfluency with low-frequency referents based on prior exposure to non-arbitrary disfluency distributions (i.e., greater probability of low-frequency words after disfluencies). However, there is limited evidence for listeners actually tracking disfluency distributions online. The present experiments are the first to show that adult listeners, exposed to a typical or more atypical disfluency distribution (i.e., hearing a talker unexpectedly say uh before high-frequency words), flexibly adjust their predictive strategies to the disfluency distribution at hand (e.g., learn to predict high-frequency referents after disfluency). However, when listeners were presented with the same atypical disfluency distribution but produced by a non-native speaker, no adjustment was observed. This suggests pragmatic inferences can modulate distributional learning, revealing the flexibility of, and constraints on, distributional learning in incremental language comprehension.
  • Bosker, H. R., & Ghitza, O. (2018). Entrained theta oscillations guide perception of subsequent speech: Behavioral evidence from rate normalization. Language, Cognition and Neuroscience, 33(8), 955-967. doi:10.1080/23273798.2018.1439179.

    Abstract

    This psychoacoustic study provides behavioral evidence that neural entrainment in the theta range (3-9 Hz) causally shapes speech perception. Adopting the ‘rate normalization’ paradigm (presenting compressed carrier sentences followed by uncompressed target words), we show that uniform compression of a speech carrier to syllable rates inside the theta range influences perception of subsequent uncompressed targets, but compression outside theta range does not. However, the influence of carriers – compressed outside theta range – on target perception is salvaged when carriers are ‘repackaged’ to have a packet rate inside theta. This suggests that the brain can only successfully entrain to syllable/packet rates within theta range, with a causal influence on the perception of subsequent speech, in line with recent neuroimaging data. Thus, this study points to a central role for sustained theta entrainment in rate normalization and contributes to our understanding of the functional role of brain oscillations in speech perception.
  • Bosker, H. R. (2013). Juncture (prosodic). In G. Khan (Ed.), Encyclopedia of Hebrew Language and Linguistics (pp. 432-434). Leiden: Brill.

    Abstract

    Prosodic juncture concerns the compartmentalization and partitioning of syntactic entities in spoken discourse by means of prosody. It has been argued that the Intonation Unit, defined by internal criteria and prosodic boundary phenomena (e.g., final lengthening, pitch reset, pauses), encapsulates the basic structural unit of spoken Modern Hebrew.
  • Bosker, H. R., & Reinisch, E. (2015). Normalization for speechrate in native and nonnative speech. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Speech perception involves a number of processes that deal with variation in the speech signal. One such process is normalization for speechrate: local temporal cues are perceived relative to the rate in the surrounding context. It is as yet unclear whether and how this perceptual effect interacts with higher level impressions of rate, such as a speaker’s nonnative identity. Nonnative speakers typically speak more slowly than natives, an experience that listeners take into account when explicitly judging the rate of nonnative speech. The present study investigated whether this is also reflected in implicit rate normalization. Results indicate that nonnative speech is implicitly perceived as faster than temporally-matched native speech, suggesting that the additional cognitive load of listening to an accent speeds up rate perception. Therefore, rate perception in speech is not dependent on syllable durations alone but also on the ease of processing of the temporal signal.
  • Bosker, H. R. (2018). Putting Laurel and Yanny in context. The Journal of the Acoustical Society of America, 144(6), EL503-EL508. doi:10.1121/1.5070144.

    Abstract

    Recently, the world’s attention was caught by an audio clip that was perceived as “Laurel” or “Yanny”. Opinions were sharply split: many could not believe others heard something different from their perception. However, a crowd-source experiment with >500 participants shows that it is possible to make people hear Laurel, where they previously heard Yanny, by manipulating preceding acoustic context. This study is not only the first to reveal within-listener variation in Laurel/Yanny percepts, but also to demonstrate contrast effects for global spectral information in larger frequency regions. Thus, it highlights the intricacies of human perception underlying these social media phenomena.
  • Bosker, H. R., & Cooke, M. (2018). Talkers produce more pronounced amplitude modulations when speaking in noise. The Journal of the Acoustical Society of America, 143(2), EL121-EL126. doi:10.1121/1.5024404.

    Abstract

    Speakers adjust their voice when talking in noise (known as Lombard speech), facilitating speech comprehension. Recent neurobiological models of speech perception emphasize the role of amplitude modulations in speech-in-noise comprehension, helping neural oscillators to ‘track’ the attended speech. This study tested whether talkers produce more pronounced amplitude modulations in noise. Across four different corpora, modulation spectra showed greater power in amplitude modulations below 4 Hz in Lombard speech compared to matching plain speech. This suggests that noise-induced speech contains more pronounced amplitude modulations, potentially helping the listening brain to entrain to the attended talker, aiding comprehension.
  • Bosker, H. R. (2013). Sibilant consonants. In G. Khan (Ed.), Encyclopedia of Hebrew Language and Linguistics (pp. 557-561). Leiden: Brill.

    Abstract

    Fricative consonants in Hebrew can be divided into bgdkpt and sibilants (ז, ס, צ, שׁ, שׂ). Hebrew sibilants have been argued to stem from Proto-Semitic affricates, laterals, interdentals and /s/. In standard Israeli Hebrew the sibilants are pronounced as [s] (ס and שׂ), [ʃ] (שׁ), [z] (ז), [ʦ] (צ).
  • Bosker, H. R., Pinget, A.-F., Quené, H., Sanders, T., & De Jong, N. H. (2013). What makes speech sound fluent? The contributions of pauses, speed and repairs. Language testing, 30(2), 159-175. doi:10.1177/0265532212455394.

    Abstract

    The oral fluency level of an L2 speaker is often used as a measure in assessing language proficiency. The present study reports on four experiments investigating the contributions of three fluency aspects (pauses, speed and repairs) to perceived fluency. In Experiment 1 untrained raters evaluated the oral fluency of L2 Dutch speakers. Using specific acoustic measures of pause, speed and repair phenomena, linear regression analyses revealed that pause and speed measures best predicted the subjective fluency ratings, and that repair measures contributed only very little. A second research question sought to account for these results by investigating perceptual sensitivity to acoustic pause, speed and repair phenomena, possibly accounting for the results from Experiment 1. In Experiments 2–4 three new groups of untrained raters rated the same L2 speech materials from Experiment 1 on the use of pauses, speed and repairs. A comparison of the results from perceptual sensitivity (Experiments 2–4) with fluency perception (Experiment 1) showed that perceptual sensitivity alone could not account for the contributions of the three aspects to perceived fluency. We conclude that listeners weigh the importance of the perceived aspects of fluency to come to an overall judgment.
  • Bowerman, M. (2003). Rola predyspozycji kognitywnych w przyswajaniu systemu semantycznego [Reprint]. In E. Dabrowska, & W. Kubiński (Eds.), Akwizycja języka w świetle językoznawstwa kognitywnego [Language acquisition from a cognitive linguistic perspective]. Kraków: Uniwersitas.

    Abstract

    Reprinted from: Bowerman, M. (1989). Learning a semantic system: What role do cognitive predispositions play? In M. L. Rice & R. L. Schiefelbusch (Eds.), The teachability of language (pp. 133-169). Baltimore: Paul H. Brookes.
  • Bowerman, M., & Choi, S. (2003). Space under construction: Language-specific spatial categorization in first language acquisition. In D. Gentner, & S. Goldin-Meadow (Eds.), Language in mind: Advances in the study of language and thought (pp. 387-427). Cambridge: MIT Press.
  • Bowerman, M. (1976). Commentary on M.D.S. Braine, “Children's first word combinations”. Monographs of the Society for Research in Child Development, 41(1), 98-104. Retrieved from http://www.jstor.org/stable/1165959.
  • Bowerman, M. (1996). Argument structure and learnability: Is a solution in sight? In J. Johnson, M. L. Juge, & J. L. Moxley (Eds.), Proceedings of the Twenty-second Annual Meeting of the Berkeley Linguistics Society, February 16-19, 1996. General Session and Parasession on The Role of Learnability in Grammatical Theory (pp. 454-468). Berkeley Linguistics Society.
  • Bowerman, M. (1986). First steps in acquiring conditionals. In E. C. Traugott, A. G. t. Meulen, J. S. Reilly, & C. A. Ferguson (Eds.), On conditionals (pp. 285-308). Cambridge University Press.

    Abstract

    This chapter is about the initial flowering of conditionals, if-(then) constructions, in children's spontaneous speech. It is motivated by two major theoretical interests. The first and most immediate is to understand the acquisition process itself. Conditionals are conceptually, and in many languages morphosyntactically, complex. What aspects of cognitive and grammatical development are implicated in their acquisition? Does learning take place in the context of particular interactions with other speakers? Where do conditionals fit in with the acquisition of other complex sentences? What are the semantic, syntactic and pragmatic properties of the first conditionals? Underlying this first interest is a second, more strictly linguistic one. Research of recent years has found increasing evidence that natural languages are constrained in certain ways. The source of these constraints is not yet clearly understood, but it is widely assumed that some of them derive ultimately from properties of children's capacity for language acquisition.

  • Bowerman, M. (1988). Inducing the latent structure of language. In F. Kessel (Ed.), The development of language and language researchers: Essays presented to Roger Brown (pp. 23-49). Hillsdale, N.J.: Lawrence Erlbaum.
  • Bowerman, M., & Majid, A. (2003). Kids’ cut & break. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 70-71). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877607.

    Abstract

    Kids’ Cut & Break is a task inspired by the original Cut & Break task (see MPI L&C Group Field Manual 2001), but designed for use with children as well as adults. There are fewer videoclips to be described (34 as opposed to 61), and they are “friendlier” and more interesting: the actors wear colorful clothes, smile, and act cheerfully. The first 2 items are warm-ups and 4 more items are fillers (interspersed with test items), so only 28 of the items are actually “test items”. In the original Cut & Break, each clip is in a separate file. In Kids’ Cut & Break, all 34 clips are edited into a single file, which plays the clips successively with 5 seconds of black screen between each clip.

  • Bowerman, M. (1976). Le relazioni strutturali nel linguaggio infantile: sintattiche o semantiche? [Reprint]. In F. Antinucci, & C. Castelfranchi (Eds.), Psicolinguistica: Percezione, memoria e apprendimento del linguaggio (pp. 303-321). Bologna: Il Mulino.

    Abstract

    Reprinted from: Bowerman, M. (1973). Structural relationships in children's utterances: Semantic or syntactic? In T. Moore (Ed.), Cognitive development and the acquisition of language (pp. 197-213). New York: Academic Press.
  • Bowerman, M. (1996). Learning how to structure space for language: A crosslinguistic perspective. In P. Bloom, M. A. Peterson, L. Nadel, & M. F. Garrett (Eds.), Language and space (pp. 385-436). Cambridge, MA: MIT press.
  • Bowerman, M. (1988). The 'no negative evidence' problem: How do children avoid constructing an overly general grammar? In J. Hawkins (Ed.), Explaining language universals (pp. 73-101). Oxford: Basil Blackwell.
  • Bowerman, M. (1988). The child's expression of meaning: Expanding relationships among lexicon, syntax, and morphology [Reprint]. In M. B. Franklin, & S. S. Barten (Eds.), Child language: A reader (pp. 106-117). Oxford: Oxford University Press.

    Abstract

    Reprinted from: Bowerman, M. (1981). The child's expression of meaning: Expanding relationships among lexicon, syntax, and morphology. In H. Winitz (Ed.), Native language and foreign language acquisition (pp. 172-189). New York: New York Academy of Sciences.
  • Bowerman, M. (1976). Semantic factors in the acquisition of rules for word use and sentence construction. In D. Morehead, & A. Morehead (Eds.), Directions in normal and deficient language development (pp. 99-179). Baltimore: University Park Press.
  • Bowerman, M. (1996). The origins of children's spatial semantic categories: Cognitive vs. linguistic determinants. In J. J. Gumperz, & S. C. Levinson (Eds.), Rethinking linguistic relativity (pp. 145-176). Cambridge University Press.
  • Boyle, W., Lindell, A. K., & Kidd, E. (2013). Investigating the role of verbal working memory in young children's sentence comprehension. Language Learning, 63(2), 211-242. doi:10.1111/lang.12003.

    Abstract

    This study considers the role of verbal working memory in sentence comprehension in typically developing English-speaking children. Fifty-six (N = 56) children aged 4;0–6;6 completed a test of language comprehension that contained sentences which varied in complexity, standardized tests of vocabulary and nonverbal intelligence, and three tests of memory that measured the three verbal components of Baddeley's model of Working Memory (WM): the phonological loop, the episodic buffer, and the central executive. The results showed that children experienced most difficulty comprehending sentences that contained noncanonical word order (passives and object relative clauses). A series of linear mixed effects models were run to analyze the contribution of each component of WM to sentence comprehension. In contrast to most previous studies, the measure of the central executive did not predict comprehension accuracy. A canonicity by episodic buffer interaction showed that the episodic buffer measure was positively associated with better performance on the noncanonical sentences. The results are discussed with reference to capacity-limit and experience-dependent approaches to language comprehension.
  • Brand, J., Monaghan, P., & Walker, P. (2018). Changing Signs: Testing How Sound-Symbolism Supports Early Word Learning. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 1398-1403). Austin, TX: Cognitive Science Society.

    Abstract

    Learning a language involves learning how to map specific forms onto their associated meanings. Such mappings can utilise arbitrariness and non-arbitrariness, yet, our understanding of how these two systems operate at different stages of vocabulary development is still not fully understood. The Sound-Symbolism Bootstrapping Hypothesis (SSBH) proposes that sound-symbolism is essential for word learning to commence, but empirical evidence of exactly how sound-symbolism influences language learning is still sparse. It may be the case that sound-symbolism supports acquisition of categories of meaning, or that it enables acquisition of individualized word meanings. In two Experiments where participants learned form-meaning mappings from either sound-symbolic or arbitrary languages, we demonstrate the changing roles of sound-symbolism and arbitrariness for different vocabulary sizes, showing that sound-symbolism provides an advantage for learning of broad categories, which may then transfer to support learning individual words, whereas an arbitrary language impedes acquisition of categories of sound to meaning.
  • Brand, S., & Ernestus, M. (2018). Listeners’ processing of a given reduced word pronunciation variant directly reflects their exposure to this variant: evidence from native listeners and learners of French. Quarterly Journal of Experimental Psychology, 71(5), 1240-1259. doi:10.1080/17470218.2017.1313282.

    Abstract

    In casual conversations, words often lack segments. This study investigates whether listeners rely on their experience with reduced word pronunciation variants during the processing of single segment reduction. We tested three groups of listeners in a lexical decision experiment with French words produced either with or without word-medial schwa (e.g., /ʀəvy/ and /ʀvy/ for revue). Participants also rated the relative frequencies of the two pronunciation variants of the words. If the recognition accuracy and reaction times for a given listener group correlate best with the frequencies of occurrence holding for that given listener group, recognition is influenced by listeners’ exposure to these variants. Native listeners' relative frequency ratings correlated well with their accuracy scores and RTs. Dutch advanced learners' accuracy scores and RTs were best predicted by their own ratings. In contrast, the accuracy and RTs from Dutch beginner learners of French could not be predicted by any relative frequency rating; the rating task was probably too difficult for them. The participant groups showed behaviour reflecting their difference in experience with the pronunciation variants. Our results strongly suggest that listeners store the frequencies of occurrence of pronunciation variants, and consequently the variants themselves.
  • Brand, S., & Ernestus, M. (2015). Reduction of obstruent-liquid-schwa clusters in casual French. In Scottish consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This study investigated pronunciation variants of word-final obstruent-liquid-schwa (OLS) clusters in casual French and the variables predicting the absence of the phonemes in these clusters. In a dataset of 291 noun tokens extracted from a corpus of casual conversations, we observed that in 80.7% of the tokens, at least one phoneme was absent and that in no less than 15.5% the whole cluster was absent (e.g., /mis/ for ministre). Importantly, the probability of a phoneme being absent was higher if the following phoneme was absent as well. These data show that reduction can affect several phonemes at once and is not restricted to just a handful of (function) words. Moreover, our results demonstrate that the absence of each single phoneme is affected by the speaker's tendency to increase ease of articulation and to adapt a word's pronunciation variant to the time available.
  • Brand, J., Monaghan, P., & Walker, P. (2018). The changing role of sound‐symbolism for small versus large vocabularies. Cognitive Science, 42(S2), 578-590. doi:10.1111/cogs.12565.

    Abstract

    Natural language contains many examples of sound‐symbolism, where the form of the word carries information about its meaning. Such systematicity is more prevalent in the words children acquire first, but arbitrariness dominates during later vocabulary development. Furthermore, systematicity appears to promote learning category distinctions, which may become more important as the vocabulary grows. In this study, we tested the relative costs and benefits of sound‐symbolism for word learning as vocabulary size varies. Participants learned form‐meaning mappings for words which were either congruent or incongruent with regard to sound‐symbolic relations. For the smaller vocabulary, sound‐symbolism facilitated learning individual words, whereas for larger vocabularies sound‐symbolism supported learning category distinctions. The changing properties of form‐meaning mappings according to vocabulary size may reflect the different ways in which language is learned at different stages of development.

    Additional information

    https://git.io/v5BXJ
  • Brandler, W. M., Morris, A. P., Evans, D. M., Scerri, T. S., Kemp, J. P., Timpson, N. J., St Pourcain, B., Davey Smith, G., Ring, S. M., Stein, J., Monaco, A. P., Talcott, J. B., Fisher, S. E., Webber, C., & Paracchini, S. (2013). Common variants in left/right asymmetry genes and pathways are associated with relative hand skill. PLoS Genetics, 9(9): e1003751. doi:10.1371/journal.pgen.1003751.

    Abstract

    Humans display structural and functional asymmetries in brain organization, strikingly with respect to language and handedness. The molecular basis of these asymmetries is unknown. We report a genome-wide association study meta-analysis for a quantitative measure of relative hand skill in individuals with dyslexia [reading disability (RD)] (n = 728). The most strongly associated variant, rs7182874 (P = 8.68×10−9), is located in PCSK6, further supporting an association we previously reported. We also confirmed the specificity of this association in individuals with RD; the same locus was not associated with relative hand skill in a general population cohort (n = 2,666). As PCSK6 is known to regulate NODAL in the development of left/right (LR) asymmetry in mice, we developed a novel approach to GWAS pathway analysis, using gene-set enrichment to test for an over-representation of highly associated variants within the orthologs of genes whose disruption in mice yields LR asymmetry phenotypes. Four out of 15 LR asymmetry phenotypes showed an over-representation (FDR≤5%). We replicated three of these phenotypes: situs inversus, heterotaxia, and double outlet right ventricle, in the general population cohort (FDR≤5%). Our findings lead us to propose that handedness is a polygenic trait controlled in part by the molecular mechanisms that establish LR body asymmetry early in development.
  • Brandmeyer, A., Sadakata, M., Spyrou, L., McQueen, J. M., & Desain, P. (2013). Decoding of single-trial auditory mismatch responses for online perceptual monitoring and neurofeedback. Frontiers in Neuroscience, 7: 265. doi:10.3389/fnins.2013.00265.

    Abstract

    Multivariate pattern classification methods are increasingly applied to neuroimaging data in the context of both fundamental research and in brain-computer interfacing approaches. Such methods provide a framework for interpreting measurements made at the single-trial level with respect to a set of two or more distinct mental states. Here, we define an approach in which the output of a binary classifier trained on data from an auditory mismatch paradigm can be used for online tracking of perception and as a neurofeedback signal. The auditory mismatch paradigm is known to induce distinct perceptual states related to the presentation of high- and low-probability stimuli, which are reflected in event-related potential (ERP) components such as the mismatch negativity (MMN). The first part of this paper illustrates how pattern classification methods can be applied to data collected in an MMN paradigm, including discussion of the optimization of preprocessing steps, the interpretation of features and how the performance of these methods generalizes across individual participants and measurement sessions. We then go on to show that the output of these decoding methods can be used in online settings as a continuous index of single-trial brain activation underlying perceptual discrimination. We conclude by discussing several potential domains of application, including neurofeedback, cognitive monitoring and passive brain-computer interfaces.

    Additional information

    Brandmeyer_etal_2013a.pdf
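    The abstract above describes using a binary classifier's continuous output as an online index of perception. As an illustration of that general idea only (the authors' actual EEG pipeline is not reproduced here, and the data below are hypothetical), a minimal nearest-centroid rule in plain Python shows how averaged class templates yield a graded decision value for each single trial:

    ```python
    def train_centroids(trials_a, trials_b):
        """Average the feature vectors of each class (e.g. standard vs.
        deviant trials) to obtain one centroid per class."""
        def centroid(trials):
            n = len(trials)
            return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]
        return centroid(trials_a), centroid(trials_b)

    def decision_value(trial, centroid_a, centroid_b):
        """Signed difference of squared distances to the two centroids:
        negative means 'closer to class A', positive 'closer to class B'.
        The graded magnitude, not just the sign, can serve as a
        continuous feedback signal."""
        def sqdist(x, y):
            return sum((xi - yi) ** 2 for xi, yi in zip(x, y))
        return sqdist(trial, centroid_a) - sqdist(trial, centroid_b)

    # Tiny synthetic "trials", two features per trial (hypothetical data)
    standards = [[0.9, 0.1], [1.1, -0.1], [1.0, 0.0]]
    deviants = [[-1.0, 0.2], [-0.9, -0.2], [-1.1, 0.0]]
    ca, cb = train_centroids(standards, deviants)
    score = decision_value([1.0, 0.05], ca, cb)  # negative: looks like a standard
    ```

    Regularized linear classifiers used in practice differ in detail, but share this structure: a per-trial scalar output whose sign gives the decoded category and whose magnitude gives a confidence that can be fed back to the user.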
  • Brandmeyer, A., Farquhar, J., McQueen, J. M., & Desain, P. (2013). Decoding speech perception by native and non-native speakers using single-trial electrophysiological data. PLoS One, 8: e68261. doi:10.1371/journal.pone.0068261.

    Abstract

    Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: 1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? 2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition.
  • Brascamp, J., Klink, P., & Levelt, W. J. M. (2015). The ‘laws’ of binocular rivalry: 50 years of Levelt’s propositions. Vision Research, 109, 20-37. doi:10.1016/j.visres.2015.02.019.

    Abstract

    It has been fifty years since Levelt’s monograph On Binocular Rivalry (1965) was published, but its four propositions that describe the relation between stimulus strength and the phenomenology of binocular rivalry remain a benchmark for theorists and experimentalists even today. In this review, we will revisit the original conception of the four propositions and the scientific landscape in which this happened. We will also provide a brief update concerning distributions of dominance durations, another aspect of Levelt’s monograph that has maintained a prominent presence in the field. In a critical evaluation of Levelt’s propositions against current knowledge of binocular rivalry we will then demonstrate that the original propositions are not completely compatible with what is known today, but that they can, in a straightforward way, be modified to encapsulate the progress that has been made over the past fifty years. The resulting modified propositions are shown to apply to a broad range of bistable perceptual phenomena, not just binocular rivalry, and they allow important inferences about the underlying neural systems. We argue that these inferences reflect canonical neural properties that play a role in visual perception in general, and we discuss ways in which future research can build on the work reviewed here to attain a better understanding of these properties.
  • Brehm, L., & Goldrick, M. (2018). Connectionist principles in theories of speech production. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 372-397). Oxford: Oxford University Press.

    Abstract

    This chapter focuses on connectionist modeling in language production, highlighting how core principles of connectionism provide coverage for empirical observations about representation and selection at the phonological, lexical, and sentence levels. The first section focuses on the connectionist principles of localist representations and spreading activation. It discusses how these two principles have motivated classic models of speech production and shows how they cover results of the picture-word interference paradigm, the mixed error effect, and aphasic naming errors. The second section focuses on how newer connectionist models incorporate the principles of learning and distributed representations through discussion of syntactic priming, cumulative semantic interference, sequencing errors, phonological blends, and code-switching.
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2019). Incremental interpretation in the first and second language. In M. Brown, & B. Dailey (Eds.), BUCLD 43: Proceedings of the 43rd annual Boston University Conference on Language Development (pp. 109-122). Sommerville, MA: Cascadilla Press.
  • Brehm, L., Taschenberger, L., & Meyer, A. S. (2019). Mental representations of partner task cause interference in picture naming. Acta Psychologica, 199: 102888. doi:10.1016/j.actpsy.2019.102888.

    Abstract

    Interference in picture naming occurs from representing a partner's preparations to speak (Gambi, van de Cavey, & Pickering, 2015). We tested the origins of this interference using a simple non-communicative joint naming task based on Gambi et al. (2015), where response latencies indexed interference from partner task and partner speech content, and eye fixations to partner objects indexed overt attention. Experiment 1 contrasted a partner-present condition with a control partner-absent condition to establish the role of the partner in eliciting interference. For latencies, we observed interference from the partner's task and speech content, with interference increasing due to partner task in the partner-present condition. Eye-tracking measures showed that interference in naming was not due to overt attention to partner stimuli but to broad expectations about likely utterances. Experiment 2 examined whether an equivalent non-verbal task also elicited interference, as predicted from a language as joint action framework. We replicated the finding of interference due to partner task and again found no relationship between overt attention and interference. These results support Gambi et al. (2015). Individuals co-represent a partner's task while speaking, and doing so does not require overt attention to partner stimuli.
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2019). Speaker-specific processing of anomalous utterances. Quarterly Journal of Experimental Psychology, 72(4), 764-778. doi:10.1177/1747021818765547.

    Abstract

    Existing work shows that readers often interpret grammatical errors (e.g., The key to the cabinets *were shiny) and sentence-level blends (“without-blend”: Claudia left without her headphones *off) in a non-literal fashion, inferring that a more frequent or more canonical utterance was intended instead. This work examines how interlocutor identity affects the processing and interpretation of anomalous sentences. We presented anomalies in the context of “emails” attributed to various writers in a self-paced reading paradigm and used comprehension questions to probe how sentence interpretation changed based upon properties of the item and properties of the “speaker.” Experiment 1 compared standardised American English speakers to L2 English speakers; Experiment 2 compared the same standardised English speakers to speakers of a non-Standardised American English dialect. Agreement errors and without-blends both led to more non-literal responses than comparable canonical items. For agreement errors, more non-literal interpretations also occurred when sentences were attributed to speakers of Standardised American English than either non-Standardised group. These data suggest that understanding sentences relies on expectations and heuristics about which utterances are likely. These are based upon experience with language, with speaker-specific differences, and upon more general cognitive biases.

    Additional information

    Supplementary material
  • Brehm, L., & Bock, K. (2013). What counts in grammatical number agreement? Cognition, 128(2), 149-169. doi:10.1016/j.cognition.2013.03.009.

    Abstract

    Both notional and grammatical number affect agreement during language production. To explore their workings, we investigated how semantic integration, a type of conceptual relatedness, produces variations in agreement (Solomon & Pearlmutter, 2004). These agreement variations are open to competing notional and lexical–grammatical number accounts. The notional hypothesis is that changes in number agreement reflect differences in referential coherence: More coherence yields more singularity. The lexical–grammatical hypothesis is that changes in agreement arise from competition between nouns differing in grammatical number: More competition yields more plurality. These hypotheses make opposing predictions about semantic integration. On the notional hypothesis, semantic integration promotes singular agreement. On the lexical–grammatical hypothesis, semantic integration promotes plural agreement. We tested these hypotheses with agreement elicitation tasks in two experiments. Both experiments supported the notional hypothesis, with semantic integration creating faster and more frequent singular agreement. This implies that referential coherence mediates the effect of semantic integration on number agreement.
  • Brennan, J. R., & Martin, A. E. (2019). Phase synchronization varies systematically with linguistic structure composition. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375(1791): 20190305. doi:10.1098/rstb.2019.0305.

    Abstract

    Computation in neuronal assemblies is putatively reflected in the excitatory and inhibitory cycles of activation distributed throughout the brain. In speech and language processing, coordination of these cycles resulting in phase synchronization has been argued to reflect the integration of information on different timescales (e.g. segmenting acoustic signals into phonemic and syllabic representations; Giraud and Poeppel 2012 Nat. Neurosci. 15, 511 (doi:10.1038/nn.3063)). A natural extension of this claim is that phase synchronization functions similarly to support the inference of more abstract higher-level linguistic structures (Martin 2016 Front. Psychol. 7, 120; Martin and Doumas 2017 PLoS Biol. 15, e2000663 (doi:10.1371/journal.pbio.2000663); Martin and Doumas 2019 Curr. Opin. Behav. Sci. 29, 77–83 (doi:10.1016/j.cobeha.2019.04.008)). Hale et al. (2018 Finding syntax in human encephalography with beam search. arXiv 1806.04127 (http://arxiv.org/abs/1806.04127)) showed that syntactically driven parsing decisions predict electroencephalography (EEG) responses in the time domain; here we ask whether phase synchronization, in the form of either inter-trial phase coherence or cross-frequency coupling (CFC) between high-frequency (i.e. gamma) bursts and lower-frequency carrier signals (i.e. delta, theta), changes as the linguistic structures of compositional meaning (viz., bracket completions, as denoted by the onset of words that complete phrases) accrue. We use a naturalistic story-listening EEG dataset from Hale et al. to assess the relationship between linguistic structure and phase alignment. We observe increased phase synchronization as a function of phrase counts in the delta, theta, and gamma bands, especially for function words. A more complex pattern emerged for CFC as phrase count changed, possibly related to the lack of a one-to-one mapping between ‘size’ of linguistic structure and frequency band—an assumption that is tacit in recent frameworks. These results emphasize the important role that phase synchronization, desynchronization, and thus, inhibition, play in the construction of compositional meaning by distributed neural networks in the brain.
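    The inter-trial phase coherence measure invoked in this abstract has a compact definition: at a given time-frequency point, coherence is the magnitude of the mean unit phase vector across trials, ranging from 0 (phases uniformly scattered) to 1 (phases perfectly aligned). A minimal sketch (illustrative only, not the authors' analysis code):

    ```python
    import cmath

    def inter_trial_coherence(phases):
        """Magnitude of the mean unit phase vector across trials:
        1.0 = identical phase every trial, ~0.0 = phases cancel out."""
        vectors = [cmath.exp(1j * p) for p in phases]
        return abs(sum(vectors) / len(vectors))

    aligned = inter_trial_coherence([0.5] * 10)             # ~1.0: same phase every trial
    scattered = inter_trial_coherence([0.0, cmath.pi] * 5)  # ~0.0: opposite phases cancel
    ```

    In practice the phase angles would come from a time-frequency decomposition (e.g. wavelets) of the EEG, computed per band and per trial before averaging.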
  • Brouwer, S. (2013). Continuous recognition memory for spoken words in noise. Proceedings of Meetings on Acoustics, 19: 060117. doi:10.1121/1.4798781.

    Abstract

    Previous research has shown that talker variability affects recognition memory for spoken words (Palmeri et al., 1993). This study examines whether additive noise is similarly retained in memory for spoken words. In a continuous recognition memory task, participants listened to a list of spoken words mixed with noise consisting of a pure tone or of high-pass filtered white noise. The noise and speech were in non-overlapping frequency bands. In Experiment 1, listeners indicated whether each spoken word in the list was OLD (heard before in the list) or NEW. Results showed that listeners were as accurate and as fast at recognizing a word as old if it was repeated with the same or different noise. In Experiment 2, listeners also indicated whether words judged as OLD were repeated with the same or with a different type of noise. Results showed that listeners benefitted from hearing words presented with the same versus different noise. These data suggest that spoken words and temporally-overlapping but spectrally non-overlapping noise are retained or reconstructed together for explicit, but not for implicit recognition memory. This indicates that the extent to which noise variability is retained seems to depend on the depth of processing.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2013). Discourse context and the recognition of reduced and canonical spoken words. Applied Psycholinguistics, 34, 519-539. doi:10.1017/S0142716411000853.

    Abstract

    In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" competitor, a "canonical form" competitor and an unrelated distractor. Target sentences were presented in isolation or with a wider discourse context. Experiment 1 revealed that target recognition was facilitated by wider discourse information. Importantly, the recognition of reduced forms improved significantly when preceded by strongly rather than by weakly supportive discourse contexts. This was not the case for canonical forms: listeners' target word recognition was not dependent on the degree of supportive context. Experiment 2 showed that the differential context effects in Experiment 1 were not due to an additional amount of speaker information. Thus, these data suggest that in natural settings a strongly supportive discourse context is more important for the recognition of reduced forms than the recognition of canonical forms.
  • Brouwer, S., & Bradlow, A. R. (2015). The effect of target-background synchronicity on speech-in-speech recognition. In Scottish consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    The aim of the present study was to investigate whether speech-in-speech recognition is affected by variation in the target-background timing relationship. Specifically, we examined whether within trial synchronous or asynchronous onset and offset of the target and background speech influenced speech-in-speech recognition. Native English listeners were presented with English target sentences in the presence of English or Dutch background speech. Importantly, only the short-term temporal context –in terms of onset and offset synchrony or asynchrony of the target and background speech– varied across conditions. Participants’ task was to repeat back the English target sentences. The results showed an effect of synchronicity for English-in-English but not for English-in-Dutch recognition, indicating that familiarity with the English background lead in the asynchronous English-in-English condition might have attracted attention towards the English background. Overall, this study demonstrated that speech-in-speech recognition is sensitive to the target-background timing relationship, revealing an important role for variation in the local context of the target-background relationship as it extends beyond the limits of the time-frame of the to-be-recognized target sentence.
  • Brouwer, S., & Bradlow, A. R. (2015). The temporal dynamics of spoken word recognition in adverse listening conditions. Journal of Psycholinguistic Research. Advance online publication. doi:10.1007/s10936-015-9396-9.

    Abstract

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. candle), an onset competitor (e.g. candy), a rhyme competitor (e.g. sandal), and an unrelated distractor (e.g. lemon). Target words were presented in quiet, mixed with broadband noise, or mixed with background speech. Results showed that lexical competition changes throughout the observation window as a function of what is presented in the background. These findings suggest that, rather than being strictly sequential, stream segregation and lexical competition interact during spoken word recognition.
  • Brown, P. (2003). Multimodal multiperson interaction with infants aged 9 to 15 months. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 22-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877610.

    Abstract

    Interaction, for all that it has an ethological base, is culturally constituted, and how new social members are enculturated into the interactional practices of the society is of critical interest to our understanding of interaction – how much is learned, how variable is it across cultures – as well as to our understanding of the role of culture in children’s social-cognitive development. The goal of this task is to document the nature of caregiver infant interaction in different cultures, especially during the critical age of 9-15 months when children come to have an understanding of others’ intentions. This is of interest to all students of interaction; it does not require specialist knowledge of children.
  • Brown, A., & Gullberg, M. (2013). L1–L2 convergence in clausal packaging in Japanese and English. Bilingualism: Language and Cognition, 16, 477-494. doi:10.1017/S1366728912000491.

    Abstract

    This research received technical and financial support from Syracuse University, the Max Planck Institute for Psycholinguistics, and the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO; MPI 56-384, The Dynamics of Multilingual Processing, awarded to Marianne Gullberg and Peter Indefrey).
  • Brown, P. (2013). La estructura conversacional y la adquisición del lenguaje: El papel de la repetición en el habla de los adultos y niños tzeltales. In L. de León Pasquel (Ed.), Nuevos senderos en el studio de la adquisición de lenguas mesoamericanas: Estructura, narrativa y socialización (pp. 35-82). Mexico: CIESAS-UNAM.

    Abstract

    This is a translation of the Brown 1998 article in Journal of Linguistic Anthropology, 'Conversational structure and language acquisition: The role of repetition in Tzeltal adult and child speech'.

  • Brown, P. (2015). Language, culture, and spatial cognition. In F. Sharifian (Ed.), Routledge Handbook on Language and Culture (pp. 294-309). London: Routledge.
  • Brown, C. M., Hagoort, P., & Swaab, T. Y. (1996). Neurophysiological evidence for a temporal disorganization in aphasic patients with comprehension deficits. In W. Widdig, I. Ohlendorff, T. A. Pollow, & J. Malin (Eds.), Aphasiatherapie im Wandel (pp. 89-122). Freiburg: Hochschul Verlag.
  • Brown, P., Pfeiler, B., de León, L., & Pye, C. (2013). The acquisition of agreement in four Mayan languages. In E. Bavin, & S. Stoll (Eds.), The acquisition of ergativity (pp. 271-306). Amsterdam: Benjamins.

    Abstract

    This paper presents results of a comparative project documenting the development of verbal agreement inflections in children learning four different Mayan languages: K’iche’, Tzeltal, Tzotzil, and Yukatek. These languages have similar inflectional paradigms: they have a generally agglutinative morphology, with transitive verbs obligatorily marked with separate cross-referencing inflections for the two core arguments (‘ergative’ and ‘absolutive’). Verbs are also inflected for aspect and mood, and they carry a ‘status suffix’ which generally marks verb transitivity and mood. At a more detailed level, the four languages differ strikingly in the realization of cross-reference marking. For each language, we examined longitudinal language production data from two children at around 2;0, 2;6, 3;0, and 3;6 years of age. We relate differences in the acquisition patterns of verbal morphology in the languages to 1) the placement of affixes, 2) phonological and prosodic prominence, 3) language-specific constraints on the various forms of the affixes, and 4) consistent vs. split ergativity, and conclude that prosodic salience accounts provide the best explanation for the acquisition patterns in these four languages.

  • Brown, P. (2015). Space: Linguistic expression of. In J. D. Wright (Ed.), International Encyclopedia of the Social and Behavioral Sciences (2nd ed.) Vol. 23 (pp. 89-93). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.57017-2.
  • Brown, P. (2015). Politeness and language. In J. D. Wright (Ed.), The International Encyclopedia of the Social and Behavioural Sciences (IESBS), (2nd ed.) (pp. 326-330). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.53072-4.
  • Brown, P. (1976). Women and politeness: A new perspective on language and society. Reviews in Anthropology, 3, 240-249.
  • Brown, P., & Levinson, S. C. (2018). Tzeltal: The demonstrative system. In S. C. Levinson, S. Cutfield, M. Dunn, N. J. Enfield, & S. Meira (Eds.), Demonstratives in cross-linguistic perspective (pp. 150-177). Cambridge: Cambridge University Press.
  • Brown-Schmidt, S., & Konopka, A. E. (2015). Processes of incremental message planning during conversation. Psychonomic Bulletin & Review, 22, 833-843. doi:10.3758/s13423-014-0714-2.

    Abstract

    Speaking begins with the formulation of an intended preverbal message and linguistic encoding of this information. The transition from thought to speech occurs incrementally, with cascading planning at subsequent levels of production. In this article, we aim to specify the mechanisms that support incremental message preparation. We contrast two hypotheses about the mechanisms responsible for incorporating message-level information into a linguistic plan. According to the Initial Preparation view, messages can be encoded as fluent utterances if all information is ready before speaking begins. By contrast, on the Continuous Incrementality view, messages can be continually prepared and updated throughout the production process, allowing for fluent production even if new information is added to the message while speaking is underway. Testing these hypotheses, eye-tracked speakers in two experiments produced unscripted, conjoined noun phrases with modifiers. Both experiments showed that new message elements can be incrementally incorporated into the utterance even after articulation begins, consistent with a Continuous Incrementality view of message planning, in which messages percolate to linguistic encoding immediately as that information becomes available in the mind of the speaker. We conclude by discussing the functional role of incremental message planning in conversational speech and the situations in which this continuous incremental planning would be most likely to be observed.
  • Brucato, N., Guadalupe, T., Franke, B., Fisher, S. E., & Francks, C. (2015). A schizophrenia-associated HLA locus affects thalamus volume and asymmetry. Brain, Behavior, and Immunity, 46, 311-318. doi:10.1016/j.bbi.2015.02.021.

    Abstract

    Genes of the Major Histocompatibility Complex (MHC) have recently been shown to have neuronal functions in the thalamus and hippocampus. Common genetic variants in the Human Leukocyte Antigens (HLA) region, the human homologue of the MHC locus, are associated with small effects on susceptibility to schizophrenia, while volumetric changes of the thalamus and hippocampus have also been linked to schizophrenia. We therefore investigated whether common variants of the HLA would affect volumetric variation of the thalamus and hippocampus. We analyzed thalamus and hippocampus volumes, as measured using structural magnetic resonance imaging, in 1,265 healthy participants. These participants had also been genotyped using genome-wide single nucleotide polymorphism (SNP) arrays. We imputed genotypes for single nucleotide polymorphisms at high density across the HLA locus, as well as HLA allotypes and HLA amino acids, by use of a reference population dataset that was specifically targeted to the HLA region. We detected a significant association of the SNP rs17194174 with thalamus volume (nominal P=0.0000017, corrected P=0.0039), as well as additional SNPs within the same region of linkage disequilibrium. This effect was largely lateralized to the left thalamus and is localized within a genomic region previously associated with schizophrenia. The associated SNPs are also clustered within a potential regulatory element, and a region of linkage disequilibrium that spans genes expressed in the thalamus, including HLA-A. Our data indicate that genetic variation within the HLA region influences the volume and asymmetry of the human thalamus. The molecular mechanisms underlying this association may relate to HLA influences on susceptibility to schizophrenia.
  • Bruggeman, L., & Cutler, A. (2019). The dynamics of lexical activation and competition in bilinguals’ first versus second language. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1342-1346). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Speech input causes listeners to activate multiple candidate words which then compete with one another. These include onset competitors, that share a beginning (bumper, butter), but also, counterintuitively, rhyme competitors, sharing an ending (bumper, jumper). In L1, competition is typically stronger for onset than for rhyme. In L2, onset competition has been attested but rhyme competition has heretofore remained largely unexamined. We assessed L1 (Dutch) and L2 (English) word recognition by the same late-bilingual individuals. In each language, eye gaze was recorded as listeners heard sentences and viewed sets of drawings: three unrelated, one depicting an onset or rhyme competitor of a word in the input. Activation patterns revealed substantial onset competition but no significant rhyme competition in either L1 or L2. Rhyme competition may thus be a “luxury” feature of maximally efficient listening, to be abandoned when resources are scarcer, as in listening by late bilinguals, in either language.
  • Bruggeman, L., & Janse, E. (2015). Older listeners' decreased flexibility in adjusting to changes in speech signal reliability. In M. Wolters, J. Livingstone, B. Beattie, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Under noise or speech reductions, young adult listeners flexibly adjust the parameters of lexical activation and competition to allow for speech signal unreliability. Consequently, mismatches in the input are treated more leniently such that lexical candidates are not immediately deactivated. Using eyetracking, we assessed whether this modulation of recognition dynamics also occurs for older listeners. Dutch participants (aged 60+) heard Dutch sentences containing a critical word while viewing displays of four line drawings. The name of one picture shared either onset or rhyme with the critical word (i.e., was a phonological competitor). Sentences were either clear and noise-free, or had several phonemes replaced by bursts of noise. A larger preference for onset competitors than for rhyme competitors was observed in both clear and noise conditions; performance did not alter across condition. This suggests that dynamic adjustment of spoken-word recognition parameters in response to noise is less available to older listeners.
  • Buetti, S., Tamietto, M., Hervais-Adelman, A., Kerzel, D., de Gelder, B., & Pegna, A. J. (2013). Dissociation between goal-directed and discrete response localization in a patient with bilateral cortical blindness. Journal of Cognitive Neuroscience, 25(10), 1769-1775. doi:10.1162/jocn_a_00404.

    Abstract

    We investigated localization performance of simple targets in patient TN, who suffered bilateral damage of his primary visual cortex and shows complete cortical blindness. Using a two-alternative forced-choice paradigm, TN was asked to guess the position of left-right targets with goal-directed and discrete manual responses. The results indicate a clear dissociation between goal-directed and discrete responses. TN pointed toward the correct target location in approximately 75% of the trials but was at chance level with discrete responses. This indicates that the residual ability to localize an unseen stimulus depends critically on the possibility to translate a visual signal into a goal-directed motor output at least in certain forms of blindsight.
  • Bull, L. E., Oliver, C., Callaghan, E., & Woodcock, K. A. (2015). Increased Exposure to Rigid Routines can Lead to Increased Challenging Behavior Following Changes to Those Routines. Journal of Autism and Developmental Disorders, 45(6), 1569-1578. doi:10.1007/s10803-014-2308-2.

    Abstract

    Several neurodevelopmental disorders are associated with preference for routine and challenging behavior following changes to routines. We examine individuals with Prader–Willi syndrome, who show elevated levels of this behavior, to better understand how previous experience of a routine can affect challenging behavior elicited by disruption to that routine. Play based challenges exposed 16 participants to routines, which were either adhered to or changed. Temper outburst behaviors, heart rate and movement were measured. As participants were exposed to routines for longer before a change (between 10 and 80 min; within participants), more temper outburst behaviors were elicited by changes. Increased emotional arousal was also elicited, which was indexed by heart rate increases not driven by movement. Further study will be important to understand whether current intervention approaches that limit exposure to changes, may benefit from the structured integration of flexibility to ensure that the opportunity for routine establishment is also limited.

    Additional information

    10803_2014_2308_MOESM1_ESM.docx
  • Bulut, T., Cheng, S. K., Xu, K. Y., Hung, D. L., & Wu, D. H. (2018). Is there a processing preference for object relative clauses in Chinese? Evidence from ERPs. Frontiers in Psychology, 9: 995. doi:10.3389/fpsyg.2018.00995.

    Abstract

    A consistent finding across head-initial languages, such as English, is that subject relative clauses (SRCs) are easier to comprehend than object relative clauses (ORCs). However, several studies in Mandarin Chinese, a head-final language, revealed the opposite pattern, which might be modulated by working memory (WM) as suggested by recent results from self-paced reading performance. In the present study, event-related potentials (ERPs) were recorded when participants with high and low WM spans (measured by forward digit span and operation span tests) read Chinese ORCs and SRCs. The results revealed an N400-P600 complex elicited by ORCs on the relativizer, whose magnitude was modulated by the WM span. On the other hand, a P600 effect was elicited by SRCs on the head noun, whose magnitude was not affected by the WM span. These findings paint a complex picture of relative clause processing in Chinese such that opposing factors involving structural ambiguities and integration of filler-gap dependencies influence processing dynamics in Chinese relative clauses.
  • Burenhult, N. (2003). Attention, accessibility, and the addressee: The case of the Jahai demonstrative ton. Pragmatics, 13(3), 363-379.
  • Burenkova, O. V., & Fisher, S. E. (2019). Genetic insights into the neurobiology of speech and language. In E. Grigorenko, Y. Shtyrov, & P. McCardle (Eds.), All About Language: Science, Theory, and Practice. Baltimore, MD: Paul Brookes Publishing, Inc.
  • Burra, N., Hervais-Adelman, A., Kerzel, D., Tamietto, M., de Gelder, B., & Pegna, A. J. (2013). Amygdala Activation for Eye Contact Despite Complete Cortical Blindness. The Journal of Neuroscience, 33(25), 10483-10489. doi:10.1523/jneurosci.3994-12.2013.

    Abstract

    Cortical blindness refers to the loss of vision that occurs after destruction of the primary visual cortex. Although there is no sensory cortex and hence no conscious vision, some cortically blind patients show amygdala activation in response to facial or bodily expressions of emotion. Here we investigated whether direction of gaze could also be processed in the absence of any functional visual cortex. A well-known patient with bilateral destruction of his visual cortex and subsequent cortical blindness was investigated in an fMRI paradigm during which blocks of faces were presented either with their gaze directed toward or away from the viewer. Increased right amygdala activation was found in response to directed compared with averted gaze. Activity in this region was further found to be functionally connected to a larger network associated with face and gaze processing. The present study demonstrates that, in human subjects, the amygdala response to eye contact does not require an intact primary visual cortex.
  • Burra, N., Hervais-Adelman, A., Celeghin, A., de Gelder, B., & Pegna, A. J. (2019). Affective blindsight relies on low spatial frequencies. Neuropsychologia, 128, 44-49. doi:10.1016/j.neuropsychologia.2017.10.009.

    Abstract

    The human brain can process facial expressions of emotions rapidly and without awareness. Several studies in patients with damage to their primary visual cortices have shown that they may be able to guess the emotional expression on a face despite their cortical blindness. This non-conscious processing, called affective blindsight, may arise through an intact subcortical visual route that leads from the superior colliculus to the pulvinar, and thence to the amygdala. This pathway is thought to process the crude visual information conveyed by the low spatial frequencies of the stimuli.

    In order to investigate whether this is the case, we studied a patient (TN) with bilateral cortical blindness and affective blindsight. An fMRI paradigm was performed in which fearful and neutral expressions were presented using faces that were either unfiltered, or filtered to remove high or low spatial frequencies. Unfiltered fearful faces produced right amygdala activation although the patient was unaware of the presence of the stimuli. More importantly, the low spatial frequency components of fearful faces continued to produce right amygdala activity while the high spatial frequency components did not. Our findings thus confirm that the visual information present in the low spatial frequencies is sufficient to produce affective blindsight, further suggesting that its existence could rely on the subcortical colliculo-pulvino-amygdalar pathway.
  • Butterfield, S., & Cutler, A. (1988). Segmentation errors by human listeners: Evidence for a prosodic segmentation strategy. In W. Ainsworth, & J. Holmes (Eds.), Proceedings of SPEECH ’88: Seventh Symposium of the Federation of Acoustic Societies of Europe: Vol. 3 (pp. 827-833). Edinburgh: Institute of Acoustics.
  • Byun, K.-S., & Byun, E.-J. (2015). Becoming Friends with International Sign. Seoul: Sign Language Dandelion.
  • Byun, K.-S., De Vos, C., Bradford, A., Zeshan, U., & Levinson, S. C. (2018). First encounters: Repair sequences in cross-signing. Topics in Cognitive Science, 10(2), 314-334. doi:10.1111/tops.12303.

    Abstract

    Most human communication is between people who speak or sign the same languages. Nevertheless, communication is to some extent possible where there is no language in common, as every tourist knows. How this works is of some theoretical interest (Levinson 2006). A nice arena to explore this capacity is when deaf signers of different languages meet for the first time, and are able to use the iconic affordances of sign to begin communication. Here we focus on Other-Initiated Repair (OIR), that is, where one signer makes clear he or she does not understand, thus initiating repair of the prior conversational turn. OIR sequences are typically of a three-turn structure (Schegloff 2007) including the problem source turn (T-1), the initiation of repair (T0), and the turn offering a problem solution (T+1). These sequences seem to have a universal structure (Dingemanse et al. 2013). We find that in most cases where such OIR occur, the signer of the troublesome turn (T-1) foresees potential difficulty, and marks the utterance with 'try markers' (Sacks & Schegloff 1979, Moerman 1988) which pause to invite recognition. The signers use repetition, gestural holds, prosodic lengthening and eyegaze at the addressee as such try-markers. Moreover, when T-1 is try-marked this allows for faster response times of T+1 with respect to T0. This finding suggests that signers in these 'first encounter' situations actively anticipate potential trouble and, through try-marking, mobilize and facilitate OIRs. The suggestion is that heightened meta-linguistic awareness can be utilized to deal with these problems at the limits of our communicational ability.
  • Byun, K.-S., De Vos, C., Roberts, S. G., & Levinson, S. C. (2018). Interactive sequences modulate the selection of expressive forms in cross-signing. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 67-69). Toruń, Poland: NCU Press. doi:10.12775/3991-1.012.
  • Cai, Z. G., Connell, L., & Holler, J. (2013). Time does not flow without language: Spatial distance affects temporal duration regardless of movement or direction. Psychonomic Bulletin & Review, 20(5), 973-980. doi:10.3758/s13423-013-0414-3.

    Abstract

    Much evidence has suggested that people conceive of time as flowing directionally in transverse space (e.g., from left to right for English speakers). However, this phenomenon has never been tested in a fully nonlinguistic paradigm where neither stimuli nor task use linguistic labels, which raises the possibility that time is directional only when reading/writing direction has been evoked. In the present study, English-speaking participants viewed a video where an actor sang a note while gesturing and reproduced the duration of the sung note by pressing a button. Results showed that the perceived duration of the note was increased by a long-distance gesture, relative to a short-distance gesture. This effect was equally strong for gestures moving from left to right and from right to left and was not dependent on gestures depicting movement through space; a weaker version of the effect emerged with static gestures depicting spatial distance. Since both our gesture stimuli and temporal reproduction task were nonlinguistic, we conclude that the spatial representation of time is nondirectional: Movement contributes, but is not necessary, to the representation of temporal information in a transverse timeline.