Publications

Showing publications 1–100 of 1338
  • Acerbi, A., Van Leeuwen, E. J. C., Haun, D. B. M., & Tennie, C. (2018). Reply to 'Sigmoidal acquisition curves are good indicators of conformist transmission'. Scientific Reports, 8(1): 14016. doi:10.1038/s41598-018-30382-0.

    Abstract

    In the Smaldino et al. study ‘Sigmoidal Acquisition Curves are Good Indicators of Conformist Transmission’, our original findings regarding the conditional validity of using population-level sigmoidal acquisition curves as means to evidence individual-level conformity are contested. We acknowledge the identification of useful nuances, yet conclude that our original findings remain relevant for the study of conformist learning mechanisms. Replying to: Smaldino, P. E., Aplin, L. M. & Farine, D. R. Sigmoidal Acquisition Curves Are Good Indicators of Conformist Transmission. Sci. Rep. 8, https://doi.org/10.1038/s41598-018-30248-5 (2018).
  • Acheson, D. J. (2013). Signatures of response conflict monitoring in language production. Procedia - Social and Behavioral Sciences, 94, 214-215. doi:10.1016/j.sbspro.2013.09.106.
  • Acheson, D. J., & Hagoort, P. (2013). Stimulating the brain's language network: Syntactic ambiguity resolution after TMS to the IFG and MTG. Journal of Cognitive Neuroscience, 25(10), 1664-1677. doi:10.1162/jocn_a_00430.

    Abstract

    The posterior middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) are two critical nodes of the brain's language network. Previous neuroimaging evidence has supported a dissociation in language comprehension in which parts of the MTG are involved in the retrieval of lexical syntactic information and the IFG is involved in unification operations that maintain, select, and integrate multiple sources of information over time. In the present investigation, we tested for causal evidence of this dissociation by modulating activity in IFG and MTG using an offline TMS procedure: continuous theta-burst stimulation. Lexical–syntactic retrieval was manipulated by using sentences with and without a temporarily word-class (noun/verb) ambiguity (e.g., run). In one group of participants, TMS was applied to the IFG and MTG, and in a control group, no TMS was applied. Eye movements were recorded and quantified at two critical sentence regions: a temporarily ambiguous region and a disambiguating region. Results show that stimulation of the IFG led to a modulation of the ambiguity effect (ambiguous–unambiguous) at the disambiguating sentence region in three measures: first fixation durations, total reading times, and regressive eye movements into the region. Both IFG and MTG stimulation modulated the ambiguity effect for total reading times in the temporarily ambiguous sentence region relative to a control group. The current results demonstrate that an offline repetitive TMS protocol can have influences at a different point in time during online processing and provide causal evidence for IFG involvement in unification operations during sentence comprehension.
  • Akita, K., & Dingemanse, M. (2019). Ideophones (Mimetics, Expressives). In Oxford Research Encyclopedia for Linguistics. Oxford: Oxford University Press. doi:10.1093/acrefore/9780199384655.013.477.

    Abstract

    Ideophones, also termed “mimetics” or “expressives,” are marked words that depict sensory imagery. They are found in many of the world’s languages, and sizable lexical classes of ideophones are particularly well-documented in languages of Asia, Africa, and the Americas. Ideophones are not limited to onomatopoeia like meow and smack, but cover a wide range of sensory domains, such as manner of motion (e.g., plisti plasta ‘splish-splash’ in Basque), texture (e.g., tsaklii ‘rough’ in Ewe), and psychological states (e.g., wakuwaku ‘excited’ in Japanese). Across languages, ideophones stand out as marked words due to special phonotactics, expressive morphology including certain types of reduplication, and relative syntactic independence, in addition to production features like prosodic foregrounding and common co-occurrence with iconic gestures.

    Three intertwined issues have been repeatedly debated in the century-long literature on ideophones. (a) Definition: Isolated descriptive traditions and cross-linguistic variation have sometimes obscured a typologically unified view of ideophones, but recent advances show the promise of a prototype definition of ideophones as conventionalised depictions in speech, with room for language-specific nuances. (b) Integration: The variable integration of ideophones across linguistic levels reveals an interaction between expressiveness and grammatical integration, and has important implications for how to conceive of dependencies between linguistic systems. (c) Iconicity: Ideophones form a natural laboratory for the study of iconic form-meaning associations in natural languages, and converging evidence from corpus and experimental studies suggests important developmental, evolutionary, and communicative advantages of ideophones.
  • Alday, P. M. (2019). How much baseline correction do we need in ERP research? Extended GLM model can replace baseline correction while lifting its limits. Psychophysiology, 56(12): e13451. doi:10.1111/psyp.13451.

    Abstract

    Baseline correction plays an important role in past and current methodological debates in ERP research (e.g., the Tanner vs. Maess debate in the Journal of Neuroscience Methods), serving as a potential alternative to strong high‐pass filtering. However, the very assumptions that underlie traditional baseline correction also undermine it, implying a reduction in the signal‐to‐noise ratio. In other words, traditional baseline correction is statistically unnecessary and even undesirable. Including the baseline interval as a predictor in a GLM‐based statistical approach allows the data to determine how much baseline correction is needed, including both full traditional and no baseline correction as special cases. This reduces the amount of variance in the residual error term and thus has the potential to increase statistical power.
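    The GLM formulation in this abstract can be sketched in a few lines: instead of subtracting the mean of the baseline interval from each post-stimulus measurement, the baseline mean enters the model as a covariate, so the data determine the degree of correction (a coefficient of 1 recovers traditional baseline correction, 0 recovers no correction). Below is a minimal illustrative sketch with simulated single-trial data; the variable names and simulation parameters are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

# Simulated single-trial data: a slow drift that leaks into both the
# baseline interval and the post-stimulus window, plus a condition effect.
drift = rng.normal(0.0, 2.0, n_trials)
condition = rng.integers(0, 2, n_trials)           # 0/1 condition code
baseline = drift + rng.normal(0.0, 1.0, n_trials)  # mean pre-stimulus amplitude
erp = 1.5 * condition + drift + rng.normal(0.0, 1.0, n_trials)

# Traditional baseline correction: force a baseline coefficient of exactly 1.
corrected = erp - baseline

# GLM alternative: estimate the baseline coefficient from the data.
X = np.column_stack([np.ones(n_trials), condition, baseline])
beta, *_ = np.linalg.lstsq(X, erp, rcond=None)

print(beta)  # [intercept, condition effect, baseline coefficient]
```

    In this simulation the estimated baseline coefficient falls between 0 and 1, because the baseline interval contains drift (worth correcting for) plus its own noise (which full subtraction would propagate into the signal).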
  • Alday, P. M. (2019). M/EEG analysis of naturalistic stories: a review from speech to language processing. Language, Cognition and Neuroscience, 34(4), 457-473. doi:10.1080/23273798.2018.1546882.

    Abstract

    M/EEG research using naturally spoken stories as stimuli has focused largely on speech and not language processing. The temporal resolution of M/EEG is a two-edged sword, allowing for the study of the fine acoustic structure of speech, yet easily overwhelmed by the temporal noise of variation in constituent length. Recent theories on the neural encoding of linguistic structure require the temporal resolution of M/EEG, yet suffer from confounds when studied on traditional, heavily controlled stimuli. Recent methodological advances allow for synthesising naturalistic designs and traditional, controlled designs into effective M/EEG research on naturalistic language. In this review, we highlight common threads throughout the at-times distinct research traditions of speech and language processing. We conclude by examining the tradeoffs and successes of three M/EEG studies on fully naturalistic language paradigms and the future directions they suggest.
  • Alday, P. M., & Kretzschmar, F. (2019). Speed-accuracy tradeoffs in brain and behavior: Testing the independence of P300 and N400 related processes in behavioral responses to sentence categorization. Frontiers in Human Neuroscience, 13: 285. doi:10.3389/fnhum.2019.00285.

    Abstract

    Although the N400 was originally discovered in a paradigm designed to elicit a P300 (Kutas and Hillyard, 1980), its relationship with the P300 and how both overlapping event-related potentials (ERPs) determine behavioral profiles is still elusive. Here we conducted an ERP (N = 20) and a multiple-response speed-accuracy tradeoff (SAT) experiment (N = 16) on distinct participant samples using an antonym paradigm (The opposite of black is white/nice/yellow with acceptability judgment). We hypothesized that SAT profiles incorporate processes of task-related decision-making (P300) and stimulus-related expectation violation (N400). We replicated previous ERP results (Roehm et al., 2007): in the correct condition (white), the expected target elicits a P300, while both expectation violations engender an N400 [reduced for related (yellow) vs. unrelated targets (nice)]. Using multivariate Bayesian mixed-effects models, we modeled the P300 and N400 responses simultaneously and found that correlation between residuals and subject-level random effects of each response window was minimal, suggesting that the components are largely independent. For the SAT data, we found that antonyms and unrelated targets had a similar slope (rate of increase in accuracy over time) and an asymptote at ceiling, while related targets showed both a lower slope and a lower asymptote, reaching only approximately 80% accuracy. Using a GLMM-based approach (Davidson and Martin, 2013), we modeled these dynamics using response time and condition as predictors. Replacing the predictor for condition with the averaged P300 and N400 amplitudes from the ERP experiment, we achieved identical model performance. We then examined the piecewise contribution of the P300 and N400 amplitudes with partial effects (see Hohenstein and Kliegl, 2015). Unsurprisingly, the P300 amplitude was the strongest contributor to the SAT curve in the antonym condition and the N400 was the strongest contributor in the unrelated condition. In brief, this is the first demonstration of how overlapping ERP responses in one sample of participants predict behavioral SAT profiles of another sample. The P300 and N400 reflect two independent but interacting processes, and the competition between these processes is reflected differently in behavioral parameters of speed and accuracy.

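    The SAT profiles summarised in the abstract above are conventionally described by a shifted-exponential curve with three parameters: an intercept (when accuracy departs from chance), a rate (the slope) and an asymptote. A minimal sketch of that standard parameterisation, on a 0–1 accuracy scale; the parameter values are illustrative, not those estimated in the paper.

```python
import math

def sat_curve(t, asymptote, rate, intercept):
    """Shifted-exponential speed-accuracy tradeoff curve: accuracy rises
    from chance at time `intercept`, at speed `rate`, towards `asymptote`."""
    if t <= intercept:
        return 0.5  # chance level (binary judgment) before the intercept
    return 0.5 + (asymptote - 0.5) * (1.0 - math.exp(-rate * (t - intercept)))

# Illustrative parameters: unrelated targets reach ceiling, while related
# targets have a shallower slope and a lower (~80%) asymptote.
for t in (0.3, 0.6, 1.2, 2.4):
    unrelated = sat_curve(t, asymptote=1.0, rate=4.0, intercept=0.3)
    related = sat_curve(t, asymptote=0.8, rate=2.0, intercept=0.3)
    print(f"t={t:.1f}s  unrelated={unrelated:.2f}  related={related:.2f}")
```

    A lower slope means accuracy accrues more slowly with extra processing time; a lower asymptote means accuracy stays below ceiling no matter how long the response is delayed.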
  • Alhama, R. G., & Zuidema, W. (2019). A review of computational models of basic rule learning: The neural-symbolic debate and beyond. Psychonomic Bulletin & Review, 26(4), 1174-1194. doi:10.3758/s13423-019-01602-z.

    Abstract

    We present a critical review of computational models of generalization of simple grammar-like rules, such as ABA and ABB. In particular, we focus on models attempting to account for the empirical results of Marcus et al. (Science, 283(5398), 77–80, 1999). In that study, evidence is reported of generalization behavior by 7-month-old infants, using an Artificial Language Learning paradigm. The authors fail to replicate this behavior in neural network simulations, and claim that this failure reveals inherent limitations of a whole class of neural networks: those that do not incorporate symbolic operations. A great number of computational models were proposed in follow-up studies, fuelling a heated debate about what is required for a model to generalize. Twenty years later, this debate is still not settled. In this paper, we review a large number of the proposed models. We present a critical analysis of those models, in terms of how they contribute to answering the most relevant questions raised by the experiment. After identifying which aspects require further research, we propose a list of desiderata for advancing our understanding of generalization.
  • Alhama, R. G., & Zuidema, W. (2018). Pre-Wiring and Pre-Training: What Does a Neural Network Need to Learn Truly General Identity Rules? Journal of Artificial Intelligence Research, 61, 927-946. doi:10.1613/jair.1.11197.

    Abstract

    In an influential paper (“Rule Learning by Seven-Month-Old Infants”), Marcus, Vijayan, Rao and Vishton claimed that connectionist models cannot account for human success at learning tasks that involved generalization of abstract knowledge such as grammatical rules. This claim triggered a heated debate, centered mostly around variants of the Simple Recurrent Network model. In our work, we revisit this unresolved debate and analyze the underlying issues from a different perspective. We argue that, in order to simulate human-like learning of grammatical rules, a neural network model should not be used as a tabula rasa, but rather, the initial wiring of the neural connections and the experience acquired prior to the actual task should be incorporated into the model. We present two methods that aim to provide such initial state: a manipulation of the initial connections of the network in a cognitively plausible manner (concretely, by implementing a “delay-line” memory), and a pre-training algorithm that incrementally challenges the network with novel stimuli. We implement such techniques in an Echo State Network (ESN), and we show that only when combining both techniques the ESN is able to learn truly general identity rules. Finally, we discuss the relation between these cognitively motivated techniques and recent advances in Deep Learning.
  • Alhama, R. G., Siegelman, N., Frost, R., & Armstrong, B. C. (2019). The role of information in visual word recognition: A perceptually-constrained connectionist account. In A. Goel, C. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 83-89). Austin, TX: Cognitive Science Society.

    Abstract

    Proficient readers typically fixate near the center of a word, with a slight bias towards word onset. We explore a novel account of this phenomenon based on combining information-theory with visual perceptual constraints in a connectionist model of visual word recognition. This account posits that the amount of information-content available for word identification varies across fixation locations and across languages, thereby explaining the overall fixation location bias in different languages, making the novel prediction that certain words are more readily identified when fixating at an atypical fixation location, and predicting specific cross-linguistic differences. We tested these predictions across several simulations in English and Hebrew, and in a pilot behavioral experiment. Results confirmed that the bias to fixate closer to word onset aligns with maximizing information in the visual signal, that some words are more readily identified at atypical fixation locations, and that these effects vary to some degree across languages.
  • Alibali, M. W., Kita, S., & Young, A. J. (2000). Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes, 15(6), 593-613. doi:10.1080/016909600750040571.

    Abstract

    At what point in the process of speech production is gesture involved? According to the Lexical Retrieval Hypothesis, gesture is involved in generating the surface forms of utterances. Specifically, gesture facilitates access to items in the mental lexicon. According to the Information Packaging Hypothesis, gesture is involved in the conceptual planning of messages. Specifically, gesture helps speakers to "package" spatial information into verbalisable units. We tested these hypotheses in 5-year-old children, using two tasks that required comparable lexical access, but different information packaging. In the explanation task, children explained why two items did or did not have the same quantity (Piagetian conservation). In the description task, children described how two items looked different. Children provided comparable verbal responses across tasks; thus, lexical access was comparable. However, the demands for information packaging differed. Participants' gestures also differed across the tasks. In the explanation task, children produced more gestures that conveyed perceptual dimensions of the objects, and more gestures that conveyed information that differed from the accompanying speech. The results suggest that gesture is involved in the conceptual planning of speech.
  • Ambridge, B., & Rowland, C. F. (2013). Experimental methods in studying child language acquisition. Wiley Interdisciplinary Reviews: Cognitive Science, 4(2), 149-168. doi:10.1002/wcs.1215.

    Abstract

    This article reviews some of the most widely used methods for studying children's language acquisition, including (1) spontaneous/naturalistic, diary, and parental report data, (2) production methods (elicited production, repetition/elicited imitation, syntactic priming/weird word order), (3) comprehension methods (act-out, pointing, intermodal preferential looking, looking while listening, conditioned head turn preference procedure, functional neuroimaging), and (4) judgment methods (grammaticality/acceptability judgments, yes-no/truth-value judgments). The review outlines the types of studies and age groups to which each method is most suited, as well as the advantages and disadvantages of each. We conclude by summarising the particular methodological considerations that apply to each paradigm and to experimental design more generally. These include (1) choosing an age-appropriate task that makes communicative sense, (2) motivating children to co-operate, (3) choosing a between-/within-subjects design, (4) the use of novel items (e.g., novel verbs), (5) fillers, (6) blocked, counterbalanced, and random presentation, (7) the appropriate number of trials and participants, (8) drop-out rates, (9) the importance of control conditions, (10) choosing a sensitive dependent measure, (11) classification of responses, and (12) using an appropriate statistical test.
  • Ambridge, B., Pine, J. M., Rowland, C. F., Chang, F., & Bidgood, A. (2013). The retreat from overgeneralization in child language acquisition: Word learning, morphology, and verb argument structure. Wiley Interdisciplinary Reviews: Cognitive Science, 4(1), 47-62. doi:10.1002/wcs.1207.

    Abstract

    This review investigates empirical evidence for different theoretical proposals regarding the retreat from overgeneralization errors in three domains: word learning (e.g., *doggie to refer to all animals), morphology [e.g., *spyer, *cooker (one who spies/cooks), *unhate, *unsqueeze, *sitted; *drawed], and verb argument structure [e.g., *Don't giggle me (c.f. Don't make me giggle); *Don't say me that (c.f. Don't say that to me)]. The evidence reviewed provides support for three proposals. First, in support of the pre-emption hypothesis, the acquisition of competing forms that express the desired meaning (e.g., spy for *spyer, sat for *sitted, and Don't make me giggle for *Don't giggle me) appears to block errors. Second, in support of the entrenchment hypothesis, repeated occurrence of particular items in particular constructions (e.g., giggle in the intransitive construction) appears to contribute to an ever strengthening probabilistic inference that non-attested uses (e.g., *Don't giggle me) are ungrammatical for adult speakers. That is, both the rated acceptability and production probability of particular errors decline with increasing frequency of pre-empting and entrenching forms in the input. Third, learners appear to acquire semantic and morphophonological constraints on particular constructions, conceptualized as properties of slots in constructions [e.g., the (VERB) slot in the morphological un-(VERB) construction or the transitive-causative (SUBJECT) (VERB) (OBJECT) argument-structure construction]. Errors occur as children acquire the fine-grained semantic and morphophonological properties of particular items and construction slots, and so become increasingly reluctant to use items in slots with which they are incompatible. Findings also suggest some role for adult feedback and conventionality; the principle that, for many given meanings, there is a conventional form that is used by all members of the speech community.
  • Ameka, F. K. (1987). A comparative analysis of linguistic routines in two languages: English and Ewe. Journal of Pragmatics, 11(3), 299-326. doi:10.1016/0378-2166(87)90135-4.

    Abstract

    It is widely acknowledged that linguistic routines are not only embodiments of the sociocultural values of the speech communities that use them, but that knowledge and appropriate use of them also form an essential part of a speaker's communicative/pragmatic competence. Despite this, many studies concentrate on describing the use of routines rather than explaining the socio-cultural aspects of their meaning and the way these affect their use. The contention of this paper is that there is a need to go beyond description to explanations and explications of the use and meaning of routines that are culturally and socially revealing. This view is illustrated by a comparative analysis of functionally equivalent formulaic expressions in English and Ewe. The similarities are noted, and the differences are explained in terms of the socio-cultural traditions associated with the respective languages. It is argued that insights gained from such studies are valuable for cross-cultural understanding and communication as well as for second language pedagogy.
  • Ameka, F. K. (1989). [Review of The case for lexicase: An outline of lexicase grammatical theory by Stanley Starosta]. Studies in Language, 13(2), 506-518.
  • Ameka, F. K., & Essegbey, J. (2013). Serialising languages: Satellite-framed, verb-framed or neither. Ghana Journal of Linguistics, 2(1), 19-38.

    Abstract

    The diversity in the coding of the core schema of motion, i.e., Path, has led to a traditional typology of languages into verb-framed and satellite-framed languages. In the former, Path is encoded in verbs; in the latter, it is encoded in non-verb elements that function as sisters to co-event-expressing verbs such as manner verbs. Verb serializing languages pose a challenge to this typology, as they express Path as well as the co-event of manner in finite verbs that together function as a single predicate in a translational motion clause. We argue that these languages do not fit in the typology and constitute a type of their own. We draw on data from Akan and from Frog story narrations in Ewe, a Kwa language, and Sranan, a Caribbean Creole with Gbe substrate, to show that in terms of discourse properties verb serializing languages behave like verb-framed languages with respect to some properties and like satellite-framed languages in terms of others. This study fed into the revision of the typology, and such languages are now said to be equipollently-framed languages.
  • Ameka, F. K. (2013). Possessive constructions in Likpe (Sɛkpɛlé). In A. Aikhenvald, & R. Dixon (Eds.), Possession and ownership: A crosslinguistic typology (pp. 224-242). Oxford: Oxford University Press.
  • Anastasopoulos, A., Lekakou, M., Quer, J., Zimianiti, E., DeBenedetto, J., & Chiang, D. (2018). Part-of-speech tagging on an endangered language: a parallel Griko-Italian Resource. In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018) (pp. 2529-2539).

    Abstract

    Most work on part-of-speech (POS) tagging is focused on high resource languages, or examines low-resource and active learning settings through simulated studies. We evaluate POS tagging techniques on an actual endangered language, Griko. We present a resource that contains 114 narratives in Griko, along with sentence-level translations in Italian, and provides gold annotations for the test set. Based on a previously collected small corpus, we investigate several traditional methods, as well as methods that take advantage of monolingual data or project cross-lingual POS tags. We show that the combination of a semi-supervised method with cross-lingual transfer is more appropriate for this extremely challenging setting, with the best tagger achieving an accuracy of 72.9%. With an applied active learning scheme, which we use to collect sentence-level annotations over the test set, we achieve improvements of more than 21 percentage points.
  • Andics, A. (2013). Who is talking? Behavioural and neural evidence for norm-based coding in voice identity learning. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Andics, A., Gál, V., Vicsi, K., Rudas, G., & Vidnyánszky, Z. (2013). FMRI repetition suppression for voices is modulated by stimulus expectations. NeuroImage, 69, 277-283. doi:10.1016/j.neuroimage.2012.12.033.

    Abstract

    According to predictive coding models of sensory processing, stimulus expectations have a profound effect on sensory cortical responses. This was supported by experimental results, showing that fMRI repetition suppression (fMRI RS) for face stimuli is strongly modulated by the probability of stimulus repetitions throughout the visual cortical processing hierarchy. To test whether processing of voices is also affected by stimulus expectations, here we investigated the effect of repetition probability on fMRI RS in voice-selective cortical areas. Changing (‘alt’) and identical (‘rep’) voice stimulus pairs were presented to the listeners in blocks, with a varying probability of alt and rep trials across blocks. We found auditory fMRI RS in the nonprimary voice-selective cortical regions, including the bilateral posterior STS, the right anterior STG and the right IFC, as well as in the IPL. Importantly, fMRI RS effects in all of these areas were strongly modulated by the probability of stimulus repetition: auditory fMRI RS was reduced or not present in blocks with low repetition probability. Our results revealed that auditory fMRI RS in higher-level voice-selective cortical regions is modulated by repetition probabilities and thus suggest that in audition, similarly to the visual modality, processing of sensory information is shaped by stimulus expectation processes.
  • Andics, A., McQueen, J. M., & Petersson, K. M. (2013). Mean-based neural coding of voices. NeuroImage, 79, 351-360. doi:10.1016/j.neuroimage.2013.05.002.

    Abstract

    The social significance of recognizing the person who talks to us is obvious, but the neural mechanisms that mediate talker identification are unclear. Regions along the bilateral superior temporal sulcus (STS) and the inferior frontal cortex (IFC) of the human brain are selective for voices, and they are sensitive to rapid voice changes. Although it has been proposed that voice recognition is supported by prototype-centered voice representations, the involvement of these category-selective cortical regions in the neural coding of such "mean voices" has not previously been demonstrated. Using fMRI in combination with a voice identity learning paradigm, we show that voice-selective regions are involved in the mean-based coding of voice identities. Voice typicality is encoded on a supra-individual level in the right STS along a stimulus-dependent, identity-independent (i.e., voice-acoustic) dimension, and on an intra-individual level in the right IFC along a stimulus-independent, identity-dependent (i.e., voice identity) dimension. Voice recognition therefore entails at least two anatomically separable stages, each characterized by neural mechanisms that reference the central tendencies of voice categories.
  • Araújo, S., Fernandes, T., & Huettig, F. (2019). Learning to read facilitates retrieval of phonological representations in rapid automatized naming: Evidence from unschooled illiterate, ex-illiterate, and schooled literate adults. Developmental Science, 22(4): e12783. doi:10.1111/desc.12783.

    Abstract

    Rapid automatized naming (RAN) of visual items is a powerful predictor of reading skills. However, the direction and locus of the association between RAN and reading is still largely unclear. Here we investigated whether literacy acquisition directly bolsters RAN efficiency for objects, adopting a strong methodological design, by testing three groups of adults matched in age and socioeconomic variables, who differed only in literacy/schooling: unschooled illiterate and ex-illiterate, and schooled literate adults. To investigate in a fine-grained manner whether and how literacy facilitates lexical retrieval, we orthogonally manipulated the word-form frequency (high vs. low) and phonological neighborhood density (dense vs. sparse) of the objects’ names. We observed that literacy experience enhances the automaticity with which visual stimuli (e.g., objects) can be retrieved and named: relative to readers (ex-illiterate and literate), illiterate adults performed worse on RAN. Crucially, the group difference was exacerbated and significant only for those items that were of low frequency and from sparse neighborhoods. These results thus suggest that, regardless of schooling and age at which literacy was acquired, learning to read facilitates the access to and retrieval of phonological representations, especially of difficult lexical items.
  • Armeni, K., Willems, R. M., Van den Bosch, A., & Schoffelen, J.-M. (2019). Frequency-specific brain dynamics related to prediction during language comprehension. NeuroImage, 198, 283-295. doi:10.1016/j.neuroimage.2019.04.083.

    Abstract

    The brain's remarkable capacity to process spoken language virtually in real time requires fast and efficient information processing machinery. In this study, we investigated how frequency-specific brain dynamics relate to models of probabilistic language prediction during auditory narrative comprehension. We recorded MEG activity while participants were listening to auditory stories in Dutch. Using trigram statistical language models, we estimated for every word in a story its conditional probability of occurrence. On the basis of word probabilities, we computed how unexpected the current word is given its context (word perplexity) and how (un)predictable the current linguistic context is (word entropy). We then evaluated whether source-reconstructed MEG oscillations at different frequency bands are modulated as a function of these language processing metrics. We show that theta-band source dynamics are increased in high relative to low entropy states, likely reflecting lexical computations. Beta-band dynamics are increased in situations of low word entropy and perplexity possibly reflecting maintenance of ongoing cognitive context. These findings lend support to the idea that the brain engages in the active generation and evaluation of predicted language based on the statistical properties of the input signal.

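    The two metrics in the abstract above have standard information-theoretic definitions: the unexpectedness of the current word (its surprisal) is -log2 P(word | context), and word entropy is the entropy of the distribution over possible next words given the context. A minimal sketch with a toy trigram model estimated by relative frequency; the corpus is invented for illustration, whereas the study's models were trained on large Dutch corpora.

```python
import math
from collections import Counter, defaultdict

# Toy corpus standing in for real training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Trigram counts: P(w_i | w_{i-2}, w_{i-1}) by relative frequency.
trigram = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    trigram[(w1, w2)][w3] += 1

def surprisal(context, word):
    """Unexpectedness of `word` given the two preceding words, in bits."""
    counts = trigram[context]
    return -math.log2(counts[word] / sum(counts.values()))

def entropy(context):
    """Uncertainty about the next word given the context, in bits."""
    counts = trigram[context]
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(surprisal(("the", "cat"), "sat"))  # "the cat" continues as sat/ate
print(entropy(("the", "cat")))           # two equiprobable continuations
```

    High entropy marks an unpredictable context before the next word arrives; high surprisal marks a word that turned out to be unexpected once it did.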
  • Arshamian, A., Iravani, B., Majid, A., & Lundström, J. N. (2018). Respiration modulates olfactory memory consolidation in humans. The Journal of Neuroscience, 38(48), 10286-10294. doi:10.1523/JNEUROSCI.3360-17.2018.

    Abstract

    In mammals, respiratory-locked hippocampal rhythms are implicated in the scaffolding and transfer of information between sensory and memory networks. These oscillations are entrained by nasal respiration and driven by the olfactory bulb. They then travel to the piriform cortex where they propagate further downstream to the hippocampus and modulate neural processes critical for memory formation. In humans, bypassing nasal airflow through mouth-breathing abolishes these rhythms and impacts encoding as well as recognition processes, thereby reducing memory performance. It has been hypothesized that similar behavior should be observed for the consolidation process, the stage between encoding and recognition, where memory is reactivated and strengthened. However, direct evidence for such an effect is lacking in human and non-human animals. Here we tested this hypothesis by examining the effect of respiration on consolidation of episodic odor memory. In two separate sessions, female and male participants encoded odors followed by a one-hour awake resting consolidation phase where they either breathed solely through their nose or mouth. Immediately after the consolidation phase, memory for odors was tested. Recognition memory significantly increased during nasal respiration compared to mouth respiration during consolidation. These results provide the first evidence that respiration directly impacts consolidation of episodic events, and lend further support to the notion that core cognitive functions are modulated by the respiratory cycle.
  • Asaridou, S. S., & McQueen, J. M. (2013). Speech and music shape the listening brain: Evidence for shared domain-general mechanisms. Frontiers in Psychology, 4: 321. doi:10.3389/fpsyg.2013.00321.

    Abstract

    Are there bi-directional influences between speech perception and music perception? An answer to this question is essential for understanding the extent to which the speech and music that we hear are processed by domain-general auditory processes and/or by distinct neural auditory mechanisms. This review summarizes a large body of behavioral and neuroscientific findings which suggest that the musical experience of trained musicians does modulate speech processing, and a sparser set of data, largely on pitch processing, which suggest in addition that linguistic experience, in particular learning a tone language, modulates music processing. Although research has focused mostly on the effects of music on speech, we argue that both directions of influence need to be studied, and conclude that the picture which thus emerges is one of mutual interaction across domains. In particular, it is not simply that experience with spoken language has some effects on music perception, and vice versa, but that because of shared domain-general subcortical and cortical networks, experiences in both domains influence behavior in both domains.
  • Ayub, Q., Yngvadottir, B., Chen, Y., Xue, Y., Hu, M., Vernes, S. C., Fisher, S. E., & Tyler-Smith, C. (2013). FOXP2 targets show evidence of positive selection in European populations. American Journal of Human Genetics, 92, 696-706. doi:10.1016/j.ajhg.2013.03.019.

    Abstract

    Forkhead box P2 (FOXP2) is a highly conserved transcription factor that has been implicated in human speech and language disorders and plays important roles in the plasticity of the developing brain. The pattern of nucleotide polymorphisms in FOXP2 in modern populations suggests that it has been the target of positive (Darwinian) selection during recent human evolution. In our study, we searched for evidence of selection that might have followed FOXP2 adaptations in modern humans. We examined whether or not putative FOXP2 targets identified by chromatin-immunoprecipitation genomic screening show evidence of positive selection. We developed an algorithm that, for any given gene list, systematically generates matched lists of control genes from the Ensembl database, collates summary statistics for three frequency-spectrum-based neutrality tests from the low-coverage resequencing data of the 1000 Genomes Project, and determines whether these statistics are significantly different between the given gene targets and the set of controls. Overall, there was strong evidence of selection of FOXP2 targets in Europeans, but not in the Han Chinese, Japanese, or Yoruba populations. Significant outliers included several genes linked to cellular movement, reproduction, development, and immune cell trafficking, and 13 of these constituted a significant network associated with cardiac arteriopathy. Strong signals of selection were observed for CNTNAP2 and RBFOX1, key neurally expressed genes that have been consistently identified as direct FOXP2 targets in multiple studies and that have themselves been associated with neurodevelopmental disorders involving language dysfunction.
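    The control-list comparison described above can be sketched as a resampling test. Gene names, statistic values, and the matching criterion are illustrative assumptions (here controls are matched only on list size, whereas the study matched control genes on gene properties and used established neutrality-test statistics):

```python
import random

# Made-up pool of genes with one summary statistic each (e.g. a
# frequency-spectrum-based neutrality test value); lower = more selection-like.
random.seed(1)
pool = {f"gene{i}": random.gauss(0.0, 1.0) for i in range(1000)}

# Hypothetical target list whose statistics are shifted downward,
# mimicking a set of genes under selection.
targets = {g: pool[g] - 1.5 for g in list(pool)[:40]}

def empirical_p(target_stats, control_pool, n_draws=10_000, seed=2):
    """One-sided resampling p-value: fraction of same-size control lists
    whose mean statistic is at least as low as the target mean."""
    rng = random.Random(seed)
    target_mean = sum(target_stats.values()) / len(target_stats)
    values = list(control_pool.values())
    hits = 0
    for _ in range(n_draws):
        draw = rng.sample(values, len(target_stats))
        if sum(draw) / len(draw) <= target_mean:
            hits += 1
    return hits / n_draws

p = empirical_p(targets, pool)
print(p)  # small: the shifted targets are unusual relative to control draws
```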
  • Azar, Z., Backus, A., & Ozyurek, A. (2019). General and language specific factors influence reference tracking in speech and gesture in discourse. Discourse Processes, 56(7), 553-574. doi:10.1080/0163853X.2018.1519368.

    Abstract

    Referent accessibility influences expressions in speech and gestures in similar ways. Speakers mostly use richer forms as noun phrases (NPs) in speech and gesture more when referents have low accessibility, whereas they use reduced forms such as pronouns more often and gesture less when referents have high accessibility. We investigated the relationships between speech and gesture during reference tracking in a pro-drop language—Turkish. Overt pronouns were not strongly associated with accessibility but with pragmatic context (i.e., marking similarity, contrast). Nevertheless, speakers gestured more when referents were re-introduced versus maintained and when referents were expressed with NPs versus pronouns. Pragmatic context did not influence gestures. Further, pronouns in low-accessibility contexts were accompanied with gestures—possibly for reference disambiguation—more often than previously found for non-pro-drop languages in such contexts. These findings enhance our understanding of the relationships between speech and gesture at the discourse level.
  • Badimala, P., Mishra, C., Venkataramana, R. K. M., Bukhari, S. S., & Dengel, A. (2019). A Study of Various Text Augmentation Techniques for Relation Classification in Free Text. In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods (pp. 360-367). Setúbal, Portugal: SciTePress Digital Library. doi:10.5220/0007311003600367.

    Abstract

    Data augmentation techniques have been widely used in visual recognition tasks, as it is easy to generate new data by simple and straightforward image transformations. However, when it comes to text data augmentation, it is difficult to find appropriate transformation techniques that also preserve the contextual and grammatical structure of language texts. In this paper, we explore various text data augmentation techniques in text space and word embedding space. We study the effect of various augmented datasets on the efficiency of different deep learning models for relation classification in text.
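    Two of the simplest text-space transformations of the kind surveyed can be sketched as follows; these are generic examples (random swap and random deletion), not the paper's specific techniques, and the sample sentence is made up:

```python
import random

def swap_augment(sentence, n_swaps=1, seed=0):
    """Text-space augmentation: swap n randomly chosen word pairs."""
    rng = random.Random(seed)
    words = sentence.split()
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return " ".join(words)

def delete_augment(sentence, p=0.2, seed=0):
    """Drop each word with probability p, keeping at least one word."""
    rng = random.Random(seed)
    words = [w for w in sentence.split() if rng.random() > p]
    return " ".join(words) if words else sentence

original = "the enzyme binds the receptor protein"
print(swap_augment(original))
print(delete_augment(original))
```

    Both transformations risk damaging the grammatical structure that relation classification depends on, which is exactly the difficulty the abstract points out.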
  • Bakker-Marshall, I., Takashima, A., Schoffelen, J.-M., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2018). Theta-band Oscillations in the Middle Temporal Gyrus Reflect Novel Word Consolidation. Journal of Cognitive Neuroscience, 30(5), 621-633. doi:10.1162/jocn_a_01240.

    Abstract

    Like many other types of memory formation, novel word learning benefits from an offline consolidation period after the initial encoding phase. A previous EEG study has shown that retrieval of novel words elicited more word-like-induced electrophysiological brain activity in the theta band after consolidation [Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. Changes in theta and beta oscillations as signatures of novel word consolidation. Journal of Cognitive Neuroscience, 27, 1286–1297, 2015]. This suggests that theta-band oscillations play a role in lexicalization, but it has not been demonstrated that this effect is directly caused by the formation of lexical representations. This study used magnetoencephalography to localize the theta consolidation effect to the left posterior middle temporal gyrus (pMTG), a region known to be involved in lexical storage. Both untrained novel words and words learned immediately before test elicited lower theta power during retrieval than existing words in this region. After a 24-hr consolidation period, the difference between novel and existing words decreased significantly, most strongly in the left pMTG. The magnitude of the decrease after consolidation correlated with an increase in behavioral competition effects between novel words and existing words with similar spelling, reflecting functional integration into the mental lexicon. These results thus provide new evidence that consolidation aids the development of lexical representations mediated by the left pMTG. Theta synchronization may enable lexical access by facilitating the simultaneous activation of distributed semantic, phonological, and orthographic representations that are bound together in the pMTG.
  • Balakrishnan, B., Verheijen, J., Lupo, A., Raymond, K., Turgeon, C., Yang, Y., Carter, K. L., Whitehead, K. J., Kozicz, T., Morava, E., & Lai, K. (2019). A novel phosphoglucomutase-deficient mouse model reveals aberrant glycosylation and early embryonic lethality. Journal of Inherited Metabolic Disease, 42(5), 998-1007. doi:10.1002/jimd.12110.

    Abstract

    Patients with phosphoglucomutase (PGM1) deficiency, a congenital disorder of glycosylation (CDG), suffer from multiple disease phenotypes. Midline cleft defects are present at birth. Over time, additional clinical phenotypes emerge, including severe hypoglycemia, hepatopathy, growth retardation, hormonal deficiencies, hemostatic anomalies, and frequently lethal, early-onset dilated cardiomyopathy and myopathy, reflecting the central roles of the enzyme in (glycogen) metabolism and glycosylation. To delineate the pathophysiology of the tissue-specific disease phenotypes, we constructed a constitutive Pgm2 (mouse ortholog of human PGM1)-knockout (KO) mouse model using CRISPR-Cas9 technology. After multiple crosses between heterozygous parents, we were unable to identify homozygous live births in 78 newborn pups (P = 1.59897E-06), suggesting an embryonic lethality phenotype in the homozygotes. Ultrasound studies of the course of pregnancy confirmed Pgm2-deficient pups succumb before E9.5. Oral galactose supplementation (9 mg/mL drinking water) did not rescue the lethality. Biochemical studies of tissues and skin fibroblasts harvested from heterozygous animals confirmed reduced Pgm2 enzyme activity and abundance, but no change in glycogen content. However, glycomics analyses in serum revealed an abnormal glycosylation pattern in the Pgm2(+/-) animals, similar to that seen in PGM1-CDG.
  • Barendse, M. T., Oort, F. J., Jak, S., & Timmerman, M. E. (2013). Multilevel exploratory factor analysis of discrete data. Netherlands Journal of Psychology, 67(4), 114-121.
  • Baron-Cohen, S., Johnson, D., Asher, J. E., Wheelwright, S., Fisher, S. E., Gregersen, P. K., & Allison, C. (2013). Is synaesthesia more common in autism? Molecular Autism, 4(1): 40. doi:10.1186/2040-2392-4-40.

    Abstract

    BACKGROUND:
    Synaesthesia is a neurodevelopmental condition in which a sensation in one modality triggers a perception in a second modality. Autism (shorthand for Autism Spectrum Conditions) is a neurodevelopmental condition involving social-communication disability alongside resistance to change and unusually narrow interests or activities. Whilst on the surface they appear distinct, they have been suggested to share common atypical neural connectivity.

    METHODS:
    In the present study, we carried out the first prevalence study of synaesthesia in autism to formally test whether these conditions are independent. After exclusions, 164 adults with autism and 97 controls completed a synaesthesia questionnaire, autism spectrum quotient, and test of genuineness-revised (ToG-R) online.

    RESULTS:
    The rate of synaesthesia in adults with autism was 18.9% (31 out of 164), almost three times greater than in controls (7.22%, 7 out of 97, P <0.05). ToG-R proved unsuitable for synaesthetes with autism.

    CONCLUSIONS:
    The significant increase in synaesthesia prevalence in autism suggests that the two conditions may share some common underlying mechanisms. Future research is needed to develop more feasible validation methods of synaesthesia in autism.

  • Barthel, M., & Sauppe, S. (2019). Speech planning at turn transitions in dialogue is associated with increased processing load. Cognitive Science, 43(7): e12768. doi:10.1111/cogs.12768.

    Abstract

    Speech planning is a sophisticated process. In dialog, it regularly starts in overlap with an incoming turn by a conversation partner. We show that planning spoken responses in overlap with incoming turns is associated with higher processing load than planning in silence. In a dialogic experiment, participants took turns with a confederate describing lists of objects. The confederate’s utterances (to which participants responded) were pre‐recorded and varied in whether they ended in a verb or an object noun and whether this ending was predictable or not. We found that response planning in overlap with sentence‐final verbs evokes larger task‐evoked pupillary responses, while end predictability had no effect. This finding indicates that planning in overlap leads to higher processing load for next speakers in dialog and that next speakers do not proactively modulate the time course of their response planning based on their predictions of turn endings. The turn‐taking system exerts pressure on the language processing system by pushing speakers to plan in overlap despite the ensuing increase in processing load.
  • Basnakova, J. (2019). Beyond the language given: The neurobiological infrastructure for pragmatic inferencing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bastiaansen, M. C. M., & Knösche, T. R. (2000). MEG tangential derivative mapping applied to Event-Related Desynchronization (ERD) research. Clinical Neurophysiology, 111, 1300-1305.

    Abstract

    Objectives: A problem with the topographic mapping of MEG data recorded with axial gradiometers is that field extrema are measured at sensors located at either side of a neuronal generator instead of at sensors directly above the source. This is problematic for the computation of event-related desynchronization (ERD) on MEG data, since ERD relies on a correspondence between the signal maximum and the location of the neuronal generator. Methods: We present a new method based on computing spatial derivatives of the MEG data. The limitations of this method were investigated by means of forward simulations, and the method was applied to a 150-channel MEG dataset. Results: The simulations showed that the method has some limitations. (1) Fewer channels reduce accuracy and amplitude. (2) It is less suitable for deep or very extended sources. (3) Multiple sources can only be distinguished if they are not too close to each other. Applying the method in the calculation of ERD on experimental data led to a considerable improvement of the ERD maps. Conclusions: The proposed method offers a significant advantage over raw MEG signals, both for the topographic mapping of MEG and for the analysis of rhythmic MEG activity by means of ERD.
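    The spatial-derivative idea can be sketched on a regular grid. This is a toy illustration under simplifying assumptions (an idealized dipolar field on a square grid, whereas real MEG sensor arrays are irregular and require interpolation); it only shows why the derivative map peaks above the source rather than at the flanking field extrema:

```python
import numpy as np

# Idealized axial-gradiometer field with a source at the grid center:
# one positive and one negative lobe flanking the origin.
x, y = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41))
field = x * np.exp(-(x**2 + y**2) / 0.1)

# Tangential derivative map: magnitude of the 2-D spatial gradient.
dfy, dfx = np.gradient(field)  # gradient along rows (y) first, then columns (x)
tangential = np.hypot(dfx, dfy)

# The raw field extremum sits to the side of the source; the derivative
# map peaks at the central source position, directly above the generator.
peak_raw = np.unravel_index(np.argmax(np.abs(field)), field.shape)
peak_td = np.unravel_index(np.argmax(tangential), tangential.shape)
print(peak_raw, peak_td)
```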
  • Bauer, B. L. M. (2000). Archaic syntax in Indo-European: The spread of transitivity in Latin and French. Berlin: Mouton de Gruyter.

    Abstract

    Several grammatical features in early Indo-European traditionally have not been understood. Although Latin, for example, was a nominative language, a number of its inherited characteristics do not fit that typology and are difficult to account for, such as stative mihi est constructions to express possession, impersonal verbs, or absolute constructions. With time these archaic features have been replaced by transitive structures (e.g. possessive ‘have’). This book presents an extensive comparative and historical analysis of archaic features in early Indo-European languages and their gradual replacement in the history of Latin and early Romance, showing that the new structures feature transitive syntax and fit the patterns of a nominative language.
  • Bauer, B. L. M. (2000). From Latin to French: The linear development of word order. In B. Bichakjian, T. Chernigovskaya, A. Kendon, & A. Müller (Eds.), Becoming Loquens: More studies in language origins (pp. 239-257). Frankfurt am Main: Lang.
  • Bauer, B. L. M. (2013). Impersonal verbs. In G. K. Giannakis (Ed.), Encyclopedia of Ancient Greek Language and Linguistics Online (pp. 197-198). Leiden: Brill. doi:10.1163/2214-448X_eagll_SIM_00000481.

    Abstract

    Impersonal verbs in Greek ‒ as in the other Indo-European languages ‒ exclusively feature 3rd person singular finite forms and convey one of three types of meaning: (a) meteorological conditions; (b) emotional and physical state/experience; (c) modality. In Greek, impersonal verbs predominantly convey meteorological conditions and modality.

  • Bauer, B. L. M. (1987). L’évolution des structures morphologiques et syntaxiques du latin au français. Travaux de linguistique, 14-15, 95-107.
  • Bauer, B. L. M. (2019). Language contact and language borrowing? Compound verb forms in the Old French translation of the Gospel of St. Mark. Belgian Journal of Linguistics, 33, 210-250. doi:10.1075/bjl.00028.bau.

    Abstract

    This study investigates the potential influence of Latin syntax on the development of analytic verb forms in a well-defined and concrete instance of language contact, the Old French translation of a Latin Gospel. The data show that the formation of verb forms in the Old French was remarkably independent from the Latin original. While the Old French text closely follows the narrative of the Latin Gospel, its usage of compound verb forms is not dictated by the source text, as reflected e.g. in the quasi-omnipresence of the relative sequence finite verb + pp, instances of which – with a few exceptions – all trace back to a different structure in the Latin text. Another important innovative difference in the Old French is the widespread use of aveir ‘have’ as an auxiliary, unknown in Latin. The article examines in detail the relation between the verbal forms in the two texts, showing that the translation is in line with the grammar of Old French. The usage of compound verb forms in the Old French Gospel is therefore autonomous rather than contact-stimulated, let alone contact-induced. The results challenge Blatt’s (1957) assumption identifying compound verb forms as a shared feature in European languages that should be ascribed to Latin influence.

  • Bauer, B. L. M., & Mota, M. (2018). On language, cognition, and the brain: An interview with Peter Hagoort. Sobre linguagem, cognição e cérebro: Uma entrevista com Peter Hagoort. Revista da Anpoll, (45), 291-296. doi:10.18309/anp.v1i45.1179.

    Abstract

    Managing Director of the Max Planck Institute for Psycholinguistics, founding Director of the Donders Centre for Cognitive Neuroimaging (DCCN, 1999), and professor of Cognitive Neuroscience at Radboud University, all located in Nijmegen, the Netherlands, PETER HAGOORT examines how the brain controls language production and comprehension. He was one of the first to integrate psychological theory and models from neuroscience in an attempt to understand how the human language faculty is instantiated in the brain.
  • Bavin, E. L., & Kidd, E. (2000). Learning new verbs: Beyond the input. In C. Davis, T. J. Van Gelder, & R. Wales (Eds.), Cognitive Science in Australia, 2000: Proceedings of the Fifth Biennial Conference of the Australasian Cognitive Science Society.
  • Becker, M., Devanna, P., Fisher, S. E., & Vernes, S. C. (2018). Mapping of Human FOXP2 Enhancers Reveals Complex Regulation. Frontiers in Molecular Neuroscience, 11: 47. doi:10.3389/fnmol.2018.00047.

    Abstract

    Mutations of the FOXP2 gene cause a severe speech and language disorder, providing a molecular window into the neurobiology of language. Individuals with FOXP2 mutations have structural and functional alterations affecting brain circuits that overlap with sites of FOXP2 expression, including regions of the cortex, striatum, and cerebellum. FOXP2 displays complex patterns of expression in the brain, as well as in non-neuronal tissues, suggesting that sophisticated regulatory mechanisms control its spatio-temporal expression. However, to date, little is known about the regulation of FOXP2 or the genomic elements that control its expression. Using chromatin conformation capture (3C), we mapped the human FOXP2 locus to identify putative enhancer regions that engage in long-range interactions with the promoter of this gene. We demonstrate the ability of the identified enhancer regions to drive gene expression. We also show regulation of the FOXP2 promoter and enhancer regions by candidate regulators – FOXP family and TBR1 transcription factors. These data point to regulatory elements that may contribute to the temporal- or tissue-specific expression patterns of human FOXP2. Understanding the upstream regulatory pathways controlling FOXP2 expression will bring new insight into the molecular networks contributing to human language and related disorders.
  • Becker, R., Pefkou, M., Michel, C. M., & Hervais-Adelman, A. (2013). Left temporal alpha-band activity reflects single word intelligibility. Frontiers in Systems Neuroscience, 7: 121. doi:10.3389/fnsys.2013.00121.

    Abstract

    The electroencephalographic (EEG) correlates of degraded speech perception have been explored in a number of recent studies. However, such investigations have often been inconclusive as to whether observed differences in brain responses between conditions result from different acoustic properties of more or less intelligible stimuli or whether they relate to cognitive processes implicated in comprehending challenging stimuli. In this study we used noise vocoding to spectrally degrade monosyllabic words in order to manipulate their intelligibility. We used spectral rotation to generate incomprehensible control conditions matched in terms of spectral detail. We recorded EEG from 14 volunteers who listened to a series of noise vocoded (NV) and noise-vocoded spectrally-rotated (rNV) words, while they carried out a detection task. We specifically sought components of the EEG response that showed an interaction between spectral rotation and spectral degradation. This reflects those aspects of the brain electrical response that are related to the intelligibility of acoustically degraded monosyllabic words, while controlling for spectral detail. An interaction between spectral complexity and rotation was apparent in both evoked and induced activity. Analyses of event-related potentials showed an interaction effect for a P300-like component at several centro-parietal electrodes. Time-frequency analysis of the EEG signal in the alpha-band revealed a monotonic increase in event-related desynchronization (ERD) for the NV but not the rNV stimuli in the alpha band at a left temporo-central electrode cluster from 420-560 ms reflecting a direct relationship between the strength of alpha-band ERD and intelligibility. By matching NV words with their incomprehensible rNV homologues, we reveal the spatiotemporal pattern of evoked and induced processes involved in degraded speech perception, largely uncontaminated by purely acoustic effects.
  • Becker, A., & Klein, W. (1984). Notes on the internal organization of a learner variety. In P. Auer, & A. Di Luzio (Eds.), Interpretive sociolinguistics (pp. 215-231). Tübingen: Narr.
  • Beckmann, N. S., Indefrey, P., & Petersen, W. (2018). Words count, but thoughts shift: A frame-based account to conceptual shifts in noun countability. Voprosy Kognitivnoy Lingvistiki (Issues of Cognitive Linguistics ), 2, 79-89. doi:10.20916/1812-3228-2018-2-79-89.

    Abstract

    The current paper proposes a frame-based account of conceptual shifts in the countability domain. We interpret shifts in noun countability as syntactically driven metonymy. Inserting a noun in an incongruent noun phrase, that is, combining it with a determiner of the other countability class, gives rise to a re-interpretation of the noun referent. We assume lexical entries to be three-fold frame complexes connecting conceptual knowledge representations with language-specific form representations via a lemma level. Empirical data from a lexical decision experiment are presented that support the assumption of such a lemma level connecting perceptual input of linguistic signs to conceptual knowledge.
  • Behrens, B., Flecken, M., & Carroll, M. (2013). Progressive Attraction: On the Use and Grammaticalization of Progressive Aspect in Dutch, Norwegian, and German. Journal of Germanic linguistics, 25(2), 95-136. doi:10.1017/S1470542713000020.

    Abstract

    This paper investigates the use of aspectual constructions in Dutch, Norwegian, and German, languages in which aspect marking that presents events explicitly as ongoing is optional. Data were elicited under similar conditions with native speakers in the three countries. We show that while German speakers make insignificant use of aspectual constructions, usage patterns in Norwegian and Dutch present an interesting case of overlap, as well as differences, with respect to a set of factors that attract or constrain the use of different constructions. The results indicate that aspect marking is grammaticalizing in Dutch, but there are no clear signs of a similar process in Norwegian.
  • Bekemeier, N., Brenner, D., Klepp, A., Biermann-Ruben, K., & Indefrey, P. (2019). Electrophysiological correlates of concept type shifts. PLoS One, 14(3): e0212624. doi:10.1371/journal.pone.0212624.

    Abstract

    A recent semantic theory of nominal concepts by Löbner [1] posits that–due to their inherent uniqueness and relationality properties–noun concepts can be classified into four concept types (CTs): sortal, individual, relational, functional. For sortal nouns the default determination is indefinite (a stone), for individual nouns it is definite (the sun), for relational and functional nouns it is possessive (his ear, his father). Incongruent determination leads to a concept type shift: his father (functional concept: unique, relational)–a father (sortal concept: non-unique, non-relational). Behavioral studies on CT shifts have demonstrated a CT congruence effect, with congruent determiners triggering faster lexical decision times on the subsequent noun than incongruent ones [2, 3]. The present ERP study investigated electrophysiological correlates of congruent and incongruent determination in German noun phrases, and specifically, whether the CT congruence effect could be indexed by such classic ERP components as N400, LAN or P600. If incongruent determination affects the lexical retrieval or semantic integration of the noun, it should be reflected in the amplitude of the N400 component. If, however, CT congruence is processed by the same neuronal mechanisms that underlie morphosyntactic processing, incongruent determination should trigger LAN or/and P600. These predictions were tested in two ERP studies. In Experiment 1, participants just listened to noun phrases. In Experiment 2, they performed a wellformedness judgment task. The processing of (in)congruent CTs (his sun vs. the sun) was compared to the processing of morphosyntactic and semantic violations in control conditions. Whereas the control conditions elicited classic electrophysiological violation responses (N400, LAN, & P600), CT-incongruences did not. Instead they showed novel concept-type specific response patterns. The absence of the classic ERP components suggests that CT-incongruent determination is not perceived as a violation of the semantic or morphosyntactic structure of the noun phrase.

  • Belpaeme, T., Vogt, P., Van den Berghe, R., Bergmann, K., Göksun, T., De Haas, M., Kanero, J., Kennedy, J., Küntay, A. C., Oudgenoeg-Paz, O., Papadopoulos, F., Schodde, T., Verhagen, J., Wallbridge, C. D., Willemsen, B., De Wit, J., Geçkin, V., Hoffmann, L., Kopp, S., Krahmer, E., Mamus, E., Montanier, J.-M., Oranç, C., & Pandey, A. K. (2018). Guidelines for designing social robots as second language tutors. International Journal of Social Robotics, 10(3), 325-341. doi:10.1007/s12369-018-0467-6.

    Abstract

    In recent years, it has been suggested that social robots have potential as tutors and educators for both children and adults. While robots have been shown to be effective in teaching knowledge and skill-based topics, we wish to explore how social robots can be used to tutor a second language to young children. As language learning relies on situated, grounded and social learning, in which interaction and repeated practice are central, social robots hold promise as educational tools for supporting second language learning. This paper surveys the developmental psychology of second language learning and suggests an agenda to study how core concepts of second language learning can be taught by a social robot. It suggests guidelines for designing robot tutors based on observations of second language learning in human–human scenarios, various technical aspects and early studies regarding the effectiveness of social robots as second language tutors.
  • Benítez-Burraco, A., & Dediu, D. (2018). Ancient DNA and language evolution: A special section. Journal of Language Evolution, 3(1), 47-48. doi:10.1093/jole/lzx024.
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Listening with great expectations: An investigation of word form anticipations in naturalistic speech. In Proceedings of Interspeech 2019 (pp. 2265-2269). doi:10.21437/Interspeech.2019-2741.

    Abstract

    The event-related potential (ERP) component named phonological mismatch negativity (PMN) arises when listeners hear an unexpected word form in a spoken sentence [1]. The PMN is thought to reflect the mismatch between expected and perceived auditory speech input. In this paper, we use the PMN to test a central premise in the predictive coding framework [2], namely that the mismatch between prior expectations and sensory input is an important mechanism of perception. We test this with natural speech materials containing approximately 50,000 word tokens. The corresponding EEG-signal was recorded while participants (n = 48) listened to these materials. Following [3], we quantify the mismatch with two word probability distributions (WPD): a WPD based on preceding context, and a WPD that is additionally updated based on the incoming audio of the current word. We use the between-WPD cross entropy for each word in the utterances and show that a higher cross entropy correlates with a more negative PMN. Our results show that listeners anticipate auditory input while processing each word in naturalistic speech. Moreover, complementing previous research, we show that predictive language processing occurs across the whole probability spectrum.
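    The between-WPD cross entropy can be sketched with toy distributions. The vocabulary, probabilities, and the direction of the comparison are illustrative assumptions, not the paper's estimates (which came from a statistical language model and automatic speech recognition over ~50,000 word tokens):

```python
import math

# Toy word probability distributions (WPDs) over a tiny vocabulary.
prior   = {"cat": 0.5, "cap": 0.3, "map": 0.2}   # from preceding context only
updated = {"cat": 0.8, "cap": 0.15, "map": 0.05} # after hearing the audio

def cross_entropy(p, q):
    """H(p, q) = -sum_w p(w) * log2 q(w): mismatch between two WPDs in bits."""
    return -sum(p[w] * math.log2(q[w]) for w in p)

# Mismatch between expectation and perception for this word:
print(cross_entropy(updated, prior))
# Sanity check (Gibbs' inequality): the cross entropy of a distribution
# with itself is its entropy, a lower bound on any other cross entropy.
print(cross_entropy(updated, updated))
```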
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Quantifying expectation modulation in human speech processing. In Proceedings of Interspeech 2019 (pp. 2270-2274). doi:10.21437/Interspeech.2019-2685.

    Abstract

    The mismatch between top-down predicted and bottom-up perceptual input is an important mechanism of perception according to the predictive coding framework (Friston, [1]). In this paper we develop and validate a new information-theoretic measure that quantifies the mismatch between expected and observed auditory input during speech processing. We argue that such a mismatch measure is useful for the study of speech processing. To compute the mismatch measure, we use naturalistic speech materials containing approximately 50,000 word tokens. For each word token we first estimate the prior word probability distribution with the aid of statistical language modelling, and next use automatic speech recognition to update this word probability distribution based on the unfolding speech signal. We validate the mismatch measure with multiple analyses, and show that the auditory-based update improves the probability of the correct word and lowers the uncertainty of the word probability distribution. Based on these results, we argue that it is possible to explicitly estimate the mismatch between predicted and perceived speech input with the cross entropy between word expectations computed before and after an auditory update.
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Do speech registers differ in the predictability of words? International Journal of Corpus Linguistics, 24(1), 98-130. doi:10.1075/ijcl.17062.ben.

    Abstract

    Previous research has demonstrated that language use can vary depending on the context of situation. The present paper extends this finding by comparing word predictability differences between 14 speech registers ranging from highly informal conversations to read-aloud books. We trained 14 statistical language models to compute register-specific word predictability and trained a register classifier on the perplexity score vector of the language models. The classifier distinguishes perfectly between samples from all speech registers and this result generalizes to unseen materials. We show that differences in vocabulary and sentence length cannot explain the speech register classifier’s performance. The combined results show that speech registers differ in word predictability.
  • Bentz, C., Dediu, D., Verkerk, A., & Jäger, G. (2018). Language family trees reflect geography and demography beyond neutral drift. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 38-40). Toruń, Poland: NCU Press. doi:10.12775/3991-1.006.
  • Bentz, C., Dediu, D., Verkerk, A., & Jäger, G. (2018). The evolution of language families is shaped by the environment beyond neutral drift. Nature Human Behaviour, 2, 816-821. doi:10.1038/s41562-018-0457-6.

    Abstract

    There are more than 7,000 languages spoken in the world today. It has been argued that the natural and social environment of languages drives this diversity. However, a fundamental question is how strong are environmental pressures, and does neutral drift suffice as a mechanism to explain diversification? We estimate the phylogenetic signals of geographic dimensions, distance to water, climate and population size on more than 6,000 phylogenetic trees of 46 language families. Phylogenetic signals of environmental factors are generally stronger than expected under the null hypothesis of no relationship with the shape of family trees. Importantly, they are also—in most cases—not compatible with neutral drift models of constant-rate change across the family tree branches. Our results suggest that language diversification is driven by further adaptive and non-adaptive pressures. Language diversity cannot be understood without modelling the pressures that physical, ecological and social factors exert on language users in different environments across the globe.
  • Bergelson*, E., Casillas*, M., Soderstrom, M., Seidl, A., Warlaumont, A. S., & Amatuni, A. (2019). What Do North American Babies Hear? A large-scale cross-corpus analysis. Developmental Science, 22(1): e12724. doi:10.1111/desc.12724.

    Abstract

    (* indicates joint first authorship.) A range of demographic variables influence how much speech young children hear. However, because studies have used vastly different sampling methods, quantitative comparison of interlocking demographic effects has been nearly impossible, across or within studies. We harnessed a unique collection of existing naturalistic, day-long recordings from 61 homes across four North American cities to examine language input as a function of age, gender, and maternal education. We analyzed adult speech heard by 3- to 20-month-olds who wore audio recorders for an entire day. We annotated speaker gender and speech register (child-directed or adult-directed) for 10,861 utterances from female and male adults in these recordings. Examining age, gender, and maternal education collectively in this ecologically valid dataset, we find several key results. First, the speaker gender imbalance in the input is striking: children heard 2-3 times more speech from females than males. Second, children in higher-maternal-education homes heard more child-directed speech than those in lower-maternal-education homes. Finally, our analyses revealed a previously unreported effect: the proportion of child-directed speech in the input increases with age, due to a decrease in adult-directed speech with age. This large-scale analysis is an important step forward in collectively examining demographic variables that influence early development, made possible by pooled, comparable, day-long recordings of children's language environments. The audio recordings, annotations, and annotation software are readily available for re-use and re-analysis by other researchers.

    Additional information

    desc12724-sup-0001-supinfo.pdf
  • Bergmann, C., Ten Bosch, L., Fikkert, P., & Boves, L. (2013). A computational model to investigate assumptions in the headturn preference procedure. Frontiers in Psychology, 4: 676. doi:10.3389/fpsyg.2013.00676.

    Abstract

    In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioral differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects in the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarization and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximize cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first two assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviors observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.
  • Bergmann, C., & Cristia, A. (2018). Environmental influences on infants’ native vowel discrimination: The case of talker number in daily life. Infancy, 23(4), 484-501. doi:10.1111/infa.12232.

    Abstract

    Both quality and quantity of speech from the primary caregiver have been found to impact language development. A third aspect of the input has been largely ignored: the number of talkers who provide input. Some infants spend most of their waking time with only one person; others hear many different talkers. Even if the very same words are spoken the same number of times, the pronunciations can be more variable when several talkers pronounce them. Is language acquisition affected by the number of people who provide input? To shed light on the possible link between how many people provide input in daily life and infants’ native vowel discrimination, three age groups were tested: 4-month-olds (before attunement to native vowels), 6-month-olds (at the cusp of native vowel attunement) and 12-month-olds (well attuned to the native vowel system). No relationship was found between talker number and native vowel discrimination skills in 4- and 6-month-olds, who are overall able to discriminate the vowel contrast. At 12 months, we observe a small positive relationship, but further analyses reveal that the data are also compatible with the null hypothesis of no relationship. Implications in the context of infant language acquisition and cognitive development are discussed.
  • Bergmann, C., Tsuji, S., Piccinini, P. E., Lewis, M. L., Braginsky, M. B., Frank, M. C., & Cristia, A. (2018). Promoting replicability in developmental research through meta-analyses: Insights from language acquisition research. Child Development, 89(6), 1996-2009. doi:10.1111/cdev.13079.

    Abstract

    Previous work suggests key factors for replicability, a necessary feature for theory building, include statistical power and appropriate research planning. These factors are examined by analyzing a collection of 12 standardized meta-analyses on language development between birth and 5 years. With a median effect size of Cohen's d = 0.45 and typical sample size of 18 participants, most research is underpowered (range: 6%-99%; median 44%); and calculating power based on seminal publications is not a suitable strategy. Method choice can be improved, as shown in analyses on exclusion rates and effect size as a function of method. The article ends with a discussion on how to increase replicability in both language acquisition studies specifically and developmental research more generally.
  • Berkers, R. M. W. J., Ekman, M., van Dongen, E. V., Takashima, A., Barth, M., Paller, K. A., & Fernández, G. (2018). Cued reactivation during slow-wave sleep induces brain connectivity changes related to memory stabilization. Scientific Reports, 8: 16958. doi:10.1038/s41598-018-35287-6.

    Abstract

    Memory reprocessing following acquisition enhances memory consolidation. Specifically, neural activity during encoding is thought to be ‘replayed’ during subsequent slow-wave sleep. Such memory replay is thought to contribute to the functional reorganization of neural memory traces. In particular, memory replay may facilitate the exchange of information across brain regions by inducing a reconfiguration of connectivity across the brain. Memory reactivation can be induced by external cues through a procedure known as “targeted memory reactivation”. Here, we analysed data from a published study with auditory cues used to reactivate visual object-location memories during slow-wave sleep. We characterized effects of memory reactivation on brain network connectivity using graph theory. We found that cue presentation during slow-wave sleep increased global network integration of occipital cortex, a visual region that was also active during retrieval of object locations. Although cueing did not have an overall beneficial effect on the retention of cued versus uncued associations, individual differences in overnight memory stabilization were related to enhanced network integration of occipital cortex. Furthermore, occipital cortex displayed enhanced connectivity with mnemonic regions, namely the hippocampus, parahippocampal gyrus, thalamus and medial prefrontal cortex during cue sound presentation. Together, these results suggest a neural mechanism where cue-induced replay during sleep increases integration of task-relevant perceptual regions with mnemonic regions. This cross-regional integration may be instrumental for the consolidation and long-term storage of enduring memories.

    Additional information

    41598_2018_35287_MOESM1_ESM.doc
  • Bertamini, M., Rampone, G., Makin, A. D. J., & Jessop, A. (2019). Symmetry preference in shapes, faces, flowers and landscapes. PeerJ, 7: e7078. doi:10.7717/peerj.7078.

    Abstract

    Most people like symmetry, and symmetry has been extensively used in visual art and architecture. In this study, we compared preference for images of abstract and familiar objects in the original format or when containing perfect bilateral symmetry. We created pairs of images for different categories: male faces, female faces, polygons, smoothed versions of the polygons, flowers, and landscapes. This design allows us to compare symmetry preference in different domains. Each observer saw all categories randomly interleaved but saw only one of the two images in a pair. After recording preference, we recorded a rating of how salient the symmetry was for each image, and measured how quickly observers could decide which of the two images in a pair was symmetrical. Results reveal a general preference for symmetry in the case of shapes and faces. For landscapes, natural (no perfect symmetry) images were preferred. Correlations with judgments of saliency were present but generally low, and for landscapes the salience of symmetry was negatively related to preference. However, even within the category where symmetry was not liked (landscapes), the separate analysis of original and modified stimuli showed an interesting pattern: Salience of symmetry was correlated positively (artificial) or negatively (original) with preference, suggesting different effects of symmetry within the same class of stimuli based on context and categorization.

    Additional information

    Supplemental Information
  • Bielczyk, N. Z., Piskała, K., Płomecka, M., Radziński, P., Todorova, L., & Foryś, U. (2019). Time-delay model of perceptual decision making in cortical networks. PLoS One, 14: e0211885. doi:10.1371/journal.pone.0211885.

    Abstract

    It is known that cortical networks operate on the edge of instability, in which oscillations can appear. However, the influence of this dynamic regime on performance in decision making is not well understood. In this work, we propose a population model of decision making based on a winner-take-all mechanism. Using this model, we demonstrate that local slow inhibition within the competing neuronal populations can lead to a Hopf bifurcation. At the edge of instability, the system exhibits ambiguity in the decision making, which can account for the perceptual switches observed in human experiments. We further validate this model with fMRI datasets from an experiment on semantic priming in perception of ambivalent (male versus female) faces. We demonstrate that the model can correctly predict the drop in the variance of the BOLD signal within the Superior Parietal Area and Inferior Parietal Area while participants watched ambiguous visual stimuli.

    Additional information

    supporting information
  • Blasi, D. E., Moran, S., Moisik, S. R., Widmer, P., Dediu, D., & Bickel, B. (2019). Human sound systems are shaped by post-Neolithic changes in bite configuration. Science, 363(6432): eaav3218. doi:10.1126/science.aav3218.

    Abstract

    Linguistic diversity, now and in the past, is widely regarded to be independent of biological changes that took place after the emergence of Homo sapiens. We show converging evidence from paleoanthropology, speech biomechanics, ethnography, and historical linguistics that labiodental sounds (such as “f” and “v”) were innovated after the Neolithic. Changes in diet attributable to food-processing technologies modified the human bite from an edge-to-edge configuration to one that preserves adolescent overbite and overjet into adulthood. This change favored the emergence and maintenance of labiodentals. Our findings suggest that language is shaped not only by the contingencies of its history, but also by culturally induced changes in human biology.

  • Blomert, L., & Hagoort, P. (1987). Neurobiologische en neuropsychologische aspecten van dyslexie. In J. Hamers, & A. Van der Leij (Eds.), Dyslexie 87 (pp. 35-44). Lisse: Swets and Zeitlinger.
  • Blythe, J. (2018). Genesis of the trinity: The convergent evolution of trirelational kinterms. In P. McConvell, & P. Kelly (Eds.), Skin, kin and clan: The dynamics of social categories in Indigenous Australia (pp. 431-471). Canberra: ANU EPress.
  • Blythe, J. (2013). Preference organization driving structuration: Evidence from Australian Aboriginal interaction for pragmatically motivated grammaticalization. Language, 89(4), 883-919.
  • Bocanegra, B. R., Poletiek, F. H., Ftitache, B., & Clark, A. (2019). Intelligent problem-solvers externalize cognitive operations. Nature Human Behaviour, 3, 136-142. doi:10.1038/s41562-018-0509-y.

    Abstract

    Humans are nature’s most intelligent and prolific users of external props and aids (such as written texts, slide-rules and software packages). Here we introduce a method for investigating how people make active use of their task environment during problem-solving and apply this approach to the non-verbal Raven Advanced Progressive Matrices test for fluid intelligence. We designed a click-and-drag version of the Raven test in which participants could create different external spatial configurations while solving the puzzles. In our first study, we observed that the click-and-drag test was better than the conventional static test at predicting academic achievement of university students. This pattern of results was partially replicated in a novel sample. Importantly, environment-altering actions were clustered in between periods of apparent inactivity, suggesting that problem-solvers were delicately balancing the execution of internal and external cognitive operations. We observed a systematic relationship between this critical phasic temporal signature and improved test performance. Our approach is widely applicable and offers an opportunity to quantitatively assess a powerful, although understudied, feature of human intelligence: our ability to use external objects, props and aids to solve complex problems.
  • Bode, S., Feuerriegel, D., Bennett, D., & Alday, P. M. (2019). The Decision Decoding ToolBOX (DDTBOX) -- A Multivariate Pattern Analysis Toolbox for Event-Related Potentials. Neuroinformatics, 17(1), 27-42. doi:10.1007/s12021-018-9375-z.

    Abstract

    In recent years, neuroimaging research in cognitive neuroscience has increasingly used multivariate pattern analysis (MVPA) to investigate higher cognitive functions. Here we present DDTBOX, an open-source MVPA toolbox for electroencephalography (EEG) data. DDTBOX runs under MATLAB and is well integrated with the EEGLAB/ERPLAB and Fieldtrip toolboxes (Delorme and Makeig 2004; Lopez-Calderon and Luck 2014; Oostenveld et al. 2011). It trains support vector machines (SVMs) on patterns of event-related potential (ERP) amplitude data, following or preceding an event of interest, for classification or regression of experimental variables. These amplitude patterns can be extracted across space/electrodes (spatial decoding), time (temporal decoding), or both (spatiotemporal decoding). DDTBOX can also extract SVM feature weights, generate empirical chance distributions based on shuffled-labels decoding for group-level statistical testing, provide estimates of the prevalence of decodable information in the population, and perform a variety of corrections for multiple comparisons. It also includes plotting functions for single subject and group results. DDTBOX complements conventional analyses of ERP components, as subtle multivariate patterns can be detected that would be overlooked in standard analyses. It further allows for a more explorative search for information when no ERP component is known to be specifically linked to a cognitive process of interest. In summary, DDTBOX is an easy-to-use and open-source toolbox that allows for characterising the time-course of information related to various perceptual and cognitive processes. It can be applied to data from a large number of experimental paradigms and could therefore be a valuable tool for the neuroimaging community.
  • De Boer, B., & Thompson, B. (2018). Biology-culture co-evolution in finite populations. Scientific Reports, 8: 1209. doi:10.1038/s41598-017-18928-0.

    Abstract

    Language is the result of two concurrent evolutionary processes: Biological and cultural inheritance. An influential evolutionary hypothesis known as the moving target problem implies inherent limitations on the interactions between our two inheritance streams that result from a difference in pace: The speed of cultural evolution is thought to rule out cognitive adaptation to culturally evolving aspects of language. We examine this hypothesis formally by casting it as a problem of adaptation in time-varying environments. We present a mathematical model of biology-culture co-evolution in finite populations: A generalisation of the Moran process, treating co-evolution as coupled non-independent Markov processes, providing a general formulation of the moving target hypothesis in precise probabilistic terms. Rapidly varying culture decreases the probability of biological adaptation. However, we show that this effect declines with population size and with stronger links between biology and culture: In realistically sized finite populations, stochastic effects can carry cognitive specialisations to fixation in the face of variable culture, especially if the effects of those specialisations are amplified through cultural evolution. These results support the view that language arises from interactions between our two major inheritance streams, rather than from one primary evolutionary process that dominates another.

    Additional information

    41598_2017_18928_MOESM1_ESM.pdf
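The paper above generalises the Moran process to coupled biology-culture dynamics. As a baseline point of reference, the standard single-population Moran process it builds on can be simulated in a few lines (this is the textbook process only, not the authors' coupled model; population size, fitness values, and trial counts below are arbitrary illustrations):

```python
import random

def moran_fixation(N, fitness, trials=2000, seed=1):
    """Estimate the fixation probability of a single mutant with relative
    fitness `fitness` in a population of size N under the Moran process:
    each step, one individual reproduces (chosen proportional to fitness)
    and one dies (chosen uniformly)."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        i = 1  # current number of mutants
        while 0 < i < N:
            # Probability the reproducing individual is a mutant.
            p_birth_mut = i * fitness / (i * fitness + (N - i))
            birth_mut = rng.random() < p_birth_mut
            # Probability the dying individual is a mutant.
            death_mut = rng.random() < i / N
            i += birth_mut - death_mut
        fixed += (i == N)
    return fixed / trials

neutral = moran_fixation(10, 1.0, trials=5000)       # theory: 1/N = 0.1
advantaged = moran_fixation(10, 2.0, trials=2000)    # theory: about 0.5
```

For a neutral mutant the fixation probability is 1/N, so stochastic drift alone can carry a variant to fixation in small populations; this is the finite-population effect the abstract invokes when arguing that cognitive specialisations can fix despite variable culture.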
  • De Boer, M., Toni, I., & Willems, R. M. (2013). What drives successful verbal communication? Frontiers in Human Neuroscience, 7: 622. doi:10.3389/fnhum.2013.00622.

    Abstract

    There is a vast amount of potential mappings between behaviors and intentions in communication: a behavior can indicate a multitude of different intentions, and the same intention can be communicated with a variety of behaviors. Humans routinely solve these many-to-many referential problems when producing utterances for an Addressee. This ability might rely on social cognitive skills, for instance, the ability to manipulate unobservable summary variables to disambiguate ambiguous behavior of other agents (“mentalizing”) and the drive to invest resources into changing and understanding the mental state of other agents (“communicative motivation”). Alternatively, the ambiguities of verbal communicative interactions might be solved by general-purpose cognitive abilities that process cues that are incidentally associated with the communicative interaction. In this study, we assess these possibilities by testing which cognitive traits account for communicative success during a verbal referential task. Cognitive traits were assessed with psychometric scores quantifying motivation, mentalizing abilities, and general-purpose cognitive abilities, taxing abstract visuo-spatial abilities. Communicative abilities of participants were assessed by using an on-line interactive task that required a speaker to verbally convey a concept to an Addressee. The communicative success of the utterances was quantified by measuring how frequently a number of Evaluators would infer the correct concept. Speakers with high motivational and general-purpose cognitive abilities generated utterances that were more easily interpreted. These findings extend to the domain of verbal communication the notion that motivational and cognitive factors influence the human ability to rapidly converge on shared communicative innovations.
  • Boersma, M., Kemner, C., de Reus, M. A., Collin, G., Snijders, T. M., Hofman, D., Buitelaar, J. K., Stam, C. J., & van den Heuvel, M. P. (2013). Disrupted functional brain networks in autistic toddlers. Brain Connectivity, 3(1), 41-49. doi:10.1089/brain.2012.0127.

    Abstract

    Communication and integration of information between brain regions plays a key role in healthy brain function. Conversely, disruption in brain communication may lead to cognitive and behavioral problems. Autism is a neurodevelopmental disorder that is characterized by impaired social interactions and aberrant basic information processing. Aberrant brain connectivity patterns have indeed been hypothesized to be a key neural underpinning of autism. In this study, graph analytical tools are used to explore the possible deviant functional brain network organization in autism at a very early stage of brain development. Electroencephalography (EEG) recordings in 12 toddlers with autism (mean age 3.5 years) and 19 control subjects were used to assess interregional functional brain connectivity, with functional brain networks constructed at the level of temporal synchronization between brain regions underlying the EEG electrodes. Children with autism showed a significantly increased normalized path length and reduced normalized clustering, suggesting a reduced global communication capacity already during early brain development. In addition, whole brain connectivity was found to be significantly reduced in these young patients suggesting an overall under-connectivity of functional brain networks in autism. Our findings support the hypothesis of abnormal neural communication in autism, with deviating effects already present at the early stages of brain development.
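The normalized path length reported in this study builds on the characteristic path length of a network (the mean shortest-path distance over all node pairs), normalized against random networks. A self-contained sketch of the underlying quantity on a toy graph (pure-Python BFS; the four-node ring is an invented example, not EEG-derived data):

```python
from collections import deque
from itertools import combinations

def shortest_path_lengths(adj):
    """All-pairs shortest path lengths via breadth-first search on an
    unweighted, undirected graph given as {node: set_of_neighbours}."""
    dist = {}
    for src in adj:
        seen = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen[v] = seen[u] + 1
                    queue.append(v)
        dist[src] = seen
    return dist

def characteristic_path_length(adj):
    """Mean shortest-path length over all node pairs (assumes the graph
    is connected)."""
    dist = shortest_path_lengths(adj)
    pairs = list(combinations(adj, 2))
    return sum(dist[u][v] for u, v in pairs) / len(pairs)

# Toy example: a 4-node ring 0-1-2-3-0.
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
cpl = characteristic_path_length(ring)  # (1+2+1+1+2+1)/6 = 4/3
```

In the graph-analytic framework used here, a longer characteristic path length (relative to degree-matched random networks) is read as reduced global communication capacity, which is the interpretation the abstract gives to the increased normalized path length in the autism group.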
  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2013). "Are we still talking about the same thing?" MEG reveals perspective-taking in response to pragmatic violations, but not in anticipation. In M. Knauff, N. Pauen, I. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 215-220). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0066/index.html.

    Abstract

    The current study investigates whether mentalizing, or taking the perspective of your interlocutor, plays an essential role throughout a conversation or whether it is mostly used in reaction to misunderstandings. This study is the first to use a brain-imaging method, MEG, to answer this question. In a first phase of the experiment, MEG participants interacted "live" with a confederate who set naming precedents for certain pictures. In a later phase, these precedents were sometimes broken by a speaker who named the same picture in a different way. This could be done by the same speaker, who set the precedent, or by a different speaker. Source analysis of MEG data showed that in the 800 ms before the naming, when the picture was already on the screen, episodic memory and language areas were activated, but no mentalizing areas, suggesting that the speaker's naming intentions were not anticipated by the listener on the basis of shared experiences. Mentalizing areas only became activated after the same speaker had broken a precedent, which we interpret as a reaction to the violation of conversational pragmatics.
  • Bögels, S., Schriefers, H., Vonk, W., Chwilla, D., & Kerkhofs, R. (2013). Processing consequences of superfluous and missing prosodic breaks in auditory sentence comprehension. Neuropsychologia, 51, 2715-2728. doi:10.1016/j.neuropsychologia.2013.09.008.

    Abstract

    This ERP study investigates whether a superfluous prosodic break (i.e., a prosodic break that does not coincide with a syntactic break) has more severe processing consequences during auditory sentence comprehension than a missing prosodic break (i.e., the absence of a prosodic break at the position of a syntactic break). Participants listened to temporarily ambiguous sentences involving a prosody-syntax match or mismatch. The disambiguation of these sentences was always lexical in nature in the present experiment. This contrasts with a related study by Pauker, Itzhak, Baum, and Steinhauer (2011), where the disambiguation was of a lexical type for missing PBs and of a prosodic type for superfluous PBs. Our results converge with those of Pauker et al.: superfluous prosodic breaks lead to more severe processing problems than missing prosodic breaks. Importantly, the present results extend those of Pauker et al. showing that this holds when the disambiguation is always lexical in nature. Furthermore, our results show that the way listeners use prosody can change over the course of the experiment which bears consequences for future studies.
  • Bögels, S., Casillas, M., & Levinson, S. C. (2018). Planning versus comprehension in turn-taking: Fast responders show reduced anticipatory processing of the question. Neuropsychologia, 109, 295-310. doi:10.1016/j.neuropsychologia.2017.12.028.

    Abstract

    Rapid response latencies in conversation suggest that responders start planning before the ongoing turn is finished. Indeed, an earlier EEG study suggests that listeners start planning their responses to questions as soon as they can (Bögels, S., Magyari, L., & Levinson, S. C. (2015). Neural signatures of response planning occur midway through an incoming question in conversation. Scientific Reports, 5, 12881). The present study aimed to (1) replicate this early planning effect and (2) investigate whether such early response planning incurs a cost on participants’ concurrent comprehension of the ongoing turn. During the experiment participants answered questions from a confederate partner. To address aim (1), the questions were designed such that response planning could start either early or late in the turn. Our results largely replicate Bögels et al. (2015) showing a large positive ERP effect and an oscillatory alpha/beta reduction right after participants could have first started planning their verbal response, again suggesting an early start of response planning. To address aim (2), the confederate's questions also contained either an expected word or an unexpected one to elicit a differential N400 effect, either before or after the start of response planning. We hypothesized an attenuated N400 effect after response planning had started. In contrast, the N400 effects before and after planning did not differ. There was, however, a positive correlation between participants' response time and their N400 effect size after planning had started; quick responders showed a smaller N400 effect, suggesting reduced attention to comprehension and possibly reduced anticipatory processing. We conclude that early response planning can indeed impact comprehension processing.

    Additional information

    mmc1.pdf
  • Bohnemeyer, J. (2000). Event order in language and cognition. Linguistics in the Netherlands, 17(1), 1-16. doi:10.1075/avt.17.04boh.
  • Bohnemeyer, J. (2000). Where do pragmatic meanings come from? In W. Spooren, T. Sanders, & C. van Wijk (Eds.), Samenhang in Diversiteit; Opstellen voor Leo Noorman, aangeboden bij gelegenheid van zijn zestigste verjaardag (pp. 137-153).
  • Bone, D., Ramanarayanan, V., Narayanan, S., Hoedemaker, R. S., & Gordon, P. C. (2013). Analyzing eye-voice coordination in rapid automatized naming. In F. Bimbot, C. Cerisara, G. Fougeron, L. Gravier, L. Lamel, F. Pelligrino, & P. Perrier (Eds.), INTERSPEECH-2013: 14thAnnual Conference of the International Speech Communication Association (pp. 2425-2429). ISCA Archive. Retrieved from http://www.isca-speech.org/archive/interspeech_2013/i13_2425.html.

    Abstract

    Rapid Automatized Naming (RAN) is a powerful tool for predicting future reading skill. A person’s ability to quickly name symbols as they scan a table is related to higher-level reading proficiency in adults and is predictive of future literacy gains in children. However, noticeable differences are present in the strategies or patterns within groups having similar task completion times. Thus, a further stratification of RAN dynamics may lead to better characterization and later intervention to support reading skill acquisition. In this work, we analyze the dynamics of the eyes, voice, and the coordination between the two during performance. It is shown that fast performers are more similar to each other than to slow performers in their patterns, but not vice versa. Further insights are provided about the patterns of more proficient subjects. For instance, fast performers tended to exhibit smoother behavior contours, suggesting a more stable perception-production process.
  • Bønnelykke, K., Matheson, M. C., Pers, T. H., Granell, R., Strachan, D. P., Alves, A. C., Linneberg, A., Curtin, J. A., Warrington, N. M., Standl, M., Kerkhof, M., Jonsdottir, I., Bukvic, B. K., Kaakinen, M., Sleimann, P., Thorleifsson, G., Thorsteinsdottir, U., Schramm, K., Baltic, S., Kreiner-Møller, E., Simpson, A., St Pourcain, B., Coin, L., Hui, J., Walters, E. H., Tiesler, C. M. T., Duffy, D. L., Jones, G., Ring, S. M., McArdle, W. L., Price, L., Robertson, C. F., Pekkanen, J., Tang, C. S., Thiering, E., Montgomery, G. W., Hartikainen, A.-L., Dharmage, S. C., Husemoen, L. L., Herder, C., Kemp, J. P., Elliot, P., James, A., Waldenberger, M., Abramson, M. J., Fairfax, B. P., Knight, J. C., Gupta, R., Thompson, P. J., Holt, P., Sly, P., Hirschhorn, J. N., Blekic, M., Weidinger, S., Hakonarsson, H., Stefansson, K., Heinrich, J., Postma, D. S., Custovic, A., Pennell, C. E., Jarvelin, M.-R., Koppelman, G. H., Timpson, N., Ferreira, M. A., Bisgaard, H., Henderson, A. J., Australian Asthma Genetics Consortium (AAGC), & EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium (2013). Meta-analysis of genome-wide association studies identifies ten loci influencing allergic sensitization. Nature Genetics, 45(8), 902-906. doi:10.1038/ng.2694.

    Abstract

    Allergen-specific immunoglobulin E (present in allergic sensitization) has a central role in the pathogenesis of allergic disease. We performed the first large-scale genome-wide association study (GWAS) of allergic sensitization in 5,789 affected individuals and 10,056 controls and followed up the top SNP at each of 26 loci in 6,114 affected individuals and 9,920 controls. We increased the number of susceptibility loci with genome-wide significant association with allergic sensitization from three to ten, including SNPs in or near TLR6, C11orf30, STAT6, SLC25A46, HLA-DQB1, IL1RL1, LPP, MYC, IL2 and HLA-B. All the top SNPs were associated with allergic symptoms in an independent study. Risk-associated variants at these ten loci were estimated to account for at least 25% of allergic sensitization and allergic rhinitis. Understanding the molecular mechanisms underlying these associations may provide new insights into the etiology of allergic disease.
  • Bosker, H. R., Van Os, M., Does, R., & Van Bergen, G. (2019). Counting 'uhm's: how tracking the distribution of native and non-native disfluencies influences online language comprehension. Journal of Memory and Language, 106, 189-202. doi:10.1016/j.jml.2019.02.006.

    Abstract

    Disfluencies, like 'uh', have been shown to help listeners anticipate reference to low-frequency words. The associative account of this 'disfluency bias' proposes that listeners learn to associate disfluency with low-frequency referents based on prior exposure to non-arbitrary disfluency distributions (i.e., greater probability of low-frequency words after disfluencies). However, there is limited evidence for listeners actually tracking disfluency distributions online. The present experiments are the first to show that adult listeners, exposed to a typical or more atypical disfluency distribution (i.e., hearing a talker unexpectedly say uh before high-frequency words), flexibly adjust their predictive strategies to the disfluency distribution at hand (e.g., learn to predict high-frequency referents after disfluency). However, when listeners were presented with the same atypical disfluency distribution but produced by a non-native speaker, no adjustment was observed. This suggests pragmatic inferences can modulate distributional learning, revealing the flexibility of, and constraints on, distributional learning in incremental language comprehension.
  • Bosker, H. R., & Ghitza, O. (2018). Entrained theta oscillations guide perception of subsequent speech: Behavioral evidence from rate normalization. Language, Cognition and Neuroscience, 33(8), 955-967. doi:10.1080/23273798.2018.1439179.

    Abstract

    This psychoacoustic study provides behavioral evidence that neural entrainment in the theta range (3-9 Hz) causally shapes speech perception. Adopting the ‘rate normalization’ paradigm (presenting compressed carrier sentences followed by uncompressed target words), we show that uniform compression of a speech carrier to syllable rates inside the theta range influences perception of subsequent uncompressed targets, but compression outside theta range does not. However, the influence of carriers – compressed outside theta range – on target perception is salvaged when carriers are ‘repackaged’ to have a packet rate inside theta. This suggests that the brain can only successfully entrain to syllable/packet rates within theta range, with a causal influence on the perception of subsequent speech, in line with recent neuroimaging data. Thus, this study points to a central role for sustained theta entrainment in rate normalization and contributes to our understanding of the functional role of brain oscillations in speech perception.
  • Bosker, H. R. (2013). Juncture (prosodic). In G. Khan (Ed.), Encyclopedia of Hebrew Language and Linguistics (pp. 432-434). Leiden: Brill.

    Abstract

    Prosodic juncture concerns the compartmentalization and partitioning of syntactic entities in spoken discourse by means of prosody. It has been argued that the Intonation Unit, defined by internal criteria and prosodic boundary phenomena (e.g., final lengthening, pitch reset, pauses), encapsulates the basic structural unit of spoken Modern Hebrew.
  • Bosker, H. R. (2018). Putting Laurel and Yanny in context. The Journal of the Acoustical Society of America, 144(6), EL503-EL508. doi:10.1121/1.5070144.

    Abstract

    Recently, the world’s attention was caught by an audio clip that was perceived as “Laurel” or “Yanny”. Opinions were sharply split: many could not believe others heard something different from their perception. However, a crowd-source experiment with >500 participants shows that it is possible to make people hear Laurel, where they previously heard Yanny, by manipulating preceding acoustic context. This study is not only the first to reveal within-listener variation in Laurel/Yanny percepts, but also to demonstrate contrast effects for global spectral information in larger frequency regions. Thus, it highlights the intricacies of human perception underlying these social media phenomena.
  • Bosker, H. R., & Cooke, M. (2018). Talkers produce more pronounced amplitude modulations when speaking in noise. The Journal of the Acoustical Society of America, 143(2), EL121-EL126. doi:10.1121/1.5024404.

    Abstract

    Speakers adjust their voice when talking in noise (known as Lombard speech), facilitating speech comprehension. Recent neurobiological models of speech perception emphasize the role of amplitude modulations in speech-in-noise comprehension, helping neural oscillators to ‘track’ the attended speech. This study tested whether talkers produce more pronounced amplitude modulations in noise. Across four different corpora, modulation spectra showed greater power in amplitude modulations below 4 Hz in Lombard speech compared to matching plain speech. This suggests that noise-induced speech contains more pronounced amplitude modulations, potentially helping the listening brain to entrain to the attended talker, aiding comprehension.
  • Bosker, H. R. (2013). Sibilant consonants. In G. Khan (Ed.), Encyclopedia of Hebrew Language and Linguistics (pp. 557-561). Leiden: Brill.

    Abstract

    Fricative consonants in Hebrew can be divided into bgdkpt and sibilants (ז, ס, צ, שׁ, שׂ). Hebrew sibilants have been argued to stem from Proto-Semitic affricates, laterals, interdentals and /s/. In standard Israeli Hebrew the sibilants are pronounced as [s] (ס and שׂ), [ʃ] (שׁ), [z] (ז), [ʦ] (צ).
  • Bosker, H. R., Pinget, A.-F., Quené, H., Sanders, T., & De Jong, N. H. (2013). What makes speech sound fluent? The contributions of pauses, speed and repairs. Language testing, 30(2), 159-175. doi:10.1177/0265532212455394.

    Abstract

    The oral fluency level of an L2 speaker is often used as a measure in assessing language proficiency. The present study reports on four experiments investigating the contributions of three fluency aspects (pauses, speed and repairs) to perceived fluency. In Experiment 1 untrained raters evaluated the oral fluency of L2 Dutch speakers. Using specific acoustic measures of pause, speed and repair phenomena, linear regression analyses revealed that pause and speed measures best predicted the subjective fluency ratings, and that repair measures contributed only very little. A second research question sought to account for these results by investigating perceptual sensitivity to acoustic pause, speed and repair phenomena, possibly accounting for the results from Experiment 1. In Experiments 2–4 three new groups of untrained raters rated the same L2 speech materials from Experiment 1 on the use of pauses, speed and repairs. A comparison of the results from perceptual sensitivity (Experiments 2–4) with fluency perception (Experiment 1) showed that perceptual sensitivity alone could not account for the contributions of the three aspects to perceived fluency. We conclude that listeners weigh the importance of the perceived aspects of fluency to come to an overall judgment.
  • Bowerman, M. (1987). Commentary: Mechanisms of language acquisition. In B. MacWhinney (Ed.), Mechanisms of language acquisition (pp. 443-466). Hillsdale, N.J.: Lawrence Erlbaum.
  • Bowerman, M. (1989). Learning a semantic system: What role do cognitive predispositions play? In M. L. Rice, & R. L. Schiefelbusch (Eds.), The teachability of language (pp. 133-169). Baltimore: Paul H. Brookes.
  • Bowerman, M. (2000). Where do children's word meanings come from? Rethinking the role of cognition in early semantic development. In L. Nucci, G. Saxe, & E. Turiel (Eds.), Culture, thought and development (pp. 199-230). Mahwah, NJ: Lawrence Erlbaum.
  • Boyle, W., Lindell, A. K., & Kidd, E. (2013). Investigating the role of verbal working memory in young children's sentence comprehension. Language Learning, 63(2), 211-242. doi:10.1111/lang.12003.

    Abstract

    This study considers the role of verbal working memory in sentence comprehension in typically developing English-speaking children. Fifty-six (N = 56) children aged 4;0–6;6 completed a test of language comprehension that contained sentences which varied in complexity, standardized tests of vocabulary and nonverbal intelligence, and three tests of memory that measured the three verbal components of Baddeley's model of Working Memory (WM): the phonological loop, the episodic buffer, and the central executive. The results showed that children experienced most difficulty comprehending sentences that contained noncanonical word order (passives and object relative clauses). A series of linear mixed effects models were run to analyze the contribution of each component of WM to sentence comprehension. In contrast to most previous studies, the measure of the central executive did not predict comprehension accuracy. A canonicity by episodic buffer interaction showed that the episodic buffer measure was positively associated with better performance on the noncanonical sentences. The results are discussed with reference to capacity-limit and experience-dependent approaches to language comprehension.
  • Brand, J., Monaghan, P., & Walker, P. (2018). Changing Signs: Testing How Sound-Symbolism Supports Early Word Learning. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 1398-1403). Austin, TX: Cognitive Science Society.

    Abstract

    Learning a language involves learning how to map specific forms onto their associated meanings. Such mappings can utilise arbitrariness and non-arbitrariness, yet, our understanding of how these two systems operate at different stages of vocabulary development is still not fully understood. The Sound-Symbolism Bootstrapping Hypothesis (SSBH) proposes that sound-symbolism is essential for word learning to commence, but empirical evidence of exactly how sound-symbolism influences language learning is still sparse. It may be the case that sound-symbolism supports acquisition of categories of meaning, or that it enables acquisition of individualized word meanings. In two Experiments where participants learned form-meaning mappings from either sound-symbolic or arbitrary languages, we demonstrate the changing roles of sound-symbolism and arbitrariness for different vocabulary sizes, showing that sound-symbolism provides an advantage for learning of broad categories, which may then transfer to support learning individual words, whereas an arbitrary language impedes acquisition of categories of sound to meaning.
  • Brand, S., & Ernestus, M. (2018). Listeners’ processing of a given reduced word pronunciation variant directly reflects their exposure to this variant: evidence from native listeners and learners of French. Quarterly Journal of Experimental Psychology, 71(5), 1240-1259. doi:10.1080/17470218.2017.1313282.

    Abstract

    In casual conversations, words often lack segments. This study investigates whether listeners rely on their experience with reduced word pronunciation variants during the processing of single segment reduction. We tested three groups of listeners in a lexical decision experiment with French words produced either with or without word-medial schwa (e.g., /ʀəvy/ and /ʀvy/ for revue). Participants also rated the relative frequencies of the two pronunciation variants of the words. If the recognition accuracy and reaction times for a given listener group correlate best with the frequencies of occurrence holding for that given listener group, recognition is influenced by listeners’ exposure to these variants. Native listeners' relative frequency ratings correlated well with their accuracy scores and RTs. Dutch advanced learners' accuracy scores and RTs were best predicted by their own ratings. In contrast, the accuracy and RTs from Dutch beginner learners of French could not be predicted by any relative frequency rating; the rating task was probably too difficult for them. The participant groups showed behaviour reflecting their difference in experience with the pronunciation variants. Our results strongly suggest that listeners store the frequencies of occurrence of pronunciation variants, and consequently the variants themselves.
  • Brand, J., Monaghan, P., & Walker, P. (2018). The changing role of sound‐symbolism for small versus large vocabularies. Cognitive Science, 42(S2), 578-590. doi:10.1111/cogs.12565.

    Abstract

    Natural language contains many examples of sound‐symbolism, where the form of the word carries information about its meaning. Such systematicity is more prevalent in the words children acquire first, but arbitrariness dominates during later vocabulary development. Furthermore, systematicity appears to promote learning category distinctions, which may become more important as the vocabulary grows. In this study, we tested the relative costs and benefits of sound‐symbolism for word learning as vocabulary size varies. Participants learned form‐meaning mappings for words which were either congruent or incongruent with regard to sound‐symbolic relations. For the smaller vocabulary, sound‐symbolism facilitated learning individual words, whereas for larger vocabularies sound‐symbolism supported learning category distinctions. The changing properties of form‐meaning mappings according to vocabulary size may reflect the different ways in which language is learned at different stages of development.

    Additional information

    https://git.io/v5BXJ
  • Brandler, W. M., Morris, A. P., Evans, D. M., Scerri, T. S., Kemp, J. P., Timpson, N. J., St Pourcain, B., Davey Smith, G., Ring, S. M., Stein, J., Monaco, A. P., Talcott, J. B., Fisher, S. E., Webber, C., & Paracchini, S. (2013). Common variants in left/right asymmetry genes and pathways are associated with relative hand skill. PLoS Genetics, 9(9): e1003751. doi:10.1371/journal.pgen.1003751.

    Abstract

    Humans display structural and functional asymmetries in brain organization, strikingly with respect to language and handedness. The molecular basis of these asymmetries is unknown. We report a genome-wide association study meta-analysis for a quantitative measure of relative hand skill in individuals with dyslexia [reading disability (RD)] (n = 728). The most strongly associated variant, rs7182874 (P = 8.68×10−9), is located in PCSK6, further supporting an association we previously reported. We also confirmed the specificity of this association in individuals with RD; the same locus was not associated with relative hand skill in a general population cohort (n = 2,666). As PCSK6 is known to regulate NODAL in the development of left/right (LR) asymmetry in mice, we developed a novel approach to GWAS pathway analysis, using gene-set enrichment to test for an over-representation of highly associated variants within the orthologs of genes whose disruption in mice yields LR asymmetry phenotypes. Four out of 15 LR asymmetry phenotypes showed an over-representation (FDR≤5%). We replicated three of these phenotypes; situs inversus, heterotaxia, and double outlet right ventricle, in the general population cohort (FDR≤5%). Our findings lead us to propose that handedness is a polygenic trait controlled in part by the molecular mechanisms that establish LR body asymmetry early in development.
  • Brandmeyer, A., Sadakata, M., Spyrou, L., McQueen, J. M., & Desain, P. (2013). Decoding of single-trial auditory mismatch responses for online perceptual monitoring and neurofeedback. Frontiers in Neuroscience, 7: 265. doi:10.3389/fnins.2013.00265.

    Abstract

    Multivariate pattern classification methods are increasingly applied to neuroimaging data in the context of both fundamental research and in brain-computer interfacing approaches. Such methods provide a framework for interpreting measurements made at the single-trial level with respect to a set of two or more distinct mental states. Here, we define an approach in which the output of a binary classifier trained on data from an auditory mismatch paradigm can be used for online tracking of perception and as a neurofeedback signal. The auditory mismatch paradigm is known to induce distinct perceptual states related to the presentation of high- and low-probability stimuli, which are reflected in event-related potential (ERP) components such as the mismatch negativity (MMN). The first part of this paper illustrates how pattern classification methods can be applied to data collected in an MMN paradigm, including discussion of the optimization of preprocessing steps, the interpretation of features and how the performance of these methods generalizes across individual participants and measurement sessions. We then go on to show that the output of these decoding methods can be used in online settings as a continuous index of single-trial brain activation underlying perceptual discrimination. We conclude by discussing several potential domains of application, including neurofeedback, cognitive monitoring and passive brain-computer interfaces

    Additional information

    Brandmeyer_etal_2013a.pdf
  • Brandmeyer, A., Farquhar, J., McQueen, J. M., & Desain, P. (2013). Decoding speech perception by native and non-native speakers using single-trial electrophysiological data. PLoS One, 8: e68261. doi:10.1371/journal.pone.0068261.

    Abstract

    Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: 1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? 2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition
  • Brehm, L., & Goldrick, M. (2018). Connectionist principles in theories of speech production. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 372-397). Oxford: Oxford University Press.

    Abstract

    This chapter focuses on connectionist modeling in language production, highlighting how core principles of connectionism provide coverage for empirical observations about representation and selection at the phonological, lexical, and sentence levels. The first section focuses on the connectionist principles of localist representations and spreading activation. It discusses how these two principles have motivated classic models of speech production and shows how they cover results of the picture-word interference paradigm, the mixed error effect, and aphasic naming errors. The second section focuses on how newer connectionist models incorporate the principles of learning and distributed representations through discussion of syntactic priming, cumulative semantic interference, sequencing errors, phonological blends, and code-switching.
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2019). Incremental interpretation in the first and second language. In M. Brown, & B. Dailey (Eds.), BUCLD 43: Proceedings of the 43rd annual Boston University Conference on Language Development (pp. 109-122). Sommerville, MA: Cascadilla Press.
  • Brehm, L., Taschenberger, L., & Meyer, A. S. (2019). Mental representations of partner task cause interference in picture naming. Acta Psychologica, 199: 102888. doi:10.1016/j.actpsy.2019.102888.

    Abstract

    Interference in picture naming occurs from representing a partner's preparations to speak (Gambi, van de Cavey, & Pickering, 2015). We tested the origins of this interference using a simple non-communicative joint naming task based on Gambi et al. (2015), where response latencies indexed interference from partner task and partner speech content, and eye fixations to partner objects indexed overt attention. Experiment 1 contrasted a partner-present condition with a control partner-absent condition to establish the role of the partner in eliciting interference. For latencies, we observed interference from the partner's task and speech content, with interference increasing due to partner task in the partner-present condition. Eye-tracking measures showed that interference in naming was not due to overt attention to partner stimuli but to broad expectations about likely utterances. Experiment 2 examined whether an equivalent non-verbal task also elicited interference, as predicted from a language as joint action framework. We replicated the finding of interference due to partner task and again found no relationship between overt attention and interference. These results support Gambi et al. (2015). Individuals co-represent a partner's task while speaking, and doing so does not require overt attention to partner stimuli.
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2019). Speaker-specific processing of anomalous utterances. Quarterly Journal of Experimental Psychology, 72(4), 764-778. doi:10.1177/1747021818765547.

    Abstract

    Existing work shows that readers often interpret grammatical errors (e.g., The key to the cabinets *were shiny) and sentence-level blends (“without-blend”: Claudia left without her headphones *off) in a non-literal fashion, inferring that a more frequent or more canonical utterance was intended instead. This work examines how interlocutor identity affects the processing and interpretation of anomalous sentences. We presented anomalies in the context of “emails” attributed to various writers in a self-paced reading paradigm and used comprehension questions to probe how sentence interpretation changed based upon properties of the item and properties of the “speaker.” Experiment 1 compared standardised American English speakers to L2 English speakers; Experiment 2 compared the same standardised English speakers to speakers of a non-Standardised American English dialect. Agreement errors and without-blends both led to more non-literal responses than comparable canonical items. For agreement errors, more non-literal interpretations also occurred when sentences were attributed to speakers of Standardised American English than either non-Standardised group. These data suggest that understanding sentences relies on expectations and heuristics about which utterances are likely. These are based upon experience with language, with speaker-specific differences, and upon more general cognitive biases.

    Additional information

    Supplementary material
  • Brehm, L., & Bock, K. (2013). What counts in grammatical number agreement? Cognition, 128(2), 149-169. doi:10.1016/j.cognition.2013.03.009.

    Abstract

    Both notional and grammatical number affect agreement during language production. To explore their workings, we investigated how semantic integration, a type of conceptual relatedness, produces variations in agreement (Solomon & Pearlmutter, 2004). These agreement variations are open to competing notional and lexical–grammatical number accounts. The notional hypothesis is that changes in number agreement reflect differences in referential coherence: More coherence yields more singularity. The lexical–grammatical hypothesis is that changes in agreement arise from competition between nouns differing in grammatical number: More competition yields more plurality. These hypotheses make opposing predictions about semantic integration. On the notional hypothesis, semantic integration promotes singular agreement. On the lexical–grammatical hypothesis, semantic integration promotes plural agreement. We tested these hypotheses with agreement elicitation tasks in two experiments. Both experiments supported the notional hypothesis, with semantic integration creating faster and more frequent singular agreement. This implies that referential coherence mediates the effect of semantic integration on number agreement.