Publications

  • Acerbi, A., Van Leeuwen, E. J. C., Haun, D. B. M., & Tennie, C. (2018). Reply to 'Sigmoidal acquisition curves are good indicators of conformist transmission'. Scientific Reports, 8(1): 14016. doi:10.1038/s41598-018-30382-0.

    Abstract

    In the Smaldino et al. study ‘Sigmoidal Acquisition Curves are Good Indicators of Conformist Transmission’, our original findings regarding the conditional validity of using population-level sigmoidal acquisition curves as a means to evidence individual-level conformity are contested. We acknowledge the identification of useful nuances, yet conclude that our original findings remain relevant for the study of conformist learning mechanisms. Replying to: Smaldino, P. E., Aplin, L. M. & Farine, D. R. Sigmoidal Acquisition Curves Are Good Indicators of Conformist Transmission. Sci. Rep. 8, https://doi.org/10.1038/s41598-018-30248-5 (2018).
  • Ahn, D., Abbott, M. J., Rayner, K., Ferreira, V. S., & Gollan, T. H. (2020). Minimal overlap in language control across production and comprehension: Evidence from read-aloud versus eye-tracking tasks. Journal of Neurolinguistics, 54: 100885. doi:10.1016/j.jneuroling.2019.100885.

    Abstract

    Bilinguals are remarkably skilled at language control—switching between languages only when they intend to. However, language control in production can involve switch costs. That is, switching to another language takes longer than staying in the same language. Moreover, bilinguals sometimes produce language intrusion errors, mistakenly producing words in an unintended language (e.g., Spanish–English bilinguals saying “pero” instead of “but”). Switch costs are also found in comprehension. For example, reading times are longer when bilinguals read sentences with language switches compared to sentences with no language switches. Given that both production and comprehension involve switch costs, some language-control mechanisms might be shared across modalities. To test this, we compared language switch costs found in eye-movement measures during silent sentence reading (comprehension) and intrusion errors produced when reading aloud switched words in mixed-language paragraphs (production). Bilinguals who made more intrusion errors during the read-aloud task did not show different switch cost patterns in most measures in the silent-reading task, except on skipping rates. We suggest that language switching is mostly controlled by separate, modality-specific processes in production and comprehension, although some points of overlap might indicate the role of domain-general control and how it can influence individual differences in bilingual language control.
  • Akita, K., & Dingemanse, M. (2019). Ideophones (Mimetics, Expressives). In Oxford Research Encyclopedia for Linguistics. Oxford: Oxford University Press. doi:10.1093/acrefore/9780199384655.013.477.

    Abstract

    Ideophones, also termed “mimetics” or “expressives,” are marked words that depict sensory imagery. They are found in many of the world’s languages, and sizable lexical classes of ideophones are particularly well-documented in languages of Asia, Africa, and the Americas. Ideophones are not limited to onomatopoeia like meow and smack, but cover a wide range of sensory domains, such as manner of motion (e.g., plisti plasta ‘splish-splash’ in Basque), texture (e.g., tsaklii ‘rough’ in Ewe), and psychological states (e.g., wakuwaku ‘excited’ in Japanese). Across languages, ideophones stand out as marked words due to special phonotactics, expressive morphology including certain types of reduplication, and relative syntactic independence, in addition to production features like prosodic foregrounding and common co-occurrence with iconic gestures.

    Three intertwined issues have been repeatedly debated in the century-long literature on ideophones. (a) Definition: Isolated descriptive traditions and cross-linguistic variation have sometimes obscured a typologically unified view of ideophones, but recent advances show the promise of a prototype definition of ideophones as conventionalised depictions in speech, with room for language-specific nuances. (b) Integration: The variable integration of ideophones across linguistic levels reveals an interaction between expressiveness and grammatical integration, and has important implications for how to conceive of dependencies between linguistic systems. (c) Iconicity: Ideophones form a natural laboratory for the study of iconic form-meaning associations in natural languages, and converging evidence from corpus and experimental studies suggests important developmental, evolutionary, and communicative advantages of ideophones.
  • Alcock, K., Meints, K., & Rowland, C. F. (2020). The UK communicative development inventories: Words and gestures. Guilford, UK: J&R Press Ltd.
  • Alday, P. M. (2019). How much baseline correction do we need in ERP research? Extended GLM model can replace baseline correction while lifting its limits. Psychophysiology, 56(12): e13451. doi:10.1111/psyp.13451.

    Abstract

    Baseline correction plays an important role in past and current methodological debates in ERP research (e.g., the Tanner vs. Maess debate in the Journal of Neuroscience Methods), serving as a potential alternative to strong high‐pass filtering. However, the very assumptions that underlie traditional baseline also undermine it, implying a reduction in the signal‐to‐noise ratio. In other words, traditional baseline correction is statistically unnecessary and even undesirable. Including the baseline interval as a predictor in a GLM‐based statistical approach allows the data to determine how much baseline correction is needed, including both full traditional and no baseline correction as special cases. This reduces the amount of variance in the residual error term and thus has the potential to increase statistical power.
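    The GLM framing described in this abstract can be illustrated with a toy simulation (all variable names and numbers below are hypothetical, not from the paper): entering the baseline-window amplitude as a free predictor lets the data choose a correction weight anywhere between 0 (no baseline correction) and 1 (full traditional subtraction).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # simulated single trials

# Hypothetical data: a binary condition effect of 2 units on the
# post-stimulus mean amplitude, plus slow noise that leaks partially
# (weight 0.5) from the baseline window into the post-stimulus window.
condition = rng.integers(0, 2, n).astype(float)
baseline = rng.normal(0.0, 1.0, n)               # baseline-window amplitude
post = 1.0 + 2.0 * condition + 0.5 * baseline + rng.normal(0.0, 1.0, n)

# GLM with the baseline interval as a predictor: the fitted weight on
# `baseline` acts as a data-driven amount of baseline correction.
X = np.column_stack([np.ones(n), condition, baseline])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)

print(beta)  # roughly [1.0, 2.0, 0.5]
```

    A fitted baseline weight near 1 would reproduce traditional subtraction; a weight near 0 would mean no correction was needed — both arise as special cases, as the abstract notes.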
  • Alday, P. M. (2019). M/EEG analysis of naturalistic stories: a review from speech to language processing. Language, Cognition and Neuroscience, 34(4), 457-473. doi:10.1080/23273798.2018.1546882.

    Abstract

    M/EEG research using naturally spoken stories as stimuli has focused largely on speech and not language processing. The temporal resolution of M/EEG is a two-edged sword, allowing for the study of the fine acoustic structure of speech, yet easily overwhelmed by the temporal noise of variation in constituent length. Recent theories on the neural encoding of linguistic structure require the temporal resolution of M/EEG, yet suffer from confounds when studied on traditional, heavily controlled stimuli. Recent methodological advances allow for synthesising naturalistic designs and traditional, controlled designs into effective M/EEG research on naturalistic language. In this review, we highlight common threads throughout the at-times distinct research traditions of speech and language processing. We conclude by examining the tradeoffs and successes of three M/EEG studies on fully naturalistic language paradigms and the future directions they suggest.
  • Alday, P. M., & Kretzschmar, F. (2019). Speed-accuracy tradeoffs in brain and behavior: Testing the independence of P300 and N400 related processes in behavioral responses to sentence categorization. Frontiers in Human Neuroscience, 13: 285. doi:10.3389/fnhum.2019.00285.

    Abstract

    Although the N400 was originally discovered in a paradigm designed to elicit a P300 (Kutas and Hillyard, 1980), its relationship with the P300 and how both overlapping event-related potentials (ERPs) determine behavioral profiles is still elusive. Here we conducted an ERP (N = 20) and a multiple-response speed-accuracy tradeoff (SAT) experiment (N = 16) on distinct participant samples using an antonym paradigm (The opposite of black is white/nice/yellow with acceptability judgment). We hypothesized that SAT profiles incorporate processes of task-related decision-making (P300) and stimulus-related expectation violation (N400). We replicated previous ERP results (Roehm et al., 2007): in the correct condition (white), the expected target elicits a P300, while both expectation violations engender an N400 [reduced for related (yellow) vs. unrelated targets (nice)]. Using multivariate Bayesian mixed-effects models, we modeled the P300 and N400 responses simultaneously and found that correlation between residuals and subject-level random effects of each response window was minimal, suggesting that the components are largely independent. For the SAT data, we found that antonyms and unrelated targets had a similar slope (rate of increase in accuracy over time) and an asymptote at ceiling, while related targets showed both a lower slope and a lower asymptote, reaching only approximately 80% accuracy. Using a GLMM-based approach (Davidson and Martin, 2013), we modeled these dynamics using response time and condition as predictors. Replacing the predictor for condition with the averaged P300 and N400 amplitudes from the ERP experiment, we achieved identical model performance. We then examined the piecewise contribution of the P300 and N400 amplitudes with partial effects (see Hohenstein and Kliegl, 2015). Unsurprisingly, the P300 amplitude was the strongest contributor to the SAT-curve in the antonym condition and the N400 was the strongest contributor in the unrelated condition. In brief, this is the first demonstration of how overlapping ERP responses in one sample of participants predict behavioral SAT profiles of another sample. The P300 and N400 reflect two independent but interacting processes and the competition between these processes is reflected differently in behavioral parameters of speed and accuracy.

    Additional information

    Supplementary material
  • Alhama, R. G., Rowland, C. F., & Kidd, E. (2020). Evaluating word embeddings for language acquisition. In E. Chersoni, C. Jacobs, Y. Oseki, L. Prévot, & E. Santus (Eds.), Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics (pp. 38-42). Stroudsburg, PA, USA: Association for Computational Linguistics (ACL). doi:10.18653/v1/2020.cmcl-1.4.

    Abstract

    Continuous vector word representations (or word embeddings) have shown success in capturing semantic relations between words, as evidenced by evaluation against behavioral data of adult performance on semantic tasks (Pereira et al., 2016). Adult semantic knowledge is the endpoint of a language acquisition process; thus, a relevant question is whether these models can also capture the emerging word representations of young language learners. However, data on children’s semantic knowledge across development are scarce. In this paper, we propose to bridge this gap by using Age of Acquisition norms to evaluate word embeddings learnt from child-directed input. We present two methods that evaluate word embeddings in terms of (a) the semantic neighbourhood density of learnt words, and (b) convergence to adult word associations. We apply our methods to bag-of-words models, and find that (1) children acquire words with fewer semantic neighbours earlier, and (2) young learners only attend to very local context. These findings provide converging evidence for the validity of our methods in understanding the prerequisite features for a distributional model of word learning.
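    As a rough illustration of method (a) above, semantic neighbourhood density can be computed as the mean cosine similarity of a word to its nearest neighbours in embedding space. The vocabulary, vectors, and choice of k in this sketch are invented for illustration; the paper's actual models and parameters may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
words = ["dog", "cat", "ball", "milk", "story"]   # hypothetical vocabulary
emb = rng.normal(size=(len(words), 10))           # toy 10-dim word embeddings

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def neighbourhood_density(word, k=2):
    """Mean cosine similarity of `word` to its k nearest neighbours."""
    i = words.index(word)
    sims = sorted((cosine(emb[i], emb[j]) for j in range(len(words)) if j != i),
                  reverse=True)
    return sum(sims[:k]) / k

# Densities are bounded by cosine similarity and can only shrink as k grows,
# because ever-less-similar neighbours enter the average.
print(neighbourhood_density("dog", k=1))
print(neighbourhood_density("dog", k=4))
```

    Under the paper's finding (1), words with lower density values of this kind would tend to be acquired earlier.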
  • Alhama, R. G., & Zuidema, W. (2019). A review of computational models of basic rule learning: The neural-symbolic debate and beyond. Psychonomic Bulletin & Review, 26(4), 1174-1194. doi:10.3758/s13423-019-01602-z.

    Abstract

    We present a critical review of computational models of generalization of simple grammar-like rules, such as ABA and ABB. In particular, we focus on models attempting to account for the empirical results of Marcus et al. (Science, 283(5398), 77–80, 1999). In that study, evidence is reported of generalization behavior by 7-month-old infants, using an Artificial Language Learning paradigm. The authors fail to replicate this behavior in neural network simulations, and claim that this failure reveals inherent limitations of a whole class of neural networks: those that do not incorporate symbolic operations. A great number of computational models were proposed in follow-up studies, fuelling a heated debate about what is required for a model to generalize. Twenty years later, this debate is still not settled. In this paper, we review a large number of the proposed models. We present a critical analysis of those models, in terms of how they contribute to answering the most relevant questions raised by the experiment. After identifying which aspects require further research, we propose a list of desiderata for advancing our understanding of generalization.
  • Alhama, R. G., & Zuidema, W. (2018). Pre-Wiring and Pre-Training: What Does a Neural Network Need to Learn Truly General Identity Rules? Journal of Artificial Intelligence Research, 61, 927-946. doi:10.1613/jair.1.11197.

    Abstract

    In an influential paper (“Rule Learning by Seven-Month-Old Infants”), Marcus, Vijayan, Rao and Vishton claimed that connectionist models cannot account for human success at learning tasks that involved generalization of abstract knowledge such as grammatical rules. This claim triggered a heated debate, centered mostly around variants of the Simple Recurrent Network model. In our work, we revisit this unresolved debate and analyze the underlying issues from a different perspective. We argue that, in order to simulate human-like learning of grammatical rules, a neural network model should not be used as a tabula rasa, but rather, the initial wiring of the neural connections and the experience acquired prior to the actual task should be incorporated into the model. We present two methods that aim to provide such initial state: a manipulation of the initial connections of the network in a cognitively plausible manner (concretely, by implementing a “delay-line” memory), and a pre-training algorithm that incrementally challenges the network with novel stimuli. We implement such techniques in an Echo State Network (ESN), and we show that only when combining both techniques the ESN is able to learn truly general identity rules. Finally, we discuss the relation between these cognitively motivated techniques and recent advances in Deep Learning.
  • Alhama, R. G., Siegelman, N., Frost, R., & Armstrong, B. C. (2019). The role of information in visual word recognition: A perceptually-constrained connectionist account. In A. Goel, C. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 83-89). Austin, TX: Cognitive Science Society.

    Abstract

    Proficient readers typically fixate near the center of a word, with a slight bias towards word onset. We explore a novel account of this phenomenon based on combining information theory with visual perceptual constraints in a connectionist model of visual word recognition. This account posits that the amount of information content available for word identification varies across fixation locations and across languages, thereby explaining the overall fixation location bias in different languages, making the novel prediction that certain words are more readily identified when fixating at an atypical fixation location, and predicting specific cross-linguistic differences. We tested these predictions across several simulations in English and Hebrew, and in a pilot behavioral experiment. Results confirmed that the bias to fixate closer to word onset aligns with maximizing information in the visual signal, that some words are more readily identified at atypical fixation locations, and that these effects vary to some degree across languages.
  • Ambridge, B., Rowland, C. F., Theakston, A. L., & Twomey, K. E. (2020). Introduction. In C. F. Rowland, A. L. Theakston, B. Ambridge, & K. E. Twomey (Eds.), Current Perspectives on Child Language Acquisition: How children use their environment to learn (pp. 1-7). Amsterdam: John Benjamins. doi:10.1075/tilar.27.int.
  • Ambridge, B., Rowland, C. F., & Gummery, A. (2020). Teaching the unlearnable: A training study of complex yes/no questions. Language and Cognition, 12(2), 385-410. doi:10.1017/langcog.2020.5.

    Abstract

    A central question in language acquisition is how children master sentence types that they have seldom, if ever, heard. Here we report the findings of a pre-registered, randomised, single-blind intervention study designed to test the prediction that, for one such sentence type, complex questions (e.g., Is the crocodile who’s hot eating?), children could combine schemas learned, on the basis of the input, for complex noun phrases (the [THING] who’s [PROPERTY]) and simple questions (Is [THING] [ACTION]ing?) to yield a complex-question schema (Is [the [THING] who’s [PROPERTY]] [ACTION]ing?). Children aged 4;2 to 6;8 (M = 5;6, SD = 7.7 months) were trained on simple questions (e.g., Is the bird cleaning?) and either (Experimental group, N = 61) complex noun phrases (e.g., the bird who’s sad) or (Control group, N = 61) matched simple noun phrases (e.g., the sad bird). In general, the two groups did not differ on their ability to produce novel complex questions at test. However, the Experimental group did show (a) some evidence of generalising a particular complex NP schema (the [THING] who’s [PROPERTY] as opposed to the [THING] that’s [PROPERTY]) from training to test, (b) a lower rate of auxiliary-doubling errors (e.g., *Is the crocodile who’s hot is eating?), and (c) a greater ability to produce complex questions on the first test trial. We end by suggesting some different methods – specifically artificial language learning and syntactic priming – that could potentially be used to better test the present account.
  • Ameka, F. K. (1987). A comparative analysis of linguistic routines in two languages: English and Ewe. Journal of Pragmatics, 11(3), 299-326. doi:10.1016/0378-2166(87)90135-4.

    Abstract

    It is very widely acknowledged that linguistic routines are not only embodiments of the sociocultural values of speech communities that use them, but their knowledge and appropriate use also form an essential part of a speaker's communicative/pragmatic competence. Despite this, many studies concentrate more on describing the use of routines than on explaining the socio-cultural aspects of their meaning and the way they affect their use. It is the contention of this paper that there is a need to go beyond descriptions to explanations and explications of the use and meaning of routines that are culturally and socially revealing. This view is illustrated by a comparative analysis of functionally equivalent formulaic expressions in English and Ewe. The similarities are noted and the differences explained in terms of the socio-cultural traditions associated with the respective languages. It is argued that insights gained from such studies are valuable for cross-cultural understanding and communication as well as for second language pedagogy.
  • Amora, K. K., Garcia, R., & Gagarina, N. (2020). Tagalog adaptation of the Multilingual Assessment Instrument for Narratives: History, process and preliminary results. In N. Gagarina, & J. Lindgren (Eds.), New language versions of MAIN: Multilingual Assessment Instrument for Narratives – Revised (pp. 221-233).

    Abstract

    This paper briefly presents the current situation of bilingualism in the Philippines, specifically that of Tagalog-English bilingualism. More importantly, it describes the process of adapting the Multilingual Assessment Instrument for Narratives (LITMUS-MAIN) to Tagalog, the basis of Filipino, which is the country’s national language. Finally, the results of a pilot study conducted on Tagalog-English bilingual children and adults (N=27) are presented. The results showed that Story Structure is similar across the two languages and that it develops significantly with age.
  • Anastasopoulos, A., Lekakou, M., Quer, J., Zimianiti, E., DeBenedetto, J., & Chiang, D. (2018). Part-of-speech tagging on an endangered language: a parallel Griko-Italian Resource. In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018) (pp. 2529-2539).

    Abstract

    Most work on part-of-speech (POS) tagging is focused on high resource languages, or examines low-resource and active learning settings through simulated studies. We evaluate POS tagging techniques on an actual endangered language, Griko. We present a resource that contains 114 narratives in Griko, along with sentence-level translations in Italian, and provides gold annotations for the test set. Based on a previously collected small corpus, we investigate several traditional methods, as well as methods that take advantage of monolingual data or project cross-lingual POS tags. We show that the combination of a semi-supervised method with cross-lingual transfer is more appropriate for this extremely challenging setting, with the best tagger achieving an accuracy of 72.9%. With an applied active learning scheme, which we use to collect sentence-level annotations over the test set, we achieve improvements of more than 21 percentage points.
  • Anichini, M., De Heer Kloots, M., & Ravignani, A. (2020). Interactive rhythms in the wild, in the brain, and in silico. Canadian Journal of Experimental Psychology, 74(3), 170-175. doi:10.1037/cep0000224.

    Abstract

    There are some historical divisions in methods, rationales, and purposes between studies on comparative cognition and behavioural ecology. In turn, interaction between these two branches and studies from mathematics, computation, and neuroscience is uncommon. In this short piece, we attempt to build bridges among these disciplines. We present a series of interconnected vignettes meant to illustrate what a more interdisciplinary approach looks like when successful, and what its advantages are. Concretely, we focus on a recent topic, namely animal rhythms in interaction, studied under different approaches. We showcase five research efforts, which we believe successfully link five particular scientific areas of rhythm research, conceptualized as: social neuroscience, detailed rhythmic quantification, ontogeny, computational approaches, and spontaneous interactions. Our suggestions will hopefully spur a ‘Comparative rhythms in interaction’ field, which can integrate and capitalize on knowledge from zoology, comparative psychology, neuroscience, and computation.
  • Arana, S., Marquand, A., Hulten, A., Hagoort, P., & Schoffelen, J.-M. (2020). Sensory modality-independent activation of the brain network for language. The Journal of Neuroscience, 40(14), 2914-2924. doi:10.1523/JNEUROSCI.2271-19.2020.

    Abstract

    The meaning of a sentence can be understood, whether presented in written or spoken form. Therefore, it is highly probable that brain processes supporting language comprehension are at least partly independent of sensory modality. To identify where and when in the brain language processing is independent of sensory modality, we directly compared neuromagnetic brain signals of 200 human subjects (102 males) either reading or listening to sentences. We used multiset canonical correlation analysis to align individual subject data in a way that boosts those aspects of the signal that are common to all, allowing us to capture word-by-word signal variations, consistent across subjects and at a fine temporal scale. Quantifying this consistency in activation across both reading and listening tasks revealed a mostly left hemispheric cortical network. Areas showing consistent activity patterns include not only areas previously implicated in higher-level language processing, such as left prefrontal, superior and middle temporal areas and anterior temporal lobe, but also parts of the control-network as well as subcentral and more posterior temporal-parietal areas. Activity in this supramodal sentence processing network starts in temporal areas and rapidly spreads to the other regions involved. The findings not only indicate the involvement of a large network of brain areas in supramodal language processing, but also show that the linguistic information contained in the unfolding sentences modulates brain activity in a word-specific manner across subjects.
  • Araújo, S., Fernandes, T., & Huettig, F. (2019). Learning to read facilitates retrieval of phonological representations in rapid automatized naming: Evidence from unschooled illiterate, ex-illiterate, and schooled literate adults. Developmental Science, 22(4): e12783. doi:10.1111/desc.12783.

    Abstract

    Rapid automatized naming (RAN) of visual items is a powerful predictor of reading skills. However, the direction and locus of the association between RAN and reading is still largely unclear. Here we investigated whether literacy acquisition directly bolsters RAN efficiency for objects, adopting a strong methodological design, by testing three groups of adults matched in age and socioeconomic variables, who differed only in literacy/schooling: unschooled illiterate and ex-illiterate, and schooled literate adults. To investigate in a fine-grained manner whether and how literacy facilitates lexical retrieval, we orthogonally manipulated the word-form frequency (high vs. low) and phonological neighborhood density (dense vs. sparse) of the objects’ names. We observed that literacy experience enhances the automaticity with which visual stimuli (e.g., objects) can be retrieved and named: relative to readers (ex-illiterate and literate), illiterate adults performed worse on RAN. Crucially, the group difference was exacerbated and significant only for those items that were of low frequency and from sparse neighborhoods. These results thus suggest that, regardless of schooling and age at which literacy was acquired, learning to read facilitates the access to and retrieval of phonological representations, especially of difficult lexical items.
  • Armeni, K., Willems, R. M., Van den Bosch, A., & Schoffelen, J.-M. (2019). Frequency-specific brain dynamics related to prediction during language comprehension. NeuroImage, 198, 283-295. doi:10.1016/j.neuroimage.2019.04.083.

    Abstract

    The brain's remarkable capacity to process spoken language virtually in real time requires fast and efficient information processing machinery. In this study, we investigated how frequency-specific brain dynamics relate to models of probabilistic language prediction during auditory narrative comprehension. We recorded MEG activity while participants were listening to auditory stories in Dutch. Using trigram statistical language models, we estimated for every word in a story its conditional probability of occurrence. On the basis of word probabilities, we computed how unexpected the current word is given its context (word perplexity) and how (un)predictable the current linguistic context is (word entropy). We then evaluated whether source-reconstructed MEG oscillations at different frequency bands are modulated as a function of these language processing metrics. We show that theta-band source dynamics are increased in high relative to low entropy states, likely reflecting lexical computations. Beta-band dynamics are increased in situations of low word entropy and perplexity possibly reflecting maintenance of ongoing cognitive context. These findings lend support to the idea that the brain engages in the active generation and evaluation of predicted language based on the statistical properties of the input signal.

    Additional information

    Supplementary data
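    The two metrics named in the Armeni et al. abstract have standard information-theoretic definitions, sketched below with an invented trigram distribution (the context words and probabilities are illustrative, not from the study's Dutch language models): the unexpectedness of the current word corresponds to its surprisal, -log2 P(word | context), while word entropy measures uncertainty about the upcoming word.

```python
import math

# Hypothetical trigram conditional distribution P(next word | two-word context);
# the Dutch context and probabilities are invented for illustration.
trigram = {("de", "oude"): {"man": 0.6, "boom": 0.3, "fiets": 0.1}}

def surprisal(context, word):
    """-log2 P(word | context): high when the word is unexpected."""
    return -math.log2(trigram[context][word])

def entropy(context):
    """Shannon entropy of the next-word distribution: uncertainty about
    the upcoming word given the current linguistic context."""
    return -sum(p * math.log2(p) for p in trigram[context].values())

print(surprisal(("de", "oude"), "fiets"))  # low-probability word -> high surprisal
print(entropy(("de", "oude")))
```

    In the study, these per-word values (estimated from trigram models over whole stories) were used as regressors against source-reconstructed MEG oscillations.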
  • Arnhold, A., Porretta, V., Chen, A., Verstegen, S. A., Mok, I., & Järvikivi, J. (2020). (Mis) understanding your native language: Regional accent impedes processing of information status. Psychonomic Bulletin & Review, 27, 801-808. doi:10.3758/s13423-020-01731-w.

    Abstract

    Native-speaker listeners constantly predict upcoming units of speech as part of language processing, using various cues. However, this process is impeded in second-language listeners, as well as when the speaker has an unfamiliar accent. Whereas previous research has largely concentrated on the pronunciation of individual segments in foreign-accented speech, we show that regional accent impedes higher levels of language processing, making native listeners’ processing resemble that of second-language listeners.

    In Experiment 1, 42 native speakers of Canadian English followed instructions spoken in British English to move objects on a screen while their eye movements were tracked. Native listeners use prosodic cues to information status to disambiguate between two possible referents, a new and a previously mentioned one, before they have heard the complete word. By contrast, the Canadian participants, similarly to second-language speakers, were not able to make full use of prosodic cues in the way native British listeners do.

    In Experiment 2, 19 native speakers of Canadian English rated the British English instructions used in Experiment 1, as well as the same instructions spoken by a Canadian imitating the British English prosody. While information status had no effect for the Canadian imitations, the original stimuli received higher ratings when prosodic realization and information status of the referent matched than for mismatches, suggesting a native-like competence in these offline ratings.

    These findings underline the importance of expanding psycholinguistic models of second language/dialect processing and representation to include both prosody and regional variation.
  • Arshamian, A., Manko, P., & Majid, A. (2020). Limitations in odour simulation may originate from differential sensory embodiment. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20190273. doi:10.1098/rstb.2019.0273.

    Abstract

    Across diverse lineages, animals communicate using chemosignals, but only humans communicate about chemical signals. Many studies have observed that compared with other sensory modalities, communication about smells is relatively rare and not always reliable. Recent cross-cultural studies, on the other hand, suggest some communities are more olfactorily oriented than previously supposed. Nevertheless, across the globe a general trend emerges where olfactory communication is relatively hard. We suggest here that this is in part because olfactory representations are different in kind: they have a low degree of embodiment, and are not easily expressed as primitives, thereby limiting the mental manipulations that can be performed with them. New exploratory data from Dutch children (9–12 year-olds) and adults support the idea that mental imagery from olfaction is weak in comparison with vision and audition, and critically this is not affected by language development. Specifically, while visual and auditory imagery becomes more vivid with age, olfactory imagery shows no such development. This is consistent with the idea that olfactory representations are different in kind from representations from the other senses.

    Additional information

    Supplementary material
  • Arshamian, A., Iravani, B., Majid, A., & Lundström, J. N. (2018). Respiration modulates olfactory memory consolidation in humans. The Journal of Neuroscience, 38(48), 10286-10294. doi:10.1523/JNEUROSCI.3360-17.2018.

    Abstract

    In mammals, respiratory-locked hippocampal rhythms are implicated in the scaffolding and transfer of information between sensory and memory networks. These oscillations are entrained by nasal respiration and driven by the olfactory bulb. They then travel to the piriform cortex where they propagate further downstream to the hippocampus and modulate neural processes critical for memory formation. In humans, bypassing nasal airflow through mouth-breathing abolishes these rhythms and impacts encoding as well as recognition processes, thereby reducing memory performance. It has been hypothesized that similar behavior should be observed for the consolidation process, the stage between encoding and recognition, where memory is reactivated and strengthened. However, direct evidence for such an effect is lacking in humans and non-human animals. Here we tested this hypothesis by examining the effect of respiration on consolidation of episodic odor memory. In two separate sessions, female and male participants encoded odors followed by a one-hour awake resting consolidation phase where they either breathed solely through their nose or mouth. Immediately after the consolidation phase, memory for odors was tested. Recognition memory significantly increased during nasal respiration compared to mouth respiration during consolidation. These results provide the first evidence that respiration directly impacts consolidation of episodic events, and lend further support to the notion that core cognitive functions are modulated by the respiratory cycle.
  • Asano, Y., Yuan, C., Grohe, A.-K., Weber, A., Antoniou, M., & Cutler, A. (2020). Uptalk interpretation as a function of listening experience. In N. Minematsu, M. Kondo, T. Arai, & R. Hayashi (Eds.), Proceedings of Speech Prosody 2020 (pp. 735-739). Tokyo: ISCA. doi:10.21437/SpeechProsody.2020-150.

    Abstract

    The term “uptalk” describes utterance-final pitch rises that carry no sentence-structural information. Uptalk is usually dialectal or sociolectal, and Australian English (AusEng) is particularly known for this attribute. We ask here whether experience with an uptalk variety affects listeners’ ability to categorise rising pitch contours on the basis of the timing and height of their onset and offset. Listeners were two groups of native English speakers (AusEng and American English) and three groups of listeners with L2 English: one group with Mandarin as L1 and experience of listening to AusEng, one with German as L1 and experience of listening to AusEng, and one with German as L1 but no AusEng experience. They heard nouns (e.g. flower, piano) in the framework “Got a NOUN”, each ending with a pitch rise artificially manipulated on three contrasts: low vs. high rise onset, low vs. high rise offset, and early vs. late rise onset. Their task was to categorise the tokens as “question” or “statement”, and we analysed the effect of the pitch contrasts on their judgements. Only the native AusEng listeners were able to use the pitch contrasts systematically in making these categorisations.
  • Azar, Z. (2020). Effect of language contact on speech and gesture: The case of Turkish-Dutch bilinguals in the Netherlands. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Azar, Z., Backus, A., & Ozyurek, A. (2019). General and language specific factors influence reference tracking in speech and gesture in discourse. Discourse Processes, 56(7), 553-574. doi:10.1080/0163853X.2018.1519368.

    Abstract

    Referent accessibility influences expressions in speech and gestures in similar ways. Speakers mostly use richer forms such as noun phrases (NPs) in speech and gesture more when referents have low accessibility, whereas they use reduced forms such as pronouns more often and gesture less when referents have high accessibility. We investigated the relationships between speech and gesture during reference tracking in a pro-drop language—Turkish. Overt pronouns were not strongly associated with accessibility but with pragmatic context (i.e., marking similarity, contrast). Nevertheless, speakers gestured more when referents were re-introduced versus maintained and when referents were expressed with NPs versus pronouns. Pragmatic context did not influence gestures. Further, pronouns in low-accessibility contexts were accompanied with gestures—possibly for reference disambiguation—more often than previously found for non-pro-drop languages in such contexts. These findings enhance our understanding of the relationships between speech and gesture at the discourse level.
  • Azar, Z., Backus, A., & Ozyurek, A. (2020). Language contact does not drive gesture transfer: Heritage speakers maintain language specific gesture patterns in each language. Bilingualism: Language and Cognition, 23(2), 414-428. doi:10.1017/S136672891900018X.

    Abstract

    This paper investigates whether there are changes in gesture rate when speakers of two languages with different gesture rates (Turkish: high gesture; Dutch: low gesture) come into daily contact. We analyzed gestures produced by second-generation heritage speakers of Turkish in the Netherlands in each language, comparing them to monolingual baselines. We did not find differences between bilingual and monolingual speakers, possibly because bilinguals were proficient in both languages and used them frequently – in line with a usage-based approach to language. However, bilinguals produced more deictic gestures than monolinguals in both Turkish and Dutch, which we interpret as a bilingual strategy. Deictic gestures may help organize discourse by placing entities in gesture space and help reduce the cognitive load associated with being bilingual, e.g., inhibition cost. Therefore, gesture rate does not necessarily change in contact situations but might be modulated by frequency of language use, proficiency, and cognitive factors related to being bilingual.
  • Azar, Z., Ozyurek, A., & Backus, A. (2020). Turkish-Dutch bilinguals maintain language-specific reference tracking strategies in elicited narratives. International Journal of Bilingualism, 24(2), 376-409. doi:10.1177/1367006919838375.

    Abstract

    Aim:

    This paper examines whether second-generation Turkish heritage speakers in the Netherlands follow language-specific patterns of reference tracking in Turkish and Dutch, focusing on discourse status and pragmatic contexts as factors that may modulate the choice of referring expressions (REs), that is, the noun phrase (NP), overt pronoun and null pronoun.
    Methodology:

    Two short silent videos were used to elicit narratives from 20 heritage speakers of Turkish, both in Turkish and in Dutch. Monolingual baseline data were collected from 20 monolingually raised speakers of Turkish in Turkey and 20 monolingually raised speakers of Dutch in the Netherlands. We also collected language background data from bilinguals with an extensive survey.
    Data and analysis:

    Using generalised logistic mixed-effect regression, we analysed the influence of discourse status and pragmatic context on the choice of subject REs in Turkish and Dutch, comparing bilingual data to the monolingual baseline in each language.
    Findings:

    Heritage speakers used overt versus null pronouns in Turkish and stressed versus reduced pronouns in Dutch in pragmatically appropriate contexts. There was, however, a slight increase in the proportions of overt pronouns as opposed to NPs in Turkish and as opposed to null pronouns in Dutch. We suggest an explanation based on the degree of entrenchment of differential RE types in relation to discourse status as the possible source of the increase.
    Originality:

    This paper provides data from an understudied language pair in the domain of reference tracking in language contact situations. Unlike several studies of pronouns in language contact, we do not find differences across monolingual and bilingual speakers with regard to pragmatic constraints on overt pronouns in the minority pro-drop language.
    Significance:

    Our findings highlight the importance of taking language proficiency and use into account while studying bilingualism and combining formal approaches to language use with usage-based approaches for a more complete understanding of bilingual language production.
  • Badimala, P., Mishra, C., Venkataramana, R. K. M., Bukhari, S. S., & Dengel, A. (2019). A Study of Various Text Augmentation Techniques for Relation Classification in Free Text. In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods (pp. 360-367). Setúbal, Portugal: SciTePress Digital Library. doi:10.5220/0007311003600367.

    Abstract

    Data augmentation techniques have been widely used in visual recognition tasks, as it is easy to generate new data by simple and straightforward image transformations. However, when it comes to text data augmentation, it is difficult to find appropriate transformation techniques that also preserve the contextual and grammatical structure of language texts. In this paper, we explore various text data augmentation techniques in text space and word embedding space. We study the effect of various augmented datasets on the efficiency of different deep learning models for relation classification in text.
  • Bakker-Marshall, I., Takashima, A., Schoffelen, J.-M., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2018). Theta-band Oscillations in the Middle Temporal Gyrus Reflect Novel Word Consolidation. Journal of Cognitive Neuroscience, 30(5), 621-633. doi:10.1162/jocn_a_01240.

    Abstract

    Like many other types of memory formation, novel word learning benefits from an offline consolidation period after the initial encoding phase. A previous EEG study has shown that retrieval of novel words elicited more word-like-induced electrophysiological brain activity in the theta band after consolidation [Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. Changes in theta and beta oscillations as signatures of novel word consolidation. Journal of Cognitive Neuroscience, 27, 1286–1297, 2015]. This suggests that theta-band oscillations play a role in lexicalization, but it has not been demonstrated that this effect is directly caused by the formation of lexical representations. This study used magnetoencephalography to localize the theta consolidation effect to the left posterior middle temporal gyrus (pMTG), a region known to be involved in lexical storage. Both untrained novel words and words learned immediately before test elicited lower theta power during retrieval than existing words in this region. After a 24-hr consolidation period, the difference between novel and existing words decreased significantly, most strongly in the left pMTG. The magnitude of the decrease after consolidation correlated with an increase in behavioral competition effects between novel words and existing words with similar spelling, reflecting functional integration into the mental lexicon. These results thus provide new evidence that consolidation aids the development of lexical representations mediated by the left pMTG. Theta synchronization may enable lexical access by facilitating the simultaneous activation of distributed semantic, phonological, and orthographic representations that are bound together in the pMTG.
  • Balakrishnan, B., Verheijen, J., Lupo, A., Raymond, K., Turgeon, C., Yang, Y., Carter, K. L., Whitehead, K. J., Kozicz, T., Morava, E., & Lai, K. (2019). A novel phosphoglucomutase-deficient mouse model reveals aberrant glycosylation and early embryonic lethality. Journal of Inherited Metabolic Disease, 42(5), 998-1007. doi:10.1002/jimd.12110.

    Abstract

    Patients with phosphoglucomutase 1 (PGM1) deficiency, a congenital disorder of glycosylation (CDG), suffer from multiple disease phenotypes. Midline cleft defects are present at birth. Over time, additional clinical phenotypes emerge, which include severe hypoglycemia, hepatopathy, growth retardation, hormonal deficiencies, hemostatic anomalies, and frequently lethal, early-onset dilated cardiomyopathy and myopathy, reflecting the central roles of the enzyme in (glycogen) metabolism and glycosylation. To delineate the pathophysiology of the tissue-specific disease phenotypes, we constructed a constitutive Pgm2 (mouse ortholog of human PGM1)-knockout (KO) mouse model using CRISPR-Cas9 technology. After multiple crosses between heterozygous parents, we were unable to identify homozygous life births in 78 newborn pups (P = 1.59897E-06), suggesting an embryonic lethality phenotype in the homozygotes. Ultrasound studies of the course of pregnancy confirmed Pgm2-deficient pups succumb before E9.5. Oral galactose supplementation (9 mg/mL drinking water) did not rescue the lethality. Biochemical studies of tissues and skin fibroblasts harvested from heterozygous animals confirmed reduced Pgm2 enzyme activity and abundance, but no change in glycogen content. However, glycomics analyses in serum revealed an abnormal glycosylation pattern in the Pgm2(+/-) animals, similar to that seen in PGM1-CDG.
  • Baranova, J. (2020). Reasons for every-day activities. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Barendse, M. T., & Rosseel, Y. (2020). Multilevel modeling in the ‘wide format’ approach with discrete data: A solution for small cluster sizes. Structural Equation Modeling: A Multidisciplinary Journal, 27(5), 696-721. doi:10.1080/10705511.2019.1689366.

    Abstract

    In multilevel data, units at level 1 are nested in clusters at level 2, which in turn may be nested in even larger clusters at level 3, and so on. For continuous data, several authors have shown how to model multilevel data in a ‘wide’ or ‘multivariate’ format approach. We provide a general framework to analyze random intercept multilevel SEM in the ‘wide format’ (WF) and extend this approach for discrete data. In a simulation study, we vary response scale (binary, four response options), covariate presence (no, between-level, within-level), design (balanced, unbalanced), model misspecification (present, not present), and the number of clusters (small, large) to determine accuracy and efficiency of the estimated model parameters. With a small number of observations in a cluster, results indicate that the WF approach is a preferable approach to estimate multilevel data with discrete response options.
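    The ‘wide format’ reshaping at the heart of this approach – one row per level-2 cluster, with each level-1 unit's response promoted to its own column so it can be treated as a separate variable in a multivariate SEM – can be illustrated with a toy reshape. The data below are invented for illustration; the paper's actual models are fitted in SEM software, which is not shown here.

```python
from collections import defaultdict

# Long format: one row per level-1 unit (e.g., pupil), nested in clusters.
long_rows = [
    {"cluster": 1, "unit": 1, "y": 0},
    {"cluster": 1, "unit": 2, "y": 1},
    {"cluster": 2, "unit": 1, "y": 1},
    {"cluster": 2, "unit": 2, "y": 0},
]

# Wide format: one row per cluster; each unit's response becomes its own
# column, so a multivariate model can treat the units as variables.
wide = defaultdict(dict)
for row in long_rows:
    wide[row["cluster"]][f"y{row['unit']}"] = row["y"]

# wide now maps cluster -> {"y1": ..., "y2": ...}
```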
  • Barrett, R. L. C., Dawson, M., Dyrby, T. B., Krug, K., Ptito, M., D'Arceuil, H., Croxson, P. L., Johnson, P. J., Howells, H., Forkel, S. J., Dell'Acqua, F., & Catani, M. (2020). Differences in Frontal Network Anatomy Across Primate Species. The Journal of Neuroscience, 40(10), 2094-2107. doi:10.1523/JNEUROSCI.1650-18.2019.

    Abstract

    The frontal lobe is central to distinctive aspects of human cognition and behavior. Some comparative studies link this to a larger frontal cortex and even larger frontal white matter in humans compared with other primates, yet others dispute these findings. The discrepancies between studies could be explained by limitations of the methods used to quantify volume differences across species, especially when applied to white matter connections. In this study, we used a novel tractography approach to demonstrate that frontal lobe networks, extending within and beyond the frontal lobes, occupy 66% of total brain white matter in humans and 48% in three monkey species: vervets (Chlorocebus aethiops), rhesus macaque (Macaca mulatta) and cynomolgus macaque (Macaca fascicularis), all male. The simian–human differences in proportional frontal tract volume were significant for projection, commissural, and both intralobar and interlobar association tracts. Among the long association tracts, the greatest difference was found for tracts involved in motor planning, auditory memory, top-down control of sensory information, and visuospatial attention, with no significant differences in frontal limbic tracts important for emotional processing and social behaviour. In addition, we found that a nonfrontal tract, the anterior commissure, had a smaller volume fraction in humans, suggesting that the disproportionally large volume of human frontal lobe connections is accompanied by a reduction in the proportion of some nonfrontal connections. These findings support a hypothesis of an overall rearrangement of brain connections during human evolution.
  • Barthel, M. (2020). Speech planning in dialogue: Psycholinguistic studies of the timing of turn taking. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Barthel, M., & Levinson, S. C. (2020). Next speakers plan word forms in overlap with the incoming turn: Evidence from gaze-contingent switch task performance. Language, Cognition and Neuroscience, 35(9), 1183-1202. doi:10.1080/23273798.2020.1716030.

    Abstract

    To ensure short gaps between turns in conversation, next speakers regularly start planning their utterance in overlap with the incoming turn. Three experiments investigate which stages of utterance planning are executed in overlap. E1 establishes effects of associative and phonological relatedness of pictures and words in a switch-task from picture naming to lexical decision. E2 focuses on effects of phonological relatedness and investigates potential shifts in the time-course of production planning during background speech. E3 required participants to verbally answer questions as a base task. In critical trials, however, participants switched to visual lexical decision just after they began planning their answer. The task-switch was time-locked to participants' gaze for response planning. Results show that word form encoding is done as early as possible and not postponed until the end of the incoming turn. Hence, planning a response during the incoming turn is executed at least until word form activation.

  • Barthel, M., & Sauppe, S. (2019). Speech planning at turn transitions in dialogue is associated with increased processing load. Cognitive Science, 43(7): e12768. doi:10.1111/cogs.12768.

    Abstract

    Speech planning is a sophisticated process. In dialog, it regularly starts in overlap with an incoming turn by a conversation partner. We show that planning spoken responses in overlap with incoming turns is associated with higher processing load than planning in silence. In a dialogic experiment, participants took turns with a confederate describing lists of objects. The confederate’s utterances (to which participants responded) were pre‐recorded and varied in whether they ended in a verb or an object noun and whether this ending was predictable or not. We found that response planning in overlap with sentence‐final verbs evokes larger task‐evoked pupillary responses, while end predictability had no effect. This finding indicates that planning in overlap leads to higher processing load for next speakers in dialog and that next speakers do not proactively modulate the time course of their response planning based on their predictions of turn endings. The turn‐taking system exerts pressure on the language processing system by pushing speakers to plan in overlap despite the ensuing increase in processing load.
  • Basnakova, J. (2019). Beyond the language given: The neurobiological infrastructure for pragmatic inferencing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bauer, B. L. M. (2020). Language sources and the reconstruction of early languages: Sociolinguistic discrepancies and evolution in Old French grammar. Diachronica, 37(3), 273-317. doi:10.1075/dia.18026.bau.

    Abstract

    This article argues that with the original emphasis on dialectal variation, using primarily literary texts from various regions, analysis of Old French has routinely neglected social variation, providing an incomplete picture of its grammar. Accordingly, Old French has been identified as typically featuring e.g. “pro-drop”, brace constructions, and single negation. Yet examination of these features in informal texts, as opposed to the formal texts typically dealt with, demonstrates that these documents do not corroborate the picture of Old French that is commonly presented in the linguistic literature. Our reconstruction of Old French grammar therefore needs adjustment and further refinement, in particular by implementing sociolinguistic data. With a broader scope, the call for inclusion of sociolinguistic variation may resonate in the investigation of other early languages, resulting in the reassessment of the sources used, and reopening the debate about social variation in dead languages and its role in language evolution.

  • Bauer, B. L. M. (2020). Appositive compounds in dialectal and sociolinguistic varieties of French. In M. Maiden, & S. Wolfe (Eds.), Variation and change in Gallo-Romance (pp. 326-346). Oxford: Oxford University Press.
  • Bauer, B. L. M. (1987). L’évolution des structures morphologiques et syntaxiques du latin au français. Travaux de linguistique, 14-15, 95-107.
  • Bauer, B. L. M. (2019). Language contact and language borrowing? Compound verb forms in the Old French translation of the Gospel of St. Mark. Belgian Journal of Linguistics, 33, 210-250. doi:10.1075/bjl.00028.bau.

    Abstract

    This study investigates the potential influence of Latin syntax on the development of analytic verb forms in a well-defined and concrete instance of language contact, the Old French translation of a Latin Gospel. The data show that the formation of verb forms in the Old French was remarkably independent from the Latin original. While the Old French text closely follows the narrative of the Latin Gospel, its usage of compound verb forms is not dictated by the source text, as reflected e.g. in the quasi-omnipresence of the sequence finite verb + past participle, which – with a few exceptions – all trace back to a different structure in the Latin text. Another important innovative difference in the Old French is the widespread use of aveir ‘have’ as an auxiliary, unknown in Latin. The article examines in detail the relation between the verbal forms in the two texts, showing that the translation is in line with the grammar of Old French. The usage of compound verb forms in the Old French Gospel is therefore autonomous rather than contact-stimulated, let alone contact-induced. The results challenge Blatt’s (1957) assumption identifying compound verb forms as a shared feature in European languages that should be ascribed to Latin influence.

  • Bauer, B. L. M., & Mota, M. (2018). On language, cognition, and the brain: An interview with Peter Hagoort. Sobre linguagem, cognição e cérebro: Uma entrevista com Peter Hagoort. Revista da Anpoll, (45), 291-296. doi:10.18309/anp.v1i45.1179.

    Abstract

    Managing Director of the Max Planck Institute for Psycholinguistics, founding Director of the Donders Centre for Cognitive Neuroimaging (DCCN, 1999), and professor of Cognitive Neuroscience at Radboud University, all located in Nijmegen, the Netherlands, PETER HAGOORT examines how the brain controls language production and comprehension. He was one of the first to integrate psychological theory and models from neuroscience in an attempt to understand how the human language faculty is instantiated in the brain.
  • Beattie, G. W., Cutler, A., & Pearson, M. (1982). Why is Mrs Thatcher interrupted so often? [Letters to Nature]. Nature, 300, 744-747. doi:10.1038/300744a0.

    Abstract

    If a conversation is to proceed smoothly, the participants have to take turns to speak. Studies of conversation have shown that there are signals which speakers give to inform listeners that they are willing to hand over the conversational turn [1–4]. Some of these signals are part of the text (for example, completion of syntactic segments), some are non-verbal (such as completion of a gesture), but most are carried by the pitch, timing and intensity pattern of the speech; for example, both pitch and loudness tend to drop particularly low at the end of a speaker's turn. When one speaker interrupts another, the two can be said to be disputing who has the turn. Interruptions can occur because one participant tries to dominate or disrupt the conversation. But it could also be the case that mistakes occur in the way these subtle turn-yielding signals are transmitted and received. We demonstrate here that many interruptions in an interview with Mrs Margaret Thatcher, the British Prime Minister, occur at points where independent judges agree that her turn appears to have finished. It is suggested that she is unconsciously displaying turn-yielding cues at certain inappropriate points. The turn-yielding cues responsible are identified.
  • Becker, M., Devanna, P., Fisher, S. E., & Vernes, S. C. (2018). Mapping of Human FOXP2 Enhancers Reveals Complex Regulation. Frontiers in Molecular Neuroscience, 11: 47. doi:10.3389/fnmol.2018.00047.

    Abstract

    Mutations of the FOXP2 gene cause a severe speech and language disorder, providing a molecular window into the neurobiology of language. Individuals with FOXP2 mutations have structural and functional alterations affecting brain circuits that overlap with sites of FOXP2 expression, including regions of the cortex, striatum, and cerebellum. FOXP2 displays complex patterns of expression in the brain, as well as in non-neuronal tissues, suggesting that sophisticated regulatory mechanisms control its spatio-temporal expression. However, to date, little is known about the regulation of FOXP2 or the genomic elements that control its expression. Using chromatin conformation capture (3C), we mapped the human FOXP2 locus to identify putative enhancer regions that engage in long-range interactions with the promoter of this gene. We demonstrate the ability of the identified enhancer regions to drive gene expression. We also show regulation of the FOXP2 promoter and enhancer regions by candidate regulators – FOXP family and TBR1 transcription factors. These data point to regulatory elements that may contribute to the temporal- or tissue-specific expression patterns of human FOXP2. Understanding the upstream regulatory pathways controlling FOXP2 expression will bring new insight into the molecular networks contributing to human language and related disorders.
  • Beckmann, N. S., Indefrey, P., & Petersen, W. (2018). Words count, but thoughts shift: A frame-based account to conceptual shifts in noun countability. Voprosy Kognitivnoy Lingvistiki (Issues of Cognitive Linguistics ), 2, 79-89. doi:10.20916/1812-3228-2018-2-79-89.

    Abstract

    The current paper proposes a frame-based account of conceptual shifts in the countability domain. We interpret shifts in noun countability as syntactically driven metonymy. Inserting a noun in an incongruent noun phrase, that is, combining it with a determiner of the other countability class, gives rise to a re-interpretation of the noun referent. We assume lexical entries to be three-fold frame complexes connecting conceptual knowledge representations with language-specific form representations via a lemma level. Empirical data from a lexical decision experiment are presented that support the assumption of such a lemma level connecting perceptual input of linguistic signs to conceptual knowledge.
  • Beierholm, U., Rohe, T., Ferrari, A., Stegle, O., & Noppeney, U. (2020). Using the past to estimate sensory uncertainty. eLife, 9: e54172. doi:10.7554/eLife.54172.

    Abstract

    To form a more reliable percept of the environment, the brain needs to estimate its own sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. We evaluated this assumption in four psychophysical experiments, in which human observers localized auditory signals that were presented synchronously with spatially disparate visual signals. Critically, the visual noise changed dynamically over time, either continuously or with intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory uncertainty estimates that combine information from past and current signals, consistent with an optimal Bayesian learner that can be approximated by exponential discounting. Our results challenge leading models of perceptual inference where sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.
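    The two computations the abstract describes – an uncertainty estimate that exponentially discounts past noise samples, and audiovisual integration weighted by that uncertainty – can be sketched in a few lines. This is a minimal illustration of the general idea, with an assumed functional form and invented parameter values, not the authors' fitted model.

```python
def update_uncertainty(prev_var, new_sq_deviation, decay=0.8):
    # Exponential discounting: older evidence decays geometrically,
    # new evidence enters with weight (1 - decay).
    return decay * prev_var + (1 - decay) * new_sq_deviation

def fuse(mu_aud, var_aud, mu_vis, var_vis):
    # Reliability-weighted (inverse-variance) audiovisual integration.
    w_aud = (1 / var_aud) / (1 / var_aud + 1 / var_vis)
    return w_aud * mu_aud + (1 - w_aud) * mu_vis

# Visual noise jumps from low to high variance; the running estimate
# tracks the change gradually rather than instantaneously.
visual_var = 1.0
for sq_dev in [1.0, 1.0, 9.0, 9.0, 9.0]:
    visual_var = update_uncertainty(visual_var, sq_dev)

# As the estimated visual variance grows, the fused location estimate
# leans more heavily on the (here more reliable) auditory signal.
location = fuse(mu_aud=0.0, var_aud=1.0, mu_vis=10.0, var_vis=visual_var)
```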
  • Bekemeier, N., Brenner, D., Klepp, A., Biermann-Ruben, K., & Indefrey, P. (2019). Electrophysiological correlates of concept type shifts. PLoS One, 14(3): e0212624. doi:10.1371/journal.pone.0212624.

    Abstract

    A recent semantic theory of nominal concepts by Löbner [1] posits that – due to their inherent uniqueness and relationality properties – noun concepts can be classified into four concept types (CTs): sortal, individual, relational, functional. For sortal nouns the default determination is indefinite (a stone), for individual nouns it is definite (the sun), for relational and functional nouns it is possessive (his ear, his father). Incongruent determination leads to a concept type shift: his father (functional concept: unique, relational) – a father (sortal concept: non-unique, non-relational). Behavioral studies on CT shifts have demonstrated a CT congruence effect, with congruent determiners triggering faster lexical decision times on the subsequent noun than incongruent ones [2, 3]. The present ERP study investigated electrophysiological correlates of congruent and incongruent determination in German noun phrases, and specifically, whether the CT congruence effect could be indexed by such classic ERP components as N400, LAN or P600. If incongruent determination affects the lexical retrieval or semantic integration of the noun, it should be reflected in the amplitude of the N400 component. If, however, CT congruence is processed by the same neuronal mechanisms that underlie morphosyntactic processing, incongruent determination should trigger LAN and/or P600. These predictions were tested in two ERP studies. In Experiment 1, participants just listened to noun phrases. In Experiment 2, they performed a well-formedness judgment task. The processing of (in)congruent CTs (his sun vs. the sun) was compared to the processing of morphosyntactic and semantic violations in control conditions. Whereas the control conditions elicited classic electrophysiological violation responses (N400, LAN, & P600), CT-incongruences did not. Instead they showed novel concept-type-specific response patterns. The absence of the classic ERP components suggests that CT-incongruent determination is not perceived as a violation of the semantic or morphosyntactic structure of the noun phrase.

  • Belpaeme, T., Vogt, P., Van den Berghe, R., Bergmann, K., Göksun, T., De Haas, M., Kanero, J., Kennedy, J., Küntay, A. C., Oudgenoeg-Paz, O., Papadopoulos, F., Schodde, T., Verhagen, J., Wallbridge, C. D., Willemsen, B., De Wit, J., Geçkin, V., Hoffmann, L., Kopp, S., Krahmer, E., Mamus, E., Montanier, J.-M., Oranç, C., & Pandey, A. K. (2018). Guidelines for designing social robots as second language tutors. International Journal of Social Robotics, 10(3), 325-341. doi:10.1007/s12369-018-0467-6.

    Abstract

    In recent years, it has been suggested that social robots have potential as tutors and educators for both children and adults. While robots have been shown to be effective in teaching knowledge and skill-based topics, we wish to explore how social robots can be used to tutor a second language to young children. As language learning relies on situated, grounded and social learning, in which interaction and repeated practice are central, social robots hold promise as educational tools for supporting second language learning. This paper surveys the developmental psychology of second language learning and suggests an agenda to study how core concepts of second language learning can be taught by a social robot. It suggests guidelines for designing robot tutors based on observations of second language learning in human–human scenarios, various technical aspects and early studies regarding the effectiveness of social robots as second language tutors.
  • Benítez-Burraco, A., & Dediu, D. (2018). Ancient DNA and language evolution: A special section. Journal of Language Evolution, 3(1), 47-48. doi:10.1093/jole/lzx024.
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Listening with great expectations: An investigation of word form anticipations in naturalistic speech. In Proceedings of Interspeech 2019 (pp. 2265-2269). doi:10.21437/Interspeech.2019-2741.

    Abstract

    The event-related potential (ERP) component named phonological mismatch negativity (PMN) arises when listeners hear an unexpected word form in a spoken sentence [1]. The PMN is thought to reflect the mismatch between expected and perceived auditory speech input. In this paper, we use the PMN to test a central premise in the predictive coding framework [2], namely that the mismatch between prior expectations and sensory input is an important mechanism of perception. We test this with natural speech materials containing approximately 50,000 word tokens. The corresponding EEG-signal was recorded while participants (n = 48) listened to these materials. Following [3], we quantify the mismatch with two word probability distributions (WPD): a WPD based on preceding context, and a WPD that is additionally updated based on the incoming audio of the current word. We use the between-WPD cross entropy for each word in the utterances and show that a higher cross entropy correlates with a more negative PMN. Our results show that listeners anticipate auditory input while processing each word in naturalistic speech. Moreover, complementing previous research, we show that predictive language processing occurs across the whole probability spectrum.
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Quantifying expectation modulation in human speech processing. In Proceedings of Interspeech 2019 (pp. 2270-2274). doi:10.21437/Interspeech.2019-2685.

    Abstract

    The mismatch between top-down predicted and bottom-up perceptual input is an important mechanism of perception according to the predictive coding framework (Friston, [1]). In this paper we develop and validate a new information-theoretic measure that quantifies the mismatch between expected and observed auditory input during speech processing. We argue that such a mismatch measure is useful for the study of speech processing. To compute the mismatch measure, we use naturalistic speech materials containing approximately 50,000 word tokens. For each word token we first estimate the prior word probability distribution with the aid of statistical language modelling, and next use automatic speech recognition to update this word probability distribution based on the unfolding speech signal. We validate the mismatch measure with multiple analyses, and show that the auditory-based update improves the probability of the correct word and lowers the uncertainty of the word probability distribution. Based on these results, we argue that it is possible to explicitly estimate the mismatch between predicted and perceived speech input with the cross entropy between word expectations computed before and after an auditory update.
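    The between-distribution cross entropy used as a mismatch measure in the two Bentum et al. Interspeech papers can be illustrated with a minimal sketch. The toy distributions and numbers below are hypothetical; in the papers, the prior comes from a statistical language model and the update from automatic speech recognition.

```python
import math

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum over words w of p(w) * log2(q(w)), in bits.
    p: word probability distribution after the auditory update,
    q: prior word probability distribution from preceding context.
    eps guards against log of zero for words q assigns no mass to."""
    return -sum(pw * math.log2(max(q.get(w, 0.0), eps))
                for w, pw in p.items() if pw > 0)

# Hypothetical example: the incoming audio sharpens the distribution
prior   = {"cat": 0.5, "cap": 0.3, "can": 0.2}   # context-only expectation
updated = {"cat": 0.8, "cap": 0.1, "can": 0.1}   # after the auditory update
mismatch = cross_entropy(updated, prior)
```

By Gibbs' inequality the cross entropy is never smaller than the entropy of the updated distribution itself, so larger values indicate a larger gap between what was expected and what the audio supports.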
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Do speech registers differ in the predictability of words? International Journal of Corpus Linguistics, 24(1), 98-130. doi:10.1075/ijcl.17062.ben.

    Abstract

    Previous research has demonstrated that language use can vary depending on the context of situation. The present paper extends this finding by comparing word predictability differences between 14 speech registers ranging from highly informal conversations to read-aloud books. We trained 14 statistical language models to compute register-specific word predictability and trained a register classifier on the perplexity score vector of the language models. The classifier distinguishes perfectly between samples from all speech registers and this result generalizes to unseen materials. We show that differences in vocabulary and sentence length cannot explain the speech register classifier’s performance. The combined results show that speech registers differ in word predictability.
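    The register-classification setup can be sketched as follows: score a speech sample under each register-specific language model and use the resulting perplexity vector to decide the register. This is a toy unigram sketch with hypothetical models and a simple lowest-perplexity decision rule; the paper trains 14 full language models and a separate classifier on the perplexity vectors.

```python
import math

def perplexity(tokens, model, eps=1e-12):
    """Unigram perplexity of a token sequence under a word-probability model:
    2 ** (mean negative log2-probability per token)."""
    log_prob = sum(math.log2(max(model.get(t, 0.0), eps)) for t in tokens)
    return 2 ** (-log_prob / len(tokens))

# Hypothetical register-specific unigram models
models = {
    "conversation": {"uh": 0.3, "yeah": 0.3, "the": 0.2, "court": 0.2},
    "read_books":   {"uh": 0.01, "yeah": 0.01, "the": 0.49, "court": 0.49},
}
sample = ["uh", "yeah", "the"]
# Perplexity vector; here we simply pick the least-surprised model
vector = {reg: perplexity(sample, m) for reg, m in models.items()}
predicted = min(vector, key=vector.get)
```

The intuition matches the paper's finding: if registers differ in word predictability, a model trained on the matching register assigns the sample higher probability, and the perplexity vector separates the registers.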
  • Bentz, C., Dediu, D., Verkerk, A., & Jäger, G. (2018). Language family trees reflect geography and demography beyond neutral drift. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 38-40). Toruń, Poland: NCU Press. doi:10.12775/3991-1.006.
  • Bentz, C., Dediu, D., Verkerk, A., & Jäger, G. (2018). The evolution of language families is shaped by the environment beyond neutral drift. Nature Human Behaviour, 2, 816-821. doi:10.1038/s41562-018-0457-6.

    Abstract

    There are more than 7,000 languages spoken in the world today. It has been argued that the natural and social environment of languages drives this diversity. However, a fundamental question is how strong are environmental pressures, and does neutral drift suffice as a mechanism to explain diversification? We estimate the phylogenetic signals of geographic dimensions, distance to water, climate and population size on more than 6,000 phylogenetic trees of 46 language families. Phylogenetic signals of environmental factors are generally stronger than expected under the null hypothesis of no relationship with the shape of family trees. Importantly, they are also—in most cases—not compatible with neutral drift models of constant-rate change across the family tree branches. Our results suggest that language diversification is driven by further adaptive and non-adaptive pressures. Language diversity cannot be understood without modelling the pressures that physical, ecological and social factors exert on language users in different environments across the globe.
  • Bergelson*, E., Casillas*, M., Soderstrom, M., Seidl, A., Warlaumont, A. S., & Amatuni, A. (2019). What Do North American Babies Hear? A large-scale cross-corpus analysis. Developmental Science, 22(1): e12724. doi:10.1111/desc.12724.

    Abstract

    * indicates joint first authorship. A range of demographic variables influence how much speech young children hear. However, because studies have used vastly different sampling methods, quantitative comparison of interlocking demographic effects has been nearly impossible, across or within studies. We harnessed a unique collection of existing naturalistic, day-long recordings from 61 homes across four North American cities to examine language input as a function of age, gender, and maternal education. We analyzed adult speech heard by 3- to 20-month-olds who wore audio recorders for an entire day. We annotated speaker gender and speech register (child-directed or adult-directed) for 10,861 utterances from female and male adults in these recordings. Examining age, gender, and maternal education collectively in this ecologically-valid dataset, we find several key results. First, the speaker gender imbalance in the input is striking: children heard 2-3 times more speech from females than males. Second, children in higher-maternal-education homes heard more child-directed speech than those in lower-maternal-education homes. Finally, our analyses revealed a previously unreported effect: the proportion of child-directed speech in the input increases with age, due to a decrease in adult-directed speech with age. This large-scale analysis is an important step forward in collectively examining demographic variables that influence early development, made possible by pooled, comparable, day-long recordings of children's language environments. The audio recordings, annotations, and annotation software are readily available for re-use and re-analysis by other researchers.

    Additional information

    desc12724-sup-0001-supinfo.pdf
  • Bergmann, C., & Cristia, A. (2018). Environmental influences on infants’ native vowel discrimination: The case of talker number in daily life. Infancy, 23(4), 484-501. doi:10.1111/infa.12232.

    Abstract

    Both quality and quantity of speech from the primary caregiver have been found to impact language development. A third aspect of the input has been largely ignored: the number of talkers who provide input. Some infants spend most of their waking time with only one person; others hear many different talkers. Even if the very same words are spoken the same number of times, the pronunciations can be more variable when several talkers pronounce them. Is language acquisition affected by the number of people who provide input? To shed light on the possible link between how many people provide input in daily life and infants’ native vowel discrimination, three age groups were tested: 4-month-olds (before attunement to native vowels), 6-month-olds (at the cusp of native vowel attunement) and 12-month-olds (well attuned to the native vowel system). No relationship was found between talker number and native vowel discrimination skills in 4- and 6-month-olds, who are overall able to discriminate the vowel contrast. At 12 months, we observe a small positive relationship, but further analyses reveal that the data are also compatible with the null hypothesis of no relationship. Implications in the context of infant language acquisition and cognitive development are discussed.
  • Bergmann, C., Tsuji, S., Piccinini, P. E., Lewis, M. L., Braginsky, M. B., Frank, M. C., & Cristia, A. (2018). Promoting replicability in developmental research through meta-analyses: Insights from language acquisition research. Child Development, 89(6), 1996-2009. doi:10.1111/cdev.13079.

    Abstract

    Previous work suggests key factors for replicability, a necessary feature for theory building, include statistical power and appropriate research planning. These factors are examined by analyzing a collection of 12 standardized meta-analyses on language development between birth and 5 years. With a median effect size of Cohen's d = 0.45 and a typical sample size of 18 participants, most research is underpowered (range: 6%-99%; median 44%), and calculating power based on seminal publications is not a suitable strategy. Method choice can be improved, as shown in analyses of exclusion rates and effect size as a function of method. The article ends with a discussion on how to increase replicability in both language acquisition studies specifically and developmental research more generally.
  • Berkers, R. M. W. J., Ekman, M., van Dongen, E. V., Takashima, A., Barth, M., Paller, K. A., & Fernández, G. (2018). Cued reactivation during slow-wave sleep induces brain connectivity changes related to memory stabilization. Scientific Reports, 8: 16958. doi:10.1038/s41598-018-35287-6.

    Abstract

    Memory reprocessing following acquisition enhances memory consolidation. Specifically, neural activity during encoding is thought to be ‘replayed’ during subsequent slow-wave sleep. Such memory replay is thought to contribute to the functional reorganization of neural memory traces. In particular, memory replay may facilitate the exchange of information across brain regions by inducing a reconfiguration of connectivity across the brain. Memory reactivation can be induced by external cues through a procedure known as “targeted memory reactivation”. Here, we analysed data from a published study with auditory cues used to reactivate visual object-location memories during slow-wave sleep. We characterized effects of memory reactivation on brain network connectivity using graph-theory. We found that cue presentation during slow-wave sleep increased global network integration of occipital cortex, a visual region that was also active during retrieval of object locations. Although cueing did not have an overall beneficial effect on the retention of cued versus uncued associations, individual differences in overnight memory stabilization were related to enhanced network integration of occipital cortex. Furthermore, occipital cortex displayed enhanced connectivity with mnemonic regions, namely the hippocampus, parahippocampal gyrus, thalamus and medial prefrontal cortex during cue sound presentation. Together, these results suggest a neural mechanism where cue-induced replay during sleep increases integration of task-relevant perceptual regions with mnemonic regions. This cross-regional integration may be instrumental for the consolidation and long-term storage of enduring memories.

    Additional information

    41598_2018_35287_MOESM1_ESM.doc
  • Bertamini, M., Rampone, G., Makin, A. D. J., & Jessop, A. (2019). Symmetry preference in shapes, faces, flowers and landscapes. PeerJ, 7: e7078. doi:10.7717/peerj.7078.

    Abstract

    Most people like symmetry, and symmetry has been extensively used in visual art and architecture. In this study, we compared preference for images of abstract and familiar objects in the original format or when containing perfect bilateral symmetry. We created pairs of images for different categories: male faces, female faces, polygons, smoothed version of the polygons, flowers, and landscapes. This design allows us to compare symmetry preference in different domains. Each observer saw all categories randomly interleaved but saw only one of the two images in a pair. After recording preference, we recorded a rating of how salient the symmetry was for each image, and measured how quickly observers could decide which of the two images in a pair was symmetrical. Results reveal a general preference for symmetry in the case of shapes and faces. For landscapes, natural (no perfect symmetry) images were preferred. Correlations with judgments of saliency were present but generally low, and for landscapes the salience of symmetry was negatively related to preference. However, even within the category where symmetry was not liked (landscapes), the separate analysis of original and modified stimuli showed an interesting pattern: Salience of symmetry was correlated positively (artificial) or negatively (original) with preference, suggesting different effects of symmetry within the same class of stimuli based on context and categorization.

    Additional information

    Supplemental Information
  • Bidgood, A., Pine, J. M., Rowland, C. F., & Ambridge, B. (2020). Syntactic representations are both abstract and semantically constrained: Evidence from children’s and adults’ comprehension and production/priming of the English passive. Cognitive Science, 44(9): e12892. doi:10.1111/cogs.12892.

    Abstract

    All accounts of language acquisition agree that, by around age 4, children’s knowledge of grammatical constructions is abstract, rather than tied solely to individual lexical items. The aim of the present research was to investigate, focusing on the passive, whether children’s and adults’ performance is additionally semantically constrained, varying according to the distance between the semantics of the verb and those of the construction. In a forced‐choice pointing study (Experiment 1), both 4‐ to 6‐year olds (N = 60) and adults (N = 60) showed support for the prediction of this semantic construction prototype account of an interaction such that the observed disadvantage for passives as compared to actives (i.e., fewer correct points/longer reaction time) was greater for experiencer‐theme verbs than for agent‐patient and theme‐experiencer verbs (e.g., Bob was seen/hit/frightened by Wendy). Similarly, in a production/priming study (Experiment 2), both 4‐ to 6‐year olds (N = 60) and adults (N = 60) produced fewer passives for experiencer‐theme verbs than for agent‐patient/theme‐experiencer verbs. We conclude that these findings are difficult to explain under accounts based on the notion of A(rgument) movement or of a monostratal, semantics‐free, level of syntax, and instead necessitate some form of semantic construction prototype account.

    Additional information

    Supplementary material
  • Bielczyk, N. Z., Piskała, K., Płomecka, M., Radziński, P., Todorova, L., & Foryś, U. (2019). Time-delay model of perceptual decision making in cortical networks. PLoS One, 14: e0211885. doi:10.1371/journal.pone.0211885.

    Abstract

    It is known that cortical networks operate on the edge of instability, in which oscillations can appear. However, the influence of this dynamic regime on performance in decision making is not well understood. In this work, we propose a population model of decision making based on a winner-take-all mechanism. Using this model, we demonstrate that local slow inhibition within the competing neuronal populations can lead to a Hopf bifurcation. At the edge of instability, the system exhibits ambiguity in the decision making, which can account for the perceptual switches observed in human experiments. We further validate this model with fMRI datasets from an experiment on semantic priming in the perception of ambivalent (male versus female) faces. We demonstrate that the model can correctly predict the drop in the variance of the BOLD signal within the Superior Parietal Area and Inferior Parietal Area while participants watch ambiguous visual stimuli.

    Additional information

    supporting information
  • Blasi, D. E., Moran, S., Moisik, S. R., Widmer, P., Dediu, D., & Bickel, B. (2019). Human sound systems are shaped by post-Neolithic changes in bite configuration. Science, 363(6432): eaav3218. doi:10.1126/science.aav3218.

    Abstract

    Linguistic diversity, now and in the past, is widely regarded to be independent of biological changes that took place after the emergence of Homo sapiens. We show converging evidence from paleoanthropology, speech biomechanics, ethnography, and historical linguistics that labiodental sounds (such as “f” and “v”) were innovated after the Neolithic. Changes in diet attributable to food-processing technologies modified the human bite from an edge-to-edge configuration to one that preserves adolescent overbite and overjet into adulthood. This change favored the emergence and maintenance of labiodentals. Our findings suggest that language is shaped not only by the contingencies of its history, but also by culturally induced changes in human biology.

  • Blomert, L., & Hagoort, P. (1987). Neurobiologische en neuropsychologische aspecten van dyslexie. In J. Hamers, & A. Van der Leij (Eds.), Dyslexie 87 (pp. 35-44). Lisse: Swets and Zeitlinger.
  • Blythe, J. (2018). Genesis of the trinity: The convergent evolution of trirelational kinterms. In P. McConvell, & P. Kelly (Eds.), Skin, kin and clan: The dynamics of social categories in Indigenous Australia (pp. 431-471). Canberra: ANU EPress.
  • Bobadilla-Suarez, S., Guest, O., & Love, B. C. (2020). Subjective value and decision entropy are jointly encoded by aligned gradients across the human brain. Communications Biology, 3: 597. doi:10.1038/s42003-020-01315-3.

    Abstract

    Recent work has considered the relationship between value and confidence in both behavioural and neural representation. Here we evaluated whether the brain organises value and confidence signals in a systematic fashion that reflects the overall desirability of options. If so, regions that respond to either increases or decreases in both value and confidence should be widespread. We strongly confirmed these predictions through a model-based fMRI analysis of a mixed gambles task that assessed subjective value (SV) and inverse decision entropy (iDE), which is related to confidence. Purported value areas more strongly signalled iDE than SV, underscoring how intertwined value and confidence are. A gradient tied to the desirability of actions transitioned from positive SV and iDE in ventromedial prefrontal cortex to negative SV and iDE in dorsal medial prefrontal cortex. This alignment of SV and iDE signals could support retrospective evaluation to guide learning and subsequent decisions.

    Additional information

    supplemental information
  • Bocanegra, B. R., Poletiek, F. H., Ftitache, B., & Clark, A. (2019). Intelligent problem-solvers externalize cognitive operations. Nature Human Behaviour, 3, 136-142. doi:10.1038/s41562-018-0509-y.

    Abstract

    Humans are nature’s most intelligent and prolific users of external props and aids (such as written texts, slide-rules and software packages). Here we introduce a method for investigating how people make active use of their task environment during problem-solving and apply this approach to the non-verbal Raven Advanced Progressive Matrices test for fluid intelligence. We designed a click-and-drag version of the Raven test in which participants could create different external spatial configurations while solving the puzzles. In our first study, we observed that the click-and-drag test was better than the conventional static test at predicting academic achievement of university students. This pattern of results was partially replicated in a novel sample. Importantly, environment-altering actions were clustered in between periods of apparent inactivity, suggesting that problem-solvers were delicately balancing the execution of internal and external cognitive operations. We observed a systematic relationship between this critical phasic temporal signature and improved test performance. Our approach is widely applicable and offers an opportunity to quantitatively assess a powerful, although understudied, feature of human intelligence: our ability to use external objects, props and aids to solve complex problems.
  • Bode, S., Feuerriegel, D., Bennett, D., & Alday, P. M. (2019). The Decision Decoding ToolBOX (DDTBOX) -- A Multivariate Pattern Analysis Toolbox for Event-Related Potentials. Neuroinformatics, 17(1), 27-42. doi:10.1007/s12021-018-9375-z.

    Abstract

    In recent years, neuroimaging research in cognitive neuroscience has increasingly used multivariate pattern analysis (MVPA) to investigate higher cognitive functions. Here we present DDTBOX, an open-source MVPA toolbox for electroencephalography (EEG) data. DDTBOX runs under MATLAB and is well integrated with the EEGLAB/ERPLAB and Fieldtrip toolboxes (Delorme and Makeig 2004; Lopez-Calderon and Luck 2014; Oostenveld et al. 2011). It trains support vector machines (SVMs) on patterns of event-related potential (ERP) amplitude data, following or preceding an event of interest, for classification or regression of experimental variables. These amplitude patterns can be extracted across space/electrodes (spatial decoding), time (temporal decoding), or both (spatiotemporal decoding). DDTBOX can also extract SVM feature weights, generate empirical chance distributions based on shuffled-labels decoding for group-level statistical testing, provide estimates of the prevalence of decodable information in the population, and perform a variety of corrections for multiple comparisons. It also includes plotting functions for single subject and group results. DDTBOX complements conventional analyses of ERP components, as subtle multivariate patterns can be detected that would be overlooked in standard analyses. It further allows for a more explorative search for information when no ERP component is known to be specifically linked to a cognitive process of interest. In summary, DDTBOX is an easy-to-use and open-source toolbox that allows for characterising the time-course of information related to various perceptual and cognitive processes. It can be applied to data from a large number of experimental paradigms and could therefore be a valuable tool for the neuroimaging community.
  • De Boer, B., & Thompson, B. (2018). Biology-culture co-evolution in finite populations. Scientific Reports, 8: 1209. doi:10.1038/s41598-017-18928-0.

    Abstract

    Language is the result of two concurrent evolutionary processes: Biological and cultural inheritance. An influential evolutionary hypothesis known as the moving target problem implies inherent limitations on the interactions between our two inheritance streams that result from a difference in pace: The speed of cultural evolution is thought to rule out cognitive adaptation to culturally evolving aspects of language. We examine this hypothesis formally by casting it as a problem of adaptation in time-varying environments. We present a mathematical model of biology-culture co-evolution in finite populations: A generalisation of the Moran process, treating co-evolution as coupled non-independent Markov processes, providing a general formulation of the moving target hypothesis in precise probabilistic terms. Rapidly varying culture decreases the probability of biological adaptation. However, we show that this effect declines with population size and with stronger links between biology and culture: In realistically sized finite populations, stochastic effects can carry cognitive specialisations to fixation in the face of variable culture, especially if the effects of those specialisations are amplified through cultural evolution. These results support the view that language arises from interactions between our two major inheritance streams, rather than from one primary evolutionary process that dominates another.

    Additional information

    41598_2017_18928_MOESM1_ESM.pdf
  • De Boer, B., Thompson, B., Ravignani, A., & Boeckx, C. (2020). Analysis of mutation and fixation for language. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 56-58). Nijmegen: The Evolution of Language Conferences.
  • De Boer, B., Thompson, B., Ravignani, A., & Boeckx, C. (2020). Evolutionary dynamics do not motivate a single-mutant theory of human language. Scientific Reports, 10: 451. doi:10.1038/s41598-019-57235-8.

    Abstract

    One of the most controversial hypotheses in cognitive science is the Chomskyan evolutionary conjecture that language arose instantaneously in humans through a single mutation. Here we analyze the evolutionary dynamics implied by this hypothesis, which has never been formalized before. The hypothesis supposes the emergence and fixation of a single mutant (capable of the syntactic operation Merge) during a narrow historical window as a result of frequency-independent selection under a huge fitness advantage in a population of an effective size no larger than ~15 000 individuals. We examine this proposal by combining diffusion analysis and extreme value theory to derive a probabilistic formulation of its dynamics. We find that although a macro-mutation is much more likely to go to fixation if it occurs, it is much more unlikely a priori than multiple mutations with smaller fitness effects. The most likely scenario is therefore one where a medium number of mutations with medium fitness effects accumulate. This precise analysis of the probability of mutations occurring and going to fixation has not been done previously in the context of the evolution of language. Our results cast doubt on any suggestion that evolutionary reasoning provides an independent rationale for a single-mutant theory of language.
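    For intuition about the fixation side of this analysis, the probability that a single mutant fixes in a Moran process can be sketched with the standard population-genetics formula below. This is a textbook result, not the paper's full diffusion and extreme-value analysis; the parameter values are illustrative.

```python
def fixation_probability(r, N):
    """Probability that a single mutant with relative fitness r >= 1 reaches
    fixation in a Moran process of constant population size N.
    The neutral case (r == 1) reduces to 1/N. Written with (1/r)**N so the
    advantageous case underflows harmlessly instead of overflowing."""
    if r == 1.0:
        return 1.0 / N
    return (1.0 - 1.0 / r) / (1.0 - (1.0 / r) ** N)

# Illustrative values: even a huge fitness advantage (r = 2) in a population
# of ~15,000 leaves fixation at about 1 - 1/r, i.e. roughly one half,
# whereas a neutral mutant fixes with probability only 1/N.
p_macro = fixation_probability(2.0, 15000)
p_neutral = fixation_probability(1.0, 15000)
```

This captures the asymmetry the authors quantify: conditional on occurring, a strongly advantageous macro-mutation fixes readily, but its a priori occurrence is far less likely than that of many smaller-effect mutations.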

    Additional information

    Supplementary material
  • Bögels, S., Kendrick, K. H., & Levinson, S. C. (2020). Conversational expectations get revised as response latencies unfold. Language, Cognition and Neuroscience, 35(6), 766-779. doi:10.1080/23273798.2019.1590609.

    Abstract

    The present study extends neuro-imaging into conversation through studying dialogue comprehension. Conversation entails rapid responses, with negative semiotics for delay. We explored how expectations about the valence of the forthcoming response develop during the silence before the response and whether negative responses have mainly cognitive or social-emotional consequences. EEG-participants listened to questions from a spontaneous spoken corpus, cross-spliced with short/long gaps and “yes”/“no” responses. Preceding contexts biased listeners to expect the eventual response, which was hypothesised to translate to expectations for a shorter or longer gap. “No” responses showed a trend towards an early positivity, suggesting socio-emotional consequences. Within the long gap, expecting a “yes” response led to an earlier negativity, as well as a trend towards stronger theta-oscillations, after 300 milliseconds. This suggests that listeners anticipate/predict “yes” responses to come earlier than “no” responses, showing strong sensitivities to timing, which presumably promote hastening the pace of verbal interaction.

    Additional information

    plcp_a_1590609_sm4630.docx
  • Bögels, S. (2020). Neural correlates of turn-taking in the wild: Response planning starts early in free interviews. Cognition, 203: 104347. doi:10.1016/j.cognition.2020.104347.

    Abstract

    Conversation is generally characterized by smooth transitions between turns, with only very short gaps. This entails that responders often begin planning their response before the ongoing turn is finished. However, controversy exists about whether they start planning as early as they can, to make sure they respond on time, or as late as possible, to minimize the overlap between comprehension and production planning. Two earlier EEG studies have found neural correlates of response planning (positive ERP and alpha decrease) as soon as listeners could start planning their response, already midway through the current turn. However, in these studies, the questions asked were highly controlled with respect to the position where planning could start (e.g., very early) and required short and easy responses. The present study measured participants' EEG while an experimenter interviewed them in a spontaneous interaction. Coding the questions in the interviews showed that, under these natural circumstances, listeners can, in principle, start planning a response relatively early, on average after only about one third of the question has passed. Furthermore, ERP results showed a large positivity, interpreted before as an early neural signature of response planning, starting about half a second after the start of the word that allowed listeners to start planning a response. A second neural signature of response planning, an alpha decrease, was not replicated as reliably. In conclusion, listeners appear to start planning their response early during the ongoing turn, also under natural circumstances, presumably in order to keep the gap between turns short and respond on time. These results have several important implications for turn-taking theories, which need to explain how interlocutors deal with the overlap between comprehension and production, how they manage to come in on time, and the sources that lead to variability between conversationalists in the start of planning.

    Additional information

    supplementary data
  • Bögels, S., Casillas, M., & Levinson, S. C. (2018). Planning versus comprehension in turn-taking: Fast responders show reduced anticipatory processing of the question. Neuropsychologia, 109, 295-310. doi:10.1016/j.neuropsychologia.2017.12.028.

    Abstract

    Rapid response latencies in conversation suggest that responders start planning before the ongoing turn is finished. Indeed, an earlier EEG study suggests that listeners start planning their responses to questions as soon as they can (Bögels, S., Magyari, L., & Levinson, S. C. (2015). Neural signatures of response planning occur midway through an incoming question in conversation. Scientific Reports, 5, 12881). The present study aimed to (1) replicate this early planning effect and (2) investigate whether such early response planning incurs a cost on participants’ concurrent comprehension of the ongoing turn. During the experiment participants answered questions from a confederate partner. To address aim (1), the questions were designed such that response planning could start either early or late in the turn. Our results largely replicate Bögels et al. (2015) showing a large positive ERP effect and an oscillatory alpha/beta reduction right after participants could have first started planning their verbal response, again suggesting an early start of response planning. To address aim (2), the confederate's questions also contained either an expected word or an unexpected one to elicit a differential N400 effect, either before or after the start of response planning. We hypothesized an attenuated N400 effect after response planning had started. In contrast, the N400 effects before and after planning did not differ. There was, however, a positive correlation between participants' response time and their N400 effect size after planning had started; quick responders showed a smaller N400 effect, suggesting reduced attention to comprehension and possibly reduced anticipatory processing. We conclude that early response planning can indeed impact comprehension processing.

    Additional information

    mmc1.pdf
  • Bosker, H. R., Van Os, M., Does, R., & Van Bergen, G. (2019). Counting 'uhm's: how tracking the distribution of native and non-native disfluencies influences online language comprehension. Journal of Memory and Language, 106, 189-202. doi:10.1016/j.jml.2019.02.006.

    Abstract

    Disfluencies, like 'uh', have been shown to help listeners anticipate reference to low-frequency words. The associative account of this 'disfluency bias' proposes that listeners learn to associate disfluency with low-frequency referents based on prior exposure to non-arbitrary disfluency distributions (i.e., greater probability of low-frequency words after disfluencies). However, there is limited evidence for listeners actually tracking disfluency distributions online. The present experiments are the first to show that adult listeners, exposed to a typical or more atypical disfluency distribution (i.e., hearing a talker unexpectedly say uh before high-frequency words), flexibly adjust their predictive strategies to the disfluency distribution at hand (e.g., learn to predict high-frequency referents after disfluency). However, when listeners were presented with the same atypical disfluency distribution but produced by a non-native speaker, no adjustment was observed. This suggests pragmatic inferences can modulate distributional learning, revealing the flexibility of, and constraints on, distributional learning in incremental language comprehension.
  • Bosker, H. R., & Cooke, M. (2020). Enhanced amplitude modulations contribute to the Lombard intelligibility benefit: Evidence from the Nijmegen Corpus of Lombard Speech. The Journal of the Acoustical Society of America, 147: 721. doi:10.1121/10.0000646.

    Abstract

    Speakers adjust their voice when talking in noise, which is known as Lombard speech. These acoustic adjustments facilitate speech comprehension in noise relative to plain speech (i.e., speech produced in quiet). However, exactly which characteristics of Lombard speech drive this intelligibility benefit in noise remains unclear. This study assessed the contribution of enhanced amplitude modulations to the Lombard speech intelligibility benefit by demonstrating that (1) native speakers of Dutch in the Nijmegen Corpus of Lombard Speech (NiCLS) produce more pronounced amplitude modulations in noise vs. in quiet; (2) more enhanced amplitude modulations correlate positively with intelligibility in a speech-in-noise perception experiment; (3) transplanting the amplitude modulations from Lombard speech onto plain speech leads to an intelligibility improvement, suggesting that enhanced amplitude modulations in Lombard speech contribute towards intelligibility in noise. Results are discussed in light of recent neurobiological models of speech perception with reference to neural oscillators phase-locking to the amplitude modulations in speech, guiding the processing of speech.
  • Bosker, H. R., & Ghitza, O. (2018). Entrained theta oscillations guide perception of subsequent speech: Behavioral evidence from rate normalization. Language, Cognition and Neuroscience, 33(8), 955-967. doi:10.1080/23273798.2018.1439179.

    Abstract

    This psychoacoustic study provides behavioral evidence that neural entrainment in the theta range (3-9 Hz) causally shapes speech perception. Adopting the ‘rate normalization’ paradigm (presenting compressed carrier sentences followed by uncompressed target words), we show that uniform compression of a speech carrier to syllable rates inside the theta range influences perception of subsequent uncompressed targets, but compression outside theta range does not. However, the influence of carriers – compressed outside theta range – on target perception is salvaged when carriers are ‘repackaged’ to have a packet rate inside theta. This suggests that the brain can only successfully entrain to syllable/packet rates within theta range, with a causal influence on the perception of subsequent speech, in line with recent neuroimaging data. Thus, this study points to a central role for sustained theta entrainment in rate normalization and contributes to our understanding of the functional role of brain oscillations in speech perception.
  • Bosker, H. R., Peeters, D., & Holler, J. (2020). How visual cues to speech rate influence speech perception. Quarterly Journal of Experimental Psychology, 73(10), 1523-1536. doi:10.1177/1747021820914564.

    Abstract

    Spoken words are highly variable and therefore listeners interpret speech sounds relative to the surrounding acoustic context, such as the speech rate of a preceding sentence. For instance, a vowel midway between short /ɑ/ and long /a:/ in Dutch is perceived as short /ɑ/ in the context of preceding slow speech, but as long /a:/ if preceded by a fast context. Despite the well-established influence of visual articulatory cues on speech comprehension, it remains unclear whether visual cues to speech rate also influence subsequent spoken word recognition. In two ‘Go Fish’-like experiments, participants were presented with audio-only (auditory speech + fixation cross), visual-only (mute videos of talking head), and audiovisual (speech + videos) context sentences, followed by ambiguous target words containing vowels midway between short /ɑ/ and long /a:/. In Experiment 1, target words were always presented auditorily, without visual articulatory cues. Although the audio-only and audiovisual contexts induced a rate effect (i.e., more long /a:/ responses after fast contexts), the visual-only condition did not. When, in Experiment 2, target words were presented audiovisually, rate effects were observed in all three conditions, including visual-only. This suggests that visual cues to speech rate in a context sentence influence the perception of following visual target cues (e.g., duration of lip aperture), which at an audiovisual integration stage bias participants’ target categorization responses. These findings contribute to a better understanding of how what we see influences what we hear.
  • Bosker, H. R. (2018). Putting Laurel and Yanny in context. The Journal of the Acoustical Society of America, 144(6), EL503-EL508. doi:10.1121/1.5070144.

    Abstract

    Recently, the world’s attention was caught by an audio clip that was perceived as “Laurel” or “Yanny”. Opinions were sharply split: many could not believe others heard something different from their perception. However, a crowd-source experiment with >500 participants shows that it is possible to make people hear Laurel, where they previously heard Yanny, by manipulating preceding acoustic context. This study is not only the first to reveal within-listener variation in Laurel/Yanny percepts, but also to demonstrate contrast effects for global spectral information in larger frequency regions. Thus, it highlights the intricacies of human perception underlying these social media phenomena.
  • Bosker, H. R., & Cooke, M. (2018). Talkers produce more pronounced amplitude modulations when speaking in noise. The Journal of the Acoustical Society of America, 143(2), EL121-EL126. doi:10.1121/1.5024404.

    Abstract

    Speakers adjust their voice when talking in noise (known as Lombard speech), facilitating speech comprehension. Recent neurobiological models of speech perception emphasize the role of amplitude modulations in speech-in-noise comprehension, helping neural oscillators to ‘track’ the attended speech. This study tested whether talkers produce more pronounced amplitude modulations in noise. Across four different corpora, modulation spectra showed greater power in amplitude modulations below 4 Hz in Lombard speech compared to matching plain speech. This suggests that noise-induced speech contains more pronounced amplitude modulations, potentially helping the listening brain to entrain to the attended talker, aiding comprehension.
  • Bosker, H. R., Sjerps, M. J., & Reinisch, E. (2020). Temporal contrast effects in human speech perception are immune to selective attention. Scientific Reports, 10: 5607. doi:10.1038/s41598-020-62613-8.

    Abstract

    Two fundamental properties of perception are selective attention and perceptual contrast, but how these two processes interact remains unknown. Does an attended stimulus history exert a larger contrastive influence on the perception of a following target than unattended stimuli? Dutch listeners categorized target sounds with a reduced prefix “ge-” marking tense (e.g., ambiguous between gegaan-gaan “gone-go”). In ‘single talker’ Experiments 1–2, participants perceived the reduced syllable (reporting gegaan) when the target was heard after a fast sentence, but not after a slow sentence (reporting gaan). In ‘selective attention’ Experiments 3–5, participants listened to two simultaneous sentences from two different talkers, followed by the same target sounds, with instructions to attend only one of the two talkers. Critically, the speech rates of attended and unattended talkers were found to equally influence target perception – even when participants could watch the attended talker speak. In fact, participants’ target perception in ‘selective attention’ Experiments 3–5 did not differ from participants who were explicitly instructed to divide their attention equally across the two talkers (Experiment 6). This suggests that contrast effects of speech rate are immune to selective attention, largely operating prior to attentional stream segregation in the auditory processing hierarchy.

    Additional information

    Supplementary information
  • Bosker, H. R., Sjerps, M. J., & Reinisch, E. (2020). Spectral contrast effects are modulated by selective attention in ‘cocktail party’ settings. Attention, Perception & Psychophysics, 82, 1318-1332. doi:10.3758/s13414-019-01824-2.

    Abstract

    Speech sounds are perceived relative to spectral properties of surrounding speech. For instance, target words ambiguous between /bɪt/ (with low F1) and /bɛt/ (with high F1) are more likely to be perceived as “bet” after a ‘low F1’ sentence, but as “bit” after a ‘high F1’ sentence. However, it is unclear how these spectral contrast effects (SCEs) operate in multi-talker listening conditions. Recently, Feng and Oxenham [(2018b). J.Exp.Psychol.-Hum.Percept.Perform. 44(9), 1447–1457] reported that selective attention affected SCEs to a small degree, using two simultaneously presented sentences produced by a single talker. The present study assessed the role of selective attention in more naturalistic ‘cocktail party’ settings, with 200 lexically unique sentences, 20 target words, and different talkers. Results indicate that selective attention to one talker in one ear (while ignoring another talker in the other ear) modulates SCEs in such a way that only the spectral properties of the attended talker influences target perception. However, SCEs were much smaller in multi-talker settings (Experiment 2) than those in single-talker settings (Experiment 1). Therefore, the influence of SCEs on speech comprehension in more naturalistic settings (i.e., with competing talkers) may be smaller than estimated based on studies without competing talkers.

    Additional information

    13414_2019_1824_MOESM1_ESM.docx
  • Bosma, E., & Nota, N. (2020). Cognate facilitation in Frisian-Dutch bilingual children’s sentence reading: An eye-tracking study. Journal of Experimental Child Psychology, 189: 104699. doi:10.1016/j.jecp.2019.104699.
  • Botvinik-Nezer, R., Holzmeister, F., Camerer, C. F., Dreber, A., Huber, J., Johannesson, M., Kirchler, M., Iwanir, R., Mumford, J. A., Adcock, R. A., Avesani, P., Baczkowski, B., Bajracharya, A., Bakst, L., Ball, S., Barilari, M., Bault, N., Beaton, D., Beitner, J., Benoit, R. G., Berkers, R., Bhanji, J. P., Biswal, B. B., Bobadilla-Suarez, S., Bortolini, T., Bottenhorn, K. L., Bowring, A., Braem, S., Brooks, H. R., Brudner, E. G., Calderon, C. B., Camilleri, J. A., Castrellon, J. J., Cecchetti, L., Cieslik, E. C., Cole, Z. J., Collignon, O., Cox, R. W., Cunningham, W. A., Czoschke, S., Dadi, K., Davis, C. P., De Luca, A., Delgado, M. R., Demetriou, L., Dennison, J. B., Di, X., Dickie, E. W., Dobryakova, E., Donnat, C. L., Dukart, J., Duncan, N. W., Durnez, J., Eed, A., Eickhoff, S. B., Erhart, A., Fontanesi, L., Fricke, G. M., Fu, S., Galván, A., Gau, R., Genon, S., Glatard, T., Glerean, E., Goeman, J. J., Golowin, S. A. E., González-García, C., Gorgolewski, K. J., Grady, C. L., Green, M. A., Guassi Moreira, J. F., Guest, O., Hakimi, S., Hamilton, J. P., Hancock, R., Handjaras, G., Harry, B. B., Hawco, C., Herholz, P., Herman, G., Heunis, S., Hoffstaedter, F., Hogeveen, J., Holmes, S., Hu, C.-P., Huettel, S. A., Hughes, M. E., Iacovella, V., Iordan, A. D., Isager, P. M., Isik, A. I., Jahn, A., Johnson, M. R., Johnstone, T., Joseph, M. J. E., Juliano, A. C., Kable, J. W., Kassinopoulos, M., Koba, C., Kong, X., Koscik, T. R., Kucukboyaci, N. E., Kuhl, B. A., Kupek, S., Laird, A. R., Lamm, C., Langner, R., Lauharatanahirun, N., Lee, H., Lee, S., Leemans, A., Leo, A., Lesage, E., Li, F., Li, M. Y. C., Lim, P. C., Lintz, E. N., Liphardt, S. W., Losecaat Vermeer, A. B., Love, B. 
C., Mack, M. L., Malpica, N., Marins, T., Maumet, C., McDonald, K., McGuire, J. T., Melero, H., Méndez Leal, A. S., Meyer, B., Meyer, K. N., Mihai, P. G., Mitsis, G. D., Moll, J., Nielson, D. M., Nilsonne, G., Notter, M. P., Olivetti, E., Onicas, A. I., Papale, P., Patil, K. R., Peelle, J. E., Pérez, A., Pischedda, D., Poline, J.-B., Prystauka, Y., Ray, S., Reuter-Lorenz, P. A., Reynolds, R. C., Ricciardi, E., Rieck, J. R., Rodriguez-Thompson, A. M., Romyn, A., Salo, T., Samanez-Larkin, G. R., Sanz-Morales, E., Schlichting, M. L., Schultz, D. H., Shen, Q., Sheridan, M. A., Silvers, J. A., Skagerlund, K., Smith, A., Smith, D. V., Sokol-Hessner, P., Steinkamp, S. R., Tashjian, S. M., Thirion, B., Thorp, J. N., Tinghög, G., Tisdall, L., Tompson, S. H., Toro-Serey, C., Torre Tresols, J. J., Tozzi, L., Truong, V., Turella, L., van 't Veer, A. E., Verguts, T., Vettel, J. M., Vijayarajah, S., Vo, K., Wall, M. B., Weeda, W. D., Weis, S., White, D. J., Wisniewski, D., Xifra-Porxas, A., Yearling, E. A., Yoon, S., Yuan, R., Yuen, K. S. L., Zhang, L., Zhang, X., Zosky, J. E., Nichols, T. E., Poldrack, R. A., & Schonberg, T. (2020). Variability in the analysis of a single neuroimaging dataset by many teams. Nature, 582, 84-88. doi:10.1038/s41586-020-2314-9.

    Abstract

    Data analysis workflows in many scientific domains have become increasingly complex and flexible. Here we assess the effect of this flexibility on the results of functional magnetic resonance imaging by asking 70 independent teams to analyse the same dataset, testing the same 9 ex-ante hypotheses. The flexibility of analytical approaches is exemplified by the fact that no two teams chose identical workflows to analyse the data. This flexibility resulted in sizeable variation in the results of hypothesis tests, even for teams whose statistical maps were highly correlated at intermediate stages of the analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Notably, a meta-analytical approach that aggregated information across teams yielded a significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. Our findings show that analytical flexibility can have substantial effects on scientific conclusions, and identify factors that may be related to variability in the analysis of functional magnetic resonance imaging. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for performing and reporting multiple analyses of the same data. Potential approaches that could be used to mitigate issues related to analytical variability are discussed.
  • Bouhali, F., Mongelli, V., Thiebaut de Schotten, M., & Cohen, L. (2020). Reading music and words: The anatomical connectivity of musicians’ visual cortex. NeuroImage, 212: 116666. doi:10.1016/j.neuroimage.2020.116666.

    Abstract

    Musical score reading and word reading have much in common, from their historical origins to their cognitive foundations and neural correlates. In the ventral occipitotemporal cortex (VOT), the specialization of the so-called Visual Word Form Area for word reading has been linked to its privileged structural connectivity to distant language regions. Here we investigated how anatomical connectivity relates to the segregation of regions specialized for musical notation or words in the VOT. In a cohort of professional musicians and non-musicians, we used probabilistic tractography combined with task-related functional MRI to identify the connections of individually defined word- and music-selective left VOT regions. Despite their close proximity, these regions differed significantly in their structural connectivity, irrespective of musical expertise. The music-selective region was significantly more connected to posterior lateral temporal regions than the word-selective region, which, conversely, was significantly more connected to anterior ventral temporal cortex. Furthermore, musical expertise had a double impact on the connectivity of the music region. First, music tracts were significantly larger in musicians than in non-musicians, associated with marginally higher connectivity to perisylvian music-related areas. Second, the spatial similarity between music and word tracts was significantly increased in musicians, consistently with the increased overlap of language and music functional activations in musicians, as compared to non-musicians. These results support the view that, for music as for words, very specific anatomical connections influence the specialization of distinct VOT areas, and that reciprocally those connections are selectively enhanced by the expertise for word or music reading.

    Additional information

    Supplementary data
  • Bowerman, M. (1973). Early syntactic development: A cross linguistic study with special reference to Finnish. Cambridge: Cambridge University Press.

    Abstract

    First published in 1973, this important work was the first systematic attempt to apply theoretical and methodological tools developed in America to the acquisition of a language other than English. Dr Bowerman presents and analyses data from a longitudinal investigation of the early syntactic development of two Finnish children, and compares their speech at two stages of development with that of American, Samoan and Luo children. The four language families (Finno-Ugric, Indo-European, Malayo-Polynesian and Nilotic respectively) have very different structures, and this is the first systematic comparison of the acquisition of several types of native language within a common analysis. Similarities in the linguistic behaviour of children learning these four different languages are used to evaluate hypotheses about universals of language, and to generate new proposals.
  • Bowerman, M. (1987). Commentary: Mechanisms of language acquisition. In B. MacWhinney (Ed.), Mechanisms of language acquisition (pp. 443-466). Hillsdale, N.J.: Lawrence Erlbaum.
  • Bowerman, M. (1973). [Review of Lois Bloom, Language development: Form and function in emerging grammars (MIT Press 1970)]. American Scientist, 61(3), 369-370.
  • Bowerman, M. (1982). Evaluating competing linguistic models with language acquisition data: Implications of developmental errors with causative verbs. Quaderni di semantica, 3, 5-66.
  • Bowerman, M. (1982). Reorganizational processes in lexical and syntactic development. In E. Wanner, & L. Gleitman (Eds.), Language acquisition: The state of the art (pp. 319-346). New York: Academic Press.
  • Bowerman, M. (1982). Starting to talk worse: Clues to language acquisition from children's late speech errors. In S. Strauss (Ed.), U shaped behavioral growth (pp. 101-145). New York: Academic Press.
  • Bowerman, M. (1973). Structural relationships in children's utterances: Semantic or syntactic? In T. Moore (Ed.), Cognitive development and the acquisition of language (pp. 197-213). New York: Academic Press.
  • Brand, J., Monaghan, P., & Walker, P. (2018). Changing Signs: Testing How Sound-Symbolism Supports Early Word Learning. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 1398-1403). Austin, TX: Cognitive Science Society.

    Abstract

    Learning a language involves learning how to map specific forms onto their associated meanings. Such mappings can utilise arbitrariness and non-arbitrariness, yet how these two systems operate at different stages of vocabulary development is still not fully understood. The Sound-Symbolism Bootstrapping Hypothesis (SSBH) proposes that sound-symbolism is essential for word learning to commence, but empirical evidence of exactly how sound-symbolism influences language learning is still sparse. It may be the case that sound-symbolism supports acquisition of categories of meaning, or that it enables acquisition of individualized word meanings. In two Experiments where participants learned form-meaning mappings from either sound-symbolic or arbitrary languages, we demonstrate the changing roles of sound-symbolism and arbitrariness for different vocabulary sizes, showing that sound-symbolism provides an advantage for learning of broad categories, which may then transfer to support learning individual words, whereas an arbitrary language impedes acquisition of categories of sound to meaning.
  • Brand, S., & Ernestus, M. (2018). Listeners’ processing of a given reduced word pronunciation variant directly reflects their exposure to this variant: evidence from native listeners and learners of French. Quarterly Journal of Experimental Psychology, 71(5), 1240-1259. doi:10.1080/17470218.2017.1313282.

    Abstract

    In casual conversations, words often lack segments. This study investigates whether listeners rely on their experience with reduced word pronunciation variants during the processing of single segment reduction. We tested three groups of listeners in a lexical decision experiment with French words produced either with or without word-medial schwa (e.g., /ʀəvy/ and /ʀvy/ for revue). Participants also rated the relative frequencies of the two pronunciation variants of the words. If the recognition accuracy and reaction times for a given listener group correlate best with the frequencies of occurrence holding for that given listener group, recognition is influenced by listeners’ exposure to these variants. Native listeners' relative frequency ratings correlated well with their accuracy scores and RTs. Dutch advanced learners' accuracy scores and RTs were best predicted by their own ratings. In contrast, the accuracy and RTs from Dutch beginner learners of French could not be predicted by any relative frequency rating; the rating task was probably too difficult for them. The participant groups showed behaviour reflecting their difference in experience with the pronunciation variants. Our results strongly suggest that listeners store the frequencies of occurrence of pronunciation variants, and consequently the variants themselves.
  • Brand, J., Monaghan, P., & Walker, P. (2018). The changing role of sound‐symbolism for small versus large vocabularies. Cognitive Science, 42(S2), 578-590. doi:10.1111/cogs.12565.

    Abstract

    Natural language contains many examples of sound‐symbolism, where the form of the word carries information about its meaning. Such systematicity is more prevalent in the words children acquire first, but arbitrariness dominates during later vocabulary development. Furthermore, systematicity appears to promote learning category distinctions, which may become more important as the vocabulary grows. In this study, we tested the relative costs and benefits of sound‐symbolism for word learning as vocabulary size varies. Participants learned form‐meaning mappings for words which were either congruent or incongruent with regard to sound‐symbolic relations. For the smaller vocabulary, sound‐symbolism facilitated learning individual words, whereas for larger vocabularies sound‐symbolism supported learning category distinctions. The changing properties of form‐meaning mappings according to vocabulary size may reflect the different ways in which language is learned at different stages of development.

    Additional information

    https://git.io/v5BXJ
  • Brehm, L., & Goldrick, M. (2018). Connectionist principles in theories of speech production. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 372-397). Oxford: Oxford University Press.

    Abstract

    This chapter focuses on connectionist modeling in language production, highlighting how core principles of connectionism provide coverage for empirical observations about representation and selection at the phonological, lexical, and sentence levels. The first section focuses on the connectionist principles of localist representations and spreading activation. It discusses how these two principles have motivated classic models of speech production and shows how they cover results of the picture-word interference paradigm, the mixed error effect, and aphasic naming errors. The second section focuses on how newer connectionist models incorporate the principles of learning and distributed representations through discussion of syntactic priming, cumulative semantic interference, sequencing errors, phonological blends, and code-switching.
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2019). Incremental interpretation in the first and second language. In M. Brown, & B. Dailey (Eds.), BUCLD 43: Proceedings of the 43rd annual Boston University Conference on Language Development (pp. 109-122). Sommerville, MA: Cascadilla Press.
  • Brehm, L., Taschenberger, L., & Meyer, A. S. (2019). Mental representations of partner task cause interference in picture naming. Acta Psychologica, 199: 102888. doi:10.1016/j.actpsy.2019.102888.

    Abstract

    Interference in picture naming occurs from representing a partner's preparations to speak (Gambi, van de Cavey, & Pickering, 2015). We tested the origins of this interference using a simple non-communicative joint naming task based on Gambi et al. (2015), where response latencies indexed interference from partner task and partner speech content, and eye fixations to partner objects indexed overt attention. Experiment 1 contrasted a partner-present condition with a control partner-absent condition to establish the role of the partner in eliciting interference. For latencies, we observed interference from the partner's task and speech content, with interference increasing due to partner task in the partner-present condition. Eye-tracking measures showed that interference in naming was not due to overt attention to partner stimuli but to broad expectations about likely utterances. Experiment 2 examined whether an equivalent non-verbal task also elicited interference, as predicted from a language as joint action framework. We replicated the finding of interference due to partner task and again found no relationship between overt attention and interference. These results support Gambi et al. (2015). Individuals co-represent a partner's task while speaking, and doing so does not require overt attention to partner stimuli.
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2019). Speaker-specific processing of anomalous utterances. Quarterly Journal of Experimental Psychology, 72(4), 764-778. doi:10.1177/1747021818765547.

    Abstract

    Existing work shows that readers often interpret grammatical errors (e.g., The key to the cabinets *were shiny) and sentence-level blends (“without-blend”: Claudia left without her headphones *off) in a non-literal fashion, inferring that a more frequent or more canonical utterance was intended instead. This work examines how interlocutor identity affects the processing and interpretation of anomalous sentences. We presented anomalies in the context of “emails” attributed to various writers in a self-paced reading paradigm and used comprehension questions to probe how sentence interpretation changed based upon properties of the item and properties of the “speaker.” Experiment 1 compared standardised American English speakers to L2 English speakers; Experiment 2 compared the same standardised English speakers to speakers of a non-Standardised American English dialect. Agreement errors and without-blends both led to more non-literal responses than comparable canonical items. For agreement errors, more non-literal interpretations also occurred when sentences were attributed to speakers of Standardised American English than either non-Standardised group. These data suggest that understanding sentences relies on expectations and heuristics about which utterances are likely. These are based upon experience with language, with speaker-specific differences, and upon more general cognitive biases.

    Additional information

    Supplementary material
  • Brehm, L., Hussey, E., & Christianson, K. (2020). The role of word frequency and morpho-orthography in agreement processing. Language, Cognition and Neuroscience, 35(1), 58-77. doi:10.1080/23273798.2019.1631456.

    Abstract

    Agreement attraction in comprehension (when an ungrammatical verb is read quickly if preceded by a feature-matching local noun) is well described by a cue-based retrieval framework. This suggests a role for lexical retrieval in attraction. To examine this, we manipulated two probabilistic factors known to affect lexical retrieval: local noun word frequency and morpho-orthography (agreement morphology realised with or without –s endings) in a self-paced reading study. Noun number and word frequency affected noun and verb region reading times, with higher-frequency words not eliciting attraction. Morpho-orthography impacted verb processing but not attraction: atypical plurals led to slower verb reading times regardless of verb number. Exploratory individual difference analyses further underscore the importance of lexical retrieval dynamics in sentence processing. This provides evidence that agreement operates via a cue-based retrieval mechanism over lexical representations that vary in their strength and association to number features.

    Additional information

    Supplemental material
