Publications

Displaying 1 - 100 of 1343
  • Abbott, M. J., Angele, B., Ahn, D., & Rayner, K. (2015). Skipping syntactically illegal the previews: The role of predictability. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(6), 1703-1714. doi:10.1037/xlm0000142.

    Abstract

    Readers tend to skip words, particularly when they are short, frequent, or predictable. Angele and Rayner (2013) recently reported that readers are often unable to detect syntactic anomalies in parafoveal vision. In the present study, we manipulated target word predictability to assess whether contextual constraint modulates the-skipping behavior. The results provide further evidence that readers frequently skip the article the when infelicitous in context. Readers skipped predictable words more often than unpredictable words, even when the, which was syntactically illegal and unpredictable from the prior context, was presented as a parafoveal preview. The results of the experiment were simulated using E-Z Reader 10 by assuming that cloze probability can be dissociated from parafoveal visual input. It appears that when a short word is predictable in context, a decision to skip it can be made even if the information available parafoveally conflicts both visually and syntactically with those predictions.
  • Abma, R., Breeuwsma, G., & Poletiek, F. H. (2001). Toetsen in het onderwijs [Testing in education]. De Psycholoog, 36, 638-639.
  • Acheson, D. J. (2013). Signatures of response conflict monitoring in language production. Procedia - Social and Behavioral Sciences, 94, 214-215. doi:10.1016/j.sbspro.2013.09.106.
  • Acheson, D. J., & Hagoort, P. (2013). Stimulating the brain's language network: Syntactic ambiguity resolution after TMS to the IFG and MTG. Journal of Cognitive Neuroscience, 25(10), 1664-1677. doi:10.1162/jocn_a_00430.

    Abstract

    The posterior middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) are two critical nodes of the brain's language network. Previous neuroimaging evidence has supported a dissociation in language comprehension in which parts of the MTG are involved in the retrieval of lexical syntactic information and the IFG is involved in unification operations that maintain, select, and integrate multiple sources of information over time. In the present investigation, we tested for causal evidence of this dissociation by modulating activity in IFG and MTG using an offline TMS procedure: continuous theta-burst stimulation. Lexical–syntactic retrieval was manipulated by using sentences with and without a temporary word-class (noun/verb) ambiguity (e.g., run). In one group of participants, TMS was applied to the IFG and MTG, and in a control group, no TMS was applied. Eye movements were recorded and quantified at two critical sentence regions: a temporarily ambiguous region and a disambiguating region. Results show that stimulation of the IFG led to a modulation of the ambiguity effect (ambiguous–unambiguous) at the disambiguating sentence region in three measures: first fixation durations, total reading times, and regressive eye movements into the region. Both IFG and MTG stimulation modulated the ambiguity effect for total reading times in the temporarily ambiguous sentence region relative to a control group. The current results demonstrate that an offline repetitive TMS protocol can influence online processing at a later point in time and provide causal evidence for IFG involvement in unification operations during sentence comprehension.
  • Ahlsson, F., Åkerud, H., Schijven, D., Olivier, J., & Sundström-Poromaa, I. (2015). Gene expression in placentas from nondiabetic women giving birth to large for gestational age infants. Reproductive Sciences, 22(10), 1281-1288. doi:10.1177/1933719115578928.

    Abstract

    Gestational diabetes, obesity, and excessive weight gain are known independent risk factors for the birth of a large for gestational age (LGA) infant. However, only 1 in 10 infants born LGA is born to a mother with diabetes or obesity. Thus, the aim of the present study was to compare placental gene expression between healthy, nondiabetic mothers (n = 22) giving birth to LGA infants and body mass index-matched mothers (n = 24) giving birth to appropriate for gestational age infants. In the whole gene expression analysis, only 29 genes were found to be differently expressed in LGA placentas. Top upregulated genes included insulin-like growth factor binding protein 1, aminolevulinate δ synthase 2, and prolactin, whereas top downregulated genes comprised leptin, gametocyte-specific factor 1, and collagen type XVII α 1. Two enriched gene networks were identified, namely, (1) lipid metabolism, small molecule biochemistry, and organismal development and (2) cellular development, cellular growth, proliferation, and tumor morphology.
  • Ahn, D., Abbott, M. J., Rayner, K., Ferreira, V. S., & Gollan, T. H. (2020). Minimal overlap in language control across production and comprehension: Evidence from read-aloud versus eye-tracking tasks. Journal of Neurolinguistics, 54: 100885. doi:10.1016/j.jneuroling.2019.100885.

    Abstract

    Bilinguals are remarkable at language control, switching between languages only when they want to. However, language control in production can involve switch costs. That is, switching to another language takes longer than staying in the same language. Moreover, bilinguals sometimes produce language intrusion errors, mistakenly producing words in an unintended language (e.g., Spanish–English bilinguals saying “pero” instead of “but”). Switch costs are also found in comprehension. For example, reading times are longer when bilinguals read sentences with language switches compared to sentences with no language switches. Given that both production and comprehension involve switch costs, some language-control mechanisms might be shared across modalities. To test this, we compared language switch costs found in eye-movement measures during silent sentence reading (comprehension) and intrusion errors produced when reading aloud switched words in mixed-language paragraphs (production). Bilinguals who made more intrusion errors during the read-aloud task did not show different switch cost patterns in most measures in the silent-reading task, except on skipping rates. We suggest that language switching is mostly controlled by separate, modality-specific processes in production and comprehension, although some points of overlap might indicate the role of domain-general control and how it can influence individual differences in bilingual language control.
  • Alcock, K., Meints, K., & Rowland, C. F. (2020). The UK communicative development inventories: Words and gestures. Guilford, UK: J&R Press Ltd.
  • Alday, P. M. (2015). Be Careful When Assuming the Obvious: Commentary on “The Placement of the Head that Minimizes Online Memory: A Complex Systems Approach”. Language Dynamics and Change, 5(1), 138-146. doi:10.1163/22105832-00501008.

    Abstract

    Ferrer-i-Cancho (this volume) presents a mathematical model of both the synchronic and diachronic nature of word order based on the assumption that memory costs are a never decreasing function of distance and a few very general linguistic assumptions. However, even these minimal and seemingly obvious assumptions are not as safe as they appear in light of recent typological and psycholinguistic evidence. The interaction of word order and memory has further depths to be explored.
  • Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2015). Discovering prominence and its role in language processing: An individual (differences) approach. Linguistics Vanguard, 1(1), 201-213. doi:10.1515/lingvan-2014-1013.

    Abstract

    It has been suggested that, during real time language comprehension, the human language processing system attempts to identify the argument primarily responsible for the state of affairs (the “actor”) as quickly and unambiguously as possible. However, previous work on a prominence (e.g. animacy, definiteness, case marking) based heuristic for actor identification has suffered from underspecification of the relationship between different cue hierarchies. Qualitative work has yielded a partial ordering of many features (e.g. MacWhinney et al. 1984), but a precise quantification has remained elusive due to difficulties in exploring the full feature space in a particular language. Feature pairs tend to correlate strongly in individual languages for semantic-pragmatic reasons (e.g., animate arguments tend to be actors and actors tend to be morphosyntactically privileged), and it is thus difficult to create acceptable stimuli for a fully factorial design even for binary features. Moreover, the exponential function grows extremely rapidly and a fully crossed factorial design covering the entire feature space would be prohibitively long for a purely within-subjects design. Here, we demonstrate the feasibility of parameter estimation in a short experiment. We are able to estimate parameters at a single subject level for the parameters animacy, case and number. This opens the door for research into individual differences and population variation. Moreover, the framework we introduce here can be used in the field to measure more “exotic” languages and populations, even with small sample sizes. Finally, pooled single-subject results are used to reduce the number of free parameters in previous work based on the extended Argument Dependency Model (Bornkessel-Schlesewsky and Schlesewsky 2006, 2009, 2013, in press; Alday et al. 2014).
  • Alday, P. M. (2015). Quantity and Quality: Not a Zero-Sum Game: A computational and neurocognitive examination of human language processing. PhD Thesis, Philipps-Universität Marburg, Marburg.
  • Alferink, I. (2015). Dimensions of convergence in bilingual speech and gesture. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Alhama, R. G., Rowland, C. F., & Kidd, E. (2020). Evaluating word embeddings for language acquisition. In E. Chersoni, C. Jacobs, Y. Oseki, L. Prévot, & E. Santus (Eds.), Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics (pp. 38-42). Stroudsburg, PA, USA: Association for Computational Linguistics (ACL). doi:10.18653/v1/2020.cmcl-1.4.

    Abstract

    Continuous vector word representations (or word embeddings) have shown success in capturing semantic relations between words, as evidenced by evaluation against behavioral data of adult performance on semantic tasks (Pereira et al., 2016). Adult semantic knowledge is the endpoint of a language acquisition process; thus, a relevant question is whether these models can also capture emerging word representations of young language learners. However, the data for children’s semantic knowledge across development is scarce. In this paper, we propose to bridge this gap by using Age of Acquisition norms to evaluate word embeddings learnt from child-directed input. We present two methods that evaluate word embeddings in terms of (a) the semantic neighbourhood density of learnt words, and (b) convergence to adult word associations. We apply our methods to bag-of-words models, and find that (1) children acquire words with fewer semantic neighbours earlier, and (2) young learners only attend to very local context. These findings provide converging evidence for the validity of our methods in understanding the prerequisite features for a distributional model of word learning.
  • Alhama, R. G., Scha, R., & Zuidema, W. (2015). How should we evaluate models of segmentation in artificial language learning? In N. A. Taatgen, M. K. van Vugt, J. P. Borst, & K. Mehlhorn (Eds.), Proceedings of ICCM 2015 (pp. 172-173). Groningen: University of Groningen.

    Abstract

    One of the challenges that infants have to solve when learning their native language is to identify the words in a continuous speech stream. Some of the experiments in Artificial Grammar Learning (Saffran, Newport, and Aslin (1996); Saffran, Aslin, and Newport (1996); Aslin, Saffran, and Newport (1998) and many more) investigate this ability. In these experiments, subjects are exposed to an artificial speech stream that contains certain regularities. Adult participants are typically tested with 2-alternative Forced Choice Tests (2AFC) in which they have to choose between a word and another sequence (typically a partword, a sequence resulting from misplacing boundaries).
  • Alibali, M. W., Kita, S., Bigelow, L. J., Wolfman, C. M., & Klein, S. M. (2001). Gesture plays a role in thinking for speaking. In C. Cavé, I. Guaïtella, & S. Santi (Eds.), Oralité et gestualité: Interactions et comportements multimodaux dans la communication. Actes du colloque ORAGE 2001 (pp. 407-410). Paris, France: Éditions L'Harmattan.
  • Ambridge, B., Kidd, E., Rowland, C. F., & Theakston, A. L. (2015). Authors' response [The ubiquity of frequency effects in first language acquisition]. Journal of Child Language, 42(2), 316-322. doi:10.1017/S0305000914000841.

    Abstract

    Our target paper argued for the ubiquity of frequency effects in acquisition, and that any comprehensive theory must take into account the multiplicity of ways that frequently occurring and co-occurring linguistic units affect the acquisition process. The commentaries on the paper provide a largely unanimous endorsement of this position, but raise additional issues likely to frame further discussion and theoretical development. Specifically, while most commentators did not deny the importance of frequency effects, all saw this as the tip of the theoretical iceberg. In this short response we discuss common themes raised in the commentaries, focusing on the broader issue of what frequency effects mean for language acquisition.
  • Ambridge, B., Rowland, C. F., Theakston, A. L., & Twomey, K. E. (2020). Introduction. In C. F. Rowland, A. L. Theakston, B. Ambridge, & K. E. Twomey (Eds.), Current Perspectives on Child Language Acquisition: How children use their environment to learn (pp. 1-7). Amsterdam: John Benjamins. doi:10.1075/tilar.27.int.
  • Ambridge, B., & Rowland, C. F. (2013). Experimental methods in studying child language acquisition. Wiley Interdisciplinary Reviews: Cognitive Science, 4(2), 149-168. doi:10.1002/wcs.1215.

    Abstract

    This article reviews some of the most widely used methods for studying children's language acquisition, including (1) spontaneous/naturalistic, diary, and parental report data, (2) production methods (elicited production, repetition/elicited imitation, syntactic priming/weird word order), (3) comprehension methods (act-out, pointing, intermodal preferential looking, looking while listening, conditioned head turn preference procedure, functional neuroimaging), and (4) judgment methods (grammaticality/acceptability judgments, yes-no/truth-value judgments). The review outlines the types of studies and age-groups to which each method is most suited, as well as the advantages and disadvantages of each. We conclude by summarising the particular methodological considerations that apply to each paradigm and to experimental design more generally. These include (1) choosing an age-appropriate task that makes communicative sense, (2) motivating children to co-operate, (3) choosing a between-/within-subjects design, (4) the use of novel items (e.g., novel verbs), (5) fillers, (6) blocked, counterbalanced and random presentation, (7) the appropriate number of trials and participants, (8) drop-out rates, (9) the importance of control conditions, (10) choosing a sensitive dependent measure, (11) classification of responses, and (12) using an appropriate statistical test.
  • Ambridge, B., Rowland, C. F., & Gummery, A. (2020). Teaching the unlearnable: A training study of complex yes/no questions. Language and Cognition, 12(2), 385-410. doi:10.1017/langcog.2020.5.

    Abstract

    A central question in language acquisition is how children master sentence types that they have seldom, if ever, heard. Here we report the findings of a pre-registered, randomised, single-blind intervention study designed to test the prediction that, for one such sentence type, complex questions (e.g., Is the crocodile who’s hot eating?), children could combine schemas learned, on the basis of the input, for complex noun phrases (the [THING] who’s [PROPERTY]) and simple questions (Is [THING] [ACTION]ing?) to yield a complex-question schema (Is [the [THING] who’s [PROPERTY]] ACTIONing?). Children aged 4;2 to 6;8 (M = 5;6, SD = 7.7 months) were trained on simple questions (e.g., Is the bird cleaning?) and either (Experimental group, N = 61) complex noun phrases (e.g., the bird who’s sad) or (Control group, N = 61) matched simple noun phrases (e.g., the sad bird). In general, the two groups did not differ on their ability to produce novel complex questions at test. However, the Experimental group did show (a) some evidence of generalising a particular complex NP schema (the [THING] who’s [PROPERTY] as opposed to the [THING] that’s [PROPERTY]) from training to test, (b) a lower rate of auxiliary-doubling errors (e.g., *Is the crocodile who’s hot is eating?), and (c) a greater ability to produce complex questions on the first test trial. We end by suggesting some different methods – specifically artificial language learning and syntactic priming – that could potentially be used to better test the present account.
  • Ambridge, B., Bidgood, A., Twomey, K. E., Pine, J. M., Rowland, C. F., & Freudenthal, D. (2015). Preemption versus Entrenchment: Towards a Construction-General Solution to the Problem of the Retreat from Verb Argument Structure Overgeneralization. PLoS One, 10(4): e0123723. doi:10.1371/journal.pone.0123723.

    Abstract

    Participants aged 5;2-6;8, 9;2-10;6 and 18;1-22;2 (72 at each age) rated verb argument structure overgeneralization errors (e.g., *Daddy giggled the baby) using a five-point scale. The study was designed to investigate the feasibility of two proposed construction-general solutions to the question of how children retreat from, or avoid, such errors. No support was found for the prediction of the preemption hypothesis that the greater the frequency of the verb in the single most nearly synonymous construction (for this example, the periphrastic causative; e.g., Daddy made the baby giggle), the lower the acceptability of the error. Support was found, however, for the prediction of the entrenchment hypothesis that the greater the overall frequency of the verb, regardless of construction, the lower the acceptability of the error, at least for the two older groups. Thus while entrenchment appears to be a robust solution to the problem of the retreat from error, and one that generalizes across different error types, we did not find evidence that this is the case for preemption. The implication is that the solution to the retreat from error lies not with specialized mechanisms, but rather in a probabilistic process of construction competition.
  • Ambridge, B., Pine, J. M., Rowland, C. F., Chang, F., & Bidgood, A. (2013). The retreat from overgeneralization in child language acquisition: Word learning, morphology, and verb argument structure. Wiley Interdisciplinary Reviews: Cognitive Science, 4(1), 47-62. doi:10.1002/wcs.1207.

    Abstract

    This review investigates empirical evidence for different theoretical proposals regarding the retreat from overgeneralization errors in three domains: word learning (e.g., *doggie to refer to all animals), morphology [e.g., *spyer, *cooker (one who spies/cooks), *unhate, *unsqueeze, *sitted; *drawed], and verb argument structure [e.g., *Don't giggle me (c.f. Don't make me giggle); *Don't say me that (c.f. Don't say that to me)]. The evidence reviewed provides support for three proposals. First, in support of the pre-emption hypothesis, the acquisition of competing forms that express the desired meaning (e.g., spy for *spyer, sat for *sitted, and Don't make me giggle for *Don't giggle me) appears to block errors. Second, in support of the entrenchment hypothesis, repeated occurrence of particular items in particular constructions (e.g., giggle in the intransitive construction) appears to contribute to an ever strengthening probabilistic inference that non-attested uses (e.g., *Don't giggle me) are ungrammatical for adult speakers. That is, both the rated acceptability and production probability of particular errors decline with increasing frequency of pre-empting and entrenching forms in the input. Third, learners appear to acquire semantic and morphophonological constraints on particular constructions, conceptualized as properties of slots in constructions [e.g., the (VERB) slot in the morphological un-(VERB) construction or the transitive-causative (SUBJECT) (VERB) (OBJECT) argument-structure construction]. Errors occur as children acquire the fine-grained semantic and morphophonological properties of particular items and construction slots, and so become increasingly reluctant to use items in slots with which they are incompatible. Findings also suggest some role for adult feedback and conventionality; the principle that, for many given meanings, there is a conventional form that is used by all members of the speech community.
  • Ambridge, B., Kidd, E., Rowland, C. F., & Theakston, A. L. (2015). The ubiquity of frequency effects in first language acquisition. Journal of Child Language, 42(2), 239-273. doi:10.1017/S030500091400049X.

    Abstract

    This review article presents evidence for the claim that frequency effects are pervasive in children's first language acquisition, and hence constitute a phenomenon that any successful account must explain. The article is organized around four key domains of research: children's acquisition of single words, inflectional morphology, simple syntactic constructions, and more advanced constructions. In presenting this evidence, we develop five theses. (i) There exist different types of frequency effect, from effects at the level of concrete lexical strings to effects at the level of abstract cues to thematic-role assignment, as well as effects of both token and type, and absolute and relative, frequency. High-frequency forms are (ii) early acquired and (iii) prevent errors in contexts where they are the target, but also (iv) cause errors in contexts in which a competing lower-frequency form is the target. (v) Frequency effects interact with other factors (e.g. serial position, utterance length), and the patterning of these interactions is generally informative with regard to the nature of the learning mechanism. We conclude by arguing that any successful account of language acquisition, from whatever theoretical standpoint, must be frequency sensitive to the extent that it can explain the effects documented in this review, and outline some types of account that do and do not meet this criterion.
  • Ameka, F. K. (2001). Ideophones and the nature of the adjective word class in Ewe. In F. K. E. Voeltz, & C. Kilian-Hatz (Eds.), Ideophones (pp. 25-48). Amsterdam: Benjamins.
  • Ameka, F. K. (2001). Ewe. In J. Garry, & C. Rubino (Eds.), Facts about the world’s languages: An encyclopedia of the world's major languages past and present (pp. 207-213). New York: H.W. Wilson Press.
  • Ameka, F. K., & Essegbey, J. (2013). Serialising languages: Satellite-framed, verb-framed or neither. Ghana Journal of Linguistics, 2(1), 19-38.

    Abstract

    The diversity in the coding of the core schema of motion, i.e., Path, has led to a traditional typology of languages into verb-framed and satellite-framed languages. In the former, Path is encoded in verbs; in the latter, it is encoded in non-verb elements that function as sisters to co-event expressing verbs such as manner verbs. Verb serializing languages pose a challenge to this typology as they express Path as well as the Co-event of manner in finite verbs that together function as a single predicate in a translational motion clause. We argue that these languages do not fit in the typology and constitute a type of their own. We draw on data from Akan and Frog story narrations in Ewe, a Kwa language, and Sranan, a Caribbean Creole with Gbe substrate, to show that in terms of discourse properties verb serializing languages behave like verb-framed languages with respect to some properties and like satellite-framed languages with respect to others. This study fed into the revision of the typology, and such languages are now said to be equipollently-framed languages.
  • Ameka, F. K. (2013). Possessive constructions in Likpe (Sɛkpɛlé). In A. Aikhenvald, & R. Dixon (Eds.), Possession and ownership: A crosslinguistic typology (pp. 224-242). Oxford: Oxford University Press.
  • Amora, K. K., Garcia, R., & Gagarina, N. (2020). Tagalog adaptation of the Multilingual Assessment Instrument for Narratives: History, process and preliminary results. In N. Gagarina, & J. Lindgren (Eds.), New language versions of MAIN: Multilingual Assessment Instrument for Narratives – Revised (pp. 221-233).

    Abstract

    This paper briefly presents the current situation of bilingualism in the Philippines, specifically that of Tagalog-English bilingualism. More importantly, it describes the process of adapting the Multilingual Assessment Instrument for Narratives (LITMUS-MAIN) to Tagalog, the basis of Filipino, which is the country’s national language. Finally, the results of a pilot study conducted on Tagalog-English bilingual children and adults (N=27) are presented. The results showed that Story Structure is similar across the two languages and that it develops significantly with age.
  • Anderson, P., Harandi, N. M., Moisik, S. R., Stavness, I., & Fels, S. (2015). A comprehensive 3D biomechanically-driven vocal tract model including inverse dynamics for speech research. In Proceedings of Interspeech 2015: The 16th Annual Conference of the International Speech Communication Association (pp. 2395-2399).

    Abstract

    We introduce a biomechanical model of oropharyngeal structures that adds the soft palate, pharynx, and larynx to our previous models of jaw, skull, hyoid, tongue, and face in a unified model. The model includes a comprehensive description of the upper airway musculature, using point-to-point muscles that may either be embedded within the deformable structures or operate externally. The airway is described by an air-tight mesh that fits and deforms with the surrounding articulators, which enables dynamic coupling to our articulatory speech synthesizer. We demonstrate that the biomechanics, in conjunction with the skinning, supports a range from physically realistic to simplified vocal tract geometries to investigate different approaches to aeroacoustic modeling of the vocal tract. Furthermore, our model supports inverse modeling to investigate plausible muscle activation patterns for generating speech.
  • Andics, A. (2013). Who is talking? Behavioural and neural evidence for norm-based coding in voice identity learning. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Andics, A., Gál, V., Vicsi, K., Rudas, G., & Vidnyánszky, Z. (2013). FMRI repetition suppression for voices is modulated by stimulus expectations. NeuroImage, 69, 277-283. doi:10.1016/j.neuroimage.2012.12.033.

    Abstract

    According to predictive coding models of sensory processing, stimulus expectations have a profound effect on sensory cortical responses. This was supported by experimental results, showing that fMRI repetition suppression (fMRI RS) for face stimuli is strongly modulated by the probability of stimulus repetitions throughout the visual cortical processing hierarchy. To test whether processing of voices is also affected by stimulus expectations, here we investigated the effect of repetition probability on fMRI RS in voice-selective cortical areas. Changing (‘alt’) and identical (‘rep’) voice stimulus pairs were presented to the listeners in blocks, with a varying probability of alt and rep trials across blocks. We found auditory fMRI RS in the nonprimary voice-selective cortical regions, including the bilateral posterior STS, the right anterior STG and the right IFC, as well as in the IPL. Importantly, fMRI RS effects in all of these areas were strongly modulated by the probability of stimulus repetition: auditory fMRI RS was reduced or not present in blocks with low repetition probability. Our results revealed that auditory fMRI RS in higher-level voice-selective cortical regions is modulated by repetition probabilities and thus suggest that in audition, similarly to the visual modality, processing of sensory information is shaped by stimulus expectation processes.
  • Andics, A., McQueen, J. M., & Petersson, K. M. (2013). Mean-based neural coding of voices. NeuroImage, 79, 351-360. doi:10.1016/j.neuroimage.2013.05.002.

    Abstract

    The social significance of recognizing the person who talks to us is obvious, but the neural mechanisms that mediate talker identification are unclear. Regions along the bilateral superior temporal sulcus (STS) and the inferior frontal cortex (IFC) of the human brain are selective for voices, and they are sensitive to rapid voice changes. Although it has been proposed that voice recognition is supported by prototype-centered voice representations, the involvement of these category-selective cortical regions in the neural coding of such "mean voices" has not previously been demonstrated. Using fMRI in combination with a voice identity learning paradigm, we show that voice-selective regions are involved in the mean-based coding of voice identities. Voice typicality is encoded on a supra-individual level in the right STS along a stimulus-dependent, identity-independent (i.e., voice-acoustic) dimension, and on an intra-individual level in the right IFC along a stimulus-independent, identity-dependent (i.e., voice identity) dimension. Voice recognition therefore entails at least two anatomically separable stages, each characterized by neural mechanisms that reference the central tendencies of voice categories.
  • Anichini, M., De Heer Kloots, M., & Ravignani, A. (2020). Interactive rhythms in the wild, in the brain, and in silico. Canadian Journal of Experimental Psychology, 74(3), 170-175. doi:10.1037/cep0000224.

    Abstract

    There are some historical divisions in methods, rationales, and purposes between studies on comparative cognition and behavioural ecology. In turn, interaction between these two branches and studies from mathematics, computation, and neuroscience is uncommon. In this short piece, we attempt to build bridges among these disciplines. We present a series of interconnected vignettes meant to illustrate what a more interdisciplinary approach looks like when successful, and its advantages. Concretely, we focus on a recent topic, namely animal rhythms in interaction, studied under different approaches. We showcase 5 research efforts, which we believe successfully link 5 particular scientific areas of rhythm research, conceptualized as: social neuroscience, detailed rhythmic quantification, ontogeny, computational approaches, and spontaneous interactions. Our suggestions will hopefully spur a ‘Comparative rhythms in interaction’ field, which can integrate and capitalize on knowledge from zoology, comparative psychology, neuroscience, and computation.
  • Arana, S., Marquand, A., Hulten, A., Hagoort, P., & Schoffelen, J.-M. (2020). Sensory modality-independent activation of the brain network for language. The Journal of Neuroscience, 40(14), 2914-2924. doi:10.1523/JNEUROSCI.2271-19.2020.

    Abstract

    The meaning of a sentence can be understood whether presented in written or spoken form. Therefore, it is highly probable that brain processes supporting language comprehension are at least partly independent of sensory modality. To identify where and when in the brain language processing is independent of sensory modality, we directly compared neuromagnetic brain signals of 200 human subjects (102 males) either reading or listening to sentences. We used multiset canonical correlation analysis to align individual subject data in a way that boosts those aspects of the signal that are common to all, allowing us to capture word-by-word signal variations, consistent across subjects and at a fine temporal scale. Quantifying this consistency in activation across both reading and listening tasks revealed a mostly left-hemispheric cortical network. Areas showing consistent activity patterns include not only areas previously implicated in higher-level language processing, such as left prefrontal, superior and middle temporal areas and anterior temporal lobe, but also parts of the control network as well as subcentral and more posterior temporal-parietal areas. Activity in this supramodal sentence processing network starts in temporal areas and rapidly spreads to the other regions involved. The findings not only indicate the involvement of a large network of brain areas in supramodal language processing, but also show that the linguistic information contained in the unfolding sentences modulates brain activity in a word-specific manner across subjects.
  • Araújo, S., Faísca, L., Bramão, I., Reis, A., & Petersson, K. M. (2015). Lexical and sublexical orthographic processing: An ERP study with skilled and dyslexic adult readers. Brain and Language, 141, 16-27. doi:10.1016/j.bandl.2014.11.007.

    Abstract

    This ERP study investigated the cognitive nature of the P1–N1 components during orthographic processing. We used an implicit reading task with various types of stimuli involving different amounts of sublexical or lexical orthographic processing (words, pseudohomophones, pseudowords, nonwords, and symbols), and tested average and dyslexic readers. An orthographic regularity effect (pseudowords–nonwords contrast) was observed in the average but not in the dyslexic group. This suggests an early sensitivity to the dependencies among letters in word forms that reflect orthographic structure, while the dyslexic brain apparently fails to be appropriately sensitive to these complex features. Moreover, in the adults the N1 response may already reflect lexical access: (i) the N1 was sensitive to the familiar vs. less familiar orthographic sequence contrast; and (ii) early effects of the phonological form (words–pseudohomophones contrast) were also found. Finally, the later N320 component was attenuated in the dyslexics, suggesting suboptimal processing in later stages of phonological analysis.
  • Araújo, S., Reis, A., Petersson, K. M., & Faísca, L. (2015). Rapid automatized naming and reading performance: A meta-analysis. Journal of Educational Psychology, 107(3), 868-883. doi:10.1037/edu0000006.

    Abstract

    Evidence that rapid naming skill is associated with reading ability has become increasingly prevalent in recent years. However, there is considerable variation in the literature concerning the magnitude of this relationship. The objective of the present study was to provide a comprehensive analysis of the evidence on the relationship between rapid automatized naming (RAN) and reading performance. To this end, we conducted a meta-analysis of the correlational relationship between these 2 constructs to (a) determine the overall strength of the RAN–reading association and (b) identify variables that systematically moderate this relationship. A random-effects model analysis of data from 137 studies (857 effect sizes; 28,826 participants) indicated a moderate-to-strong relationship between RAN and reading performance (r = .43, I2 = 68.40). Further analyses revealed that RAN contributes to the 4 measures of reading (word reading, text reading, non-word reading, and reading comprehension), but higher coefficients emerged in favor of real word reading and text reading. RAN stimulus type and type of reading score were the factors with the greatest moderator effect on the magnitude of the RAN–reading relationship. The consistency of orthography and the subjects’ grade level were also found to impact this relationship, although the effect was contingent on reading outcome. It was less evident whether the subjects’ reading proficiency played a role in the relationship. Implications for future studies are discussed.
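    The abstract above reports a random-effects pooling of correlations (r = .43) with an I² heterogeneity estimate. As a hypothetical illustration only, the sketch below pools correlations via the standard Fisher z transform and DerSimonian–Laird estimator; the specific estimator choice and all effect sizes and sample sizes are assumptions for the example, not the study's data.

```python
import math

def random_effects_pool(rs, ns):
    """Pool correlations (rs) from studies with sample sizes (ns)."""
    zs = [math.atanh(r) for r in rs]            # Fisher z transform
    vs = [1.0 / (n - 3) for n in ns]            # sampling variance of z
    ws = [1.0 / v for v in vs]                  # fixed-effect weights
    z_fixed = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_fixed) ** 2 for w, z in zip(ws, zs))  # Cochran's Q
    df = len(rs) - 1
    c = sum(ws) - sum(w ** 2 for w in ws) / sum(ws)
    tau2 = max(0.0, (q - df) / c)               # between-study variance (DL)
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    w_star = [1.0 / (v + tau2) for v in vs]     # random-effects weights
    z_re = sum(w * z for w, z in zip(w_star, zs)) / sum(w_star)
    return math.tanh(z_re), i2                  # back-transform pooled z to r

# Invented example: three small studies correlating RAN with reading scores.
pooled_r, i2 = random_effects_pool([0.30, 0.45, 0.50], [50, 100, 80])
```

The pooled estimate falls between the individual correlations, weighted toward the larger studies; with hundreds of effect sizes, as in the meta-analysis above, the same machinery yields the reported overall r and I².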
  • Arnhold, A., Porretta, V., Chen, A., Verstegen, S. A., Mok, I., & Järvikivi, J. (2020). (Mis)understanding your native language: Regional accent impedes processing of information status. Psychonomic Bulletin & Review, 27, 801-808. doi:10.3758/s13423-020-01731-w.

    Abstract

    Native-speaker listeners constantly predict upcoming units of speech as part of language processing, using various cues. However, this process is impeded in second-language listeners, as well as when the speaker has an unfamiliar accent. Whereas previous research has largely concentrated on the pronunciation of individual segments in foreign-accented speech, we show that regional accent impedes higher levels of language processing, making native listeners’ processing resemble that of second-language listeners. In Experiment 1, 42 native speakers of Canadian English followed instructions spoken in British English to move objects on a screen while their eye movements were tracked. Native listeners use prosodic cues to information status to disambiguate between two possible referents, a new and a previously mentioned one, before they have heard the complete word. By contrast, the Canadian participants, similarly to second-language speakers, were not able to make full use of prosodic cues in the way native British listeners do. In Experiment 2, 19 native speakers of Canadian English rated the British English instructions used in Experiment 1, as well as the same instructions spoken by a Canadian imitating the British English prosody. While information status had no effect for the Canadian imitations, the original stimuli received higher ratings when prosodic realization and information status of the referent matched than for mismatches, suggesting native-like competence in these offline ratings. These findings underline the importance of expanding psycholinguistic models of second-language/dialect processing and representation to include both prosody and regional variation.
  • Arshamian, A., Manko, P., & Majid, A. (2020). Limitations in odour simulation may originate from differential sensory embodiment. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20190273. doi:10.1098/rstb.2019.0273.

    Abstract

    Across diverse lineages, animals communicate using chemosignals, but only humans communicate about chemical signals. Many studies have observed that compared with other sensory modalities, communication about smells is relatively rare and not always reliable. Recent cross-cultural studies, on the other hand, suggest some communities are more olfactorily oriented than previously supposed. Nevertheless, across the globe a general trend emerges where olfactory communication is relatively hard. We suggest here that this is in part because olfactory representations are different in kind: they have a low degree of embodiment, and are not easily expressed as primitives, thereby limiting the mental manipulations that can be performed with them. New exploratory data from Dutch children (9–12 year-olds) and adults support that mental imagery from olfaction is weak in comparison with vision and audition, and critically this is not affected by language development. Specifically, while visual and auditory imagery becomes more vivid with age, olfactory imagery shows no such development. This is consistent with the idea that olfactory representations are different in kind from representations from the other senses.

  • Asano, Y., Yuan, C., Grohe, A.-K., Weber, A., Antoniou, M., & Cutler, A. (2020). Uptalk interpretation as a function of listening experience. In N. Minematsu, M. Kondo, T. Arai, & R. Hayashi (Eds.), Proceedings of Speech Prosody 2020 (pp. 735-739). Tokyo: ISCA. doi:10.21437/SpeechProsody.2020-150.

    Abstract

    The term “uptalk” describes utterance-final pitch rises that carry no sentence-structural information. Uptalk is usually dialectal or sociolectal, and Australian English (AusEng) is particularly known for this attribute. We ask here whether experience with an uptalk variety affects listeners’ ability to categorise rising pitch contours on the basis of the timing and height of their onset and offset. Listeners were two groups of English-speakers (AusEng, and American English), and three groups of listeners with L2 English: one group with Mandarin as L1 and experience of listening to AusEng, one with German as L1 and experience of listening to AusEng, and one with German as L1 but no AusEng experience. They heard nouns (e.g. flower, piano) in the framework “Got a NOUN”, each ending with a pitch rise artificially manipulated on three contrasts: low vs. high rise onset, low vs. high rise offset and early vs. late rise onset. Their task was to categorise the tokens as “question” or “statement”, and we analysed the effect of the pitch contrasts on their judgements. Only the native AusEng listeners were able to use the pitch contrasts systematically in making these categorisations.
  • Asaridou, S. S., Hagoort, P., & McQueen, J. M. (2015). Effects of early bilingual experience with a tone and a non-tone language on speech-music. PLoS One, 10(12): e0144225. doi:10.1371/journal.pone.0144225.

    Abstract

    We investigated music and language processing in a group of early bilinguals who spoke a tone language and a non-tone language (Cantonese and Dutch). We assessed online speech-music processing interactions, that is, interactions that occur when speech and music are processed simultaneously in songs, with a speeded classification task. In this task, participants judged sung pseudowords either musically (based on the direction of the musical interval) or phonologically (based on the identity of the sung vowel). We also assessed longer-term effects of linguistic experience on musical ability, that is, the influence of extensive prior experience with language when processing music. These effects were assessed with a task in which participants had to learn to identify musical intervals and with four pitch-perception tasks. Our hypothesis was that due to their experience in two different languages using lexical versus intonational tone, the early Cantonese-Dutch bilinguals would outperform the Dutch control participants. In online processing, the Cantonese-Dutch bilinguals processed speech and music more holistically than controls. This effect seems to be driven by experience with a tone language, in which integration of segmental and pitch information is fundamental. Regarding longer-term effects of linguistic experience, we found no evidence for a bilingual advantage in either the music-interval learning task or the pitch-perception tasks. Together, these results suggest that being a Cantonese-Dutch bilingual does not have any measurable longer-term effects on pitch and music processing, but does have consequences for how speech and music are processed jointly.

  • Asaridou, S. S. (2015). An ear for pitch: On the effects of experience and aptitude in processing pitch in language and music. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Asaridou, S. S., & McQueen, J. M. (2013). Speech and music shape the listening brain: Evidence for shared domain-general mechanisms. Frontiers in Psychology, 4: 321. doi:10.3389/fpsyg.2013.00321.

    Abstract

    Are there bi-directional influences between speech perception and music perception? An answer to this question is essential for understanding the extent to which the speech and music that we hear are processed by domain-general auditory processes and/or by distinct neural auditory mechanisms. This review summarizes a large body of behavioral and neuroscientific findings which suggest that the musical experience of trained musicians does modulate speech processing, and a sparser set of data, largely on pitch processing, which suggest in addition that linguistic experience, in particular learning a tone language, modulates music processing. Although research has focused mostly on effects of music on speech, we argue that both directions of influence need to be studied, and conclude that the picture which thus emerges is one of mutual interaction across domains. In particular, it is not simply that experience with spoken language has some effects on music perception, and vice versa, but that because of shared domain-general subcortical and cortical networks, experiences in both domains influence behavior in both domains.
  • Athanasopoulos, P., Bylund, E., Montero-Melis, G., Damjanovic, L., Schartner, A., Kibbe, A., Riches, N., & Thierry, G. (2015). Two languages, two minds: Flexible cognitive processing driven by language of operation. Psychological Science, 26(4), 518-526. doi:10.1177/0956797614567509.

    Abstract

    People make sense of objects and events around them by classifying them into identifiable categories. The extent to which language affects this process has been the focus of a long-standing debate: Do different languages cause their speakers to behave differently? Here, we show that fluent German-English bilinguals categorize motion events according to the grammatical constraints of the language in which they operate. First, as predicted from cross-linguistic differences in motion encoding, bilingual participants functioning in a German testing context prefer to match events on the basis of motion completion to a greater extent than do bilingual participants in an English context. Second, when bilingual participants experience verbal interference in English, their categorization behavior is congruent with that predicted for German; when bilingual participants experience verbal interference in German, their categorization becomes congruent with that predicted for English. These findings show that language effects on cognition are context-bound and transient, revealing unprecedented levels of malleability in human cognition.

  • Ayub, Q., Yngvadottir, B., Chen, Y., Xue, Y., Hu, M., Vernes, S. C., Fisher, S. E., & Tyler-Smith, C. (2013). FOXP2 targets show evidence of positive selection in European populations. American Journal of Human Genetics, 92, 696-706. doi:10.1016/j.ajhg.2013.03.019.

    Abstract

    Forkhead box P2 (FOXP2) is a highly conserved transcription factor that has been implicated in human speech and language disorders and plays important roles in the plasticity of the developing brain. The pattern of nucleotide polymorphisms in FOXP2 in modern populations suggests that it has been the target of positive (Darwinian) selection during recent human evolution. In our study, we searched for evidence of selection that might have followed FOXP2 adaptations in modern humans. We examined whether or not putative FOXP2 targets identified by chromatin-immunoprecipitation genomic screening show evidence of positive selection. We developed an algorithm that, for any given gene list, systematically generates matched lists of control genes from the Ensembl database, collates summary statistics for three frequency-spectrum-based neutrality tests from the low-coverage resequencing data of the 1000 Genomes Project, and determines whether these statistics are significantly different between the given gene targets and the set of controls. Overall, there was strong evidence of selection of FOXP2 targets in Europeans, but not in the Han Chinese, Japanese, or Yoruba populations. Significant outliers included several genes linked to cellular movement, reproduction, development, and immune cell trafficking, and 13 of these constituted a significant network associated with cardiac arteriopathy. Strong signals of selection were observed for CNTNAP2 and RBFOX1, key neurally expressed genes that have been consistently identified as direct FOXP2 targets in multiple studies and that have themselves been associated with neurodevelopmental disorders involving language dysfunction.
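    The core of the algorithm described above is a comparison of summary statistics between a target gene set and matched control gene sets. As a minimal sketch of that idea only (not the authors' implementation, which draws matched controls from Ensembl and uses frequency-spectrum neutrality tests), the example below compares a target set's mean score against resampled control sets and reports an empirical p-value; all scores are invented.

```python
import random
import statistics

def resampling_pvalue(target_scores, control_pool, n_draws=10_000, seed=1):
    """Empirical p-value: how often does a random control set of the same
    size score at least as high on average as the target set?"""
    rng = random.Random(seed)
    observed = statistics.mean(target_scores)
    k = len(target_scores)
    hits = sum(
        statistics.mean(rng.sample(control_pool, k)) >= observed
        for _ in range(n_draws)
    )
    return (hits + 1) / (n_draws + 1)           # add-one correction

# Hypothetical selection-test statistics for target and control genes.
targets = [2.1, 2.3, 2.0, 2.4]
controls = [1.0, 1.1, 0.9, 1.2, 1.0, 1.05, 1.3, 0.8, 1.15, 0.95]
p = resampling_pvalue(targets, controls)
```

A small p-value indicates the target set scores higher than expected from matched controls, the same logic the study applies per population to conclude that FOXP2 targets show selection signals in Europeans but not elsewhere.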
  • Azar, Z. (2020). Effect of language contact on speech and gesture: The case of Turkish-Dutch bilinguals in the Netherlands. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Azar, Z., & Ozyurek, A. (2015). Discourse Management: Reference tracking in speech and gesture in Turkish narratives. Dutch Journal of Applied Linguistics, 4(2), 222-240. doi:10.1075/dujal.4.2.06aza.

    Abstract

    Speakers achieve coherence in discourse by alternating between differential lexical forms (e.g. noun phrase, pronoun, and null form) in accordance with the accessibility of the entities they refer to, i.e. whether they introduce an entity into discourse for the first time or continue referring to an entity they already mentioned before. Moreover, tracking of entities in discourse is a multimodal phenomenon. Studies show that speakers are sensitive to the informational structure of discourse and use fuller forms (e.g. full noun phrases) in speech and gesture more when re-introducing an entity, while they use attenuated forms (e.g. pronouns) in speech and gesture less when maintaining a referent. However, those studies focus mainly on non-pro-drop languages (e.g. English, German and French). The present study investigates whether the same pattern holds for pro-drop languages. It draws data from adult native speakers of Turkish using elicited narratives. We find that Turkish speakers mostly use fuller forms to code subject referents in re-introduction contexts and the null form in maintenance contexts, and they point to gesture space for referents more in re-introduction contexts compared to maintenance contexts. Hence, we provide supporting evidence for the inverse relation between the accessibility of a discourse referent and its coding in speech and gesture. As a novel contribution, we also find that the third person pronoun is used in re-introduction contexts only when the referent was previously mentioned as the object argument of the immediately preceding clause.
  • Azar, Z., Backus, A., & Ozyurek, A. (2020). Language contact does not drive gesture transfer: Heritage speakers maintain language specific gesture patterns in each language. Bilingualism: Language and Cognition, 23(2), 414-428. doi:10.1017/S136672891900018X.

    Abstract

    This paper investigates whether there are changes in gesture rate when speakers of two languages with different gesture rates (Turkish-high gesture; Dutch-low gesture) come into daily contact. We analyzed gestures produced by second-generation heritage speakers of Turkish in the Netherlands in each language, comparing them to monolingual baselines. We did not find differences between bilingual and monolingual speakers, possibly because bilinguals were proficient in both languages and used them frequently – in line with a usage-based approach to language. However, bilinguals produced more deictic gestures than monolinguals in both Turkish and Dutch, which we interpret as a bilingual strategy. Deictic gestures may help organize discourse by placing entities in gesture space and help reduce the cognitive load associated with being bilingual, e.g., inhibition cost. Therefore, gesture rate does not necessarily change in contact situations but might be modulated by frequency of language use, proficiency, and cognitive factors related to being bilingual.
  • Azar, Z., Ozyurek, A., & Backus, A. (2020). Turkish-Dutch bilinguals maintain language-specific reference tracking strategies in elicited narratives. International Journal of Bilingualism, 24(2), 376-409. doi:10.1177/1367006919838375.

    Abstract

    Aim:

    This paper examines whether second-generation Turkish heritage speakers in the Netherlands follow language-specific patterns of reference tracking in Turkish and Dutch, focusing on discourse status and pragmatic contexts as factors that may modulate the choice of referring expressions (REs), that is, the noun phrase (NP), overt pronoun and null pronoun.
    Methodology:

    Two short silent videos were used to elicit narratives from 20 heritage speakers of Turkish, both in Turkish and in Dutch. Monolingual baseline data were collected from 20 monolingually raised speakers of Turkish in Turkey and 20 monolingually raised speakers of Dutch in the Netherlands. We also collected language background data from bilinguals with an extensive survey.
    Data and analysis:

    Using generalised logistic mixed-effect regression, we analysed the influence of discourse status and pragmatic context on the choice of subject REs in Turkish and Dutch, comparing bilingual data to the monolingual baseline in each language.
    Findings:

    Heritage speakers used overt versus null pronouns in Turkish and stressed versus reduced pronouns in Dutch in pragmatically appropriate contexts. There was, however, a slight increase in the proportions of overt pronouns as opposed to NPs in Turkish and as opposed to null pronouns in Dutch. We suggest an explanation based on the degree of entrenchment of differential RE types in relation to discourse status as the possible source of the increase.
    Originality:

    This paper provides data from an understudied language pair in the domain of reference tracking in language contact situations. Unlike several studies of pronouns in language contact, we do not find differences across monolingual and bilingual speakers with regard to pragmatic constraints on overt pronouns in the minority pro-drop language.
    Significance:

    Our findings highlight the importance of taking language proficiency and use into account while studying bilingualism and combining formal approaches to language use with usage-based approaches for a more complete understanding of bilingual language production.
  • Baggio, G., van Lambalgen, M., & Hagoort, P. (2015). Logic as Marr's computational level: Four case studies. Topics in Cognitive Science, 7, 287-298. doi:10.1111/tops.12125.

    Abstract

    We sketch four applications of Marr's levels-of-analysis methodology to the relations between logic and experimental data in the cognitive neuroscience of language and reasoning. The first part of the paper illustrates the explanatory power of computational level theories based on logic. We show that a Bayesian treatment of the suppression task in reasoning with conditionals is ruled out by EEG data, supporting instead an analysis based on defeasible logic. Further, we describe how results from an EEG study on temporal prepositions can be reanalyzed using formal semantics, addressing a potential confound. The second part of the article demonstrates the predictive power of logical theories drawing on EEG data on processing progressive constructions and on behavioral data on conditional reasoning in people with autism. Logical theories can constrain processing hypotheses all the way down to neurophysiology, and conversely neuroscience data can guide the selection of alternative computational level models of cognition.
  • Bakker, I., Takashima, A., Van Hell, J. G., & McQueen, J. M. (2015). Changes in theta and beta oscillations as signatures of novel word consolidation. Journal of Cognitive Neuroscience, 27(7), 1286-1297. doi:10.1162/jocn_a_00801.

    Abstract

    The complementary learning systems account of word learning states that novel words, like other types of memories, undergo an offline consolidation process during which they are gradually integrated into the neocortical memory network. A fundamental change in the neural representation of a novel word should therefore occur in the hours after learning. The present EEG study tested this hypothesis by investigating whether novel words learned before a 24-hr consolidation period elicited more word-like oscillatory responses than novel words learned immediately before testing. In line with previous studies indicating that theta synchronization reflects lexical access, unfamiliar novel words elicited lower power in the theta band (4–8 Hz) than existing words. Recently learned words still showed a marginally lower theta increase than existing words, but theta responses to novel words that had been acquired 24 hr earlier were indistinguishable from responses to existing words. Consistent with evidence that beta desynchronization (16–21 Hz) is related to lexical-semantic processing, we found that both unfamiliar and recently learned novel words elicited less beta desynchronization than existing words. In contrast, no difference was found between novel words learned 24 hr earlier and existing words. These data therefore suggest that an offline consolidation period enables novel words to acquire lexically integrated, word-like neural representations.
  • Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. (2015). Tracking lexical consolidation with ERPs: Lexical and semantic-priming effects on N400 and LPC responses to newly-learned words. Neuropsychologia, 79, 33-41. doi:10.1016/j.neuropsychologia.2015.10.020.
  • Bank, R., Crasborn, O., & Van Hout, R. (2015). Alignment of two languages: The spreading of mouthings in Sign Language of the Netherlands. International Journal of Bilingualism, 19, 40-55. doi:10.1177/1367006913484991.

    Abstract

    Mouthings and mouth gestures are omnipresent in Sign Language of the Netherlands (NGT). Mouthings in NGT are mouth actions that have their origin in spoken Dutch, and are usually time aligned with the signs they co-occur with. Frequently, however, they spread over one or more adjacent signs, so that one mouthing co-occurs with multiple manual signs. We conducted a corpus study to explore how frequently this occurs in NGT and whether there is any sociolinguistic variation in the use of spreading. Further, we looked at the circumstances under which spreading occurs. Answers to these questions may give us insight into the prosodic structure of sign languages. We investigated a sample of the Corpus NGT containing 5929 mouthings by 46 participants. We found that spreading over an adjacent sign is independent of social factors. Further, mouthings that spread are longer than non-spreading mouthings, whether measured in syllables or in milliseconds. By using a relatively large amount of natural data, we succeeded in gaining more insight into the way mouth actions are utilised in sign languages.
  • Bank, R. (2015). The ubiquity of mouthings in NGT: A corpus study. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Baranova, J. (2015). Other-initiated repair in Russian. Open linguistics, 1(1), 555-577. doi:10.1515/opli-2015-0019.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video-recorded conversations in Russian. In the discussion of various repair cases, special attention is given to the modifications that the trouble-source turn undergoes in response to an open versus a restricted repair initiation. Speakers often modify their problematic turn in multiple ways at once when responding to an open repair initiation. They can alter the word order of the problematic turn, change the prosodic contour of the utterance, omit redundant elements and add more specific ones. By contrast, restricted repair initiations usually receive specific repair solutions that target only one problem at a time.
  • Baranova, J. (2020). Reasons for every-day activities. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Barendse, M. T. (2015). Dimensionality assessment with factor analysis methods. PhD Thesis, University of Groningen, Groningen.
  • Barendse, M. T., Oort, F. J., Jak, S., & Timmerman, M. E. (2013). Multilevel exploratory factor analysis of discrete data. Netherlands Journal of Psychology, 67(4), 114-121.
  • Barendse, M. T., & Rosseel, Y. (2020). Multilevel modeling in the ‘wide format’ approach with discrete data: A solution for small cluster sizes. Structural Equation Modeling: A Multidisciplinary Journal, 27(5), 696-721. doi:10.1080/10705511.2019.1689366.

    Abstract

    In multilevel data, units at level 1 are nested in clusters at level 2, which in turn may be nested in even larger clusters at level 3, and so on. For continuous data, several authors have shown how to model multilevel data in a ‘wide’ or ‘multivariate’ format approach. We provide a general framework to analyze random intercept multilevel SEM in the ‘wide format’ (WF) and extend this approach to discrete data. In a simulation study, we vary response scale (binary, four response options), covariate presence (no, between-level, within-level), design (balanced, unbalanced), model misspecification (present, not present), and the number of clusters (small, large) to determine accuracy and efficiency of the estimated model parameters. With a small number of observations per cluster, results indicate that the WF approach is preferable for estimating multilevel models with discrete response options.
  • Barendse, M. T., Oort, F. J., & Timmerman, M. E. (2015). Using exploratory factor analysis to determine the dimensionality of discrete responses. Structural Equation Modeling: A Multidisciplinary Journal, 22(1), 87-101. doi:10.1080/10705511.2014.934850.

    Abstract

    Exploratory factor analysis (EFA) is commonly used to determine the dimensionality of continuous data. In a simulation study we investigate its usefulness with discrete data. We vary response scales (continuous, dichotomous, polytomous), factor loadings (medium, high), sample size (small, large), and factor structure (simple, complex). For each condition, we generate 1,000 data sets and apply EFA with 5 estimation methods (maximum likelihood [ML] of covariances, ML of polychoric correlations, robust ML, weighted least squares [WLS], and robust WLS) and 3 fit criteria (chi-square test, root mean square error of approximation, and root mean square residual). The various EFA procedures recover more factors when sample size is large, factor loadings are high, factor structure is simple, and response scales have more options. Robust WLS of polychoric correlations is the preferred method, as it is theoretically justified and shows fewer convergence problems than the other estimation methods.
  • Baron-Cohen, S., Johnson, D., Asher, J. E., Wheelwright, S., Fisher, S. E., Gregersen, P. K., & Allison, C. (2013). Is synaesthesia more common in autism? Molecular Autism, 4(1): 40. doi:10.1186/2040-2392-4-40.

    Abstract

    BACKGROUND:
    Synaesthesia is a neurodevelopmental condition in which a sensation in one modality triggers a perception in a second modality. Autism (shorthand for Autism Spectrum Conditions) is a neurodevelopmental condition involving social-communication disability alongside resistance to change and unusually narrow interests or activities. Whilst on the surface they appear distinct, they have been suggested to share common atypical neural connectivity.

    METHODS:
    In the present study, we carried out the first prevalence study of synaesthesia in autism to formally test whether these conditions are independent. After exclusions, 164 adults with autism and 97 controls completed a synaesthesia questionnaire, autism spectrum quotient, and test of genuineness-revised (ToG-R) online.

    RESULTS:
    The rate of synaesthesia in adults with autism was 18.9% (31 out of 164), almost three times greater than in controls (7.22%, 7 out of 97, P < 0.05). ToG-R proved unsuitable for synaesthetes with autism.

    CONCLUSIONS:
    The significant increase in synaesthesia prevalence in autism suggests that the two conditions may share some common underlying mechanisms. Future research is needed to develop more feasible validation methods of synaesthesia in autism.

  • Barrett, R. L. C., Dawson, M., Dyrby, T. B., Krug, K., Ptito, M., D'Arceuil, H., Croxson, P. L., Johnson, P. J., Howells, H., Forkel, S. J., Dell'Acqua, F., & Catani, M. (2020). Differences in Frontal Network Anatomy Across Primate Species. The Journal of Neuroscience, 40(10), 2094-2107. doi:10.1523/JNEUROSCI.1650-18.2019.

    Abstract

    The frontal lobe is central to distinctive aspects of human cognition and behavior. Some comparative studies link this to a larger frontal cortex and even larger frontal white matter in humans compared with other primates, yet others dispute these findings. The discrepancies between studies could be explained by limitations of the methods used to quantify volume differences across species, especially when applied to white matter connections. In this study, we used a novel tractography approach to demonstrate that frontal lobe networks, extending within and beyond the frontal lobes, occupy 66% of total brain white matter in humans and 48% in three monkey species: vervets (Chlorocebus aethiops), rhesus macaque (Macaca mulatta) and cynomolgus macaque (Macaca fascicularis), all male. The simian–human differences in proportional frontal tract volume were significant for projection, commissural, and both intralobar and interlobar association tracts. Among the long association tracts, the greatest difference was found for tracts involved in motor planning, auditory memory, top-down control of sensory information, and visuospatial attention, with no significant differences in frontal limbic tracts important for emotional processing and social behaviour. In addition, we found that a nonfrontal tract, the anterior commissure, had a smaller volume fraction in humans, suggesting that the disproportionally large volume of human frontal lobe connections is accompanied by a reduction in the proportion of some nonfrontal connections. These findings support a hypothesis of an overall rearrangement of brain connections during human evolution.
  • Barthel, M. (2020). Speech planning in dialogue: Psycholinguistic studies of the timing of turn taking. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Barthel, M., & Levinson, S. C. (2020). Next speakers plan word forms in overlap with the incoming turn: Evidence from gaze-contingent switch task performance. Language, Cognition and Neuroscience, 35(9), 1183-1202. doi:10.1080/23273798.2020.1716030.

    Abstract

    To ensure short gaps between turns in conversation, next speakers regularly start planning their utterance in overlap with the incoming turn. Three experiments investigate which stages of utterance planning are executed in overlap. E1 establishes effects of associative and phonological relatedness of pictures and words in a switch-task from picture naming to lexical decision. E2 focuses on effects of phonological relatedness and investigates potential shifts in the time-course of production planning during background speech. E3 required participants to verbally answer questions as a base task. In critical trials, however, participants switched to visual lexical decision just after they began planning their answer. The task-switch was time-locked to participants' gaze for response planning. Results show that word form encoding is done as early as possible and not postponed until the end of the incoming turn. Hence, planning a response during the incoming turn is executed at least until word form activation.

    Additional information

    Supplemental material
  • Bašnáková, J., Van Berkum, J. J. A., Weber, K., & Hagoort, P. (2015). A job interview in the MRI scanner: How does indirectness affect addressees and overhearers? Neuropsychologia, 76, 79-91. doi:10.1016/j.neuropsychologia.2015.03.030.

    Abstract

    In using language, people not only exchange information, but also navigate their social world – for example, they can express themselves indirectly to avoid losing face. In this functional magnetic resonance imaging study, we investigated the neural correlates of interpreting face-saving indirect replies, in a situation where participants only overheard the replies as part of a conversation between two other people, as well as in a situation where the participants were directly addressed themselves. We created a fictional job interview context where indirect replies serve as a natural communicative strategy to attenuate one’s shortcomings, and asked fMRI participants to either pose scripted questions and receive answers from three putative job candidates (addressee condition) or to listen to someone else interview the same candidates (overhearer condition). In both cases, the need to evaluate the candidate ensured that participants had an active interest in comprehending the replies. Relative to direct replies, face-saving indirect replies increased activation in medial prefrontal cortex, bilateral temporo-parietal junction (TPJ), bilateral inferior frontal gyrus and bilateral middle temporal gyrus, in active overhearers and active addressees alike, with similar effect size, and comparable to findings obtained in an earlier passive listening study (Bašnáková et al., 2013). In contrast, indirectness effects in bilateral anterior insula and pregenual ACC, two regions implicated in emotional salience and empathy, were reliably stronger in addressees than in active overhearers. Our findings indicate that understanding face-saving indirect language requires additional cognitive perspective-taking and other discourse-relevant cognitive processing, to a comparable extent in active overhearers and addressees. 
    Furthermore, they indicate that face-saving indirect language draws upon affective systems more in addressees than in overhearers, presumably because the addressee is the one being managed by a face-saving reply. In all, face-saving indirectness provides a window on the cognitive as well as affect-related neural systems involved in human communication.
  • Bastiaansen, M. C. M., Böcker, K. B. E., Brunia, C. H. M., De Munck, J. C., & Spekreijse, H. (2001). Desynchronization during anticipatory attention for an upcoming stimulus: A comparative EEG/MEG study. Clinical Neurophysiology, 112, 393-403.

    Abstract

    Objectives: Our neurophysiological model of anticipatory behaviour (e.g. Acta Psychol 101 (1999) 213; Bastiaansen et al., 1999a) predicts an activation of (primary) sensory cortex during anticipatory attention for an upcoming stimulus. In this paper we attempt to demonstrate this by means of event-related desynchronization (ERD). Methods: Five subjects performed a time estimation task, and were informed about the quality of their time estimation by either visual or auditory stimuli providing Knowledge of Results (KR). EEG and MEG were recorded in separate sessions, and ERD was computed in the 8–10 and 10–12 Hz frequency bands for both datasets. Results: Both in the EEG and the MEG we found an occipitally maximal ERD preceding the visual KR for all subjects. Preceding the auditory KR, no ERD was present in the EEG, whereas in the MEG we found an ERD over the temporal cortex in two of the 5 subjects. These subjects were also found to have higher levels of absolute power over temporal recording sites in the MEG than the other subjects, which we consider to be an indication of the presence of a 'tau' rhythm (e.g. Neurosci Lett 222 (1997) 111). Conclusions: It is concluded that the results are in line with the predictions of our neurophysiological model.
  • Bastiaansen, M. C. M., & Brunia, C. H. M. (2001). Anticipatory attention: An event-related desynchronization approach. International Journal of Psychophysiology, 43, 91-107.

    Abstract

    This paper addresses the question of whether anticipatory attention - i.e. attention directed towards an upcoming stimulus in order to facilitate its processing - is realized at the neurophysiological level by a pre-stimulus desynchronization of the sensory cortex corresponding to the modality of the anticipated stimulus, reflecting the opening of a thalamocortical gate in the relevant sensory modality. It is argued that a technique called Event-Related Desynchronization (ERD) of rhythmic 10-Hz activity is well suited to study the thalamocortical processes that are thought to mediate anticipatory attention. In a series of experiments, ERD was computed on EEG and MEG data, recorded while subjects performed a time estimation task and were informed about the quality of their time estimation by stimuli providing Knowledge of Results (KR). The modality of the KR stimuli (auditory, visual, or somatosensory) was manipulated both within and between experiments. The results indicate to varying degrees that preceding the presentation of the KR stimuli, ERD is present over the sensory cortex, which corresponds to the modality of the KR stimulus. The general pattern of results supports the notion that a thalamocortical gating mechanism forms the neurophysiological basis of anticipatory attention. Furthermore, the results support the notion that Event-Related Potential (ERP) and ERD measures reflect fundamentally different neurophysiological processes.
  • Bastiaansen, M. C. M., & Hagoort, P. (2015). Frequency-based segregation of syntactic and semantic unification during online sentence level language comprehension. Journal of Cognitive Neuroscience, 27(11), 2095-2107. doi:10.1162/jocn_a_00829.

    Abstract

    During sentence level language comprehension, semantic and syntactic unification are functionally distinct operations. Nevertheless, both recruit roughly the same brain areas (spatially overlapping networks in the left frontotemporal cortex) and happen at the same time (in the first few hundred milliseconds after word onset). We tested the hypothesis that semantic and syntactic unification are segregated by means of neuronal synchronization of the functionally relevant networks in different frequency ranges: gamma (40 Hz and up) for semantic unification and lower beta (10–20 Hz) for syntactic unification. EEG power changes were quantified as participants read either correct sentences, syntactically correct though meaningless sentences (syntactic prose), or sentences that did not contain any syntactic structure (random word lists). Other sentences contained either a semantic anomaly or a syntactic violation at a critical word in the sentence. Larger EEG gamma-band power was observed for semantically coherent than for semantically anomalous sentences. Similarly, beta-band power was larger for syntactically correct sentences than for incorrect ones. These results confirm the existence of a functional dissociation in EEG oscillatory dynamics during sentence level language comprehension that is compatible with the notion of a frequency-based segregation of syntactic and semantic unification.
  • Bastos, A. M., Vezoli, J., Bosman, C. A., Schoffelen, J.-M., Oostenveld, R., Dowdall, J. R., De Weerd, P., Kennedy, H., & Fries, P. (2015). Visual areas exert feedforward and feedback influences through distinct frequency channels. Neuron, 85(2), 390-401. doi:10.1016/j.neuron.2014.12.018.

    Abstract

    Visual cortical areas subserve cognitive functions by interacting in both feedforward and feedback directions. While feedforward influences convey sensory signals, feedback influences modulate feedforward signaling according to the current behavioral context. We investigated whether these interareal influences are subserved differentially by rhythmic synchronization. We correlated frequency-specific directed influences among 28 pairs of visual areas with anatomical metrics of the feedforward or feedback character of the respective interareal projections. This revealed that in the primate visual system, feedforward influences are carried by theta-band (~4 Hz) and gamma-band (~60-80 Hz) synchronization, and feedback influences by beta-band (~14-18 Hz) synchronization. The functional directed influences constrain a functional hierarchy similar to the anatomical hierarchy, but exhibiting task-dependent dynamic changes in particular with regard to the hierarchical positions of frontal areas. Our results demonstrate that feedforward and feedback signaling use distinct frequency channels, suggesting that they subserve differential communication requirements.
  • Bauer, B. L. M. (2020). Language sources and the reconstruction of early languages: Sociolinguistic discrepancies and evolution in Old French grammar. Diachronica, 37(3), 273-317. doi:10.1075/dia.18026.bau.

    Abstract

    This article argues that with the original emphasis on dialectal variation, using primarily literary texts from various regions, analysis of Old French has routinely neglected social variation, providing an incomplete picture of its grammar. Accordingly, Old French has been identified as typically featuring e.g. “pro-drop”, brace constructions, and single negation. Yet examination of these features in informal texts, as opposed to the formal texts typically dealt with, demonstrates that these documents do not corroborate the picture of Old French that is commonly presented in the linguistic literature. Our reconstruction of Old French grammar therefore needs adjustment and further refinement, in particular by implementing sociolinguistic data. With a broader scope, the call for inclusion of sociolinguistic variation may resonate in the investigation of other early languages, resulting in the reassessment of the sources used, and reopening the debate about social variation in dead languages and its role in language evolution.

  • Bauer, B. L. M. (2020). Appositive compounds in dialectal and sociolinguistic varieties of French. In M. Maiden, & S. Wolfe (Eds.), Variation and change in Gallo-Romance (pp. 326-346). Oxford: Oxford University Press.
  • Bauer, B. L. M. (2013). Impersonal verbs. In G. K. Giannakis (Ed.), Encyclopedia of Ancient Greek Language and Linguistics Online (pp. 197-198). Leiden: Brill. doi:10.1163/2214-448X_eagll_SIM_00000481.

    Abstract

    Impersonal verbs in Greek ‒ as in the other Indo-European languages ‒ exclusively feature 3rd person singular finite forms and convey one of three types of meaning: (a) meteorological conditions; (b) emotional and physical state/experience; (c) modality. In Greek, impersonal verbs predominantly convey meteorological conditions and modality.

  • Bauer, B. L. M. (2015). Origins of grammatical forms and evidence from Latin. Journal of Indo-European studies, 43, 201-235.

    Abstract

    This article submits that the instances of incipient grammaticalization that are found in the later stages of Latin and that resulted in new grammatical forms in Romance reflect a major linguistic innovation. While the new grammatical forms are created out of lexical or mildly grammatical autonomous elements, earlier processes seem to primarily involve particles with a certain semantic value and freezing. This fundamental difference explains why the attempts of early Indo-Europeanists such as Franz Bopp at tracing the lexical origins of Indo-European inflected forms were unsuccessful and strongly criticized by the Neo-Grammarians.
  • Bauer, B. L. M. (2015). Origins of the indefinite HOMO constructions. In G. Haverling (Ed.), Latin Linguistics in the Early 21st Century: Acts of the 16th International Colloquium on Latin Linguistics (pp. 542-553). Uppsala: Uppsala University.
  • Becker, M., Devanna, P., Fisher, S. E., & Vernes, S. C. (2015). A chromosomal rearrangement in a child with severe speech and language disorder separates FOXP2 from a functional enhancer. Molecular Cytogenetics, 8: 69. doi:10.1186/s13039-015-0173-0.

    Abstract

    Mutations of FOXP2 in 7q31 cause a rare disorder involving speech apraxia, accompanied by expressive and receptive language impairments. A recent report described a child with speech and language deficits, and a genomic rearrangement affecting chromosomes 7 and 11. One breakpoint mapped to 7q31 and, although outside its coding region, was hypothesised to disrupt FOXP2 expression. We identified an element 2 kb downstream of this breakpoint with epigenetic characteristics of an enhancer. We show that this element drives reporter gene expression in human cell-lines. Thus, displacement of this element by translocation may disturb gene expression, contributing to the observed language phenotype.
  • Becker, R., Pefkou, M., Michel, C. M., & Hervais-Adelman, A. (2013). Left temporal alpha-band activity reflects single word intelligibility. Frontiers in Systems Neuroscience, 7: 121. doi:10.3389/fnsys.2013.00121.

    Abstract

    The electroencephalographic (EEG) correlates of degraded speech perception have been explored in a number of recent studies. However, such investigations have often been inconclusive as to whether observed differences in brain responses between conditions result from different acoustic properties of more or less intelligible stimuli or whether they relate to cognitive processes implicated in comprehending challenging stimuli. In this study we used noise vocoding to spectrally degrade monosyllabic words in order to manipulate their intelligibility. We used spectral rotation to generate incomprehensible control conditions matched in terms of spectral detail. We recorded EEG from 14 volunteers who listened to a series of noise vocoded (NV) and noise-vocoded spectrally-rotated (rNV) words, while they carried out a detection task. We specifically sought components of the EEG response that showed an interaction between spectral rotation and spectral degradation. This reflects those aspects of the brain electrical response that are related to the intelligibility of acoustically degraded monosyllabic words, while controlling for spectral detail. An interaction between spectral complexity and rotation was apparent in both evoked and induced activity. Analyses of event-related potentials showed an interaction effect for a P300-like component at several centro-parietal electrodes. Time-frequency analysis of the EEG signal in the alpha-band revealed a monotonic increase in event-related desynchronization (ERD) for the NV but not the rNV stimuli in the alpha band at a left temporo-central electrode cluster from 420-560 ms reflecting a direct relationship between the strength of alpha-band ERD and intelligibility. By matching NV words with their incomprehensible rNV homologues, we reveal the spatiotemporal pattern of evoked and induced processes involved in degraded speech perception, largely uncontaminated by purely acoustic effects.
  • Behrens, B., Flecken, M., & Carroll, M. (2013). Progressive Attraction: On the Use and Grammaticalization of Progressive Aspect in Dutch, Norwegian, and German. Journal of Germanic linguistics, 25(2), 95-136. doi:10.1017/S1470542713000020.

    Abstract

    This paper investigates the use of aspectual constructions in Dutch, Norwegian, and German, languages in which aspect marking that presents events explicitly as ongoing is optional. Data were elicited under similar conditions with native speakers in the three countries. We show that while German speakers make insignificant use of aspectual constructions, usage patterns in Norwegian and Dutch present an interesting case of overlap, as well as differences, with respect to a set of factors that attract or constrain the use of different constructions. The results indicate that aspect marking is grammaticalizing in Dutch, but there are no clear signs of a similar process in Norwegian.
  • Beierholm, U., Rohe, T., Ferrari, A., Stegle, O., & Noppeney, U. (2020). Using the past to estimate sensory uncertainty. eLife, 9: e54172. doi:10.7554/eLife.54172.

    Abstract

    To form a more reliable percept of the environment, the brain needs to estimate its own sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. We evaluated this assumption in four psychophysical experiments, in which human observers localized auditory signals that were presented synchronously with spatially disparate visual signals. Critically, the visual noise changed dynamically over time continuously or with intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory uncertainty estimates that combine information from past and current signals consistent with an optimal Bayesian learner that can be approximated by exponential discounting. Our results challenge leading models of perceptual inference where sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.
  • Berghuis, B., De Kovel, C. G. F., van Iterson, L., Lamberts, R. J., Sander, J. W., Lindhout, D., & Koeleman, B. P. C. (2015). Complex SCN8A DNA-abnormalities in an individual with therapy resistant absence epilepsy. Epilepsy Research, 115, 141-144. doi:10.1016/j.eplepsyres.2015.06.007.

    Abstract

    Background De novo SCN8A missense mutations have been identified as a rare dominant cause of epileptic encephalopathy. We described a person with epileptic encephalopathy associated with a mosaic deletion of the SCN8A gene. Methods Array comparative genome hybridization was used to identify chromosomal abnormalities. Next Generation Sequencing was used to screen for variants in known and candidate epilepsy genes. A single nucleotide polymorphism array was used to test whether the SCN8A variants were in cis or in trans. Results We identified a de novo mosaic deletion of exons 2–14 of SCN8A, and a rare maternally inherited missense variant on the other allele in a woman presenting with absence seizures, challenging behavior, intellectual disability and QRS-fragmentation on the ECG. We also found a variant in SCN5A. Conclusions The combination of a rare missense variant with a de novo mosaic deletion of a large part of the SCN8A gene suggests that other possible mechanisms for SCN8A mutations may cause epilepsy; loss of function, genetic modifiers and cellular interference may play a role. This case expands the phenotype associated with SCN8A mutations, with absence epilepsy and regression in language and memory skills.
  • Bergmann, C., Ten Bosch, L., Fikkert, P., & Boves, L. (2013). A computational model to investigate assumptions in the headturn preference procedure. Frontiers in Psychology, 4: 676. doi:10.3389/fpsyg.2013.00676.

    Abstract

    In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioral differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects in the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarization and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximize cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first two assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviors observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.
  • Bergmann, C., Ten Bosch, L., Fikkert, P., & Boves, L. (2015). Modelling the noise-robustness of infants’ word representations: The impact of previous experience. PLoS One, 10(7): e0132245. doi:10.1371/journal.pone.0132245.

    Abstract

    During language acquisition, infants frequently encounter ambient noise. We present a computational model to address whether specific acoustic processing abilities are necessary to detect known words in moderate noise—an ability attested experimentally in infants. The model implements a general purpose speech encoding and word detection procedure. Importantly, the model contains no dedicated processes for removing or cancelling out ambient noise, and it can replicate the patterns of results obtained in several infant experiments. In addition to noise, we also addressed the role of previous experience with particular target words: does the frequency of a word matter, and does it play a role whether that word has been spoken by one or multiple speakers? The simulation results show that both factors affect noise robustness. We also investigated how robust word detection is to changes in speaker identity by comparing words spoken by known versus unknown speakers during the simulated test. This factor interacted with both noise level and past experience, showing that an increase in exposure is only helpful when a familiar speaker provides the test material. Added variability proved helpful only when encountering an unknown speaker. Finally, we addressed whether infants need to recognise specific words, or whether a more parsimonious explanation of infant behaviour, which we refer to as matching, is sufficient. Recognition involves a focus of attention on a specific target word, while matching only requires finding the best correspondence of acoustic input to a known pattern in the memory. Attending to a specific target word proves to be more noise robust, but a general word matching procedure can be sufficient to simulate experimental data stemming from young infants. A change from acoustic matching to targeted recognition provides an explanation of the improvements observed in infants around their first birthday. 
    In summary, we present a computational model incorporating only the processes infants might employ when hearing words in noise. Our findings show that a parsimonious interpretation of behaviour is sufficient and we offer a formal account of emerging abilities.
  • Bidgood, A., Pine, J. M., Rowland, C. F., & Ambridge, B. (2020). Syntactic representations are both abstract and semantically constrained: Evidence from children’s and adults’ comprehension and production/priming of the English passive. Cognitive Science, 44(9): e12892. doi:10.1111/cogs.12892.

    Abstract

    All accounts of language acquisition agree that, by around age 4, children’s knowledge of grammatical constructions is abstract, rather than tied solely to individual lexical items. The aim of the present research was to investigate, focusing on the passive, whether children’s and adults’ performance is additionally semantically constrained, varying according to the distance between the semantics of the verb and those of the construction. In a forced‐choice pointing study (Experiment 1), both 4‐ to 6‐year olds (N = 60) and adults (N = 60) showed support for the prediction of this semantic construction prototype account of an interaction such that the observed disadvantage for passives as compared to actives (i.e., fewer correct points/longer reaction time) was greater for experiencer‐theme verbs than for agent‐patient and theme‐experiencer verbs (e.g., Bob was seen/hit/frightened by Wendy). Similarly, in a production/priming study (Experiment 2), both 4‐ to 6‐year olds (N = 60) and adults (N = 60) produced fewer passives for experiencer‐theme verbs than for agent‐patient/theme‐experiencer verbs. We conclude that these findings are difficult to explain under accounts based on the notion of A(rgument) movement or of a monostratal, semantics‐free, level of syntax, and instead necessitate some form of semantic construction prototype account.

    Additional information

    Supplementary material
  • Blackwell, N. L., Perlman, M., & Fox Tree, J. E. (2015). Quotation as a multimodal construction. Journal of Pragmatics, 81, 1-7. doi:10.1016/j.pragma.2015.03.004.

    Abstract

    Quotations are a means to report a broad range of events in addition to speech, and often involve both vocal and bodily demonstration. The present study examined the use of quotation to report a variety of multisensory events (i.e., containing salient visible and audible elements) as participants watched and then described a set of video clips including human speech and animal vocalizations. We examined the relationship between demonstrations conveyed through the vocal versus bodily modality, comparing them across four common quotation devices (be like, go, say, and zero quotatives), as well as across direct and non-direct quotations and retellings. We found that direct quotations involved high levels of both vocal and bodily demonstration, while non-direct quotations involved lower levels in both these channels. In addition, there was a strong positive correlation between vocal and bodily demonstration for direct quotation. This result supports a Multimodal Hypothesis where information from the two channels arises from one central concept.
  • Blythe, J. (2015). Other-initiated repair in Murrinh-Patha. Open Linguistics, 1, 283-308. doi:10.1515/opli-2015-0003.

    Abstract

    The range of linguistic structures and interactional practices associated with other-initiated repair (OIR) is surveyed for the Northern Australian language Murrinh-Patha. By drawing on a video corpus of informal Murrinh-Patha conversation, the OIR formats are compared in terms of their utility and versatility. Certain “restricted” formats have semantic properties that point to prior trouble source items. While these make the restricted repair initiators more specialised, the “open” formats are less well resourced semantically, which makes them more versatile. They tend to be used when the prior talk is potentially problematic in more ways than one. The open formats (especially thangku, “what?”) tend to solicit repair operations on each potential source of trouble, such that the resultant repair solution improves upon the trouble-source turn in several ways.
  • Blythe, J. (2013). Preference organization driving structuration: Evidence from Australian Aboriginal interaction for pragmatically motivated grammaticalization. Language, 89(4), 883-919.
  • Bobadilla-Suarez, S., Guest, O., & Love, B. C. (2020). Subjective value and decision entropy are jointly encoded by aligned gradients across the human brain. Communications Biology, 3: 597. doi:10.1038/s42003-020-01315-3.

    Abstract

    Recent work has considered the relationship between value and confidence in both behavioural and neural representation. Here we evaluated whether the brain organises value and confidence signals in a systematic fashion that reflects the overall desirability of options. If so, regions that respond to either increases or decreases in both value and confidence should be widespread. We strongly confirmed these predictions through a model-based fMRI analysis of a mixed gambles task that assessed subjective value (SV) and inverse decision entropy (iDE), which is related to confidence. Purported value areas more strongly signalled iDE than SV, underscoring how intertwined value and confidence are. A gradient tied to the desirability of actions transitioned from positive SV and iDE in ventromedial prefrontal cortex to negative SV and iDE in dorsal medial prefrontal cortex. This alignment of SV and iDE signals could support retrospective evaluation to guide learning and subsequent decisions.

    Additional information

    supplemental information
  • Bock, K., Eberhard, K. M., Cutting, J. C., Meyer, A. S., & Schriefers, H. (2001). Some attractions of verb agreement. Cognitive Psychology, 43(2), 83-128. doi:10.1006/cogp.2001.0753.

    Abstract

    In English, words like scissors are grammatically plural but conceptually singular, while words like suds are both grammatically and conceptually plural. Words like army can be construed plurally, despite being grammatically singular. To explore whether and how congruence between grammatical and conceptual number affected the production of subject-verb number agreement in English, we elicited sentence completions for complex subject noun phrases like The advertisement for the scissors. In these phrases, singular subject nouns were followed by distractor words whose grammatical and conceptual numbers varied. The incidence of plural attraction (the use of plural verbs after plural distractors) increased only when distractors were grammatically plural, and revealed no influence from the distractors' number meanings. Companion experiments in Dutch offered converging support for this account and suggested that similar agreement processes operate in that language. The findings argue for a component of agreement that is sensitive primarily to the grammatical reflections of number. Together with other results, the evidence indicates that the implementation of agreement in languages like English and Dutch involves separable processes of number marking and number morphing, in which number meaning plays different parts.

  • De Boer, B., Thompson, B., Ravignani, A., & Boeckx, C. (2020). Analysis of mutation and fixation for language. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 56-58). Nijmegen: The Evolution of Language Conferences.
  • De Boer, B., Thompson, B., Ravignani, A., & Boeckx, C. (2020). Evolutionary dynamics do not motivate a single-mutant theory of human language. Scientific Reports, 10: 451. doi:10.1038/s41598-019-57235-8.

    Abstract

    One of the most controversial hypotheses in cognitive science is the Chomskyan evolutionary conjecture that language arose instantaneously in humans through a single mutation. Here we analyze the evolutionary dynamics implied by this hypothesis, which has never been formalized before. The hypothesis supposes the emergence and fixation of a single mutant (capable of the syntactic operation Merge) during a narrow historical window as a result of frequency-independent selection under a huge fitness advantage in a population of an effective size no larger than ~15 000 individuals. We examine this proposal by combining diffusion analysis and extreme value theory to derive a probabilistic formulation of its dynamics. We find that although a macro-mutation is much more likely to go to fixation if it occurs, it is much more unlikely a priori than multiple mutations with smaller fitness effects. The most likely scenario is therefore one where a medium number of mutations with medium fitness effects accumulate. This precise analysis of the probability of mutations occurring and going to fixation has not been done previously in the context of the evolution of language. Our results cast doubt on any suggestion that evolutionary reasoning provides an independent rationale for a single-mutant theory of language.

  • De Boer, M., Toni, I., & Willems, R. M. (2013). What drives successful verbal communication? Frontiers in Human Neuroscience, 7: 622. doi:10.3389/fnhum.2013.00622.

    Abstract

    There is a vast amount of potential mappings between behaviors and intentions in communication: a behavior can indicate a multitude of different intentions, and the same intention can be communicated with a variety of behaviors. Humans routinely solve these many-to-many referential problems when producing utterances for an Addressee. This ability might rely on social cognitive skills, for instance, the ability to manipulate unobservable summary variables to disambiguate ambiguous behavior of other agents (“mentalizing”) and the drive to invest resources into changing and understanding the mental state of other agents (“communicative motivation”). Alternatively, the ambiguities of verbal communicative interactions might be solved by general-purpose cognitive abilities that process cues that are incidentally associated with the communicative interaction. In this study, we assess these possibilities by testing which cognitive traits account for communicative success during a verbal referential task. Cognitive traits were assessed with psychometric scores quantifying motivation, mentalizing abilities, and general-purpose cognitive abilities, taxing abstract visuo-spatial abilities. Communicative abilities of participants were assessed by using an on-line interactive task that required a speaker to verbally convey a concept to an Addressee. The communicative success of the utterances was quantified by measuring how frequently a number of Evaluators would infer the correct concept. Speakers with high motivational and general-purpose cognitive abilities generated utterances that were more easily interpreted. These findings extend to the domain of verbal communication the notion that motivational and cognitive factors influence the human ability to rapidly converge on shared communicative innovations.
  • Boersma, M., Kemner, C., de Reus, M. A., Collin, G., Snijders, T. M., Hofman, D., Buitelaar, J. K., Stam, C. J., & van den Heuvel, M. P. (2013). Disrupted functional brain networks in autistic toddlers. Brain Connectivity, 3(1), 41-49. doi:10.1089/brain.2012.0127.

    Abstract

Communication and integration of information between brain regions play a key role in healthy brain function. Conversely, disruption in brain communication may lead to cognitive and behavioral problems. Autism is a neurodevelopmental disorder that is characterized by impaired social interactions and aberrant basic information processing. Aberrant brain connectivity patterns have indeed been hypothesized to be a key neural underpinning of autism. In this study, graph analytical tools are used to explore the possible deviant functional brain network organization in autism at a very early stage of brain development. Electroencephalography (EEG) recordings in 12 toddlers with autism (mean age 3.5 years) and 19 control subjects were used to assess interregional functional brain connectivity, with functional brain networks constructed at the level of temporal synchronization between brain regions underlying the EEG electrodes. Children with autism showed a significantly increased normalized path length and reduced normalized clustering, suggesting a reduced global communication capacity already during early brain development. In addition, whole brain connectivity was found to be significantly reduced in these young patients, suggesting an overall under-connectivity of functional brain networks in autism. Our findings support the hypothesis of abnormal neural communication in autism, with deviating effects already present at the early stages of brain development.
  • Bögels, S., Kendrick, K. H., & Levinson, S. C. (2020). Conversational expectations get revised as response latencies unfold. Language, Cognition and Neuroscience, 35(6), 766-779. doi:10.1080/23273798.2019.1590609.

    Abstract

    The present study extends neuro-imaging into conversation through studying dialogue comprehension. Conversation entails rapid responses, with negative semiotics for delay. We explored how expectations about the valence of the forthcoming response develop during the silence before the response and whether negative responses have mainly cognitive or social-emotional consequences. EEG-participants listened to questions from a spontaneous spoken corpus, cross-spliced with short/long gaps and “yes”/“no” responses. Preceding contexts biased listeners to expect the eventual response, which was hypothesised to translate to expectations for a shorter or longer gap. “No” responses showed a trend towards an early positivity, suggesting socio-emotional consequences. Within the long gap, expecting a “yes” response led to an earlier negativity, as well as a trend towards stronger theta-oscillations, after 300 milliseconds. This suggests that listeners anticipate/predict “yes” responses to come earlier than “no” responses, showing strong sensitivities to timing, which presumably promote hastening the pace of verbal interaction.

  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2015). Conversational interaction in the scanner: Mentalizing during language processing as revealed by MEG. Cerebral Cortex, 25(9), 3219-3234. doi:10.1093/cercor/bhu116.

    Abstract

Humans are especially good at taking another’s perspective — representing what others might be thinking or experiencing. This “mentalizing” capacity is apparent in everyday human interactions and conversations. We investigated its neural basis using magnetoencephalography. We focused on whether mentalizing was engaged spontaneously and routinely to understand an utterance’s meaning or largely on-demand, to restore “common ground” when expectations were violated. Participants conversed with 1 of 2 confederate speakers and established tacit agreements about objects’ names. In a subsequent “test” phase, some of these agreements were violated by either the same or a different speaker. Our analysis of the neural processing of test phase utterances revealed recruitment of neural circuits associated with language (temporal cortex), episodic memory (e.g., medial temporal lobe), and mentalizing (temporo-parietal junction and ventro-medial prefrontal cortex). Theta oscillations (3–7 Hz) were modulated most prominently, and we observed phase coupling between functionally distinct neural circuits. The episodic memory and language circuits were recruited in anticipation of upcoming referring expressions, suggesting that context-sensitive predictions were spontaneously generated. In contrast, the mentalizing areas were recruited on-demand, as a means for detecting and resolving perceived pragmatic anomalies, with little evidence they were activated to make partner-specific predictions about upcoming linguistic utterances.
  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2013). "Are we still talking about the same thing?" MEG reveals perspective-taking in response to pragmatic violations, but not in anticipation. In M. Knauff, N. Pauen, I. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 215-220). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0066/index.html.

    Abstract

    The current study investigates whether mentalizing, or taking the perspective of your interlocutor, plays an essential role throughout a conversation or whether it is mostly used in reaction to misunderstandings. This study is the first to use a brain-imaging method, MEG, to answer this question. In a first phase of the experiment, MEG participants interacted "live" with a confederate who set naming precedents for certain pictures. In a later phase, these precedents were sometimes broken by a speaker who named the same picture in a different way. This could be done by the same speaker, who set the precedent, or by a different speaker. Source analysis of MEG data showed that in the 800 ms before the naming, when the picture was already on the screen, episodic memory and language areas were activated, but no mentalizing areas, suggesting that the speaker's naming intentions were not anticipated by the listener on the basis of shared experiences. Mentalizing areas only became activated after the same speaker had broken a precedent, which we interpret as a reaction to the violation of conversational pragmatics.
  • Bögels, S., & Torreira, F. (2015). Listeners use intonational phrase boundaries to project turn ends in spoken interaction. Journal of phonetics, 52, 46-57. doi:10.1016/j.wocn.2015.04.004.

    Abstract

    In conversation, turn transitions between speakers often occur smoothly, usually within a time window of a few hundred milliseconds. It has been argued, on the basis of a button-press experiment [De Ruiter, J. P., Mitterer, H., & Enfield, N. J. (2006). Projecting the end of a speaker's turn: A cognitive cornerstone of conversation. Language, 82(3):515–535], that participants in conversation rely mainly on lexico-syntactic information when timing and producing their turns, and that they do not need to make use of intonational cues to achieve smooth transitions and avoid overlaps. In contrast to this view, but in line with previous observational studies, our results from a dialogue task and a button-press task involving questions and answers indicate that the identification of the end of intonational phrases is necessary for smooth turn-taking. In both tasks, participants never responded to questions (i.e., gave an answer or pressed a button to indicate a turn end) at turn-internal points of syntactic completion in the absence of an intonational phrase boundary. Moreover, in the button-press task, they often pressed the button at the same point of syntactic completion when the final word of an intonational phrase was cross-spliced at that location. Furthermore, truncated stimuli ending in a syntactic completion point but lacking an intonational phrase boundary led to significantly delayed button presses. In light of these results, we argue that earlier claims that intonation is not necessary for correct turn-end projection are misguided, and that research on turn-taking should continue to consider intonation as a source of turn-end cues along with other linguistic and communicative phenomena.
  • Bögels, S. (2020). Neural correlates of turn-taking in the wild: Response planning starts early in free interviews. Cognition, 203: 104347. doi:10.1016/j.cognition.2020.104347.

    Abstract

    Conversation is generally characterized by smooth transitions between turns, with only very short gaps. This entails that responders often begin planning their response before the ongoing turn is finished. However, controversy exists about whether they start planning as early as they can, to make sure they respond on time, or as late as possible, to minimize the overlap between comprehension and production planning. Two earlier EEG studies have found neural correlates of response planning (positive ERP and alpha decrease) as soon as listeners could start planning their response, already midway through the current turn. However, in these studies, the questions asked were highly controlled with respect to the position where planning could start (e.g., very early) and required short and easy responses. The present study measured participants' EEG while an experimenter interviewed them in a spontaneous interaction. Coding the questions in the interviews showed that, under these natural circumstances, listeners can, in principle, start planning a response relatively early, on average after only about one third of the question has passed. Furthermore, ERP results showed a large positivity, interpreted before as an early neural signature of response planning, starting about half a second after the start of the word that allowed listeners to start planning a response. A second neural signature of response planning, an alpha decrease, was not replicated as reliably. In conclusion, listeners appear to start planning their response early during the ongoing turn, also under natural circumstances, presumably in order to keep the gap between turns short and respond on time. These results have several important implications for turn-taking theories, which need to explain how interlocutors deal with the overlap between comprehension and production, how they manage to come in on time, and the sources that lead to variability between conversationalists in the start of planning.

  • Bögels, S., Magyari, L., & Levinson, S. C. (2015). Neural signatures of response planning occur midway through an incoming question in conversation. Scientific Reports, 5: 12881. doi:10.1038/srep12881.

    Abstract

    A striking puzzle about language use in everyday conversation is that turn-taking latencies are usually very short, whereas planning language production takes much longer. This implies overlap between language comprehension and production processes, but the nature and extent of such overlap has never been studied directly. Combining an interactive quiz paradigm with EEG measurements in an innovative way, we show that production planning processes start as soon as possible, that is, within half a second after the answer to a question can be retrieved (up to several seconds before the end of the question). Localization of ERP data shows early activation even of brain areas related to late stages of production planning (e.g., syllabification). Finally, oscillation results suggest an attention switch from comprehension to production around the same time frame. This perspective from interactive language use throws new light on the performance characteristics that language competence involves.
  • Bögels, S., Kendrick, K. H., & Levinson, S. C. (2015). Never say no… How the brain interprets the pregnant pause in conversation. PLoS One, 10(12): e0145474. doi:10.1371/journal.pone.0145474.

    Abstract

In conversation, negative responses to invitations, requests, offers, and the like are more likely to occur with a delay — conversation analysts talk of them as dispreferred. Here we examine the contrastive cognitive load ‘yes’ and ‘no’ responses make, either when relatively fast (300 ms after question offset) or delayed (1000 ms). Participants heard short dialogues contrasting in speed and valence of response while having their EEG recorded. We found that a fast ‘no’ evokes an N400-effect relative to a fast ‘yes’; however, this contrast disappeared in the delayed responses. ‘No’ responses, however, elicited a late frontal positivity both when they were fast and when they were delayed. We interpret these results as follows: a fast ‘no’ evoked an N400 because an immediate response is expected to be positive — this effect disappears as the response time lengthens because now in ordinary conversation the probability of a ‘no’ has increased. However, regardless of the latency of response, a ‘no’ response is associated with a late positivity, since a negative response is always dispreferred. Together these results show that negative responses to social actions exact a higher cognitive load, but especially when least expected, in immediate response.

  • Bögels, S., Schriefers, H., Vonk, W., Chwilla, D., & Kerkhofs, R. (2013). Processing consequences of superfluous and missing prosodic breaks in auditory sentence comprehension. Neuropsychologia, 51, 2715-2728. doi:10.1016/j.neuropsychologia.2013.09.008.

    Abstract

This ERP study investigates whether a superfluous prosodic break (i.e., a prosodic break that does not coincide with a syntactic break) has more severe processing consequences during auditory sentence comprehension than a missing prosodic break (i.e., the absence of a prosodic break at the position of a syntactic break). Participants listened to temporarily ambiguous sentences involving a prosody-syntax match or mismatch. The disambiguation of these sentences was always lexical in nature in the present experiment. This contrasts with a related study by Pauker, Itzhak, Baum, and Steinhauer (2011), where the disambiguation was of a lexical type for missing prosodic breaks and of a prosodic type for superfluous prosodic breaks. Our results converge with those of Pauker et al.: superfluous prosodic breaks lead to more severe processing problems than missing prosodic breaks. Importantly, the present results extend those of Pauker et al. by showing that this holds when the disambiguation is always lexical in nature. Furthermore, our results show that the way listeners use prosody can change over the course of the experiment, which has consequences for future studies.
  • Bohnemeyer, J. (2001). Motionland films version 2: Referential communication task with motionland stimulus. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 97-99). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874623.

    Abstract

    How do languages express ideas of movement, and how do they package different components of moving, such as manner and path? This task supports detailed investigation of motion descriptions. The specific study goals are: (a) the coding of “via” grounds (i.e., ground objects which the figure moves along, over, around, through, past, etc.); (b) the coding of direction changes; (c) the spontaneous segmentation of complex motion scenarios; and (d) the gestural representation of motion paths. The stimulus set is 5 simple 3D animations (7-17 seconds long) that show a ball rolling through a landscape. The task is a director-matcher task for two participants. The director describes the path of the ball in each clip to the matcher, who is asked to trace the path with a pen in a 2D picture.

  • Bohnemeyer, J., Eisenbeiss, S., & Narasimhan, B. (2001). Event triads. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 100-114). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874630.

    Abstract

    Judgments we make about how similar or different events are to each other can reveal the features we find useful in classifying the world. This task is designed to investigate how speakers of different languages classify events, and to examine how linguistic and gestural encoding relates to non-linguistic classification. Specifically, the task investigates whether speakers judge two events to be similar on the basis of (a) the path versus manner of motion, (b) sub-events versus larger complex events, (c) participant identity versus event identity, and (d) different participant roles. In the task, participants are asked to make similarity judgments concerning sets of 2D animation clips.
  • Bohnemeyer, J. (2001). A questionnaire on event integration. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 177-184). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Bohnemeyer, J., Bowerman, M., & Brown, P. (2001). Cut and break clips. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 90-96). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874626.

    Abstract

    How do different languages treat a particular semantic domain? It has already been established that languages have widely varied words for talking about “cutting” and “breaking” things: for example, English has a very general verb break, but K’iche’ Maya has many different ‘break’ verbs that are used for different kinds of objects (e.g., brittle, flexible, long). The aim of this task is to map out cross-linguistic lexicalisation patterns in the cutting/breaking domain. The stimuli comprise 61 short video clips that show one or two actors breaking various objects (sticks, carrots, pieces of cloth or string, etc.) using various instruments (a knife, a hammer, an axe, their hands, etc.), or situations in which various kinds of objects break spontaneously. The clips are used to elicit descriptions of actors’ actions and the state changes that the objects undergo.

