Publications

  • Acheson, D. J. (2013). Signatures of response conflict monitoring in language production. Procedia - Social and Behavioral Sciences, 94, 214-215. doi:10.1016/j.sbspro.2013.09.106.
  • Acheson, D. J., & Hagoort, P. (2013). Stimulating the brain's language network: Syntactic ambiguity resolution after TMS to the IFG and MTG. Journal of Cognitive Neuroscience, 25(10), 1664-1677. doi:10.1162/jocn_a_00430.

    Abstract

    The posterior middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) are two critical nodes of the brain's language network. Previous neuroimaging evidence has supported a dissociation in language comprehension in which parts of the MTG are involved in the retrieval of lexical syntactic information and the IFG is involved in unification operations that maintain, select, and integrate multiple sources of information over time. In the present investigation, we tested for causal evidence of this dissociation by modulating activity in IFG and MTG using an offline TMS procedure: continuous theta-burst stimulation. Lexical-syntactic retrieval was manipulated by using sentences with and without a temporary word-class (noun/verb) ambiguity (e.g., run). In one group of participants, TMS was applied to the IFG and MTG, and in a control group, no TMS was applied. Eye movements were recorded and quantified at two critical sentence regions: a temporarily ambiguous region and a disambiguating region. Results show that stimulation of the IFG led to a modulation of the ambiguity effect (ambiguous-unambiguous) at the disambiguating sentence region in three measures: first fixation durations, total reading times, and regressive eye movements into the region. Both IFG and MTG stimulation modulated the ambiguity effect for total reading times in the temporarily ambiguous sentence region relative to a control group. The current results demonstrate that an offline repetitive TMS protocol can influence a different point in time during online processing and provide causal evidence for IFG involvement in unification operations during sentence comprehension.
  • Acheson, D. J., Postle, B. R., & MacDonald, M. C. (2010). The interaction of concreteness and phonological similarity in verbal working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(1), 17-36. doi:10.1037/a0017679.

    Abstract

    Although phonological representations have been a primary focus of verbal working memory research, lexical-semantic manipulations also influence performance. In the present study, the authors investigated whether a classic phenomenon in verbal working memory, the phonological similarity effect (PSE), is modulated by a lexical-semantic variable, word concreteness. Phonological overlap and concreteness were factorially manipulated in each of four experiments across which presentation modality (Experiments 1 and 2: visual presentation; Experiments 3 and 4: auditory presentation) and concurrent articulation (present in Experiments 2 and 4) were manipulated. In addition to main effects of each variable, results show a Phonological Overlap x Concreteness interaction whereby the magnitude of the PSE is greater for concrete word lists relative to abstract word lists. This effect is driven by superior item memory for nonoverlapping, concrete lists and is robust to the modality of presentation and concurrent articulation. These results demonstrate that in verbal working memory tasks, there are multiple routes to the phonological form of a word and that maintenance and retrieval occur over more than just a phonological level.
  • Adank, P., & Janse, E. (2010). Comprehension of a novel accent by young and older listeners. Psychology and Aging, 25(3), 736-740. doi:10.1037/a0020054.

    Abstract

    The authors investigated perceptual learning of a novel accent in young and older listeners by measuring speech reception thresholds (SRTs) using speech materials spoken in a novel, unfamiliar accent. Younger and older listeners adapted to this accent, but older listeners showed poorer comprehension of the accent. Furthermore, perceptual learning differed across groups: the older listeners stopped learning after the first block, whereas younger listeners showed further improvement with longer exposure. Among the older participants, hearing acuity predicted the SRT as well as the effect of the novel accent on SRT. Finally, a measure of executive function predicted the impact of accent on SRT.
  • Adank, P., Hagoort, P., & Bekkering, H. (2010). Imitation improves language comprehension. Psychological Science, 21, 1903-1909. doi:10.1177/0956797610389192.

    Abstract

    Humans imitate each other during social interaction. This imitative behavior streamlines social interaction and aids in learning to replicate actions. However, the effect of imitation on action comprehension is unclear. This study investigated whether vocal imitation of an unfamiliar accent improved spoken-language comprehension. Following a pretraining accent comprehension test, participants were assigned to one of six groups. The baseline group received no training, but participants in the other five groups listened to accented sentences, listened to and repeated accented sentences in their own accent, listened to and transcribed accented sentences, listened to and imitated accented sentences, or listened to and imitated accented sentences without being able to hear their own vocalizations. Posttraining measures showed that accent comprehension was most improved for participants who imitated the speaker’s accent. These results show that imitation may aid in streamlining interaction by improving spoken-language comprehension under adverse listening conditions.
  • Ahn, D., Abbott, M. J., Rayner, K., Ferreira, V. S., & Gollan, T. H. (2020). Minimal overlap in language control across production and comprehension: Evidence from read-aloud versus eye-tracking tasks. Journal of Neurolinguistics, 54: 100885. doi:10.1016/j.jneuroling.2019.100885.

    Abstract

    Bilinguals are remarkable at language control, switching between languages only when they intend to. However, language control in production can involve switch costs: switching to another language takes longer than staying in the same language. Moreover, bilinguals sometimes produce language intrusion errors, mistakenly producing words in an unintended language (e.g., Spanish-English bilinguals saying "pero" instead of "but"). Switch costs are also found in comprehension. For example, reading times are longer when bilinguals read sentences with language switches compared to sentences without them. Given that both production and comprehension involve switch costs, some language-control mechanisms might be shared across modalities. To test this, we compared language switch costs found in eye-movement measures during silent sentence reading (comprehension) and intrusion errors produced when reading aloud switched words in mixed-language paragraphs (production). Bilinguals who made more intrusion errors during the read-aloud task did not show different switch-cost patterns in most measures in the silent-reading task, except on skipping rates. We suggest that language switching is mostly controlled by separate, modality-specific processes in production and comprehension, although some points of overlap may reflect domain-general control and its influence on individual differences in bilingual language control.
  • Akita, K., & Dingemanse, M. (2019). Ideophones (Mimetics, Expressives). In Oxford Research Encyclopedia for Linguistics. Oxford: Oxford University Press. doi:10.1093/acrefore/9780199384655.013.477.

    Abstract

    Ideophones, also termed “mimetics” or “expressives,” are marked words that depict sensory imagery. They are found in many of the world’s languages, and sizable lexical classes of ideophones are particularly well-documented in languages of Asia, Africa, and the Americas. Ideophones are not limited to onomatopoeia like meow and smack, but cover a wide range of sensory domains, such as manner of motion (e.g., plisti plasta ‘splish-splash’ in Basque), texture (e.g., tsaklii ‘rough’ in Ewe), and psychological states (e.g., wakuwaku ‘excited’ in Japanese). Across languages, ideophones stand out as marked words due to special phonotactics, expressive morphology including certain types of reduplication, and relative syntactic independence, in addition to production features like prosodic foregrounding and common co-occurrence with iconic gestures.

    Three intertwined issues have been repeatedly debated in the century-long literature on ideophones. (a) Definition: Isolated descriptive traditions and cross-linguistic variation have sometimes obscured a typologically unified view of ideophones, but recent advances show the promise of a prototype definition of ideophones as conventionalised depictions in speech, with room for language-specific nuances. (b) Integration: The variable integration of ideophones across linguistic levels reveals an interaction between expressiveness and grammatical integration, and has important implications for how to conceive of dependencies between linguistic systems. (c) Iconicity: Ideophones form a natural laboratory for the study of iconic form-meaning associations in natural languages, and converging evidence from corpus and experimental studies suggests important developmental, evolutionary, and communicative advantages of ideophones.
  • Alcock, K., Meints, K., & Rowland, C. F. (2020). The UK communicative development inventories: Words and gestures. Guilford, UK: J&R Press Ltd.
  • Alday, P. M. (2019). How much baseline correction do we need in ERP research? Extended GLM model can replace baseline correction while lifting its limits. Psychophysiology, 56(12): e13451. doi:10.1111/psyp.13451.

    Abstract

    Baseline correction plays an important role in past and current methodological debates in ERP research (e.g., the Tanner vs. Maess debate in the Journal of Neuroscience Methods), serving as a potential alternative to strong high-pass filtering. However, the very assumptions that underlie traditional baseline correction also undermine it, implying a reduction in the signal-to-noise ratio. In other words, traditional baseline correction is statistically unnecessary and even undesirable. Including the baseline interval as a predictor in a GLM-based statistical approach allows the data to determine how much baseline correction is needed, including both full traditional and no baseline correction as special cases. This reduces the amount of variance in the residual error term and thus has the potential to increase statistical power.
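    The idea of entering the baseline interval as a GLM predictor, rather than subtracting it outright, can be illustrated with a minimal simulation. This is a sketch with invented noise parameters, not the paper's analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
condition = rng.integers(0, 2, n)          # 0 = control trial, 1 = target trial
drift = rng.normal(0, 2.0, n)              # slow noise shared by baseline and ERP window
baseline = drift + rng.normal(0, 1.0, n)   # pre-stimulus baseline mean (noisy copy of drift)
erp = 3.0 * condition + 0.6 * drift + rng.normal(0, 1.0, n)  # single-trial window mean

# Traditional baseline correction: subtract the baseline outright,
# i.e., fix its regression coefficient at 1.
corrected = erp - baseline

# GLM alternative: let the data estimate how much baseline correction is needed.
# Full subtraction (coefficient 1) and no correction (coefficient 0) are special cases.
X = np.column_stack([np.ones(n), condition, baseline])
beta, *_ = np.linalg.lstsq(X, erp, rcond=None)
print("estimated condition effect:", beta[1])  # true simulated value is 3.0
print("estimated baseline weight:", beta[2])   # lands between 0 and 1 here
```

    With these noise settings the fitted baseline weight is well below 1, and the GLM leaves less residual variance than full subtraction does, matching the argument that subtracting a noisy baseline injects its noise into the corrected signal.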
  • Alday, P. M. (2019). M/EEG analysis of naturalistic stories: a review from speech to language processing. Language, Cognition and Neuroscience, 34(4), 457-473. doi:10.1080/23273798.2018.1546882.

    Abstract

    M/EEG research using naturally spoken stories as stimuli has focused largely on speech and not language processing. The temporal resolution of M/EEG is a two-edged sword, allowing for the study of the fine acoustic structure of speech, yet easily overwhelmed by the temporal noise of variation in constituent length. Recent theories on the neural encoding of linguistic structure require the temporal resolution of M/EEG, yet suffer from confounds when studied on traditional, heavily controlled stimuli. Recent methodological advances allow for synthesising naturalistic designs and traditional, controlled designs into effective M/EEG research on naturalistic language. In this review, we highlight common threads throughout the at-times distinct research traditions of speech and language processing. We conclude by examining the tradeoffs and successes of three M/EEG studies on fully naturalistic language paradigms and the future directions they suggest.
  • Alday, P. M., & Kretzschmar, F. (2019). Speed-accuracy tradeoffs in brain and behavior: Testing the independence of P300 and N400 related processes in behavioral responses to sentence categorization. Frontiers in Human Neuroscience, 13: 285. doi:10.3389/fnhum.2019.00285.

    Abstract

    Although the N400 was originally discovered in a paradigm designed to elicit a P300 (Kutas and Hillyard, 1980), its relationship with the P300 and how both overlapping event-related potentials (ERPs) determine behavioral profiles is still elusive. Here we conducted an ERP (N = 20) and a multiple-response speed-accuracy tradeoff (SAT) experiment (N = 16) on distinct participant samples using an antonym paradigm (The opposite of black is white/nice/yellow with acceptability judgment). We hypothesized that SAT profiles incorporate processes of task-related decision-making (P300) and stimulus-related expectation violation (N400). We replicated previous ERP results (Roehm et al., 2007): in the correct condition (white), the expected target elicits a P300, while both expectation violations engender an N400 [reduced for related (yellow) vs. unrelated targets (nice)]. Using multivariate Bayesian mixed-effects models, we modeled the P300 and N400 responses simultaneously and found that correlation between residuals and subject-level random effects of each response window was minimal, suggesting that the components are largely independent. For the SAT data, we found that antonyms and unrelated targets had a similar slope (rate of increase in accuracy over time) and an asymptote at ceiling, while related targets showed both a lower slope and a lower asymptote, reaching only approximately 80% accuracy. Using a GLMM-based approach (Davidson and Martin, 2013), we modeled these dynamics using response time and condition as predictors. Replacing the predictor for condition with the averaged P300 and N400 amplitudes from the ERP experiment, we achieved identical model performance. We then examined the piecewise contribution of the P300 and N400 amplitudes with partial effects (see Hohenstein and Kliegl, 2015). Unsurprisingly, the P300 amplitude was the strongest contributor to the SAT-curve in the antonym condition and the N400 was the strongest contributor in the unrelated condition. In brief, this is the first demonstration of how overlapping ERP responses in one sample of participants predict behavioral SAT profiles of another sample. The P300 and N400 reflect two independent but interacting processes and the competition between these processes is reflected differently in behavioral parameters of speed and accuracy.
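    The link between response time, condition, and accuracy in the SAT analysis can be sketched with a plain logistic GLM on simulated data. This is a simplification of the GLMM approach in the abstract: it omits random effects, and all parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3000
t = rng.uniform(0.2, 2.0, n)     # response time in seconds
related = rng.integers(0, 2, n)  # 0 = unrelated target, 1 = related target
# Simulated SAT curves: accuracy rises with time; related targets get a
# shallower slope and a lower asymptote (hypothetical parameter values).
logit_p = -2.0 + (4.0 - 1.5 * related) * t - 1.0 * related
correct = rng.random(n) < 1 / (1 + np.exp(-logit_p))

# Logistic GLM of accuracy on time, condition, and their interaction,
# fit by iteratively reweighted least squares (IRLS).
X = np.column_stack([np.ones(n), t, related, t * related])
beta = np.zeros(4)
for _ in range(25):
    eta = X @ beta
    mu = 1 / (1 + np.exp(-eta))              # predicted accuracy
    W = np.maximum(mu * (1 - mu), 1e-9)      # IRLS weights
    z = eta + (correct - mu) / W             # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print("time slope:", beta[1])                # positive: accuracy grows with time
print("time x related interaction:", beta[3])  # negative: shallower slope when related
```

    The negative interaction term recovers the shallower accuracy growth for related targets that the abstract reports for its SAT data.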

  • Alhama, R. G., Rowland, C. F., & Kidd, E. (2020). Evaluating word embeddings for language acquisition. In E. Chersoni, C. Jacobs, Y. Oseki, L. Prévot, & E. Santus (Eds.), Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics (pp. 38-42). Stroudsburg, PA, USA: Association for Computational Linguistics (ACL). doi:10.18653/v1/2020.cmcl-1.4.

    Abstract

    Continuous vector word representations (or word embeddings) have shown success in capturing semantic relations between words, as evidenced by evaluation against behavioral data of adult performance on semantic tasks (Pereira et al., 2016). Adult semantic knowledge is the endpoint of a language acquisition process; thus, a relevant question is whether these models can also capture the emerging word representations of young language learners. However, data on children's semantic knowledge across development are scarce. In this paper, we propose to bridge this gap by using Age of Acquisition norms to evaluate word embeddings learnt from child-directed input. We present two methods that evaluate word embeddings in terms of (a) the semantic neighbourhood density of learnt words, and (b) convergence to adult word associations. We apply our methods to bag-of-words models, and find that (1) children acquire words with fewer semantic neighbours earlier, and (2) young learners only attend to very local context. These findings provide converging evidence for the validity of our methods in understanding the prerequisite features of a distributional model of word learning.
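    The first of the two evaluation methods, semantic neighbourhood density, can be sketched as follows. The word list and vectors are toy stand-ins for embeddings trained on child-directed input:

```python
import numpy as np

# Hypothetical toy embeddings: the three animal words form a tight cluster,
# while the two abstract words sit apart from everything else.
words = ["dog", "cat", "cow", "truth", "idea"]
vecs = np.array([
    [1.0, 0.9, 0.1],
    [0.9, 1.0, 0.1],
    [0.8, 0.8, 0.2],
    [0.1, 0.0, 1.0],
    [0.0, 0.2, 0.9],
])

def neighbourhood_density(vectors, k=2):
    """Mean cosine similarity of each word to its k nearest neighbours."""
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = unit @ unit.T                  # pairwise cosine similarities
    np.fill_diagonal(sims, -np.inf)       # a word is not its own neighbour
    top_k = np.sort(sims, axis=1)[:, -k:]  # k most similar neighbours per word
    return top_k.mean(axis=1)

dens = neighbourhood_density(vecs)
for w, d in zip(words, dens):
    print(f"{w}: {d:.3f}")
```

    On real data, densities like these would be correlated with Age of Acquisition norms to test whether words with fewer semantic neighbours are acquired earlier, as the paper reports.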
  • Alhama, R. G., & Zuidema, W. (2019). A review of computational models of basic rule learning: The neural-symbolic debate and beyond. Psychonomic Bulletin & Review, 26(4), 1174-1194. doi:10.3758/s13423-019-01602-z.

    Abstract

    We present a critical review of computational models of generalization of simple grammar-like rules, such as ABA and ABB. In particular, we focus on models attempting to account for the empirical results of Marcus et al. (Science, 283(5398), 77-80, 1999). In that study, evidence is reported of generalization behavior by 7-month-old infants, using an Artificial Language Learning paradigm. The authors fail to replicate this behavior in neural network simulations, and claim that this failure reveals inherent limitations of a whole class of neural networks: those that do not incorporate symbolic operations. A great number of computational models were proposed in follow-up studies, fuelling a heated debate about what is required for a model to generalize. Twenty years later, this debate is still not settled. In this paper, we review a large number of the proposed models. We present a critical analysis of those models, in terms of how they contribute to answering the most relevant questions raised by the experiment. After identifying which aspects require further research, we propose a list of desiderata for advancing our understanding of generalization.
  • Alhama, R. G., Siegelman, N., Frost, R., & Armstrong, B. C. (2019). The role of information in visual word recognition: A perceptually-constrained connectionist account. In A. Goel, C. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 83-89). Austin, TX: Cognitive Science Society.

    Abstract

    Proficient readers typically fixate near the center of a word, with a slight bias towards word onset. We explore a novel account of this phenomenon based on combining information-theory with visual perceptual constraints in a connectionist model of visual word recognition. This account posits that the amount of information-content available for word identification varies across fixation locations and across languages, thereby explaining the overall fixation location bias in different languages, making the novel prediction that certain words are more readily identified when fixating at an atypical fixation location, and predicting specific cross-linguistic differences. We tested these predictions across several simulations in English and Hebrew, and in a pilot behavioral experiment. Results confirmed that the bias to fixate closer to word onset aligns with maximizing information in the visual signal, that some words are more readily identified at atypical fixation locations, and that these effects vary to some degree across languages.
  • Alibali, M. W., Kita, S., & Young, A. J. (2000). Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes, 15(6), 593-613. doi:10.1080/016909600750040571.

    Abstract

    At what point in the process of speech production is gesture involved? According to the Lexical Retrieval Hypothesis, gesture is involved in generating the surface forms of utterances. Specifically, gesture facilitates access to items in the mental lexicon. According to the Information Packaging Hypothesis, gesture is involved in the conceptual planning of messages. Specifically, gesture helps speakers to "package" spatial information into verbalisable units. We tested these hypotheses in 5-year-old children, using two tasks that required comparable lexical access, but different information packaging. In the explanation task, children explained why two items did or did not have the same quantity (Piagetian conservation). In the description task, children described how two items looked different. Children provided comparable verbal responses across tasks; thus, lexical access was comparable. However, the demands for information packaging differed. Participants' gestures also differed across the tasks. In the explanation task, children produced more gestures that conveyed perceptual dimensions of the objects, and more gestures that conveyed information that differed from the accompanying speech. The results suggest that gesture is involved in the conceptual planning of speech.
  • Altvater-Mackensen, N. (2010). Do manners matter? Asymmetries in the acquisition of manner of articulation features. PhD Thesis, Radboud University of Nijmegen, Nijmegen.
  • Ambridge, B., Rowland, C. F., Theakston, A. L., & Twomey, K. E. (2020). Introduction. In C. F. Rowland, A. L. Theakston, B. Ambridge, & K. E. Twomey (Eds.), Current Perspectives on Child Language Acquisition: How children use their environment to learn (pp. 1-7). Amsterdam: John Benjamins. doi:10.1075/tilar.27.int.
  • Ambridge, B., & Rowland, C. F. (2013). Experimental methods in studying child language acquisition. Wiley Interdisciplinary Reviews: Cognitive Science, 4(2), 149-168. doi:10.1002/wcs.1215.

    Abstract

    This article reviews some of the most widely used methods for studying children's language acquisition, including (1) spontaneous/naturalistic, diary, and parental report data; (2) production methods (elicited production, repetition/elicited imitation, syntactic priming/weird word order); (3) comprehension methods (act-out, pointing, intermodal preferential looking, looking while listening, conditioned head turn preference procedure, functional neuroimaging); and (4) judgment methods (grammaticality/acceptability judgments, yes-no/truth-value judgments). The review outlines the types of studies and age groups to which each method is most suited, as well as the advantages and disadvantages of each. We conclude by summarising the particular methodological considerations that apply to each paradigm and to experimental design more generally. These include (1) choosing an age-appropriate task that makes communicative sense, (2) motivating children to co-operate, (3) choosing a between-/within-subjects design, (4) the use of novel items (e.g., novel verbs), (5) fillers, (6) blocked, counterbalanced, and random presentation, (7) the appropriate number of trials and participants, (8) drop-out rates, (9) the importance of control conditions, (10) choosing a sensitive dependent measure, (11) classification of responses, and (12) using an appropriate statistical test.
  • Ambridge, B., Rowland, C. F., & Gummery, A. (2020). Teaching the unlearnable: A training study of complex yes/no questions. Language and Cognition, 12(2), 385-410. doi:10.1017/langcog.2020.5.

    Abstract

    A central question in language acquisition is how children master sentence types that they have seldom, if ever, heard. Here we report the findings of a pre-registered, randomised, single-blind intervention study designed to test the prediction that, for one such sentence type, complex questions (e.g., Is the crocodile who’s hot eating?), children could combine schemas learned, on the basis of the input, for complex noun phrases (the [THING] who’s [PROPERTY]) and simple questions (Is [THING] [ACTION]ing?) to yield a complex-question schema (Is [the [THING] who’s [PROPERTY]] ACTIONing?). Children aged 4;2 to 6;8 (M = 5;6, SD = 7.7 months) were trained on simple questions (e.g., Is the bird cleaning?) and either (Experimental group, N = 61) complex noun phrases (e.g., the bird who’s sad) or (Control group, N = 61) matched simple noun phrases (e.g., the sad bird). In general, the two groups did not differ on their ability to produce novel complex questions at test. However, the Experimental group did show (a) some evidence of generalising a particular complex NP schema (the [THING] who’s [PROPERTY] as opposed to the [THING] that’s [PROPERTY]) from training to test, (b) a lower rate of auxiliary-doubling errors (e.g., *Is the crocodile who’s hot is eating?), and (c) a greater ability to produce complex questions on the first test trial. We end by suggesting some different methods – specifically artificial language learning and syntactic priming – that could potentially be used to better test the present account.
  • Ambridge, B., Pine, J. M., Rowland, C. F., Chang, F., & Bidgood, A. (2013). The retreat from overgeneralization in child language acquisition: Word learning, morphology, and verb argument structure. Wiley Interdisciplinary Reviews: Cognitive Science, 4(1), 47-62. doi:10.1002/wcs.1207.

    Abstract

    This review investigates empirical evidence for different theoretical proposals regarding the retreat from overgeneralization errors in three domains: word learning (e.g., *doggie to refer to all animals), morphology [e.g., *spyer, *cooker (one who spies/cooks), *unhate, *unsqueeze, *sitted; *drawed], and verb argument structure [e.g., *Don't giggle me (c.f. Don't make me giggle); *Don't say me that (c.f. Don't say that to me)]. The evidence reviewed provides support for three proposals. First, in support of the pre-emption hypothesis, the acquisition of competing forms that express the desired meaning (e.g., spy for *spyer, sat for *sitted, and Don't make me giggle for *Don't giggle me) appears to block errors. Second, in support of the entrenchment hypothesis, repeated occurrence of particular items in particular constructions (e.g., giggle in the intransitive construction) appears to contribute to an ever strengthening probabilistic inference that non-attested uses (e.g., *Don't giggle me) are ungrammatical for adult speakers. That is, both the rated acceptability and production probability of particular errors decline with increasing frequency of pre-empting and entrenching forms in the input. Third, learners appear to acquire semantic and morphophonological constraints on particular constructions, conceptualized as properties of slots in constructions [e.g., the (VERB) slot in the morphological un-(VERB) construction or the transitive-causative (SUBJECT) (VERB) (OBJECT) argument-structure construction]. Errors occur as children acquire the fine-grained semantic and morphophonological properties of particular items and construction slots, and so become increasingly reluctant to use items in slots with which they are incompatible. Findings also suggest some role for adult feedback and conventionality; the principle that, for many given meanings, there is a conventional form that is used by all members of the speech community.
  • Ameka, F. K. (1989). [Review of The case for lexicase: An outline of lexicase grammatical theory by Stanley Starosta]. Studies in Language, 13(2), 506-518.
  • Ameka, F. K. (2010). Information packaging constructions in Kwa: Micro-variation and typology. In E. O. Aboh, & J. Essegbey (Eds.), Topics in Kwa syntax (pp. 141-176). Dordrecht: Springer.

    Abstract

    Kwa languages such as Akye, Akan, Ewe, Ga, Likpe, Yoruba etc. are not prototypically “topic-prominent” like Chinese nor “focus-prominent” like Somali, yet they have dedicated structural positions in the clause, as well as morphological markers for signalling the information status of the component parts of information units. They could thus be seen as “discourse configurational languages” (Kiss 1995). In this chapter, I first argue for distinct positions in the left periphery of the clause in these languages for scene-setting topics, contrastive topics and focus. I then describe the morpho-syntactic properties of various information packaging constructions and the variations that we find across the languages in this domain.
  • Ameka, F. K. (1991). Ewe: Its grammatical constructions and illocutionary devices. PhD Thesis, Australian National University, Canberra.
  • Ameka, F. K., & Essegbey, J. (2013). Serialising languages: Satellite-framed, verb-framed or neither. Ghana Journal of Linguistics, 2(1), 19-38.

    Abstract

    The diversity in the coding of the core schema of motion, i.e., Path, has led to a traditional typology of languages into verb-framed and satellite-framed languages. In the former, Path is encoded in verbs; in the latter, it is encoded in non-verb elements that function as sisters to co-event-expressing verbs such as manner verbs. Verb-serializing languages pose a challenge to this typology because they express Path as well as the co-event of manner in finite verbs that together function as a single predicate in a translational motion clause. We argue that these languages do not fit in the typology and constitute a type of their own. We draw on data from Akan and Frog Story narrations in Ewe, a Kwa language, and Sranan, a Caribbean Creole with Gbe substrate, to show that in terms of discourse properties verb-serializing languages behave like verb-framed languages with respect to some properties and like satellite-framed languages in terms of others. This study fed into the revision of the typology, and such languages are now said to be equipollently-framed languages.
  • Ameka, F. K. (2013). Possessive constructions in Likpe (Sɛkpɛlé). In A. Aikhenvald, & R. Dixon (Eds.), Possession and ownership: A crosslinguistic typology (pp. 224-242). Oxford: Oxford University Press.
  • Amora, K. K., Garcia, R., & Gagarina, N. (2020). Tagalog adaptation of the Multilingual Assessment Instrument for Narratives: History, process and preliminary results. In N. Gagarina, & J. Lindgren (Eds.), New language versions of MAIN: Multilingual Assessment Instrument for Narratives – Revised (pp. 221-233).

    Abstract

    This paper briefly presents the current situation of bilingualism in the Philippines, specifically that of Tagalog-English bilingualism. More importantly, it describes the process of adapting the Multilingual Assessment Instrument for Narratives (LITMUS-MAIN) to Tagalog, the basis of Filipino, which is the country's national language. Finally, the results of a pilot study conducted on Tagalog-English bilingual children and adults (N=27) are presented. The results showed that Story Structure is similar across the two languages and that it develops significantly with age.
  • Andics, A. (2013). Who is talking? Behavioural and neural evidence for norm-based coding in voice identity learning. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Andics, A., Gál, V., Vicsi, K., Rudas, G., & Vidnyánszky, Z. (2013). FMRI repetition suppression for voices is modulated by stimulus expectations. NeuroImage, 69, 277-283. doi:10.1016/j.neuroimage.2012.12.033.

    Abstract

    According to predictive coding models of sensory processing, stimulus expectations have a profound effect on sensory cortical responses. This was supported by experimental results, showing that fMRI repetition suppression (fMRI RS) for face stimuli is strongly modulated by the probability of stimulus repetitions throughout the visual cortical processing hierarchy. To test whether processing of voices is also affected by stimulus expectations, here we investigated the effect of repetition probability on fMRI RS in voice-selective cortical areas. Changing (‘alt’) and identical (‘rep’) voice stimulus pairs were presented to the listeners in blocks, with a varying probability of alt and rep trials across blocks. We found auditory fMRI RS in the nonprimary voice-selective cortical regions, including the bilateral posterior STS, the right anterior STG and the right IFC, as well as in the IPL. Importantly, fMRI RS effects in all of these areas were strongly modulated by the probability of stimulus repetition: auditory fMRI RS was reduced or not present in blocks with low repetition probability. Our results revealed that auditory fMRI RS in higher-level voice-selective cortical regions is modulated by repetition probabilities and thus suggest that in audition, similarly to the visual modality, processing of sensory information is shaped by stimulus expectation processes.
  • Andics, A., McQueen, J. M., & Petersson, K. M. (2013). Mean-based neural coding of voices. NeuroImage, 79, 351-360. doi:10.1016/j.neuroimage.2013.05.002.

    Abstract

    The social significance of recognizing the person who talks to us is obvious, but the neural mechanisms that mediate talker identification are unclear. Regions along the bilateral superior temporal sulcus (STS) and the inferior frontal cortex (IFC) of the human brain are selective for voices, and they are sensitive to rapid voice changes. Although it has been proposed that voice recognition is supported by prototype-centered voice representations, the involvement of these category-selective cortical regions in the neural coding of such "mean voices" has not previously been demonstrated. Using fMRI in combination with a voice identity learning paradigm, we show that voice-selective regions are involved in the mean-based coding of voice identities. Voice typicality is encoded on a supra-individual level in the right STS along a stimulus-dependent, identity-independent (i.e., voice-acoustic) dimension, and on an intra-individual level in the right IFC along a stimulus-independent, identity-dependent (i.e., voice identity) dimension. Voice recognition therefore entails at least two anatomically separable stages, each characterized by neural mechanisms that reference the central tendencies of voice categories.
  • Andics, A., McQueen, J. M., Petersson, K. M., Gál, V., Rudas, G., & Vidnyánszky, Z. (2010). Neural mechanisms for voice recognition. NeuroImage, 52, 1528-1540. doi:10.1016/j.neuroimage.2010.05.048.

    Abstract

    We investigated neural mechanisms that support voice recognition in a training paradigm with fMRI. The same listeners were trained on different weeks to categorize the mid-regions of voice-morph continua as an individual's voice. Stimuli implicitly defined a voice-acoustics space, and training explicitly defined a voice-identity space. The predefined centre of the voice category was shifted from the acoustic centre each week in opposite directions, so the same stimuli had different training histories on different tests. Cortical sensitivity to voice similarity appeared over different time-scales and at different representational stages. First, there were short-term adaptation effects: Increasing acoustic similarity to the directly preceding stimulus led to haemodynamic response reduction in the middle/posterior STS and in right ventrolateral prefrontal regions. Second, there were longer-term effects: Response reduction was found in the orbital/insular cortex for stimuli that were most versus least similar to the acoustic mean of all preceding stimuli, and, in the anterior temporal pole, the deep posterior STS and the amygdala, for stimuli that were most versus least similar to the trained voice-identity category mean. These findings are interpreted as effects of neural sharpening of long-term stored typical acoustic and category-internal values. The analyses also reveal anatomically separable voice representations: one in a voice-acoustics space and one in a voice-identity space. Voice-identity representations flexibly followed the trained identity shift, and listeners with a greater identity effect were more accurate at recognizing familiar voices. Voice recognition is thus supported by neural voice spaces that are organized around flexible ‘mean voice’ representations.
  • Anichini, M., De Heer Kloots, M., & Ravignani, A. (2020). Interactive rhythms in the wild, in the brain, and in silico. Canadian Journal of Experimental Psychology, 74(3), 170-175. doi:10.1037/cep0000224.

    Abstract

    There are some historical divisions in methods, rationales, and purposes between studies on comparative cognition and behavioural ecology. In turn, interaction between these two branches and work in mathematics, computation, and neuroscience is unusual. In this short piece, we attempt to build bridges among these disciplines. We present a series of interconnected vignettes meant to illustrate what a more interdisciplinary approach looks like when successful, and its advantages. Concretely, we focus on a recent topic, namely animal rhythms in interaction, studied under different approaches. We showcase five research efforts, which we believe successfully link five particular scientific areas of rhythm research, conceptualized as: social neuroscience, detailed rhythmic quantification, ontogeny, computational approaches, and spontaneous interactions. Our suggestions will hopefully spur a ‘Comparative rhythms in interaction’ field, which can integrate and capitalize on knowledge from zoology, comparative psychology, neuroscience, and computation.
  • Arana, S., Marquand, A., Hulten, A., Hagoort, P., & Schoffelen, J.-M. (2020). Sensory modality-independent activation of the brain network for language. The Journal of Neuroscience, 40(14), 2914-2924. doi:10.1523/JNEUROSCI.2271-19.2020.

    Abstract

    The meaning of a sentence can be understood, whether presented in written or spoken form. Therefore it is highly probable that brain processes supporting language comprehension are at least partly independent of sensory modality. To identify where and when in the brain language processing is independent of sensory modality, we directly compared neuromagnetic brain signals of 200 human subjects (102 males) either reading or listening to sentences. We used multiset canonical correlation analysis to align individual subject data in a way that boosts those aspects of the signal that are common to all, allowing us to capture word-by-word signal variations, consistent across subjects and at a fine temporal scale. Quantifying this consistency in activation across both reading and listening tasks revealed a mostly left hemispheric cortical network. Areas showing consistent activity patterns include not only areas previously implicated in higher-level language processing, such as left prefrontal, superior & middle temporal areas and anterior temporal lobe, but also parts of the control-network as well as subcentral and more posterior temporal-parietal areas. Activity in this supramodal sentence processing network starts in temporal areas and rapidly spreads to the other regions involved. The findings not only indicate the involvement of a large network of brain areas in supramodal language processing, but also that the linguistic information contained in the unfolding sentences modulates brain activity in a word-specific manner across subjects.
  • Araújo, S., Fernandes, T., & Huettig, F. (2019). Learning to read facilitates retrieval of phonological representations in rapid automatized naming: Evidence from unschooled illiterate, ex-illiterate, and schooled literate adults. Developmental Science, 22(4): e12783. doi:10.1111/desc.12783.

    Abstract

    Rapid automatized naming (RAN) of visual items is a powerful predictor of reading skills. However, the direction and locus of the association between RAN and reading is still largely unclear. Here we investigated whether literacy acquisition directly bolsters RAN efficiency for objects, adopting a strong methodological design, by testing three groups of adults matched in age and socioeconomic variables, who differed only in literacy/schooling: unschooled illiterate and ex-illiterate, and schooled literate adults. To investigate in a fine-grained manner whether and how literacy facilitates lexical retrieval, we orthogonally manipulated the word-form frequency (high vs. low) and phonological neighborhood density (dense vs. sparse) of the objects’ names. We observed that literacy experience enhances the automaticity with which visual stimuli (e.g., objects) can be retrieved and named: relative to readers (ex-illiterate and literate), illiterate adults performed worse on RAN. Crucially, the group difference was exacerbated and significant only for those items that were of low frequency and from sparse neighborhoods. These results thus suggest that, regardless of schooling and age at which literacy was acquired, learning to read facilitates the access to and retrieval of phonological representations, especially of difficult lexical items.
  • Araújo, S., Pacheco, A., Faísca, L., Petersson, K. M., & Reis, A. (2010). Visual rapid naming and phonological abilities: Different subtypes in dyslexic children. International Journal of Psychology, 45, 443-452. doi:10.1080/00207594.2010.499949.

    Abstract

    One implication of the double-deficit hypothesis for dyslexia is that there should be subtypes of dyslexic readers that exhibit rapid naming deficits with or without concomitant phonological processing problems. In the current study, we investigated the validity of this hypothesis for Portuguese orthography, which is more consistent than English orthography, by exploring different cognitive profiles in a sample of dyslexic children. In particular, we were interested in identifying readers characterized by a pure rapid automatized naming deficit. We also examined whether rapid naming and phonological awareness independently account for individual differences in reading performance. We characterized the performance of dyslexic readers and a control group of normal readers matched for age on reading, visual rapid naming and phonological processing tasks. Our results suggest that there is a subgroup of dyslexic readers with intact phonological processing capacity (in terms of both accuracy and speed measures) but poor rapid naming skills. We also provide evidence for an independent association between rapid naming and reading competence in the dyslexic sample, when the effect of phonological skills was controlled. Altogether, the results are more consistent with the view that rapid naming problems in dyslexia represent a second core deficit rather than an exclusive phonological explanation for the rapid naming deficits. Furthermore, additional non-phonological processes, which subserve rapid naming performance, contribute independently to reading development.
  • Armeni, K., Willems, R. M., Van den Bosch, A., & Schoffelen, J.-M. (2019). Frequency-specific brain dynamics related to prediction during language comprehension. NeuroImage, 198, 283-295. doi:10.1016/j.neuroimage.2019.04.083.

    Abstract

    The brain's remarkable capacity to process spoken language virtually in real time requires fast and efficient information processing machinery. In this study, we investigated how frequency-specific brain dynamics relate to models of probabilistic language prediction during auditory narrative comprehension. We recorded MEG activity while participants were listening to auditory stories in Dutch. Using trigram statistical language models, we estimated for every word in a story its conditional probability of occurrence. On the basis of word probabilities, we computed how unexpected the current word is given its context (word perplexity) and how (un)predictable the current linguistic context is (word entropy). We then evaluated whether source-reconstructed MEG oscillations at different frequency bands are modulated as a function of these language processing metrics. We show that theta-band source dynamics are increased in high relative to low entropy states, likely reflecting lexical computations. Beta-band dynamics are increased in situations of low word entropy and perplexity possibly reflecting maintenance of ongoing cognitive context. These findings lend support to the idea that the brain engages in the active generation and evaluation of predicted language based on the statistical properties of the input signal.
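    The word-level metrics described above have standard information-theoretic definitions: surprisal scores how unexpected the observed word is given its context, and entropy scores how uncertain the distribution over upcoming words is. A minimal sketch of those two computations; the function names and the toy trigram distribution are illustrative, not the authors' analysis code:

```python
import math

def surprisal(p_word_given_context):
    """Negative log2 probability of the observed word in its context (bits)."""
    return -math.log2(p_word_given_context)

def entropy(next_word_distribution):
    """Shannon entropy (bits) of the distribution over possible next words."""
    return -sum(p * math.log2(p) for p in next_word_distribution.values() if p > 0)

# Toy conditional distribution P(w | two-word context) from a trigram model:
dist = {"dog": 0.5, "cat": 0.25, "fish": 0.25}
print(surprisal(dist["dog"]))  # 1.0 bit: a fairly expected word
print(entropy(dist))           # 1.5 bits of uncertainty about the next word
```

    High-entropy contexts are those where many continuations are plausible; high-surprisal words are those the model assigned low probability.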

  • Arnhold, A., Porretta, V., Chen, A., Verstegen, S. A., Mok, I., & Järvikivi, J. (2020). (Mis)understanding your native language: Regional accent impedes processing of information status. Psychonomic Bulletin & Review, 27, 801-808. doi:10.3758/s13423-020-01731-w.

    Abstract

    Native-speaker listeners constantly predict upcoming units of speech as part of language processing, using various cues. However, this process is impeded in second-language listeners, as well as when the speaker has an unfamiliar accent. Whereas previous research has largely concentrated on the pronunciation of individual segments in foreign-accented speech, we show that regional accent impedes higher levels of language processing, making native listeners’ processing resemble that of second-language listeners.

    In Experiment 1, 42 native speakers of Canadian English followed instructions spoken in British English to move objects on a screen while their eye movements were tracked. Native listeners use prosodic cues to information status to disambiguate between two possible referents, a new and a previously mentioned one, before they have heard the complete word. By contrast, the Canadian participants, similarly to second-language speakers, were not able to make full use of prosodic cues in the way native British listeners do.

    In Experiment 2, 19 native speakers of Canadian English rated the British English instructions used in Experiment 1, as well as the same instructions spoken by a Canadian imitating the British English prosody. While information status had no effect for the Canadian imitations, the original stimuli received higher ratings when prosodic realization and information status of the referent matched than for mismatches, suggesting a native-like competence in these offline ratings.

    These findings underline the importance of expanding psycholinguistic models of second language/dialect processing and representation to include both prosody and regional variation.
  • Arnhold, A., Vainio, M., Suni, A., & Järvikivi, J. (2010). Intonation of Finnish verbs. Speech Prosody 2010, 100054, 1-4. Retrieved from http://speechprosody2010.illinois.edu/papers/100054.pdf.

    Abstract

    A production experiment investigated the tonal shape of Finnish finite verbs in transitive sentences without narrow focus. Traditional descriptions of Finnish stating that non-focused finite verbs do not receive accents were only partly supported. Verbs were found to have a consistently smaller pitch range than words in other word classes, but their pitch contours were neither flat nor explainable by pure interpolation.
  • Arshamian, A., Manko, P., & Majid, A. (2020). Limitations in odour simulation may originate from differential sensory embodiment. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20190273. doi:10.1098/rstb.2019.0273.

    Abstract

    Across diverse lineages, animals communicate using chemosignals, but only humans communicate about chemical signals. Many studies have observed that compared with other sensory modalities, communication about smells is relatively rare and not always reliable. Recent cross-cultural studies, on the other hand, suggest some communities are more olfactorily oriented than previously supposed. Nevertheless, across the globe a general trend emerges where olfactory communication is relatively hard. We suggest here that this is in part because olfactory representations are different in kind: they have a low degree of embodiment, and are not easily expressed as primitives, thereby limiting the mental manipulations that can be performed with them. New exploratory data from Dutch children (9–12 year-olds) and adults support that mental imagery from olfaction is weak in comparison with vision and audition, and critically this is not affected by language development. Specifically, while visual and auditory imagery becomes more vivid with age, olfactory imagery shows no such development. This is consistent with the idea that olfactory representations are different in kind from representations from the other senses.

  • Asano, Y., Yuan, C., Grohe, A.-K., Weber, A., Antoniou, M., & Cutler, A. (2020). Uptalk interpretation as a function of listening experience. In N. Minematsu, M. Kondo, T. Arai, & R. Hayashi (Eds.), Proceedings of Speech Prosody 2020 (pp. 735-739). Tokyo: ISCA. doi:10.21437/SpeechProsody.2020-150.

    Abstract

    The term “uptalk” describes utterance-final pitch rises that carry no sentence-structural information. Uptalk is usually dialectal or sociolectal, and Australian English (AusEng) is particularly known for this attribute. We ask here whether experience with an uptalk variety affects listeners’ ability to categorise rising pitch contours on the basis of the timing and height of their onset and offset. Listeners were two groups of English-speakers (AusEng, and American English), and three groups of listeners with L2 English: one group with Mandarin as L1 and experience of listening to AusEng, one with German as L1 and experience of listening to AusEng, and one with German as L1 but no AusEng experience. They heard nouns (e.g. flower, piano) in the framework “Got a NOUN”, each ending with a pitch rise artificially manipulated on three contrasts: low vs. high rise onset, low vs. high rise offset and early vs. late rise onset. Their task was to categorise the tokens as “question” or “statement”, and we analysed the effect of the pitch contrasts on their judgements. Only the native AusEng listeners were able to use the pitch contrasts systematically in making these categorisations.
  • Asaridou, S. S., & McQueen, J. M. (2013). Speech and music shape the listening brain: Evidence for shared domain-general mechanisms. Frontiers in Psychology, 4: 321. doi:10.3389/fpsyg.2013.00321.

    Abstract

    Are there bi-directional influences between speech perception and music perception? An answer to this question is essential for understanding the extent to which the speech and music that we hear are processed by domain-general auditory processes and/or by distinct neural auditory mechanisms. This review summarizes a large body of behavioral and neuroscientific findings which suggest that the musical experience of trained musicians does modulate speech processing, and a sparser set of data, largely on pitch processing, which suggest in addition that linguistic experience, in particular learning a tone language, modulates music processing. Although research has focused mostly on music on speech effects, we argue that both directions of influence need to be studied, and conclude that the picture which thus emerges is one of mutual interaction across domains. In particular, it is not simply that experience with spoken language has some effects on music perception, and vice versa, but that because of shared domain-general subcortical and cortical networks, experiences in both domains influence behavior in both domains.
  • Auer, E., Wittenburg, P., Sloetjes, H., Schreer, O., Masneri, S., Schneider, D., & Tschöpel, S. (2010). Automatic annotation of media field recordings. In C. Sporleder, & K. Zervanou (Eds.), Proceedings of the ECAI 2010 Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH 2010) (pp. 31-34). Lisbon: University de Lisbon. Retrieved from http://ilk.uvt.nl/LaTeCH2010/.

    Abstract

    In this paper we describe a new attempt to develop automatic detectors for real-scene audio-video streams that researchers worldwide can use to speed up their annotation and analysis work. Such recordings are typically made in field and experimental situations, often of poor quality and with only small corpora, which prevents the use of standard stochastic pattern recognition techniques. Audio/video processing components are taken out of the expert lab and integrated into easy-to-use interactive frameworks, so that researchers can easily run them with modified parameters and check the usefulness of the created annotations. Finally, a variety of detectors may be used, yielding a lattice of annotations. A flexible search engine allows finding combinations of patterns, opening up completely new analysis and theorization possibilities for researchers, who until now were required to do all annotations manually and had no help in pre-segmenting lengthy media recordings.
  • Auer, E., Russel, A., Sloetjes, H., Wittenburg, P., Schreer, O., Masnieri, S., Schneider, D., & Tschöpel, S. (2010). ELAN as flexible annotation framework for sound and image processing detectors. In N. Calzolari, B. Maegaard, J. Mariani, J. Odjik, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 890-893). European Language Resources Association (ELRA).

    Abstract

    Annotation of digital recordings in humanities research is still, to a large extent, a process that is performed manually. This paper describes the first pattern recognition based software components developed in the AVATecH project and their integration in the annotation tool ELAN. AVATecH (Advancing Video/Audio Technology in Humanities Research) is a project that involves two Max Planck Institutes (Max Planck Institute for Psycholinguistics, Nijmegen; Max Planck Institute for Social Anthropology, Halle) and two Fraunhofer Institutes (Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS, Sankt Augustin; Fraunhofer Heinrich-Hertz-Institute, Berlin), and that aims to develop and implement audio and video technology for semi-automatic annotation of heterogeneous media collections as they occur in multimedia-based research. The highly diverse nature of the digital recordings stored in the archives of both Max Planck Institutes poses a huge challenge to most of the existing pattern recognition solutions and is a motivation to make such technology available to researchers in the humanities.
  • Ayub, Q., Yngvadottir, B., Chen, Y., Xue, Y., Hu, M., Vernes, S. C., Fisher, S. E., & Tyler-Smith, C. (2013). FOXP2 targets show evidence of positive selection in European populations. American Journal of Human Genetics, 92, 696-706. doi:10.1016/j.ajhg.2013.03.019.

    Abstract

    Forkhead box P2 (FOXP2) is a highly conserved transcription factor that has been implicated in human speech and language disorders and plays important roles in the plasticity of the developing brain. The pattern of nucleotide polymorphisms in FOXP2 in modern populations suggests that it has been the target of positive (Darwinian) selection during recent human evolution. In our study, we searched for evidence of selection that might have followed FOXP2 adaptations in modern humans. We examined whether or not putative FOXP2 targets identified by chromatin-immunoprecipitation genomic screening show evidence of positive selection. We developed an algorithm that, for any given gene list, systematically generates matched lists of control genes from the Ensembl database, collates summary statistics for three frequency-spectrum-based neutrality tests from the low-coverage resequencing data of the 1000 Genomes Project, and determines whether these statistics are significantly different between the given gene targets and the set of controls. Overall, there was strong evidence of selection of FOXP2 targets in Europeans, but not in the Han Chinese, Japanese, or Yoruba populations. Significant outliers included several genes linked to cellular movement, reproduction, development, and immune cell trafficking, and 13 of these constituted a significant network associated with cardiac arteriopathy. Strong signals of selection were observed for CNTNAP2 and RBFOX1, key neurally expressed genes that have been consistently identified as direct FOXP2 targets in multiple studies and that have themselves been associated with neurodevelopmental disorders involving language dysfunction.
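    The matched-control procedure described above (draw control gene sets, collate a neutrality statistic, and ask whether the target set is extreme relative to the controls) can be sketched as an empirical resampling test. This is a simplified illustration only: it draws controls uniformly at random rather than matching them on gene properties via Ensembl as the paper does, and all names and the direction of "extreme" are hypothetical:

```python
import random

def empirical_p(target_stats, pool_stats, n_controls=1000, rng=None):
    """One-sided empirical p-value: the fraction of random control sets
    (same size as the target set) whose mean statistic is at least as
    low as the target set's mean. Lower values of the statistic are
    treated as more extreme here, purely for illustration."""
    rng = rng or random.Random(0)
    k = len(target_stats)
    target_mean = sum(target_stats) / k
    hits = 0
    for _ in range(n_controls):
        control = rng.sample(pool_stats, k)      # unmatched random controls
        if sum(control) / k <= target_mean:
            hits += 1
    return hits / n_controls
```

    A real implementation would additionally match each control gene to a target gene on confounding covariates (e.g., gene length) before sampling.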
  • Azar, Z. (2020). Effect of language contact on speech and gesture: The case of Turkish-Dutch bilinguals in the Netherlands. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Azar, Z., Backus, A., & Ozyurek, A. (2019). General and language specific factors influence reference tracking in speech and gesture in discourse. Discourse Processes, 56(7), 553-574. doi:10.1080/0163853X.2018.1519368.

    Abstract

    Referent accessibility influences expressions in speech and gestures in similar ways. Speakers mostly use richer forms as noun phrases (NPs) in speech and gesture more when referents have low accessibility, whereas they use reduced forms such as pronouns more often and gesture less when referents have high accessibility. We investigated the relationships between speech and gesture during reference tracking in a pro-drop language—Turkish. Overt pronouns were not strongly associated with accessibility but with pragmatic context (i.e., marking similarity, contrast). Nevertheless, speakers gestured more when referents were re-introduced versus maintained and when referents were expressed with NPs versus pronouns. Pragmatic context did not influence gestures. Further, pronouns in low-accessibility contexts were accompanied with gestures—possibly for reference disambiguation—more often than previously found for non-pro-drop languages in such contexts. These findings enhance our understanding of the relationships between speech and gesture at the discourse level.
  • Azar, Z., Backus, A., & Ozyurek, A. (2020). Language contact does not drive gesture transfer: Heritage speakers maintain language specific gesture patterns in each language. Bilingualism: Language and Cognition, 23(2), 414-428. doi:10.1017/S136672891900018X.

    Abstract

    This paper investigates whether there are changes in gesture rate when speakers of two languages with different gesture rates (Turkish-high gesture; Dutch-low gesture) come into daily contact. We analyzed gestures produced by second-generation heritage speakers of Turkish in the Netherlands in each language, comparing them to monolingual baselines. We did not find differences between bilingual and monolingual speakers, possibly because bilinguals were proficient in both languages and used them frequently – in line with a usage-based approach to language. However, bilinguals produced more deictic gestures than monolinguals in both Turkish and Dutch, which we interpret as a bilingual strategy. Deictic gestures may help organize discourse by placing entities in gesture space and help reduce the cognitive load associated with being bilingual, e.g., inhibition cost. Therefore, gesture rate does not necessarily change in contact situations but might be modulated by frequency of language use, proficiency, and cognitive factors related to being bilingual.
  • Azar, Z., Ozyurek, A., & Backus, A. (2020). Turkish-Dutch bilinguals maintain language-specific reference tracking strategies in elicited narratives. International Journal of Bilingualism, 24(2), 376-409. doi:10.1177/1367006919838375.

    Abstract

    Aim: This paper examines whether second-generation Turkish heritage speakers in the Netherlands follow language-specific patterns of reference tracking in Turkish and Dutch, focusing on discourse status and pragmatic contexts as factors that may modulate the choice of referring expressions (REs), that is, the noun phrase (NP), overt pronoun and null pronoun.

    Methodology: Two short silent videos were used to elicit narratives from 20 heritage speakers of Turkish, both in Turkish and in Dutch. Monolingual baseline data were collected from 20 monolingually raised speakers of Turkish in Turkey and 20 monolingually raised speakers of Dutch in the Netherlands. We also collected language background data from bilinguals with an extensive survey.

    Data and analysis: Using generalised logistic mixed-effect regression, we analysed the influence of discourse status and pragmatic context on the choice of subject REs in Turkish and Dutch, comparing bilingual data to the monolingual baseline in each language.

    Findings: Heritage speakers used overt versus null pronouns in Turkish and stressed versus reduced pronouns in Dutch in pragmatically appropriate contexts. There was, however, a slight increase in the proportions of overt pronouns as opposed to NPs in Turkish and as opposed to null pronouns in Dutch. We suggest an explanation based on the degree of entrenchment of differential RE types in relation to discourse status as the possible source of the increase.

    Originality: This paper provides data from an understudied language pair in the domain of reference tracking in language contact situations. Unlike several studies of pronouns in language contact, we do not find differences across monolingual and bilingual speakers with regard to pragmatic constraints on overt pronouns in the minority pro-drop language.

    Significance: Our findings highlight the importance of taking language proficiency and use into account while studying bilingualism and combining formal approaches to language use with usage-based approaches for a more complete understanding of bilingual language production.
  • Baayen, H., & Lieber, R. (1991). Productivity and English derivation: A corpus-based study. Linguistics, 29(5), 801-843. doi:10.1515/ling.1991.29.5.801.

    Abstract

    The notion of productivity is one which is central to the study of morphology. It is a notion about which linguists frequently have intuitions. But it is a notion which still remains somewhat problematic in the literature on generative morphology some 15 years after Aronoff raised the issue in his (1976) monograph. In this paper we will review some of the definitions and measures of productivity discussed in the generative and pregenerative literature. We will adopt the definition of productivity suggested by Schultink (1961) and propose a number of statistical measures of productivity whose results, when applied to a fixed corpus, accord nicely with our intuitive estimates of productivity, and which shed light on the quantitative weight of linguistic restrictions on word formation rules. Part of our purpose here is also a very simple one: to make available a substantial set of empirical data concerning the productivity of some of the major derivational affixes of English.
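    One widely cited measure from this line of corpus-based work is the hapax-based productivity index P = n₁/N: the number of hapax legomena (words with the affix occurring exactly once in the corpus) divided by the total token count of words with that affix. A minimal sketch of that computation; the toy token list is illustrative, not the paper's corpus data:

```python
from collections import Counter

def productivity(affix_tokens):
    """P = n1 / N for a list of corpus tokens sharing a given affix."""
    counts = Counter(affix_tokens)
    n1 = sum(1 for c in counts.values() if c == 1)  # hapax legomena
    return n1 / len(affix_tokens)

# Toy corpus sample of -ness tokens: 3 hapaxes out of 5 tokens.
tokens = ["happiness", "happiness", "sadness", "greenness", "aloofness"]
print(productivity(tokens))  # 0.6
```

    Intuitively, a productive affix keeps generating new (hence rarely repeated) words, so a high proportion of hapaxes signals high productivity.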

  • Badimala, P., Mishra, C., Venkataramana, R. K. M., Bukhari, S. S., & Dengel, A. (2019). A Study of Various Text Augmentation Techniques for Relation Classification in Free Text. In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods (pp. 360-367). Setúbal, Portugal: SciTePress Digital Library. doi:10.5220/0007311003600367.

    Abstract

    Data augmentation techniques have been widely used in visual recognition tasks, as it is easy to generate new data by simple and straightforward image transformations. However, when it comes to text data augmentations, it is difficult to find appropriate transformation techniques which also preserve the contextual and grammatical structure of language texts. In this paper, we explore various text data augmentation techniques in text space and word embedding space. We study the effect of various augmented datasets on the efficiency of different deep learning models for relation classification in text.
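    As a concrete illustration of one text-space augmentation technique of the kind surveyed above, here is a minimal synonym-replacement sketch. The synonym table, function name, and example sentence are illustrative assumptions, not the paper's implementation, which also covers embedding-space techniques:

```python
import random

SYNONYMS = {  # illustrative lookup table, not a real thesaurus
    "quick": ["fast", "rapid"],
    "happy": ["glad", "joyful"],
}

def synonym_replace(sentence, n=1, rng=None):
    """Return a copy of the sentence with up to n words swapped for synonyms,
    leaving word order (and hence most grammatical structure) intact."""
    rng = rng or random.Random(0)
    words = sentence.split()
    candidates = [i for i, w in enumerate(words) if w in SYNONYMS]
    for i in rng.sample(candidates, min(n, len(candidates))):
        words[i] = rng.choice(SYNONYMS[words[i]])
    return " ".join(words)

print(synonym_replace("the quick fox was happy", n=2))
```

    Because only in-place word substitutions are made, this transformation tends to preserve the sentence's grammatical frame, which is exactly the constraint that makes text augmentation harder than image augmentation.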
  • Baggio, G., Choma, T., Van Lambalgen, M., & Hagoort, P. (2010). Coercion and compositionality. Journal of Cognitive Neuroscience, 22, 2131-2140. doi:10.1162/jocn.2009.21303.

    Abstract

    Research in psycholinguistics and in the cognitive neuroscience of language has suggested that semantic and syntactic integration are associated with different neurophysiologic correlates, such as the N400 and the P600 in the ERPs. However, only a handful of studies have investigated the neural basis of the syntax–semantics interface, and even fewer experiments have dealt with the cases in which semantic composition can proceed independently of the syntax. Here we looked into one such case—complement coercion—using ERPs. We compared sentences such as, “The journalist wrote the article” with “The journalist began the article.” The second sentence seems to involve a silent semantic element, which is expressed in the first sentence by the head of the VP “wrote the article.” The second type of construction may therefore require the reader to infer or recover from memory a richer event sense of the VP “began the article,” such as began writing the article, and to integrate that into a semantic representation of the sentence. This operation is referred to as “complement coercion.” Consistently with earlier reading time, eye tracking, and MEG studies, we found traces of such additional computations in the ERPs: Coercion gives rise to a long-lasting negative shift, which differs at least in duration from a standard N400 effect. Issues regarding the nature of the computation involved are discussed in the light of a neurocognitive model of language processing and a formal semantic analysis of coercion.
  • Balakrishnan, B., Verheijen, J., Lupo, A., Raymond, K., Turgeon, C., Yang, Y., Carter, K. L., Whitehead, K. J., Kozicz, T., Morava, E., & Lai, K. (2019). A novel phosphoglucomutase-deficient mouse model reveals aberrant glycosylation and early embryonic lethality. Journal of Inherited Metabolic Disease, 42(5), 998-1007. doi:10.1002/jimd.12110.

    Abstract

    Patients with phosphoglucomutase 1 (PGM1) deficiency, a congenital disorder of glycosylation (CDG), suffer from multiple disease phenotypes. Midline cleft defects are present at birth. Over time, additional clinical phenotypes emerge, including severe hypoglycemia, hepatopathy, growth retardation, hormonal deficiencies, hemostatic anomalies, and frequently lethal, early-onset dilated cardiomyopathy and myopathy, reflecting the central roles of the enzyme in (glycogen) metabolism and glycosylation. To delineate the pathophysiology of the tissue-specific disease phenotypes, we constructed a constitutive Pgm2 (mouse ortholog of human PGM1) knockout (KO) mouse model using CRISPR-Cas9 technology. After multiple crosses between heterozygous parents, we were unable to identify homozygous live births among 78 newborn pups (P = 1.59897E-06), suggesting an embryonic lethality phenotype in the homozygotes. Ultrasound studies over the course of pregnancy confirmed that Pgm2-deficient pups succumb before E9.5. Oral galactose supplementation (9 mg/mL drinking water) did not rescue the lethality. Biochemical studies of tissues and skin fibroblasts harvested from heterozygous animals confirmed reduced Pgm2 enzyme activity and abundance, but no change in glycogen content. However, glycomics analyses in serum revealed an abnormal glycosylation pattern in the Pgm2(+/-) animals, similar to that seen in PGM1-CDG.
  • Banissy, M., Sauter, D., Ward, J., Warren, J. E., Walsh, V., & Scott, S. K. (2010). Suppressing sensorimotor activity modulates the discrimination of auditory emotions but not speaker identity. Journal of Neuroscience, 30(41), 13552-13557. doi:10.1523/JNEUROSCI.0786-10.2010.

    Abstract

    Our ability to recognise the emotions of others is a crucial feature of human social cognition. Functional neuroimaging studies indicate that activity in sensorimotor cortices is evoked during the perception of emotion. In the visual domain, right somatosensory cortex activity has been shown to be critical for facial emotion recognition. However, the importance of sensorimotor representations in modalities outside of vision remains unknown. Here we use continuous theta-burst transcranial magnetic stimulation (cTBS) to investigate whether neural activity in the right postcentral gyrus (rPoG) and right lateral premotor cortex (rPM) is involved in non-verbal auditory emotion recognition. Three groups of participants completed same-different tasks on auditory stimuli, discriminating between either the emotion expressed or the speakers' identities, prior to and following cTBS targeted at rPoG, rPM or the vertex (control site). A task-selective deficit in auditory emotion discrimination was observed. Stimulation to rPoG and rPM resulted in a disruption of participants' abilities to discriminate emotion, but not identity, from vocal signals. These findings suggest that sensorimotor activity may be a modality independent mechanism which aids emotion discrimination.

    Additional information

    S1_Banissy.pdf
  • Baranova, J. (2020). Reasons for every-day activities. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bardhan, N. P. (2010). Adults’ self-directed learning of an artificial lexicon: The dynamics of neighborhood reorganization. PhD Thesis, University of Rochester, Rochester, New York.

    Abstract

    Artificial lexicons have previously been used to examine the time course of the learning and recognition of spoken words, the role of segment type in word learning, and the integration of context during spoken word recognition. However, in all of these studies the experimenter determined the frequency and order of the words to be learned. In three experiments, we asked whether adult learners choose to listen to novel words in a particular order based on their acoustic similarity. We use a new paradigm for learning an artificial lexicon in which the learner, rather than the experimenter, determines the order and frequency of exposure to items. We analyze both the proportions of selections and the temporal clustering of subjects' sampling of lexical neighborhoods during training as well as their performance during repeated testing phases (accuracy and reaction time) to determine the time course of learning these neighborhoods. In the first experiment, subjects sampled the high and low density neighborhoods randomly in early learning, and then over-sampled the high density neighborhood until test performance on both neighborhoods reached asymptote. A second experiment involved items similar to the first, but also neighborhoods that were not fully revealed at the start of the experiment. Subjects adjusted their training patterns to focus their selections on neighborhoods of increasing density as each was revealed; evidence of learning in the test phase was slower to emerge than in the first experiment, impaired by the presence of additional sets of items of varying density. Crucially, in both the first and second experiments there was no effect of dense vs. sparse neighborhood on accuracy, which is accounted for by subjects’ over-sampling of items from the dense neighborhood. The third experiment was identical in design to the second except for a second day of further training and testing on the same items. Testing at the beginning of the second day showed impaired, not improved, accuracy, except for the consistently dense items. Further training, however, improved accuracy for some items to above Day 1 levels. Overall, these results provide a new window on the time course of learning an artificial lexicon and the role that learners’ implicit preferences, stemming from their self-selected experience with the entire lexicon, play in learning highly confusable words.
  • Bardhan, N. P., Aslin, R., & Tanenhaus, M. (2010). Adults' self-directed learning of an artificial lexicon: The dynamics of neighborhood reorganization. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Meeting of the Cognitive Science Society (pp. 364-368). Austin, TX: Cognitive Science Society.
  • Barendse, M. T., Oort, F. J., Jak, S., & Timmerman, M. E. (2013). Multilevel exploratory factor analysis of discrete data. Netherlands Journal of Psychology, 67(4), 114-121.
  • Barendse, M. T., & Rosseel, Y. (2020). Multilevel modeling in the ‘wide format’ approach with discrete data: A solution for small cluster sizes. Structural Equation Modeling: A Multidisciplinary Journal, 27(5), 696-721. doi:10.1080/10705511.2019.1689366.

    Abstract

    In multilevel data, units at level 1 are nested in clusters at level 2, which in turn may be nested in even larger clusters at level 3, and so on. For continuous data, several authors have shown how to model multilevel data in a ‘wide’ or ‘multivariate’ format approach. We provide a general framework to analyze random intercept multilevel SEM in the ‘wide format’ (WF) and extend this approach to discrete data. In a simulation study, we vary response scale (binary, four response options), covariate presence (no, between-level, within-level), design (balanced, unbalanced), model misspecification (present, not present), and the number of clusters (small, large) to determine the accuracy and efficiency of the estimated model parameters. With a small number of observations in a cluster, results indicate that the WF approach is preferable for estimating multilevel models with discrete response options.
  • Barendse, M. T., Oort, F. J., & Garst, G. J. A. (2010). Using restricted factor analysis with latent moderated structures to detect uniform and nonuniform measurement bias: A simulation study. AStA Advances in Statistical Analysis, 94, 117-127. doi:10.1007/s10182-010-0126-1.

    Abstract

    Factor analysis is an established technique for the detection of measurement bias. Multigroup factor analysis (MGFA) can detect both uniform and nonuniform bias. Restricted factor analysis (RFA) can also be used to detect measurement bias, albeit only uniform measurement bias. Latent moderated structural equations (LMS) enable the estimation of nonlinear interaction effects in structural equation modelling. By extending the RFA method with LMS, the RFA method should be suited to detect nonuniform bias as well as uniform bias. In a simulation study, the RFA/LMS method and the MGFA method are compared in detecting uniform and nonuniform measurement bias under various conditions, varying the size of uniform bias, the size of nonuniform bias, the sample size, and the ability distribution. For each condition, 100 sets of data were generated and analysed through both detection methods. The RFA/LMS and MGFA methods turned out to perform equally well. Percentages of correctly identified items as biased (true positives) generally varied between 92% and 100%, except in small sample size conditions in which the bias was nonuniform and small. For both methods, the percentages of false positives were generally higher than the nominal levels of significance.
  • Baron-Cohen, S., Johnson, D., Asher, J. E., Wheelwright, S., Fisher, S. E., Gregersen, P. K., & Allison, C. (2013). Is synaesthesia more common in autism? Molecular Autism, 4(1): 40. doi:10.1186/2040-2392-4-40.

    Abstract

    BACKGROUND:
    Synaesthesia is a neurodevelopmental condition in which a sensation in one modality triggers a perception in a second modality. Autism (shorthand for Autism Spectrum Conditions) is a neurodevelopmental condition involving social-communication disability alongside resistance to change and unusually narrow interests or activities. Whilst on the surface they appear distinct, they have been suggested to share common atypical neural connectivity.

    METHODS:
    In the present study, we carried out the first prevalence study of synaesthesia in autism to formally test whether these conditions are independent. After exclusions, 164 adults with autism and 97 controls completed a synaesthesia questionnaire, autism spectrum quotient, and test of genuineness-revised (ToG-R) online.

    RESULTS:
    The rate of synaesthesia in adults with autism was 18.9% (31 out of 164), almost three times greater than in controls (7.22%, 7 out of 97, P < 0.05). ToG-R proved unsuitable for synaesthetes with autism.

    CONCLUSIONS:
    The significant increase in synaesthesia prevalence in autism suggests that the two conditions may share some common underlying mechanisms. Future research is needed to develop more feasible validation methods of synaesthesia in autism.

  • Barr, D. J., & Seyfeddinipur, M. (2010). The role of fillers in listener attributions for speaker disfluency. Language and Cognitive Processes, 25, 441-455. doi:10.1080/01690960903047122.

    Abstract

    When listeners hear a speaker become disfluent, they expect the speaker to refer to something new. What is the mechanism underlying this expectation? In a mouse-tracking experiment, listeners sought to identify images that a speaker was describing. Listeners more strongly expected new referents when they heard a speaker say um than when they heard a matched utterance where the um was replaced by noise. This expectation was speaker-specific: it depended on what was new and old for the current speaker, not just on what was new or old for the listener. This finding suggests that listeners treat fillers as collateral signals.
  • Barrett, R. L. C., Dawson, M., Dyrby, T. B., Krug, K., Ptito, M., D'Arceuil, H., Croxson, P. L., Johnson, P. J., Howells, H., Forkel, S. J., Dell'Acqua, F., & Catani, M. (2020). Differences in Frontal Network Anatomy Across Primate Species. The Journal of Neuroscience, 40(10), 2094-2107. doi:10.1523/JNEUROSCI.1650-18.2019.

    Abstract

    The frontal lobe is central to distinctive aspects of human cognition and behavior. Some comparative studies link this to a larger frontal cortex and even larger frontal white matter in humans compared with other primates, yet others dispute these findings. The discrepancies between studies could be explained by limitations of the methods used to quantify volume differences across species, especially when applied to white matter connections. In this study, we used a novel tractography approach to demonstrate that frontal lobe networks, extending within and beyond the frontal lobes, occupy 66% of total brain white matter in humans and 48% in three monkey species: vervets (Chlorocebus aethiops), rhesus macaque (Macaca mulatta) and cynomolgus macaque (Macaca fascicularis), all male. The simian–human differences in proportional frontal tract volume were significant for projection, commissural, and both intralobar and interlobar association tracts. Among the long association tracts, the greatest difference was found for tracts involved in motor planning, auditory memory, top-down control of sensory information, and visuospatial attention, with no significant differences in frontal limbic tracts important for emotional processing and social behaviour. In addition, we found that a nonfrontal tract, the anterior commissure, had a smaller volume fraction in humans, suggesting that the disproportionally large volume of human frontal lobe connections is accompanied by a reduction in the proportion of some nonfrontal connections. These findings support a hypothesis of an overall rearrangement of brain connections during human evolution.
  • Barthel, M. (2020). Speech planning in dialogue: Psycholinguistic studies of the timing of turn taking. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Barthel, M., & Levinson, S. C. (2020). Next speakers plan word forms in overlap with the incoming turn: Evidence from gaze-contingent switch task performance. Language, Cognition and Neuroscience, 35(9), 1183-1202. doi:10.1080/23273798.2020.1716030.

    Abstract

    To ensure short gaps between turns in conversation, next speakers regularly start planning their utterance in overlap with the incoming turn. Three experiments investigate which stages of utterance planning are executed in overlap. E1 establishes effects of associative and phonological relatedness of pictures and words in a switch-task from picture naming to lexical decision. E2 focuses on effects of phonological relatedness and investigates potential shifts in the time-course of production planning during background speech. E3 required participants to verbally answer questions as a base task. In critical trials, however, participants switched to visual lexical decision just after they began planning their answer. The task-switch was time-locked to participants' gaze for response planning. Results show that word form encoding is done as early as possible and not postponed until the end of the incoming turn. Hence, planning a response during the incoming turn is executed at least until word form activation.

    Additional information

    Supplemental material
  • Barthel, M., & Sauppe, S. (2019). Speech planning at turn transitions in dialogue is associated with increased processing load. Cognitive Science, 43(7): e12768. doi:10.1111/cogs.12768.

    Abstract

    Speech planning is a sophisticated process. In dialog, it regularly starts in overlap with an incoming turn by a conversation partner. We show that planning spoken responses in overlap with incoming turns is associated with higher processing load than planning in silence. In a dialogic experiment, participants took turns with a confederate describing lists of objects. The confederate’s utterances (to which participants responded) were pre‐recorded and varied in whether they ended in a verb or an object noun and whether this ending was predictable or not. We found that response planning in overlap with sentence‐final verbs evokes larger task‐evoked pupillary responses, while end predictability had no effect. This finding indicates that planning in overlap leads to higher processing load for next speakers in dialog and that next speakers do not proactively modulate the time course of their response planning based on their predictions of turn endings. The turn‐taking system exerts pressure on the language processing system by pushing speakers to plan in overlap despite the ensuing increase in processing load.
  • Basnakova, J. (2019). Beyond the language given: The neurobiological infrastructure for pragmatic inferencing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bastiaansen, M. C. M., & Knösche, T. R. (2000). MEG tangential derivative mapping applied to Event-Related Desynchronization (ERD) research. Clinical Neurophysiology, 111, 1300-1305.

    Abstract

    Objectives: A problem with the topographic mapping of MEG data recorded with axial gradiometers is that field extrema are measured at sensors located at either side of a neuronal generator instead of at sensors directly above the source. This is problematic for the computation of event-related desynchronization (ERD) on MEG data, since ERD relies on a correspondence between the signal maximum and the location of the neuronal generator.

    Methods: We present a new method based on computing spatial derivatives of the MEG data. The limitations of this method were investigated by means of forward simulations, and the method was applied to a 150-channel MEG dataset.

    Results: The simulations showed that the method has some limitations. (1) Fewer channels reduce accuracy and amplitude. (2) It is less suitable for deep or very extended sources. (3) Multiple sources can only be distinguished if they are not too close to each other. Applying the method in the calculation of ERD on experimental data led to a considerable improvement of the ERD maps.

    Conclusions: The proposed method offers a significant advantage over raw MEG signals, both for the topographic mapping of MEG and for the analysis of rhythmic MEG activity by means of ERD.
  • Bastiaansen, M. C. M., Magyari, L., & Hagoort, P. (2010). Syntactic unification operations are reflected in oscillatory dynamics during on-line sentence comprehension. Journal of Cognitive Neuroscience, 22, 1333-1347. doi:10.1162/jocn.2009.21283.

    Abstract

    There is growing evidence suggesting that synchronization changes in the oscillatory neuronal dynamics in the EEG or MEG reflect the transient coupling and uncoupling of functional networks related to different aspects of language comprehension. In this work, we examine how sentence-level syntactic unification operations are reflected in the oscillatory dynamics of the MEG. Participants read sentences that were either correct, contained a word category violation, or were constituted of random word sequences devoid of syntactic structure. A time-frequency analysis of MEG power changes revealed three types of effects. The first type of effect was related to the detection of a (word category) violation in a syntactically structured sentence, and was found in the alpha and gamma frequency bands. A second type of effect was maximally sensitive to the syntactic manipulations: A linear increase in beta power across the sentence was present for correct sentences, was disrupted upon the occurrence of a word category violation, and was absent in syntactically unstructured random word sequences. We therefore relate this effect to syntactic unification operations. Thirdly, we observed a linear increase in theta power across the sentence for all syntactically structured sentences. The effects are tentatively related to the building of a working memory trace of the linguistic input. In conclusion, the data seem to suggest that syntactic unification is reflected by neuronal synchronization in the lower-beta frequency band.
  • Bauer, B. L. M. (2020). Language sources and the reconstruction of early languages: Sociolinguistic discrepancies and evolution in Old French grammar. Diachronica, 37(3), 273-317. doi:10.1075/dia.18026.bau.

    Abstract

    This article argues that with the original emphasis on dialectal variation, using primarily literary texts from various regions, analysis of Old French has routinely neglected social variation, providing an incomplete picture of its grammar. Accordingly, Old French has been identified as typically featuring e.g. “pro-drop”, brace constructions, and single negation. Yet examination of these features in informal texts, as opposed to the formal texts typically dealt with, demonstrates that these documents do not corroborate the picture of Old French that is commonly presented in the linguistic literature. Our reconstruction of Old French grammar therefore needs adjustment and further refinement, in particular by implementing sociolinguistic data. With a broader scope, the call for inclusion of sociolinguistic variation may resonate in the investigation of other early languages, resulting in the reassessment of the sources used, and reopening the debate about social variation in dead languages and its role in language evolution.

  • Bauer, B. L. M. (2020). Appositive compounds in dialectal and sociolinguistic varieties of French. In M. Maiden, & S. Wolfe (Eds.), Variation and change in Gallo-Romance (pp. 326-346). Oxford: Oxford University Press.
  • Bauer, B. L. M. (2000). Archaic syntax in Indo-European: The spread of transitivity in Latin and French. Berlin: Mouton de Gruyter.

    Abstract

    Several grammatical features in early Indo-European traditionally have not been understood. Although Latin, for example, was a nominative language, a number of its inherited characteristics do not fit that typology and are difficult to account for, such as stative mihi est constructions to express possession, impersonal verbs, or absolute constructions. With time these archaic features have been replaced by transitive structures (e.g. possessive ‘have’). This book presents an extensive comparative and historical analysis of archaic features in early Indo-European languages and their gradual replacement in the history of Latin and early Romance, showing that the new structures feature transitive syntax and fit the patterns of a nominative language.
  • Bauer, B. L. M. (2010). Fore-runners of Romance -mente adverbs in Latin prose and poetry. In E. Dickey, & A. Chahoud (Eds.), Colloquial and literary Latin (pp. 339-353). Cambridge: Cambridge University Press.
  • Bauer, B. L. M. (2000). From Latin to French: The linear development of word order. In B. Bichakjian, T. Chernigovskaya, A. Kendon, & A. Müller (Eds.), Becoming Loquens: More studies in language origins (pp. 239-257). Frankfurt am Main: Lang.
  • Bauer, B. L. M. (2013). Impersonal verbs. In G. K. Giannakis (Ed.), Encyclopedia of Ancient Greek Language and Linguistics Online (pp. 197-198). Leiden: Brill. doi:10.1163/2214-448X_eagll_SIM_00000481.

    Abstract

    Impersonal verbs in Greek ‒ as in the other Indo-European languages ‒ exclusively feature 3rd person singular finite forms and convey one of three types of meaning: (a) meteorological conditions; (b) emotional and physical state/experience; (c) modality. In Greek, impersonal verbs predominantly convey meteorological conditions and modality.

  • Bauer, B. L. M. (2019). Language contact and language borrowing? Compound verb forms in the Old French translation of the Gospel of St. Mark. Belgian Journal of Linguistics, 33, 210-250. doi:10.1075/bjl.00028.bau.

    Abstract

    This study investigates the potential influence of Latin syntax on the development of analytic verb forms in a well-defined and concrete instance of language contact, the Old French translation of a Latin Gospel. The data show that the formation of verb forms in the Old French was remarkably independent from the Latin original. While the Old French text closely follows the narrative of the Latin Gospel, its usage of compound verb forms is not dictated by the source text, as reflected e.g. in the quasi-omnipresence of the relative sequence finite verb + pp, which – with a few exceptions – all trace back to a different structure in the Latin text. Another important innovative difference in the Old French is the widespread use of aveir ‘have’ as an auxiliary, unknown in Latin. The article examines in detail the relation between the verbal forms in the two texts, showing that the translation is in line with the grammar of Old French. The usage of compound verb forms in the Old French Gospel is therefore autonomous rather than contact stimulated, let alone contact induced. The results challenge Blatt’s (1957) assumption identifying compound verb forms as a shared feature in European languages that should be ascribed to Latin influence.

  • Bavin, E. L., & Kidd, E. (2000). Learning new verbs: Beyond the input. In C. Davis, T. J. Van Gelder, & R. Wales (Eds.), Cognitive Science in Australia, 2000: Proceedings of the Fifth Biennial Conference of the Australasian Cognitive Science Society.
  • Beattie, G. W., Cutler, A., & Pearson, M. (1982). Why is Mrs Thatcher interrupted so often? [Letters to Nature]. Nature, 300, 744-747. doi:10.1038/300744a0.

    Abstract

    If a conversation is to proceed smoothly, the participants have to take turns to speak. Studies of conversation have shown that there are signals which speakers give to inform listeners that they are willing to hand over the conversational turn. Some of these signals are part of the text (for example, completion of syntactic segments), some are non-verbal (such as completion of a gesture), but most are carried by the pitch, timing and intensity pattern of the speech; for example, both pitch and loudness tend to drop particularly low at the end of a speaker's turn. When one speaker interrupts another, the two can be said to be disputing who has the turn. Interruptions can occur because one participant tries to dominate or disrupt the conversation. But it could also be the case that mistakes occur in the way these subtle turn-yielding signals are transmitted and received. We demonstrate here that many interruptions in an interview with Mrs Margaret Thatcher, the British Prime Minister, occur at points where independent judges agree that her turn appears to have finished. It is suggested that she is unconsciously displaying turn-yielding cues at certain inappropriate points. The turn-yielding cues responsible are identified.
  • Becker, R., Pefkou, M., Michel, C. M., & Hervais-Adelman, A. (2013). Left temporal alpha-band activity reflects single word intelligibility. Frontiers in Systems Neuroscience, 7: 121. doi:10.3389/fnsys.2013.00121.

    Abstract

    The electroencephalographic (EEG) correlates of degraded speech perception have been explored in a number of recent studies. However, such investigations have often been inconclusive as to whether observed differences in brain responses between conditions result from different acoustic properties of more or less intelligible stimuli or whether they relate to cognitive processes implicated in comprehending challenging stimuli. In this study we used noise vocoding to spectrally degrade monosyllabic words in order to manipulate their intelligibility. We used spectral rotation to generate incomprehensible control conditions matched in terms of spectral detail. We recorded EEG from 14 volunteers who listened to a series of noise vocoded (NV) and noise-vocoded spectrally-rotated (rNV) words, while they carried out a detection task. We specifically sought components of the EEG response that showed an interaction between spectral rotation and spectral degradation. This reflects those aspects of the brain electrical response that are related to the intelligibility of acoustically degraded monosyllabic words, while controlling for spectral detail. An interaction between spectral complexity and rotation was apparent in both evoked and induced activity. Analyses of event-related potentials showed an interaction effect for a P300-like component at several centro-parietal electrodes. Time-frequency analysis of the EEG signal in the alpha-band revealed a monotonic increase in event-related desynchronization (ERD) for the NV but not the rNV stimuli in the alpha band at a left temporo-central electrode cluster from 420-560 ms reflecting a direct relationship between the strength of alpha-band ERD and intelligibility. By matching NV words with their incomprehensible rNV homologues, we reveal the spatiotemporal pattern of evoked and induced processes involved in degraded speech perception, largely uncontaminated by purely acoustic effects.
  • Begeer, S., Malle, B. F., Nieuwland, M. S., & Keysar, B. (2010). Using theory of mind to represent and take part in social interactions: Comparing individuals with high-functioning autism and typically developing controls. European Journal of Developmental Psychology, 7(1), 104-122. doi:10.1080/17405620903024263.

    Abstract

    The literature suggests that individuals with autism spectrum disorders (ASD) are deficient in their Theory of Mind (ToM) abilities. They sometimes do not seem to appreciate that behaviour is motivated by underlying mental states. If this is true, then individuals with ASD should also be deficient when they use their ToM to represent and take part in dyadic interactions. In the current study we compared the performance of normally intelligent adolescents and adults with ASD to typically developing controls. In one task they heard a narrative about an interaction and then retold it. In a second task they played a communication game that required them to take into account another person's perspective. We found that when they described people's behaviour the ASD individuals used fewer mental terms in their story narration, suggesting a lower tendency to represent interactions in mentalistic terms. Surprisingly, ASD individuals and control participants showed the same level of performance in the communication game that required them to distinguish between their beliefs and the other's beliefs. Given that ASD individuals show no deficiency in using their ToM in real interaction, it is unlikely that they have a systematically deficient ToM.
  • Behrens, B., Flecken, M., & Carroll, M. (2013). Progressive Attraction: On the Use and Grammaticalization of Progressive Aspect in Dutch, Norwegian, and German. Journal of Germanic linguistics, 25(2), 95-136. doi:10.1017/S1470542713000020.

    Abstract

    This paper investigates the use of aspectual constructions in Dutch, Norwegian, and German, languages in which aspect marking that presents events explicitly as ongoing is optional. Data were elicited under similar conditions with native speakers in the three countries. We show that while German speakers make insignificant use of aspectual constructions, usage patterns in Norwegian and Dutch present an interesting case of overlap, as well as differences, with respect to a set of factors that attract or constrain the use of different constructions. The results indicate that aspect marking is grammaticalizing in Dutch, but there are no clear signs of a similar process in Norwegian.
  • Beierholm, U., Rohe, T., Ferrari, A., Stegle, O., & Noppeney, U. (2020). Using the past to estimate sensory uncertainty. eLife, 9: e54172. doi:10.7554/eLife.54172.

    Abstract

    To form a more reliable percept of the environment, the brain needs to estimate its own sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. We evaluated this assumption in four psychophysical experiments, in which human observers localized auditory signals that were presented synchronously with spatially disparate visual signals. Critically, the visual noise changed dynamically over time continuously or with intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory uncertainty estimates that combine information from past and current signals consistent with an optimal Bayesian learner that can be approximated by exponential discounting. Our results challenge leading models of perceptual inference where sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.
  • Bekemeier, N., Brenner, D., Klepp, A., Biermann-Ruben, K., & Indefrey, P. (2019). Electrophysiological correlates of concept type shifts. PLoS One, 14(3): e0212624. doi:10.1371/journal.pone.0212624.

    Abstract

    A recent semantic theory of nominal concepts by Löbner [1] posits that, due to their inherent uniqueness and relationality properties, noun concepts can be classified into four concept types (CTs): sortal, individual, relational, and functional. For sortal nouns the default determination is indefinite (a stone), for individual nouns it is definite (the sun), and for relational and functional nouns it is possessive (his ear, his father). Incongruent determination leads to a concept type shift: his father (functional concept: unique, relational) vs. a father (sortal concept: non-unique, non-relational). Behavioral studies on CT shifts have demonstrated a CT congruence effect, with congruent determiners triggering faster lexical decision times on the subsequent noun than incongruent ones [2, 3]. The present ERP study investigated electrophysiological correlates of congruent and incongruent determination in German noun phrases, and specifically whether the CT congruence effect could be indexed by such classic ERP components as the N400, LAN, or P600. If incongruent determination affects the lexical retrieval or semantic integration of the noun, it should be reflected in the amplitude of the N400 component. If, however, CT congruence is processed by the same neuronal mechanisms that underlie morphosyntactic processing, incongruent determination should trigger a LAN and/or P600. These predictions were tested in two ERP studies. In Experiment 1, participants simply listened to noun phrases. In Experiment 2, they performed a well-formedness judgment task. The processing of (in)congruent CTs (his sun vs. the sun) was compared to the processing of morphosyntactic and semantic violations in control conditions. Whereas the control conditions elicited classic electrophysiological violation responses (N400, LAN, & P600), CT incongruences did not. Instead, they showed novel concept-type-specific response patterns. The absence of the classic ERP components suggests that CT-incongruent determination is not perceived as a violation of the semantic or morphosyntactic structure of the noun phrase.

    Additional information

    dataset
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Listening with great expectations: An investigation of word form anticipations in naturalistic speech. In Proceedings of Interspeech 2019 (pp. 2265-2269). doi:10.21437/Interspeech.2019-2741.

    Abstract

    The event-related potential (ERP) component named phonological mismatch negativity (PMN) arises when listeners hear an unexpected word form in a spoken sentence [1]. The PMN is thought to reflect the mismatch between expected and perceived auditory speech input. In this paper, we use the PMN to test a central premise in the predictive coding framework [2], namely that the mismatch between prior expectations and sensory input is an important mechanism of perception. We test this with natural speech materials containing approximately 50,000 word tokens. The corresponding EEG-signal was recorded while participants (n = 48) listened to these materials. Following [3], we quantify the mismatch with two word probability distributions (WPD): a WPD based on preceding context, and a WPD that is additionally updated based on the incoming audio of the current word. We use the between-WPD cross entropy for each word in the utterances and show that a higher cross entropy correlates with a more negative PMN. Our results show that listeners anticipate auditory input while processing each word in naturalistic speech. Moreover, complementing previous research, we show that predictive language processing occurs across the whole probability spectrum.
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Quantifying expectation modulation in human speech processing. In Proceedings of Interspeech 2019 (pp. 2270-2274). doi:10.21437/Interspeech.2019-2685.

    Abstract

    The mismatch between top-down predicted and bottom-up perceptual input is an important mechanism of perception according to the predictive coding framework (Friston, [1]). In this paper we develop and validate a new information-theoretic measure that quantifies the mismatch between expected and observed auditory input during speech processing. We argue that such a mismatch measure is useful for the study of speech processing. To compute the mismatch measure, we use naturalistic speech materials containing approximately 50,000 word tokens. For each word token we first estimate the prior word probability distribution with the aid of statistical language modelling, and next use automatic speech recognition to update this word probability distribution based on the unfolding speech signal. We validate the mismatch measure with multiple analyses, and show that the auditory-based update improves the probability of the correct word and lowers the uncertainty of the word probability distribution. Based on these results, we argue that it is possible to explicitly estimate the mismatch between predicted and perceived speech input with the cross entropy between word expectations computed before and after an auditory update.
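    The mismatch measure described in this abstract (the cross entropy between word probability distributions computed before and after the auditory update) can be sketched with toy numbers. The distributions below are hypothetical illustrations; the paper derives them from a statistical language model and an automatic speech recognizer over a large vocabulary.

```python
import math

def cross_entropy(updated, prior):
    """Cross entropy H(updated, prior) in bits, between two word
    probability distributions over the same vocabulary. A large value
    indicates the audio shifted probability mass toward words that the
    context-based prior considered unlikely."""
    return -sum(p * math.log2(prior[w]) for w, p in updated.items() if p > 0)

# Toy example: the context favours "cat", but the audio favours "cap".
prior   = {"cat": 0.7, "cap": 0.2, "car": 0.1}
updated = {"cat": 0.1, "cap": 0.8, "car": 0.1}
mismatch = cross_entropy(updated, prior)
```

    When the update confirms the prior, the cross entropy reduces to the prior's own entropy; the more the update disagrees, the larger the mismatch.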
  • Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Do speech registers differ in the predictability of words? International Journal of Corpus Linguistics, 24(1), 98-130. doi:10.1075/ijcl.17062.ben.

    Abstract

    Previous research has demonstrated that language use can vary depending on the context of situation. The present paper extends this finding by comparing word predictability differences between 14 speech registers ranging from highly informal conversations to read-aloud books. We trained 14 statistical language models to compute register-specific word predictability and trained a register classifier on the perplexity score vector of the language models. The classifier distinguishes perfectly between samples from all speech registers and this result generalizes to unseen materials. We show that differences in vocabulary and sentence length cannot explain the speech register classifier’s performance. The combined results show that speech registers differ in word predictability.
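    A heavily simplified sketch of the register-specific language-model idea, using unigram models and hypothetical two-register toy data (the study trains full statistical language models for 14 registers plus a separate classifier over the perplexity vector; here the register whose model yields the lowest perplexity is chosen directly):

```python
import math
from collections import Counter

def unigram_model(corpus, vocab, alpha=1.0):
    """Add-alpha smoothed unigram probabilities over a fixed vocabulary."""
    counts = Counter(corpus)
    total = sum(counts.values()) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def perplexity(model, sample):
    """Per-word perplexity of a sample under a unigram model."""
    log_prob = sum(math.log2(model[w]) for w in sample)
    return 2 ** (-log_prob / len(sample))

# Toy registers: casual conversation vs. read-aloud text.
conversation = "uh yeah well you know yeah uh well".split()
read_aloud = "the ancient forest stood silent beneath the stars".split()
vocab = set(conversation) | set(read_aloud)
models = {"conversation": unigram_model(conversation, vocab),
          "read-aloud": unigram_model(read_aloud, vocab)}

# Perplexity vector of a new sample over the register-specific models;
# the register whose model is least surprised wins.
sample = "uh well yeah".split()
scores = {reg: perplexity(m, sample) for reg, m in models.items()}
best = min(scores, key=scores.get)
```

    The design choice mirrors the abstract's logic: if registers genuinely differ in word predictability, a sample should be systematically less perplexing to its own register's model than to any other.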
  • Berends, S., Veenstra, A., & Van Hout, A. (2010). 'Nee, ze heeft er twee': Acquisition of the Dutch quantitative 'er'. Groninger Arbeiten zur Germanistischen Linguistik, 51, 1-7. Retrieved from http://irs.ub.rug.nl/dbi/4ef4a0b3eafcb.

    Abstract

    We present the first study on the acquisition of the Dutch quantitative pronoun er in sentences such as de vrouw draagt er drie ‘the woman is carrying three.’ There is a large literature on Dutch children’s interpretation of pronouns and a few recent production studies, all specifically looking at 3rd person singular pronouns and the so-called Delay of Principle B effect (Coopmans & Philip, 1996; Koster, 1993; Spenader, Smits and Hendriks, 2009). However, no one has studied children’s use of quantitative er. Dutch is the only Germanic language with such a pronoun.
  • Bergelson*, E., Casillas*, M., Soderstrom, M., Seidl, A., Warlaumont, A. S., & Amatuni, A. (2019). What Do North American Babies Hear? A large-scale cross-corpus analysis. Developmental Science, 22(1): e12724. doi:10.1111/desc.12724.

    Abstract

    * indicates joint first authorship. A range of demographic variables influence how much speech young children hear. However, because studies have used vastly different sampling methods, quantitative comparison of interlocking demographic effects has been nearly impossible, across or within studies. We harnessed a unique collection of existing naturalistic, day-long recordings from 61 homes across four North American cities to examine language input as a function of age, gender, and maternal education. We analyzed adult speech heard by 3- to 20-month-olds who wore audio recorders for an entire day. We annotated speaker gender and speech register (child-directed or adult-directed) for 10,861 utterances from female and male adults in these recordings. Examining age, gender, and maternal education collectively in this ecologically valid dataset, we find several key results. First, the speaker gender imbalance in the input is striking: children heard 2-3 times more speech from females than males. Second, children in higher-maternal-education homes heard more child-directed speech than those in lower-maternal-education homes. Finally, our analyses revealed a previously unreported effect: the proportion of child-directed speech in the input increases with age, due to a decrease in adult-directed speech with age. This large-scale analysis is an important step forward in collectively examining demographic variables that influence early development, made possible by pooled, comparable, day-long recordings of children's language environments. The audio recordings, annotations, and annotation software are readily available for re-use and re-analysis by other researchers.

    Additional information

    desc12724-sup-0001-supinfo.pdf
  • Bergmann, C., Ten Bosch, L., Fikkert, P., & Boves, L. (2013). A computational model to investigate assumptions in the headturn preference procedure. Frontiers in Psychology, 4: 676. doi:10.3389/fpsyg.2013.00676.

    Abstract

    In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioral differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects in the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarization and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximize cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first two assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviors observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.
  • Bergmann, C., Paulus, M., & Fikkert, J. (2010). A closer look at pronoun comprehension: Comparing different methods. In J. Costa, A. Castro, M. Lobo, & F. Pratas (Eds.), Language Acquisition and Development: Proceedings of GALA 2009 (pp. 53-61). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    External input is necessary to acquire language. Consequently, the comprehension of various constituents of language, such as lexical items or syntactic and semantic structures, should emerge at the same time as or even precede their production. However, in the case of pronouns this general assumption does not seem to hold. On the contrary, while children at the age of four use pronouns and reflexives appropriately in production (de Villiers et al., 2006), a number of comprehension studies across different languages found chance performance in pronoun trials up to the age of seven, which co-occurs with a high level of accuracy in reflexive trials (for an overview see e.g. Conroy et al., 2009; Elbourne, 2005).
  • Bergmann, C., Gubian, M., & Boves, L. (2010). Modelling the effect of speaker familiarity and noise on infant word recognition. In Proceedings of the 11th Annual Conference of the International Speech Communication Association [Interspeech 2010] (pp. 2910-2913). ISCA.

    Abstract

    In the present paper we show that a general-purpose word learning model can simulate several important findings from recent experiments in language acquisition. Both the addition of background noise and varying the speaker have been found to influence infants’ performance during word recognition experiments. We were able to replicate this behaviour in our artificial word learning agent. We use the results to discuss both advantages and limitations of computational models of language acquisition.
  • Bertamini, M., Rampone, G., Makin, A. D. J., & Jessop, A. (2019). Symmetry preference in shapes, faces, flowers and landscapes. PeerJ, 7: e7078. doi:10.7717/peerj.7078.

    Abstract

    Most people like symmetry, and symmetry has been extensively used in visual art and architecture. In this study, we compared preference for images of abstract and familiar objects in the original format or when containing perfect bilateral symmetry. We created pairs of images for different categories: male faces, female faces, polygons, smoothed versions of the polygons, flowers, and landscapes. This design allows us to compare symmetry preference in different domains. Each observer saw all categories randomly interleaved but saw only one of the two images in a pair. After recording preference, we recorded a rating of how salient the symmetry was for each image, and measured how quickly observers could decide which of the two images in a pair was symmetrical. Results reveal a general preference for symmetry in the case of shapes and faces. For landscapes, natural (no perfect symmetry) images were preferred. Correlations with judgments of saliency were present but generally low, and for landscapes the salience of symmetry was negatively related to preference. However, even within the category where symmetry was not liked (landscapes), the separate analysis of original and modified stimuli showed an interesting pattern: Salience of symmetry was correlated positively (artificial) or negatively (original) with preference, suggesting different effects of symmetry within the same class of stimuli based on context and categorization.

    Additional information

    Supplemental Information
  • Bickel, B. (1991). Der Hang zur Exzentrik - Annäherungen an das kognitive Modell der Relativkonstruktion. In W. Bisang, & P. Rinderknecht (Eds.), Von Europa bis Ozeanien - von der Antinomie zum Relativsatz (pp. 15-37). Zurich, Switzerland: Seminar für Allgemeine Sprachwissenschaft der Universität.
  • Bidgood, A., Pine, J. M., Rowland, C. F., & Ambridge, B. (2020). Syntactic representations are both abstract and semantically constrained: Evidence from children’s and adults’ comprehension and production/priming of the English passive. Cognitive Science, 44(9): e12892. doi:10.1111/cogs.12892.

    Abstract

    All accounts of language acquisition agree that, by around age 4, children's knowledge of grammatical constructions is abstract, rather than tied solely to individual lexical items. The aim of the present research was to investigate, focusing on the passive, whether children's and adults' performance is additionally semantically constrained, varying according to the distance between the semantics of the verb and those of the construction. In a forced-choice pointing study (Experiment 1), both 4- to 6-year-olds (N = 60) and adults (N = 60) showed the interaction predicted by this semantic construction prototype account: the observed disadvantage for passives as compared to actives (i.e., fewer correct points/longer reaction times) was greater for experiencer-theme verbs than for agent-patient and theme-experiencer verbs (e.g., Bob was seen/hit/frightened by Wendy). Similarly, in a production/priming study (Experiment 2), both 4- to 6-year-olds (N = 60) and adults (N = 60) produced fewer passives for experiencer-theme verbs than for agent-patient/theme-experiencer verbs. We conclude that these findings are difficult to explain under accounts based on the notion of A(rgument) movement or on a monostratal, semantics-free level of syntax, and instead necessitate some form of semantic construction prototype account.

    Additional information

    Supplementary material
  • Bielczyk, N. Z., Piskała, K., Płomecka, M., Radziński, P., Todorova, L., & Foryś, U. (2019). Time-delay model of perceptual decision making in cortical networks. PLoS One, 14: e0211885. doi:10.1371/journal.pone.0211885.

    Abstract

    It is known that cortical networks operate on the edge of instability, in which oscillations can appear. However, the influence of this dynamic regime on performance in decision making is not well understood. In this work, we propose a population model of decision making based on a winner-take-all mechanism. Using this model, we demonstrate that local slow inhibition within the competing neuronal populations can lead to a Hopf bifurcation. At the edge of instability, the system exhibits ambiguity in the decision making, which can account for the perceptual switches observed in human experiments. We further validate this model with fMRI datasets from an experiment on semantic priming in perception of ambivalent (male versus female) faces. We demonstrate that the model can correctly predict the drop in the variance of the BOLD signal within the Superior Parietal Area and Inferior Parietal Area while participants watched ambiguous visual stimuli.
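    The winner-take-all competition between two neuronal populations, the core mechanism of the model, can be illustrated with a minimal rate-equation sketch. All parameters here are hypothetical, and the paper's model additionally includes the slow, delayed local inhibition that produces the Hopf bifurcation:

```python
def simulate_wta(i1, i2, steps=2000, dt=0.01, w_exc=0.5, w_inh=1.2, tau=1.0):
    """Euler-integrate two rate populations with self-excitation and
    mutual inhibition. The population receiving stronger input suppresses
    the other, implementing a winner-take-all decision."""
    r1 = r2 = 0.0
    for _ in range(steps):
        dr1 = (-r1 + max(0.0, i1 + w_exc * r1 - w_inh * r2)) / tau
        dr2 = (-r2 + max(0.0, i2 + w_exc * r2 - w_inh * r1)) / tau
        r1 += dt * dr1
        r2 += dt * dr2
    return r1, r2

# With inputs 1.0 vs. 0.8, population 1 wins and population 2 is silenced.
winner, loser = simulate_wta(1.0, 0.8)
```

    Near-equal inputs place the system close to the unstable symmetric state, where small fluctuations decide the winner; this is the regime in which the full model's delayed inhibition can generate oscillatory, ambiguous decisions.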

    Additional information

    supporting information
  • Blasi, D. E., Moran, S., Moisik, S. R., Widmer, P., Dediu, D., & Bickel, B. (2019). Human sound systems are shaped by post-Neolithic changes in bite configuration. Science, 363(6432): eaav3218. doi:10.1126/science.aav3218.

    Abstract

    Linguistic diversity, now and in the past, is widely regarded to be independent of biological changes that took place after the emergence of Homo sapiens. We show converging evidence from paleoanthropology, speech biomechanics, ethnography, and historical linguistics that labiodental sounds (such as “f” and “v”) were innovated after the Neolithic. Changes in diet attributable to food-processing technologies modified the human bite from an edge-to-edge configuration to one that preserves adolescent overbite and overjet into adulthood. This change favored the emergence and maintenance of labiodentals. Our findings suggest that language is shaped not only by the contingencies of its history, but also by culturally induced changes in human biology.

  • De Bleser, R., Willmes, K., Graetz, P., & Hagoort, P. (1991). De Akense Afasie Test. Logopedie en Foniatrie, 63, 207-217.
  • Blythe, J. (2010). From ethical datives to number markers in Murriny Patha. In R. Hendery, & J. Hendriks (Eds.), Grammatical change: Theory and description (pp. 157-187). Canberra: Pacific Linguistics.
  • Blythe, J. (2010). Self-association in Murriny Patha talk-in-interaction. In I. Mushin, & R. Gardner (Eds.), Studies in Australian Indigenous Conversation [Special issue] (pp. 447-469). Australian Journal of Linguistics. doi:10.1080/07268602.2010.518555.

    Abstract

    When referring to persons in talk-in-interaction, interlocutors recruit the particular referential expressions that best satisfy both cultural and interactional contingencies, as well as the speaker's own personal objectives. Regular referring practices reveal cultural preferences for choosing particular classes of reference forms for engaging in particular types of activities. When speakers of the northern Australian language Murriny Patha refer to each other, they display a clear preference for associating the referent to the current conversation's participants. This preference for Association is normally achieved through the use of triangular reference forms such as kinterms. Triangulations are reference forms that link the person being spoken about to another specified person (e.g. Bill's doctor). Triangulations are frequently used to associate the referent to the current speaker (e.g. my father), to an addressed recipient (your uncle) or to a co-present other (this bloke's cousin). Murriny Patha speakers regularly associate key persons to themselves when making authoritative claims about items of business and important events. They frequently draw on kinship links when attempting to bolster their epistemic position. When speakers demonstrate their relatedness to the event's protagonists, they ground their contribution to the discussion as being informed by appropriate genealogical connections (effectively, 'I happen to know something about that. He was after all my own uncle').
  • Blythe, J. (2013). Preference organization driving structuration: Evidence from Australian Aboriginal interaction for pragmatically motivated grammaticalization. Language, 89(4), 883-919.
  • Bobadilla-Suarez, S., Guest, O., & Love, B. C. (2020). Subjective value and decision entropy are jointly encoded by aligned gradients across the human brain. Communications Biology, 3: 597. doi:10.1038/s42003-020-01315-3.

    Abstract

    Recent work has considered the relationship between value and confidence in both behavioural and neural representation. Here we evaluated whether the brain organises value and confidence signals in a systematic fashion that reflects the overall desirability of options. If so, regions that respond to either increases or decreases in both value and confidence should be widespread. We strongly confirmed these predictions through a model-based fMRI analysis of a mixed gambles task that assessed subjective value (SV) and inverse decision entropy (iDE), which is related to confidence. Purported value areas more strongly signalled iDE than SV, underscoring how intertwined value and confidence are. A gradient tied to the desirability of actions transitioned from positive SV and iDE in ventromedial prefrontal cortex to negative SV and iDE in dorsal medial prefrontal cortex. This alignment of SV and iDE signals could support retrospective evaluation to guide learning and subsequent decisions.

    Additional information

    supplemental information
  • Bocanegra, B. R., Poletiek, F. H., Ftitache, B., & Clark, A. (2019). Intelligent problem-solvers externalize cognitive operations. Nature Human Behaviour, 3, 136-142. doi:10.1038/s41562-018-0509-y.

    Abstract

    Humans are nature’s most intelligent and prolific users of external props and aids (such as written texts, slide-rules and software packages). Here we introduce a method for investigating how people make active use of their task environment during problem-solving and apply this approach to the non-verbal Raven Advanced Progressive Matrices test for fluid intelligence. We designed a click-and-drag version of the Raven test in which participants could create different external spatial configurations while solving the puzzles. In our first study, we observed that the click-and-drag test was better than the conventional static test at predicting academic achievement of university students. This pattern of results was partially replicated in a novel sample. Importantly, environment-altering actions were clustered in between periods of apparent inactivity, suggesting that problem-solvers were delicately balancing the execution of internal and external cognitive operations. We observed a systematic relationship between this critical phasic temporal signature and improved test performance. Our approach is widely applicable and offers an opportunity to quantitatively assess a powerful, although understudied, feature of human intelligence: our ability to use external objects, props and aids to solve complex problems.
