Publications

  • Drew, P., Hakulinen, A., Heinemann, T., Niemi, J., & Rossi, G. (2021). Hendiadys in naturally occurring interactions: A cross-linguistic study of double verb constructions. Journal of Pragmatics, 182, 322-347. doi:10.1016/j.pragma.2021.02.008.

    Abstract

    Double verb constructions known as hendiadys have been studied primarily in literary texts and corpora of written language. Much less is known about their properties and usage in spoken language, where expressions such as ‘come and see’, ‘go and tell’, ‘sit and talk’ are particularly common, and where we can find an even richer diversity of other constructions. In this study, we investigate hendiadys in corpora of naturally occurring social interactions in four languages, Danish, English (US and UK), Finnish and Italian, with the objective of exploring whether hendiadys is used systematically in recurrent interactional and sequential circumstances, from which it is possible to identify the pragmatic function(s) that hendiadys may serve. Examining hendiadys in conversation also offers us a special window into its grammatical properties, for example when a speaker self-corrects from a non-hendiadic to a hendiadic expression, exposing the boundary between related grammatical forms and demonstrating the distinctiveness of hendiadys in context. More broadly, we demonstrate that hendiadys is systematically associated with talk about complainable matters, in environments characterised by a conflict, dissonance, or friction that is ongoing in the interaction or that is being reported by one participant to another. We also find that the utterance in which hendiadys is used is typically in a subsequent and possibly terminal position in the sequence, summarising or concluding it. Another key finding is that the complainable or conflictual element in these interactions is expressed primarily by the first conjunct of the hendiadic construction. Whilst the first conjunct is semantically subsidiary to the second, it is pragmatically the most important one. This analysis leads us to revisit a long-established asymmetry between the verbal components of hendiadys, and to bring to light the synergy of grammar and pragmatics in language usage.
  • Drijvers, L., Jensen, O., & Spaak, E. (2021). Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information. Human Brain Mapping, 42(4), 1138-1152. doi:10.1002/hbm.25282.

    Abstract

    During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1440 Hz refresh rate). Integration ease was manipulated by auditory factors (clear/degraded speech) and visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (f_visual – f_auditory = 7 Hz), specifically when integration was easiest (i.e., when speech was clear and accompanied by a congruent gesture). This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions, areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context.
  • Drijvers, L., & Holler, J. (2023). The multimodal facilitation effect in human communication. Psychonomic Bulletin & Review, 30(2), 792-801. doi:10.3758/s13423-022-02178-x.

    Abstract

    During face-to-face communication, recipients need to rapidly integrate a plethora of auditory and visual signals. This integration of signals from many different bodily articulators, all offset in time, with the information in the speech stream may either tax the cognitive system, thus slowing down language processing, or may result in multimodal facilitation. Using the classical shadowing paradigm, participants shadowed speech from face-to-face, naturalistic dyadic conversations in an audiovisual context, an audiovisual context without visual speech (e.g., lips), and an audio-only context. Our results provide evidence of a multimodal facilitation effect in human communication: participants were faster in shadowing words when seeing multimodal messages compared with when hearing only audio. Also, the more visual context was present, the fewer shadowing errors were made, and the earlier in time participants shadowed predicted lexical items. We propose that the multimodal facilitation effect may contribute to the ease of fast face-to-face conversational interaction.
  • Drijvers, L., & Mazzini, S. (2023). Neural oscillations in audiovisual language and communication. In Oxford Research Encyclopedia of Neuroscience. Oxford: Oxford University Press. doi:10.1093/acrefore/9780190264086.013.455.

    Abstract

    How do neural oscillations support human audiovisual language and communication? Considering the rhythmic nature of audiovisual language, in which stimuli from different sensory modalities unfold over time, neural oscillations represent an ideal candidate to investigate how audiovisual language is processed in the brain. Modulations of oscillatory phase and power are thought to support audiovisual language and communication in multiple ways. Neural oscillations synchronize by tracking external rhythmic stimuli or by re-setting their phase to presentation of relevant stimuli, resulting in perceptual benefits. In particular, synchronized neural oscillations have been shown to subserve the processing and the integration of auditory speech, visual speech, and hand gestures. Furthermore, synchronized oscillatory modulations have been studied and reported between brains during social interaction, suggesting that their contribution to audiovisual communication goes beyond the processing of single stimuli and applies to natural, face-to-face communication.

    There are still some outstanding questions that need to be answered to reach a better understanding of the neural processes supporting audiovisual language and communication. In particular, it is not entirely clear yet how the multitude of signals encountered during audiovisual communication are combined into a coherent percept and how this is affected during real-world dyadic interactions. In order to address these outstanding questions, it is fundamental to consider language as a multimodal phenomenon, involving the processing of multiple stimuli unfolding at different rhythms over time, and to study language in its natural context: social interaction. Other outstanding questions could be addressed by implementing novel techniques (such as rapid invisible frequency tagging, dual-electroencephalography, or multi-brain stimulation) and analysis methods (e.g., using temporal response functions) to better understand the relationship between oscillatory dynamics and efficient audiovisual communication.
  • Drolet, M., & Kempen, G. (1985). IPG: A cognitive approach to sentence generation. CCAI: The Journal for the Integrated Study of Artificial Intelligence, Cognitive Science and Applied Epistemology, 2, 37-61.
  • Drude, S. (2009). Nasal harmony in Awetí ‐ A declarative account. ReVEL - Revista Virtual de Estudos da Linguagem, (3). Retrieved from http://www.revel.inf.br/en/edicoes/?mode=especial&id=16.

    Abstract

    This article describes and analyses nasal harmony (or spreading of nasality) in Awetí. It first shows generally how sounds in prefixes adapt to the nasality or orality of stems, and how nasality in stems also ‘extends’ to the left. With abstract templates we show which phonetically nasal or oral sequences are possible in Awetí (focusing on stops, pre-nasalized stops and nasals) and which phonological analysis is appropriate to account for these regularities. In Awetí, there are intrinsically nasal and oral vowels and ‘neutral’ vowels which adapt phonetically to a following vowel or consonant, as is the case for sonorant consonants. Pre-nasalized stops such as “nt” are nasalized variants of stops, not post-oralized variants of nasals as in Tupí-Guaranian languages. For nasals and stops in syllable coda (at the end of morphemes), we postulate archiphonemes which adapt to the preceding vowel or a following consonant. Finally, using a declarative approach, the analysis formulates ‘rules’ (statements) which account for the ‘behavior’ of nasality in Awetí words, making use of “structured sequences” on both the phonetic and phonological levels. Each unit (syllable, morpheme, word, etc.) on any level thus has three components: a sequence of segments, a constituent structure (where pre-nasalized stops, like diphthongs, correspond to two segments), and an intonation structure. The statements describe which phonetic variants can be combined (concatenated) with which other variants, depending on their nasality or orality.
  • Duffield, N., Matsuo, A., & Roberts, L. (2009). Factoring out the parallelism effect in VP-ellipsis: English vs. Dutch contrasts. Second Language Research, 25, 427-467. doi:10.1177/0267658309349425.

    Abstract

    Previous studies, including Duffield and Matsuo (2001; 2002; 2009), have demonstrated second language learners’ overall sensitivity to a parallelism constraint governing English VP-ellipsis constructions: like native speakers (NS), advanced Dutch, Spanish and Japanese learners of English reliably prefer ellipsis clauses with structurally parallel antecedents over those with non-parallel antecedents. However, these studies also suggest that, in contrast to English native speakers, L2 learners’ sensitivity to parallelism is strongly influenced by other non-syntactic formal factors, such that the constraint applies in a comparatively restricted range of construction-specific contexts. This article reports a set of follow-up experiments — from both computer-based as well as more traditional acceptability judgement tasks — that systematically manipulates these other factors. Convergent results from these tasks confirm a qualitative difference in the judgement patterns of the two groups, as well as important differences between theoreticians’ judgements and those of typical native speakers. We consider the implications of these findings for theories of ultimate attainment in second language acquisition (SLA), as well as for current theoretical accounts of ellipsis.
  • Düngen, D., Fitch, W. T., & Ravignani, A. (2023). Hoover the talking seal [quick guide]. Current Biology, 33, R50-R52. doi:10.1016/j.cub.2022.12.023.
  • Düngen, D., & Ravignani, A. (2023). The paradox of learned song in a semi-solitary mammal. Ethology, 129(9), 445-497. doi:10.1111/eth.13385.

    Abstract

    Learning can occur via trial and error; however, learning from conspecifics is faster and more efficient. Social animals can easily learn from conspecifics, but how do less social species learn? In particular, birds provide astonishing examples of social learning of vocalizations, while vocal learning from conspecifics is much less understood in mammals. We present a hypothesis aimed at solving an apparent paradox: how can harbor seals (Phoca vitulina) learn their song when their whole lives are marked by loose conspecific social contact? Harbor seal pups are raised individually by their mostly silent mothers. Pups' first few weeks of life show developed vocal plasticity; these weeks are followed by relatively silent years until sexually mature individuals start singing. How can this rather solitary life lead to a learned song? Why do pups display vocal plasticity at a few weeks of age, when this is apparently not needed? Our hypothesis addresses these questions and tries to explain how vocal learning fits into the natural history of harbor seals, and potentially other less social mammals. We suggest that harbor seals learn during a sensitive period within puppyhood, where they are exposed to adult males singing. In particular, we hypothesize that, to make this learning possible, the following happens concurrently: (1) mothers give birth right before male singing starts, (2) pups enter a sensitive learning phase around weaning time, which (3) coincides with their foraging expeditions at sea which, (4) in turn, coincide with the peak singing activity of adult males. In other words, harbor seals show vocal learning as pups so they can acquire elements of their future song from adults, and solitary adults can sing because they have acquired these elements as pups. We review the available evidence and suggest that pups learn adult vocalizations because they are born exactly at the right time to eavesdrop on singing adults. We conclude by advancing empirical predictions and testable hypotheses for future work.
  • Düngen, D., Sarfati, M., & Ravignani, A. (2023). Cross-species research in biomusicality: Methods, pitfalls, and prospects. In E. H. Margulis, P. Loui, & D. Loughridge (Eds.), The science-music borderlands: Reckoning with the past and imagining the future (pp. 57-95). Cambridge, MA, USA: The MIT Press. doi:10.7551/mitpress/14186.003.0008.
  • Dunn, M. (2009). Contact and phylogeny in Island Melanesia. Lingua, 119(11), 1664-1678. doi:10.1016/j.lingua.2007.10.026.

    Abstract

    This paper shows that despite evidence of structural convergence between some of the Austronesian and non-Austronesian (Papuan) languages of Island Melanesia, statistical methods can detect two independent genealogical signals derived from linguistic structural features. Earlier work by the author and others presented a maximum parsimony analysis which gave evidence for a genealogical connection between the non-Austronesian languages of Island Melanesia. Using the same data set, this paper demonstrates for the non-statistician the application of more sophisticated statistical techniques, including Bayesian methods of phylogenetic inference, and shows that the evidence for common ancestry is, if anything, stronger than originally supposed.
  • Dunn, M., Kruspe, N., & Burenhult, N. (2013). Time and place in the prehistory of the Aslian languages. Human Biology, 85, 383-399.

    Abstract

    The Aslian branch of Austroasiatic is recognised as the oldest recoverable language family in the Malay Peninsula, predating the now dominant Austronesian languages present today. In this paper we address the dynamics of the prehistoric spread of Aslian languages across the peninsula, including the languages spoken by Semang foragers, traditionally associated with the 'Negrito' phenotype. The received view of an early and uniform tripartite break-up of proto-Aslian in the Early Neolithic period, and subsequent differentiation driven by societal modes is challenged. We present a Bayesian phylogeographic analysis of our dataset of vocabulary from 28 Aslian varieties. An explicit geographic model of diffusion is combined with a cognate birth-word death model of lexical evolution to infer the location of the major events of Aslian cladogenesis. The resultant phylogenetic trees are calibrated against dates in the historical and archaeological record to extrapolate a detailed picture of Aslian language history. We conclude that a binary split between Southern Aslian and the rest of Aslian took place in the Early Neolithic (4000 BP). This was followed much later in the Late Neolithic (2000-3000 BP) by a tripartite branching into Central Aslian, Jah Hut and Northern Aslian. Subsequent internal divisions within these sub-clades took place in the Early Metal Phase (post-2000 BP). Significantly, a split in Northern Aslian between Ceq Wong and the languages of the Semang was a late development and is proposed here to coincide with the adoption of Aslian by the Semang foragers. Given the difficulties involved in associating archaeologically recorded activities with linguistic events, as well as the lack of historical sources, our results remain preliminary. However, they provide sufficient evidence to prompt a rethinking of previous models of both clado- and ethno-genesis within the Malay Peninsula.
  • Duprez, J., Stokkermans, M., Drijvers, L., & Cohen, M. X. (2021). Synchronization between keyboard typing and neural oscillations. Journal of Cognitive Neuroscience, 33(5), 887-901. doi:10.1162/jocn_a_01692.

    Abstract

    Rhythmic neural activity synchronizes with certain rhythmic behaviors, such as breathing, sniffing, saccades, and speech. The extent to which neural oscillations synchronize with higher-level and more complex behaviors is largely unknown. Here we investigated electrophysiological synchronization with keyboard typing, an omnipresent behavior engaged in daily by a very large number of people. Keyboard typing is rhythmic, with frequency characteristics roughly the same as neural oscillatory dynamics associated with cognitive control, notably through midfrontal theta (4-7 Hz) oscillations. We tested the hypothesis that synchronization occurs between typing and midfrontal theta, and breaks down when errors are committed. Thirty healthy participants typed words and sentences on a keyboard without visual feedback, while EEG was recorded. Typing rhythmicity was investigated by inter-keystroke interval analyses and by a kernel density estimation method. We used a multivariate spatial filtering technique to investigate frequency-specific synchronization between typing and neuronal oscillations. Our results demonstrate theta rhythmicity in typing (around 6.5 Hz) through the two different behavioral analyses. Synchronization between typing and neuronal oscillations occurred at frequencies ranging from 4 to 15 Hz, but to a larger extent for lower frequencies. However, peak synchronization frequency was idiosyncratic across subjects, therefore specific neither to theta nor to midfrontal regions, and correlated somewhat with peak typing frequency. Errors and trials associated with stronger cognitive control were not associated with changes in synchronization at any frequency. As a whole, this study shows that brain-behavior synchronization does occur during keyboard typing but is not specific to midfrontal theta.
  • Durco, M., & Windhouwer, M. (2013). Semantic Mapping in CLARIN Component Metadata. In Proceedings of MTSR 2013, the 7th Metadata and Semantics Research Conference (pp. 163-168). New York: Springer.

    Abstract

    In recent years, large-scale initiatives like CLARIN have set out to overcome the notorious heterogeneity of metadata formats in the domain of language resources. The CLARIN Component Metadata Infrastructure established means for flexible resource descriptions for the domain of language resources. The Data Category Registry ISOcat and the accompanying Relation Registry foster semantic interoperability within the growing heterogeneous collection of metadata records. This paper describes the CMD Infrastructure, focusing on the facilities for semantic mapping, and also gives an overview of the current status of the joint component metadata domain.
  • Durrant, S., Jessop, A., Chang, F., Bidgood, A., Peter, M. S., Pine, J. M., & Rowland, C. F. (2021). Does the understanding of complex dynamic events at 10 months predict vocabulary development? Language and Cognition, 13(1), 66-98. doi:10.1017/langcog.2020.26.

    Abstract

    By the end of their first year, infants can interpret many different types of complex dynamic visual events, such as caused-motion, chasing, and goal-directed action. Infants of this age are also in the early stages of vocabulary development, producing their first words at around 12 months. The present work examined whether there are meaningful individual differences in infants’ ability to represent dynamic causal events in visual scenes, and whether these differences influence vocabulary development. As part of the longitudinal Language 0–5 Project, 78 10-month-old infants were tested on their ability to interpret three dynamic motion events, involving (a) caused-motion, (b) chasing behaviour, and (c) goal-directed movement. Planned analyses found that infants showed evidence of understanding the first two event types, but not the third. Looking behaviour in each task was not meaningfully related to vocabulary development, nor were there any correlations between the tasks. The results of additional exploratory analyses and simulations suggested that the infants’ understanding of each event may not be predictive of their vocabulary development, and that looking times in these tasks may not be reliably capturing any meaningful individual differences in their knowledge. This raises questions about how to convert experimental group designs to individual differences measures, and how to interpret infant looking time behaviour.
  • Eekhof, L. S., Kuijpers, M. M., Faber, M., Gao, X., Mak, M., Van den Hoven, E., & Willems, R. M. (2021). Lost in a story, detached from the words. Discourse Processes, 58(7), 595-616. doi:10.1080/0163853X.2020.1857619.

    Abstract

    This article explores the relationship between low- and high-level aspects of reading by studying the interplay between word processing, as measured with eye tracking, and narrative absorption and liking, as measured with questionnaires. Specifically, we focused on how individual differences in sensitivity to lexical word characteristics—measured as the effect of these characteristics on gaze duration—were related to narrative absorption and liking. By reanalyzing a large data set consisting of three previous eye-tracking experiments in which subjects (N = 171) read literary short stories, we replicated the well-established finding that word length, lemma frequency, position in sentence, age of acquisition, and orthographic neighborhood size of words influenced gaze duration. More importantly, we found that individual differences in the degree of sensitivity to three of these word characteristics, i.e., word length, lemma frequency, and age of acquisition, were negatively related to print exposure and to a lesser degree to narrative absorption and liking. Even though the underlying mechanisms of this relationship are still unclear, we believe the current findings underline the need to map out the interplay between, on the one hand, the technical and, on the other hand, the subjective processes of reading by studying reading behavior in more natural settings.

  • Eekhof, L. S., Van Krieken, K., Sanders, J., & Willems, R. M. (2023). Engagement with narrative characters: The role of social-cognitive abilities and linguistic viewpoint. Discourse Processes, 60(6), 411-439. doi:10.1080/0163853X.2023.2206773.

    Abstract

    This article explores the role of text and reader characteristics in character engagement experiences. In an online study, participants completed several self-report and behavioral measures of social-cognitive abilities and read two literary narratives in which the presence of linguistic viewpoint markers was varied using a highly controlled manipulation strategy. Afterward, participants reported on their character engagement experiences. A principal component analysis on participants’ responses revealed the multidimensional nature of character engagement, which included both self- and other-oriented emotional responses (e.g., empathy, personal distress) as well as more cognitive responses (e.g., identification, perspective taking). Furthermore, character engagement was found to rely on a wide range of social-cognitive abilities but not on the presence of viewpoint markers. Finally, and most importantly, we did not find convincing evidence for an interplay between social-cognitive abilities and the presence of viewpoint markers. These findings suggest that readers rely on their social-cognitive abilities to engage with the inner worlds of fictional others, more so than on the lexical cues of those inner worlds provided by the text.
  • Eekhof, L. S., Van Krieken, K., Sanders, J., & Willems, R. M. (2021). Reading minds, reading stories: Social-cognitive abilities affect the linguistic processing of narrative viewpoint. Frontiers in Psychology, 12: 698986. doi:10.3389/fpsyg.2021.698986.

    Abstract

    Although various studies have shown that narrative reading draws on social-cognitive abilities, not much is known about the precise aspects of narrative processing that engage these abilities. We hypothesized that the linguistic processing of narrative viewpoint—expressed by elements that provide access to the inner world of characters—might play an important role in engaging social-cognitive abilities. Using eye tracking, we studied the effect of lexical markers of perceptual, cognitive, and emotional viewpoint on eye movements during reading of a 5,000-word narrative. Next, we investigated how this relationship was modulated by individual differences in social-cognitive abilities. Our results show diverging patterns of eye movements for perceptual viewpoint markers on the one hand, and cognitive and emotional viewpoint markers on the other. Whereas the former are processed relatively fast compared to non-viewpoint markers, the latter are processed relatively slow. Moreover, we found that social-cognitive abilities impacted the processing of words in general, and of perceptual and cognitive viewpoint markers in particular, such that both perspective-taking abilities and self-reported perspective-taking traits facilitated the processing of these markers. All in all, our study extends earlier findings that social cognition is of importance for story reading, showing that individual differences in social-cognitive abilities are related to the linguistic processing of narrative viewpoint.

  • Egger, J. (2023). Need for speed? The role of speed of processing in early lexical development. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Eibl-Eibesfeldt, I., & Senft, G. (1991). Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kaile'una) Tänze zur Einleitung des Erntefeier-Rituals. Film E 3129. Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kiriwina); Ausschnitte aus einem Erntefesttanz. Film E 3130. Publikationen zu wissenschaftlichen Filmen. Sektion Ethnologie, 17, 1-17.
  • Eicher, J. D., Powers, N. R., Miller, L. L., Akshoomoff, N., Amaral, D. G., Bloss, C. S., Libiger, O., Schork, N. J., Darst, B. F., Casey, B. J., Chang, L., Ernst, T., Frazier, J., Kaufmann, W. E., Keating, B., Kenet, T., Kennedy, D., Mostofsky, S., Murray, S. S., Sowell, E. R., Bartsch, H., Kuperman, J. M., Brown, T. T., Hagler, D. J., Dale, A. M., Jernigan, T. L., St Pourcain, B., Davey Smith, G., Ring, S. M., Gruen, J. R., & Pediatric Imaging, Neurocognition, and Genetics Study (2013). Genome-wide association study of shared components of reading disability and language impairment. Genes, Brain and Behavior, 12(8), 792-801. doi:10.1111/gbb.12085.

    Abstract

    Written and verbal languages are neurobehavioral traits vital to the development of communication skills. Unfortunately, disorders involving these traits, specifically reading disability (RD) and language impairment (LI), are common and prevent affected individuals from developing adequate communication skills, leaving them at risk for adverse academic, socioeconomic and psychiatric outcomes. Both RD and LI are complex traits that frequently co-occur, leading us to hypothesize that these disorders share genetic etiologies. To test this, we performed a genome-wide association study on individuals affected with both RD and LI in the Avon Longitudinal Study of Parents and Children. The strongest associations were seen with markers in ZNF385D (OR = 1.81, P = 5.45 × 10^-7) and COL4A2 (OR = 1.71, P = 7.59 × 10^-7). Markers within NDST4 showed the strongest associations with LI individually (OR = 1.827, P = 1.40 × 10^-7). We replicated association of ZNF385D using receptive vocabulary measures in the Pediatric Imaging Neurocognitive Genetics study (P = 0.00245). We then used diffusion tensor imaging fiber tract volume data on 16 fiber tracts to examine the implications of replicated markers. ZNF385D was a predictor of overall fiber tract volumes in both hemispheres, as well as global brain volume. Here, we present evidence for ZNF385D as a candidate gene for RD and LI. The implication of transcription factor ZNF385D in RD and LI underscores the importance of transcriptional regulation in the development of higher order neurocognitive traits. Further study is necessary to discern target genes of ZNF385D and how it functions within neural development of fluent language.
  • Eijk, L. (2023). Linguistic alignment: The syntactic, prosodic, and segmental phonetic levels. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Eimer, M., Kiss, M., Press, C., & Sauter, D. (2009). The roles of feature-specific task set and bottom-up salience in attentional capture: An ERP study. Journal of Experimental Psychology: Human Perception and Performance, 35, 1316-1328. doi:10.1037/a0015872.

    Abstract

    We investigated the roles of top-down task set and bottom-up stimulus salience for feature-specific attentional capture. ERPs and behavioural performance were measured in two experiments where spatially nonpredictive cues preceded visual search arrays that included a colour-defined target. When cue arrays contained a target-colour singleton, behavioural spatial cueing effects were accompanied by a cue-induced N2pc component, indicative of attentional capture. Behavioural cueing effects and N2pc components were only minimally attenuated for non-singleton relative to singleton target-colour cues, demonstrating that top-down task set has a much greater impact on attentional capture than bottom-up salience. For nontarget-colour singleton cues, no N2pc was triggered, but an anterior N2 component indicative of top-down inhibition was observed. In Experiment 2, these cues produced an inverted behavioural cueing effect, which was accompanied by a delayed N2pc to targets presented at cued locations. These results suggest that perceptually salient visual stimuli without task-relevant features trigger a transient location-specific inhibition process that prevents attentional capture, but delays the selection of subsequent target events.
  • Eising, E., Datson, N. A., van den Maagdenberg, A. M., & Ferrari, M. D. (2013). Epigenetic mechanisms in migraine: A promising avenue? BMC Medicine, 11(1): 26. doi:10.1186/1741-7015-11-26.

    Abstract

    Migraine is a common, disabling brain disorder typically characterized by attacks of severe headache and associated with autonomic and neurological symptoms. Its etiology is far from resolved. This review will focus on evidence that epigenetic mechanisms play an important role in disease etiology. Epigenetics comprises both DNA methylation and post-translational modifications of the tails of histone proteins, affecting chromatin structure and gene expression. Besides playing a role in establishing cellular and developmental stage-specific regulation of gene expression, epigenetic processes are also important for programming lasting cellular responses to environmental signals. Epigenetic mechanisms may explain how non-genetic endogenous and exogenous factors such as female sex hormones, stress hormones, and inflammation may modulate attack frequency. Developing drugs that specifically target epigenetic mechanisms may open up exciting new avenues for the prophylactic treatment of migraine.
  • Eising, E., De Vries, B., Ferrari, M. D., Terwindt, G. M., & Van Den Maagdenberg, A. M. J. M. (2013). Pearls and pitfalls in genetic studies of migraine. Cephalalgia, 33(8), 614-625. doi:10.1177/0333102413484988.

    Abstract

    Purpose of review: Migraine is a prevalent neurovascular brain disorder with a strong genetic component, and different methodological approaches have been implemented to identify the genes involved. This review focuses on pearls and pitfalls of these approaches and genetic findings in migraine. Summary: Common forms of migraine (i.e. migraine with and without aura) are thought to have a polygenic make-up, whereas rare familial hemiplegic migraine (FHM) presents with a monogenic pattern of inheritance. Until a few years ago only studies in FHM yielded causal genes, which were identified by a classical linkage analysis approach. Functional analyses of FHM gene mutations in cellular and transgenic animal models suggest abnormal glutamatergic neurotransmission as a possible key disease mechanism. Recently, a number of genes were discovered for the common forms of migraine using a genome-wide association (GWA) approach, which sheds first light on the pathophysiological mechanisms involved. Conclusions: Novel technological strategies such as next-generation sequencing, which can be implemented in future genetic migraine research, may aid the identification of novel FHM genes and promote the search for the missing heritability of common migraine.
  • Eisner, F., Melinger, A., & Weber, A. (2013). Constraints on the transfer of perceptual learning in accented speech. Frontiers in Psychology, 4: 148. doi:10.3389/fpsyg.2013.00148.

    Abstract

    The perception of speech sounds can be re-tuned rapidly through a mechanism of lexically-driven learning (Norris et al., 2003, Cognitive Psychology, 47). Here we investigated this type of learning for English voiced stop consonants, which are commonly de-voiced in word-final position by Dutch learners of English. Specifically, this study asked under which conditions the change in pre-lexical representation encodes phonological information about the position of the critical sound within a word. After exposure to a Dutch learner’s productions of de-voiced stops in word-final position (but not in any other positions), British English listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with voiceless final stops (e.g., ‘seat’) facilitated recognition of visual targets with voiced final stops (e.g., SEED). This learning generalized to test pairs where the critical contrast was in word-initial position, e.g., auditory primes such as ‘town’ facilitated recognition of visual targets like DOWN (Experiment 1). Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), or when the speaker was a native BE speaker who mimicked the word-final devoicing (Experiment 3). These results suggest that word position can be encoded in the pre-lexical adjustment to the accented phoneme contrast. Lexically-guided feedback, distributional properties of the input, and long-term representations of accents all appear to modulate the pre-lexical re-tuning of phoneme categories.
  • Ekerdt, C., Takashima, A., & McQueen, J. M. (2023). Memory consolidation in second language neurocognition. In K. Morgan-Short, & J. G. Van Hell (Eds.), The Routledge handbook of second language acquisition and neurolinguistics. Oxfordshire: Routledge.

    Abstract

    Acquiring a second language (L2) requires newly learned information to be integrated with existing knowledge. It has been proposed that several memory systems work together to enable this process of rapidly encoding new information and then slowly incorporating it with existing knowledge, such that it is consolidated and integrated into the language network without catastrophic interference. This chapter focuses on consolidation of L2 vocabulary. First, the complementary learning systems model is outlined, along with the model’s predictions regarding lexical consolidation. Next, word learning studies in first language (L1) that investigate the factors playing a role in consolidation, and the neural mechanisms underlying this, are reviewed. Using the L1 memory consolidation literature as background, the chapter then presents what is currently known about memory consolidation in L2 word learning. Finally, considering what is already known about L1 but not about L2, future research investigating memory consolidation in L2 neurocognition is proposed.
  • Emmendorfer, A. K., Bonte, M., Jansma, B. M., & Kotz, S. A. (2023). Sensitivity to syllable stress regularities in externally but not self‐triggered speech in Dutch. European Journal of Neuroscience, 58(1), 2297-2314. doi:10.1111/ejn.16003.

    Abstract

    Several theories of predictive processing propose reduced sensory and neural responses to anticipated events. Support comes from magnetoencephalography/electroencephalography (M/EEG) studies, showing reduced auditory N1 and P2 responses to self-generated compared to externally generated events, or when the timing and form of stimuli are more predictable. The current study examined the sensitivity of N1 and P2 responses to statistical speech regularities. We employed a motor-to-auditory paradigm comparing event-related potential (ERP) responses to externally and self-triggered pseudowords. Participants were presented with a cue indicating which button to press (motor-auditory condition) or which pseudoword would be presented (auditory-only condition). Stimuli consisted of the participant's own voice uttering pseudowords that varied in phonotactic probability and syllable stress. We expected to see N1 and P2 suppression for self-triggered stimuli, with greater suppression effects for more predictable features such as high phonotactic probability and first-syllable stress in pseudowords. In a temporal principal component analysis (PCA), we observed an interaction between syllable stress and condition for the N1, where second-syllable stress items elicited a larger N1 than first-syllable stress items, but only for externally generated stimuli. We further observed an effect of syllable stress on the P2, where first-syllable stress items elicited a larger P2. Strikingly, we did not observe motor-induced suppression for self-triggered stimuli for either the N1 or P2 component, likely due to the temporal predictability of the stimulus onset in both conditions. Taking into account previous findings, the current results suggest that sensitivity to syllable stress regularities depends on task demands.

  • Enard, W., Gehre, S., Hammerschmidt, K., Hölter, S. M., Blass, T., Somel, M., Brückner, M. K., Schreiweis, C., Winter, C., Sohr, R., Becker, L., Wiebe, V., Nickel, B., Giger, T., Müller, U., Groszer, M., Adler, T., Aguilar, A., Bolle, I., Calzada-Wack, J., Dalke, C., Ehrhardt, N., Favor, J., Fuchs, H., Gailus-Durner, V., Hans, W., Hölzlwimmer, G., Javaheri, A., Kalaydjiev, S., Kallnik, M., Kling, E., Kunder, S., Moßbrugger, I., Naton, B., Racz, I., Rathkolb, B., Rozman, J., Schrewe, A., Busch, D. H., Graw, J., Ivandic, B., Klingenspor, M., Klopstock, T., Ollert, M., Quintanilla-Martinez, L., Schulz, H., Wolf, E., Wurst, W., Zimmer, A., Fisher, S. E., Morgenstern, R., Arendt, T., Hrabé de Angelis, M., Fischer, J., Schwarz, J., & Pääbo, S. (2009). A humanized version of Foxp2 affects cortico-basal ganglia circuits in mice. Cell, 137(5), 961-971. doi:10.1016/j.cell.2009.03.041.

    Abstract

    It has been proposed that two amino acid substitutions in the transcription factor FOXP2 have been positively selected during human evolution due to effects on aspects of speech and language. Here, we introduce these substitutions into the endogenous Foxp2 gene of mice. Although these mice are generally healthy, they have qualitatively different ultrasonic vocalizations, decreased exploratory behavior and decreased dopamine concentrations in the brain suggesting that the humanized Foxp2 allele affects basal ganglia. In the striatum, a part of the basal ganglia affected in humans with a speech deficit due to a nonfunctional FOXP2 allele, we find that medium spiny neurons have increased dendrite lengths and increased synaptic plasticity. Since mice carrying one nonfunctional Foxp2 allele show opposite effects, this suggests that alterations in cortico-basal ganglia circuits might have been important for the evolution of speech and language in humans.
  • Enfield, N. J. (2013). Doing fieldwork on the body, language, and communication. In C. Müller, E. Fricke, S. Ladewig, A. Cienki, D. McNeill, & S. Teßendorf (Eds.), Handbook Body – Language – Communication. Volume 1 (pp. 974-981). Berlin: Mouton de Gruyter.
  • Enfield, N. J. (2009). Common tragedy [Review of the book The native mind and the cultural construction of nature by Scott Atran and Douglas Medin]. The Times Literary Supplement, September 18, 2009, 10-11.
  • Enfield, N. J. (2009). 'Case relations' in Lao, a radically isolating language. In A. L. Malčukov, & A. Spencer (Eds.), The Oxford handbook of case (pp. 808-819). Oxford: Oxford University Press.
  • Enfield, N. J. (2009). [Review of the book Serial verb constructions: A cross-linguistic typology ed. by Alexandra Y. Aikhenvald and R. M. W. Dixon]. Language, 85, 445-451. doi:10.1353/lan.0.0124.
  • Enfield, N. J. (2013). A ‘Composite Utterances’ approach to meaning. In C. Müller, E. Fricke, S. Ladewig, A. Cienki, D. McNeill, & S. Teßendorf (Eds.), Handbook Body – Language – Communication. Volume 1 (pp. 689-706). Berlin: Mouton de Gruyter.
  • Enfield, N. J., Dingemanse, M., Baranova, J., Blythe, J., Brown, P., Dirksmeyer, T., Drew, P., Floyd, S., Gipper, S., Gisladottir, R. S., Hoymann, G., Kendrick, K. H., Levinson, S. C., Magyari, L., Manrique, E., Rossi, G., San Roque, L., & Torreira, F. (2013). Huh? What? – A first survey in 21 languages. In M. Hayashi, G. Raymond, & J. Sidnell (Eds.), Conversational repair and human understanding (pp. 343-380). New York: Cambridge University Press.

    Abstract

    A comparison of conversation in twenty-one languages from around the world reveals commonalities and differences in the way that people do open-class other-initiation of repair (Schegloff, Jefferson, and Sacks, 1977; Drew, 1997). We find that speakers of all of the spoken languages in the sample make use of a primary interjection strategy (in English it is Huh?), where the phonetic form of the interjection is strikingly similar across the languages: a monosyllable featuring an open non-back vowel [a, æ, ə, ʌ], often nasalized, usually with rising intonation and sometimes an [h-] onset. We also find that most of the languages have another strategy for open-class other-initiation of repair, namely the use of a question word (usually “what”). Here we find significantly more variation across the languages. The phonetic form of the question word involved is completely different from language to language: e.g., English [wɑt] versus Cha'palaa [ti] versus Duna [aki]. Furthermore, the grammatical structure in which the repair-initiating question word can or must be expressed varies within and across languages. In this chapter we present data on these two strategies – primary interjections like Huh? and question words like What? – with discussion of possible reasons for the similarities and differences across the languages. We explore some implications for the notion of repair as a system, in the context of research on the typology of language use.

    The general outline of this chapter is as follows. We first discuss repair as a system across languages and then introduce the focus of the chapter: open-class other-initiation of repair. A discussion of the main findings follows, where we identify two alternative strategies in the data: an interjection strategy (Huh?) and a question word strategy (What?). Formal features and possible motivations are discussed for the interjection strategy and the question word strategy in order. A final section discusses bodily behavior including posture, eyebrow movements and eye gaze, both in spoken languages and in a sign language.
  • Enfield, N. J., & Levinson, S. C. (2009). Metalanguage for speech acts. In A. Majid (Ed.), Field manual volume 12 (pp. 51-53). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883559.

    Abstract

    People of all cultures have some degree of concern with categorizing types of communicative social action. All languages have words with meanings like speak, say, talk, complain, curse, promise, accuse, nod, wink, point and chant. But the exact distinctions they make will differ in both quantity and quality. How is communicative social action categorised across languages and cultures? The goal of this task is to establish a basis for cross-linguistic comparison of native metalanguages for social action.
  • Enfield, N. J. (2009). Language and culture. In L. Wei, & V. Cook (Eds.), Contemporary Applied Linguistics Volume 2 (pp. 83-97). London: Continuum.
  • Enfield, N. J. (2013). Language, culture, and mind: Trends and standards in the latest pendulum swing. Journal of the Royal Anthropological Institute, 19, 155-169. doi:10.1111/1467-9655.12008.

    Abstract

    The study of language in relation to anthropological questions has deep and varied roots, from Humboldt and Boas, Malinowski and Vygotsky, Sapir and Whorf, Wittgenstein and Austin, through to the linguistic anthropologists of now. A recent book by the linguist Daniel Everett, Language: The Cultural Tool (2012), aims to bring some of the issues to a popular audience, with a focus on the idea that language is a tool for social action. I argue in this essay that the book does not represent the state of the art in this field, falling short on three central desiderata of a good account for the social functions of language and its relation to culture. I frame these desiderata in terms of three questions, here termed the cognition question, the causality question, and the culture question. I look at the relevance of this work for socio-cultural anthropology, in the context of a major interdisciplinary pendulum swing that is incipient in the study of language today, a swing away from formalist, innatist perspectives, and towards functionalist, empiricist perspectives. The role of human diversity and culture is foregrounded in all of this work. To that extent, Everett’s book is representative, but the quality of his argument is neither strong in itself nor representative of a movement that ought to be of special interest to socio-cultural anthropologists.
  • Enfield, N. J. (2009). Language: Social motives for syntax [Review of the book Origins of human communication by Michael Tomasello]. Science, 324(5923), 39. doi:10.1126/science.1172660.
  • Enfield, N. J. (2013). Hippie, interrupted. In J. Barker, & J. Lindquist (Eds.), Figures of Southeast Asian modernity (pp. 101-103). Honolulu: University of Hawaii Press.
  • Enfield, N. J. (2009). Everyday ritual in the residential world. In G. Senft, & E. B. Basso (Eds.), Ritual communication (pp. 51-80). Oxford: Berg.
  • Enfield, N. J., & Diffloth, G. (2009). Phonology and sketch grammar of Kri, a Vietic language of Laos. Cahiers de Linguistique - Asie Orientale (CLAO), 38(1), 3-69.
  • Enfield, N. J. (2013). Reference in conversation. In J. Sidnell, & T. Stivers (Eds.), The handbook of conversation analysis (pp. 433-454). Malden, MA: Wiley-Blackwell. doi:10.1002/9781118325001.ch21.

    Abstract

    This chapter contains sections titled: Introduction; Lexical Selection in Reference: Introductory Examples of Reference to Times; Multiple “Preferences”; Future Directions; Conclusion.
  • Enfield, N. J. (2013). Rejoinder to Daniel Everett [Comment]. Journal of the Royal Anthropological Institute, 19(3), 649. doi:10.1111/1467-9655.12056.
  • Enfield, N. J. (2009). Relationship thinking and human pragmatics. Journal of Pragmatics, 41, 60-78. doi:10.1016/j.pragma.2008.09.007.

    Abstract

    The approach to pragmatics explored in this article focuses on elements of social interaction which are of universal relevance, and which may provide bases for a comparative approach. The discussion is anchored by reference to a fragment of conversation from a video-recording of Lao speakers during a home visit in rural Laos. The following points are discussed. First, an understanding of the full richness of context is indispensable for a proper understanding of any interaction. Second, human relationships are a primary locus of social organization, and as such constitute a key focus for pragmatics. Third, human social intelligence forms a universal cognitive under-carriage for interaction, and requires careful cross-cultural study. Fourth, a neo-Peircean framework for a general understanding of semiotic processes gives us a way of stepping away from language as our basic analytical frame. It is argued that in order to get a grip on pragmatics across human groups, we need to take a comparative approach in the biological sense—i.e. with reference to other species as well. From this perspective, human pragmatics is about using semiotic resources to try to meet goals in the realm of social relationships.
  • Enfield, N. J. (2013). Relationship thinking: Agency, enchrony, and human sociality. New York: Oxford University Press.
  • Enfield, N. J. (2009). The anatomy of meaning: Speech, gesture, and composite utterances. Cambridge: Cambridge University Press.
  • Enfield, N. J., Levinson, S. C., & Stivers, T. (2009). Social action formulation: A "10-minutes" task. In A. Majid (Ed.), Field manual volume 12 (pp. 54-55). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883564.

    Abstract

    Human actions in the social world – like greeting, requesting, complaining, accusing, asking, confirming, etc. – are recognised through the interpretation of signs. Language is where much of the action is, but gesture, facial expression and other bodily actions matter as well. The goal of this task is to establish a maximally rich description of a representative, good quality piece of conversational interaction, which will serve as a reference point for comparative exploration of the status of social actions and their formulation across languages.
  • Enfield, N. J. (2013). The virtual you and the real you [Book review]. The Times Literary Supplement, April 12, 2013(5741), 31-32.

    Abstract

    Review of the books "Virtually you. The dangerous powers of the e-personality", by Elias Aboujaoude; "The big disconnect. The story of technology and loneliness", by Giles Slade; and "Net smart. How to thrive online", by Howard Rheingold.
  • Erard, M. (2009). How Many Languages? Linguists Discover New Tongues in China. Science, 324(5925), 332-333. doi:10.1126/science.324.5925.332a.
  • Erb, J., Henry, M. J., Eisner, F., & Obleser, J. (2013). The brain dynamics of rapid perceptual adaptation to adverse listening conditions. The Journal of Neuroscience, 33, 10688-10697. doi:10.1523/JNEUROSCI.4596-12.2013.

    Abstract

    Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences (in which the temporal envelope of the acoustic signal is preserved, while spectral information is highly degraded). Clear-speech trials were included as baseline. An additional fMRI experiment on amplitude modulation rate discrimination quantified the convergence of neural mechanisms that subserve coping with challenging listening conditions for speech and non-speech. First, the degraded speech task revealed an “executive” network (comprising the anterior insula and anterior cingulate cortex), parts of which were also activated in the non-speech discrimination task. Second, trial-by-trial fluctuations in successful comprehension of degraded speech drove hemodynamic signal change in classic “language” areas (bilateral temporal cortices). Third, as listeners perceptually adapted to degraded speech, downregulation in a cortico-striato-thalamo-cortical circuit was observable. The present data highlight differential upregulation and downregulation in auditory–language and executive networks, respectively, with important subcortical contributions when successfully adapting to a challenging listening situation.
  • Ernestus, M. (2013). Halve woorden [Inaugural lecture]. Nijmegen: Radboud University.

    Abstract

    Inaugural lecture delivered upon acceptance of the office of Professor of Psycholinguistics at the Faculty of Arts of Radboud University Nijmegen, on Friday, 18 January 2013.
  • Ernestus, M. (2009). The roles of reconstruction and lexical storage in the comprehension of regular pronunciation variants. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 1875-1878). Causal Productions Pty Ltd.

    Abstract

    This paper investigates how listeners process regular pronunciation variants, resulting from simple general reduction processes. Study 1 shows that when listeners are presented with new words, they store the pronunciation variants presented to them, whether these are unreduced or reduced. Listeners thus store information on word-specific pronunciation variation. Study 2 suggests that if participants are presented with regularly reduced pronunciations, they also reconstruct and store the corresponding unreduced pronunciations. These unreduced pronunciations apparently have special status. Together the results support hybrid models of speech processing, assuming roles for both exemplars and abstract representations.
  • Escudero, P., Broersma, M., & Simon, E. (2013). Learning words in a third language: Effects of vowel inventory and language proficiency. Language and Cognitive Processes, 28, 746-761. doi:10.1080/01690965.2012.662279.

    Abstract

    This study examines the effect of L2 and L3 proficiency on L3 word learning. Native speakers of Spanish with different proficiencies in L2 English and L3 Dutch and a control group of Dutch native speakers participated in a Dutch word learning task involving minimal and non-minimal word pairs. The minimal word pairs were divided into ‘minimal-easy’ and ‘minimal-difficult’ pairs on the basis of whether or not they are known to pose perceptual problems for L1 Spanish learners. Spanish speakers’ proficiency in Dutch and English was independently established by their scores on general language comprehension tests. All participants were trained and subsequently tested on the mapping between pseudo-words and non-objects. The results revealed that, first, both native and non-native speakers produced more errors and longer reaction times for minimal than for non-minimal word pairs, and secondly, Spanish learners had more errors and longer reaction times for minimal-difficult than for minimal-easy pairs. The latter finding suggests that there is a strong continuity between sound perception and L3 word recognition. With respect to proficiency, only the learner’s proficiency in their L2, namely English, predicted their accuracy on L3 minimal pairs. This shows that learning an L2 with a larger vowel inventory than the L1 is also beneficial for word learning in an L3 with a similarly large vowel inventory.

  • Evans, N., Levinson, S. C., & Sterelny, K. (2021). Kinship revisited. Biological theory, 16, 123-126. doi:10.1007/s13752-021-00384-9.
  • Evans, N., Levinson, S. C., & Sterelny, K. (Eds.). (2021). Thematic issue on evolution of kinship systems [Special Issue]. Biological Theory, 16.
  • Evans, D. M., Zhu, G., Dy, V., Heath, A. C., Madden, P. A. F., Kemp, J. P., McMahon, G., St Pourcain, B., Timpson, N. J., Golding, J., Lawlor, D. A., Steer, C., Montgomery, G. W., Martin, N. G., Smith, G. D., & Whitfield, J. B. (2013). Genome-wide association study identifies loci affecting blood copper, selenium and zinc. Human Molecular Genetics, 22(19), 3998-4006. doi:10.1093/hmg/ddt239.

    Abstract

    Genetic variation affecting absorption, distribution or excretion of essential trace elements may lead to health effects related to sub-clinical deficiency. We have tested for allelic effects of single-nucleotide polymorphisms (SNPs) on blood copper, selenium and zinc in a genome-wide association study using two adult cohorts from Australia and the UK. Participants were recruited in Australia from twins and their families and in the UK from pregnant women. We measured erythrocyte Cu, Se and Zn (Australian samples) or whole blood Se (UK samples) using inductively coupled plasma mass spectrometry. Genotyping was performed with Illumina chips and > 2.5 million SNPs were imputed from HapMap data. Genome-wide significant associations were found for each element. For Cu, there were two loci on chromosome 1 (most significant SNPs rs1175550, P = 5.03 × 10⁻¹⁰, and rs2769264, P = 2.63 × 10⁻²⁰); for Se, a locus on chromosome 5 was significant in both cohorts (combined P = 9.40 × 10⁻²⁸ at rs921943); and for Zn three loci on chromosomes 8, 15 and X showed significant results (rs1532423, P = 6.40 × 10⁻¹²; rs2120019, P = 1.55 × 10⁻¹⁸; and rs4826508, P = 1.40 × 10⁻¹², respectively). The Se locus covers three genes involved in metabolism of sulphur-containing amino acids and potentially of the analogous Se compounds; the chromosome 8 locus for Zn contains multiple genes for the Zn-containing enzyme carbonic anhydrase. Where potentially relevant genes were identified, they relate to metabolism of the element (Se) or to the presence at high concentration of a metal-containing protein (Cu).
  • Evans, D. M., Brion, M. J. A., Paternoster, L., Kemp, J. P., McMahon, G., Munafò, M., Whitfield, J. B., Medland, S. E., Montgomery, G. W., Timpson, N. J., St Pourcain, B., Lawlor, D. A., Martin, N. G., Dehghan, A., Hirschhorn, J., Davey Smith, G., The GIANT consortium, The CRP consortium, & The TAG Consortium (2013). Mining the Human Phenome Using Allelic Scores That Index Biological Intermediates. PLoS Genet, 9(10): e1003919. doi:10.1371/journal.pgen.1003919.

    Abstract

    Author Summary: The standard approach in genome-wide association studies is to analyse the relationship between genetic variants and disease one marker at a time. Significant associations between markers and disease are then used as evidence to implicate biological intermediates and pathways likely to be involved in disease aetiology. However, single genetic variants typically only explain small amounts of disease risk. Our idea is to construct allelic scores that explain greater proportions of the variance in biological intermediates than single markers, and then use these scores to data mine genome-wide association studies. We show how allelic scores derived from known variants as well as allelic scores derived from hundreds of thousands of genetic markers across the genome explain significant portions of the variance in body mass index, levels of C-reactive protein, and LDLc cholesterol, and many of these scores show expected correlations with disease. Power calculations confirm the feasibility of scaling our strategy to the analysis of tens of thousands of molecular phenotypes in large genome-wide meta-analyses. Our method represents a simple way in which tens of thousands of molecular phenotypes could be screened for potential causal relationships with disease.
  • Evans, N., & Levinson, S. C. (2009). The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, 32(5), 429-492. doi:10.1017/S0140525X0999094X.

    Abstract

    Talk of linguistic universals has given cognitive scientists the impression that languages are all built to a common pattern. In fact, there are vanishingly few universals of language in the direct sense that all languages exhibit them. Instead, diversity can be found at almost every level of linguistic organization. This fundamentally changes the object of enquiry from a cognitive science perspective. This target article summarizes decades of cross-linguistic work by typologists and descriptive linguists, showing just how few and unprofound the universal characteristics of language are, once we honestly confront the diversity offered to us by the world's 6,000 to 8,000 languages. After surveying the various uses of “universal,” we illustrate the ways languages vary radically in sound, meaning, and syntactic organization, and then we examine in more detail the core grammatical machinery of recursion, constituency, and grammatical relations. Although there are significant recurrent patterns in organization, these are better explained as stable engineering solutions satisfying multiple design constraints, reflecting both cultural-historical factors and the constraints of human cognition.
  • Evans, N., & Levinson, S. C. (2009). With diversity in mind: Freeing the language sciences from universal grammar [Author's response]. Behavioral and Brain Sciences, 32(5), 472-484. doi:10.1017/S0140525X09990525.

    Abstract

    Our response takes advantage of the wide-ranging commentary to clarify some aspects of our original proposal and augment others. We argue against the generative critics of our coevolutionary program for the language sciences, defend the use of close-to-surface models as minimizing crosslinguistic data distortion, and stress the growing role of stochastic simulations in making generalized historical accounts testable. These methods lead the search for general principles away from idealized representations and towards selective processes. Putting cultural evolution central in understanding language diversity makes learning fundamental in the cognition of language: increasingly powerful models of general learning, paired with channelled caregiver input, seem set to manage language acquisition without recourse to any innate “universal grammar.” Understanding why human language has no clear parallels in the animal world requires a cross-species perspective: crucial ingredients are vocal learning (for which there are clear non-primate parallels) and an intention-attributing cognitive infrastructure that provides a universal base for language evolution. We conclude by situating linguistic diversity within a broader trend towards understanding human cognition through the study of variation in, for example, human genetics, neurocognition, and psycholinguistic processing.
  • Everett, D., & Majid, A. (2009). Adventures in the jungle of language [Interview by Asifa Majid and Jon Sutton]. The Psychologist, 22(4), 312-313. Retrieved from http://www.thepsychologist.org.uk/archive/archive_home.cfm?volumeID=22&editionID=174&ArticleID=1494.

    Abstract

    Daniel Everett has spent his career in the Amazon, challenging some fundamental ideas about language and thought. Asifa Majid and Jon Sutton pose the questions.
  • Eviatar, Z., & Huettig, F. (Eds.). (2021). Literacy and writing systems [Special Issue]. Journal of Cultural Cognitive Science.
  • Eviatar, Z., & Huettig, F. (2021). The literate mind. Journal of Cultural Cognitive Science, 5, 81-84. doi:10.1007/s41809-021-00086-5.
  • Falk, J. J., Zhang, Y., Scheutz, M., & Yu, C. (2021). Parents adaptively use anaphora during parent-child social interaction. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 1472-1478). Vienna: Cognitive Science Society.

    Abstract

    Anaphora, a ubiquitous feature of natural language, poses a particular challenge to young children as they first learn language due to its referential ambiguity. In spite of this, parents and caregivers use anaphora frequently in child-directed speech, potentially presenting a risk to effective communication if children do not yet have the linguistic capabilities of resolving anaphora successfully. Through an eye-tracking study in a naturalistic free-play context, we examine the strategies that parents employ to calibrate their use of anaphora to their child's linguistic development level. We show that, in this way, parents are able to intuitively scaffold the complexity of their speech such that greater referential ambiguity does not hurt overall communication success.
  • Fatemifar, G., Hoggart, C. J., Paternoster, L., Kemp, J. P., Prokopenko, I., Horikoshi, M., Wright, V. J., Tobias, J. H., Richmond, S., Zhurov, A. I., Toma, A. M., Pouta, A., Taanila, A., Sipila, K., Lähdesmäki, R., Pillas, D., Geller, F., Feenstra, B., Melbye, M., Nohr, E. A., Ring, S. M., St Pourcain, B., Timpson, N. J., Davey Smith, G., Jarvelin, M.-R., & Evans, D. M. (2013). Genome-wide association study of primary tooth eruption identifies pleiotropic loci associated with height and craniofacial distances. Human Molecular Genetics, 22(18), 3807-3817. doi:10.1093/hmg/ddt231.

    Abstract

    Twin and family studies indicate that the timing of primary tooth eruption is highly heritable, with estimates typically exceeding 80%. To identify variants involved in primary tooth eruption, we performed a population-based genome-wide association study of 'age at first tooth' and 'number of teeth' using 5998 and 6609 individuals, respectively, from the Avon Longitudinal Study of Parents and Children (ALSPAC) and 5403 individuals from the 1966 Northern Finland Birth Cohort (NFBC1966). We tested 2 446 724 SNPs imputed in both studies. Analyses were controlled for the effect of gestational age, sex and age of measurement. Results from the two studies were combined using fixed effects inverse variance meta-analysis. We identified a total of 15 independent loci, with 10 loci reaching genome-wide significance (P < 5 × 10⁻⁸) for 'age at first tooth' and 11 loci for 'number of teeth'. Together, these associations explain 6.06% of the variation in 'age of first tooth' and 4.76% of the variation in 'number of teeth'. The identified loci included eight previously unidentified loci, some containing genes known to play a role in tooth and other developmental pathways, including an SNP in the protein-coding region of BMP4 (rs17563, P = 9.080 × 10⁻¹⁷). Three of these loci, containing the genes HMGA2, AJUBA and ADK, also showed evidence of association with craniofacial distances, particularly those indexing facial width. Our results suggest that the genome-wide association approach is a powerful strategy for detecting variants involved in tooth eruption, and potentially craniofacial growth and more generally organ development.
  • Favier, S., & Huettig, F. (2021). Are there core and peripheral syntactic structures? Experimental evidence from Dutch native speakers with varying literacy levels. Lingua, 251: 102991. doi:10.1016/j.lingua.2020.102991.

    Abstract

    Some theorists posit the existence of a ‘core’ grammar that virtually all native speakers acquire, and a ‘peripheral’ grammar that many do not. We investigated the viability of such a categorical distinction in the Dutch language. We first consulted linguists’ intuitions as to the ‘core’ or ‘peripheral’ status of a wide range of grammatical structures. We then tested a selection of core- and peripheral-rated structures on naïve participants with varying levels of literacy experience, using grammaticality judgment as a proxy for receptive knowledge. Overall, participants demonstrated better knowledge of ‘core’ structures than ‘peripheral’ structures, but the considerable variability within these categories was strongly suggestive of a continuum rather than a categorical distinction between them. We also hypothesised that individual differences in the knowledge of core and peripheral structures would reflect participants’ literacy experience. This was supported only by a small trend in our data. The results fit best with the notion that more frequent syntactic structures are mastered by more people than infrequent ones and challenge the received sense of a categorical core-periphery distinction.
  • Favier, S., Meyer, A. S., & Huettig, F. (2021). Literacy can enhance syntactic prediction in spoken language processing. Journal of Experimental Psychology: General, 150(10), 2167-2174. doi:10.1037/xge0001042.

    Abstract

    Language comprehenders can use syntactic cues to generate predictions online about upcoming language. Previous research with reading-impaired adults and healthy, low-proficiency adult and child learners suggests that reading skills are related to prediction in spoken language comprehension. Here we investigated whether differences in literacy are also related to predictive spoken language processing in non-reading-impaired proficient adult readers with varying levels of literacy experience. Using the visual world paradigm enabled us to measure prediction based on syntactic cues in the spoken sentence, prior to the (predicted) target word. Literacy experience was found to be the strongest predictor of target anticipation, independent of general cognitive abilities. These findings suggest that a) experience with written language can enhance syntactic prediction of spoken language in normal adult language users, and b) processing skills can be transferred to related tasks (from reading to listening) if the domains involve similar processes (e.g., predictive dependencies) and representations (e.g., syntactic).

    Additional information

    Online supplementary material
  • Favier, S., & Huettig, F. (2021). Long-term written language experience affects grammaticality judgments and usage but not priming of spoken sentences. Quarterly Journal of Experimental Psychology, 74(8), 1378-1395. doi:10.1177/17470218211005228.

    Abstract

    ‘Book language’ offers a richer linguistic experience than typical conversational speech in terms of its syntactic properties. Here, we investigated the role of long-term syntactic experience on syntactic knowledge and processing. In a pre-registered study with 161 adult native Dutch speakers with varying levels of literacy, we assessed the contribution of individual differences in written language experience to offline and online syntactic processes. Offline syntactic knowledge was assessed as accuracy in an auditory grammaticality judgment task in which we tested violations of four Dutch grammatical norms. Online syntactic processing was indexed by syntactic priming of the Dutch dative alternation, using a comprehension-to-production priming paradigm with auditory presentation. Controlling for the contribution of non-verbal IQ, verbal working memory, and processing speed, we observed a robust effect of literacy experience on the detection of grammatical norm violations in spoken sentences, suggesting that exposure to the syntactic complexity and diversity of written language has specific benefits for general (modality-independent) syntactic knowledge. We replicated previous results by finding robust comprehension-to-production structural priming, both with and without lexical overlap between prime and target. Although literacy experience affected the usage of syntactic alternates in our large sample, it did not modulate their priming. We conclude that amount of experience with written language increases explicit awareness of grammatical norm violations and changes the usage of (PO vs. DO) dative spoken sentences but has no detectable effect on their implicit syntactic priming in proficient language users. These findings constrain theories about the effect of long-term experience on syntactic processing.
  • Fedor, A., Pléh, C., Brauer, J., Caplan, D., Friederici, A. D., Gulyás, B., Hagoort, P., Nazir, T., & Singer, W. (2009). What are the brain mechanisms underlying syntactic operations? In D. Bickerton, & E. Szathmáry (Eds.), Biological foundations and origin of syntax (pp. 299-324). Cambridge, MA: MIT Press.

    Abstract

    This chapter summarizes the extensive discussions that took place during the Forum as well as the subsequent months thereafter. It assesses current understanding of the neuronal mechanisms that underlie syntactic structure and processing. [...] It is posited that to understand the neurobiology of syntax, it might be worthwhile to shift the balance from comprehension to syntactic encoding in language production.
  • Fedorenko, E., Patel, A., Casasanto, D., Winawer, J., & Gibson, E. (2009). Structural integration in language and music: Evidence for a shared system. Memory & Cognition, 37, 1-9. doi:10.3758/MC.37.1.1.

    Abstract

    In this study, we investigate whether language and music share cognitive resources for structural processing. We report an experiment that used sung materials and manipulated linguistic complexity (subject-extracted relative clauses, object-extracted relative clauses) and musical complexity (in-key critical note, out-of-key critical note, auditory anomaly on the critical note involving a loudness increase). The auditory-anomaly manipulation was included in order to test whether the difference between in-key and out-of-key conditions might be due to any salient, unexpected acoustic event. The critical dependent measure involved comprehension accuracies to questions about the propositional content of the sentences asked at the end of each trial. The results revealed an interaction between linguistic and musical complexity such that the difference between the subject- and object-extracted relative clause conditions was larger in the out-of-key condition than in the in-key and auditory-anomaly conditions. These results provide evidence for an overlap in structural processing between language and music.
  • Lu, A. T., Fei, Z., Haghani, A., Robeck, T. R., Zoller, J. A., Li, C. Z., Lowe, R., Yan, Q., Zhang, J., Vu, H., Ablaeva, J., Acosta-Rodriguez, V. A., Adams, D. M., Almunia, J., Aloysius, A., Ardehali, R., Arneson, A., Baker, C. S., Banks, G., Belov, K., Bennett, N. C., Black, P., Blumstein, D. T., Bors, E. K., Breeze, C. E., Brooke, R. T., Brown, J. L., Carter, G. G., Caulton, A., Cavin, J. M., Chakrabarti, L., Chatzistamou, I., Chen, H., Cheng, K., Chiavellini, P., Choi, O. W., Clarke, S. M., Cooper, L. N., Cossette, M. L., Day, J., DeYoung, J., DiRocco, S., Dold, C., Ehmke, E. E., Emmons, C. K., Emmrich, S., Erbay, E., Erlacher-Reid, C., Faulkes, C. G., Ferguson, S. H., Finno, C. J., Flower, J. E., Gaillard, J. M., Garde, E., Gerber, L., Gladyshev, V. N., Gorbunova, V., Goya, R. G., Grant, M. J., Green, C. B., Hales, E. N., Hanson, M. B., Hart, D. W., Haulena, M., Herrick, K., Hogan, A. N., Hogg, C. J., Hore, T. A., Huang, T., Izpisua Belmonte, J. C., Jasinska, A. J., Jones, G., Jourdain, E., Kashpur, O., Katcher, H., Katsumata, E., Kaza, V., Kiaris, H., Kobor, M. S., Kordowitzki, P., Koski, W. R., Krützen, M., Kwon, S. B., Larison, B., Lee, S. G., Lehmann, M., Lemaitre, J. F., Levine, A. J., Li, C., Li, X., Lim, A. R., Lin, D. T. S., Lindemann, D. M., Little, T. J., Macoretta, N., Maddox, D., Matkin, C. O., Mattison, J. A., McClure, M., Mergl, J., Meudt, J. J., Montano, G. A., Mozhui, K., Munshi-South, J., Naderi, A., Nagy, M., Narayan, P., Nathanielsz, P. W., Nguyen, N. B., Niehrs, C., O’Brien, J. K., O’Tierney Ginn, P., Odom, D. T., Ophir, A. G., Osborn, S., Ostrander, E. A., Parsons, K. M., Paul, K. C., Pellegrini, M., Peters, K. J., Pedersen, A. B., Petersen, J. L., Pietersen, D. W., Pinho, G. M., Plassais, J., Poganik, J. R., Prado, N. A., Reddy, P., Rey, B., Ritz, B. R., Robbins, J., Rodriguez, M., Russell, J., Rydkina, E., Sailer, L. L., Salmon, A. B., Sanghavi, A., Schachtschneider, K. M., Schmitt, D., Schmitt, T., Schomacher, L., Schook, L. B., Sears, K. E., Seifert, A. W., Seluanov, A., Shafer, A. B. A., Shanmuganayagam, D., Shindyapina, A. V., Simmons, M., Singh, K., Sinha, I., Slone, J., Snell, R. G., Soltanmaohammadi, E., Spangler, M. L., Spriggs, M. C., Staggs, L., Stedman, N., Steinman, K. J., Stewart, D. T., Sugrue, V. J., Szladovits, B., Takahashi, J. S., Takasugi, M., Teeling, E. C., Thompson, M. J., Van Bonn, B., Vernes, S. C., Villar, D., Vinters, H. V., Wallingford, M. C., Wang, N., Wayne, R. K., Wilkinson, G. S., Williams, C. K., Williams, R. W., Yang, X. W., Yao, M., Young, B. G., Zhang, B., Zhang, Z., Zhao, P., Zhao, Y., Zhou, W., Zimmermann, J., Ernst, J., Raj, K., & Horvath, S. (2023). Universal DNA methylation age across mammalian tissues. Nature Aging, 3, 1144-1166. doi:10.1038/s43587-023-00462-6.

    Abstract

    Aging, often considered a result of random cellular damage, can be accurately estimated using DNA methylation profiles, the foundation of pan-tissue epigenetic clocks. Here, we demonstrate the development of universal pan-mammalian clocks, using 11,754 methylation arrays from our Mammalian Methylation Consortium, which encompass 59 tissue types across 185 mammalian species. These predictive models estimate mammalian tissue age with high accuracy (r > 0.96). Age deviations correlate with human mortality risk, mouse somatotropic axis mutations and caloric restriction. We identified specific cytosines with methylation levels that change with age across numerous species. These sites, highly enriched in polycomb repressive complex 2-binding locations, are near genes implicated in mammalian development, cancer, obesity and longevity. Our findings offer new evidence suggesting that aging is evolutionarily conserved and intertwined with developmental processes across all mammals.
  • Felker, E. R., Broersma, M., & Ernestus, M. (2021). The role of corrective feedback and lexical guidance in perceptual learning of a novel L2 accent in dialogue. Applied Psycholinguistics, 42, 1029-1055. doi:10.1017/S0142716421000205.

    Abstract

    Perceptual learning of novel accents is a critical skill for second-language speech perception, but little is known about the mechanisms that facilitate perceptual learning in communicative contexts. To study perceptual learning in an interactive dialogue setting while maintaining experimental control of the phonetic input, we employed an innovative experimental method incorporating prerecorded speech into a naturalistic conversation. Using both computer-based and face-to-face dialogue settings, we investigated the effect of two types of learning mechanisms in interaction: explicit corrective feedback and implicit lexical guidance. Dutch participants played an information-gap game featuring minimal pairs with an accented English speaker whose /ε/ pronunciations were shifted to /ɪ/. Evidence for the vowel shift came either from corrective feedback about participants’ perceptual mistakes or from onscreen lexical information that constrained their interpretation of the interlocutor’s words. Corrective feedback explicitly contrasting the minimal pairs was more effective than generic feedback. Additionally, both receiving lexical guidance and exhibiting more uptake for the vowel shift improved listeners’ subsequent online processing of accented words. Comparable learning effects were found in both the computer-based and face-to-face interactions, showing that our results can be generalized to a more naturalistic learning context than traditional computer-based perception training programs.
  • Felker, E. R. (2021). Learning second language speech perception in natural settings. PhD Thesis, Radboud University, Nijmegen.
  • Fernandes, T., Arunkumar, M., & Huettig, F. (2021). The role of the written script in shaping mirror-image discrimination: Evidence from illiterate, Tamil literate, and Tamil-Latin-alphabet bi-literate adults. Cognition, 206: 104493. doi:10.1016/j.cognition.2020.104493.

    Abstract

    Learning a script with mirrored graphs (e.g., d ≠ b) requires overcoming the evolutionary-old perceptual tendency to process mirror images as equivalent. Thus, breaking mirror invariance offers an important tool for understanding cultural re-shaping of evolutionarily ancient cognitive mechanisms. Here we investigated the role of script (i.e., presence vs. absence of mirrored graphs: Latin alphabet vs. Tamil) by revisiting mirror-image processing by illiterate, Tamil monoliterate, and Tamil-Latin-alphabet bi-literate adults. Participants performed two same-different tasks (one orientation-based, another shape-based) on Latin-alphabet letters. Tamil monoliterate were significantly better than illiterate and showed good explicit mirror-image discrimination. However, only bi-literate adults fully broke mirror invariance: slower shape-based judgments for mirrored than identical pairs and reduced disadvantage in orientation-based over shape-based judgments of mirrored pairs. These findings suggest learning a script with mirrored graphs is the strongest force for breaking mirror invariance.

    Additional information

    supplementary material
  • Ferrari, A., & Noppeney, U. (2021). Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biology, 19(11): e3001465. doi:10.1371/journal.pbio.3001465.

    Abstract

    To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.

    Additional information

    supporting information
  • Ferré, G. (2023). Pragmatic gestures and prosody. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527215.

    Abstract

    The study presented here focuses on two pragmatic gestures: the hand flip (Ferré, 2011), a gesture of the Palm Up Open Hand/PUOH family (Müller, 2004), and the closed hand, which can be considered as the opposite kind of movement to the opening of the hands present in the PUOH gesture. Whereas one of the functions of the hand flip has been described as presenting a new point in speech (Cienki, 2021), the closed hand gesture has not yet been described in the literature to the best of our knowledge. It can however be conceived of as having the opposite function of announcing the end of a point in discourse. The object of the present study is therefore to determine, with the study of prosodic features, if the two gestures are found in the same type of speech units and what their respective scope is. Drawing from a corpus of three TED Talks in French, the prosodic characteristics of the speech that accompanies the two gestures will be examined. The hypothesis developed in the present paper is that their scope should be reflected in the prosody of accompanying speech, especially pitch key, tone, and relative pitch range. The prediction is that hand flips and closing hand gestures are expected to be located at the periphery of Intonation Phrases (IPs), Inter-Pausal Units (IPUs) or more conversational Turn Constructional Units (TCUs), and are likely to be co-occurrent with pauses in speech. But because of the natural slope of intonation in speech, the speech that accompanies early gestures in Intonation Phrases should reveal different features from the speech at the end of intonational units. Tones should be different as well, considering the prosodic structure of spoken French.
  • Ferreira, F., & Huettig, F. (2023). Fast and slow language processing: A window into dual-process models of cognition. [Open Peer commentary on De Neys]. Behavioral and Brain Sciences, 46: e121. doi:10.1017/S0140525X22003041.

    Abstract

    Our understanding of dual-process models of cognition may benefit from a consideration of language processing, as language comprehension involves fast and slow processes analogous to those used for reasoning. More specifically, De Neys's criticisms of the exclusivity assumption and the fast-to-slow switch mechanism are consistent with findings from the literature on the construction and revision of linguistic interpretations.
  • Filippi, P. (2013). Connessioni regolate: la chiave ontologica alle specie-specificità? [Regulated connections: The ontological key to species-specificity?]. Epekeina, 2(1), 203-223. doi:10.7408/epkn.epkn.v2i1.41.

    Abstract

    This article focuses on “perceptual syntax”, the faculty to process patterns in sensory stimuli. Specifically, this study addresses the ability to perceptually connect elements that are: (1) of the same sensory modality; (2) spatially and temporally non-adjacent; or (3) within multiple sensorial domains. The underlying hypothesis is that in each animal species, this core cognitive faculty enables the perception of the environment-world (Umwelt) and consequently the possibility to survive within it. Importantly, it is suggested that in doing so, perceptual syntax determines (and guides) each species’ ontological access to the world. In support of this hypothesis, research on perceptual syntax in nonverbal individuals (preverbal infants and nonhuman animals) and humans is reviewed. This comparative approach results in theoretical remarks on human cognition and ontology, pointing to the conclusion that the ability to map cross-modal connections through verbal language is what makes humans’ form of life species-typical.
  • Filippi, P. (2013). Specifically Human: Going Beyond Perceptual Syntax. Biosemiotics, 7(1), 111-123. doi:10.1007/s12304-013-9187-3.

    Abstract

    The aim of this paper is to help refine the definition of humans as “linguistic animals” in light of a comparative approach on nonhuman animals’ cognitive systems. As Uexküll & Kriszat (1934/1992) have theorized, the epistemic access to each species-specific environment (Umwelt) is driven by different biocognitive processes. Within this conceptual framework, I identify the salient cognitive process that distinguishes each species typical perception of the world as the faculty of language meant in the following operational definition: the ability to connect different elements according to structural rules. In order to draw some conclusions about humans’ specific faculty of language, I review different empirical studies on nonhuman animals’ ability to recognize formal patterns of tokens. I suggest that what differentiates human language from other animals’ cognitive systems is the ability to categorize the units of a pattern, going beyond its perceptual aspects. In fact, humans are the only species known to be able to combine semantic units within a network of combinatorial logical relationships (Deacon 1997) that can be linked to the state of affairs in the external world (Wittgenstein 1922). I assume that this ability is the core cognitive process underlying a) the capacity to speak (or to reason) in verbal propositions and b) the general human faculty of language expressed, for instance, in the ability to draw visual conceptual maps or to compute mathematical expressions. In light of these considerations, I conclude providing some research questions that could lead to a more detailed comparative exploration of the faculty of language.
  • Fink, B., Bläsing, B., Ravignani, A., & Shackelford, T. K. (2021). Evolution and functions of human dance. Evolution and Human Behavior, 42(4), 351-360. doi:10.1016/j.evolhumbehav.2021.01.003.

    Abstract

    Dance is ubiquitous among humans and has received attention from several disciplines. Ethnographic documentation suggests that dance has a signaling function in social interaction. It can influence mate preferences and facilitate social bonds. Research has provided insights into the proximate mechanisms of dance, individually or when dancing with partners or in groups. Here, we review dance research from an evolutionary perspective. We propose that human dance evolved from ordinary (non-communicative) movements to communicate socially relevant information accurately. The need for accurate social signaling may have accompanied increases in group size and population density. Because of its complexity in production and display, dance may have evolved as a vehicle for expressing social and cultural information. Mating-related qualities and motives may have been the predominant information derived from individual dance movements, whereas group dance offers the opportunity for the exchange of socially relevant content, for coordinating actions among group members, for signaling coalitional strength, and for stabilizing group structures. We conclude that, despite the cultural diversity in dance movements and contexts, the primary communicative functions of dance may be the same across societies.
  • Fisher, N., Hadley, L., Corps, R. E., & Pickering, M. (2021). The effects of dual-task interference in predicting turn-ends in speech and music. Brain Research, 1768: 147571. doi:10.1016/j.brainres.2021.147571.

    Abstract

    Determining when a partner’s spoken or musical turn will end requires well-honed predictive abilities. Evidence suggests that our motor systems are activated during perception of both speech and music, and it has been argued that motor simulation is used to predict turn-ends across domains. Here we used a dual-task interference paradigm to investigate whether motor simulation of our partner’s action underlies our ability to make accurate turn-end predictions in speech and in music. Furthermore, we explored how specific this simulation is to the action being predicted. We conducted two experiments, one investigating speech turn-ends, and one investigating music turn-ends. In each, 34 proficient pianists predicted turn-endings while (1) passively listening, (2) producing an effector-specific motor activity (mouth/hand movement), or (3) producing a task- and effector-specific motor activity (mouthing words/fingering a piano melody). In the speech experiment, any movement during speech perception disrupted predictions of spoken turn-ends, whether the movement was task-specific or not. In the music experiment, only task-specific movement (i.e., fingering a piano melody) disrupted predictions of musical turn-ends. These findings support the use of motor simulation to make turn-end predictions in both speech and music but suggest that the specificity of this simulation may differ between domains.
  • Fisher, S. E. (2013). Building bridges between genes, brains and language. In J. J. Bolhuis, & M. Everaert (Eds.), Birdsong, speech and language: Exploring the evolution of mind and brain (pp. 425-454). Cambridge, MA: MIT Press.
  • Fisher, S. E., & Ridley, M. (2013). Culture, genes, and the human revolution. Science, 340(6135), 929-930. doi:10.1126/science.1236171.

    Abstract

    State-of-the-art DNA sequencing is providing ever more detailed insights into the genomes of humans, extant apes, and even extinct hominins (1–3), offering unprecedented opportunities to uncover the molecular variants that make us human. A common assumption is that the emergence of behaviorally modern humans after 200,000 years ago required—and followed—a specific biological change triggered by one or more genetic mutations. For example, Klein has argued that the dawn of human culture stemmed from a single genetic change that “fostered the uniquely modern ability to adapt to a remarkable range of natural and social circumstance” (4). But are evolutionary changes in our genome a cause or a consequence of cultural innovation (see the figure)?

  • Fisher, S. E., & Scharff, C. (2009). FOXP2 as a molecular window into speech and language [Review article]. Trends in Genetics, 25, 166-177. doi:10.1016/j.tig.2009.03.002.

    Abstract

    Rare mutations of the FOXP2 transcription factor gene cause a monogenic syndrome characterized by impaired speech development and linguistic deficits. Recent genomic investigations indicate that its downstream neural targets make broader impacts on common language impairments, bridging clinically distinct disorders. Moreover, the striking conservation of both FoxP2 sequence and neural expression in different vertebrates facilitates the use of animal models to study ancestral pathways that have been recruited towards human speech and language. Intriguingly, reduced FoxP2 dosage yields abnormal synaptic plasticity and impaired motor-skill learning in mice, and disrupts vocal learning in songbirds. Converging data indicate that Foxp2 is important for modulating the plasticity of relevant neural circuits. This body of research represents the first functional genetic forays into neural mechanisms contributing to human spoken language.
  • Fisher, V. J. (2021). Embodied songs: Insights into the nature of cross-modal meaning-making within sign language informed, embodied interpretations of vocal music. Frontiers in Psychology, 12: 624689. doi:10.3389/fpsyg.2021.624689.

    Abstract

    Embodied song practices involve the transformation of songs from the acoustic modality into an embodied-visual form, to increase meaningful access for d/Deaf audiences. This goes beyond the translation of lyrics, by combining poetic sign language with other bodily movements to embody the para-linguistic expressive and musical features that enhance the message of a song. To date, the limited research into this phenomenon has focussed on linguistic features and interactions with rhythm. The relationship between bodily actions and music has not been probed beyond an assumed implication of conformance. However, as the primary objective is to communicate equivalent meanings, the ways that the acoustic and embodied-visual signals relate to each other should reveal something about underlying conceptual agreement. This paper draws together a range of pertinent theories from within a grounded cognition framework including semiotics, analogy mapping and cross-modal correspondences. These theories are applied to embodiment strategies used by prominent d/Deaf and hearing Dutch practitioners, to unpack the relationship between acoustic songs, their embodied representations, and their broader conceptual and affective meanings. This leads to the proposition that meaning primarily arises through shared patterns of internal relations across a range of amodal and cross-modal features with an emphasis on dynamic qualities. These analogous patterns can inform metaphorical interpretations and trigger shared emotional responses. This exploratory survey offers insights into the nature of cross-modal and embodied meaning-making, as a jumping-off point for further research.
  • Fitneva, S. A., Lam, N. H. L., & Dunfield, K. A. (2013). The development of children's information gathering: To look or to ask? Developmental Psychology, 49(3), 533-542. doi:10.1037/a0031326.

    Abstract

    The testimony of others and direct experience play a major role in the development of children's knowledge. Children actively use questions to seek others' testimony and explore the environment. It is unclear though whether children distinguish when it is better to ask from when it is better to try to find an answer by oneself. In 2 experiments, we examined the ability of 4- and 6-year-olds to select between looking and asking to determine visible and invisible properties of entities (e.g., hair color vs. knowledge of French). All children chose to look more often for visible than invisible properties. However, only 6-year-olds chose above chance to look for visible properties and to ask for invisible properties. Four-year-olds showed a preference for looking in one experiment and asking in the other. The results suggest substantial development in the efficacy of children's learning in early childhood.
  • Fitz, H. (2009). Neural syntax. PhD Thesis, Universiteit van Amsterdam, Institute for Logic, Language, and Computation.

    Abstract

    Children learn their mother tongue spontaneously and effortlessly through communicative interaction with their environment; they do not have to be taught explicitly or learn how to learn first. The ambient language to which children are exposed, however, is highly variable and arguably deficient with regard to the learning target. Nonetheless, most normally developing children learn their native language rapidly and with ease. To explain this accomplishment, many theories of acquisition posit innate constraints on learning, or even a biological endowment for language which is specific to language. Usage-based theories, on the other hand, place more emphasis on the role of experience and domain-general learning mechanisms than on innate language-specific knowledge. But languages are lexically open and combinatorial in structure, so no amount of experience covers their expressivity. Usage-based theories therefore have to explain how children can generalize the properties of their linguistic input to an adult-like grammar. In this thesis I provide an explicit computational mechanism with which usage-based theories of language can be tested and evaluated. The focus of my work lies on complex syntax and the human ability to form sentences which express more than one proposition by means of relativization. This `capacity for recursion' is a hallmark of an adult grammar and, as some have argued, the human language faculty itself. The manuscript is organized as follows. In the second chapter, I give an overview of results that characterize the properties of neural networks as mathematical objects and review previous attempts at modelling the acquisition of complex syntax with such networks. The chapter introduces the conceptual landscape in which the current work is located. In the third chapter, I argue that the construction and use of meaning is essential in child language acquisition and adult processing. Neural network models need to incorporate this dimension of human linguistic behavior. I introduce the Dual-path model of sentence production and syntactic development which is able to represent semantics and learns from exposure to sentences paired with their meaning (cf. Chang et al. 2006). I explain the architecture of this model, motivate critical assumptions behind its design, and discuss existing research using this model. The fourth chapter describes and compares several extensions of the basic architecture to accommodate the processing of multi-clause utterances. These extensions are evaluated against computational desiderata, such as good learning and generalization performance and the parsimony of input representations. A single-best solution for encoding the meaning of complex sentences with restrictive relative clauses is identified, which forms the basis for all subsequent simulations. Chapter five analyzes the learning dynamics in more detail. I first examine the model's behavior for different relative clause types. Syntactic alternations prove to be particularly difficult to learn because they complicate the meaning-to-form mapping the model has to acquire. In the second part, I probe the internal representations the model has developed during learning. It is argued that the model acquires the argument structure of the construction types in its input language and represents the hierarchical organization of distinct multi-clause utterances. The juice of this thesis is contained in chapters six to eight. 
    In chapter six, I test the Dual-path model's generalization capacities in a variety of tasks. I show that its syntactic representations are sufficiently transparent to allow structural generalization to novel complex utterances. Semantic similarities between novel and familiar sentence types play a critical role in this task. The Dual-path model also has a capacity for generalizing familiar words to novel slots in novel constructions (strong semantic systematicity). Moreover, I identify learning conditions under which the model displays recursive productivity. It is argued that the model's behavior is consistent with human behavior in that production accuracy degrades with depth of embedding, and right-branching is learned faster than center-embedding recursion. In chapter seven, I address the issue of learning complex polar interrogatives in the absence of positive exemplars in the input. I show that the Dual-path model can acquire the syntax of these questions from simpler and similar structures which are warranted in a child's linguistic environment. The model's errors closely match children's errors, and it is suggested that children might not require an innate learning bias to acquire auxiliary fronting. Since the model does not implement a traditional kind of language-specific universal grammar, these results are relevant to the poverty of the stimulus debate. English relative clause constructions give rise to similar performance orderings in adult processing and child language acquisition. This pattern matches the typological universal called the noun phrase accessibility hierarchy. I propose an input-based explanation of this data in chapter eight. The Dual-path model displays this ordering in syntactic development when exposed to plausible input distributions. But it is possible to manipulate and completely remove the ordering by varying properties of the input from which the model learns. This indicates, I argue, that patterns of interference and facilitation among input structures can explain the hierarchy when all structures are simultaneously learned and represented over a single set of connection weights. Finally, I draw conclusions from this work, address some unanswered questions, and give a brief outlook on how this research might be continued.

    Additional information

    http://dare.uva.nl/record/328271
  • Fitz, H., & Chang, F. (2009). Syntactic generalization in a connectionist model of sentence production. In J. Mayor, N. Ruh, & K. Plunkett (Eds.), Connectionist models of behaviour and cognition II: Proceedings of the 11th Neural Computation and Psychology Workshop (pp. 289-300). River Edge, NJ: World Scientific Publishing.

    Abstract

    We present a neural-symbolic learning model of sentence production which displays strong semantic systematicity and recursive productivity. Using this model, we provide evidence for the data-driven learnability of complex yes/no- questions.
  • Fiveash, A., Ferreri, L., Bouwer, F. L., Kösem, A., Moghimi, S., Ravignani, A., Keller, P. E., & Tillmann, B. (2023). Can rhythm-mediated reward boost learning, memory, and social connection? Perspectives for future research. Neuroscience and Biobehavioral Reviews, 149: 105153. doi:10.1016/j.neubiorev.2023.105153.

    Abstract

    Studies of rhythm processing and of reward have progressed separately, with little connection between the two. However, consistent links between rhythm and reward are beginning to surface, with research suggesting that synchronization to rhythm is rewarding, and that this rewarding element may in turn also boost this synchronization. The current mini review shows that the combined study of rhythm and reward can be beneficial to better understand their independent and combined roles across two central aspects of cognition: 1) learning and memory, and 2) social connection and interpersonal synchronization; which have so far been studied largely independently. From this basis, it is discussed how connections between rhythm and reward can be applied to learning and memory and social connection across different populations, taking into account individual differences, clinical populations, human development, and animal research. Future research will need to consider the rewarding nature of rhythm, and that rhythm can in turn boost reward, potentially enhancing other cognitive and social processes.
  • Flecken, M., & Gerwien, J. (2013). Grammatical aspect modulates event duration estimations: findings from Dutch. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th annual meeting of the Cognitive Science Society (CogSci 2013) (pp. 2309-2314). Austin,TX: Cognitive Science Society.
  • Flecken, M., von Stutterheim, C., & Carroll, M. (2013). Principles of information organization in L2 use: Complex patterns of conceptual transfer. International review of applied linguistics, 51(2), 229-242. doi:10.1515/iral-2013-0010.
  • Floyd, S. (2013). [Review of the book Lessons from a Quechua strongwoman: ideophony, dialogue and perspective, by Janis Nuckolls, 2010]. Journal of Linguistic Anthropology, 22, 256-258. doi:10.1111/j.1548-1395.2012.01166.x.
  • Floyd, S. (2009). Nexos históricos, gramaticales y culturales de los números en cha'palaa [Historical, grammatical and cultural connections of Cha'palaa numerals]. In Proceedings of the Conference on Indigenous Languages of Latin America (CILLA) IV.

    Abstract

    Los idiomas sudamericanas tienen una diversidad de sistemas numéricos, desde sistemas con solamente dos o tres términos en algunos idiomas amazónicos hasta sistemas con numerales extendiendo a miles. Una mirada al sistema del idioma cha'palaa de Ecuador demuestra rasgos de base-2, base-5, base-10 y base-20, ligados a diferentes etapas de cambio, desarrollo y contacto lingüístico. Conocer estas etapas nos permite proponer algunas correlaciones con lo que conocemos de la historia de contactos culturales en la región. The South American languages have diverse types of numeral systems, from systems of just two or three terms in some Amazonian languages to systems extending into the thousands. A look at the system of the Cha'palaa language of Ecuador demonstrates base-2, base-5, base-10 and base-20 features, linked to different stages of change, development and language contact. Learning about these stages permits us to propose some correlations between them and what we know about the history of cultural contact in the region.
  • Floyd, S. (2013). Semantic transparency and cultural calquing in the Northwest Amazon. In P. Epps, & K. Stenzel (Eds.), Upper Rio Negro: Cultural and linguistic interaction in northwestern Amazonia (pp. 271-308). Rio de Janiero: Museu do Indio. Retrieved from http://www.museunacional.ufrj.br/ppgas/livros_ele.html.

    Abstract

    The ethnographic literature has sometimes described parts of the northwest Amazon as areas of shared culture across linguistic groups. This paper illustrates how a principle of semantic transparency across languages is a key means of establishing elements of a common regional culture through practices like the calquing of ethnonyms and toponyms so that they are semantically, but not phonologically, equivalent across languages. It places the upper Rio Negro area of the northwest Amazon in a general discussion of cross-linguistic naming practices in South America and considers the extent to which a preference for semantic transparency can be linked to cases of widespread cultural ‘calquing’, in which culturally-important meanings are kept similar across different linguistic systems. It also addresses the principle of semantic transparency beyond specific referential phrases and into larger discourse structures. It concludes that an attention to semiotic practices in multilingual settings can provide new and more complex ways of thinking about the idea of shared culture.
  • Foley, W., & Van Valin Jr., R. D. (2009). Functional syntax and universal grammar (Repr.). Cambridge University Press.

    Abstract

    The key argument of this book, originally published in 1984, is that when human beings communicate with each other by means of a natural language they typically do not do so in simple sentences but rather in connected discourse - complex expressions made up of a number of clauses linked together in various ways. A necessary precondition for intelligible discourse is the speaker’s ability to signal the temporal relations between the events that are being discussed and to refer to the participants in those events in such a way that it is clear who is being talked about. A great deal of the grammatical machinery in a language is devoted to this task, and Functional Syntax and Universal Grammar explores how different grammatical systems accomplish it. This book is an important attempt to integrate the study of linguistic form with the study of language use and meaning. It will be of particular interest to field linguists and those concerned with typology and language universals, and also to anthropologists involved in the study of language function.
  • Folia, V., Forkstam, C., Hagoort, P., & Petersson, K. M. (2009). Language comprehension: The interplay between form and content. In N. Taatgen, & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 1686-1691). Austin, TX: Cognitive Science Society.

    Abstract

    In a 2x2 event-related FMRI study we find support for the idea that the inferior frontal cortex, centered on Broca’s region and its homologue, is involved in constructive unification operations during the structure-building process in parsing for comprehension. Tentatively, we provide evidence for a role of the dorsolateral prefrontal cortex centered on BA 9/46 in the control component of the language system. Finally, the left temporo-parietal cortex, in the vicinity of Wernicke’s region, supports the interaction between the syntax of gender agreement and sentence-level semantics.
  • Forkstam, C., Jansson, A., Ingvar, M., & Petersson, K. M. (2009). Modality transfer of acquired structural regularities: A preference for an acoustic route. In N. Taatgen, & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.

    Abstract

    Human implicit learning can be investigated with implicit artificial grammar learning, a simple model for aspects of natural language acquisition. In this paper we investigate the remaining effect of modality transfer in syntactic classification of an acquired grammatical sequence structure after implicit grammar acquisition. Participants practiced either on acoustically presented syllable sequences or visually presented consonant letter sequences. During classification we independently manipulated the statistical frequency-based and rule-based characteristics of the classification stimuli. Participants performed reliably above chance on the within-modality classification task, although more so for those working on syllable sequence acquisition. These subjects were also the only group that kept a significant performance level in transfer classification. We speculate that this finding is particularly relevant to the ecological validity of the input signal in artificial grammar learning and in language learning paradigms at large.
  • Frances, C., Navarra-Barindelli, E., & Martin, C. D. (2021). Inhibitory and facilitatory effects of phonological and orthographic similarity on L2 word recognition across modalities in bilinguals. Scientific Reports, 11: 12812. doi:10.1038/s41598-021-92259-z.

    Abstract

    Language perception studies on bilinguals often show that words that share form and meaning across languages (cognates) are easier to process than words that share only meaning. This facilitatory phenomenon is known as the cognate effect. Most previous studies have shown this effect visually, whereas the auditory modality as well as the interplay between type of similarity and modality remain largely unexplored. In this study, highly proficient late Spanish–English bilinguals carried out a lexical decision task in their second language, both visually and auditorily. Words had high or low phonological and orthographic similarity, fully crossed. We also included orthographically identical words (perfect cognates). Our results suggest that similarity in the same modality (i.e., orthographic similarity in the visual modality and phonological similarity in the auditory modality) leads to improved signal detection, whereas similarity across modalities hinders it. We provide support for the idea that perfect cognates are a special category within cognates. Results suggest a need for a conceptual and practical separation between types of similarity in cognate studies. The theoretical implication is that the representations of items are active in both modalities of the non-target language during language processing, which needs to be incorporated into our current processing models.

    Additional information

    supplementary information
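
    The signal detection analysis mentioned in this abstract is conventionally summarised with the sensitivity index d', computed from hit and false-alarm rates in the lexical decision task. The snippet below is a generic sketch of that computation with a standard log-linear correction; it is not the authors' analysis code, and the counts in the example are invented.

        # Generic d' (d-prime) sketch for a lexical decision task; illustrative only.
        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """Sensitivity d' = Z(hit rate) - Z(false-alarm rate).

            The log-linear correction keeps both rates away from 0 and 1,
            where the z-transform would be infinite.
            """
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Invented example: 46 of 50 words accepted, 8 of 50 nonwords falsely accepted.
        print(round(d_prime(46, 4, 8, 42), 2))

    In this framing, improved signal detection for same-modality similarity corresponds to a higher d' in that condition than in the cross-modality conditions.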
  • Frances, C. (2021). Semantic richness, semantic context, and language learning. PhD Thesis, Universidad del País Vasco-Euskal Herriko Unibertsitatea, Donostia.

    Abstract

    As knowing a foreign language becomes a necessity in the modern world, a large portion of the population is faced with the challenge of learning a language in a classroom. This, in turn, presents a unique set of difficulties. Acquiring a language with limited and artificial exposure makes learning new information and vocabulary particularly difficult. The purpose of this thesis is to help us understand how we can compensate—at least partially—for these difficulties by presenting information in a way that aids learning. In particular, I focused on variables that affect semantic richness—meaning the amount and variability of information associated with a word. Some factors that affect semantic richness are intrinsic to the word and others pertain to that word’s relationship with other items and information. This latter group depends on the context around the to-be-learned items rather than on the words themselves. These variables are easier to manipulate than intrinsic qualities, making them more accessible tools for teaching and understanding learning. I focused on two factors: the emotionality of the surrounding semantic context and contextual diversity.
    Publication 1 (Frances, de Bruin, et al., 2020b) focused on content learning in a foreign language and on whether the emotionality—positive or neutral—of the semantic context surrounding key information aided its learning. This built on prior research showing a reduction in emotionality in a foreign language. Participants were taught information embedded in either positive or neutral semantic contexts, in either their native or their foreign language. When they were then tested on these embedded facts, participants’ performance decreased in the foreign language. More importantly, they remembered information from the positive semantic contexts better than information from the neutral ones.
    In Publication 2 (Frances, de Bruin, et al., 2020a), I focused on how emotionality affected vocabulary learning. I taught participants the names of novel items described in either positive or neutral terms, in either their native or their foreign language. Participants were then asked to recall and recognize each object’s name when cued with its image. The effects of language varied with the difficulty of the task, appearing in recall but not in recognition. Most importantly, learning the words in a positive context improved learning, particularly of the association between the image of an object and its name.
    In Publication 3 (Frances, Martin, et al., 2020), I explored the effects of contextual diversity—namely, the number of texts a word appears in—on native and foreign language word learning. Participants read several texts containing novel pseudowords. The total number of encounters with the novel words was held constant, but the words appeared in 1, 2, 4, or 8 texts, in either the participants’ native or foreign language. Increasing contextual diversity improved recall and recognition, as well as the ability to match each word with its meaning. Using a foreign language only affected performance when participants had to identify the meaning of a word quickly.
    Overall, I found that the tested contextual factors related to semantic richness—the emotionality of the semantic context and contextual diversity—can be manipulated to improve learning in a foreign language. Using positive emotionality not only improved learning in the foreign language, but did so to the same extent as in the native language. On a theoretical level, this suggests that the reduction in emotionality in a foreign language is not ubiquitous and might relate to the way in which that language was learned.
    The third article shows an experimental manipulation of contextual diversity and how it can affect the learning of a lexical item, even when the amount of information known about the item is kept constant. As in the case of emotionality, the effects of contextual diversity were the same across languages. Although deducing words from context depends on vocabulary size, this does not seem to diminish the benefits of contextual diversity in the foreign language.
    Finally, as a whole, the articles in this compendium provide evidence that some aspects of semantic richness can be manipulated contextually to improve learning and memory. In addition, the effects of these factors seem to be independent of language status—native or foreign—when learning new content. This suggests that learning in a foreign language and in a native language are not as different as I initially hypothesized, allowing us to take advantage of native language learning tools in the foreign language as well.
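
    To make the contextual-diversity manipulation of Publication 3 concrete, the sketch below spreads a fixed number of encounters with each pseudoword over 1, 2, 4, or 8 texts, keeping the total constant as the design requires. The total of eight encounters is an assumed value chosen so that the eight-text condition has exactly one encounter per text; the code is an illustration, not the thesis materials.

        # Illustrative sketch of a contextual-diversity manipulation: each pseudoword
        # occurs a fixed number of times in total, spread over 1, 2, 4, or 8 texts.
        # The total of 8 encounters is an assumption, not a value from the thesis.
        TOTAL_ENCOUNTERS = 8

        def assign_encounters(n_texts, total=TOTAL_ENCOUNTERS):
            """Spread `total` encounters as evenly as possible over `n_texts` texts."""
            base, remainder = divmod(total, n_texts)
            return [base + (1 if i < remainder else 0) for i in range(n_texts)]

        for n_texts in (1, 2, 4, 8):
            print(n_texts, "texts ->", assign_encounters(n_texts))
        # 1 texts -> [8]
        # 2 texts -> [4, 4]
        # 4 texts -> [2, 2, 2, 2]
        # 8 texts -> [1, 1, 1, 1, 1, 1, 1, 1]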
