Publications

  • Dingemanse, M., Perlman, M., & Perniss, P. (2020). Construals of iconicity: Experimental approaches to form-meaning resemblances in language. Language and Cognition, 12(1), 1-14. doi:10.1017/langcog.2019.48.

    Abstract

    While speculations on form–meaning resemblances in language go back millennia, the experimental study of iconicity is only about a century old. Here we take stock of experimental work on iconicity and present a double special issue with a diverse set of new contributions. We contextualise the work by introducing a typology of approaches to iconicity in language. Some approaches construe iconicity as a discrete property that is either present or absent; others treat it as involving semiotic relationships that come in kinds; and yet others see it as a gradient substance that comes in degrees. We show the benefits and limitations that come with each of these construals and stress the importance of developing accounts that can fluently switch between them. With operationalisations of iconicity that are well defined yet flexible enough to deal with differences in tasks, modalities, and levels of analysis, experimental research on iconicity is well equipped to contribute to a comprehensive science of language.
  • Dingemanse, M. (2020). Resource-rationality beyond individual minds: The case of interactive language use. Behavioral and Brain Sciences, 43, 23-24. doi:10.1017/S0140525X19001638.

    Abstract

    Resource-rational approaches offer much promise for understanding human cognition, especially if they can reach beyond the confines of individual minds. Language allows people to transcend individual resource limitations by augmenting computation and enabling distributed cognition. Interactive language use, an environment where social rational agents routinely deal with resource constraints together, offers a natural laboratory to test resource-rationality in the wild.
  • Dingemanse, M. (2020). Between sound and speech: Liminal signs in interaction. Research on Language and Social Interaction, 53(1), 188-196. doi:10.1080/08351813.2020.1712967.

    Abstract

    When people talk, they recruit a wide range of expressive devices for interactional work, from sighs, sniffs, clicks, and whistles to other conduct that borders on the linguistic. These resources represent some of the more elusive yet no less powerful aspects of the interactional machinery as they are used in the management of turn and sequence and the marking of stance and affect. Phenomena long assumed to be beyond the purview of linguistic inquiry emerge as systematically deployed practices whose ambiguous degree of control and convention allows participants to carry out subtle interactional work without committing to specific words. While these resources have been characterised as non-lexical, non-verbal, or non-conventional, I propose they are unified in their liminality: they work well precisely because they equivocate between sound and speech. The empirical study of liminal signs shows the promise of sequential analysis for building a science of language on interactional foundations.
  • Dingemanse, M., & Van Leeuwen, T. (2015). Boekoeboekoe is mollig: Taal als samenspel van de zintuigen. Onze Taal, (12), 344-345.
  • Dingemanse, M. (2020). Der Raum zwischen unseren Köpfen. Technology Review, 2020(13), 10-15.

    Abstract

    Current conceptions of brain-to-brain interfaces attempt to bypass language. But when we refine them to more fully realise their collaborative potential, we find language — or at least a language-like infrastructure for communication and coordination — slipping through the back door. It wouldn't be the first time that language reinvented itself.
  • Dingemanse, M., Blasi, D. E., Lupyan, G., Christiansen, M. H., & Monaghan, P. (2015). Arbitrariness, iconicity and systematicity in language. Trends in Cognitive Sciences, 19(10), 603-615. doi:10.1016/j.tics.2015.07.013.

    Abstract

    The notion that the form of a word bears an arbitrary relation to its meaning accounts only partly for the attested form-to-meaning correspondences in the world’s languages. Recent research suggests a more textured view of vocabulary structure, in which arbitrariness is complemented by iconicity (aspects of form resemble aspects of meaning) and systematicity (statistical regularities in forms predict function). Experimental evidence suggests these form-to-meaning correspondences serve different functions in language processing, development and communication: systematicity facilitates category learning by means of phonological cues, iconicity facilitates word learning and communication by means of perceptuomotor analogies, and arbitrariness facilitates meaning individuation through distinctive forms. Processes of cultural evolution help explain how these competing motivations shape vocabulary structure.
  • Dingemanse, M. (2019). 'Ideophone' as a comparative concept. In K. Akita, & P. Pardeshi (Eds.), Ideophones, Mimetics, and Expressives (pp. 13-33). Amsterdam: John Benjamins. doi:10.1075/ill.16.02din.

    Abstract

    This chapter makes the case for ‘ideophone’ as a comparative concept: a notion that captures a recurrent typological pattern and provides a template for understanding language-specific phenomena that prove similar. It revises an earlier definition to account for the observation that ideophones typically form an open lexical class, and uses insights from canonical typology to explore the larger typological space. According to the resulting definition, a canonical ideophone is a member of an open lexical class of marked words that depict sensory imagery. The five elements of this definition can be seen as dimensions that together generate a possibility space to characterise cross-linguistic diversity in depictive means of expression. This approach allows for the systematic comparative treatment of ideophones and ideophone-like phenomena. Some phenomena in the larger typological space are discussed to demonstrate the utility of the approach: phonaesthemes in European languages, specialised semantic classes in West-Chadic, diachronic diversions in Aslian, and depicting constructions in signed languages.
  • Dingemanse, M. (2015). Folk definitions in linguistic fieldwork. In J. Essegbey, B. Henderson, & F. Mc Laughlin (Eds.), Language documentation and endangerment in Africa (pp. 215-238). Amsterdam: Benjamins. doi:10.1075/clu.17.09din.

    Abstract

    Informal paraphrases by native speaker consultants are crucial tools in linguistic fieldwork. When recorded, archived, and analysed, they offer rich data that can be mined for many purposes, from lexicography to semantic typology and from ethnography to the investigation of gesture and speech. This paper describes a procedure for the collection and analysis of folk definitions that are native (in the language under study rather than the language of analysis), informal (spoken rather than written), and multi-modal (preserving the integrity of gesture-speech composite utterances). The value of folk definitions is demonstrated using the case of ideophones, words that are notoriously hard to study using traditional elicitation methods. Three explanatory strategies used in a set of folk definitions of ideophones are examined: the offering of everyday contexts of use, the use of depictive gestures, and the use of sense relations as semantic anchoring points. Folk definitions help elucidate word meanings that are hard to capture, bring to light cultural background knowledge that often remains implicit, and take seriously the crucial involvement of native speaker consultants in linguistic fieldwork. They provide useful data for language documentation and are an essential element of any toolkit for linguistic and ethnographic field research.
  • Dingemanse, M., Blythe, J., & Dirksmeyer, T. (2018). Formats for other-initiation of repair across languages: An exercise in pragmatic typology. In I. Nikolaeva (Ed.), Linguistic Typology: Critical Concepts in Linguistics. Vol. 4 (pp. 322-357). London: Routledge.

    Abstract

    In conversation, people regularly deal with problems of speaking, hearing, and understanding. We report on a cross-linguistic investigation of the conversational structure of other-initiated repair (also known as collaborative repair, feedback, requests for clarification, or grounding sequences). We take stock of formats for initiating repair across languages (comparable to English huh?, who?, y’mean X?, etc.) and find that different languages make available a wide but remarkably similar range of linguistic resources for this function. We exploit the patterned variation as evidence for several underlying concerns addressed by repair initiation: characterising trouble, managing responsibility, and handling knowledge. The concerns do not always point in the same direction and thus provide participants in interaction with alternative principles for selecting one format over possible others. By comparing conversational structures across languages, this paper contributes to pragmatic typology: the typology of systems of language use and the principles that shape them.
  • Dingemanse, M. (2015). Ideophones and Reduplication: Depiction, Description, and the Interpretation of Repeated Talk in Discourse. Studies in Language, 39(4), 946-970. doi:10.1075/sl.39.4.05din.

    Abstract

    Repetition is one of the most basic operations on talk, often discussed for its iconic meanings. Ideophones are marked words that depict sensory imagery, often identified by their reduplicated forms. Yet not all reduplication is iconic, and not all ideophones are reduplicated. This paper discusses the semantics and pragmatics of repeated talk (repetition as well as reduplication), with special focus on the intersection of reduplicative processes and ideophonic words. Various formal features of ideophones suggest that it is fruitful to distinguish two modes of representation in language —description and depiction— along with cues like prosodic foregrounding that can steer listeners’ interpretation from one to the other. What is special about reduplication is that it can naturally partake in both of these modes of representation, which is why it is so common in ideophones as well as in other areas of grammar. Using evidence from Siwu, Korean, Semai and a range of other languages, this paper shows how the study of ideophones sheds light on the interpretation of repeated talk and can lead to a more holistic understanding of reduplicative phenomena in language.
  • Dingemanse, M., & Enfield, N. J. (2015). Other-initiated repair across languages: Towards a typology of conversational structures. Open Linguistics, 1, 98-118. doi:10.2478/opli-2014-0007.

    Abstract

    This special issue reports on a cross-linguistic study of other-initiated repair, a domain at the crossroads of language, mind, and social life. Other-initiated repair is part of a system of practices that people use to deal with problems of speaking, hearing and understanding. The contributions in this special issue describe the linguistic resources and interactional practices associated with other-initiated repair in ten different languages. Here we provide an overview of the research methods and the conceptual framework. The empirical base for the project consists of corpora of naturally occurring conversations, collected in fieldsites around the world. Methodologically, we combine qualitative analysis with a comparative-typological perspective, and we formulate principles for the cross-linguistic comparison of conversational structures. A key move, of broad relevance to pragmatic typology, is the recognition that formats for repair initiation form paradigm-like systems that are ultimately language-specific, and that comparison is best done at the level of the constitutive properties of these formats. These properties can be functional (concerning aspects of linguistic formatting) as well as sequential (concerning aspects of the interactional environment). We show how functional and sequential aspects of conversational structure can capture patterns of commonality and diversity in conversational structures within and across languages.
  • Dingemanse, M. (2015). Other-initiated repair in Siwu. Open Linguistics, 1, 232-255. doi:10.1515/opli-2015-0001.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair in Siwu, a Kwa language spoken in eastern Ghana. Other-initiated repair is the set of techniques used by people to deal with problems in speaking, hearing and understanding. Formats for repair initiation in Siwu exploit language-specific resources like question words and noun class morphology. At the same time, the basic structure of the system bears a strong similarity to other-initiated repair in other languages. Practices described for Siwu thus are potentially of broader relevance to the study of other-initiated repair. This article documents how different prosodic realisations of repair initiators may index social actions and features of the speech event; how two distinct roles of repetition in repair initiators are kept apart by features of turn design; and what kinds of items can be treated as ‘dispensable’ in resayings. By charting how other-initiated repair uses local linguistic resources and yet is shaped by interactional needs that transcend particular languages, this study contributes to the growing field of pragmatic typology: the study of systems of language use and the principles that shape them.
  • Dingemanse, M. (2020). Recruiting assistance and collaboration: A West-African corpus study. In S. Floyd, G. Rossi, & N. J. Enfield (Eds.), Getting others to do things: A pragmatic typology of recruitments (pp. 369-241). Berlin: Language Science Press. doi:10.5281/zenodo.4018388.

    Abstract

    Doing things for and with others is one of the foundations of human social life. This chapter studies a systematic collection of 207 requests for assistance and collaboration from a video corpus of everyday conversations in Siwu, a Kwa language of Ghana. A range of social action formats and semiotic resources reveals how language is adapted to the interactional challenges posed by recruiting assistance. While many of the formats bear a language-specific signature, their sequential and interactional properties show important commonalities across languages. Two tentative findings are put forward for further cross-linguistic examination: a “rule of three” that may play a role in the organisation of successive response pursuits, and a striking commonality in animal-oriented recruitments across languages that may be explained by convergent cultural evolution. The Siwu recruitment system emerges as one instance of a sophisticated machinery for organising collaborative action that transcends language and culture.
  • Dingemanse, M. (2018). Redrawing the margins of language: Lessons from research on ideophones. Glossa: a journal of general linguistics, 3(1): 4. doi:10.5334/gjgl.444.

    Abstract

    Ideophones (also known as expressives or mimetics, and including onomatopoeia) have been systematically studied in linguistics since the 1850s, when they were first described as a lexical class of vivid sensory words in West-African languages. This paper surveys the research history of ideophones, from its roots in African linguistics to its fruits in general linguistics and typology around the globe. It shows that despite a recurrent narrative of marginalisation, work on ideophones has made an impact in many areas of linguistics, from theories of phonological features to typologies of manner and motion, and from sound symbolism to sensory language. Due to their hybrid nature as gradient vocal gestures that grow roots in discrete linguistic systems, ideophones provide opportunities to reframe typological questions, reconsider the role of language ideology in linguistic scholarship, and rethink the margins of language. With ideophones increasingly being brought into the fold of the language sciences, this review synthesises past theoretical insights and empirical findings in order to enable future work to build on them.
  • Dingemanse, M., & Thompson, B. (2020). Playful iconicity: Structural markedness underlies the relation between funniness and iconicity. Language and Cognition, 12(1), 203-224. doi:10.1017/langcog.2019.49.

    Abstract

    Words like ‘waddle’, ‘flop’ and ‘zigzag’ combine playful connotations with iconic form-meaning resemblances. Here we propose that structural markedness may be a common factor underlying perceptions of playfulness and iconicity. Using collected and estimated lexical ratings covering a total of over 70,000 English words, we assess the robustness of this association. We identify cues of phonotactic complexity that covary with funniness and iconicity ratings and that, we propose, serve as metacommunicative signals to draw attention to words as playful and performative. To assess the generalisability of the findings we develop a method to estimate lexical ratings from distributional semantics and apply it to a dataset 20 times the size of the original set of human ratings. The method can be used more generally to extend coverage of lexical ratings. We find that it reliably reproduces correlations between funniness and iconicity as well as cues of structural markedness, though it also amplifies biases present in the human ratings. Our study shows that the playful and the poetic are part of the very texture of the lexicon.
  • Dingemanse, M., Roberts, S. G., Baranova, J., Blythe, J., Drew, P., Floyd, S., Gisladottir, R. S., Kendrick, K. H., Levinson, S. C., Manrique, E., Rossi, G., & Enfield, N. J. (2015). Universal Principles in the Repair of Communication Problems. PLoS One, 10(9): e0136100. doi:10.1371/journal.pone.0136100.

    Abstract

    There would be little adaptive value in a complex communication system like human language if there were no ways to detect and correct problems. A systematic comparison of conversation in a broad sample of the world’s languages reveals a universal system for the real-time resolution of frequent breakdowns in communication. In a sample of 12 languages of 8 language families of varied typological profiles we find a system of ‘other-initiated repair’, where the recipient of an unclear message can signal trouble and the sender can repair the original message. We find that this system is frequently used (on average about once per 1.4 minutes in any language), and that it has detailed common properties, contrary to assumptions of radical cultural variation. Unrelated languages share the same three functionally distinct types of repair initiator for signalling problems and use them in the same kinds of contexts. People prefer to choose the type that is the most specific possible, a principle that minimizes cost both for the sender being asked to fix the problem and for the dyad as a social unit. Disruption to the conversation is kept to a minimum, with the two-utterance repair sequence being on average no longer than the single utterance which is being fixed. The findings, controlled for historical relationships, situation types and other dependencies, reveal the fundamentally cooperative nature of human communication and offer support for the pragmatic universals hypothesis: while languages may vary in the organization of grammar and meaning, key systems of language use may be largely similar across cultural groups. They also provide a fresh perspective on controversies about the core properties of language, by revealing a common infrastructure for social interaction which may be the universal bedrock upon which linguistic diversity rests.
  • Dolscheid, S., Çelik, S., Erkan, H., Küntay, A., & Majid, A. (2020). Space-pitch associations differ in their susceptibility to language. Cognition, 196: 104073. doi:10.1016/j.cognition.2019.104073.

    Abstract

    To what extent are links between musical pitch and space universal, and to what extent are they shaped by language? There is contradictory evidence in support of both universality and linguistic relativity presently, leaving the question open. To address this, speakers of Dutch who talk about pitch in terms of spatial height and speakers of Turkish who use a thickness metaphor were tested in simple nonlinguistic space-pitch association tasks. Both groups showed evidence of a thickness-pitch association, but differed significantly in their height-pitch associations, suggesting the latter may be more susceptible to language. When participants had to match pitches to spatial stimuli where height and thickness were opposed (i.e., a thick line high in space vs. a thin line low in space), Dutch and Turkish differed in their relative preferences. Whereas Turkish participants predominantly opted for a thickness-pitch interpretation—even if this meant a reversal of height-pitch mappings—Dutch participants favored a height-pitch interpretation more often. These findings provide new evidence that speakers of different languages vary in their space-pitch associations, while at the same time showing such associations are not equally susceptible to linguistic influences. Some space-pitch (i.e., height-pitch) associations are more malleable than others (i.e., thickness-pitch).
  • Dolscheid, S., Hunnius, S., & Majid, A. (2015). When high pitches sound low: Children's acquisition of space-pitch metaphors. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 584-598). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2015/papers/0109/index.html.

    Abstract

    Some languages describe musical pitch in terms of spatial height; others in terms of thickness. Differences in pitch metaphors also shape adults’ nonlinguistic space-pitch representations. At the same time, 4-month-old infants have both types of space-pitch mappings available. This tension between prelinguistic space-pitch associations and their subsequent linguistic mediation raises questions about the acquisition of space-pitch metaphors. To address this issue, 5-year-old Dutch children were tested on their linguistic knowledge of pitch metaphors, and nonlinguistic space-pitch associations. Our results suggest 5-year-olds understand height-pitch metaphors in a reversed fashion (high pitch = low). Children displayed good comprehension of a thickness-pitch metaphor, despite its absence in Dutch. In nonlinguistic tasks, however, children did not show consistent space-pitch associations. Overall, pitch representations do not seem to be influenced by linguistic metaphors in 5-year-olds, suggesting that effects of language on musical pitch arise rather late during development.
  • Donnelly, S., & Kidd, E. (2020). Individual differences in lexical processing efficiency and vocabulary in toddlers: A longitudinal investigation. Journal of Experimental Child Psychology, 192: 104781. doi:10.1016/j.jecp.2019.104781.

    Abstract

    Research on infants’ online lexical processing by Fernald, Perfors, and Marchman (2006) revealed substantial individual differences that are related to vocabulary development, such that infants with better lexical processing efficiency show greater vocabulary growth across time. Although it is clear that individual differences in lexical processing efficiency exist and are meaningful, the theoretical nature of lexical processing efficiency and its relation to vocabulary size is less clear. In the current study, we asked two questions: (a) Is lexical processing efficiency better conceptualized as a central processing capacity or as an emergent capacity reflecting a collection of word-specific capacities? and (b) Is there evidence for a causal role for lexical processing efficiency in early vocabulary development? In the study, 120 infants were tested on a measure of lexical processing at 18, 21, and 24 months, and their vocabulary was measured via parent report. Structural equation modeling of the 18-month time point data revealed that both of the theoretical constructs described in question (a) above fit the data. A set of regression analyses on the longitudinal data revealed little evidence for a causal effect of lexical processing on vocabulary but revealed a significant effect of vocabulary size on lexical processing efficiency early in development. Overall, the results suggest that lexical processing efficiency is a stable construct in infancy that may reflect the structure of the developing lexicon.
  • Donnelly, S., & Kidd, E. (2021). Onset neighborhood density slows lexical access in high vocabulary 30‐month olds. Cognitive Science, 45(9): e13022. doi:10.1111/cogs.13022.

    Abstract

    There is consensus that the adult lexicon exhibits lexical competition. In particular, substantial evidence demonstrates that words with more phonologically similar neighbors are recognized less efficiently than words with fewer neighbors. How and when these effects emerge in the child's lexicon is less clear. In the current paper, we build on previous research by testing whether phonological onset density slows lexical access in a large sample of 100 English-acquiring 30-month-olds. The children participated in a visual world looking-while-listening task, in which their attention was directed to one of two objects on a computer screen while their eye movements were recorded. We found moderate evidence of inhibitory effects of onset neighborhood density on lexical access and clear evidence for an interaction between onset neighborhood density and vocabulary, with larger effects of onset neighborhood density for children with larger vocabularies. Results suggest the lexicons of 30-month-olds exhibit lexical-level competition, with competition increasing with vocabulary size.
  • Donnelly, S., & Kidd, E. (2021). On the structure and source of individual differences in toddlers' comprehension of transitive sentences. Frontiers in Psychology, 12: 661022. doi:10.3389/fpsyg.2021.661022.

    Abstract

    How children learn grammar is one of the most fundamental questions in cognitive science. Two theoretical accounts, namely, the Early Abstraction and Usage-Based accounts, propose competing answers to this question. To compare the predictions of these accounts, we tested 92 24-month-old children's comprehension of transitive sentences with novel verbs (e.g., “The boy is gorping the girl!”) using the Intermodal Preferential Looking (IMPL) task. We found very little evidence that children looked to the target video at above-chance levels. Using mixed and mixture models, we tested the predictions that the two accounts make about: (i) the structure of individual differences in the IMPL task and (ii) the relationship between vocabulary knowledge, lexical processing, and performance in the IMPL task. However, the results did not strongly support either of the two accounts. The implications for theories on language acquisition and for tasks developed for examining individual differences are discussed.

    Additional information

    data via OSF
  • Donnelly, S., & Kidd, E. (2021). The longitudinal relationship between conversational turn-taking and vocabulary growth in early language development. Child Development, 92(2), 609-625. doi:10.1111/cdev.13511.

    Abstract

    Children acquire language embedded within the rich social context of interaction. This paper reports on a longitudinal study investigating the developmental relationship between conversational turn‐taking and vocabulary growth in English‐acquiring children (N = 122) followed between 9 and 24 months. Daylong audio recordings obtained every 3 months provided several indices of the language environment, including the number of adult words children heard in their environment and their number of conversational turns. Vocabulary was measured independently via parental report. Growth curve analyses revealed a bidirectional relationship between conversational turns and vocabulary growth, controlling for the number of words in children’s environments. The results are consistent with theoretical approaches that identify social interaction as a core component of early language acquisition.
  • Doumas, L. A. A., & Martin, A. E. (2021). A model for learning structured representations of similarity and relative magnitude from experience. Current Opinion in Behavioral Sciences, 37, 158-166. doi:10.1016/j.cobeha.2021.01.001.

    Abstract

    How a system represents information tightly constrains the kinds of problems it can solve. Humans routinely solve problems that appear to require abstract representations of stimulus properties and relations. How we acquire such representations has central importance in an account of human cognition. We briefly describe a theory of how a system can learn invariant responses to instances of similarity and relative magnitude, and how structured, relational representations can be learned from initially unstructured inputs. Two operations, comparing distributed representations and learning from the concomitant network dynamics in time, underpin the ability to learn these representations and to respond to invariance in the environment. Comparing analog representations of absolute magnitude produces invariant signals that carry information about similarity and relative magnitude. We describe how a system can then use this information to bootstrap learning structured (i.e., symbolic) concepts of relative magnitude from experience without assuming such representations a priori.
  • Doumas, L. A. A., & Martin, A. E. (2018). Learning structured representations from experience. Psychology of Learning and Motivation, 69, 165-203. doi:10.1016/bs.plm.2018.10.002.

    Abstract

    How a system represents information tightly constrains the kinds of problems it can solve. Humans routinely solve problems that appear to require structured representations of stimulus properties and the relations between them. An account of how we might acquire such representations has central importance for theories of human cognition. We describe how a system can learn structured relational representations from initially unstructured inputs using comparison, sensitivity to time, and a modified Hebbian learning algorithm. We summarize how the model DORA (Discovery of Relations by Analogy) instantiates this approach, which we call predicate learning, as well as how the model captures several phenomena from cognitive development, relational reasoning, and language processing in the human brain. Predicate learning offers a link between models based on formal languages and models which learn from experience and provides an existence proof for how structured representations might be learned in the first place.
  • Doumas, L. A. A., Martin, A. E., & Hummel, J. E. (2020). Relation learning in a neurocomputational architecture supports cross-domain transfer. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 932-937). Montreal, QC: Cognitive Science Society.

    Abstract

    Humans readily generalize, applying prior knowledge to novel situations and stimuli. Advances in machine learning have begun to approximate and even surpass human performance, but these systems struggle to generalize what they have learned to untrained situations. We present a model based on well-established neurocomputational principles that demonstrates human-level generalisation. This model is trained to play one video game (Breakout) and performs one-shot generalisation to a new game (Pong) with different characteristics. The model generalizes because it learns structured representations that are functionally symbolic (viz., a role-filler binding calculus) from unstructured training data. It does so without feedback, and without requiring that structured representations are specified a priori. Specifically, the model uses neural co-activation to discover which characteristics of the input are invariant and to learn relational predicates, and oscillatory regularities in network firing to bind predicates to arguments. To our knowledge, this is the first demonstration of human-like generalisation in a machine system that does not assume structured representations to begin with.
  • Doust, C., Gordon, S. D., Garden, N., Fisher, S. E., Martin, N. G., Bates, T. C., & Luciano, M. (2020). The association of dyslexia and developmental speech and language disorder candidate genes with reading and language abilities in adults. Twin Research and Human Genetics, 23(1), 22-32. doi:10.1017/thg.2020.7.

    Abstract

    Reading and language abilities are critical for educational achievement and success in adulthood. Variation in these traits is highly heritable, but the underlying genetic architecture is largely undiscovered. Genetic studies of reading and language skills traditionally focus on children with developmental disorders; however, much larger unselected adult samples are available, increasing power to identify associations with specific genetic variants of small effect size. We introduce an Australian adult population cohort (41.7–73.2 years of age, N = 1505) in which we obtained data using validated measures of several aspects of reading and language abilities. We performed genetic association analysis for a reading and spelling composite score, nonword reading (assessing phonological processing: a core component in learning to read), phonetic spelling, self-reported reading impairment and nonword repetition (a marker of language ability). Given the limited power in a sample of this size (~80% power to find a minimum effect size of 0.005), we focused on analyzing candidate genes that have been associated with dyslexia and developmental speech and language disorders in prior studies. In gene-based tests, FOXP2, a gene implicated in speech/language disorders, was associated with nonword repetition (p < .001), phonetic spelling (p = .002) and the reading and spelling composite score (p < .001). Gene-set analyses of candidate dyslexia and speech/language disorder genes were not significant. These findings contribute to the assessment of genetic associations in reading and language disorders, crucial for understanding their etiology and informing intervention strategies, and validate the approach of using unselected adult samples for gene discovery in language and reading.

    Additional information

    Supplementary materials
  • Dowell, C., Hajnal, A., Pouw, W., & Wagman, J. B. (2020). Visual and haptic perception of affordances of feelies. Perception, 49(9), 905-925. doi:10.1177/0301006620946532.

    Abstract

    Most objects have well-defined affordances. Investigating perception of affordances of objects that were not created for a specific purpose would provide insight into how affordances are perceived. In addition, comparison of perception of affordances for such objects across different exploratory modalities (visual vs. haptic) would offer a strong test of the lawfulness of information about affordances (i.e., the invariance of such information over transformation). Along these lines, “feelies”— objects created by Gibson with no obvious function and unlike any common object—could shed light on the processes underlying affordance perception. This study showed that when observers reported potential uses for feelies, modality significantly influenced what kind of affordances were perceived. Specifically, visual exploration resulted in more noun labels (e.g., “toy”) than haptic exploration, which resulted in more verb labels (e.g., “throw”). These results suggested that overlapping, but distinct classes of action possibilities are perceivable using vision and haptics. Semantic network analyses revealed that visual exploration resulted in object-oriented responses focused on object identification, whereas haptic exploration resulted in action-oriented responses. Cluster analyses confirmed these results. Affordance labels produced in the visual condition were more consistent, used fewer descriptors, were less diverse, but more novel than in the haptic condition.
  • Drew, P., Hakulinen, A., Heinemann, T., Niemi, J., & Rossi, G. (2021). Hendiadys in naturally occurring interactions: A cross-linguistic study of double verb constructions. Journal of Pragmatics, 182, 322-347. doi:10.1016/j.pragma.2021.02.008.

    Abstract

    Double verb constructions known as hendiadys have been studied primarily in literary texts and corpora of written language. Much less is known about their properties and usage in spoken language, where expressions such as ‘come and see’, ‘go and tell’, ‘sit and talk’ are particularly common, and where we can find an even richer diversity of other constructions. In this study, we investigate hendiadys in corpora of naturally occurring social interactions in four languages, Danish, English (US and UK), Finnish and Italian, with the objective of exploring whether hendiadys is used systematically in recurrent interactional and sequential circumstances, from which it is possible to identify the pragmatic function(s) that hendiadys may serve. Examining hendiadys in conversation also offers us a special window into its grammatical properties, for example when a speaker self-corrects from a non-hendiadic to a hendiadic expression, exposing the boundary between related grammatical forms and demonstrating the distinctiveness of hendiadys in context. More broadly, we demonstrate that hendiadys is systematically associated with talk about complainable matters, in environments characterised by a conflict, dissonance, or friction that is ongoing in the interaction or that is being reported by one participant to another. We also find that the utterance in which hendiadys is used is typically in a subsequent and possibly terminal position in the sequence, summarising or concluding it. Another key finding is that the complainable or conflictual element in these interactions is expressed primarily by the first conjunct of the hendiadic construction. Whilst the first conjunct is semantically subsidiary to the second, it is pragmatically the most important one. This analysis leads us to revisit a long-established asymmetry between the verbal components of hendiadys, and to bring to light the synergy of grammar and pragmatics in language usage.
  • Drijvers, L., Vaitonyte, J., & Ozyurek, A. (2019). Degree of language experience modulates visual attention to visible speech and iconic gestures during clear and degraded speech comprehension. Cognitive Science, 43: e12789. doi:10.1111/cogs.12789.

    Abstract

    Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gesture more as it might be more challenging for non‐native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.

    Additional information

    Supporting information
  • Drijvers, L., Jensen, O., & Spaak, E. (2021). Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information. Human Brain Mapping, 42(4), 1138-1152. doi:10.1002/hbm.25282.

    Abstract

    During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1440 Hz refresh rate). Integration ease was manipulated by auditory factors (clear/degraded speech) and visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (fvisual – fauditory = 7 Hz), specifically when integration was easiest (i.e., when speech was clear and accompanied by a congruent gesture). This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions; areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context.
  • Drijvers, L., & Trujillo, J. P. (2018). Commentary: Transcranial magnetic stimulation over left inferior frontal and posterior temporal cortex disrupts gesture-speech integration. Frontiers in Human Neuroscience, 12: 256. doi:10.3389/fnhum.2018.00256.

    Abstract

    A commentary on
    Transcranial Magnetic Stimulation over Left Inferior Frontal and Posterior Temporal Cortex Disrupts Gesture-Speech Integration

    by Zhao, W., Riggs, K., Schindler, I., and Holle, H. (2018). J. Neurosci. 10, 1748–1717. doi: 10.1523/JNEUROSCI.1748-17.2017
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2018). Alpha and beta oscillations index semantic congruency between speech and gestures in clear and degraded speech. Journal of Cognitive Neuroscience, 30(8), 1086-1097. doi:10.1162/jocn_a_01301.

    Abstract

    Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech–gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + “mixing”) or mismatching (drinking gesture + “walking”) gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2018). Hearing and seeing meaning in noise: Alpha, beta and gamma oscillations predict gestural enhancement of degraded speech comprehension. Human Brain Mapping, 39(5), 2075-2087. doi:10.1002/hbm.23987.

    Abstract

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand-area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level.

    Additional information

    hbm23987-sup-0001-suppinfo01.docx
  • Drijvers, L., Van der Plas, M., Ozyurek, A., & Jensen, O. (2019). Native and non-native listeners show similar yet distinct oscillatory dynamics when using gestures to access speech in noise. NeuroImage, 194, 55-67. doi:10.1016/j.neuroimage.2019.03.032.

    Abstract

    Listeners are often challenged by adverse listening conditions during language comprehension induced by external factors, such as noise, but also internal factors, such as being a non-native listener. Visible cues, such as semantic information conveyed by iconic gestures, can enhance language comprehension in such situations. Using magnetoencephalography (MEG) we investigated whether spatiotemporal oscillatory dynamics can predict a listener's benefit of iconic gestures during language comprehension in both internally (non-native versus native listeners) and externally (clear/degraded speech) induced adverse listening conditions. Proficient non-native speakers of Dutch were presented with videos in which an actress uttered a degraded or clear verb, accompanied by a gesture or not, and completed a cued-recall task after every video. The behavioral and oscillatory results obtained from non-native listeners were compared to an MEG study where we presented the same stimuli to native listeners (Drijvers et al., 2018a). Non-native listeners demonstrated a similar gestural enhancement effect as native listeners, but overall scored significantly slower on the cued-recall task. In both native and non-native listeners, an alpha/beta power suppression revealed engagement of the extended language network, motor and visual regions during gestural enhancement of degraded speech comprehension, suggesting similar core processes that support unification and lexical access processes. An individual's alpha/beta power modulation predicted the gestural benefit a listener experienced during degraded speech comprehension. Importantly, however, non-native listeners showed less engagement of the mouth area of the primary somatosensory cortex, left insula (beta), LIFG and ATL (alpha) than native listeners, which suggests that non-native listeners might be hindered in processing the degraded phonological cues and coupling them to the semantic information conveyed by the gesture. Native and non-native listeners thus demonstrated similar yet distinct spatiotemporal oscillatory dynamics when recruiting visual cues to disambiguate degraded speech.

    Additional information

    1-s2.0-S1053811919302216-mmc1.docx
  • Drijvers, L., & Ozyurek, A. (2018). Native language status of the listener modulates the neural integration of speech and iconic gestures in clear and adverse listening conditions. Brain and Language, 177-178, 7-17. doi:10.1016/j.bandl.2018.01.003.

    Abstract

    Native listeners neurally integrate iconic gestures with speech, which can enhance degraded speech comprehension. However, it is unknown how non-native listeners neurally integrate speech and gestures, as they might process visual semantic context differently than natives. We recorded EEG while native and highly-proficient non-native listeners watched videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching ('to drive'+driving gesture) or mismatching gesture ('to drink'+mixing gesture). Degraded speech elicited an enhanced N400 amplitude compared to clear speech in both groups, revealing an increase in neural resources needed to resolve the spoken input. A larger N400 effect was found in clear speech for non-natives compared to natives, but in degraded speech only for natives. Non-native listeners might thus process gesture more strongly than natives when speech is clear, but need more auditory cues to facilitate access to gestural semantic information when speech is degraded.
  • Drijvers, L., & Ozyurek, A. (2020). Non-native listeners benefit less from gestures and visible speech than native listeners during degraded speech comprehension. Language and Speech, 63(2), 209-220. doi:10.1177/0023830919831311.

    Abstract

    Native listeners benefit from both visible speech and iconic gestures to enhance degraded speech comprehension (Drijvers & Ozyürek, 2017). We tested how highly proficient non-native listeners benefit from these visual articulators compared to native listeners. We presented videos of an actress uttering a verb in clear, moderately, or severely degraded speech, while her lips were blurred, visible, or visible and accompanied by a gesture. Our results revealed that unlike native listeners, non-native listeners were less likely to benefit from the combined enhancement of visible speech and gestures, especially since the benefit from visible speech was minimal when the signal quality was not sufficient.
  • Drijvers, L. (2019). On the oscillatory dynamics underlying speech-gesture integration in clear and adverse listening conditions. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Drijvers, L., Zaadnoordijk, L., & Dingemanse, M. (2015). Sound-symbolism is disrupted in dyslexia: Implications for the role of cross-modal abstraction processes. In D. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 602-607). Austin, TX: Cognitive Science Society.

    Abstract

    Research into sound-symbolism has shown that people can consistently associate certain pseudo-words with certain referents; for instance, pseudo-words with rounded vowels and sonorant consonants are linked to round shapes, while pseudo-words with unrounded vowels and obstruents (with a non-continuous airflow) are associated with sharp shapes. Such sound-symbolic associations have been proposed to arise from cross-modal abstraction processes. Here we assess the link between sound-symbolism and cross-modal abstraction by testing dyslexic individuals’ ability to make sound-symbolic associations. Dyslexic individuals are known to have deficiencies in cross-modal processing. We find that dyslexic individuals are impaired in their ability to make sound-symbolic associations relative to the controls. Our results shed light on the cognitive underpinnings of sound-symbolism by providing novel evidence for the role —and disruptability— of cross-modal abstraction processes in sound-symbolic effects.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2015). The effect of non-nativeness and background noise on lexical retuning. In Scottish consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    Previous research revealed remarkable flexibility of native and non-native listeners’ perceptual system, i.e., native and non-native phonetic category boundaries can be quickly recalibrated in the face of ambiguous input.
    The present study investigates the limitations of the flexibility of the non-native perceptual system. In two lexically-guided perceptual learning experiments, Dutch listeners were exposed to a short story in English, where either all /l/ or all /ɹ/ sounds were replaced by an ambiguous [l/ɹ] sound. In the first experiment, the story was presented in clean, while in the second experiment, intermittent noise was added to the story, although never on the critical words. Lexically-guided perceptual learning was only observed in the clean condition. It is argued that the introduction of intermittent noise reduced the reliability of the evidence of hearing a particular word, which in turn blocked retuning of the phonetic categories.
  • Drozdova, P. (2018). The effects of nativeness and background noise on the perceptual learning of voices and ambiguous sounds. PhD Thesis, Radboud University, Nijmegen.
  • Drude, S., Awete, W., & Aweti, A. (2019). A ortografia da língua Awetí. LIAMES: Línguas Indígenas Americanas, 19: e019014. doi:10.20396/liames.v19i0.8655746.

    Abstract

    This paper describes and motivates the orthography of the Awetí language (Tupí, Upper Xingu/MT), based on an analysis of the phonological and grammatical structure of Awetí. The orthography is the result of a long collaborative effort among the three authors, begun in 1998. It defines not only an alphabet (the representation of the vowels and consonants of the language) but also addresses internal variation, resyllabification, lenition, palatalization, and other (morpho-)phonological processes. Both the written representation of the glottal stop and the orthographic consequences of nasal harmony received special attention. Although lexical stress is not marked orthographically in Awetí, the vast majority of affixes and particles are discussed with respect to stress and its interaction with adjacent morphemes, at the same time determining orthographic words. Finally, an alphabetical order was established in which digraphs are treated as sequences of letters, while the glottal stop ⟨ʼ⟩ is ignored, making Awetí easier to learn. The orthography as described here has been used for about ten years in school literacy teaching in Awetí, with good results. We believe that several of the arguments raised here can be productively transferred to other languages with similar phenomena (the glottal stop as a consonant, nasal harmony, morphophonological assimilation, etc.).
  • Duarri, A., Meng-Chin, A. L., Fokkens, M. R., Meijer, M., Smeets, C. J. L. M., Nibbeling, E. A. R., Boddeke, E., Sinke, R. J., Kampinga, H. H., Papazian, D. M., & Verbeek, D. S. (2015). Spinocerebellar ataxia type 19/22 mutations alter heterocomplex Kv4.3 channel function and gating in a dominant manner. Cellular and Molecular Life Sciences, 72(17), 3387-3399. doi:10.1007/s00018-015-1894-2.

    Abstract

    The dominantly inherited cerebellar ataxias are a heterogeneous group of neurodegenerative disorders caused by Purkinje cell loss in the cerebellum. Recently, we identified loss-of-function mutations in the KCND3 gene as the cause of spinocerebellar ataxia type 19/22 (SCA19/22), revealing a previously unknown role for the voltage-gated potassium channel, Kv4.3, in Purkinje cell survival. However, how mutant Kv4.3 affects wild-type Kv4.3 channel functioning remains unknown. We provide evidence that SCA19/22-mutant Kv4.3 exerts a dominant negative effect on the trafficking and surface expression of wild-type Kv4.3 in the absence of its regulatory subunit, KChIP2. Notably, this dominant negative effect can be rescued by the presence of KChIP2. We also found that all SCA19/22-mutant subunits either suppress wild-type Kv4.3 current amplitude or alter channel gating in a dominant manner. Our findings suggest that altered Kv4.3 channel localization and/or functioning resulting from SCA19/22 mutations may lead to Purkinje cell loss, neurodegeneration and ataxia.
  • Duarte, R., Uhlmann, M., Van den Broek, D., Fitz, H., Petersson, K. M., & Morrison, A. (2018). Encoding symbolic sequences with spiking neural reservoirs. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN). doi:10.1109/IJCNN.2018.8489114.

    Abstract

    Biologically inspired spiking networks are an important tool to study the nature of computation and cognition in neural systems. In this work, we investigate the representational capacity of spiking networks engaged in an identity mapping task. We compare two schemes for encoding symbolic input, one in which input is injected as a direct current and one where input is delivered as a spatio-temporal spike pattern. We test the ability of networks to discriminate their input as a function of the number of distinct input symbols. We also compare performance using either membrane potentials or filtered spike trains as state variable. Furthermore, we investigate how the circuit behavior depends on the balance between excitation and inhibition, and the degree of synchrony and regularity in its internal dynamics. Finally, we compare different linear methods of decoding population activity onto desired target labels. Overall, our results suggest that even this simple mapping task is strongly influenced by design choices on input encoding, state-variables, circuit characteristics and decoding methods, and these factors can interact in complex ways. This work highlights the importance of constraining computational network models of behavior by available neurobiological evidence.
  • Duñabeitia, J. A., Crepaldi, D., Meyer, A. S., New, B., Pliatsikas, C., Smolka, E., & Brysbaert, M. (2018). MultiPic: A standardized set of 750 drawings with norms for six European languages. Quarterly Journal of Experimental Psychology, 71(4), 808-816. doi:10.1080/17470218.2017.1310261.

    Abstract

    Numerous studies in psychology, cognitive neuroscience and psycholinguistics have used pictures of objects as stimulus materials. Currently, authors engaged in cross-linguistic work or wishing to run parallel studies at multiple sites where different languages are spoken must rely on rather small sets of black-and-white or colored line drawings. These sets are increasingly experienced as being too limited. Therefore, we constructed a new set of 750 colored pictures of concrete concepts. This set, MultiPic, constitutes a new valuable tool for cognitive scientists investigating language, visual perception, memory and/or attention in monolingual or multilingual populations. Importantly, the MultiPic databank has been normed in six different European languages (British English, Spanish, French, Dutch, Italian and German). All stimuli and norms are freely available at http://www.bcbl.eu/databases/multipic

    Additional information

    http://www.bcbl.eu/databases/multipic
  • Duprez, J., Stokkermans, M., Drijvers, L., & Cohen, M. X. (2021). Synchronization between keyboard typing and neural oscillations. Journal of Cognitive Neuroscience, 33(5), 887-901. doi:10.1162/jocn_a_01692.

    Abstract

    Rhythmic neural activity synchronizes with certain rhythmic behaviors, such as breathing, sniffing, saccades, and speech. The extent to which neural oscillations synchronize with higher-level and more complex behaviors is largely unknown. Here we investigated electrophysiological synchronization with keyboard typing, which is an omnipresent behavior daily engaged by an uncountably large number of people. Keyboard typing is rhythmic with frequency characteristics roughly the same as neural oscillatory dynamics associated with cognitive control, notably through midfrontal theta (4-7 Hz) oscillations. We tested the hypothesis that synchronization occurs between typing and midfrontal theta, and breaks down when errors are committed. Thirty healthy participants typed words and sentences on a keyboard without visual feedback, while EEG was recorded. Typing rhythmicity was investigated by inter-keystroke interval analyses and by a kernel density estimation method. We used a multivariate spatial filtering technique to investigate frequency-specific synchronization between typing and neuronal oscillations. Our results demonstrate theta rhythmicity in typing (around 6.5 Hz) through the two different behavioral analyses. Synchronization between typing and neuronal oscillations occurred at frequencies ranging from 4 to 15 Hz, but to a larger extent for lower frequencies. However, peak synchronization frequency was idiosyncratic across subjects, therefore not specific to theta nor to midfrontal regions, and correlated somewhat with peak typing frequency. Errors and trials associated with stronger cognitive control were not associated with changes in synchronization at any frequency. As a whole, this study shows that brain-behavior synchronization does occur during keyboard typing but is not specific to midfrontal theta.
  • Durrant, S., Jessop, A., Chang, F., Bidgood, A., Peter, M. S., Pine, J. M., & Rowland, C. F. (2021). Does the understanding of complex dynamic events at 10 months predict vocabulary development? Language and Cognition, 13(1), 66-98. doi:10.1017/langcog.2020.26.

    Abstract

    By the end of their first year, infants can interpret many different types of complex dynamic visual events, such as caused-motion, chasing, and goal-directed action. Infants of this age are also in the early stages of vocabulary development, producing their first words at around 12 months. The present work examined whether there are meaningful individual differences in infants’ ability to represent dynamic causal events in visual scenes, and whether these differences influence vocabulary development. As part of the longitudinal Language 0–5 Project, 78 10-month-old infants were tested on their ability to interpret three dynamic motion events, involving (a) caused-motion, (b) chasing behaviour, and (c) goal-directed movement. Planned analyses found that infants showed evidence of understanding the first two event types, but not the third. Looking behaviour in each task was not meaningfully related to vocabulary development, nor were there any correlations between the tasks. The results of additional exploratory analyses and simulations suggested that the infants’ understanding of each event may not be predictive of their vocabulary development, and that looking times in these tasks may not be reliably capturing any meaningful individual differences in their knowledge. This raises questions about how to convert experimental group designs to individual differences measures, and how to interpret infant looking time behaviour.
  • Eekhof, L. S., Kuijpers, M. M., Faber, M., Gao, X., Mak, M., Van den Hoven, E., & Willems, R. M. (2021). Lost in a story, detached from the words. Discourse Processes, 58(7), 595-616. doi:10.1080/0163853X.2020.1857619.

    Abstract

    This article explores the relationship between low- and high-level aspects of reading by studying the interplay between word processing, as measured with eye tracking, and narrative absorption and liking, as measured with questionnaires. Specifically, we focused on how individual differences in sensitivity to lexical word characteristics—measured as the effect of these characteristics on gaze duration—were related to narrative absorption and liking. By reanalyzing a large data set consisting of three previous eye-tracking experiments in which subjects (N = 171) read literary short stories, we replicated the well-established finding that word length, lemma frequency, position in sentence, age of acquisition, and orthographic neighborhood size of words influenced gaze duration. More importantly, we found that individual differences in the degree of sensitivity to three of these word characteristics, i.e., word length, lemma frequency, and age of acquisition, were negatively related to print exposure and to a lesser degree to narrative absorption and liking. Even though the underlying mechanisms of this relationship are still unclear, we believe the current findings underline the need to map out the interplay between, on the one hand, the technical and, on the other hand, the subjective processes of reading by studying reading behavior in more natural settings.

    Additional information

    Analysis scripts and data
  • Eekhof, L. S., Eerland, A., & Willems, R. M. (2018). Readers’ insensitivity to tense revealed: No differences in mental simulation during reading of present and past tense stories. Collabra: Psychology, 4(1): 16. doi:10.1525/collabra.121.

    Abstract

    While the importance of mental simulation during literary reading has long been recognized, we know little about the factors that determine when, what, and how much readers mentally simulate. Here we investigate the influence of a specific text characteristic, namely verb tense (present vs. past), on mental simulation during literary reading. Verbs usually denote the actions and events that take place in narratives and hence it is hypothesized that verb tense will influence the amount of mental simulation elicited in readers. Although the present tense is traditionally considered to be more “vivid”, this study is one of the first to experimentally assess this claim. We recorded eye-movements while subjects read stories in the past or present tense and collected data regarding self-reported levels of mental simulation, transportation and appreciation. We found no influence of tense on any of the offline measures. The eye-tracking data showed a slightly more complex pattern. Although we did not find a main effect of sensorimotor simulation content on reading times, we were able to link the degree to which subjects slowed down when reading simulation eliciting content to offline measures of attention and transportation, but this effect did not interact with the tense of the story. Unexpectedly, we found a main effect of tense on reading times per word, with past tense stories eliciting longer first fixation durations and gaze durations. However, we were unable to link this effect to any of the offline measures. In sum, this study suggests that tense does not play a substantial role in the process of mental simulation elicited by literary stories.

    Additional information

    Data Accessibility
  • Eekhof, L. S., Van Krieken, K., Sanders, J., & Willems, R. M. (2021). Reading minds, reading stories: Social-cognitive abilities affect the linguistic processing of narrative viewpoint. Frontiers in Psychology, 12: 698986. doi:10.3389/fpsyg.2021.698986.

    Abstract

    Although various studies have shown that narrative reading draws on social-cognitive abilities, not much is known about the precise aspects of narrative processing that engage these abilities. We hypothesized that the linguistic processing of narrative viewpoint—expressed by elements that provide access to the inner world of characters—might play an important role in engaging social-cognitive abilities. Using eye tracking, we studied the effect of lexical markers of perceptual, cognitive, and emotional viewpoint on eye movements during reading of a 5,000-word narrative. Next, we investigated how this relationship was modulated by individual differences in social-cognitive abilities. Our results show diverging patterns of eye movements for perceptual viewpoint markers on the one hand, and cognitive and emotional viewpoint markers on the other. Whereas the former are processed relatively fast compared to non-viewpoint markers, the latter are processed relatively slow. Moreover, we found that social-cognitive abilities impacted the processing of words in general, and of perceptual and cognitive viewpoint markers in particular, such that both perspective-taking abilities and self-reported perspective-taking traits facilitated the processing of these markers. All in all, our study extends earlier findings that social cognition is of importance for story reading, showing that individual differences in social-cognitive abilities are related to the linguistic processing of narrative viewpoint.

    Additional information

    supplementary material
  • Eekhof, L. S., Van Krieken, K., & Sanders, J. (2020). VPIP: A lexical identification procedure for perceptual, cognitive, and emotional viewpoint in narrative discourse. Open Library of Humanities, 6(1): 18. doi:10.16995/olh.483.

    Abstract

    Although previous work on viewpoint techniques has shown that viewpoint is ubiquitous in narrative discourse, approaches to identify and analyze the linguistic manifestations of viewpoint are currently scattered over different disciplines and dominated by qualitative methods. This article presents the ViewPoint Identification Procedure (VPIP), the first systematic method for the lexical identification of markers of perceptual, cognitive and emotional viewpoint in narrative discourse. Use of this step-wise procedure is facilitated by a large appendix of Dutch viewpoint markers. After the introduction of the procedure and discussion of some special cases, we demonstrate its application by discussing three types of narrative excerpts: a literary narrative, a news narrative, and an oral narrative. Applying the identification procedure to the full news narrative, we show that the VPIP can be reliably used to detect viewpoint markers in long stretches of narrative discourse. As such, the systematic identification of viewpoint has the potential to benefit both established viewpoint scholars and researchers from other fields interested in the analytical and experimental study of narrative and viewpoint. Such experimental studies could complement qualitative studies, ultimately advancing our theoretical understanding of the relation between the linguistic presentation and cognitive processing of viewpoint. Suggestions for elaboration of the VPIP, particularly in the realm of pragmatic viewpoint marking, are formulated in the final part of the paper.

    Additional information

    appendix
  • Egger, J., Rowland, C. F., & Bergmann, C. (2020). Improving the robustness of infant lexical processing speed measures. Behavior Research Methods, 52, 2188-2201. doi:10.3758/s13428-020-01385-5.

    Abstract

    Visual reaction times to target pictures after naming events are an informative measurement in language acquisition research, because gaze shifts measured in looking-while-listening paradigms are an indicator of infants’ lexical speed of processing. This measure is very useful, as it can be applied from a young age onwards and has been linked to later language development. However, to obtain valid reaction times, the infant is required to switch the fixation of their eyes from a distractor to a target object. This means that usually at least half the trials have to be discarded—those where the participant is already fixating the target at the onset of the target word—so that no reaction time can be measured. With few trials, reliability suffers, which is especially problematic when studying individual differences. In order to solve this problem, we developed a gaze-triggered looking-while-listening paradigm. The trials do not differ from the original paradigm apart from the fact that the target object is chosen depending on the infant’s eye fixation before naming. The object the infant is looking at becomes the distractor and the other object is used as the target, requiring a fixation switch, and thus providing a reaction time. We tested our paradigm with forty-three 18-month-old infants, comparing the results to those from the original paradigm. The gaze-triggered paradigm yielded more valid reaction time trials, as anticipated. The results of a ranked correlation between the conditions confirmed that the manipulated paradigm measures the same concept as the original paradigm.
  • Eichert, N., Peeters, D., & Hagoort, P. (2018). Language-driven anticipatory eye movements in virtual reality. Behavior Research Methods, 50(3), 1102-1115. doi:10.3758/s13428-017-0929-z.

    Abstract

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. The use of this variant of the visual world paradigm has shown that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional (2D) stimuli that are mere abstractions of real world objects. Here we present a visual world paradigm study in a three-dimensional (3D) immersive virtual reality environment. Despite significant changes in the stimulus material and the different mode of stimulus presentation, language-mediated anticipatory eye movements were observed. These findings thus indicate prediction of upcoming words in language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eye-tracking in rich and multimodal 3D virtual environments.

    Additional information

    13428_2017_929_MOESM1_ESM.docx
  • Eielts, C., Pouw, W., Ouwehand, K., Van Gog, T., Zwaan, R. A., & Paas, F. (2020). Co-thought gesturing supports more complex problem solving in subjects with lower visual working-memory capacity. Psychological Research, 84, 502-513. doi:10.1007/s00426-018-1065-9.

    Abstract

    During silent problem solving, hand gestures arise that have no communicative intent. The role of such co-thought gestures in cognition has been understudied in cognitive research as compared to co-speech gestures. We investigated whether gesticulation during silent problem solving supported subsequent performance in a Tower of Hanoi problem-solving task, in relation to visual working-memory capacity and task complexity. Seventy-six participants were assigned to either an instructed gesture condition or a condition that allowed them to gesture, but without explicit instructions to do so. This resulted in three gesture groups: (1) non-gesturing; (2) spontaneous gesturing; (3) instructed gesturing. In line with the embedded/extended cognition perspective on gesture, gesturing benefited complex problem-solving performance for participants with a lower visual working-memory capacity, but not for participants with a lower spatial working-memory capacity.
  • Eijk, L., Fletcher, A., McAuliffe, M., & Janse, E. (2020). The effects of word frequency and word probability on speech rhythm in dysarthria. Journal of Speech, Language, and Hearing Research, 63, 2833-2845. doi:10.1044/2020_JSLHR-19-00389.

    Abstract

    Purpose

    In healthy speakers, the more frequent and probable a word is in its context, the shorter the word tends to be. This study investigated whether these probabilistic effects were similarly sized for speakers with dysarthria of different severities.
    Method

    Fifty-six speakers of New Zealand English (42 speakers with dysarthria and 14 healthy speakers) were recorded reading the Grandfather Passage. Measurements of word duration, frequency, and transitional word probability were taken.
    Results

    As hypothesized, words with a higher frequency and probability tended to be shorter in duration. There was also a significant interaction between word frequency and speech severity. This indicated that the more severe the dysarthria, the smaller the effects of word frequency on speakers' word durations. Transitional word probability also interacted with speech severity, but did not account for significant unique variance in the full model.
    Conclusions

    These results suggest that, as the severity of dysarthria increases, the duration of words is less affected by probabilistic variables. These findings may be due to reductions in the control and execution of muscle movement exhibited by speakers with dysarthria.
  • Eijk, L., Ernestus, M., & Schriefers, H. (2019). Alignment of pitch and articulation rate. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 2690-2694). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Previous studies have shown that speakers align their speech to each other at multiple linguistic levels. This study investigates whether alignment is mostly the result of priming from the immediately preceding speech materials, focussing on pitch and articulation rate (AR). Native Dutch speakers completed sentences, first by themselves (pre-test), then in alternation with Confederate 1 (Round 1), with Confederate 2 (Round 2), with Confederate 1 again (Round 3), and lastly by themselves again (post-test). Results indicate that participants aligned to the confederates and that this alignment lasted during the post-test. The confederates’ directly preceding sentences were not good predictors for the participants’ pitch and AR. Overall, the results indicate that alignment is more of a global effect than a local priming effect.
  • Eising, E., Carrion Castillo, A., Vino, A., Strand, E. A., Jakielski, K. J., Scerri, T. S., Hildebrand, M. S., Webster, R., Ma, A., Mazoyer, B., Francks, C., Bahlo, M., Scheffer, I. E., Morgan, A. T., Shriberg, L. D., & Fisher, S. E. (2019). A set of regulatory genes co-expressed in embryonic human brain is implicated in disrupted speech development. Molecular Psychiatry, 24, 1065-1078. doi:10.1038/s41380-018-0020-x.

    Abstract

    Genetic investigations of people with impaired development of spoken language provide windows into key aspects of human biology. Over 15 years after FOXP2 was identified, most speech and language impairments remain unexplained at the molecular level. We sequenced whole genomes of nineteen unrelated individuals diagnosed with childhood apraxia of speech, a rare disorder enriched for causative mutations of large effect. Where DNA was available from unaffected parents, we discovered de novo mutations, implicating genes, including CHD3, SETD1A and WDR5. In other probands, we identified novel loss-of-function variants affecting KAT6A, SETBP1, ZFHX4, TNRC6B and MKL2, regulatory genes with links to neurodevelopment. Several of the new candidates interact with each other or with known speech-related genes. Moreover, they show significant clustering within a single co-expression module of genes highly expressed during early human brain development. This study highlights gene regulatory pathways in the developing brain that may contribute to acquisition of proficient speech.

    Additional information

    Eising_etal_2018sup.pdf
  • Eisner, F., & McQueen, J. M. (2018). Speech perception. In S. Thompson-Schill (Ed.), Stevens’ handbook of experimental psychology and cognitive neuroscience (4th ed.). Volume 3: Language & thought (pp. 1-46). Hoboken: Wiley. doi:10.1002/9781119170174.epcn301.

    Abstract

    This chapter reviews the computational processes that are responsible for recognizing word forms in the speech stream. We outline the different stages in a processing hierarchy from the extraction of general acoustic features, through speech‐specific prelexical processes, to the retrieval and selection of lexical representations. We argue that two recurring properties of the system as a whole are abstraction and adaptability. We also present evidence for parallel processing of information on different timescales, more specifically that segmental material in the speech stream (its consonants and vowels) is processed in parallel with suprasegmental material (the prosodic structures of spoken words). We consider evidence from both psycholinguistics and neurobiology wherever possible, and discuss how the two fields are beginning to address common computational problems. The challenge for future research in speech perception will be to build an account that links these computational problems, through functional mechanisms that address them, to neurobiological implementation.
  • Emmendorfer, A. K., Correia, J. M., Jansma, B. M., Kotz, S. A., & Bonte, M. (2020). ERP mismatch response to phonological and temporal regularities in speech. Scientific Reports, 10: 9917. doi:10.1038/s41598-020-66824-x.

    Abstract

    Predictions of our sensory environment facilitate perception across domains. During speech perception, formal and temporal predictions may be made for phonotactic probability and syllable stress patterns, respectively, contributing to the efficient processing of speech input. The current experiment employed a passive EEG oddball paradigm to probe the neurophysiological processes underlying temporal and formal predictions simultaneously. The component of interest, the mismatch negativity (MMN), is considered a marker for experience-dependent change detection, where its timing and amplitude are indicative of the perceptual system’s sensitivity to presented stimuli. We hypothesized that more predictable stimuli (i.e. high phonotactic probability and first syllable stress) would facilitate change detection, indexed by shorter peak latencies or greater peak amplitudes of the MMN. This hypothesis was confirmed for phonotactic probability: high phonotactic probability deviants elicited an earlier MMN than low phonotactic probability deviants. We do not observe a significant modulation of the MMN to variations in syllable stress. Our findings confirm that speech perception is shaped by formal and temporal predictability. This paradigm may be useful to investigate the contribution of implicit processing of statistical regularities during (a)typical language development.

    Additional information

    supplementary information
  • Enfield, N. J. (2015). Linguistic relativity from reference to agency. Annual Review of Anthropology, 44, 207-224. doi:10.1146/annurev-anthro-102214-014053.

    Abstract

    How are language, thought, and reality related? Interdisciplinary research on this question over the past two decades has made significant progress. Most of the work has been Neo-Whorfian in two senses: One, it has been driven by research questions that were articulated most explicitly and most famously by the linguistic anthropologist Benjamin Lee Whorf, and two, it has limited the scope of inquiry to Whorf's narrow interpretations of the key terms “language,” “thought,” and “reality.” This article first reviews some of the ideas and results of Neo-Whorfian work, concentrating on the special role of linguistic categorization in heuristic decision making. It then considers new and potential directions in work on linguistic relativity, taken broadly to mean the ways in which the perspective offered by a given language can affect thought (or mind) and reality. New lines of work must reconsider the idea of linguistic relativity by exploring the range of available interpretations of the key terms: in particular, “language” beyond reference, “thought” beyond nonsocial processing, and “reality” beyond brute, nonsocial facts.
  • Enfield, N. J. (2015). Other-initiated repair in Lao. Open linguistics, 1(1), 119-144. doi:10.2478/opli-2014-0006.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video-recorded conversation in the Lao language (a Southwestern Tai language spoken in Laos, Thailand, and Cambodia). The article reports findings specific to the Lao language from the comparative project that is the topic of this special issue. While the scope is general to the overall pattern of other-initiated repair as a set of practices and a system of semiotic resources, special attention is given to (1) the range of repair operations that are elicited by open other-initiators of repair in Lao, especially the subtle changes made when problem turns are repeated, and (2) the use of phrase-final particles—a characteristic feature of Lao grammar—in the marking of both other-initiations of repair and repair solution turns.
  • Enfield, N. J., Stivers, T., Brown, P., Englert, C., Harjunpää, K., Hayashi, M., Heinemann, T., Hoymann, G., Keisanen, T., Rauniomaa, M., Raymond, C. W., Rossano, F., Yoon, K.-E., Zwitserlood, I., & Levinson, S. C. (2019). Polar answers. Journal of Linguistics, 55(2), 277-304. doi:10.1017/S0022226718000336.

    Abstract

    How do people answer polar questions? In this fourteen-language study of answers to questions in conversation, we compare the two main strategies; first, interjection-type answers such as uh-huh (or equivalents yes, mm, head nods, etc.), and second, repetition-type answers that repeat some or all of the question. We find that all languages offer both options, but that there is a strong asymmetry in their frequency of use, with a global preference for interjection-type answers. We propose that this preference is motivated by the fact that the two options are not equivalent in meaning. We argue that interjection-type answers are intrinsically suited to be the pragmatically unmarked, and thus more frequent, strategy for confirming polar questions, regardless of the language spoken. Our analysis is based on the semantic-pragmatic profile of the interjection-type and repetition-type answer strategies, in the context of certain asymmetries inherent to the dialogic speech act structure of question–answer sequences, including sequential agency and thematic agency. This allows us to see possible explanations for the outlier distributions found in ǂĀkhoe Haiǁom and Tzeltal.
  • Enfield, N. J. (2015). The utility of meaning: What words mean and why. Oxford: Oxford University Press.

    Abstract

    This book argues that the complex, anthropocentric, and often culture-specific meanings of words have been shaped directly by their history of 'utility' for communication in social life. N. J. Enfield draws on semantic and pragmatic case studies from his extensive fieldwork in Laos to investigate a range of semantic fields including emotion terms, culinary terms, landscape terminology, and honorific pronouns, among many others. These studies form the building blocks of a conceptual framework for understanding meaning in language. The book argues that the goals and relevancies of human communication are what bridge the gap between the private representation of language in the mind and its public processes of usage, acquisition, and conventionalization in society. Professor Enfield argues that in order to understand this process, we first need to understand the ways in which linguistic meaning is layered, multiple, anthropocentric, cultural, distributed, and above all, useful. This wide-ranging account brings together several key strands of research across disciplines including semantics, pragmatics, cognitive linguistics, and sociology of language, and provides a rich account of what linguistic meaning is like and why.
  • Erard, M. (2019). Language aptitude: Insights from hyperpolyglots. In Z. Wen, P. Skehan, A. Biedroń, S. Li, & R. L. Sparks (Eds.), Language aptitude: Advancing theory, testing, research and practice (pp. 153-167). Abingdon, UK: Taylor & Francis.

    Abstract

    Over the decades, high-intensity language learners scattered over the globe referred to as “hyperpolyglots” have undertaken a natural experiment into the limits of learning and acquiring proficiencies in multiple languages. This chapter details several ways in which hyperpolyglots are relevant to research on aptitude. First, historical hyperpolyglots Cardinal Giuseppe Mezzofanti, Emil Krebs, Elihu Burritt, and Lomb Kató are described in terms of how they viewed their own exceptional outcomes. Next, I draw on results from an online survey with 390 individuals to explore how contemporary hyperpolyglots consider the explanatory value of aptitude. Third, the challenges involved in studying the genetic basis of hyperpolyglottism (and by extension of language aptitude) are discussed. This mosaic of data is meant to inform the direction of future aptitude research that takes hyperpolyglots, one type of exceptional language learner and user, into account.
  • Erard, M. (2015). What's in a name? Science, 347(6225), 941-943. doi:10.1126/science.347.6225.941.
  • Ergin, R., Raviv, L., Senghas, A., Padden, C., & Sandler, W. (2020). Community structure affects convergence on uniform word orders: Evidence from emerging sign languages. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 84-86). Nijmegen: The Evolution of Language Conferences.
  • Ergin, R., Meir, I., Ilkbasaran, D., Padden, C., & Jackendoff, R. (2018). The development of argument structure in Central Taurus Sign Language. Sign Language Studies, 18(4), 612-639. doi:10.1353/sls.2018.0018.

    Abstract

    One of the fundamental issues for a language is its capacity to express argument structure unambiguously. This study presents evidence for the emergence and the incremental development of these basic mechanisms in a newly developing language, Central Taurus Sign Language. Our analyses identify universal patterns in both the emergence and development of these mechanisms and in language-specific trajectories.
  • Ergin, R., Senghas, A., Jackendoff, R., & Gleitman, L. (2018). Structural cues for symmetry, asymmetry, and non-symmetry in Central Taurus Sign Language. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 104-106). Toruń, Poland: NCU Press. doi:10.12775/3991-1.025.
  • Ernestus, M., & Cutler, A. (2015). BALDEY: A database of auditory lexical decisions. Quarterly Journal of Experimental Psychology, 68, 1469-1488. doi:10.1080/17470218.2014.984730.

    Abstract

    In an auditory lexical decision experiment, 5,541 spoken content words and pseudo-words were presented to 20 native speakers of Dutch. The words vary in phonological makeup and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudo-words were matched in these respects to the real words. The BALDEY data file includes response times and accuracy rates, with for each item morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbors, and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles and frequency ratings by 70 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses.
  • Ernestus, M., & Giezenaar, G. (2015). Een goed verstaander heeft maar een half woord nodig. In B. Bossers (Ed.), Klassiek vakwerk II: Achtergronden van het NT2-onderwijs (pp. 143-155). Amsterdam: Boom.
  • Ernestus, M., & Smith, R. (2018). Qualitative and quantitative aspects of phonetic variation in Dutch eigenlijk. In F. Cangemi, M. Clayards, O. Niebuhr, B. Schuppler, & M. Zellers (Eds.), Rethinking reduction: Interdisciplinary perspectives on conditions, mechanisms, and domains for phonetic variation (pp. 129-163). Berlin/Boston: De Gruyter Mouton.
  • Ernestus, M., Hanique, I., & Verboom, E. (2015). The effect of speech situation on the occurrence of reduced word pronunciation variants. Journal of Phonetics, 48, 60-75. doi:10.1016/j.wocn.2014.08.001.

    Abstract

    This article presents two studies investigating how the situation in which speech is uttered affects the frequency with which words are reduced. Study 1 is based on the Spoken Dutch Corpus, which consists of 15 components, nearly all representing a different speech situation. This study shows that the components differ in how often ten semantically weak words are highly reduced. The differences are especially large between the components with scripted and unscripted speech. Within the component group of unscripted speech, the formality of the situation shows an effect. Study 2 investigated segment reduction in a shadowing experiment in which participants repeated Dutch carefully and casually articulated sentences. Prefixal schwa and suffixal /t/ were absent in participants' responses to both sentence types as often as in formal interviews. If a segment was absent, this appeared to be mostly due to extreme co-articulation, unlike in speech produced in less formal situations. Speakers thus adapted more to the formal situation of the experiment than to the stimuli to be shadowed. We conclude that speech situation affects the occurrence of reduced word pronunciation variants, which should be accounted for by psycholinguistic models of speech production and comprehension.
  • Esling, J. H., Benner, A., & Moisik, S. R. (2015). Laryngeal articulatory function and speech origins. In H. Little (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015) Satellite Event: The Evolution of Phonetic Capabilities: Causes, constraints, consequences (pp. 2-7). Glasgow: ICPhS.

    Abstract

    The larynx is the essential articulatory mechanism that primes the vocal tract. Far from being only a glottal source of voicing, the complex laryngeal mechanism entrains the ontogenetic acquisition of speech and, through coarticulatory coupling, guides the production of oral sounds in the infant vocal tract. As such, it is not possible to speculate as to the origins of the speaking modality in humans without considering the fundamental role played by the laryngeal articulatory mechanism. The Laryngeal Articulator Model, which divides the vocal tract into a laryngeal component and an oral component, serves as a basis for describing early infant speech and for positing how speech sounds evolving in various hominids may be related phonetically. To this end, we offer some suggestions for how the evolution and development of vocal tract anatomy fit with our infant speech acquisition data and discuss the implications this has for explaining phonetic learning and for interpreting the biological evolution of the human vocal tract in relation to speech and speech acquisition.
  • Estruch, S. B., Graham, S. A., Quevedo, M., Vino, A., Dekkers, D. H. W., Deriziotis, P., Sollis, E., Demmers, J., Poot, R. A., & Fisher, S. E. (2018). Proteomic analysis of FOXP proteins reveals interactions between cortical transcription factors associated with neurodevelopmental disorders. Human Molecular Genetics, 27(7), 1212-1227. doi:10.1093/hmg/ddy035.

    Abstract

    FOXP transcription factors play important roles in neurodevelopment, but little is known about how their transcriptional activity is regulated. FOXP proteins cooperatively regulate gene expression by forming homo- and hetero-dimers with each other. Physical associations with other transcription factors might also modulate the functions of FOXP proteins. However, few FOXP-interacting transcription factors have been identified so far. Therefore, we sought to discover additional transcription factors that interact with the brain-expressed FOXP proteins, FOXP1, FOXP2 and FOXP4, through affinity-purifications of protein complexes followed by mass spectrometry. We identified seven novel FOXP-interacting transcription factors (NR2F1, NR2F2, SATB1, SATB2, SOX5, YY1 and ZMYM2), five of which have well-established roles in cortical development. Accordingly, we found that these transcription factors are co-expressed with FoxP2 in the deep layers of the cerebral cortex and also in the Purkinje cells of the cerebellum, suggesting that they may cooperate with the FoxPs to regulate neural gene expression in vivo. Moreover, we demonstrated that etiological mutations of FOXP1 and FOXP2, known to cause neurodevelopmental disorders, severely disrupted the interactions with FOXP-interacting transcription factors. Additionally, we pinpointed specific regions within the FOXP2 sequence involved in mediating these interactions. Thus, by expanding the FOXP interactome we have uncovered part of a broader neural transcription factor network involved in cortical development, providing novel molecular insights into the transcriptional architecture underlying brain development and neurodevelopmental disorders.
  • Estruch, S. B. (2018). Characterization of transcription factors in monogenic disorders of speech and language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Evans, N., Levinson, S. C., & Sterelny, K. (2021). Kinship revisited. Biological theory, 16, 123-126. doi:10.1007/s13752-021-00384-9.
  • Evans, N., Levinson, S. C., & Sterelny, K. (Eds.). (2021). Thematic issue on evolution of kinship systems [Special Issue]. Biological theory, 16.
  • Evans, N., Bergqvist, H., & San Roque, L. (2018). The grammar of engagement I: Framework and initial exemplification. Language and Cognition, 10, 110-140. doi:10.1017/langcog.2017.21.

    Abstract

    Human language offers rich ways to track, compare, and engage the attentional and epistemic states of interlocutors. While this task is central to everyday communication, our knowledge of the cross-linguistic grammatical means that target such intersubjective coordination has remained basic. In two serialised papers, we introduce the term ‘engagement’ to refer to grammaticalised means for encoding the relative mental directedness of speaker and addressee towards an entity or state of affairs, and describe examples of engagement systems from around the world. Engagement systems express the speaker’s assumptions about the degree to which their attention or knowledge is shared (or not shared) by the addressee. Engagement categories can operate at the level of entities in the here-and-now (deixis), in the unfolding discourse (definiteness vs indefiniteness), entire event-depicting propositions (through markers with clausal scope), and even metapropositions (potentially scoping over evidential values). In this first paper, we introduce engagement and situate it with respect to existing work on intersubjectivity in language. We then explore the key role of deixis in coordinating attention and expressing engagement, moving through increasingly intercognitive deictic systems from those that focus on the location of the speaker, to those that encode the attentional state of the addressee.
  • Evans, N., Bergqvist, H., & San Roque, L. (2018). The grammar of engagement II: Typology and diachrony. Language and Cognition, 10(1), 141-170. doi:10.1017/langcog.2017.22.

    Abstract

    Engagement systems encode the relative accessibility of an entity or state of affairs to the speaker and addressee, and are thus underpinned by our social cognitive capacities. In our first foray into engagement (Part 1), we focused on specialised semantic contrasts as found in entity-level deictic systems, tailored to the primal scenario for establishing joint attention. This second paper broadens out to an exploration of engagement at the level of events and even metapropositions, and comments on how such systems may evolve. The languages Andoke and Kogi demonstrate what a canonical system of engagement with clausal scope looks like, symmetrically assigning ‘knowing’ and ‘unknowing’ values to speaker and addressee. Engagement is also found cross-cutting other epistemic categories such as evidentiality, for example where a complex assessment of relative speaker and addressee awareness concerns the source of information rather than the proposition itself. Data from the language Abui reveal that one way in which engagement systems can develop is by upscoping demonstratives, which normally denote entities, to apply at the level of events. We conclude by stressing the need for studies that focus on what difference it makes, in terms of communicative behaviour, for intersubjective coordination to be managed by engagement systems as opposed to other, non-grammaticalised means.
  • Everett, C., Blasi, D. E., & Roberts, S. G. (2015). Climate, vocal folds, and tonal languages: Connecting the physiological and geographic dots. Proceedings of the National Academy of Sciences of the United States of America, 112, 1322-1327. doi:10.1073/pnas.1417413112.

    Abstract

    We summarize a number of findings in laryngology demonstrating that perturbations of phonation, including increased jitter and shimmer, are associated with desiccated ambient air. We predict that, given the relative imprecision of vocal fold vibration in desiccated versus humid contexts, arid and cold ecologies should be less amenable, when contrasted to warm and humid ecologies, to the development of languages with phonemic tone, especially complex tone. This prediction is supported by data from two large independently coded databases representing 3,700+ languages. Languages with complex tonality have generally not developed in very cold or otherwise desiccated climates, in accordance with the physiologically based predictions. The predicted global geographic–linguistic association is shown to operate within continents, within major language families, and across language isolates. Our results offer evidence that human sound systems are influenced by environmental factors.
  • Eviatar, Z., & Huettig, F. (Eds.). (2021). Literacy and writing systems [Special Issue]. Journal of Cultural Cognitive Science.
  • Eviatar, Z., & Huettig, F. (2021). The literate mind. Journal of Cultural Cognitive Science, 5, 81-84. doi:10.1007/s41809-021-00086-5.
  • Faber, M., Mak, M., & Willems, R. M. (2020). Word skipping as an indicator of individual reading style during literary reading. Journal of Eye Movement Research, 13(3): 2. doi:10.16910/jemr.13.3.2.

    Abstract

    Decades of research have established that the content of language (e.g. lexical characteristics of words) predicts eye movements during reading. Here we investigate whether there exist individual differences in ‘stable’ eye movement patterns during narrative reading. We computed Euclidean distances from correlations between gaze durations time courses (word level) across 102 participants who each read three literary narratives in Dutch. The resulting distance matrices were compared between narratives using a Mantel test. The results show that correlations between the scaling matrices of different narratives are relatively weak (r ≤ .11) when missing data points are ignored. However, when including these data points as zero durations (i.e. skipped words), we found significant correlations between stories (r > .51). Word skipping was significantly positively associated with print exposure but not with self-rated attention and story-world absorption, suggesting that more experienced readers are more likely to skip words, and do so in a comparable fashion. We interpret this finding as suggesting that word skipping might be a stable individual eye movement pattern.
  • Fairs, A., Bögels, S., & Meyer, A. S. (2018). Dual-tasking with simple linguistic tasks: Evidence for serial processing. Acta Psychologica, 191, 131-148. doi:10.1016/j.actpsy.2018.09.006.

    Abstract

    In contrast to the large amount of dual-task research investigating the coordination of a linguistic and a nonlinguistic task, little research has investigated how two linguistic tasks are coordinated. However, such research would greatly contribute to our understanding of how interlocutors combine speech planning and listening in conversation. In three dual-task experiments we studied how participants coordinated the processing of an auditory stimulus (S1), which was either a syllable or a tone, with selecting a name for a picture (S2). Two SOAs, of 0 ms and 1000 ms, were used. To vary the time required for lexical selection and to determine when lexical selection took place, the pictures were presented with categorically related or unrelated distractor words. In Experiment 1 participants responded overtly to both stimuli. In Experiments 2 and 3, S1 was not responded to overtly, but determined how to respond to S2, by naming the picture or reading the distractor aloud. Experiment 1 yielded additive effects of SOA and distractor type on the picture naming latencies. The presence of semantic interference at both SOAs indicated that lexical selection occurred after response selection for S1. With respect to the coordination of S1 and S2 processing, Experiments 2 and 3 yielded inconclusive results. In all experiments, syllables interfered more with picture naming than tones. This is likely because the syllables activated phonological representations also implicated in picture naming. The theoretical and methodological implications of the findings are discussed.

    Additional information

    1-s2.0-S0001691817305589-mmc1.pdf
  • Fairs, A. (2019). Linguistic dual-tasking: Understanding temporal overlap between production and comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Falk, J. J., Zhang, Y., Scheutz, M., & Yu, C. (2021). Parents adaptively use anaphora during parent-child social interaction. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 1472-1478). Vienna: Cognitive Science Society.

    Abstract

    Anaphora, a ubiquitous feature of natural language, poses a particular challenge to young children as they first learn language due to its referential ambiguity. In spite of this, parents and caregivers use anaphora frequently in child-directed speech, potentially presenting a risk to effective communication if children do not yet have the linguistic capabilities of resolving anaphora successfully. Through an eye-tracking study in a naturalistic free-play context, we examine the strategies that parents employ to calibrate their use of anaphora to their child's linguistic development level. We show that, in this way, parents are able to intuitively scaffold the complexity of their speech such that greater referential ambiguity does not hurt overall communication success.
  • Favier, S., & Huettig, F. (2021). Are there core and peripheral syntactic structures? Experimental evidence from Dutch native speakers with varying literacy levels. Lingua, 251: 102991. doi:10.1016/j.lingua.2020.102991.

    Abstract

    Some theorists posit the existence of a ‘core’ grammar that virtually all native speakers acquire, and a ‘peripheral’ grammar that many do not. We investigated the viability of such a categorical distinction in the Dutch language. We first consulted linguists’ intuitions as to the ‘core’ or ‘peripheral’ status of a wide range of grammatical structures. We then tested a selection of core- and peripheral-rated structures on naïve participants with varying levels of literacy experience, using grammaticality judgment as a proxy for receptive knowledge. Overall, participants demonstrated better knowledge of ‘core’ structures than ‘peripheral’ structures, but the considerable variability within these categories was strongly suggestive of a continuum rather than a categorical distinction between them. We also hypothesised that individual differences in the knowledge of core and peripheral structures would reflect participants’ literacy experience. This was supported only by a small trend in our data. The results fit best with the notion that more frequent syntactic structures are mastered by more people than infrequent ones and challenge the received sense of a categorical core-periphery distinction.
  • Favier, S., Meyer, A. S., & Huettig, F. (2021). Literacy can enhance syntactic prediction in spoken language processing. Journal of Experimental Psychology: General, 150(10), 2167-2174. doi:10.1037/xge0001042.

    Abstract

    Language comprehenders can use syntactic cues to generate predictions online about upcoming language. Previous research with reading-impaired adults and healthy, low-proficiency adult and child learners suggests that reading skills are related to prediction in spoken language comprehension. Here we investigated whether differences in literacy are also related to predictive spoken language processing in non-reading-impaired proficient adult readers with varying levels of literacy experience. Using the visual world paradigm enabled us to measure prediction based on syntactic cues in the spoken sentence, prior to the (predicted) target word. Literacy experience was found to be the strongest predictor of target anticipation, independent of general cognitive abilities. These findings suggest that a) experience with written language can enhance syntactic prediction of spoken language in normal adult language users, and b) processing skills can be transferred to related tasks (from reading to listening) if the domains involve similar processes (e.g., predictive dependencies) and representations (e.g., syntactic).

    Additional information

    Online supplementary material
  • Favier, S., & Huettig, F. (2021). Long-term written language experience affects grammaticality judgments and usage but not priming of spoken sentences. Quarterly Journal of Experimental Psychology, 74(8), 1378-1395. doi:10.1177/17470218211005228.

    Abstract

    ‘Book language’ offers a richer linguistic experience than typical conversational speech in terms of its syntactic properties. Here, we investigated the role of long-term syntactic experience on syntactic knowledge and processing. In a pre-registered study with 161 adult native Dutch speakers with varying levels of literacy, we assessed the contribution of individual differences in written language experience to offline and online syntactic processes. Offline syntactic knowledge was assessed as accuracy in an auditory grammaticality judgment task in which we tested violations of four Dutch grammatical norms. Online syntactic processing was indexed by syntactic priming of the Dutch dative alternation, using a comprehension-to-production priming paradigm with auditory presentation. Controlling for the contribution of non-verbal IQ, verbal working memory, and processing speed, we observed a robust effect of literacy experience on the detection of grammatical norm violations in spoken sentences, suggesting that exposure to the syntactic complexity and diversity of written language has specific benefits for general (modality-independent) syntactic knowledge. We replicated previous results by finding robust comprehension-to-production structural priming, both with and without lexical overlap between prime and target. Although literacy experience affected the usage of syntactic alternates in our large sample, it did not modulate their priming. We conclude that amount of experience with written language increases explicit awareness of grammatical norm violations and changes the usage of (PO vs. DO) dative spoken sentences but has no detectable effect on their implicit syntactic priming in proficient language users. These findings constrain theories about the effect of long-term experience on syntactic processing.
  • Favier, S. (2020). Individual differences in syntactic knowledge and processing: Exploring the role of literacy experience. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Favier, S., Wright, A., Meyer, A. S., & Huettig, F. (2019). Proficiency modulates between- but not within-language structural priming. Journal of Cultural Cognitive Science, 3(suppl. 1), 105-124. doi:10.1007/s41809-019-00029-1.

    Abstract

    The oldest of the Celtic language family, Irish differs considerably from English, notably with respect to word order and case marking. In spite of differences in surface constituent structure, less restricted accounts of bilingual shared syntax predict that processing datives and passives in Irish should prime the production of their English equivalents. Furthermore, this cross-linguistic influence should be sensitive to L2 proficiency, if shared structural representations are assumed to develop over time. In Experiment 1, we investigated cross-linguistic structural priming from Irish to English in 47 bilingual adolescents who are educated through Irish. Testing took place in a classroom setting, using written primes and written sentence generation. We found that priming for prepositional-object (PO) datives was predicted by self-rated Irish (L2) proficiency, in line with previous studies. In Experiment 2, we presented translations of the materials to an English-educated control group (n=54). We found a within-language priming effect for PO datives, which was not modulated by English (L1) proficiency. Our findings are compatible with current theories of bilingual language processing and L2 syntactic acquisition.
  • Fawcett, C., & Liszkowski, U. (2015). Social referencing during infancy and early childhood across cultures. In J. D. Wright (Ed.), International encyclopedia of the social & behavioral sciences (2nd ed., pp. 556-562). doi:10.1016/B978-0-08-097086-8.23169-3.
  • Fazekas, J., Jessop, A., Pine, J., & Rowland, C. F. (2020). Do children learn from their prediction mistakes? A registered report evaluating error-based theories of language acquisition. Royal Society Open Science, 7(11): 180877. doi:10.1098/rsos.180877.

    Abstract

    Error-based theories of language acquisition suggest that children, like adults, continuously make and evaluate predictions in order to reach an adult-like state of language use. However, while these theories have become extremely influential, their central claim, that unpredictable input leads to higher rates of lasting change in linguistic representations, has scarcely been tested. We designed a prime surprisal-based intervention study to assess this claim. As predicted, both 5- to 6-year-old children (n=72) and adults (n=72) showed a pre- to post-test shift towards producing the dative syntactic structure they were exposed to in surprising sentences. The effect was significant in both age groups together, and in the child group separately when participants with ceiling performance in the pre-test were excluded. Secondary predictions were not upheld: we found no verb-based learning effects and there was only reliable evidence for immediate prime surprisal effects in the adult, but not in the child group. To our knowledge this is the first published study demonstrating enhanced learning rates for the same syntactic structure when it appeared in surprising as opposed to predictable contexts, thus providing crucial support for error-based theories of language acquisition.
  • Felemban, D., Verdonschot, R. G., Iwamoto, Y., Uchiyama, Y., Kakimoto, N., Kreiborg, S., & Murakami, S. (2018). A quantitative experimental phantom study on MRI image uniformity. Dentomaxillofacial Radiology, 47(6): 20180077. doi:10.1259/dmfr.20180077.

    Abstract

    Objectives: Our goal was to assess MR image uniformity by investigating aspects influencing said uniformity via a method laid out by the National Electrical Manufacturers Association (NEMA).
    Methods: Six metallic materials embedded in a glass phantom were scanned (i.e. Au, Ag, Al, Au-Ag-Pd alloy, Ti and Co-Cr alloy) as well as a reference image. Sequences included spin echo (SE) and gradient echo (GRE) scanned in three planes (i.e. axial, coronal, and sagittal). Moreover, three surface coil types (i.e. head and neck, brain, and temporomandibular joint coils) and two image correction methods (i.e. surface coil intensity correction or SCIC, phased array uniformity enhancement or PURE) were employed to evaluate their effectiveness on image uniformity. Image uniformity was assessed using the National Electrical Manufacturers Association peak-deviation non-uniformity method.
    Results: Temporomandibular joint coils elicited the least uniform image, and brain coils outperformed head and neck coils when metallic materials were present. Additionally, when metallic materials were present, spin echo outperformed gradient echo, especially for Co-Cr (particularly in the axial plane). Furthermore, both SCIC and PURE improved image uniformity compared to uncorrected images, and SCIC slightly surpassed PURE when metallic materials were present. Lastly, Co-Cr elicited the least uniform image, while other metallic materials generally showed similar patterns (i.e. no significant deviation from images without metallic materials).
    Conclusions: Overall, a quantitative understanding of the factors influencing MR image uniformity (e.g. coil type, imaging method, metal susceptibility, and post-hoc correction method) is advantageous to optimize image quality, assists clinical interpretation, and may result in improved medical and dental care.
  • Felker, E. R., Ernestus, M., & Broersma, M. (2019). Evaluating dictation task measures for the study of speech perception. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 383-387). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    This paper shows that the dictation task, a well-known testing instrument in language education, has untapped potential as a research tool for studying speech perception. We describe how transcriptions can be scored on measures of lexical, orthographic, phonological, and semantic similarity to target phrases to provide comprehensive information about accuracy at different processing levels. The former three measures are automatically extractable, increasing objectivity, and the middle two are gradient, providing finer-grained information than traditionally used. We evaluate the measures in an English dictation task featuring phonetically reduced continuous speech. Whereas the lexical and orthographic measures emphasize listeners’ word identification difficulties, the phonological measure demonstrates that listeners can often still recover phonological features, and the semantic measure captures their ability to get the gist of the utterances. Correlational analyses and a discussion of practical and theoretical considerations show that combining multiple measures improves the dictation task’s utility as a research tool.
  • Felker, E. R., Ernestus, M., & Broersma, M. (2019). Lexically guided perceptual learning of a vowel shift in an interactive L2 listening context. In Proceedings of Interspeech 2019 (pp. 3123-3127). doi:10.21437/Interspeech.2019-1414.

    Abstract

    Lexically guided perceptual learning has traditionally been studied with ambiguous consonant sounds to which native listeners are exposed in a purely receptive listening context. To extend previous research, we investigate whether lexically guided learning applies to a vowel shift encountered by non-native listeners in an interactive dialogue. Dutch participants played a two-player game in English in either a control condition, which contained no evidence for a vowel shift, or a lexically constraining condition, in which onscreen lexical information required them to re-interpret their interlocutor’s /ɪ/ pronunciations as representing /ε/. A phonetic categorization pre-test and post-test were used to assess whether the game shifted listeners’ phonemic boundaries such that more of the /ε/-/ɪ/ continuum came to be perceived as /ε/. Both listener groups showed an overall post-test shift toward /ɪ/, suggesting that vowel perception may be sensitive to directional biases related to properties of the speaker’s vowel space. Importantly, listeners in the lexically constraining condition made relatively more post-test /ε/ responses than the control group, thereby exhibiting an effect of lexically guided adaptation. The results thus demonstrate that non-native listeners can adjust their phonemic boundaries on the basis of lexical information to accommodate a vowel shift learned in interactive conversation.
  • Felker, E. R., Broersma, M., & Ernestus, M. (2021). The role of corrective feedback and lexical guidance in perceptual learning of a novel L2 accent in dialogue. Applied Psycholinguistics, 42, 1029-1055. doi:10.1017/S0142716421000205.

    Abstract

    Perceptual learning of novel accents is a critical skill for second-language speech perception, but little is known about the mechanisms that facilitate perceptual learning in communicative contexts. To study perceptual learning in an interactive dialogue setting while maintaining experimental control of the phonetic input, we employed an innovative experimental method incorporating prerecorded speech into a naturalistic conversation. Using both computer-based and face-to-face dialogue settings, we investigated the effect of two types of learning mechanisms in interaction: explicit corrective feedback and implicit lexical guidance. Dutch participants played an information-gap game featuring minimal pairs with an accented English speaker whose /ε/ pronunciations were shifted to /ɪ/. Evidence for the vowel shift came either from corrective feedback about participants’ perceptual mistakes or from onscreen lexical information that constrained their interpretation of the interlocutor’s words. Corrective feedback explicitly contrasting the minimal pairs was more effective than generic feedback. Additionally, both receiving lexical guidance and exhibiting more uptake for the vowel shift improved listeners’ subsequent online processing of accented words. Comparable learning effects were found in both the computer-based and face-to-face interactions, showing that our results can be generalized to a more naturalistic learning context than traditional computer-based perception training programs.
  • Felker, E. R. (2021). Learning second language speech perception in natural settings. PhD Thesis, Radboud University, Nijmegen.
  • Felker, E. R., Klockmann, H. E., & De Jong, N. H. (2019). How conceptualizing influences fluency in first and second language speech production. Applied Psycholinguistics, 40(1), 111-136. doi:10.1017/S0142716418000474.

    Abstract

    When speaking in any language, speakers must conceptualize what they want to say before they can formulate and articulate their message. We present two experiments employing a novel experimental paradigm in which the formulating and articulating stages of speech production were kept identical across conditions of differing conceptualizing difficulty. We tracked the effect of difficulty in conceptualizing during the generation of speech (Experiment 1) and during the abandonment and regeneration of speech (Experiment 2) on speaking fluency by Dutch native speakers in their first (L1) and second (L2) language (English). The results showed that abandoning and especially regenerating a speech plan taxes the speaker, leading to disfluencies. For most fluency measures, the increases in disfluency were similar across L1 and L2. However, a significant interaction revealed that abandoning and regenerating a speech plan increases the time needed to solve conceptual difficulties while speaking in the L2 to a greater degree than in the L1. This finding supports theories in which cognitive resources for conceptualizing are shared with those used for later stages of speech planning. Furthermore, a practical implication for language assessment is that increasing the conceptual difficulty of speaking tasks should be considered with caution.
  • Felker, E. R., Troncoso Ruiz, A., Ernestus, M., & Broersma, M. (2018). The ventriloquist paradigm: Studying speech processing in conversation with experimental control over phonetic input. The Journal of the Acoustical Society of America, 144(4), EL304-EL309. doi:10.1121/1.5063809.

    Abstract

    This article presents the ventriloquist paradigm, an innovative method for studying speech processing in dialogue whereby participants interact face-to-face with a confederate who, unbeknownst to them, communicates by playing pre-recorded speech. Results show that the paradigm convinces more participants that the speech is live than a setup without the face-to-face element, and it elicits more interactive conversation than a setup in which participants believe their partner is a computer. By reconciling the ecological validity of a conversational context with full experimental control over phonetic exposure, the paradigm offers a wealth of new possibilities for studying speech processing in interaction.
  • Fernandes, T., Arunkumar, M., & Huettig, F. (2021). The role of the written script in shaping mirror-image discrimination: Evidence from illiterate, Tamil literate, and Tamil-Latin-alphabet bi-literate adults. Cognition, 206: 104493. doi:10.1016/j.cognition.2020.104493.

    Abstract

    Learning a script with mirrored graphs (e.g., d ≠ b) requires overcoming the evolutionary-old perceptual tendency to process mirror images as equivalent. Thus, breaking mirror invariance offers an important tool for understanding cultural re-shaping of evolutionarily ancient cognitive mechanisms. Here we investigated the role of script (i.e., presence vs. absence of mirrored graphs: Latin alphabet vs. Tamil) by revisiting mirror-image processing by illiterate, Tamil monoliterate, and Tamil-Latin-alphabet bi-literate adults. Participants performed two same-different tasks (one orientation-based, another shape-based) on Latin-alphabet letters. Tamil monoliterate were significantly better than illiterate and showed good explicit mirror-image discrimination. However, only bi-literate adults fully broke mirror invariance: slower shape-based judgments for mirrored than identical pairs and reduced disadvantage in orientation-based over shape-based judgments of mirrored pairs. These findings suggest learning a script with mirrored graphs is the strongest force for breaking mirror invariance.

    Additional information

    supplementary material
