Publications

  • Trompenaars, T., Kaluge, T. A., Sarabi, R., & De Swart, P. (2021). Cognitive animacy and its relation to linguistic animacy: Evidence from Japanese and Persian. Language Sciences, 86: 101399. doi:10.1016/j.langsci.2021.101399.

    Abstract

    Animacy, commonly defined as the distinction between living and non-living entities, is a useful notion in cognitive science and linguistics employed to describe and predict variation in psychological and linguistic behaviour. In the (psycho)linguistics literature we find linguistic animacy dichotomies which are (implicitly) assumed to correspond to biological dichotomies. We argue this is problematic, as it leaves us without a cognitively grounded, universal description for non-prototypical cases. We show that ‘animacy’ in language can be better understood as universally emerging from a gradual, cognitive property by collecting animacy ratings for a great range of nouns from Japanese and Persian. We used these cognitive ratings in turn to predict linguistic variation in these languages traditionally explained through dichotomous distinctions. We show that whilst (speakers of) languages may subtly differ in their conceptualisation of animacy, universality may be found in the process of mapping conceptual animacy to linguistic variation.
  • Trujillo, J. P., & Holler, J. (2021). The kinematics of social action: Visual signals provide cues for what interlocutors do in conversation. Brain Sciences, 11: 996. doi:10.3390/brainsci11080996.

    Abstract

    During natural conversation, people must quickly understand the meaning of what the other speaker is saying. This concerns not just the semantic content of an utterance, but also the social action (i.e., what the utterance is doing—requesting information, offering, evaluating, checking mutual understanding, etc.) that the utterance is performing. The multimodal nature of human language raises the question of whether visual signals may contribute to the rapid processing of such social actions. However, while previous research has shown that how we move reveals the intentions underlying instrumental actions, we do not know whether the intentions underlying fine-grained social actions in conversation are also revealed in our bodily movements. Using a corpus of dyadic conversations combined with manual annotation and motion tracking, we analyzed the kinematics of the torso, head, and hands during the asking of questions. Manual annotation categorized these questions into six more fine-grained social action types (i.e., request for information, other-initiated repair, understanding check, stance or sentiment, self-directed, active participation). We demonstrate, for the first time, that the kinematics of the torso, head and hands differ between some of these different social action categories based on a 900 ms time window that captures movements starting slightly prior to or within 600 ms after utterance onset. These results provide novel insights into the extent to which our intentions shape the way that we move, and provide new avenues for understanding how this phenomenon may facilitate the fast communication of meaning in conversational interaction.

    Additional information

    analysis scripts
  • Trujillo, J. P., Ozyurek, A., Holler, J., & Drijvers, L. (2021). Speakers exhibit a multimodal Lombard effect in noise. Scientific Reports, 11: 16721. doi:10.1038/s41598-021-95791-0.

    Abstract

    In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones with varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that (1) noise is associated with increased speech intensity and enhanced gesture kinematics and mouth movements, and (2) acoustic modulation only occurs when gestures are not present, while kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters the Lombard effect is not constrained to speech but is a multimodal phenomenon where the visual channel carries most of the communicative burden.

    Additional information

    supplementary material
  • Trujillo, J. P., Ozyurek, A., Kan, C. C., Sheftel-Simanova, I., & Bekkering, H. (2021). Differences in the production and perception of communicative kinematics in autism. Autism Research, 14(12), 2640-2653. doi:10.1002/aur.2611.

    Abstract

    In human communication, social intentions and meaning are often revealed in the way we move. In this study, we investigate the flexibility of human communication in terms of kinematic modulation in a clinical population, namely, autistic individuals. The aim of this study was twofold: to assess (a) whether communicatively relevant kinematic features of gestures differ between autistic and neurotypical individuals, and (b) if autistic individuals use communicative kinematic modulation to support gesture recognition. We tested autistic and neurotypical individuals on a silent gesture production task and a gesture comprehension task. We measured movement during the gesture production task using a Kinect motion tracking device in order to determine if autistic individuals differed from neurotypical individuals in their gesture kinematics. For the gesture comprehension task, we assessed whether autistic individuals used communicatively relevant kinematic cues to support recognition. This was done by using stick-light figures as stimuli and testing for a correlation between the kinematics of these videos and recognition performance. We found that (a) silent gestures produced by autistic and neurotypical individuals differ in communicatively relevant kinematic features, such as the number of meaningful holds between movements, and (b) while autistic individuals are overall unimpaired at recognizing gestures, they processed repetition and complexity, measured as the amount of submovements perceived, differently than neurotypicals do. These findings highlight how subtle aspects of neurotypical behavior can be experienced differently by autistic individuals. They further demonstrate the relationship between movement kinematics and social interaction in high-functioning autistic individuals.

    Additional information

    supporting information
  • Tsoukala, C., Frank, S. L., Van den Bosch, A., Valdés Kroff, J., & Broersma, M. (2021). Modeling the auxiliary phrase asymmetry in code-switched Spanish–English. Bilingualism: Language and Cognition, 24(2), 271-280. doi:10.1017/S1366728920000449.

    Abstract

    Spanish–English bilinguals rarely code-switch in the perfect structure between the Spanish auxiliary haber (“to have”) and the participle (e.g., “Ella ha voted”; “She has voted”). However, they are somewhat likely to switch in the progressive structure between the Spanish auxiliary estar (“to be”) and the participle (“Ella está voting”; “She is voting”). This phenomenon is known as the “auxiliary phrase asymmetry”. One hypothesis as to why this occurs is that estar has more semantic weight as it also functions as an independent verb, whereas haber is almost exclusively used as an auxiliary verb. To test this hypothesis, we employed a connectionist model that produces spontaneous code-switches. Through simulation experiments, we showed that i) the asymmetry emerges in the model and that ii) the asymmetry disappears when using haber also as a main verb, which adds semantic weight. Therefore, the lack of semantic weight of haber may indeed cause the asymmetry.
  • Tsoukala, C., Broersma, M., Van den Bosch, A., & Frank, S. L. (2021). Simulating code-switching using a neural network model of bilingual sentence production. Computational Brain & Behavior, 4, 87-100. doi:10.1007/s42113-020-00088-6.

    Abstract

    Code-switching is the alternation from one language to the other during bilingual speech. We present a novel method of researching this phenomenon using computational cognitive modeling. We trained a neural network of bilingual sentence production to simulate early balanced Spanish–English bilinguals, late speakers of English who have Spanish as a dominant native language, and late speakers of Spanish who have English as a dominant native language. The model produced code-switches even though it was not exposed to code-switched input. The simulations predicted how code-switching patterns differ between early balanced and late non-balanced bilinguals; the balanced bilingual simulation code-switches considerably more frequently, which is in line with what has been observed in human speech production. Additionally, we compared the patterns produced by the simulations with two corpora of spontaneous bilingual speech and identified noticeable commonalities and differences. To our knowledge, this is the first computational cognitive model simulating the code-switched production of non-balanced bilinguals and comparing the simulated production of balanced and non-balanced bilinguals with that of human bilinguals.

    Additional information

    dual-path model
  • Tsuji, S., Gonzalez Gomez, N., Medina, V., Nazzi, T., & Mazuka, R. (2012). The labial–coronal effect revisited: Japanese adults say pata, but hear tapa. Cognition, 125, 413-428. doi:10.1016/j.cognition.2012.07.017.

    Abstract

    The labial–coronal effect has originally been described as a bias to initiate a word with a labial consonant–vowel–coronal consonant (LC) sequence. This bias has been explained with constraints on the human speech production system, and its perceptual correlates have motivated the suggestion of a perception–production link. However, previous studies exclusively considered languages in which LC sequences are globally more frequent than their counterpart. The current study examined the LC bias in speakers of Japanese, a language that has been claimed to possess more CL than LC sequences. We first conducted an analysis of Japanese corpora that qualified this claim, and identified a subgroup of consonants (plosives) exhibiting a CL bias. Second, focusing on this subgroup of consonants, we found diverging results for production and perception such that Japanese speakers exhibited an articulatory LC bias, but a perceptual CL bias. The CL perceptual bias, however, was modulated by language of presentation, and was only present for stimuli recorded by a Japanese, but not a French, speaker. A further experiment with native speakers of French showed the opposite effect, with an LC bias for French stimuli only. Overall, we find support for a universal, articulatory motivated LC bias in production, supporting a motor explanation of the LC effect, while perceptual biases are influenced by distributional frequencies of the native language.
  • Tuinman, A., Mitterer, H., & Cutler, A. (2012). Resolving ambiguity in familiar and unfamiliar casual speech. Journal of Memory and Language, 66, 530-544. doi:10.1016/j.jml.2012.02.001.

    Abstract

    In British English, the phrase Canada aided can sound like Canada raided if the speaker links the two vowels at the word boundary with an intrusive /r/. There are subtle phonetic differences between an onset /r/ and an intrusive /r/, however. With cross-modal priming and eye-tracking, we examine how native British English listeners and non-native (Dutch) listeners deal with the lexical ambiguity arising from this language-specific connected speech process. Together the results indicate that the presence of /r/ initially activates competing words for both listener groups; however, the native listeners rapidly exploit the phonetic cues and achieve correct lexical selection. In contrast, these advanced L2 listeners to English failed to recover from the /r/-induced competition, and failed to match native performance in either task. The /r/-intrusion process, which adds a phoneme to speech input, thus causes greater difficulty for L2 listeners than connected-speech processes which alter or delete phonemes.
  • Udden, J., & Bahlmann, J. (2012). A rostro-caudal gradient of structured sequence processing in the left inferior frontal gyrus [Review article]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 2023-2032. doi:10.1098/rstb.2012.0009.

    Abstract

    In this paper, we present two novel perspectives on the function of the left inferior frontal gyrus (LIFG). First, a structured sequence processing perspective facilitates the search for functional segregation within the LIFG and provides a way to express common aspects across cognitive domains including language, music and action. Converging evidence from functional magnetic resonance imaging and transcranial magnetic stimulation studies suggests that the LIFG is engaged in sequential processing in artificial grammar learning, independently of particular stimulus features of the elements (whether letters, syllables or shapes are used to build up sequences). The LIFG has been repeatedly linked to processing of artificial grammars across all different grammars tested, whether they include non-adjacent dependencies or mere adjacent dependencies. Second, we apply the sequence processing perspective to understand how the functional segregation of semantics, syntax and phonology in the LIFG can be integrated in the general organization of the lateral prefrontal cortex (PFC). Recently, it was proposed that the functional organization of the lateral PFC follows a rostro-caudal gradient, such that more abstract processing in cognitive control is subserved by more rostral regions of the lateral PFC. We explore the literature from the viewpoint that functional segregation within the LIFG can be embedded in a general rostro-caudal abstraction gradient in the lateral PFC. If the lateral PFC follows a rostro-caudal abstraction gradient, then this predicts that the LIFG follows the same principles, but this prediction has not yet been tested or explored in the LIFG literature. Integration might provide further insights into the functional architecture of the LIFG and the lateral PFC.
  • Udden, J., Ingvar, M., Hagoort, P., & Petersson, K. M. (2012). Implicit acquisition of grammars with crossed and nested non-adjacent dependencies: Investigating the push-down stack model. Cognitive Science, 36, 1078-1101. doi:10.1111/j.1551-6709.2012.01235.x.

    Abstract

    A recent hypothesis in empirical brain research on language is that the fundamental difference between animal and human communication systems is captured by the distinction between finite-state and more complex phrase-structure grammars, such as context-free and context-sensitive grammars. However, the relevance of this distinction for the study of language as a neurobiological system has been questioned and it has been suggested that a more relevant and partly analogous distinction is that between non-adjacent and adjacent dependencies. Online memory resources are central to the processing of non-adjacent dependencies as information has to be maintained across intervening material. One proposal is that an external memory device in the form of a limited push-down stack is used to process non-adjacent dependencies. We tested this hypothesis in an artificial grammar learning paradigm where subjects acquired non-adjacent dependencies implicitly. Generally, we found no qualitative differences between the acquisition of non-adjacent dependencies and adjacent dependencies. This suggests that although the acquisition of non-adjacent dependencies requires more exposure to the acquisition material, it utilizes the same mechanisms used for acquiring adjacent dependencies. We challenge the push-down stack model further by testing its processing predictions for nested and crossed multiple non-adjacent dependencies. The push-down stack model is partly supported by the results, and we suggest that stack-like properties are some among many natural properties characterizing the underlying neurophysiological mechanisms that implement the online memory resources used in language and structured sequence processing.
  • Urrutia, M., de Vega, M., & Bastiaansen, M. C. M. (2012). Understanding counterfactuals in discourse modulates ERP and oscillatory gamma rhythms in the EEG. Brain Research, 1455, 40-55. doi:10.1016/j.brainres.2012.03.032.

    Abstract

    This study provides ERP and oscillatory dynamics data associated with the comprehension of narratives involving counterfactual events. Participants were given short stories describing an initial situation (“Marta wanted to plant flowers in her garden…”), followed by a critical sentence describing a new situation in either a factual (“Since she found a spade, she started to dig a hole”) or counterfactual format (“If she had found a spade, she would have started to dig a hole”), and then a continuation sentence that was either related to the initial situation (“she bought a spade”) or to the new one (“she planted roses”). The ERPs recorded for the continuation sentences related to the initial situation showed larger negativity after factuals than after counterfactuals, suggesting that the counterfactual's presupposition – the events did not occur – prevents updating the here-and-now of discourse. By contrast, continuation sentences related to the new situation elicited similar ERPs under both factual and counterfactual contexts, suggesting that counterfactuals also activate momentarily an alternative “as if” meaning. However, the reduction of gamma power following counterfactuals, suggests that the “as if” meaning is not integrated into the discourse, nor does it contribute to semantic unification processes.
  • Vágvölgyi, R., Bergström, K., Bulajić, A., Klatte, M., Fernandes, T., Grosche, M., Huettig, F., Rüsseler, J., & Lachmann, T. (2021). Functional illiteracy and developmental dyslexia: Looking for common roots. A systematic review. Journal of Cultural Cognitive Science, 5, 159-179. doi:10.1007/s41809-021-00074-9.

    Abstract

    A considerable proportion of the population in more economically developed countries is functionally illiterate (i.e., low literate). Despite some years of schooling and basic reading skills, these individuals cannot properly read and write and, as a consequence, have problems understanding even short texts. An often-discussed approach (Greenberg et al., 1997) assumes weak phonological processing skills coupled with untreated developmental dyslexia as possible causes of functional illiteracy. Although there is some data suggesting commonalities between low literacy and developmental dyslexia, it is still not clear whether these reflect shared consequences (i.e., cognitive and behavioral profile) or shared causes. The present systematic review aims at exploring the similarities and differences identified in empirical studies investigating both functional illiterate and developmental dyslexic samples. Nine electronic databases were searched in order to identify all quantitative studies published in English or German. Although a broad search strategy and few limitations were applied, only 5 studies were identified as adequate from the resulting 9269 references. The results point to the lack of studies directly comparing functional illiterate with developmental dyslexic samples. Moreover, a huge variance has been identified between the studies in how they approached the concept of functional illiteracy, particularly when it came to critical categories such as the applied definition, terminology, criteria for inclusion in the sample, research focus, and outcome measures. The available data highlight the need for more direct comparisons in order to understand to what extent functional illiteracy and dyslexia share common characteristics.

    Additional information

    supplementary materials
  • Van Bergen, G., & Hogeweg, L. (2021). Managing interpersonal discourse expectations: a comparative analysis of contrastive discourse particles in Dutch. Linguistics, 59(2), 333-360. doi:10.1515/ling-2021-0020.

    Abstract

    In this article we investigate how speakers manage discourse expectations in dialogue by comparing the meaning and use of three Dutch discourse particles, i.e. wel, toch and eigenlijk, which all express a contrast between their host utterance and a discourse-based expectation. The core meanings of toch, wel and eigenlijk are formally distinguished on the basis of two intersubjective parameters: (i) whether the particle marks alignment or misalignment between speaker and addressee discourse beliefs, and (ii) whether the particle requires an assessment of the addressee’s representation of mutual discourse beliefs. By means of a quantitative corpus study, we investigate to what extent the intersubjective meaning distinctions between wel, toch and eigenlijk are reflected in statistical usage patterns across different social situations. Results suggest that wel, toch and eigenlijk are lexicalizations of distinct generalized politeness strategies when expressing contrast in social interaction. Our findings call for an interdisciplinary approach to discourse particles in order to enhance our understanding of their functions in language.
  • Van Heukelum, S., Tulva, K., Geers, F. E., van Dulm, S., Ruisch, I. H., Mill, J., Viana, J. F., Beckmann, C. F., Buitelaar, J. K., Poelmans, G., Glennon, J. C., Vogt, B. A., Havenith, M. N., & França, A. S. (2021). A central role for anterior cingulate cortex in the control of pathological aggression. Current Biology, 31, 2321-2333.e5. doi:10.1016/j.cub.2021.03.062.

    Abstract

    Controlling aggression is a crucial skill in social species like rodents and humans and has been associated with anterior cingulate cortex (ACC). Here, we directly link the failed regulation of aggression in BALB/cJ mice to ACC hypofunction. We first show that ACC in BALB/cJ mice is structurally degraded: neuron density is decreased, with pervasive neuron death and reactive astroglia. Gene-set enrichment analysis suggested that this process is driven by neuronal degeneration, which then triggers toxic astrogliosis. cFos expression across ACC indicated functional consequences: during aggressive encounters, ACC was engaged in control mice, but not BALB/cJ mice. Chemogenetically activating ACC during aggressive encounters drastically suppressed pathological aggression but left species-typical aggression intact. The network effects of our chemogenetic perturbation suggest that this behavioral rescue is mediated by suppression of amygdala and hypothalamus and activation of mediodorsal thalamus. Together, these findings highlight the central role of ACC in curbing pathological aggression.
  • Ip, H. F., Van der Laan, C. M., Krapohl, E. M. L., Brikell, I., Sánchez-Mora, C., Nolte, I. M., St Pourcain, B., Bolhuis, K., Palviainen, T., Zafarmand, H., Colodro-Conde, L., Gordon, S., Zayats, T., Aliev, F., Jiang, C., Wang, C. A., Saunders, G., Karhunen, V., Hammerschlag, A. R., Adkins, D. E., Border, R., Peterson, R. E., Prinz, J. A., Thiering, E., Seppälä, I., Vilor-Tejedor, N., Ahluwalia, T. S., Day, F. R., Hottenga, J.-J., Allegrini, A. G., Rimfeld, K., Chen, Q., Lu, Y., Martin, J., Soler Artigas, M., Rovira, P., Bosch, R., Español, G., Ramos Quiroga, J. A., Neumann, A., Ensink, J., Grasby, K., Morosoli, J. J., Tong, X., Marrington, S., Middeldorp, C., Scott, J. G., Vinkhuyzen, A., Shabalin, A. A., Corley, R., Evans, L. M., Sugden, K., Alemany, S., Sass, L., Vinding, R., Ruth, K., Tyrrell, J., Davies, G. E., Ehli, E. A., Hagenbeek, F. A., De Zeeuw, E., Van Beijsterveldt, T. C., Larsson, H., Snieder, H., Verhulst, F. C., Amin, N., Whipp, A. M., Korhonen, T., Vuoksimaa, E., Rose, R. J., Uitterlinden, A. G., Heath, A. C., Madden, P., Haavik, J., Harris, J. R., Helgeland, Ø., Johansson, S., Knudsen, G. P. S., Njolstad, P. R., Lu, Q., Rodriguez, A., Henders, A. K., Mamun, A., Najman, J. M., Brown, S., Hopfer, C., Krauter, K., Reynolds, C., Smolen, A., Stallings, M., Wadsworth, S., Wall, T. L., Silberg, J. L., Miller, A., Keltikangas-Järvinen, L., Hakulinen, C., Pulkki-Råback, L., Havdahl, A., Magnus, P., Raitakari, O. T., Perry, J. R. B., Llop, S., Lopez-Espinosa, M.-J., Bønnelykke, K., Bisgaard, H., Sunyer, J., Lehtimäki, T., Arseneault, L., Standl, M., Heinrich, J., Boden, J., Pearson, J., Horwood, L. J., Kennedy, M., Poulton, R., Eaves, L. J., Maes, H. H., Hewitt, J., Copeland, W. E., Costello, E. J., Williams, G. M., Wray, N., Järvelin, M.-R., McGue, M., Iacono, W., Caspi, A., Moffitt, T. E., Whitehouse, A., Pennell, C. E., Klump, K. L., Burt, S. A., Dick, D. M., Reichborn-Kjennerud, T., Martin, N. G., Medland, S. E., Vrijkotte, T., Kaprio, J., Tiemeier, H., Davey Smith, G., Hartman, C. A., Oldehinkel, A. J., Casas, M., Ribasés, M., Lichtenstein, P., Lundström, S., Plomin, R., Bartels, M., Nivard, M. G., & Boomsma, D. I. (2021). Genetic association study of childhood aggression across raters, instruments, and age. Translational Psychiatry, 11: 413. doi:10.1038/s41398-021-01480-x.
  • van der Burght, C. L., Friederici, A. D., Goucha, T., & Hartwigsen, G. (2021). Pitch accents create dissociable syntactic and semantic expectations during sentence processing. Cognition, 212: 104702. doi:10.1016/j.cognition.2021.104702.

    Abstract

    The language system uses syntactic, semantic, as well as prosodic cues to efficiently guide auditory sentence comprehension. Prosodic cues, such as pitch accents, can build expectations about upcoming sentence elements. This study investigates to what extent syntactic and semantic expectations generated by pitch accents can be dissociated and if so, which cues take precedence when contradictory information is present. We used sentences in which one out of two nominal constituents was placed in contrastive focus with a third one. All noun phrases carried overt syntactic information (case-marking of the determiner) and semantic information (typicality of the thematic role of the noun). Two experiments (a sentence comprehension and a sentence completion task) show that focus, marked by pitch accents, established expectations in both syntactic and semantic domains. However, only the syntactic expectations, when violated, were strong enough to interfere with sentence comprehension. Furthermore, when contradictory cues occurred in the same sentence, the local syntactic cue (case-marking) took precedence over the semantic cue (thematic role), and overwrote previous information cued by prosody. The findings indicate that during auditory sentence comprehension the processing system integrates different sources of information for argument role assignment, yet primarily relies on syntactic information.
  • Van Paridon, J., Ostarek, M., Arunkumar, M., & Huettig, F. (2021). Does neuronal recycling result in destructive competition? The influence of learning to read on the recognition of faces. Psychological Science, 32, 459-465. doi:10.1177/0956797620971652.

    Abstract

    Written language, a human cultural invention, is far too recent for dedicated neural infrastructure to have evolved in its service. Culturally newly acquired skills (e.g. reading) thus ‘recycle’ evolutionarily older circuits that originally evolved for different, but similar functions (e.g. visual object recognition). The destructive competition hypothesis predicts that this neuronal recycling has detrimental behavioral effects on the cognitive functions a cortical network originally evolved for. In a study with 97 literate, low-literate, and illiterate participants from the same socioeconomic background we find that even after adjusting for cognitive ability and test-taking familiarity, learning to read is associated with an increase, rather than a decrease, in object recognition abilities. These results are incompatible with the claim that neuronal recycling results in destructive competition and consistent with the possibility that learning to read instead fine-tunes general object recognition mechanisms, a hypothesis that needs further neuroscientific investigation.

    Additional information

    supplemental material
  • Van Leeuwen, T. M., Wilsson, L., Norrman, H. N., Dingemanse, M., Bölte, S., & Neufeld, J. (2021). Perceptual processing links autism and synesthesia: A co-twin control study. Cortex, 145, 236-249. doi:10.1016/j.cortex.2021.09.016.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1998). Brain activity during speaking: From syntax to phonology in 40 milliseconds. Science, 280(5363), 572-574. doi:10.1126/science.280.5363.572.

    Abstract

    In normal conversation, speakers translate thoughts into words at high speed. To enable this speed, the retrieval of distinct types of linguistic knowledge has to be orchestrated with millisecond precision. The nature of this orchestration is still largely unknown. This report presents dynamic measures of the real-time activation of two basic types of linguistic knowledge, syntax and phonology. Electrophysiological data demonstrate that during noun-phrase production speakers retrieve the syntactic gender of a noun before its abstract phonological properties. This two-step process operates at high speed: the data show that phonological information is already available 40 milliseconds after syntactic properties have been retrieved.
  • Van Berkum, J. J. A. (1996). De taalpsychologie van genus. NEDER-L, Electronisch Tijdschrift voor de Neerlandistiek, (9601.a): 9601.04.
  • Van den Brink, D., Van Berkum, J. J. A., Bastiaansen, M. C. M., Tesink, C. M. J. Y., Kos, M., Buitelaar, J. K., & Hagoort, P. (2012). Empathy matters: ERP evidence for inter-individual differences in social language processing. Social, Cognitive and Affective Neuroscience, 7, 173-182. doi:10.1093/scan/nsq094.

    Abstract

    When an adult claims he cannot sleep without his teddy bear, people tend to react surprised. Language interpretation is, thus, influenced by social context, such as who the speaker is. The present study reveals inter-individual differences in brain reactivity to social aspects of language. Whereas women showed brain reactivity when stereotype-based inferences about a speaker conflicted with the content of the message, men did not. This sex difference in social information processing can be explained by a specific cognitive trait, one’s ability to empathize. Individuals who empathize to a greater degree revealed larger N400 effects (as well as a larger increase in γ-band power) to socially relevant information. These results indicate that individuals with high-empathizing skills are able to rapidly integrate information about the speaker with the content of the message, as they make use of voice-based inferences about the speaker to process language in a top-down manner. Alternatively, individuals with lower empathizing skills did not use information about social stereotypes in implicit sentence comprehension, but rather took a more bottom-up approach to the processing of these social pragmatic sentences.
  • Van de Geer, J. P., & Levelt, W. J. M. (1963). Detection of visual patterns disturbed by noise: An exploratory study. Quarterly Journal of Experimental Psychology, 15, 192-204. doi:10.1080/17470216308416324.

    Abstract

    An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns), led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight in the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially to patterns with little spread. Second, this shows connections with the factors obtained from analysis of the semantic ratings, e.g. easily disturbed patterns show a large drop in the semantic regularity factor, when only a little noise is added.
  • Van Leeuwen, E. J. C., Cronin, K. A., Haun, D. B. M., Mundry, R., & Bodamer, M. D. (2012). Neighbouring chimpanzee communities show different preferences in social grooming behaviour. Proceedings of the Royal Society B: Biological Sciences, 279, 4362-4367. doi:10.1098/rspb.2012.1543.

    Abstract

    Grooming handclasp (GHC) behaviour was originally advocated as the first evidence of social culture in chimpanzees owing to the finding that some populations engage in the behaviour and others do not. To date, however, the validity of this claim and the extent to which this social behaviour varies between groups is unclear. Here, we measured (i) variation, (ii) durability and (iii) expansion of the GHC behaviour in four chimpanzee communities that do not systematically differ in their genetic backgrounds and live in similar ecological environments. Ninety chimpanzees were studied for a total of 1029 h; 1394 GHC bouts were observed between 2010 and 2012. Critically, GHC style (defined by points of bodily contact) could be systematically linked to the chimpanzee’s group identity, showed temporal consistency both within- and between-groups, and could not be accounted for by the arm-length differential between partners. GHC has been part of the behavioural repertoire of the chimpanzees under study for more than 9 years (surpassing durability criterion) and spread across generations (surpassing expansion criterion). These results strongly indicate that chimpanzees’ social behaviour is not only motivated by innate predispositions and individual inclinations, but may also be partly cultural in nature.
  • Van de Geer, J. P., Levelt, W. J. M., & Plomp, R. (1962). The connotation of musical consonance. Acta Psychologica, 20, 308-319.

    Abstract

    As a preliminary to further research on musical consonance an exploratory investigation was made of the different modes of judgment of musical intervals. This was done by way of a semantic differential. Subjects rated 23 intervals against 10 scales. In a factor analysis three factors appeared: pitch, evaluation and fusion. The relation between these factors and some physical characteristics has been investigated. The scale consonant-dissonant proved to be purely evaluative (in opposition to Stumpf's theory). This evaluative connotation is not in accordance with the musicological meaning of consonance. Suggestions to account for this difference have been given.
  • Van Tiel, B., Deliens, G., Geelhand, P., Murillo Oosterwijk, A., & Kissine, M. (2021). Strategic deception in adults with autism spectrum disorder. Journal of Autism and Developmental Disorders, 51, 255-266. doi:10.1007/s10803-020-04525-0.

    Abstract

    Autism Spectrum Disorder (ASD) is often associated with impaired perspective-taking skills. Deception is an important indicator of perspective-taking, and therefore may be thought to pose difficulties to people with ASD (e.g., Baron-Cohen in J Child Psychol Psychiatry 3:1141–1155, 1992). To test this hypothesis, we asked participants with and without ASD to play a computerised deception game. We found that participants with ASD were equally likely—and in complex cases of deception even more likely—to deceive and detect deception, and learned deception at a faster rate. However, participants with ASD initially deceived less frequently, and were slower at detecting deception. These results suggest that people with ASD readily engage in deception but may do so through conscious and effortful reasoning about other people’s perspectives.
  • Van Paridon, J., & Thompson, B. (2021). subs2vec: Word embeddings from subtitles in 55 languages. Behavior Research Methods, 53(2), 629-655. doi:10.3758/s13428-020-01406-3.

    Abstract

    This paper introduces a novel collection of word embeddings, numerical representations of lexical semantics, in 55 languages, trained on a large corpus of pseudo-conversational speech transcriptions from television shows and movies. The embeddings were trained on the OpenSubtitles corpus using the fastText implementation of the skipgram algorithm. Performance comparable with (and in some cases exceeding) embeddings trained on non-conversational (Wikipedia) text is reported on standard benchmark evaluation datasets. A novel evaluation method of particular relevance to psycholinguists is also introduced: prediction of experimental lexical norms in multiple languages. The models, as well as code for reproducing the models and all analyses reported in this paper (implemented as a user-friendly Python package), are freely available at: https://github.com/jvparidon/subs2vec.

    Additional information

    https://github.com/jvparidon/subs2vec
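    The entry above describes embeddings trained with the fastText skipgram algorithm and released as a Python package. As a rough illustration only (a minimal sketch, not the authors' subs2vec pipeline; the corpus filename and hyperparameters below are placeholders), comparable skipgram vectors can be trained with the official fastText Python bindings:

        import fasttext  # official fastText bindings (pip install fasttext)

        # Train skipgram word embeddings on a plain-text corpus,
        # one (pseudo-)sentence per line; "subtitles.en.txt" is a placeholder path.
        model = fasttext.train_unsupervised(
            "subtitles.en.txt",   # hypothetical preprocessed subtitle corpus
            model="skipgram",     # skipgram objective, as named in the abstract
            dim=300,              # embedding dimensionality (illustrative choice)
        )

        # Inspect a word vector and its nearest neighbours in the trained space.
        print(model.get_word_vector("language")[:5])
        print(model.get_nearest_neighbors("language", k=5))

    If the released vectors follow fastText's plain-text .vec format (as fastText-trained embeddings typically do), they can also be loaded with any tool that reads that format, for example gensim's KeyedVectors.load_word2vec_format.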
  • Van Alphen, P. M., & Van Berkum, J. J. A. (2012). Semantic involvement of initial and final lexical embeddings during sense-making: The advantage of starting late. Frontiers in Psychology, 3, 190. doi:10.3389/fpsyg.2012.00190.

    Abstract

    During spoken language interpretation, listeners rapidly relate the meaning of each individual word to what has been said before. However, spoken words often contain spurious other words, like 'day' in 'daisy', or 'dean' in 'sardine'. Do listeners also relate the meaning of such unintended, spurious words to the prior context? We used ERPs to look for transient meaning-based N400 effects in sentences that were completely plausible at the level of words intended by the speaker, but contained an embedded word whose meaning clashed with the context. Although carrier words with an initial embedding ('day' in 'daisy') did not elicit an embedding-related N400 effect relative to matched control words without embedding, carrier words with a final embedding ('dean' in 'sardine') did elicit such an effect. Together with prior work from our lab and the results of a Shortlist B simulation, our findings suggest that listeners do semantically interpret embedded words, albeit not under all conditions. We explain the latter by assuming that the sense-making system adjusts its hypothesis for how to interpret the external input at every new syllable, in line with recent ideas of active sampling in perception.
  • Van Ackeren, M. J., Casasanto, D., Bekkering, H., Hagoort, P., & Rueschemeyer, S.-A. (2012). Pragmatics in action: Indirect requests engage theory of mind areas and the cortical motor network. Journal of Cognitive Neuroscience, 24, 2237-2247. doi:10.1162/jocn_a_00274.

    Abstract

    Research from the past decade has shown that understanding the meaning of words and utterances (i.e., abstracted symbols) engages the same systems we use to perceive and interact with the physical world in a content-specific manner. For example, understanding the word “grasp” elicits activation in the cortical motor network, that is, part of the neural substrate involved in planning and executing a grasping action. In the embodied literature, cortical motor activation during language comprehension is thought to reflect motor simulation underlying conceptual knowledge [note that outside the embodied framework, other explanations for the link between action and language are offered, e.g., Mahon, B. Z., & Caramazza, A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology, 102, 59–70, 2008; Hagoort, P. On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9, 416–423, 2005]. Previous research has supported the view that the coupling between language and action is flexible, and reading an action-related word form is not sufficient for cortical motor activation [Van Dam, W. O., van Dijk, M., Bekkering, H., & Rueschemeyer, S.-A. Flexibility in embodied lexical–semantic representations. Human Brain Mapping, doi: 10.1002/hbm.21365, 2011]. The current study goes one step further by addressing the necessity of action-related word forms for motor activation during language comprehension. Subjects listened to indirect requests (IRs) for action during an fMRI session. IRs for action are speech acts in which access to an action concept is required, although it is not explicitly encoded in the language. For example, the utterance “It is hot here!” in a room with a window is likely to be interpreted as a request to open the window. However, the same utterance in a desert will be interpreted as a statement. The results indicate (1) that comprehension of IR sentences activates cortical motor areas reliably more than comprehension of sentences devoid of any implicit motor information. This is true despite the fact that IR sentences contain no lexical reference to action. (2) Comprehension of IR sentences also reliably activates substantial portions of the theory of mind network, known to be involved in making inferences about mental states of others. The implications of these findings for embodied theories of language are discussed.
  • Van de Ven, M., Ernestus, M., & Schreuder, R. (2012). Predicting acoustically reduced words in spontaneous speech: The role of semantic/syntactic and acoustic cues in context. Laboratory Phonology, 3, 455-481. doi:10.1515/lp-2012-0020.

    Abstract

    In spontaneous speech, words may be realised shorter than in formal speech (e.g., English yesterday may be pronounced like [jɛʃeɩ]). Previous research has shown that context is required to understand highly reduced pronunciation variants. We investigated the extent to which listeners can predict low predictability reduced words on the basis of the semantic/syntactic and acoustic cues in their context. In four experiments, participants were presented with either the preceding context or the preceding and following context of reduced words, and either heard these fragments of conversational speech, or read their orthographic transcriptions. Participants were asked to predict the missing reduced word on the basis of the context alone, choosing from four plausible options. Participants made use of acoustic cues in the context, although casual speech typically has a high speech rate, and acoustic cues are much more unclear than in careful speech. Moreover, they relied on semantic/syntactic cues. Whenever there was a conflict between acoustic and semantic/syntactic contextual cues, measured as the word's probability given the surrounding words, listeners relied more heavily on acoustic cues. Further, context appeared generally insufficient to predict the reduced words, underpinning the significance of the acoustic characteristics of the reduced words themselves.
  • Van Berkum, J. J. A. (2012). Zonder gevoel geen taal. Neerlandistiek.nl. Wetenschappelijk tijdschrift voor de Nederlandse taal- en letterkunde, 12(01).

    Abstract

    Illustrated republication of the inaugural lecture delivered on 30 September 2011 at Utrecht University upon accepting the chair Discourse, cognitie en communicatie. Unlike the original lecture text, this republication also contains various illustrations and links. In addition, two accompanying articles offer responses from colleagues in the field (see http://www.neerlandistiek.nl/12.01a/ and http://www.neerlandistiek.nl/12.01b/).
  • Varola*, M., Verga*, L., Sroka, M., Villanueva, S., Charrier, I., & Ravignani, A. (2021). Can harbor seals (Phoca vitulina) discriminate familiar conspecific calls after long periods of separation? PeerJ, 9: e12431. doi:10.7717/peerj.12431.

    Abstract

    * indicates joint first authorship.
    The ability to discriminate between familiar and unfamiliar calls may play a key role in pinnipeds’ communication and survival, as in the case of mother-pup interactions. Vocal discrimination abilities have been suggested to be more developed in pinniped species with the highest selective pressure such as the otariids; yet, in some group-living phocids, such as harbor seals (Phoca vitulina), mothers are also able to recognize their pup’s voice. Conspecifics’ vocal recognition in pups has never been investigated; however, the repeated interaction occurring between pups within the breeding season suggests that long-term vocal discrimination may occur. Here we explored this hypothesis by presenting three rehabilitated seal pups with playbacks of vocalizations from unfamiliar or familiar pups. It is uncommon for seals to come into rehabilitation for a second time in their lifespan, and this study took advantage of these rare cases. A simple visual inspection of the data plots seemed to show more reactions, and of longer duration, in response to familiar as compared to unfamiliar playbacks in two out of three pups. However, statistical analyses revealed no significant difference between the experimental conditions. We also found no significant asymmetry in orientation (left vs. right) towards familiar and unfamiliar sounds. While statistics do not support the hypothesis of an established ability to discriminate familiar vocalizations from unfamiliar ones in harbor seal pups, further investigations with a larger sample size are needed to confirm or refute this hypothesis.

    Additional information

    dataset
  • Vega-Mendoza, M., Pickering, M. J., & Nieuwland, M. S. (2021). Concurrent use of animacy and event-knowledge during comprehension: Evidence from event-related potentials. Neuropsychologia, 152: 107724. doi:10.1016/j.neuropsychologia.2020.107724.

    Abstract

    In two ERP experiments, we investigated whether readers prioritize animacy over real-world event-knowledge during sentence comprehension. We used the paradigm of Paczynski and Kuperberg (2012), who argued that animacy is prioritized based on the observations that the ‘related anomaly effect’ (reduced N400s for context-related anomalous words compared to unrelated words) does not occur for animacy violations, and that animacy violations but not relatedness violations elicit P600 effects. Participants read passive sentences with plausible agents (e.g., The prescription for the mental disorder was written by the psychiatrist) or implausible agents that varied in animacy and semantic relatedness (schizophrenic/guard/pill/fence). In Experiment 1 (with a plausibility judgment task), plausible sentences elicited smaller N400s relative to all types of implausible sentences. Crucially, animate words elicited smaller N400s than inanimate words, and related words elicited smaller N400s than unrelated words, but Bayesian analysis revealed substantial evidence against an interaction between animacy and relatedness. Moreover, at the P600 time-window, we observed more positive ERPs for animate than inanimate words and for related than unrelated words at anterior regions. In Experiment 2 (without judgment task), we observed an N400 effect with animacy violations, but no other effects. Taken together, the results of our experiments fail to support a prioritized role of animacy information over real-world event-knowledge, but they support an interactive, constraint-based view on incremental semantic processing.
  • Verdonschot, R. G., Han, J.-I., & Kinoshita, S. (2021). The proximate unit in Korean speech production: Phoneme or syllable? Quarterly Journal of Experimental Psychology, 74, 187-198. doi:10.1177/1747021820950239.

    Abstract

    We investigated the “proximate unit” in Korean, that is, the initial phonological unit selected in speech production by Korean speakers. Previous studies have shown mixed evidence indicating either a phoneme-sized or a syllable-sized unit. We conducted two experiments in which participants named pictures while ignoring superimposed non-words. In English, for this task, when the picture (e.g., dog) and distractor phonology (e.g., dark) initially overlap, typically the picture target is named faster. We used a range of conditions (in Korean) varying from onset overlap to syllabic overlap, and the results indicated an important role for the syllable, but not the phoneme. We suggest that the basic unit used in phonological encoding in Korean is different from Germanic languages such as English and Dutch and also from Japanese and possibly also Chinese. Models dealing with the architecture of language production can use these results when providing a framework suitable for all languages in the world, including Korean.
  • Verdonschot, R. G., Middelburg, R., Lensink, S. E., & Schiller, N. O. (2012). Morphological priming survives a language switch. Cognition, 124(3), 343-349. doi:10.1016/j.cognition.2012.05.019.

    Abstract

    In a long-lag morphological priming experiment, Dutch (L1)-English (L2) bilinguals were asked to name pictures and read aloud words. A design using non-switch blocks, consisting solely of Dutch stimuli, and switch blocks, consisting of Dutch primes and targets with intervening English trials, was administered. Target picture naming was facilitated by morphologically related primes in both non-switch and switch blocks with equal magnitude. These results contrast with some assumptions of sustained reactive inhibition models. However, models that do not assume bilinguals having to reactively suppress all activation of the non-target language can account for these data.
  • Verga, L., & Ravignani, A. (2021). Strange seal sounds: Claps, slaps, and multimodal pinniped rhythms. Frontiers in Ecology and Evolution, 9: 644497. doi:10.3389/fevo.2021.644497.
  • Verga, L., Schwartze, M., Stapert, S., Winkens, I., & Kotz, S. A. (2021). Dysfunctional timing in traumatic brain injury patients: Co-occurrence of cognitive, motor, and perceptual deficits. Frontiers in Psychology, 12: 731898. doi:10.3389/fpsyg.2021.731898.

    Abstract

    Timing is an essential part of human cognition and of everyday life activities, such as walking or holding a conversation. Previous studies showed that traumatic brain injury (TBI) often affects cognitive functions such as processing speed and time-sensitive abilities, causing long-term sequelae as well as daily impairments. However, the existing evidence on timing capacities in TBI is mostly limited to perception and the processing of isolated intervals. It is therefore open whether the observed deficits extend to motor timing and to continuous dynamic tasks that more closely match daily life activities. The current study set out to answer these questions by assessing audio motor timing abilities and their relationship with cognitive functioning in a group of TBI patients (n=15) and healthy matched controls. We employed a comprehensive set of tasks aiming at testing timing abilities across perception and production and from single intervals to continuous auditory sequences. In line with previous research, we report functional impairments in TBI patients concerning cognitive processing speed and perceptual timing. Critically, these deficits extended to motor timing: The ability to adjust to tempo changes in an auditory pacing sequence was impaired in TBI patients, and this motor timing deficit covaried with measures of processing speed. These findings confirm previous evidence on perceptual and cognitive timing deficits resulting from TBI and provide first evidence for comparable deficits in motor behavior. This suggests basic co-occurring perceptual and motor timing impairments that may factor into a wide range of daily activities. Our results thus place TBI into the wider range of pathologies with well-documented timing deficits (such as Parkinson’s disease) and encourage the search for novel timing-based therapeutic interventions (e.g., employing dynamic and/or musical stimuli) with high transfer potential to everyday life activities.

    Additional information

    supplementary material
  • Verhoef, T., & Ravignani, A. (2021). Melodic universals emerge or are sustained through cultural evolution. Frontiers in Psychology, 12: 668300. doi:10.3389/fpsyg.2021.668300.

    Abstract

    To understand why music is structured the way it is, we need an explanation that accounts for both the universality and variability found in musical traditions. Here we test whether statistical universals that have been identified for melodic structures in music can emerge as a result of cultural adaptation to human biases through iterated learning. We use data from an experiment in which artificial whistled systems, where sounds were produced with a slide whistle, were learned by human participants and transmitted multiple times from person to person. These sets of whistled signals needed to be memorized and recalled and the reproductions of one participant were used as the input set for the next. We tested for the emergence of seven different melodic features, such as discrete pitches, motivic patterns, or phrase repetition, and found some evidence for the presence of most of these statistical universals. We interpret this as promising evidence that, similarly to rhythmic universals, iterated learning experiments can also unearth melodic statistical universals. More, ideally cross-cultural, experiments are nonetheless needed. Simulating the cultural transmission of artificial proto-musical systems can help unravel the origins of universal tendencies in musical structures.
  • Verhoef, E., Grove, J., Shapland, C. Y., Demontis, D., Burgess, S., Rai, D., Børglum, A. D., & St Pourcain, B. (2021). Discordant associations of educational attainment with ASD and ADHD implicate a polygenic form of pleiotropy. Nature Communications, 12: 6534. doi:10.1038/s41467-021-26755-1.

    Abstract

    Autism Spectrum Disorder (ASD) and Attention-Deficit/Hyperactivity Disorder (ADHD) are complex co-occurring neurodevelopmental conditions. Their genetic architectures reveal striking similarities but also differences, including strong, discordant polygenic associations with educational attainment (EA). To study genetic mechanisms that present as ASD-related positive and ADHD-related negative genetic correlations with EA, we carry out multivariable regression analyses using genome-wide summary statistics (N = 10,610–766,345). Our results show that EA-related genetic variation is shared across ASD and ADHD architectures, involving identical marker alleles. However, the polygenic association profile with EA, across shared marker alleles, is discordant for ASD versus ADHD risk, indicating independent effects. At the single-variant level, our results suggest either biological pleiotropy or co-localisation of different risk variants, implicating MIR19A/19B microRNA mechanisms. At the polygenic level, they point to a polygenic form of pleiotropy that contributes to the detectable genome-wide correlation between ASD and ADHD and is consistent with effect cancellation across EA-related regions.

    Additional information

    supplementary information
  • Verhoef, E., Shapland, C. Y., Fisher, S. E., Dale, P. S., & St Pourcain, B. (2021). The developmental origins of genetic factors influencing language and literacy: Associations with early-childhood vocabulary. Journal of Child Psychology and Psychiatry, 62(6), 728-738. doi:10.1111/jcpp.13327.

    Abstract

    Background

    The heritability of language and literacy skills increases from early‐childhood to adolescence. The underlying mechanisms are little understood and may involve (a) the amplification of genetic influences contributing to early language abilities, and/or (b) the emergence of novel genetic factors (innovation). Here, we investigate the developmental origins of genetic factors influencing mid‐childhood/early‐adolescent language and literacy. We evaluate evidence for the amplification of early‐childhood genetic factors for vocabulary, in addition to genetic innovation processes.
    Methods

    Expressive and receptive vocabulary scores at 38 months, thirteen language- and literacy-related abilities and nonverbal cognition (7–13 years) were assessed in unrelated children from the Avon Longitudinal Study of Parents and Children (ALSPAC, N individuals ≤ 6,092). We investigated the multivariate genetic architecture underlying early-childhood expressive and receptive vocabulary, and each of 14 mid-childhood/early-adolescent language, literacy or cognitive skills with trivariate structural equation (Cholesky) models as captured by genome-wide genetic relationship matrices. The individual path coefficients of the resulting structural models were finally meta-analysed to evaluate evidence for overarching patterns.
    Results

    We observed little support for the emergence of novel genetic sources for language, literacy or cognitive abilities during mid‐childhood or early adolescence. Instead, genetic factors of early‐childhood vocabulary, especially those unique to receptive skills, were amplified and represented the majority of genetic variance underlying many of these later complex skills (≤99%). The most predictive early genetic factor accounted for 29.4%(SE = 12.9%) to 45.1%(SE = 7.6%) of the phenotypic variation in verbal intelligence and literacy skills, but also for 25.7%(SE = 6.4%) in performance intelligence, while explaining only a fraction of the phenotypic variation in receptive vocabulary (3.9%(SE = 1.8%)).
    Conclusions

    Genetic factors contributing to many complex skills during mid‐childhood and early adolescence, including literacy, verbal cognition and nonverbal cognition, originate developmentally in early‐childhood and are captured by receptive vocabulary. This suggests developmental genetic stability and overarching aetiological mechanisms.

    Additional information

    supporting information
  • Verhoef, E., Shapland, C. Y., Fisher, S. E., Dale, P. S., & St Pourcain, B. (2021). The developmental genetic architecture of vocabulary skills during the first three years of life: Capturing emerging associations with later-life reading and cognition. PLoS Genetics, 17(2): e1009144. doi:10.1371/journal.pgen.1009144.

    Abstract

    Individual differences in early-life vocabulary measures are heritable and associated with subsequent reading and cognitive abilities, although the underlying mechanisms are little understood. Here, we (i) investigate the developmental genetic architecture of expressive and receptive vocabulary in early-life and (ii) assess timing of emerging genetic associations with mid-childhood verbal and non-verbal skills. We studied longitudinally assessed early-life vocabulary measures (15–38 months) and later-life verbal and non-verbal skills (7–8 years) in up to 6,524 unrelated children from the population-based Avon Longitudinal Study of Parents and Children (ALSPAC) cohort. We dissected the phenotypic variance of rank-transformed scores into genetic and residual components by fitting multivariate structural equation models to genome-wide genetic-relationship matrices. Our findings show that the genetic architecture of early-life vocabulary involves multiple distinct genetic factors. Two of these genetic factors are developmentally stable and also contribute to genetic variation in mid-childhood skills: One genetic factor emerging with expressive vocabulary at 24 months (path coefficient: 0.32(SE = 0.06)) was also related to later-life reading (path coefficient: 0.25(SE = 0.12)) and verbal intelligence (path coefficient: 0.42(SE = 0.13)), explaining up to 17.9% of the phenotypic variation. A second, independent genetic factor emerging with receptive vocabulary at 38 months (path coefficient: 0.15(SE = 0.07)), was more generally linked to verbal and non-verbal cognitive abilities in mid-childhood (reading path coefficient: 0.57(SE = 0.07); verbal intelligence path coefficient: 0.60(0.10); performance intelligence path coefficient: 0.50(SE = 0.08)), accounting for up to 36.1% of the phenotypic variation and the majority of genetic variance in these later-life traits (≥66.4%). Thus, the genetic foundations of mid-childhood reading and cognitive abilities are diverse. They involve at least two independent genetic factors that emerge at different developmental stages during early language development and may implicate differences in cognitive processes that are already detectable during toddlerhood.

    Additional information

    supporting information
  • Vernes, S. C., Kriengwatana, B. P., Beeck, V. C., Fischer, J., Tyack, P. L., Ten Cate, C., & Janik, V. M. (2021). The multi-dimensional nature of vocal learning. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200236. doi:10.1098/rstb.2020.0236.

    Abstract

    How learning affects vocalizations is a key question in the study of animal communication and human language. Parallel efforts in birds and humans have taught us much about how vocal learning works on a behavioural and neurobiological level. Subsequent efforts have revealed a variety of cases among mammals in which experience also has a major influence on vocal repertoires. Janik and Slater (Anim. Behav. 60, 1–11. doi:10.1006/anbe.2000.1410) introduced the distinction between vocal usage and production learning, providing a general framework to categorize how different types of learning influence vocalizations. This idea was built on by Petkov and Jarvis (Front. Evol. Neurosci. 4, 12. doi:10.3389/fnevo.2012.00012) to emphasize a more continuous distribution between limited and more complex vocal production learners. Yet, with more studies providing empirical data, the limits of the initial frameworks become apparent. We build on these frameworks to refine the categorization of vocal learning in light of advances made since their publication and widespread agreement that vocal learning is not a binary trait. We propose a novel classification system, based on the definitions by Janik and Slater, that deconstructs vocal learning into key dimensions to aid in understanding the mechanisms involved in this complex behaviour. We consider how vocalizations can change without learning, and a usage learning framework that considers context specificity and timing. We identify dimensions of vocal production learning, including the copying of auditory models (convergence/divergence on model sounds, accuracy of copying), the degree of change (type and breadth of learning) and timing (when learning takes place, the length of time it takes and how long it is retained). We consider grey areas of classification and current mechanistic understanding of these behaviours. Our framework identifies research needs and will help to inform neurobiological and evolutionary studies endeavouring to uncover the multi-dimensional nature of vocal learning.

    This article is part of the theme issue ‘Vocal learning in animals and humans’.
  • Vernes, S. C., Janik, V. M., Fitch, W. T., & Slater, P. J. B. (Eds.). (2021). Vocal learning in animals and humans [Special Issue]. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376.
  • Vernes, S. C., Janik, V. M., Fitch, W. T., & Slater, P. J. B. (2021). Vocal learning in animals and humans. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200234. doi:10.1098/rstb.2020.0234.
  • von Stutterheim, C., Andermann, M., Carroll, M., Flecken, M., & Schmiedtova, B. (2012). How grammaticized concepts shape event conceptualization in language production: Insights from linguistic analysis, eye tracking data, and memory performance. Linguistics, 50(4), 833-867. doi:10.1515/ling-2012-0026.

    Abstract

    The role of grammatical systems in profiling particular conceptual categories is used as a key in exploring questions concerning language specificity during the conceptualization phase in language production. This study focuses on the extent to which crosslinguistic differences in the concepts profiled by grammatical means in the domain of temporality (grammatical aspect) affect event conceptualization and distribution of attention when talking about motion events. The analyses, which cover native speakers of Standard Arabic, Czech, Dutch, English, German, Russian and Spanish, not only involve linguistic evidence, but also data from an eye tracking experiment and a memory test. The findings show that direction of attention to particular parts of motion events varies to some extent with the existence of grammaticized means to express imperfective/progressive aspect. Speakers of languages that do not have grammaticized aspect of this type are more likely to take a holistic view when talking about motion events and attend to as well as refer to endpoints of motion events, in contrast to speakers of aspect languages.

  • Von Holzen, K., & Bergmann, C. (2021). The development of infants’ responses to mispronunciations: A meta-analysis. Developmental Psychology, 57(1), 1-18. doi:10.1037/dev0001141.

    Abstract

    As they develop into mature speakers of their native language, infants must not only learn words but also the sounds that make up those words. To do so, they must strike a balance between accepting speaker dependent variation (e.g. mood, voice, accent), but appropriately rejecting variation when it (potentially) changes a word's meaning (e.g. cat vs. hat). This meta-analysis focuses on studies investigating infants' ability to detect mispronunciations in familiar words, or mispronunciation sensitivity. Our goal was to evaluate the development of infants' phonological representations for familiar words as well as explore the role of experimental manipulations related to theoretical questions and analysis choices. The results show that although infants are sensitive to mispronunciations, they still accept these altered forms as labels for target objects. Interestingly, this ability is not modulated by age or vocabulary size, suggesting that a mature understanding of native language phonology may be present in infants from an early age, possibly before the vocabulary explosion. These results also support several theoretical assumptions made in the literature, such as sensitivity to mispronunciation size and position of the mispronunciation. We also shed light on the impact of data analysis choices that may lead to different conclusions regarding the development of infants' mispronunciation sensitivity. Our paper concludes with recommendations for improved practice in testing infants' word and sentence processing on-line.
  • De Vos, C., & Palfreyman, N. (2012). [Review of the book Deaf around the World: The impact of language / ed. by Mathur & Napoli]. Journal of Linguistics, 48, 731 -735.

    Abstract

    First paragraph. Since its advent half a century ago, the field of sign language linguistics has had close ties to education and the empowerment of deaf communities, a union that is fittingly celebrated by Deaf around the World: The impact of language. With this fruitful relationship in mind, sign language researchers and deaf educators gathered in Philadelphia in 2008, and in the volume under review, Gaurav Mathur & Donna Jo Napoli (henceforth M&N) present a selection of papers from this conference, organised in two parts: ‘Sign languages: Creation, context, form’, and ‘Social issues/civil rights’. Each of the chapters is accompanied by a response chapter on the same or a related topic. The first part of the volume focuses on the linguistics of sign languages and includes papers on the impact of language modality on morphosyntax, second language acquisition, and grammaticalisation, highlighting the fine balance that sign linguists need to strike when conducting methodologically sound research. The second part of the book includes accounts by deaf activists from countries including China, India, Japan, Kenya, South Africa and Sweden who are considered prominent figures in areas such as deaf education, politics, culture and international development.
  • De Vries, M. H., Petersson, K. M., Geukes, S., Zwitserlood, P., & Christiansen, M. H. (2012). Processing multiple non-adjacent dependencies: Evidence from sequence learning. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 2065-2076. doi:10.1098/rstb.2011.0414.

    Abstract

    Processing non-adjacent dependencies is considered to be one of the hallmarks of human language. Assuming that sequence-learning tasks provide a useful way to tap natural-language-processing mechanisms, we cross-modally combined serial reaction time and artificial-grammar learning paradigms to investigate the processing of multiple nested (A1A2A3B3B2B1) and crossed dependencies (A1A2A3B1B2B3), containing either three or two dependencies. Both reaction times and prediction errors highlighted problems with processing the middle dependency in nested structures (A1A2A3B3_B1), reminiscent of the ‘missing-verb effect’ observed in English and French, but not with crossed structures (A1A2A3B1_B3). Prior linguistic experience did not play a major role: native speakers of German and Dutch—which permit nested and crossed dependencies, respectively—showed a similar pattern of results for sequences with three dependencies. As for sequences with two dependencies, reaction times and prediction errors were similar for both nested and crossed dependencies. The results suggest that constraints on the processing of multiple non-adjacent dependencies are determined by the specific ordering of the non-adjacent dependencies (i.e. nested or crossed), as well as the number of non-adjacent dependencies to be resolved (i.e. two or three). Furthermore, these constraints may not be specific to language but instead derive from limitations on structured sequence learning.
  • Wagensveld, B., Segers, E., Van Alphen, P. M., Hagoort, P., & Verhoeven, L. (2012). A neurocognitive perspective on rhyme awareness: The N450 rhyme effect. Brain Research, 1483, 63-70. doi:10.1016/j.brainres.2012.09.018.

    Abstract

    Rhyme processing is reflected in the electrophysiological signals of the brain as a negative deflection for non-rhyming as compared to rhyming stimuli around 450 ms after stimulus onset. Studies have shown that this N450 component is not solely sensitive to rhyme but also responds to other types of phonological overlap. In the present study, we examined whether the N450 component can be used to gain insight into the global similarity effect, indicating that rhyme judgment skills decrease when participants are presented with word pairs that share a phonological overlap but do not rhyme (e.g., bell–ball). We presented 20 adults with auditory rhyming, globally similar overlapping and unrelated word pairs. In addition to measuring behavioral responses by means of a yes/no button press, we also took EEG measures. The behavioral data showed a clear global similarity effect; participants judged overlapping pairs more slowly than unrelated pairs. However, the neural outcomes did not provide evidence that the N450 effect responds differentially to globally similar and unrelated word pairs, suggesting that globally similar and dissimilar non-rhyming pairs are processed in a similar fashion at the stage of early lexical access.
  • Wagensveld, B., Van Alphen, P. M., Segers, E., & Verhoeven, L. (2012). The nature of rhyme processing in preliterate children. British Journal of Educational Psychology, 82, 672-689. doi:10.1111/j.2044-8279.2011.02055.x.

    Abstract

    Background

    Rhyme awareness is one of the earliest forms of phonological awareness to develop and is assessed in many developmental studies by means of a simple rhyme task. The influence of more demanding experimental paradigms on rhyme judgment performance is often neglected. Addressing this issue may also shed light on whether rhyme processing is more global or analytical in nature.
    Aims

    The aim of the present study was to examine whether lexical status and global similarity relations influenced rhyme judgments in kindergarten children and, if so, whether there is an interaction between these two factors.
    Sample

    Participants were 41 monolingual Dutch-speaking preliterate kindergartners (average age 6.0 years) who had not yet received any formal reading education.
    Method

    To examine the effects of lexical status and phonological similarity processing, the kindergartners were asked to make rhyme judgements on (pseudo) word targets that rhymed, phonologically overlapped or were unrelated to (pseudo) word primes.
    Results

    Both a lexicality effect (pseudo-words were more difficult than words) and a global similarity effect (globally similar non-rhyming items were more difficult to reject than unrelated items) were observed. In addition, whereas in words the global similarity effect was only present in accuracy outcomes, in pseudo-words it was also observed in the response latencies. Furthermore, a large global similarity effect in pseudo-words correlated with a low score on short-term memory skills and grapheme knowledge.
    Conclusions

    Increasing task demands led to a more detailed assessment of rhyme processing skills. Current assessment paradigms should therefore be extended with more demanding conditions. In light of the views on rhyme processing, we propose that a combination of global and analytical strategies is used to make a correct rhyme judgment.
  • Wagner, M. A., Broersma, M., McQueen, J. M., Dhaene, S., & Lemhöfer, K. (2021). Phonetic convergence to non-native speech: Acoustic and perceptual evidence. Journal of Phonetics, 88: 101076. doi:10.1016/j.wocn.2021.101076.

    Abstract

    While the tendency of speakers to align their speech to that of others acoustic-phonetically has been widely studied among native speakers, very few studies have examined whether natives phonetically converge to non-native speakers. Here we measured native Dutch speakers’ convergence to a non-native speaker with an unfamiliar accent in a novel non-interactive task. Furthermore, we assessed the role of participants’ perceptions of the non-native accent in their tendency to converge. In addition to a perceptual measure (AXB ratings), we examined convergence on different acoustic dimensions (e.g., vowel spectra, fricative CoG, speech rate, overall f0) to determine what dimensions, if any, speakers converge to. We further combined these two types of measures to discover what dimensions weighed in raters’ judgments of convergence. The results reveal overall convergence to our non-native speaker, as indexed by both perceptual and acoustic measures. However, the ratings suggest the stronger participants rated the non-native accent to be, the less likely they were to converge. Our findings add to the growing body of evidence that natives can phonetically converge to non-native speech, even without any apparent socio-communicative motivation to do so. We argue that our results are hard to integrate with a purely social view of convergence.
  • Walker, R. M., Hill, A. E., Newman, A. C., Hamilton, G., Torrance, H. S., Anderson, S. M., Ogawa, F., Derizioti, P., Nicod, J., Vernes, S. C., Fisher, S. E., Thomson, P. A., Porteous, D. J., & Evans, K. L. (2012). The DISC1 promoter: Characterization and regulation by FOXP2. Human Molecular Genetics, 21, 2862-2872. doi:10.1093/hmg/dds111.

    Abstract

    Disrupted in schizophrenia 1 (DISC1) is a leading candidate susceptibility gene for schizophrenia, bipolar disorder, and recurrent major depression, which has been implicated in other psychiatric illnesses of neurodevelopmental origin, including autism. DISC1 was initially identified at the breakpoint of a balanced chromosomal translocation, t(1;11) (q42.1;14.3), in a family with a high incidence of psychiatric illness. Carriers of the translocation show a 50% reduction in DISC1 protein levels, suggesting altered DISC1 expression as a pathogenic mechanism in psychiatric illness. Altered DISC1 expression in the post-mortem brains of individuals with psychiatric illness and the frequent implication of non-coding regions of the gene by association analysis further support this assertion. Here, we provide the first characterisation of the DISC1 promoter region. Using dual luciferase assays, we demonstrate that a region -300bp to -177bp relative to the transcription start site (TSS) contributes positively to DISC1 promoter activity, whilst a region -982bp to -301bp relative to the TSS confers a repressive effect. We further demonstrate inhibition of DISC1 promoter activity and protein expression by FOXP2, a transcription factor implicated in speech and language function. This inhibition is diminished by two distinct FOXP2 point mutations, R553H and R328X, which were previously found in families affected by developmental verbal dyspraxia (DVD). Our work identifies an intriguing mechanistic link between neurodevelopmental disorders that have traditionally been viewed as diagnostically distinct but which do share varying degrees of phenotypic overlap.
  • Wang, L., Jensen, O., Van den Brink, D., Weder, N., Schoffelen, J.-M., Magyari, L., Hagoort, P., & Bastiaansen, M. C. M. (2012). Beta oscillations relate to the N400m during language comprehension. Human Brain Mapping, 33, 2898-2912. doi:10.1002/hbm.21410.

    Abstract

    The relationship between the evoked responses (ERPs/ERFs) and the event-related changes in EEG/MEG power that can be observed during sentence-level language comprehension is as yet unclear. This study addresses a possible relationship between MEG power changes and the N400m component of the event-related field. Whole-head MEG was recorded while subjects listened to spoken sentences with incongruent (IC) or congruent (C) sentence endings. A clear N400m was observed over the left hemisphere, and was larger for the IC sentences than for the C sentences. A time–frequency analysis of power revealed a decrease in alpha and beta power over the left hemisphere in roughly the same time range as the N400m for the IC relative to the C condition. A linear regression analysis revealed a positive linear relationship between N400m and beta power for the IC condition, not for the C condition. No such linear relation was found between N400m and alpha power for either condition. The sources of the beta decrease were estimated in the LIFG, a region known to be involved in semantic unification operations. One source of the N400m was estimated in the left superior temporal region, which has been related to lexical retrieval. We interpret our data within a framework in which beta oscillations are inversely related to the engagement of task-relevant brain networks. The source reconstructions of the beta power suppression and the N400m effect support the notion of a dynamic communication between the LIFG and the left superior temporal region during language comprehension.
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2012). Information structure influences depth of syntactic processing: Event-related potential evidence for the Chomsky illusion. PLoS One, 7(10), e47917. doi:10.1371/journal.pone.0047917.

    Abstract

    Information structure facilitates communication between interlocutors by highlighting relevant information. It has previously been shown that information structure modulates the depth of semantic processing. Here we used event-related potentials to investigate whether information structure can modulate the depth of syntactic processing. In question-answer pairs, subtle (number agreement) or salient (phrase structure) syntactic violations were placed either in focus or out of focus through information structure marking. P600 effects to these violations reflect the depth of syntactic processing. For subtle violations, a P600 effect was observed in the focus condition, but not in the non-focus condition. For salient violations, comparable P600 effects were found in both conditions. These results indicate that information structure can modulate the depth of syntactic processing, but that this effect depends on the salience of the information. When subtle violations are not in focus, they are processed less elaborately. We label this phenomenon the Chomsky illusion.
  • Wang, L., Zhu, Z., & Bastiaansen, M. C. M. (2012). Integration or predictability? A further specification of the functional role of gamma oscillations in language comprehension. Frontiers in Psychology, 3, 187. doi:10.3389/fpsyg.2012.00187.

    Abstract

    Gamma-band neuronal synchronization during sentence-level language comprehension has previously been linked with semantic unification. Here, we attempt to further narrow down the functional significance of gamma during language comprehension, by distinguishing between two aspects of semantic unification: successful integration of word meaning into the sentence context, and prediction of upcoming words. We computed event-related potentials (ERPs) and frequency band-specific electroencephalographic (EEG) power changes while participants read sentences that contained a critical word (CW) that was (1) both semantically congruent and predictable (high cloze, HC), (2) semantically congruent but unpredictable (low cloze, LC), or (3) semantically incongruent (and therefore also unpredictable; semantic violation, SV). The ERP analysis showed the expected parametric N400 modulation (HC < LC < SV). The time-frequency analysis showed qualitatively different results. In the gamma-frequency range, we observed a power increase in response to the CW in the HC condition, but not in the LC and the SV conditions. Additionally, in the theta frequency range we observed a power increase in the SV condition only. Our data provide evidence that gamma power increases are related to the predictability of an upcoming word based on the preceding sentence context, rather than to the integration of the incoming word’s semantics into the preceding context. Further, our theta band data are compatible with the notion that theta band synchronization in sentence comprehension might be related to the detection of an error in the language input.
  • Weber, A., & Scharenborg, O. (2012). Models of spoken-word recognition. Wiley Interdisciplinary Reviews: Cognitive Science, 3, 387-401. doi:10.1002/wcs.1178.

    Abstract

    All words of the languages we know are stored in the mental lexicon. Psycholinguistic models describe in which format lexical knowledge is stored and how it is accessed when needed for language use. The present article summarizes key findings in spoken-word recognition by humans and describes how models of spoken-word recognition account for them. Although current models of spoken-word recognition differ considerably in the details of implementation, there is general consensus among them on at least three aspects: multiple word candidates are activated in parallel as a word is being heard, activation of word candidates varies with the degree of match between the speech signal and stored lexical representations, and activated candidate words compete for recognition. No consensus has been reached on other aspects such as the flow of information between different processing levels, and the format of stored prelexical and lexical representations.
  • Weber, A., & Crocker, M. W. (2012). On the nature of semantic constraints on lexical access. Journal of Psycholinguistic Research, 41, 195-214. doi:10.1007/s10936-011-9184-0.

    Abstract

    We present two eye-tracking experiments that investigate lexical frequency and semantic context constraints in spoken-word recognition in German. In both experiments, the pivotal words were pairs of nouns overlapping at onset but varying in lexical frequency. In Experiment 1, German listeners showed an expected frequency bias towards high-frequency competitors (e.g., Blume, ‘flower’) when instructed to click on low-frequency targets (e.g., Bluse, ‘blouse’). In Experiment 2, semantically constraining context increased the availability of appropriate low-frequency target words prior to word onset, but did not influence the availability of semantically inappropriate high-frequency competitors at the same time. Immediately after target word onset, however, the activation of high-frequency competitors was reduced in semantically constraining sentences, but still exceeded that of unrelated distractor words significantly. The results suggest that (1) semantic context acts to downgrade activation of inappropriate competitors rather than to exclude them from competition, and (2) semantic context influences spoken-word recognition, over and above anticipation of upcoming referents.
  • Weterman, M. A. J., Wilbrink, M. J. M., Janssen, I. M., Janssen, H. A. P., Berg, E. v. d., Fisher, S. E., Craig, I., & Geurts van Kessel, A. H. M. (1996). Molecular cloning of the papillary renal cell carcinoma-associated translocation (X;1)(p11;q21) breakpoint. Cytogenetic and genome research, 75(1), 2-6. doi:10.1159/000134444.

    Abstract

    A combination of Southern blot analysis on a panel of tumor-derived somatic cell hybrids and fluorescence in situ hybridization techniques was used to map YACs, cosmids and DNA markers from the Xp11.2 region relative to the X chromosome breakpoint of the renal cell carcinoma-associated t(X;1)(p11;q21). The position of the breakpoint could be determined as follows: Xcen-OATL2-DXS146-DXS255-SYP-t(X;1)-TFE3-OATL1-Xpter. Fluorescence in situ hybridization experiments using TFE3-containing YACs and cosmids revealed split signals indicating that the corresponding DNA inserts span the breakpoint region. Subsequent Southern blot analysis showed that a 2.3-kb EcoRI fragment, which is present in all TFE3 cosmids identified, hybridizes to aberrant restriction fragments in three independent t(X;1)-positive renal cell carcinoma DNAs. The breakpoints in these tumors are not the same, but map within a region of approximately 6.5 kb. Through preparative gel electrophoresis, an (X;1) chimaeric 4.4-kb EcoRI fragment could be isolated which encompasses the breakpoint region present on der(X). Preliminary characterization of this fragment revealed the presence of a 150-bp region with a strong homology to the 5' end of the mouse TFE3 cDNA in the X-chromosome part, and a 48-bp segment in the chromosome 1-derived part identical to the 5' end of a known EST (accession number R93849). These observations suggest that a fusion gene is formed between the two corresponding genes in t(X;1)(p11;q21)-positive papillary renal cell carcinomas.
  • Whitehouse, A. J., Bishop, D. V., Ang, Q., Pennell, C. E., & Fisher, S. E. (2012). Corrigendum to CNTNAP2 variants affect early language development in the general population. Genes, Brain and Behavior, 11, 501. doi:10.1111/j.1601-183X.2012.00806.x.

    Abstract

    Corrigendum to: Whitehouse, A. J. O., Bishop, D. V. M., Ang, Q. W., Pennell, C. E., & Fisher, S. E. (2011). CNTNAP2 variants affect early language development in the general population. Genes Brain Behav, 10, 451–456. doi: 10.1111/j.1601-183X.2011.00684.x. The authors have detected a typographical error in the Abstract of this paper. The error is in the fifth sentence, which reads: “On the basis of these findings, we performed analyses of four-marker haplotypes of rs2710102–rs759178–rs17236239–rs2538976 and identified significant association (haplotype TTAA, P = 0.049; haplotype GCAG, P = 0.0014).” Rather than “GCAG”, the final haplotype should read “CGAG”. This typographical error was made in the Abstract only and has no bearing on the results or conclusions of the study, which remain unchanged.
  • Whitehouse, H., & Cohen, E. (2012). Seeking a rapprochement between anthropology and the cognitive sciences: A problem-driven approach. Topics in Cognitive Science, 4, 404-412. doi:10.1111/j.1756-8765.2012.01203.x.

    Abstract

    Beller, Bender, and Medin question the necessity of including social anthropology within the cognitive sciences. We argue that there is great scope for fruitful rapprochement while agreeing that there are obstacles (even if we might wish to debate some of those specifically identified by Beller and colleagues). We frame the general problem differently, however: not in terms of the problem of reconciling disciplines and research cultures, but rather in terms of the prospects for collaborative deployment of expertise (methodological and theoretical) in problem-driven research. For the purposes of illustration, our focus in this article is on the evolution of cooperation
  • Wilkinson, G. S., Adams, D. M., Haghani, A., Lu, A. T., Zoller, J., Breeze, C. E., Arnold, B. D., Ball, H. C., Carter, G. G., Cooper, L. N., Dechmann, D. K. N., Devanna, P., Fasel, N. J., Galazyuk, A. V., Günther, L., Hurme, E., Jones, G., Knörnschild, M., Lattenkamp, E. Z., Li, C. Z., Mayer, F., Reinhardt, J. A., Medellin, R. A., Nagy, M., Pope, B., Power, M. L., Ransome, R. D., Teeling, E. C., Vernes, S. C., Zamora-Mejías, D., Zhang, J., Faure, P. A., Greville, L. J., Herrera M., L. G., Flores-Martínez, J. J., & Horvath, S. (2021). DNA methylation predicts age and provides insight into exceptional longevity of bats. Nature Communications, 12: 1615. doi:10.1038/s41467-021-21900-2.

    Abstract

    Exceptionally long-lived species, including many bats, rarely show overt signs of aging, making it difficult to determine why species differ in lifespan. Here, we use DNA methylation (DNAm) profiles from 712 known-age bats, representing 26 species, to identify epigenetic changes associated with age and longevity. We demonstrate that DNAm accurately predicts chronological age. Across species, longevity is negatively associated with the rate of DNAm change at age-associated sites. Furthermore, analysis of several bat genomes reveals that hypermethylated age- and longevity-associated sites are disproportionately located in promoter regions of key transcription factors (TF) and enriched for histone and chromatin features associated with transcriptional regulation. Predicted TF binding site motifs and enrichment analyses indicate that age-related methylation change is influenced by developmental processes, while longevity-related DNAm change is associated with innate immunity or tumorigenesis genes, suggesting that bat longevity results from augmented immune response and cancer suppression.

    Additional information

    supplementary information
  • Willems, R. M., & Francken, J. C. (2012). Embodied cognition: Taking the next step. Frontiers in Psychology, 3, 582. doi:10.3389/fpsyg.2012.00582.

    Abstract

    Recent years have seen a large number of empirical studies related to ‘embodied cognition’. While interesting and valuable, there is something dissatisfying with the current state of affairs in this research domain. Hypotheses tend to be underspecified, testing in general terms for embodied versus disembodied processing. The lack of specificity of current hypotheses can easily lead to an erosion of the embodiment concept, and result in a situation in which essentially any effect is taken as positive evidence. Such erosion is not helpful to the field and does not do justice to the importance of embodiment. Here we want to take stock, and formulate directions for how it can be studied in a more fruitful fashion. As an example, we will describe a few studies that have investigated the role of sensori-motor systems in the coding of meaning (‘embodied semantics’). Instead of focusing on the dichotomy between embodied and disembodied theories, we suggest that the field move forward and ask how and when sensori-motor systems and behavior are involved in cognition.
  • Willems, R. M., & Peelen, M. V. (2021). How context changes the neural basis of perception and language. iScience, 24(5): 102392. doi:10.1016/j.isci.2021.102392.

    Abstract

    Cognitive processes—from basic sensory analysis to language understanding—are typically contextualized. While the importance of considering context for understanding cognition has long been recognized in psychology and philosophy, it has not yet had much impact on cognitive neuroscience research, where cognition is often studied in decontextualized paradigms. Here, we present examples of recent studies showing that context changes the neural basis of diverse cognitive processes, including perception, attention, memory, and language. Within the domains of perception and language, we review neuroimaging results showing that context interacts with stimulus processing, changes activity in classical perception and language regions, and recruits additional brain regions that contribute crucially to naturalistic perception and language. We discuss how contextualized cognitive neuroscience will allow for discovering new principles of the mind and brain.
  • Woensdregt, M., Cummins, C., & Smith, K. (2021). A computational model of the cultural co-evolution of language and mindreading. Synthese, 199, 1347-1385. doi:10.1007/s11229-020-02798-7.

    Abstract

    Several evolutionary accounts of human social cognition posit that language has co-evolved with the sophisticated mindreading abilities of modern humans. It has also been argued that these mindreading abilities are the product of cultural, rather than biological, evolution. Taken together, these claims suggest that the evolution of language has played an important role in the cultural evolution of human social cognition. Here we present a new computational model which formalises the assumptions that underlie this hypothesis, in order to explore how language and mindreading interact through cultural evolution. This model treats communicative behaviour as an interplay between the context in which communication occurs, an agent’s individual perspective on the world, and the agent’s lexicon. However, each agent’s perspective and lexicon are private mental representations, not directly observable to other agents. Learners are therefore confronted with the task of jointly inferring the lexicon and perspective of their cultural parent, based on their utterances in context. Simulation results show that given these assumptions, an informative lexicon evolves not just under a pressure to be successful at communicating, but also under a pressure for accurate perspective-inference. When such a lexicon evolves, agents become better at inferring others’ perspectives; not because their innate ability to learn about perspectives changes, but because sharing a language (of the right type) with others helps them to do so.
  • Wolf, M. C., Meyer, A. S., Rowland, C. F., & Hintz, F. (2021). The effects of input modality, word difficulty and reading experience on word recognition accuracy. Collabra: Psychology, 7(1): 24919. doi:10.1525/collabra.24919.

    Abstract

    Language users encounter words in at least two different modalities. Arguably, the most frequent encounters are in spoken or written form. Previous research has shown that – compared to the spoken modality – written language features more difficult words. Thus, frequent reading might have effects on word recognition. In the present study, we investigated 1) whether input modality (spoken, written, or bimodal) has an effect on word recognition accuracy, 2) whether this modality effect interacts with word difficulty, 3) whether the interaction of word difficulty and reading experience impacts word recognition accuracy, and 4) whether this interaction is influenced by input modality. To do so, we re-analysed a dataset that was collected in the context of a vocabulary test development to assess in which modality test words should be presented. Participants had carried out a word recognition task, where non-words and words of varying difficulty were presented in auditory, visual and audio-visual modalities. In addition to this main experiment, participants had completed a receptive vocabulary and an author recognition test to measure their reading experience. Our re-analyses did not reveal evidence for an effect of input modality on word recognition accuracy, nor for interactions with word difficulty or language experience. Word difficulty interacted with reading experience in that frequent readers were more accurate in recognizing difficult words than individuals who read less frequently. Practical implications are discussed.
  • Wongratwanich, P., Shimabukuro, K., Konishi, M., Nagasaki, T., Ohtsuka, M., Suei, Y., Nakamoto, T., Verdonschot, R. G., Kanesaki, T., Sutthiprapaporn, P., & Kakimoto, N. (2021). Do various imaging modalities provide potential early detection and diagnosis of medication-related osteonecrosis of the jaw? A review. Dentomaxillofacial Radiology, 50: 20200417. doi:10.1259/dmfr.20200417.

    Abstract

    Objective: Patients with medication-related osteonecrosis of the jaw (MRONJ) often visit their dentists at advanced stages and subsequently require treatments that greatly affect quality of life. Currently, no clear diagnostic criteria exist to assess MRONJ, and the definitive diagnosis solely relies on clinical bone exposure. This ambiguity leads to a diagnostic delay, complications, and unnecessary burden. This article aims to identify imaging modalities' usage and findings of MRONJ to provide possible approaches for early detection.

    Methods: Literature searches were conducted using PubMed, Web of Science, Scopus, and Cochrane Library to review all diagnostic imaging modalities for MRONJ.

    Results: Panoramic radiography offers a fundamental understanding of the lesions. Imaging findings were comparable between non-exposed and exposed MRONJ, showing osteolysis, osteosclerosis, and thickened lamina dura. Mandibular cortex index Class II could be a potential early MRONJ indicator. Three-dimensional modalities, CT and CBCT, were able to show more features unique to MRONJ, such as a solid-type periosteal reaction, buccal predominance of cortical perforation, and a bone-within-bone appearance. MRI signal intensities of vital bone are hypointense on T1WI and hyperintense on T2WI and STIR, while necrotic bone shows hypointensity on all of T1WI, T2WI, and STIR. Functional imaging is the most sensitive method but is usually performed for metastasis detection rather than as a diagnostic tool for early MRONJ.

    Conclusion: Currently, MRONJ-specific imaging features cannot be firmly established. However, the current data are valuable as they may lead to a more efficient diagnostic procedure along with a more suitable selection of imaging modalities.
  • Xiang, H., Dediu, D., Roberts, L., Van Oort, E., Norris, D., & Hagoort, P. (2012). The structural connectivity underpinning language aptitude, working memory and IQ in the perisylvian language network. Language Learning, 62(Supplement S2), 110-130. doi:10.1111/j.1467-9922.2012.00708.x.

    Abstract

    We carried out the first study on the relationship between individual language aptitude and structural connectivity of language pathways in the adult brain. We measured four components of language aptitude (vocabulary learning, VocL; sound recognition, SndRec; sound-symbol correspondence, SndSym; and grammatical inferencing, GrInf) using the LLAMA language aptitude test (Meara, 2005). Spatial working memory (SWM), verbal working memory (VWM) and IQ were also measured as control factors. Diffusion Tensor Imaging (DTI) was employed to investigate the structural connectivity of language pathways in the perisylvian language network. Principal Component Analysis (PCA) on behavioural measures suggests that a general ability might be important to the first stages of L2 acquisition. It also suggested that VocL, SndSym and SWM are more closely related to general IQ than SndRec and VocL, and distinguished the tasks specifically designed to tap into L2 acquisition (VocL, SndRec, SndSym and GrInf) from more generic measures (IQ, SWM and VWM). Regression analysis suggested significant correlations between most of these behavioural measures and the structural connectivity of certain language pathways, i.e., VocL and BA47-Parietal pathway, SndSym and inter-hemispheric BA45 pathway, GrInf and BA45-Temporal pathway and BA6-Temporal pathway, IQ and BA44-Parietal pathway, BA47-Parietal pathway, BA47-Temporal pathway and inter-hemispheric BA45 pathway, SWM and inter-hemispheric BA6 pathway and BA47-Parietal pathway, and VWM and BA47-Temporal pathway. These results are discussed in relation to relevant findings in the literature.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., Hino, Y., & Lupker, S. J. (2021). Orthographic properties of distractors do influence phonological Stroop effects: Evidence from Japanese Romaji distractors. Memory & Cognition, 49(3), 600-612. doi:10.3758/s13421-020-01103-8.

    Abstract

    In attempting to understand mental processes, it is important to use a task that appropriately reflects the underlying processes being investigated. Recently, Verdonschot and Kinoshita (Memory & Cognition, 46, 410-425, 2018) proposed that a variant of the Stroop task, the "phonological Stroop task", might be a suitable tool for investigating speech production. The major advantage of this task is that it is apparently not affected by the orthographic properties of the stimuli, unlike other, commonly used, tasks (e.g., associative-cuing and word-reading tasks). The viability of this proposal was examined in the present experiments by manipulating the script types of Japanese distractors. For Romaji distractors (e.g., "kushi"), color-naming responses were faster when the initial phoneme was shared between the color name and the distractor than when the initial phonemes were different, thereby showing a phoneme-based phonological Stroop effect (Experiment 1). In contrast, no such effect was observed when the same distractors were presented in Katakana (e.g., "クシ"), replicating Verdonschot and Kinoshita's original results (Experiment 2). A phoneme-based effect was again found when the Katakana distractors used in Verdonschot and Kinoshita's original study were transcribed and presented in Romaji (Experiment 3). Because the observation of a phonemic effect directly depended on the orthographic properties of the distractor stimuli, we conclude that the phonological Stroop task is also susceptible to orthographic influences.
  • You, W., Zhang, Q., & Verdonschot, R. G. (2012). Masked syllable priming effects in word and picture naming in Chinese. PLoS One, 7(10): e46595. doi:10.1371/journal.pone.0046595.

    Abstract

    Four experiments investigated the role of the syllable in Chinese spoken word production. Chen, Chen and Ferrand (2003) reported a syllable priming effect when primes and targets shared the first syllable using a masked priming paradigm in Chinese. Our Experiment 1 was a direct replication of Chen et al.'s (2003) Experiment 3 employing CV (e. g., /ba2.ying2/, strike camp) and CVG (e. g., /bai2.shou3/, white haired) syllable types. Experiment 2 tested the syllable priming effect using different syllable types: e. g., CV (/qi4.qiu2/, balloon) and CVN (/qing1.ting2/, dragonfly). Experiment 3 investigated this issue further using line drawings of common objects as targets that were preceded either by a CV (e. g., /qi3/, attempt), or a CVN (e. g., /qing2/, affection) prime. Experiment 4 further examined the priming effect by a comparison between CV or CVN priming and an unrelated priming condition using CV-NX (e. g., /mi2.ni3/, mini) and CVN-CX (e. g., /min2.ju1/, dwellings) as target words. These four experiments consistently found that CV targets were named faster when preceded by CV primes than when they were preceded by CVG, CVN or unrelated primes, whereas CVG or CVN targets showed the reverse pattern. These results indicate that the priming effect critically depends on the match between the structure of the prime and that of the first syllable of the target. The effect obtained in this study was consistent across different stimuli and different tasks (word and picture naming), and provides more conclusive and consistent data regarding the role of the syllable in Chinese speech production.
  • Zaadnoordijk, L., Buckler, H., Cusack, R., Tsuji, S., & Bergmann, C. (2021). A global perspective on testing infants online: Introducing ManyBabies-AtHome. Frontiers in Psychology, 12: 703234. doi:10.3389/fpsyg.2021.703234.

    Abstract

    Online testing holds great promise for infant scientists. It could increase participant diversity, improve reproducibility and collaborative possibilities, and reduce costs for researchers and participants. However, despite the rise of platforms and participant databases, little work has been done to overcome the challenges of making this approach available to researchers across the world. In this paper, we elaborate on the benefits of online infant testing from a global perspective and identify challenges for the international community that have been outside of the scope of previous literature. Furthermore, we introduce ManyBabies-AtHome, an international, multi-lab collaboration that is actively working to facilitate practical and technical aspects of online testing as well as address ethical concerns regarding data storage and protection, and cross-cultural variation. The ultimate goal of this collaboration is to improve the method of testing infants online and make it globally available.
  • Yu, C., Zhang, Y., Slone, L. K., & Smith, L. B. (2021). The infant’s view redefines the problem of referential uncertainty in early word learning. Proceedings of the National Academy of Sciences of the United States of America, 118(52): e2107019118. doi:10.1073/pnas.2107019118.

    Abstract

    The learning of first object names is deemed a hard problem due to the uncertainty inherent in mapping a heard name to the intended referent in a cluttered and variable world. However, human infants readily solve this problem. Despite considerable theoretical discussion, relatively little is known about the uncertainty infants face in the real world. We used head-mounted eye tracking during parent–infant toy play and quantified the uncertainty by measuring the distribution of infant attention to the potential referents when a parent named both familiar and unfamiliar toy objects. The results show that infant gaze upon hearing an object name is often directed to a single referent which is equally likely to be a wrong competitor or the intended target. This bimodal gaze distribution clarifies and redefines the uncertainty problem and constrains possible solutions.
  • Zhang, Y., Yurovsky, D., & Yu, C. (2021). Cross-situational learning from ambiguous egocentric input is a continuous process: Evidence using the human simulation paradigm. Cognitive Science, 45(7): e13010. doi:10.1111/cogs.13010.

    Abstract

    Recent laboratory experiments have shown that both infant and adult learners can acquire word-referent mappings using cross-situational statistics. The vast majority of the work on this topic has used unfamiliar objects presented on neutral backgrounds as the visual contexts for word learning. However, these laboratory contexts are much different than the real-world contexts in which learning occurs. Thus, the feasibility of generalizing cross-situational learning beyond the laboratory is in question. Adapting the Human Simulation Paradigm, we conducted a series of experiments examining cross-situational learning from children's egocentric videos captured during naturalistic play. Focusing on individually ambiguous naming moments that naturally occur during toy play, we asked how statistical learning unfolds in real time through accumulating cross-situational statistics in naturalistic contexts. We found that even when learning situations were individually ambiguous, learners' performance gradually improved over time. This improvement was driven in part by learners' use of partial knowledge acquired from previous learning situations, even when they had not yet discovered correct word-object mappings. These results suggest that word learning is a continuous process by means of real-time information integration.
  • Zhong, S., Wei, L., Zhao, C., Yang, L., Di, Z., Francks, C., & Gong, G. (2021). Interhemispheric relationship of genetic influence on human brain connectivity. Cerebral Cortex, 31(1), 77-88. doi:10.1093/cercor/bhaa207.

    Abstract

    To understand the origins of interhemispheric differences and commonalities/coupling in human brain wiring, it is crucial to determine how homologous interregional connectivities of the left and right hemispheres are genetically determined and related. To address this, in the present study, we analyzed human twin and pedigree samples with high-quality diffusion magnetic resonance imaging tractography and estimated the heritability and genetic correlation of homologous left and right white matter (WM) connections. The results showed that the heritability of WM connectivity was similar and coupled between the 2 hemispheres and that the degree of overlap in genetic factors underlying homologous WM connectivity (i.e., interhemispheric genetic correlation) varied substantially across the human brain: from complete overlap to complete nonoverlap. Particularly, the heritability was significantly stronger and the chance of interhemispheric complete overlap in genetic factors was higher in subcortical WM connections than in cortical WM connections. In addition, the heritability and interhemispheric genetic correlations were stronger for long-range connections than for short-range connections. These findings highlight the determinants of the genetics underlying WM connectivity and its interhemispheric relationships, and provide insight into genetic basis of WM connectivity asymmetries in both healthy and disease states.

    Additional information

    Supplementary data
  • Zhou, W., Broersma, M., & Cutler, A. (2021). Asymmetric memory for birth language perception versus production in young international adoptees. Cognition, 213: 104788. doi:10.1016/j.cognition.2021.104788.

    Abstract

    Adults who as children were adopted into a different linguistic community retain knowledge of their birth language. The possession (without awareness) of such knowledge is known to facilitate the (re)learning of birth-language speech patterns; this perceptual learning predicts such adults' production success as well, indicating that the retained linguistic knowledge is abstract in nature. Adoptees' acquisition of their adopted language is fast and complete; birth-language mastery disappears rapidly, although this latter process has been little studied. Here, 46 international adoptees from China aged four to 10 years, with Dutch as their new language, plus 47 matched non-adopted Dutch-native controls and 40 matched non-adopted Chinese controls, undertook across a two-week period 10 blocks of training in perceptually identifying Chinese speech contrasts (one segmental, one tonal) which were unlike any Dutch contrasts. Chinese controls easily accomplished all these tasks. The same participants also provided speech production data in an imitation task. In perception, adoptees and Dutch controls scored equivalently poorly at the outset of training; with training, the adoptees significantly improved while the Dutch controls did not. In production, adoptees' imitations both before and after training could be better identified, and received higher goodness ratings, than those of Dutch controls. The perception results confirm that birth-language knowledge is stored and can facilitate re-learning in post-adoption childhood; the production results suggest that although processing of phonological category detail appears to depend on access to the stored knowledge, general articulatory dimensions can at this age also still be remembered, and may facilitate spoken imitation.

    Additional information

    stimulus materials
  • Zhu, Z., Hagoort, P., Zhang, J. X., Feng, G., Chen, H.-C., Bastiaansen, M. C. M., & Wang, S. (2012). The anterior left inferior frontal gyrus contributes to semantic unification. NeuroImage, 60, 2230-2237. doi:10.1016/j.neuroimage.2012.02.036.

    Abstract

    Semantic unification, the process by which small blocks of semantic information are combined into a coherent utterance, has been studied with various types of tasks. However, whether the brain activations reported in these studies are attributed to semantic unification per se or to other task-induced concomitant processes still remains unclear. The neural basis for semantic unification in sentence comprehension was examined using event-related potentials (ERP) and functional Magnetic Resonance Imaging (fMRI). The semantic unification load was manipulated by varying the goodness of fit between a critical word and its preceding context (in high cloze, low cloze and violation sentences). The sentences were presented in a serial visual presentation mode. The participants were asked to perform one of three tasks: semantic congruency judgment (SEM), silent reading for comprehension (READ), or font size judgment (FONT), in separate sessions. The ERP results showed a similar N400 amplitude modulation by the semantic unification load across all three tasks. The brain activations associated with the semantic unification load were found in the anterior left inferior frontal gyrus (aLIFG) in the FONT task and in a widespread set of regions in the other two tasks. These results suggest that the aLIFG activation reflects semantic unification, which is different from other brain activations that may reflect task-specific strategic processing.

    Additional information

    Zhu_2012_suppl.dot
  • Zimianiti, E. (2021). Adjective-noun constructions in Griko: Focusing on measuring adjectives and their placement in the nominal domain. LingUU Journal, 5(2), 62-75.

    Abstract

    This paper examines adjectival placement in Griko, an Italian-Greek language variety. Guardiano and Stavrou (2019, 2014) have argued that there is a gap of evidence in the diachrony of adjectives in prenominal position and in particular, of measuring adjectives. Evidence is presented in this paper contradicting the aforementioned claims. After considering the placement of adjectives in Greek and Italian, and their similarities and differences, the adjectival pattern of Griko is analysed. The analysis is based mostly on written data from the early 20th century proving the prenominal position of adjectives and adding to the diachronic schema of adjectival placement in Griko.
  • Zinken, J., Kaiser, J., Weidner, M., Mondada, L., Rossi, G., & Sorjonen, M.-L. (2021). Rule talk: Instructing proper play with impersonal deontic statements. Frontiers in Communication, 6: 660394. doi:10.3389/fcomm.2021.660394.

    Abstract

    The present paper explores how rules are enforced and talked about in everyday life. Drawing on a corpus of board game recordings across European languages, we identify a sequential and praxeological context for rule talk. After a game rule is breached, a participant enforces proper play and then formulates a rule with an impersonal deontic statement (e.g. ‘It’s not allowed to do this’). Impersonal deontic statements express what may or may not be done without tying the obligation to a particular individual. Our analysis shows that such statements are used as part of multi-unit and multi-modal turns where rule talk is accomplished through both grammatical and embodied means. Impersonal deontic statements serve multiple interactional goals: they account for having changed another’s behavior in the moment and at the same time impart knowledge for the future. We refer to this complex action as an “instruction”. The results of this study advance our understanding of rules and rule-following in everyday life, and of how resources of language and the body are combined to enforce and formulate rules.
  • Zora, H., Riad, T., Ylinen, S., & Csépe, V. (2021). Phonological variations are compensated at the lexical level: Evidence from auditory neural activity. Frontiers in Human Neuroscience, 15: 622904. doi:10.3389/fnhum.2021.622904.

    Abstract

    Dealing with phonological variations is important for speech processing. This article addresses whether phonological variations introduced by assimilatory processes are compensated for at the pre-lexical or lexical level, and whether the nature of variation and the phonological context influence this process. To this end, Swedish nasal regressive place assimilation was investigated using the mismatch negativity (MMN) component. In nasal regressive assimilation, the coronal nasal assimilates to the place of articulation of a following segment, most clearly with a velar or labial place of articulation, as in utan mej “without me” > [ʉːtam mɛjː]. In a passive auditory oddball paradigm, 15 Swedish speakers were presented with Swedish phrases with attested and unattested phonological variations and contexts for nasal assimilation. Attested variations – a coronal-to-labial change as in utan “without” > [ʉːtam] – were contrasted with unattested variations – a labial-to-coronal change as in utom “except” > ∗[ʉːtɔn] – in appropriate and inappropriate contexts created by mej “me” [mɛjː] and dej “you” [dɛjː]. Given that the MMN amplitude depends on the degree of variation between two stimuli, the MMN responses were expected to indicate to what extent the distance between variants was tolerated by the perceptual system. Since the MMN response reflects not only low-level acoustic processing but also higher-level linguistic processes, the results were predicted to indicate whether listeners process assimilation at the pre-lexical and lexical levels. The results indicated no significant interactions across variations, suggesting that variations in phonological forms do not incur any cost in lexical retrieval; hence such variation is compensated for at the lexical level. However, since the MMN response reached significance only for a labial-to-coronal change in a labial context and for a coronal-to-labial change in a coronal context, the compensation might have been influenced by the nature of variation and the phonological context. It is therefore concluded that while assimilation is compensated for at the lexical level, there is also some influence from pre-lexical processing. The present results reveal not only signal-based perception of phonological units, but also higher-level lexical processing, and are thus able to reconcile the bottom-up and top-down models of speech processing.
  • Zora, H., & Csépe, V. (2021). Perception of prosodic modulations of linguistic and paralinguistic origin: Evidence from early auditory event-related potentials. Frontiers in Neuroscience, 15: 797487. doi:10.3389/fnins.2021.797487.

    Abstract

    How listeners handle prosodic cues of linguistic and paralinguistic origin is a central question for spoken communication. In the present EEG study, we addressed this question by examining neural responses to variations in pitch accent (linguistic) and affective (paralinguistic) prosody in Swedish words, using a passive auditory oddball paradigm. The results indicated that changes in pitch accent and affective prosody elicited mismatch negativity (MMN) responses at around 200 ms, confirming the brain’s pre-attentive response to any prosodic modulation. The MMN amplitude was, however, statistically larger to the deviation in affective prosody in comparison to the deviation in pitch accent and affective prosody combined, which is in line with previous research indicating not only a larger MMN response to affective prosody in comparison to neutral prosody but also a smaller MMN response to multidimensional deviants than unidimensional ones. The results, further, showed a significant P3a response to the affective prosody change in comparison to the pitch accent change at around 300 ms, in accordance with previous findings showing an enhanced positive response to emotional stimuli. The present findings provide evidence for distinct neural processing of different prosodic cues, and statistically confirm the intrinsic perceptual and motivational salience of paralinguistic information in spoken communication.
  • Zwaan, R. A., Van der Stoep, N., Guadalupe, T., & Bouwmeester, S. (2012). Language comprehension in the balance: The robustness of the action-compatibility effect (ACE). PLoS One, 7(2), e31204. doi:10.1371/journal.pone.0031204.

    Abstract

    How does language comprehension interact with motor activity? We investigated the conditions under which comprehending an action sentence affects people's balance. We performed two experiments to assess whether sentences describing forward or backward movement modulate the lateral movements made by subjects who made sensibility judgments about the sentences. In one experiment subjects were standing on a balance board and in the other they were seated on a balance board that was mounted on a chair. This allowed us to investigate whether the action compatibility effect (ACE) is robust and persists in the face of salient incompatibilities between sentence content and subject movement. Growth-curve analysis of the movement trajectories produced by the subjects in response to the sentences suggests that the ACE is indeed robust. Sentence content influenced movement trajectory despite salient inconsistencies between implied and actual movement. These results are interpreted in the context of the current discussion of embodied, or grounded, language comprehension and meaning representation.
  • Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2012). An empirical investigation of expression of multiple entities in Turkish Sign Language (TİD): Considering the effects of modality. Lingua, 122, 1636 -1667. doi:10.1016/j.lingua.2012.08.010.

    Abstract

    This paper explores the expression of multiple entities in Turkish Sign Language (Türk İşaret Dili; TİD), a less well-studied sign language. It aims to provide a comprehensive description of the ways and frequencies in which entity plurality in this language is expressed, both within and outside the noun phrase. We used a corpus that includes both elicited and spontaneous data from native signers. The results reveal that most of the expressions of multiple entities in TİD are iconic, spatial strategies (i.e. localization and spatial plural predicate inflection), none of which, we argue, should be considered as genuine plural marking devices with the main aim of expressing plurality. Instead, the observed devices for localization and predicate inflection allow for a plural interpretation when multiple locations in space are used. Our data do not provide evidence that TİD employs (productive) morphological plural marking (i.e. reduplication) on nouns, in contrast to some other sign languages and many spoken languages. We relate our findings to the expression of multiple entities in other signed languages and in spoken languages and discuss these findings in terms of modality effects on the expression of multiple entities in human language.
