Publications

  • Tamaoka, K., Yu, S., Zhang, J., Otsuka, Y., Lim, H., Koizumi, M., & Verdonschot, R. G. (2024). Syntactic structures in motion: Investigating word order variations in verb-final (Korean) and verb-initial (Tongan) languages. Frontiers in Psychology, 15: 1360191. doi:10.3389/fpsyg.2024.1360191.

    Abstract

    This study explored sentence processing in two typologically distinct languages: Korean, a verb-final language, and Tongan, a verb-initial language. The first experiment revealed that in Korean, sentences arranged in the scrambled OSV (Object, Subject, Verb) order were processed more slowly than those in the canonical SOV order, highlighting a scrambling effect. It also found that sentences with subject topicalization in the SOV order were processed as swiftly as those in the canonical form, whereas sentences with object topicalization in the OSV order were processed with speeds and accuracy comparable to scrambled sentences. However, since topicalization and scrambling in Korean use the same OSV order, independently distinguishing the effects of topicalization is challenging. In contrast, Tongan allows for a clear separation of word orders for topicalization and scrambling, facilitating an independent evaluation of topicalization effects. The second experiment, employing a maze task, confirmed that Tongan’s canonical VSO order was processed more efficiently than the VOS scrambled order, thereby verifying a scrambling effect. The third experiment investigated the effects of both scrambling and topicalization in Tongan, finding that the canonical VSO order was processed most efficiently in terms of speed and accuracy, unlike the VOS scrambled and SVO topicalized orders. Notably, the OVS object-topicalized order was processed as efficiently as the VSO canonical order, while the SVO subject-topicalized order was slower than VSO but faster than VOS. By independently assessing the effects of topicalization apart from scrambling, this study demonstrates that both subject and object topicalization in Tongan facilitate sentence processing, contradicting the predictions based on movement-based anticipation.

    Additional information

    appendix 1-3
  • Ten Oever, S., & Martin, A. E. (2024). Interdependence of “what” and “when” in the brain. Journal of Cognitive Neuroscience, 36(1), 167-186. doi:10.1162/jocn_a_02067.

    Abstract

    From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding—and, minimally, modeling—this temporal variability is key for theories of how the brain generates unified and consistent neural representations and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation.
  • Ten Oever, S., Titone, L., te Rietmolen, N., & Martin, A. E. (2024). Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. Proceedings of the National Academy of Sciences of the United States of America, 121(3): e2320489121. doi:10.1073/pnas.2320489121.

    Abstract

    Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.

    Abstract

    During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole and for the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.

    Abstract

    Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.
  • Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.

    Abstract

    The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
  • Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.

    Abstract

    Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes and whether predictive mechanisms underlie this process remain a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power were significantly affected by changes in predictive validity.
  • Terrill, A., & Dunn, M. (2006). Semantic transference: Two preliminary case studies from the Solomon Islands. In C. Lefebvre, L. White, & C. Jourdan (Eds.), L2 acquisition and Creole genesis: Dialogues (pp. 67-85). Amsterdam: Benjamins.
  • Terrill, A. (1998). Biri. München: Lincom Europa.

    Abstract

    This work presents a salvage grammar of the Biri language of Eastern Central Queensland, a Pama-Nyungan language belonging to the large Maric subgroup. As the language is no longer used, the grammatical description is based on old written sources and on recordings made by linguists in the 1960s and 1970s. Biri is in many ways typical of the Pama-Nyungan languages of Southern Queensland. It has split case marking systems, marking nouns according to an ergative/absolutive system and pronouns according to a nominative/accusative system. Unusually for its area, Biri also has bound pronouns on its verb, cross-referencing the person, number and case of core participants. As far as it is possible, the grammatical discussion is ‘theory neutral’. The first four chapters deal with the phonology, morphology, and syntax of the language. The last two chapters contain a substantial discussion of Biri’s place in the Pama-Nyungan family. In chapter 6 the numerous dialects of the Biri language are discussed. In chapter 7 the close linguistic relationship between Biri and the surrounding languages is examined.
  • Terrill, A. (2006). Central Solomon languages. In K. Brown (Ed.), Encyclopedia of language and linguistics (vol. 2) (pp. 279-280). Amsterdam: Elsevier.

    Abstract

    The Papuan languages of the central Solomon Islands are a negatively defined areal grouping: They are those four or possibly five languages in the central Solomon Islands that do not belong to the Austronesian family. Bilua (Vella Lavella), Touo (Rendova), Lavukaleve (Russell Islands), Savosavo (Savo Island) and possibly Kazukuru (New Georgia) have been identified as non-Austronesian since the early 20th century. However, their affiliations both to each other and to other languages still remain a mystery. Heterogeneous and until recently largely undescribed, they present an interesting departure from what is known both of Austronesian languages in the region and of the Papuan languages of the mainland of New Guinea.
  • Terrill, A. (2006). Body part terms in Lavukaleve, a Papuan language of the Solomon Islands. Language Sciences, 28(2-3), 304-322. doi:10.1016/j.langsci.2005.11.008.

    Abstract

    This paper explores body part terms in Lavukaleve, a Papuan isolate spoken in the Solomon Islands. The full set of body part terms collected so far is presented, and their grammatical properties are explained. It is argued that Lavukaleve body part terms do not enter into partonomic relations with each other, and that a hierarchical structure of body part terms does not apply for Lavukaleve. It is shown too that some universal claims which have been made about the expression of terms relating to limbs are contradicted in Lavukaleve, which has only one general term covering arm, hand, leg and (for some people) foot.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2006). Note of clarification on the coding of light verbs in ‘Semantic generality, input frequency and the acquisition of syntax’ (Journal of Child Language 31, 61–99). Journal of Child Language, 33(1), 191-197. doi:10.1017/S0305000905007178.

    Abstract

    In our recent paper, ‘Semantic generality, input frequency and the acquisition of syntax’ (Journal of Child Language, 31, 61–99), we presented data from two-year-old children to examine the question of whether the semantic generality of verbs contributed to their ease and stage of acquisition over and above the effects of their typically high frequency in the language to which children are exposed. We adopted two different categorization schemes to determine whether individual verbs should be considered to be semantically general, or ‘light’, or whether they encoded more specific semantics. These categorization schemes were based on previous work in the literature on the role of semantically general verbs in early verb acquisition, and were designed, in the first case, to be a conservative estimate of semantic generality, including only verbs designated as semantically general by a number of other researchers (e.g. Clark, 1978; Pinker, 1989; Goldberg, 1998), and, in the second case, to be a more inclusive estimate of semantic generality based on Ninio's (1999a,b) suggestion that grammaticalizing verbs encode the semantics associated with semantically general verbs. Under this categorization scheme, a much larger number of verbs were included as semantically general verbs.
  • Thothathiri, M., Basnakova, J., Lewis, A. G., & Briand, J. M. (2024). Fractionating difficulty during sentence comprehension using functional neuroimaging. Cerebral Cortex, 34(2): bhae032. doi:10.1093/cercor/bhae032.

    Abstract

    Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general. These findings suggest that difficulty is not unitary and that there is a role for a variety of linguistic and non-linguistic processes in supporting comprehension.

    Additional information

    supplementary information
  • Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.

    Abstract

    Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
  • Trujillo, J. P. (2024). Motion-tracking technology for the study of gesture. In A. Cienki (Ed.), The Cambridge Handbook of Gesture Studies. Cambridge: Cambridge University Press.
  • Trujillo, J. P., & Holler, J. (2024). Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Scientific Reports, 14: 2286. doi:10.1038/s41598-024-52589-0.

    Abstract

    Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meaning compared to utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.

    Additional information

    41598_2024_52589_MOESM1_ESM.docx
  • Trujillo, J. P., & Holler, J. (2024). Information distribution patterns in naturalistic dialogue differ across languages. Psychonomic Bulletin & Review, 31, 1723-1734. doi:10.3758/s13423-024-02452-0.

    Abstract

    The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing–planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It also is not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distributions at the utterance level, and whether these patterns are similar or differed across the languages. We found that English, Spanish, and Mandarin typically show a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese showed front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., time between speaker turns).

    Additional information

    Data availability
  • Ullman, M. T., Bulut, T., & Walenski, M. (2024). Hijacking limitations of working memory load to test for composition in language. Cognition, 251: 105875. doi:10.1016/j.cognition.2024.105875.

    Abstract

    Although language depends on storage and composition, just what is stored or (de)composed remains unclear. We leveraged working memory load limitations to test for composition, hypothesizing that decomposed forms should particularly tax working memory. We focused on a well-studied paradigm, English inflectional morphology. We predicted that (compositional) regulars should be harder to maintain in working memory than (non-compositional) irregulars, using a 3-back production task. Frequency, phonology, orthography, and other potentially confounding factors were controlled for. Compared to irregulars, regulars and their accompanying −s/−ing-affixed filler items yielded more errors. Underscoring the decomposition of only regulars, regulars yielded more bare-stem (e.g., walk) and stem affixation errors (walks/walking) than irregulars, whereas irregulars yielded more past-tense-form affixation errors (broughts/tolded). In line with previous evidence that regulars can be stored under certain conditions, the regular-irregular difference held specifically for phonologically consistent (not inconsistent) regulars, in particular for both low and high frequency consistent regulars in males, but only for low frequency consistent regulars in females. Sensitivity analyses suggested the findings were robust. The study further elucidates the computation of inflected forms, and introduces a simple diagnostic for linguistic composition.

    Additional information

    Data availability
  • Van Staden, M., Bowerman, M., & Verhelst, M. (2006). Some properties of spatial description in Dutch. In S. C. Levinson, & D. Wilkins (Eds.), Grammars of Space (pp. 475-511). Cambridge: Cambridge University Press.
  • Van Alphen, P. M., & McQueen, J. M. (2006). The effect of voice onset time differences on lexical access in Dutch. Journal of Experimental Psychology: Human Perception and Performance, 32(1), 178-196. doi:10.1037/0096-1523.32.1.178.

    Abstract

    Effects on spoken-word recognition of prevoicing differences in Dutch initial voiced plosives were examined. In 2 cross-modal identity-priming experiments, participants heard prime words and nonwords beginning with voiced plosives with 12, 6, or 0 periods of prevoicing or matched items beginning with voiceless plosives and made lexical decisions to visual tokens of those items. Six-period primes had the same effect as 12-period primes. Zero-period primes had a different effect, but only when their voiceless counterparts were real words. Listeners could nevertheless discriminate the 6-period primes from the 12- and 0-period primes. Phonetic detail appears to influence lexical access only to the extent that it is useful: In Dutch, presence versus absence of prevoicing is more informative than amount of prevoicing.
  • Van den Brink, D., Brown, C. M., & Hagoort, P. (2006). The cascaded nature of lexical selection and integration in auditory sentence processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(3), 364-372. doi:10.1037/0278-7393.32.3.364.

    Abstract

    An event-related brain potential experiment was carried out to investigate the temporal relationship between lexical selection and semantic integration in auditory sentence processing. Participants were presented with spoken sentences that ended with a word that was either semantically congruent or anomalous. Information about the moment at which a sentence-final word could uniquely be identified, its isolation point (IP), was compared with the onset of the elicited N400 congruity effect, reflecting semantic integration processing. The results revealed that the onset of the N400 effect occurred prior to the IP of the sentence-final words. Moreover, the factor early or late IP did not affect the onset of the N400. These findings indicate that lexical selection and semantic integration are cascading processes, in that semantic integration processing can start before the acoustic information allows the selection of a unique candidate and seems to be attempted in parallel for multiple candidates that are still compatible with the bottom-up acoustic input.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1998). Brain activity during speaking: From syntax to phonology in 40 milliseconds. Science, 280(5363), 572-574. doi:10.1126/science.280.5363.572.

    Abstract

    In normal conversation, speakers translate thoughts into words at high speed. To enable this speed, the retrieval of distinct types of linguistic knowledge has to be orchestrated with millisecond precision. The nature of this orchestration is still largely unknown. This report presents dynamic measures of the real-time activation of two basic types of linguistic knowledge, syntax and phonology. Electrophysiological data demonstrate that during noun-phrase production speakers retrieve the syntactic gender of a noun before its abstract phonological properties. This two-step process operates at high speed: the data show that phonological information is already available 40 milliseconds after syntactic properties have been retrieved.
  • Van Wijk, C., & Kempen, G. (1987). A dual system for producing self-repairs in spontaneous speech: Evidence from experimentally elicited corrections. Cognitive Psychology, 19, 403-440. doi:10.1016/0010-0285(87)90014-4.

    Abstract

    This paper presents a cognitive theory of the production and shaping of self-repairs during speaking. In an extensive experimental study, a new technique is tried out: artificial elicitation of self-repairs. The data clearly indicate that two mechanisms for computing the shape of self-repairs should be distinguished. One is based on the repair strategy called reformulation, the second on lemma substitution. W. Levelt’s (1983, Cognition, 14, 41–104) well-formedness rule, which connects self-repairs to coordinate structures, is shown to apply only to reformulations. In the case of lemma substitution, a totally different set of rules is at work. The linguistic unit of central importance in reformulations is the major syntactic constituent; in lemma substitutions it is a prosodic unit, the phonological phrase. A parametrization of the model yielded a very satisfactory fit between observed and reconstructed scores.
  • Van Berkum, J. J. A. (1986). De cognitieve psychologie op zoek naar grondslagen. Kennis en Methode: Tijdschrift voor wetenschapsfilosofie en methodologie, X, 348-360.
  • Van Berkum, J. J. A. (1986). Doordacht gevoel: Emoties als informatieverwerking. De Psycholoog, 21(9), 417-423.
  • Van de Geer, J. P., & Levelt, W. J. M. (1963). Detection of visual patterns disturbed by noise: An exploratory study. Quarterly Journal of Experimental Psychology, 15, 192-204. doi:10.1080/17470216308416324.

    Abstract

    An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns) led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight into the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially to patterns with little spread. Second, this shows connections with the factors obtained from analysis of the semantic ratings, e.g. easily disturbed patterns show a large drop in the semantic regularity factor when only a little noise is added.
  • Van Staden, M., & Majid, A. (2006). Body colouring task. Language Sciences, 28(2-3), 158-161. doi:10.1016/j.langsci.2005.11.004.

    Abstract

    This paper outlines a method for collecting information on the extensional meanings of body part terms using a colouring-in task.
  • van de Beek, D., Weisfelt, M., Hoogman, M., de Gans, J., & Schmand, B. (2006). Neuropsychological sequelae of bacterial meningitis: The influence of alcoholism and adjunctive dexamethasone therapy [Letter to the editor]. Brain, 129, E46. doi:10.1093/brain/awl052.

    Abstract

    The article by Schmidt and colleagues (2006) reported neuropsychological sequelae of bacterial and viral meningitis. In a retrospective study, they carefully selected patients and excluded those with concomitant conditions such as alcoholism after Streptococcus pneumoniae meningitis (Schmidt et al., 2006). The authors should be complimented for their solid work; however, some questions can be raised.
  • Van Geenhoven, V. (1998). On the Argument Structure of some Noun Incorporating Verbs in West Greenlandic. In M. Butt, & W. Geuder (Eds.), The Projection of Arguments - Lexical and Compositional Factors (pp. 225-263). Stanford, CA, USA: CSLI Publications.
  • Van Valin Jr., R. D. (1998). The acquisition of WH-questions and the mechanisms of language acquisition. In M. Tomasello (Ed.), The new psychology of language: Cognitive and functional approaches to language structure (pp. 221-249). Mahwah, New Jersey: Erlbaum.
  • Van de Geer, J. P., Levelt, W. J. M., & Plomp, R. (1962). The connotation of musical consonance. Acta Psychologica, 20, 308-319.

    Abstract

    As a preliminary to further research on musical consonance, an exploratory investigation was made of the different modes of judgment of musical intervals. This was done by way of a semantic differential. Subjects rated 23 intervals against 10 scales. In a factor analysis three factors appeared: pitch, evaluation, and fusion. The relation between these factors and some physical characteristics was investigated. The scale consonant–dissonant proved to be purely evaluative (in opposition to Stumpf's theory). This evaluative connotation is not in accordance with the musicological meaning of consonance. Suggestions are offered to account for this difference.
  • Van Valin Jr., R. D. (2006). Some universals of verb semantics. In R. Mairal, & J. Gil (Eds.), Linguistic universals (pp. 155-178). Cambridge: Cambridge University Press.
  • Van Valin Jr., R. D. (2006). Semantic macroroles and language processing. In I. Bornkessel, M. Schlesewsky, B. Comrie, & A. Friederici (Eds.), Semantic role universals and argument linking: Theoretical, typological and psycho-/neurolinguistic perspectives (pp. 263-302). Berlin: Mouton de Gruyter.
  • Van Geert, E., Ding, R., & Wagemans, J. (2024). A cross-cultural comparison of aesthetic preferences for neatly organized compositions: Native Chinese- versus Native Dutch-speaking samples. Empirical Studies of the Arts. Advance online publication. doi:10.1177/02762374241245917.

    Abstract

    Do aesthetic preferences for images of neatly organized compositions (e.g., images collected on blogs like Things Organized Neatly©) generalize across cultures? In an earlier study, focusing on stimulus and personal properties related to order and complexity, Western participants indicated their preference for one of two simultaneously presented images (100 pairs). In the current study, we compared the data of the native Dutch-speaking participants from this earlier sample (N = 356) to newly collected data from a native Chinese-speaking sample (N = 220). Overall, aesthetic preferences were quite similar across cultures. When relating preferences for each sample to ratings of order, complexity, soothingness, and fascination collected from a Western, mainly Dutch-speaking sample, the results hint at a cross-culturally consistent preference for images that Western participants rate as more ordered, but a cross-culturally diverse relation between preferences and complexity.
  • Van der Werff, J., Ravignani, A., & Jadoul, Y. (2024). thebeat: A Python package for working with rhythms and other temporal sequences. Behavior Research Methods, 56, 3725-3736. doi:10.3758/s13428-023-02334-8.

    Abstract

    thebeat is a Python package for working with temporal sequences and rhythms in the behavioral and cognitive sciences, as well as in bioacoustics. It provides functionality for creating experimental stimuli, and for visualizing and analyzing temporal data. Sequences, sounds, and experimental trials can be generated using single lines of code. thebeat contains functions for calculating common rhythmic measures, such as interval ratios, and for producing plots, such as circular histograms. thebeat saves researchers time when creating experiments, and provides the first steps in collecting widely accepted methods for use in timing research. thebeat is an open-source, ongoing, and collaborative project, and can be extended for use in specialized subfields. thebeat integrates easily with the existing Python ecosystem, allowing one to combine our tested code with custom-made scripts. The package was specifically designed to be useful for both skilled and novice programmers. thebeat provides a foundation for working with temporal sequences onto which additional functionality can be built. This combination of specificity and plasticity should facilitate research in multiple research contexts and fields of study.
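    The interval ratios mentioned in the abstract are a standard rhythmic measure in timing research. As a rough illustration (a hypothetical sketch in plain Python, not thebeat's actual API), each ratio relates an inter-onset interval (IOI) to the sum of itself and its successor, so an isochronous sequence yields ratios of 0.5:

```python
# Hypothetical sketch of the interval-ratio measure, r_k = IOI_k / (IOI_k + IOI_{k+1});
# a common definition in rhythm research, not code from the thebeat package itself.
def interval_ratios(iois):
    """Ratio of each inter-onset interval to the sum of itself and its successor."""
    return [iois[k] / (iois[k] + iois[k + 1]) for k in range(len(iois) - 1)]

# An isochronous sequence (equal IOIs, in ms) gives ratios of exactly 0.5:
print(interval_ratios([500, 500, 500, 500]))  # [0.5, 0.5, 0.5]
```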
  • Verdonschot, R. G., Van der Wal, J., Lewis, A. G., Knudsen, B., Von Grebmer zu Wolfsthurn, S., Schiller, N. O., & Hagoort, P. (2024). Information structure in Makhuwa: Electrophysiological evidence for a universal processing account. Proceedings of the National Academy of Sciences of the United States of America, 121(30): e2315438121. doi:10.1073/pnas.2315438121.

    Abstract

    There is evidence from both behavior and brain activity that the way information is structured, through the use of focus, can up-regulate processing of focused constituents, likely to give prominence to the relevant aspects of the input. This is hypothesized to be universal, regardless of the different ways in which languages encode focus. In order to test this universalist hypothesis, we need to go beyond the more familiar linguistic strategies for marking focus, such as by means of intonation or specific syntactic structures (e.g., it-clefts). Therefore, in this study, we examine Makhuwa-Enahara, a Bantu language spoken in northern Mozambique, which uniquely marks focus through verbal conjugation. The participants were presented with sentences that consisted of either a semantically anomalous constituent or a semantically nonanomalous constituent. Moreover, focus on this particular constituent could be either present or absent. We observed a consistent pattern: Focused information generated a more negative N400 response than the same information in nonfocus position. This demonstrates that regardless of how focus is marked, its consequence seems to result in an upregulation of processing of information that is in focus.

    Additional information

    supplementary materials
  • Verhoef, E., Allegrini, A. G., Jansen, P. R., Lange, K., Wang, C. A., Morgan, A. T., Ahluwalia, T. S., Symeonides, C., EAGLE-Working Group, Eising, E., Franken, M.-C., Hypponen, E., Mansell, T., Olislagers, M., Omerovic, E., Rimfeld, K., Schlag, F., Selzam, S., Shapland, C. Y., Tiemeier, H., Whitehouse, A. J. O., Saffery, R., Bønnelykke, K., Reilly, S., Pennell, C. E., Wake, M., Cecil, C. A., Plomin, R., Fisher, S. E., & St Pourcain, B. (2024). Genome-wide analyses of vocabulary size in infancy and toddlerhood: Associations with Attention-Deficit/Hyperactivity Disorder and cognition-related traits. Biological Psychiatry, 95(1), 859-869. doi:10.1016/j.biopsych.2023.11.025.

    Abstract

    Background

    The number of words children produce (expressive vocabulary) and understand (receptive vocabulary) changes rapidly during early development, partially due to genetic factors. Here, we performed a meta–genome-wide association study of vocabulary acquisition and investigated polygenic overlap with literacy, cognition, developmental phenotypes, and neurodevelopmental conditions, including attention-deficit/hyperactivity disorder (ADHD).

    Methods

    We studied 37,913 parent-reported vocabulary size measures (English, Dutch, Danish) for 17,298 children of European descent. Meta-analyses were performed for early-phase expressive (infancy, 15–18 months), late-phase expressive (toddlerhood, 24–38 months), and late-phase receptive (toddlerhood, 24–38 months) vocabulary. Subsequently, we estimated single nucleotide polymorphism–based heritability (SNP-h2) and genetic correlations (rg) and modeled underlying factor structures with multivariate models.

    Results

    Early-life vocabulary size was modestly heritable (SNP-h2 = 0.08–0.24). Genetic overlap between infant expressive and toddler receptive vocabulary was negligible (rg = 0.07), although each measure was moderately related to toddler expressive vocabulary (rg = 0.69 and rg = 0.67, respectively), suggesting a multifactorial genetic architecture. Both infant and toddler expressive vocabulary were genetically linked to literacy (e.g., spelling: rg = 0.58 and rg = 0.79, respectively), underlining genetic similarity. However, a genetic association of early-life vocabulary with educational attainment and intelligence emerged only during toddlerhood (e.g., receptive vocabulary and intelligence: rg = 0.36). Increased ADHD risk was genetically associated with larger infant expressive vocabulary (rg = 0.23). Multivariate genetic models in the ALSPAC (Avon Longitudinal Study of Parents and Children) cohort confirmed this finding for ADHD symptoms (e.g., at age 13; rg = 0.54) but showed that the association effect reversed for toddler receptive vocabulary (rg = −0.74), highlighting developmental heterogeneity.

    Conclusions

    The genetic architecture of early-life vocabulary changes during development, shaping polygenic association patterns with later-life ADHD, literacy, and cognition-related traits.
  • Vernes, S. C., Nicod, J., Elahi, F. M., Coventry, J. A., Kenny, N., Coupe, A.-M., Bird, L. E., Davies, K. E., & Fisher, S. E. (2006). Functional genetic analysis of mutations implicated in a human speech and language disorder. Human Molecular Genetics, 15(21), 3154-3167. doi:10.1093/hmg/ddl392.

    Abstract

    Mutations in the FOXP2 gene cause a severe communication disorder involving speech deficits (developmental verbal dyspraxia), accompanied by wide-ranging impairments in expressive and receptive language. The protein encoded by FOXP2 belongs to a divergent subgroup of forkhead-box transcription factors, with a distinctive DNA-binding domain and motifs that mediate hetero- and homodimerization. Here we report the first direct functional genetic investigation of missense and nonsense mutations in FOXP2 using human cell-lines, including a well-established neuronal model system. We focused on three unusual FOXP2 coding variants, uniquely identified in cases of verbal dyspraxia, assessing expression, subcellular localization, DNA-binding and transactivation properties. Analysis of the R553H forkhead-box substitution, found in all affected members of a large three-generation family, indicated that it severely affects FOXP2 function, chiefly by disrupting nuclear localization and DNA-binding properties. The R328X truncation mutation, segregating with speech/language disorder in a second family, yields an unstable, predominantly cytoplasmic product that lacks transactivation capacity. A third coding variant (Q17L) observed in a single affected child did not have any detectable functional effect in the present study. In addition, we used the same systems to explore the properties of different isoforms of FOXP2, resulting from alternative splicing in human brain. Notably, one such isoform, FOXP2.10+, contains dimerization domains, but no DNA-binding domain, and displayed increased cytoplasmic localization, coupled with aggresome formation. We hypothesize that expression of alternative isoforms of FOXP2 may provide mechanisms for post-translational regulation of transcription factor function.
  • De Vos, C. (2006). Mixed signals: Combining affective and linguistic functions of eyebrows in sign language of The Netherlands (Master's thesis). Nijmegen: Department of Linguistics, Radboud University.

    Abstract

    Sign Language of the Netherlands (NGT) is a visual-gestural language in which linguistic information is conveyed through manual as well as non-manual channels; not only the hands, but also body position, head position and facial expression are important for the language structure. Facial expressions serve grammatical functions in the marking of topics, yes/no questions, and wh-questions (Coerts, 1992). Furthermore, facial expression is used nonlinguistically in the expression of affect (Ekman, 1979). Consequently, at the phonetic level obligatory marking of grammar using facial expression may conflict with the expression of affect. In this study, I investigated the interplay of linguistic and affective functions of brow movements in NGT. Three hypotheses were tested in this thesis. The first is that the affective markers of eyebrows would dominate over the linguistic markers. The second hypothesis predicts that the grammatical markers dominate over the affective brow movements. A third possibility is that a Phonetic Sum would occur in which both functions are combined simultaneously. I elicited sentences combining grammatical and affective functions of eyebrows using a randomised design. Five sentence types were included: declarative sentences, topic sentences, yes-no questions, wh-questions with the wh-sign sentence-final and wh-questions with the wh-sign sentence-initial. These sentences were combined with neutral, surprised, angry, and distressed affect. The brow movements were analysed using the Facial Action Coding System (Ekman, Friesen, & Hager, 2002a). In these sentences, the eyebrows serve a linguistic function, an affective function, or both. Surprisingly, it was found that a Phonetic Sum occurs in which the phonetic weight of Action Unit 4 appears to play an important role. The results show that affect displays may alter question signals in NGT.
  • Wagner, A., Ernestus, M., & Cutler, A. (2006). Formant transitions in fricative identification: The role of native fricative inventory. Journal of the Acoustical Society of America, 120(4), 2267-2277. doi:10.1121/1.2335422.

    Abstract

    The distribution of energy across the noise spectrum provides the primary cues for the identification of a fricative. Formant transitions have been reported to play a role in identification of some fricatives, but the combined results so far are conflicting. We report five experiments testing the hypothesis that listeners differ in their use of formant transitions as a function of the presence of spectrally similar fricatives in their native language. Dutch, English, German, Polish, and Spanish native listeners performed phoneme monitoring experiments with pseudowords containing either coherent or misleading formant transitions for the fricatives / s / and / f /. Listeners of German and Dutch, both languages without spectrally similar fricatives, were not affected by the misleading formant transitions. Listeners of the remaining languages were misled by incorrect formant transitions. In an untimed labeling experiment both Dutch and Spanish listeners provided goodness ratings that revealed sensitivity to the acoustic manipulation. We conclude that all listeners may be sensitive to mismatching information at a low auditory level, but that they do not necessarily take full advantage of all available systematic acoustic variation when identifying phonemes. Formant transitions may be most useful for listeners of languages with spectrally similar fricatives.
  • Wang, X., Jahagirdar, S., Bakker, W., Lute, C., Kemp, B., Knegsel, A. v., & Saccenti, E. (2024). Discrimination of Lipogenic or Glucogenic Diet Effects in Early-Lactation Dairy Cows Using Plasma Metabolite Abundances and Ratios in Combination with Machine Learning. Metabolites, 14(4): 230. doi:10.3390/metabo14040230.

    Abstract

    During early lactation, dairy cows have a negative energy balance, since their energy demands exceed their energy intake. In this study, we aimed to investigate the association between diet and plasma metabolomic profiles, and how these relate to the energy imbalance over the course of the early-lactation stage. Holstein-Friesian cows were randomly assigned to a glucogenic (n = 15) or lipogenic (n = 15) diet in early lactation. Blood was collected in week 2 and week 4 after calving. Plasma metabolite profiles were detected using liquid chromatography–mass spectrometry (LC-MS), and a total of 39 metabolites were identified. Two plasma metabolomic profiles were available every week for each cow. Metabolite abundances and metabolite ratios were used for the analysis with the XGBoost algorithm to discriminate between diet treatments and lactation weeks. Using metabolite ratios resulted in better discrimination performance than using metabolite abundances when assigning cows to the lipogenic or the glucogenic diet. The discrimination performance for lipogenic versus glucogenic diet effects improved from 0.606 to 0.753 in week 2 and from 0.696 to 0.842 in week 4 (as measured by the area under the curve, AUC) when metabolite abundance ratios were used instead of abundances. The top discriminating ratios for diet were the ratio of arginine to tyrosine in week 2 and the ratio of aspartic acid to valine in week 4. For cows fed the lipogenic diet, choline and the ratio of creatinine to tryptophan were the top features discriminating week 2 from week 4. For cows fed the glucogenic diet, methionine and the ratio of 4-hydroxyproline to choline were the top features discriminating dietary effects in week 2 versus week 4. This study shows the added value of using metabolite abundance ratios to discriminate between lipogenic and glucogenic diets and between lactation weeks in early-lactation cows when using metabolomics data. The application of this research will help to accurately regulate the nutrition of lactating dairy cows and promote sustainable agricultural development.
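    The ratio-feature construction described above can be sketched as follows (a minimal illustration under stated assumptions, not the study's code; metabolite names and values here are made up):

```python
# Minimal sketch: turn metabolite abundances into all pairwise abundance ratios,
# the feature representation the study fed to an XGBoost classifier.
from itertools import combinations

def ratio_features(abundances):
    """Map {metabolite: abundance} to {'a/b': a_value / b_value} for all pairs."""
    return {f"{a}/{b}": abundances[a] / abundances[b]
            for a, b in combinations(sorted(abundances), 2)}

# With 39 metabolites this yields 39 * 38 / 2 = 741 ratio features per sample.
print(ratio_features({"arginine": 4.0, "tyrosine": 2.0}))  # {'arginine/tyrosine': 2.0}
```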
  • Wang, M.-Y., Korbmacher, M., Eikeland, R., Craven, A. R., & Specht, K. (2024). The intra‐individual reliability of 1H‐MRS measurement in the anterior cingulate cortex across 1 year. Human Brain Mapping, 45(1): e26531. doi:10.1002/hbm.26531.

    Abstract

    Magnetic resonance spectroscopy (MRS) is the primary method that can measure the levels of metabolites in the brain in vivo. To achieve its potential in clinical usage, the reliability of the measurement requires further articulation. Although there are many studies that investigate the reliability of gamma-aminobutyric acid (GABA), comparatively few studies have investigated the reliability of other brain metabolites, such as glutamate (Glu), N-acetyl-aspartate (NAA), creatine (Cr), phosphocreatine (PCr), or myo-inositol (mI), which all play a significant role in brain development and functions. In addition, previous studies which predominately used only two measurements (two data points) failed to provide the details of the time effect (e.g., time-of-day) on MRS measurement within subjects. Therefore, in this study, MRS data located in the anterior cingulate cortex (ACC) were repeatedly recorded across 1 year, leading to at least 25 sessions for each subject, with the aim of exploring the variability of other metabolites by using the coefficient of variability (CV) as an index; the smaller the CV, the more reliable the measurements. We found that the metabolites of NAA, tNAA, and tCr showed the smallest CVs (between 1.43% and 4.90%), and the metabolites of Glu, Glx, mI, and tCho showed modest CVs (between 4.26% and 7.89%). Furthermore, we found that the concentration reference of the ratio to water results in smaller CVs compared to the ratio to tCr. In addition, we did not find any time-of-day effect on the MRS measurements. Collectively, the results of this study indicate that the MRS measurement is reasonably reliable in quantifying the levels of metabolites.

    Additional information

    tables and figures data
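    The reliability index used in the study above, the coefficient of variability, is simply the standard deviation expressed as a percentage of the mean across repeated sessions. A minimal sketch (illustrative values, not the study's data):

```python
# Coefficient of variability (CV) = standard deviation / mean, in percent;
# a smaller CV across repeated sessions means a more reliable measurement.
import statistics

def cv_percent(values):
    """CV of repeated measurements, as a percentage of their mean."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Hypothetical NAA levels across four scan sessions:
print(round(cv_percent([12.1, 12.3, 11.9, 12.2]), 2))  # 1.41
```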
  • Warner, N., Good, E., Jongman, A., & Sereno, J. (2006). Orthographic vs. morphological incomplete neutralization effects. Journal of Phonetics, 34(2), 285-293. doi:10.1016/j.wocn.2004.11.003.

    Abstract

    This study, following up on work on Dutch by Warner, Jongman, Sereno, and Kemps (2004. Journal of Phonetics, 32, 251–276), investigates the influence of orthographic distinctions and underlying morphological distinctions on the small sub-phonemic durational differences that have been called incomplete neutralization. One part of the previous work indicated that an orthographic geminate/singleton distinction could cause speakers to produce an incomplete neutralization effect. However, one interpretation of the materials in that experiment is that they contain an underlying difference in the phoneme string at the level of concatenation of morphemes, rather than just an orthographic difference. Thus, the previous effect might simply be another example of incomplete neutralization of a phonemic distinction. The current experiment, also on Dutch, uses word pairs which have the same underlying morphological contrast, but do not differ in orthography. These new materials show no incomplete neutralization, and thus support the hypothesis that orthography, but not underlying morphological differences, can cause incomplete neutralization effects.
  • Warren, J. E., Sauter, D., Eisner, F., Wiland, J., Dresner, M. A., Wise, R. J. S., Rosen, S., & Scott, S. K. (2006). Positive emotions preferentially engage an auditory–motor “mirror” system. The Journal of Neuroscience, 26(50), 13067-13075. doi:10.1523/JNEUROSCI.3907-06.2006.

    Abstract

    Social interaction relies on the ability to react to communication signals. Although cortical sensory–motor “mirror” networks are thought to play a key role in visual aspects of primate communication, evidence for a similar generic role for auditory–motor interaction in primate nonverbal communication is lacking. We demonstrate that a network of human premotor cortical regions activated during facial movement is also involved in auditory processing of affective nonverbal vocalizations. Within this auditory–motor mirror network, distinct functional subsystems respond preferentially to emotional valence and arousal properties of heard vocalizations. Positive emotional valence enhanced activation in a left posterior inferior frontal region involved in representation of prototypic actions, whereas increasing arousal enhanced activation in presupplementary motor area cortex involved in higher-order motor control. Our findings demonstrate that listening to nonverbal vocalizations can automatically engage preparation of responsive orofacial gestures, an effect that is greatest for positive-valence and high-arousal emotions. The automatic engagement of responsive orofacial gestures by emotional vocalizations suggests that auditory–motor interactions provide a fundamental mechanism for mirroring the emotional states of others during primate social behavior. Motor facilitation by positive vocal emotions suggests a basic neural mechanism for establishing cohesive bonds within primate social groups.
  • Weber, A., Braun, B., & Crocker, M. W. (2006). Finding referents in time: Eye-tracking evidence for the role of contrastive accents. Language and Speech, 49(3), 367-392.

    Abstract

    In two eye-tracking experiments the role of contrastive pitch accents during the on-line determination of referents was examined. In both experiments, German listeners looked earlier at the picture of a referent belonging to a contrast pair (red scissors, given purple scissors) when instructions to click on it carried a contrastive accent on the color adjective (L + H*) than when the adjective was not accented. In addition to this prosodic facilitation, a general preference to interpret adjectives contrastively was found in Experiment 1: Along with the contrast pair, a noncontrastive referent was displayed (red vase) and listeners looked more often at the contrastive referent than at the noncontrastive referent even when the adjective was not focused. Experiment 2 differed from Experiment 1 in that the first member of the contrast pair (purple scissors) was introduced with a contrastive accent, thereby strengthening the salience of the contrast. In Experiment 2, listeners no longer preferred a contrastive interpretation of adjectives when the accent in a subsequent instruction was not contrastive. In sum, the results support both an early role for prosody in reference determination and an interpretation of contrastive focus that is dependent on preceding prosodic context.
  • Weber, A., & Cutler, A. (2006). First-language phonotactics in second-language listening. Journal of the Acoustical Society of America, 119(1), 597-607. doi:10.1121/1.2141003.

    Abstract

    Highly proficient German users of English as a second language, and native speakers of American English, listened to nonsense sequences and responded whenever they detected an embedded English word. The responses of both groups were equivalently facilitated by preceding contexts that forced a boundary at word onset under both English and German phonotactic constraints (e.g., lecture was easier to detect in moinlecture than in gorklecture, and wish in yarlwish than in plookwish). The American L1 speakers’ responses were strongly facilitated, and the German listeners’ responses almost as strongly facilitated, by contexts that forced a boundary in English but not in German (thrarshlecture, glarshwish). The German listeners’ responses were significantly facilitated also by contexts that forced a boundary in German but not in English (moycelecture, loitwish), while L1 listeners were sensitive to acoustic boundary cues in these materials but not to the phonotactic sequences. The pattern of results suggests that proficient L2 listeners can acquire the phonotactic probabilities of an L2 and use them to good effect in segmenting continuous speech, but at the same time they may not be able to prevent interference from L1 constraints in their L2 listening.
  • Weber, A., Grice, M., & Crocker, M. W. (2006). The role of prosody in the interpretation of structural ambiguities: A study of anticipatory eye movements. Cognition, 99, B63-B72. doi:10.1016/j.cognition.2005.07.001.

    Abstract

    An eye-tracking experiment examined whether prosodic cues can affect the interpretation of grammatical functions in the absence of clear morphological information. German listeners were presented with scenes depicting three potential referents while hearing temporarily ambiguous SVO and OVS sentences. While case marking on the first noun phrase (NP) was ambiguous, clear case marking on the second NP disambiguated sentences towards SVO or OVS. Listeners interpreted case-ambiguous NP1s more often as Subject, and thus expected an Object as upcoming argument, only when sentence beginnings carried an SVO-type intonation. This was revealed by more anticipatory eye movements to suitable Patients (Objects) than Agents (Subjects) in the visual scenes. No such preference was found when sentence beginnings had a clearly OVS-type intonation. Prosodic cues were integrated rapidly enough to affect listeners’ interpretation of grammatical function before disambiguating case information was available. We conclude that in addition to manipulating attachment ambiguities, prosody can influence the interpretation of constituent order ambiguities.
  • Wegener, C. (2006). Savosavo body part terminology. Language Sciences, 28(2-3), 344-359. doi:10.1016/j.langsci.2005.11.005.

    Abstract

    This paper provides a description of body part terminology used in Savosavo, a Papuan language of the Solomon Islands. The first part of the paper lists the known terms and discusses their meanings. This is followed by an analysis of their structural properties. Finally, the paper discusses partonomic relations in Savosavo and argues that it is difficult to structure the body part terminology hierarchically, because there is no linguistic evidence for part–whole relations between body parts.
  • Weisfelt, M., Hoogman, M., van de Beek, D., de Gans, J., Dreschler, W. A., & Schmand, B. A. (2006). Dexamethasone and long-term outcome in adults with bacterial meningitis. Annals of Neurology, 60, 456-468. doi:10.1002/ana.20944.

    Abstract

    This follow-up study of the European Dexamethasone Study was designed to examine the potential harmful effect of adjunctive dexamethasone treatment on long-term neuropsychological outcome in adults with bacterial meningitis. Methods: Neurological, audiological, and neuropsychological examinations were performed in adults who survived pneumococcal or meningococcal meningitis. Results: Eighty-seven of 99 (88%) eligible patients were included in the follow-up study; 46 (53%) were treated with dexamethasone and 41 (47%) with placebo. Median time between meningitis and testing was 99 months. Neuropsychological evaluation showed no significant differences between patients treated with dexamethasone and placebo. The proportions of patients with persisting neurological sequelae or hearing loss were similar in the dexamethasone and placebo groups. The overall rate of cognitive dysfunction did not differ significantly between patients and control subjects; however, patients after pneumococcal meningitis had a higher rate of cognitive dysfunction (21 vs 6%; p = 0.05) and experienced more impairment of everyday functioning due to physical problems (p = 0.05) than those after meningococcal meningitis. Interpretation: Treatment with adjunctive dexamethasone is not associated with an increased risk for long-term cognitive impairment. Adults who survive pneumococcal meningitis are at significant risk for long-term neuropsychological abnormalities.
  • Weisfelt, M., van de Beek, D., Hoogman, M., Hardeman, C., de Gans, J., & Schmand, B. (2006). Cognitive outcome in adults with moderate disability after pneumococcal meningitis. Journal of Infection, 52, 433-439. doi:10.1016/j.jinf.2005.08.014.

    Abstract

    Objectives To assess cognitive outcome and quality of life in patients with moderate disability after bacterial meningitis as compared to patients with good recovery. Methods Neuropsychological evaluation was performed in 40 adults after pneumococcal meningitis; 20 patients with moderate disability at discharge on the Glasgow Outcome Scale (GOS score 4) and 20 with good recovery (GOS score 5). Results Patients with GOS score 4 had similar test results as compared to patients with GOS score 5 for the neuropsychological domains ‘intelligence’, ‘memory’ and ‘attention and executive functioning’. Patients with GOS score 4 showed less cognitive slowness than patients with GOS score 5. In a linear regression analysis cognitive speed was related to current intelligence, years of education and time since meningitis. Overall performance on the speed composite score correlated significantly with time since meningitis (−0.62; P<0.001). Therefore, the difference between both groups may have been related to a longer time between meningitis and testing for GOS 4 patients (29 vs. 12 months; P<0.001). Conclusions Patients with moderate disability after bacterial meningitis are not at higher risk for neuropsychological abnormalities than patients with good recovery. In addition, cognitive slowness after bacterial meningitis may be reversible in time.
  • Weissenborn, J. (1986). Learning how to become an interlocutor. The verbal negotiation of common frames of reference and actions in dyads of 7–14 year old children. In J. Cook-Gumperz, W. A. Corsaro, & J. Streeck (Eds.), Children's worlds and children's language (pp. 377-404). Berlin: Mouton de Gruyter.
  • Wesseldijk, L. W., Henechowicz, T. L., Baker, D. J., Bignardi, G., Karlsson, R., Gordon, R. L., Mosing, M. A., Ullén, F., & Fisher, S. E. (2024). Notes from Beethoven’s genome. Current Biology, 34(6), R233-R234. doi:10.1016/j.cub.2024.01.025.

    Abstract

    Rapid advances over the last decade in DNA sequencing and statistical genetics enable us to investigate the genomic makeup of individuals throughout history. In a recent notable study, Begg et al. used Ludwig van Beethoven’s hair strands for genome sequencing and explored genetic predispositions for some of his documented medical issues. Given that it was arguably Beethoven’s skills as a musician and composer that made him an iconic figure in Western culture, we here extend the approach and apply it to musicality. We use this as an example to illustrate the broader challenges of individual-level genetic predictions.

    Additional information

    supplemental information
  • White, S. A., Fisher, S. E., Geschwind, D. H., Scharff, C., & Holy, T. E. (2006). Singing mice, songbirds, and more: Models for FOXP2 function and dysfunction in human speech and language. The Journal of Neuroscience, 26(41), 10376-10379. doi:10.1523/JNEUROSCI.3379-06.2006.

    Abstract

    In 2001, a point mutation in the forkhead box P2 (FOXP2) coding sequence was identified as the basis of an inherited speech and language disorder suffered by members of the family known as "KE." This mini-symposium review focuses on recent findings and research-in-progress, primarily from five laboratories. Each aims at capitalizing on the FOXP2 discovery to build a neurobiological bridge between molecule and phenotype. Below, we describe genetic through behavioral techniques used currently to investigate FoxP2 in birds, rodents, and humans for discovery of the neural bases of vocal learning and language.
  • Winter, B., Lupyan, G., Perry, L. K., Dingemanse, M., & Perlman, M. (2024). Iconicity ratings for 14,000+ English words. Behavior Research Methods, 56, 1640-1655. doi:10.3758/s13428-023-02112-6.

    Abstract

    Iconic words and signs are characterized by a perceived resemblance between aspects of their form and aspects of their meaning. For example, in English, iconic words include peep and crash, which mimic the sounds they denote, and wiggle and zigzag, which mimic motion. As a semiotic property of words and signs, iconicity has been demonstrated to play a role in word learning, language processing, and language evolution. This paper presents the results of a large-scale norming study for more than 14,000 English words conducted with over 1400 American English speakers. We demonstrate the utility of these ratings by replicating a number of existing findings showing that iconicity ratings are related to age of acquisition, sensory modality, semantic neighborhood density, structural markedness, and playfulness. We discuss possible use cases and limitations of the rating dataset, which is made publicly available.
  • Wolna, A., Szewczyk, J., Diaz, M., Domagalik, A., Szwed, M., & Wodniecka, Z. (2024). Domain-general and language-specific contributions to speech production in a second language: An fMRI study using functional localizers. Scientific Reports, 14: 57. doi:10.1038/s41598-023-49375-9.

    Abstract

    For bilinguals, speaking in a second language (L2) compared to the native language (L1) is usually more difficult. In this study we asked whether the difficulty in L2 production reflects increased demands imposed on domain-general or core language mechanisms. We compared the brain response to speech production in L1 and L2 within two functionally defined networks in the brain: the Multiple Demand (MD) network and the language network. We found that speech production in L2 was linked to a widespread increase of brain activity in the domain-general MD network. The language network did not show similarly robust differences in processing speech in the two languages; however, we found an increased response to L2 production in the language-specific portion of the left inferior frontal gyrus (IFG). To further explore our results, we looked at domain-general and language-specific responses within the brain structures postulated to form a Bilingual Language Control (BLC) network. Within this network, we found a robust increase in response to L2 in the domain-general, but also in some language-specific voxels, including in the left IFG. Our findings show that L2 production strongly engages domain-general mechanisms, but only affects language-sensitive portions of the left IFG. These results put constraints on the current model of bilingual language control by precisely disentangling the domain-general and language-specific contributions to the difficulty of speech production in L2.

    Additional information

    supplementary materials
  • Wolna, A., Szewczyk, J., Diaz, M., Domagalik, A., Szwed, M., & Wodniecka, Z. (2024). Tracking components of bilingual language control in speech production: An fMRI study using functional localizers. Neurobiology of Language, 5(2), 315-340. doi:10.1162/nol_a_00128.

    Abstract

    When bilingual speakers switch back to speaking in their native language (L1) after having used their second language (L2), they often experience difficulty in retrieving words in their L1. This phenomenon is referred to as the L2 after-effect. We used the L2 after-effect as a lens to explore the neural bases of bilingual language control mechanisms. Our goal was twofold: first, to explore whether bilingual language control draws on domain-general or language-specific mechanisms; second, to investigate the precise mechanism(s) that drive the L2 after-effect. We used a precision fMRI approach based on functional localizers to measure the extent to which the brain activity that reflects the L2 after-effect overlaps with the language network (Fedorenko et al., 2010) and the domain-general multiple demand network (Duncan, 2010), as well as three task-specific networks that tap into interference resolution, lexical retrieval, and articulation. Forty-two Polish–English bilinguals participated in the study. Our results show that the L2 after-effect reflects increased engagement of domain-general but not language-specific resources. Furthermore, contrary to previously proposed interpretations, we did not find evidence that the effect reflects increased difficulty related to lexical access, articulation, and the resolution of lexical interference. We propose that difficulty of speech production in the picture naming paradigm—manifested as the L2 after-effect—reflects interference at a nonlinguistic level of task schemas or a general increase of cognitive control engagement during speech production in L1 after L2.

    Additional information

    supplementary materials
  • Wong, M. M. K., Sha, Z., Lütje, L., Kong, X., Van Heukelum, S., Van de Berg, W. D. J., Jonkman, L. E., Fisher, S. E., & Francks, C. (2024). The neocortical infrastructure for language involves region-specific patterns of laminar gene expression. Proceedings of the National Academy of Sciences of the United States of America, 121(34): e2401687121. doi:10.1073/pnas.2401687121.

    Abstract

    The language network of the human brain has core components in the inferior frontal cortex and superior/middle temporal cortex, with left-hemisphere dominance in most people. Functional specialization and interconnectivity of these neocortical regions are likely to be reflected in their molecular and cellular profiles. Excitatory connections between cortical regions arise and innervate according to layer-specific patterns. Here we generated a new gene expression dataset from human postmortem cortical tissue samples from core language network regions, using spatial transcriptomics to discriminate gene expression across cortical layers. Integration of these data with existing single-cell expression data identified 56 genes that showed differences in laminar expression profiles between frontal and temporal language cortex together with upregulation in layer II/III and/or layer V/VI excitatory neurons. Based on data from large-scale genome-wide screening in the population, DNA variants within these 56 genes showed set-level associations with inter-individual variation in structural connectivity between left-hemisphere frontal and temporal language cortex, and with predisposition to dyslexia. The axon guidance genes SLIT1 and SLIT2 were consistently implicated. These findings identify region-specific patterns of laminar gene expression as a feature of the brain’s language network.
  • Wurm, L. H., Ernestus, M., Schreuder, R., & Baayen, R. H. (2006). Dynamics of the auditory comprehension of prefixed words: Cohort entropies and conditional root uniqueness points. The Mental Lexicon, 1(1), 125-146.

    Abstract

    This auditory lexical decision study shows that cohort entropies, conditional root uniqueness points, and morphological family size all contribute to the dynamics of the auditory comprehension of prefixed words. Three entropy measures calculated for different positions in the stem of Dutch prefixed words revealed facilitation for higher entropies, except at the point of disambiguation, where we observed inhibition. Morphological family size was also facilitatory, but only for prefixed words in which the conditional root uniqueness point coincided with the conventional uniqueness point. For words with early conditional disambiguation, in contrast, only the morphologically related words that were onset-aligned with the target word facilitated lexical decision.
  • Yang, J. (2024). Rethinking tokenization: Crafting better tokenizers for large language models. International Journal of Chinese Linguistics, 11(1), 94-109. doi:10.1075/ijchl.00023.yan.

    Abstract

    Tokenization significantly influences the performance of language models (LMs). This paper traces the evolution of tokenizers from word-level to subword-level, analyzing how they balance tokens and types to enhance model adaptability while controlling complexity. Although subword tokenizers like Byte Pair Encoding (BPE) overcome many limitations of word tokenizers, they encounter difficulties in handling non-Latin languages and depend heavily on extensive training data and computational resources to grasp the nuances of multiword expressions (MWEs). This article argues that tokenizers, more than mere technical tools, should draw inspiration from cognitive science research on human language processing. The study then introduces the “Principle of Least Effort” from cognitive science (the observation that humans naturally seek to reduce cognitive effort) and discusses the benefits of this principle for tokenizer development. Based on this principle, the paper proposes the Less-is-Better (LiB) model as a new approach to LLM tokenization. The LiB model can autonomously learn an integrated vocabulary consisting of subwords, words, and MWEs, which effectively reduces both the number of tokens and the number of types. Comparative evaluations show that the LiB tokenizer outperforms existing word and BPE tokenizers, presenting an innovative method for tokenizer development and hinting at the possibility that future cognitive science-based tokenizers will be more efficient.
  • Zeshan, U. (2006). Sign languages of the world. In K. Brown (Ed.), Encyclopedia of language and linguistics (vol. 11) (pp. 358-365). Amsterdam: Elsevier.

    Abstract

    Although sign language-using communities exist in all areas of the world, few sign languages have been documented in detail. Sign languages occur in a variety of sociocultural contexts, ranging from sign languages used in closed village communities to officially recognized national sign languages. They may be grouped into language families on historical grounds or may participate in various language contact situations. Systematic cross-linguistic comparison reveals both significant structural similarities and important typological differences between sign languages. Focusing on information from non-Western countries, this article provides an overview of the sign languages of the world.
  • Zeshan, U. (Ed.). (2006). Interrogative and negative constructions in sign languages. Nijmegen: Ishara Press.
  • Zettersten, M., Cox, C., Bergmann, C., Tsui, A. S. M., Soderstrom, M., Mayor, J., Lundwall, R. A., Lewis, M., Kosie, J. E., Kartushina, N., Fusaroli, R., Frank, M. C., Byers-Heinlein, K., Black, A. K., & Mathur, M. B. (2024). Evidence for infant-directed speech preference is consistent across large-scale, multi-site replication and meta-analysis. Open Mind, 8, 439-461. doi:10.1162/opmi_a_00134.

    Abstract

    There is substantial evidence that infants prefer infant-directed speech (IDS) to adult-directed speech (ADS). The strongest evidence for this claim has come from two large-scale investigations: i) a community-augmented meta-analysis of published behavioral studies and ii) a large-scale multi-lab replication study. In this paper, we aim to improve our understanding of the IDS preference and its boundary conditions by combining and comparing these two data sources across key population and design characteristics of the underlying studies. Our analyses reveal that both the meta-analysis and multi-lab replication show moderate effect sizes (d ≈ 0.35 for each estimate) and that both of these effects persist when relevant study-level moderators are added to the models (i.e., experimental methods, infant ages, and native languages). However, while the overall effect size estimates were similar, the two sources diverged in the effects of key moderators: both infant age and experimental method predicted IDS preference in the multi-lab replication study, but showed no effect in the meta-analysis. These results demonstrate that the IDS preference generalizes across a variety of experimental conditions and sampling characteristics, while simultaneously identifying key differences in the empirical picture offered by each source individually and pinpointing areas where substantial uncertainty remains about the influence of theoretically central moderators on IDS preference. Overall, our results show how meta-analyses and multi-lab replications can be used in tandem to understand the robustness and generalizability of developmental phenomena.

    Additional information

    supplementary data link to preprint
  • He, J., & Zhang, Q. (2024). Direct retrieval of orthographic representations in Chinese handwritten production: Evidence from a dynamic causal modeling study. Journal of Cognitive Neuroscience, 36(9), 1937-1962. doi:10.1162/jocn_a_02176.

    Abstract

    The present study identified an optimal model representing the relationship between orthography and phonology in Chinese handwritten production using dynamic causal modeling, and further explored how this model was modulated by word frequency and syllable frequency. Each model contained five volumes of interest in the left hemisphere (angular gyrus [AG], inferior frontal gyrus [IFG], middle frontal gyrus [MFG], superior frontal gyrus [SFG], and supramarginal gyrus [SMG]), with the IFG as the driven input area. Results showed the superiority of a model in which both the MFG and the AG connected with the IFG, supporting the orthography autonomy hypothesis. Word frequency modulated the AG → SFG connection (information flow from the orthographic lexicon to the orthographic buffer), and syllable frequency affected the IFG → MFG connection (information transmission from the semantic system to the phonological lexicon). This study thus provides new insights into the connectivity architecture of neural substrates involved in writing.
  • Zhao, J., Martin, A. E., & Coopmans, C. W. (2024). Structural and sequential regularities modulate phrase-rate neural tracking. Scientific Reports, 14: 16603. doi:10.1038/s41598-024-67153-z.

    Abstract

    Electrophysiological brain activity has been shown to synchronize with the quasi-regular repetition of grammatical phrases in connected speech—so-called phrase-rate neural tracking. Current debate centers around whether this phenomenon is best explained in terms of the syntactic properties of phrases or in terms of syntax-external information, such as the sequential repetition of parts of speech. As these two factors were confounded in previous studies, much of the literature is compatible with both accounts. Here, we used electroencephalography (EEG) to determine if and when the brain is sensitive to both types of information. Twenty native speakers of Mandarin Chinese listened to isochronously presented streams of monosyllabic words, which contained either grammatical two-word phrases (e.g., catch fish, sell house) or non-grammatical word combinations (e.g., full lend, bread far). Within the grammatical conditions, we varied two structural factors: the position of the head of each phrase and the type of attachment. Within the non-grammatical conditions, we varied the consistency with which parts of speech were repeated. Tracking was quantified through evoked power and inter-trial phase coherence, both derived from the frequency-domain representation of EEG responses. As expected, neural tracking at the phrase rate was stronger in grammatical sequences than in non-grammatical sequences without syntactic structure. Moreover, it was modulated by both attachment type and head position, revealing the structure-sensitivity of phrase-rate tracking. We additionally found that the brain tracks the repetition of parts of speech in non-grammatical sequences. These data provide an integrative perspective on the current debate about neural tracking effects, revealing that the brain utilizes regularities computed over multiple levels of linguistic representation in guiding rhythmic computation.
  • Zhou, H., Van der Ham, S., De Boer, B., Bogaerts, L., & Raviv, L. (2024). Modality and stimulus effects on distributional statistical learning: Sound vs. sight, time vs. space. Journal of Memory and Language, 138: 104531. doi:10.1016/j.jml.2024.104531.

    Abstract

    Statistical learning (SL) is postulated to play an important role in the process of language acquisition as well as in other cognitive functions. It was found to enable learning of various types of statistical patterns across different sensory modalities. However, few studies have distinguished distributional SL (DSL) from sequential and spatial SL, or examined DSL across modalities using comparable tasks. Considering the relevance of such findings to the nature of SL, the current study investigated the modality- and stimulus-specificity of DSL. Using a within-subject design we compared DSL performance in the auditory and visual modalities. For each sensory modality, two stimulus types were used: linguistic versus non-linguistic auditory stimuli and temporal versus spatial visual stimuli. In each condition, participants were exposed to stimuli that varied in their length as they were drawn from two categories (short versus long). DSL was assessed using a categorization task and a production task. Results showed that learners’ performance was only correlated for tasks in the same sensory modality. Moreover, participants were better at categorizing the temporal signals in the auditory conditions than in the visual condition, where the spatial condition in turn showed an advantage. In the production task, participants exaggerated signal length more for linguistic signals than for non-linguistic signals. Together, these findings suggest that DSL is modality- and stimulus-sensitive.

    Additional information

    link to preprint
  • Zioga, I., Zhou, Y. J., Weissbart, H., Martin, A. E., & Haegens, S. (2024). Alpha and beta oscillations differentially support word production in a rule-switching task. eNeuro, 11(4): ENEURO.0312-23.2024. doi:10.1523/ENEURO.0312-23.2024.

    Abstract

    Research into the role of brain oscillations in basic perceptual and cognitive functions has suggested that the alpha rhythm reflects functional inhibition while the beta rhythm reflects neural ensemble (re)activation. However, little is known regarding the generalization of these proposed fundamental operations to linguistic processes, such as speech comprehension and production. Here, we recorded magnetoencephalography in participants performing a novel rule-switching paradigm. Specifically, Dutch native speakers had to produce an alternative exemplar from the same category or a feature of a given target word embedded in spoken sentences (e.g., for the word “tuna”, an exemplar from the same category—“seafood”—would be “shrimp”, and a feature would be “pink”). A cue indicated the task rule—exemplar or feature—either before (pre-cue) or after (retro-cue) listening to the sentence. Alpha power during the working memory delay was lower for retro-cues than for pre-cues in the left hemispheric language-related regions. Critically, alpha power negatively correlated with reaction times, suggestive of alpha facilitating task performance by regulating inhibition in regions linked to lexical retrieval. Furthermore, we observed a different spatiotemporal pattern of beta activity for exemplars versus features in the right temporoparietal regions, in line with the proposed role of beta in recruiting neural networks for the encoding of distinct categories. Overall, our study provides evidence for the generalizability of the role of alpha and beta oscillations from perceptual to more “complex” linguistic processes and offers a novel task to investigate links between rule-switching, working memory, and word production.
  • Zwitserlood, I., & Van Gijn, I. (2006). Agreement phenomena in Sign Language of the Netherlands. In P. Ackema (Ed.), Arguments and Agreement (pp. 195-229). Oxford: Oxford University Press.
