Publications

Displaying 1201 - 1264 of 1264
  • Vosse, T. G., & Kempen, G. (2008). Parsing verb-final clauses in German: Garden-path and ERP effects modeled by a parallel dynamic parser. In B. Love, K. McRae, & V. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 261-266). Washington: Cognitive Science Society.

    Abstract

    Experimental sentence comprehension studies have shown that superficially similar German clauses with verb-final word order elicit very different garden-path and ERP effects. We show that a computer implementation of the Unification Space parser (Vosse & Kempen, 2000) in the form of a localist-connectionist network can model the observed differences, at least qualitatively. The model embodies a parallel dynamic parser that, in contrast with existing models, does not distinguish between consecutive first-pass and reanalysis stages, and does not use semantic or thematic roles. It does use structural frequency data and animacy information.
  • Wagner, A. (2008). Phoneme inventories and patterns of speech sound perception. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Wagner, A., & Ernestus, M. (2008). Identification of phonemes: Differences between phoneme classes and the effect of class size. Phonetica, 65(1-2), 106-127. doi:10.1159/000132389.

    Abstract

    This study reports general and language-specific patterns in phoneme identification. In a series of phoneme monitoring experiments, Castilian Spanish, Catalan, Dutch, English, and Polish listeners identified vowel, fricative, and stop consonant targets that are phonemic in all these languages, embedded in nonsense words. Fricatives were generally identified more slowly than vowels, while the speed of identification for stop consonants was highly dependent on the onset of the measurements. Moreover, listeners' response latencies and accuracy in detecting a phoneme correlated with the number of categories within that phoneme's class in the listener's native phoneme repertoire: more native categories slowed listeners down and decreased their accuracy. We excluded the possibility that this effect stems from differences in the frequencies of occurrence of the phonemes in the different languages. Rather, the effect of the number of categories can be explained by general properties of the perception system, which cause language-specific patterns in speech processing.
  • Wagner, M. A., Broersma, M., McQueen, J. M., & Lemhöfer, K. (2019). Imitating speech in an unfamiliar language and an unfamiliar non-native accent in the native language. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1362-1366). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    This study concerns individual differences in speech imitation ability and the role that lexical representations play in imitation. We examined 1) whether imitation of sounds in an unfamiliar language (L0) is related to imitation of sounds in an unfamiliar non-native accent in the speaker’s native language (L1) and 2) whether it is easier or harder to imitate speech when you know the words to be imitated. Fifty-nine native Dutch speakers imitated words with target vowels in Basque (/a/ and /e/) and Greek-accented Dutch (/i/ and /u/). Spectral and durational analyses of the target vowels revealed no relationship between the success of L0 and L1 imitation and no difference in performance between tasks (i.e., L1 imitation was neither aided nor blocked by lexical knowledge about the correct pronunciation). The results suggest instead that the relationship of the vowels to native phonological categories plays a bigger role in imitation.
  • Wanner-Kawahara, J., Yoshihara, M., Lupker, S. J., Verdonschot, R. G., & Nakayama, M. (2022). Morphological priming effects in L2 English verbs for Japanese-English bilinguals. Frontiers in Psychology, 13: 742965. doi:10.3389/fpsyg.2022.742965.

    Abstract

    For native (L1) English readers, masked presentations of past-tense verb primes (e.g., fell and looked) produce faster lexical decision latencies to their present-tense targets (e.g., FALL and LOOK) than orthographically related (e.g., fill and loose) or unrelated (e.g., master and bank) primes. This facilitation observed with morphologically related prime-target pairs (morphological priming) is generally taken as evidence for strong connections based on morphological relationships in the L1 lexicon. It is unclear, however, if similar, morphologically based, connections develop in non-native (L2) lexicons. Several earlier studies with L2 English readers have reported mixed results. The present experiments examine whether past-tense verb primes (both regular and irregular verbs) significantly facilitate target lexical decisions for Japanese-English bilinguals beyond any facilitation provided by prime-target orthographic similarity. Overall, past-tense verb primes facilitated lexical decisions to their present-tense targets relative to both orthographically related and unrelated primes. Replicating previous masked priming experiments with L2 readers, orthographically related primes also facilitated target recognition relative to unrelated primes, confirming that orthographic similarity facilitates L2 target recognition. The additional facilitation from past-tense verb primes beyond that provided by orthographic primes suggests that, in the L2 English lexicon, connections based on morphological relationships develop in a way that is similar to how they develop in the L1 English lexicon even though the connections and processing of lower level, lexical/orthographic information may differ. Further analyses involving L2 proficiency revealed that as L2 proficiency increased, orthographic facilitation was reduced, indicating that there is a decrease in the fuzziness in orthographic representations in the L2 lexicon with increased proficiency.

    Additional information

    supplementary material
  • Warner, N., & Weber, A. (2001). Perception of epenthetic stops. Journal of Phonetics, 29(1), 53-87. doi:10.1006/jpho.2001.0129.

    Abstract

    In processing connected speech, listeners must parse a highly variable signal. We investigate processing of a particular type of production variability, namely epenthetic stops between nasals and obstruents. Using a phoneme monitoring task and a dictation task, we test listeners' perception of epenthetic stops (which are not part of the string of segments intended by the speaker). We confirm that the epenthetic stop perceived is the one predicted by articulatory accounts of how such stops are produced, and that the likelihood of an epenthetic stop being perceived as a real stop is related to the strength of acoustic cues in the signal. We show that the probability of listeners mis-parsing epenthetic stops as real is influenced by language-specific syllable structure constraints, and depends on processing demands. We further show, through reaction time data, that even when epenthetic stops are perceived, they impose a greater processing load than stops which were intended by the speaker. These results show that processing of phonetic variability is affected by several factors, including language-specific phonology, even though the mis-timing of articulations that creates epenthetic stops is universally possible.
  • Warner, N., Jongman, A., Cutler, A., & Mücke, D. (2001). The phonological status of Dutch epenthetic schwa. Phonology, 18, 387-420. doi:10.1017/S0952675701004213.

    Abstract

    In this paper, we use articulatory measures to determine whether Dutch schwa epenthesis is an abstract phonological process or a concrete phonetic process depending on articulatory timing. We examine tongue position during /l/ before underlying schwa and epenthetic schwa and in coda position. We find greater tip raising before both types of schwa, indicating light /l/ before schwa and dark /l/ in coda position. We argue that the ability of epenthetic schwa to condition the /l/ alternation shows that Dutch schwa epenthesis is an abstract phonological process involving insertion of some unit, and cannot be accounted for within Articulatory Phonology.
  • Warner, N., Jongman, A., Mucke, D., & Cutler, A. (2001). The phonological status of schwa insertion in Dutch: An EMA study. In B. Maassen, W. Hulstijn, R. Kent, H. Peters, & P. v. Lieshout (Eds.), Speech motor control in normal and disordered speech: 4th International Speech Motor Conference (pp. 86-89). Nijmegen: Vantilt.

    Abstract

    Articulatory data are used to address the question of whether Dutch schwa insertion is a phonological or a phonetic process. By investigating tongue tip raising and dorsal lowering, we show that /l/ when it appears before inserted schwa is a light /l/, just as /l/ before an underlying schwa is, and unlike the dark /l/ before a consonant in non-insertion productions of the same words. The fact that inserted schwa can condition the light/dark /l/ alternation shows that schwa insertion involves the phonological insertion of a segment rather than phonetic adjustments to articulations.
  • Warner, N., & Arai, T. (2001). The role of the mora in the timing of spontaneous Japanese speech. The Journal of the Acoustical Society of America, 109, 1144-1156. doi:10.1121/1.1344156.

    Abstract

    This study investigates whether the mora is used in controlling timing in Japanese speech, or is instead a structural unit in the language not involved in timing. Unlike most previous studies of mora-timing in Japanese, this article investigates timing in spontaneous speech. Predictability of word duration from number of moras is found to be much weaker than in careful speech. Furthermore, the number of moras predicts word duration only slightly better than number of segments. Syllable structure also has a significant effect on word duration. Finally, comparison of the predictability of whole words and arbitrarily truncated words shows better predictability for truncated words, which would not be possible if the truncated portion were compensating for remaining moras. The results support an accumulative model of variance with a final lengthening effect, and do not indicate the presence of any compensation related to mora-timing. It is suggested that the rhythm of Japanese derives from several factors about the structure of the language, not from durational compensation.
  • Warren, C. M., Tona, K. D., Ouwerkerk, L., Van Paridon, J., Poletiek, F. H., Bosch, J. A., & Nieuwenhuis, S. (2019). The neuromodulatory and hormonal effects of transcutaneous vagus nerve stimulation as evidenced by salivary alpha amylase, salivary cortisol, pupil diameter, and the P3 event-related potential. Brain Stimulation, 12(3), 635-642. doi:10.1016/j.brs.2018.12.224.

    Abstract

    Background

    Transcutaneous vagus nerve stimulation (tVNS) is a new, non-invasive technique being investigated as an intervention for a variety of clinical disorders, including epilepsy and depression. It is thought to exert its therapeutic effect by increasing central norepinephrine (NE) activity, but the evidence supporting this notion is limited.
    Objective

    In order to test for an impact of tVNS on psychophysiological and hormonal indices of noradrenergic function, we applied tVNS in concert with assessment of salivary alpha amylase (SAA) and cortisol, pupil size, and electroencephalograph (EEG) recordings.
    Methods

    Across three experiments, we applied real and sham tVNS to 61 healthy participants while they performed a set of simple stimulus-discrimination tasks. Before and after the task, as well as during one break, participants provided saliva samples and had their pupil size recorded. EEG was recorded throughout the task. The target for tVNS was the cymba conchae, which is heavily innervated by the auricular branch of the vagus nerve. Sham stimulation was applied to the ear lobe.
    Results

    P3 amplitude was not affected by tVNS (Experiment 1A: N=24; Experiment 1B: N=20; Bayes factor supporting null model=4.53), nor was pupil size (Experiment 2: N=16; interaction of treatment and time: p=0.79). However, tVNS increased SAA (Experiments 1A and 2: N=25) and attenuated the decline of salivary cortisol compared to sham (Experiment 2: N=17), as indicated by significant interactions involving treatment and time (p=.023 and p=.040, respectively).
    Conclusion

    These findings suggest that tVNS modulates hormonal indices but not psychophysiological indices of noradrenergic function.
  • Wassenaar, M., & Hagoort, P. (2001). Het matchen van zinnen bij plaatjes door Broca afasiepatiënten: een hersenpotentiaal studie [Matching sentences to pictures in Broca's aphasia patients: A brain potential study]. Afasiologie, 23, 122-126.
  • Weber, A. (2001). Language-specific listening: The case of phonetic sequences. PhD Thesis, University of Nijmegen, Nijmegen, The Netherlands. doi:10.17617/2.68255.
  • Weber, A., & Melinger, A. (2008). Name dominance in spoken word recognition is (not) modulated by expectations: Evidence from synonyms. In A. Botinis (Ed.), Proceedings of ISCA Tutorial and Research Workshop On Experimental Linguistics (ExLing 2008) (pp. 225-228). Athens: University of Athens.

    Abstract

    Two German eye-tracking experiments tested whether top-down expectations interact with acoustically-driven word-recognition processes. Competitor objects with two synonymous names were paired with target objects whose names shared word onsets with either the dominant or the non-dominant name of the competitor. Non-dominant names of competitor objects were either introduced before the test session or not. Eye-movements were monitored while participants heard instructions to click on target objects. Results demonstrate that dominant and non-dominant competitor names were considered for recognition, regardless of top-down expectations, though dominant names were always activated more strongly.
  • Weber, A. (2001). Help or hindrance: How violation of different assimilation rules affects spoken-language processing. Language and Speech, 44(1), 95-118. doi:10.1177/00238309010440010401.

    Abstract

    Four phoneme-detection studies tested the conclusion from recent research that spoken-language processing is inhibited by violation of obligatory assimilation processes in the listeners’ native language. In Experiment 1, native listeners of German detected a target fricative in monosyllabic Dutch nonwords, half of which violated progressive German fricative place assimilation. In contrast to the earlier findings, listeners detected the fricative more quickly when assimilation was violated than when no violation occurred. This difference was not due to purely acoustic factors, since in Experiment 2 native Dutch listeners, presented with the same materials, showed no such effect. In Experiment 3, German listeners again detected the fricative more quickly when violation occurred in both monosyllabic and bisyllabic native nonwords, further ruling out explanations based on non-native input or on syllable structure. Finally, Experiment 4 tested whether the direction in which the rule operates (progressive or regressive) controls the direction of the effect on phoneme detection responses. When regressive German place assimilation for nasals was violated, German listeners detected stops more slowly, exactly as had been observed in previous studies of regressive assimilation. It is argued that a combination of low expectations in progressive assimilation and novel popout causes facilitation of processing, whereas not fulfilling high expectations in regressive assimilation causes inhibition.
  • Weber, K., Christiansen, M., Indefrey, P., & Hagoort, P. (2019). Primed from the start: Syntactic priming during the first days of language learning. Language Learning, 69(1), 198-221. doi:10.1111/lang.12327.

    Abstract

    New linguistic information must be integrated into our existing language system. Using a novel experimental task that incorporates a syntactic priming paradigm into artificial language learning, we investigated how new grammatical regularities and words are learned. This innovation allowed us to control the language input the learner received, while the syntactic priming paradigm provided insight into the nature of the underlying syntactic processing machinery. The results of the present study pointed to facilitatory syntactic processing effects within the first days of learning: Syntactic and lexical priming effects revealed participants’ sensitivity to both novel words and word orders. This suggested that novel syntactic structures and their meaning (form–function mapping) can be acquired rapidly through incidental learning. More generally, our study indicated similar mechanisms for learning and processing in both artificial and natural languages, with implications for the relationship between first and second language learning.
  • Weber, K., Micheli, C., Ruigendijk, E., & Rieger, J. (2019). Sentence processing is modulated by the current linguistic environment and a priori information: An fMRI study. Brain and Behavior, 9(7): e01308. doi:10.1002/brb3.1308.

    Abstract

    Introduction
    Words are not processed in isolation but in rich contexts that are used to modulate and facilitate language comprehension. Here, we investigate distinct neural networks underlying two types of contexts, the current linguistic environment and verb‐based syntactic preferences.

    Methods
    We had two main manipulations. The first was the current linguistic environment, where the relative frequencies of two syntactic structures (prepositional object [PO] and double‐object [DO]) would either follow everyday linguistic experience or not. The second concerned the preference toward one or the other structure depending on the verb; learned in everyday language use and stored in memory. German participants were reading PO and DO sentences in German while brain activity was measured with functional magnetic resonance imaging.

    Results
    First, the anterior cingulate cortex (ACC) showed a pattern of activation that integrated the current linguistic environment with everyday linguistic experience. When the input did not match everyday experience, the unexpected frequent structure showed higher activation in the ACC than the other conditions and more connectivity from the ACC to posterior parts of the language network. Second, verb‐based surprisal of seeing a structure given a verb (PO verb preference but DO structure presentation) resulted, within the language network (left inferior frontal and left middle/superior temporal gyrus) and the precuneus, in increased activation compared to a predictable verb‐structure pairing.

    Conclusion
    In conclusion, (1) beyond the canonical language network, brain areas engaged in prediction and error signaling, such as the ACC, might use the statistics of syntactic structures to modulate language processing, (2) the language network is directly engaged in processing verb preferences. These two networks show distinct influences on sentence processing.

    Additional information

    Supporting information
  • Weber, K., & Lavric, A. (2008). Syntactic anomaly elicits a lexico-semantic (N400) ERP effect in the second but not in the first language. Psychophysiology, 45(6), 920-925. doi:10.1111/j.1469-8986.2008.00691.x.

    Abstract

    Recent brain potential research into first versus second language (L1 vs. L2) processing revealed striking responses to morphosyntactic features absent in the mother tongue. The aim of the present study was to establish whether the presence of comparable morphosyntactic features in L1 leads to more similar electrophysiological L1 and L2 profiles. ERPs were acquired while German-English bilinguals and native speakers of English read sentences. Some sentences were meaningful and well formed, whereas others contained morphosyntactic or semantic violations in the final word. In addition to the expected P600 component, morphosyntactic violations in L2 but not L1 led to an enhanced N400. This effect may suggest either that resolution of morphosyntactic anomalies in L2 relies on the lexico-semantic system or that the weaker/slower morphological mechanisms in L2 lead to greater sentence wrap-up difficulties known to result in N400 enhancement.
  • Weber, A. (2008). What eye movements can tell us about spoken-language processing: A psycholinguistic survey. In C. M. Riehl (Ed.), Was ist linguistische Evidenz: Kolloquium des Zentrums Sprachenvielfalt und Mehrsprachigkeit, November 2006 (pp. 57-68). Aachen: Shaker.
  • Weber, A. (2008). What the eyes can tell us about spoken-language comprehension [Abstract]. Journal of the Acoustical Society of America, 124, 2474-2474.

    Abstract

    Lexical recognition is typically slower in L2 than in L1. Part of the difficulty comes from insufficiently precise processing of L2 phonemes. Consequently, L2 listeners fail to eliminate candidate words that L1 listeners can exclude from competing for recognition. For instance, the inability to distinguish /r/ from /l/ in rocket and locker makes both words possible candidates for Japanese listeners when hearing their onset (e.g., Cutler, Weber, and Otake, 2006). The L2 disadvantage can, however, be dispelled: For L2 listeners, but not L1 listeners, L2 speech from a non-native talker with the same language background is known to be as intelligible as L2 speech from a native talker (e.g., Bent and Bradlow, 2003). A reason for this may be that L2 listeners have ample experience with segmental deviations that are characteristic for their own accent. On this account, only phonemic deviations that are typical for the listeners’ own accent will cause spurious lexical activation in L2 listening (e.g., English magic pronounced as megic for Dutch listeners). In this talk, I will present evidence from cross-modal priming studies with a variety of L2 listener groups, showing how the processing of phonemic deviations is accent-specific but withstands fine phonetic differences.
  • Wegener, C. (2008). A grammar of Savosavo: A Papuan language of the Solomon Islands. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Whitehead, H., & Hersh, T. A. (2022). Posterior probabilities of membership of repertoires in acoustic clades. PLoS One, 17(4): e0267501. doi:10.1371/journal.pone.0267501.

    Abstract

    Recordings of calls may be used to assess population structure for acoustic species. This can be particularly effective if there are identity calls, produced nearly exclusively by just one population segment. The identity call method, IDcall, classifies calls into types using contaminated mixture models, and then clusters repertoires of calls into identity clades (potential population segments) using identity calls that are characteristic of the repertoires in each identity clade. We show how to calculate the Bayesian posterior probabilities that each repertoire is a member of each identity clade, and display this information as a stacked bar graph. This methodology (IDcallPP) is introduced using the output of IDcall but could easily be adapted to estimate posterior probabilities of clade membership when acoustic clades are delineated using other methods. This output is similar to that of the STRUCTURE software which uses molecular genetic data to assess population structure and has become a standard in conservation genetics. The technique introduced here should be a valuable asset to those who use acoustic data to address evolution, ecology, or conservation, and creates a methodological and conceptual bridge between geneticists and acousticians who aim to assess population structure.
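    The Bayesian step the abstract describes — assigning each repertoire a posterior probability of membership in each acoustic clade — can be illustrated with a toy sketch. This is not the authors' IDcallPP implementation; the call types, probabilities, and uniform prior below are hypothetical, and a multinomial likelihood stands in for the paper's contaminated mixture models.

    ```python
    import math

    # Hypothetical per-clade probabilities that a sampled call from a member
    # repertoire is of each call type ("ID1"/"ID2" act as identity calls).
    clade_call_probs = {
        "clade_A": {"ID1": 0.6, "ID2": 0.1, "common": 0.3},
        "clade_B": {"ID1": 0.1, "ID2": 0.6, "common": 0.3},
    }
    prior = {"clade_A": 0.5, "clade_B": 0.5}  # uniform prior over clades

    def posterior_membership(repertoire_counts):
        """P(clade | repertoire) via Bayes' rule with a multinomial likelihood."""
        log_post = {}
        for clade, probs in clade_call_probs.items():
            loglik = sum(n * math.log(probs[call])
                         for call, n in repertoire_counts.items())
            log_post[clade] = math.log(prior[clade]) + loglik
        # Normalize with log-sum-exp for numerical stability.
        m = max(log_post.values())
        norm = sum(math.exp(v - m) for v in log_post.values())
        return {c: math.exp(v - m) / norm for c, v in log_post.items()}

    # A repertoire rich in ID1 calls should be assigned mostly to clade_A.
    post = posterior_membership({"ID1": 8, "ID2": 1, "common": 11})
    ```

    Each repertoire's resulting probability vector sums to one, which is what makes the paper's STRUCTURE-style stacked bar graph possible: one bar per repertoire, segmented by clade.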
  • Widlok, T., Rapold, C. J., & Hoymann, G. (2008). Multimedia analysis in documentation projects: Kinship, interrogatives and reciprocals in ǂAkhoe Haiǁom. In K. D. Harrison, D. S. Rood, & A. Dwyer (Eds.), Lessons from documented endangered languages (pp. 355-370). Amsterdam: Benjamins.

    Abstract

    This contribution emphasizes the role of multimedia data not only for archiving languages but also for creating opportunities for innovative analyses. In the case at hand, video material was collected as part of the documentation of ǂAkhoe Haiǁom, a Khoisan language spoken in northern Namibia. The multimedia documentation project brought together linguistic and anthropological work to highlight connections between specialized domains, namely kinship terminology, interrogatives and reciprocals. These connections would have gone unnoticed or undocumented in more conventional modes of language description. It is suggested that such an approach may be particularly appropriate for the documentation of endangered languages since it directs the focus of attention away from isolated traits of languages towards more complex practices of communication that are also frequently threatened with extinction.
  • Widlok, T. (2008). Landscape unbounded: Space, place, and orientation in ≠Akhoe Hai//om and beyond. Language Sciences, 30(2/3), 362-380. doi:10.1016/j.langsci.2006.12.002.

    Abstract

    Even before it became commonplace to assume that “the Eskimo have a hundred words for snow”, the languages of hunting and gathering people have played an important role in debates about linguistic relativity concerning geographical ontologies. Evidence from languages of hunter-gatherers has been used in radical relativist challenges to the overall notion of a comparative typology of generic natural forms and landscapes as terms of reference. It has been invoked to emphasize a personalized relationship between humans and the non-human world. It is against this background that this contribution discusses the landscape terminology of ≠Akhoe Hai//om, a Khoisan language spoken by “Bushmen” in Namibia. Landscape vocabulary is ubiquitous in ≠Akhoe Hai//om due to the fact that the landscape plays a critical role in directionals and other forms of “topographical gossip” and due to merges between landscape and group terminology. This system of landscape-cum-group terminology is outlined and related to the use of place names in the area.
  • Widlok, T. (2008). The dilemmas of walking: A comparative view. In T. Ingold, & J. L. Vergunst (Eds.), Ways of walking: Ethnography and practice on foot (pp. 51-66). Aldershot: Ashgate.
  • Wierenga, L. M., Doucet, G. E., Dima, D., Agartz, I., Aghajani, M., Akudjedu, T. N., Albajes-Eizagirre, A., Alnæs, D., Alpert, K. I., Andreassen, O. A., Anticevic, A., Asherson, P., Banaschewski, T., Bargallo, N., Baumeister, S., Baur-Streubel, R., Bertolino, A., Bonvino, A., Boomsma, D. I., Borgwardt, S., Bourque, J., Den Braber, A., Brandeis, D., Breier, A., Brodaty, H., Brouwer, R. M., Buitelaar, J. K., Busatto, G. F., Calhoun, V. D., Canales-Rodríguez, E. J., Cannon, D. M., Caseras, X., Castellanos, F. X., Chaim-Avancini, T. M., Ching, C. R. K., Clark, V. P., Conrod, P. J., Conzelmann, A., Crivello, F., Davey, C. G., Dickie, E. W., Ehrlich, S., Van 't Ent, D., Fisher, S. E., Fouche, J.-P., Franke, B., Fuentes-Claramonte, P., De Geus, E. J. C., Di Giorgio, A., Glahn, D. C., Gotlib, I. H., Grabe, H. J., Gruber, O., Gruner, P., Gur, R. E., Gur, R. C., Gurholt, T. P., De Haan, L., Haatveit, B., Harrison, B. J., Hartman, C. A., Hatton, S. N., Heslenfeld, D. J., Van den Heuvel, O. A., Hickie, I. B., Hoekstra, P. J., Hohmann, S., Holmes, A. J., Hoogman, M., Hosten, N., Howells, F. M., Hulshoff Pol, H. E., Huyser, C., Jahanshad, N., James, A. C., Jiang, J., Jönsson, E. G., Joska, J. A., Kalnin, A. J., Karolinska Schizophrenia Project (KaSP) Consortium, Klein, M., Koenders, L., Kolskår, K. K., Krämer, B., Kuntsi, J., Lagopoulos, J., Lazaro, L., Lebedeva, I. S., Lee, P. H., Lochner, C., Machielsen, M. W. J., Maingault, S., Martin, N. G., Martínez-Zalacaín, I., Mataix-Cols, D., Mazoyer, B., McDonald, B. C., McDonald, C., McIntosh, A. M., McMahon, K. L., McPhilemy, G., Van der Meer, D., Menchón, J. M., Naaijen, J., Nyberg, L., Oosterlaan, J., Paloyelis, Y., Pauli, P., Pergola, G., Pomarol-Clotet, E., Portella, M. J., Radua, J., Reif, A., Richard, G., Roffman, J. L., Rosa, P. G. P., Sacchet, M. D., Sachdev, P. S., Salvador, R., Sarró, S., Satterthwaite, T. D., Saykin, A. J., Serpa, M. H., Sim, K., Simmons, A., Smoller, J. W., Sommer, I. E., Soriano-Mas, C., Stein, D. J., Strike, L. T., Szeszko, P. R., Temmingh, H. S., Thomopoulos, S. I., Tomyshev, A. S., Trollor, J. N., Uhlmann, A., Veer, I. M., Veltman, D. J., Voineskos, A., Völzke, H., Walter, H., Wang, L., Wang, Y., Weber, B., Wen, W., West, J. D., Westlye, L. T., Whalley, H. C., Williams, S. C. R., Wittfeld, K., Wolf, D. H., Wright, M. J., Yoncheva, Y. N., Zanetti, M. V., Ziegler, G. C., De Zubicaray, G. I., Thompson, P. M., Crone, E. A., Frangou, S., & Tamnes, C. K. (2022). Greater male than female variability in regional brain structure across the lifespan. Human Brain Mapping, 43(1), 470-499. doi:10.1002/hbm.25204.

    Abstract

    For many traits, males show greater variability than females, with possible implications for understanding sex differences in health and disease. Here, the ENIGMA (Enhancing Neuro Imaging Genetics through Meta‐Analysis) Consortium presents the largest‐ever mega‐analysis of sex differences in variability of brain structure, based on international data spanning nine decades of life. Subcortical volumes, cortical surface area and cortical thickness were assessed in MRI data of 16,683 healthy individuals 1‐90 years old (47% females). We observed significant patterns of greater male than female between‐subject variance for all subcortical volumetric measures, all cortical surface area measures, and 60% of cortical thickness measures. This pattern was stable across the lifespan for 50% of the subcortical structures, 70% of the regional area measures, and nearly all regions for thickness. Our findings that these sex differences are present in childhood implicate early life genetic or gene‐environment interaction mechanisms. The findings highlight the importance of individual differences within the sexes, that may underpin sex‐specific vulnerability to disorders.
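    A common statistic behind such variability comparisons is the (log) ratio of group standard deviations. As a minimal sketch — with made-up numbers, not ENIGMA data or the paper's exact model — a value above zero indicates greater variability in the first group:

    ```python
    import math
    import statistics

    # Hypothetical regional brain-structure measures (arbitrary units).
    group_a = [4.1, 5.0, 3.2, 6.3, 2.8, 5.9, 4.7, 3.5]  # more spread out
    group_b = [4.4, 4.8, 4.1, 5.0, 4.3, 4.9, 4.6, 4.2]  # tightly clustered

    def log_sd_ratio(a, b):
        """ln(SD_a / SD_b): > 0 means group a shows greater between-subject
        variance than group b; 0 means equal variability."""
        return math.log(statistics.stdev(a) / statistics.stdev(b))

    vr = log_sd_ratio(group_a, group_b)
    ```

    In practice such ratios are computed per region and tested for significance across the age range, which is how a "stable across the lifespan" pattern can be assessed.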
  • Wilkins, D. (2001). Eliciting contrastive use of demonstratives for objects within close personal space (all objects well within arm’s reach). In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 164-168). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Wilkins, D., Kita, S., & Enfield, N. J. (2001). Ethnography of pointing questionnaire version 2. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 136-141). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Wilkins, D. (2001). The 1999 demonstrative questionnaire: “This” and “that” in comparative perspective. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 149-163). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2008). Seeing and hearing meaning: ERP and fMRI evidence of word versus picture integration into a sentence context. Journal of Cognitive Neuroscience, 20, 1235-1249. doi:10.1162/jocn.2008.20085.

    Abstract

    Understanding language always occurs within a situational context and, therefore, often implies combining streams of information from different domains and modalities. One such combination is that of spoken language and visual information, which are perceived together in a variety of ways during everyday communication. Here we investigate whether and how words and pictures differ in terms of their neural correlates when they are integrated into a previously built-up sentence context. This is assessed in two experiments looking at the time course (measuring event-related potentials, ERPs) and the locus (using functional magnetic resonance imaging, fMRI) of this integration process. We manipulated the ease of semantic integration of word and/or picture to a previous sentence context to increase the semantic load of processing. In the ERP study, an increased semantic load led to an N400 effect which was similar for pictures and words in terms of latency and amplitude. In the fMRI study, we found overlapping activations to both picture and word integration in the left inferior frontal cortex. Specific activations for the integration of a word were observed in the left superior temporal cortex. We conclude that despite obvious differences in representational format, semantic information coming from pictures and words is integrated into a sentence context in similar ways in the brain. This study adds to the growing insight that the language system incorporates (semantic) information coming from linguistic and extralinguistic domains with the same neural time course and by recruitment of overlapping brain areas.
  • Willems, R. M., Oostenveld, R., & Hagoort, P. (2008). Early decreases in alpha and gamma band power distinguish linguistic from visual information during spoken sentence comprehension. Brain Research, 1219, 78-90. doi:10.1016/j.brainres.2008.04.065.

    Abstract

    Language is often perceived together with visual information. This raises the question of how the brain integrates information conveyed in visual and/or linguistic format during spoken language comprehension. In this study we investigated the dynamics of semantic integration of visual and linguistic information by means of time-frequency analysis of the EEG signal. A modified version of the N400 paradigm with either a word or a picture of an object being semantically incongruous with respect to the preceding sentence context was employed. Event-Related Potential (ERP) analysis showed qualitatively similar N400 effects for integration of either word or picture. Time-frequency analysis revealed early specific decreases in alpha and gamma band power for linguistic and visual information respectively. We argue that these reflect a rapid context-based analysis of acoustic (word) or visual (picture) form information. We conclude that although full semantic integration of linguistic and visual information occurs through a common mechanism, early differences in oscillations in specific frequency bands reflect the format of the incoming information and, importantly, an early context-based detection of its congruity with respect to the preceding language context.
  • Williams, N. M., Williams, H., Majounie, E., Norton, N., Glaser, B., Morris, H. R., Owen, M. J., & O'Donovan, M. C. (2008). Analysis of copy number variation using quantitative interspecies competitive PCR. Nucleic Acids Research, 36(17): e112. doi:10.1093/nar/gkn495.

    Abstract

    Over recent years small submicroscopic DNA copy-number variants (CNVs) have been highlighted as an important source of variation in the human genome, human phenotypic diversity and disease susceptibility. Consequently, there is a pressing need for the development of methods that allow the efficient, accurate and cheap measurement of genomic copy number polymorphisms in clinical cohorts. We have developed a simple competitive PCR-based method to determine DNA copy number which uses the entire genome of a single chimpanzee as a competitor, thus eliminating the requirement for competitive sequences to be synthesized for each assay. This results in the requirement for only a single reference sample for all assays and dramatically increases the potential for large numbers of loci to be analysed in multiplex. In this study we establish proof of concept by accurately detecting previously characterized mutations at the PARK2 locus and then demonstrating the potential of quantitative interspecies competitive PCR (qicPCR) to accurately genotype CNVs in association studies by analysing chromosome 22q11 deletions in a sample of previously characterized patients and normal controls.
  • Wilms, V., Drijvers, L., & Brouwer, S. (2022). The Effects of Iconic Gestures and Babble Language on Word Intelligibility in Sentence Context. Journal of Speech, Language, and Hearing Research, 65, 1822-1838. doi:10.1044/2022_JSLHR-21-00387.

    Abstract

    Purpose: This study investigated to what extent iconic co-speech gestures help word intelligibility in sentence context in two different linguistic maskers (native vs. foreign). It was hypothesized that sentence recognition improves with the presence of iconic co-speech gestures and with foreign compared to native babble. Method: Thirty-two native Dutch participants performed a Dutch word recognition task in context in which they were presented with videos in which an actress uttered short Dutch sentences (e.g., Ze begint te openen, “She starts to open”). Participants were presented with a total of six audiovisual conditions: no background noise (i.e., clear condition) without gesture, no background noise with gesture, French babble without gesture, French babble with gesture, Dutch babble without gesture, and Dutch babble with gesture; and they were asked to type down what was said by the Dutch actress. The accurate identification of the action verbs at the end of the target sentences was measured. Results: The results demonstrated that performance on the task was better in the gesture compared to the nongesture conditions (i.e., gesture enhancement effect). In addition, performance was better in French babble than in Dutch babble. Conclusions: Listeners benefit from iconic co-speech gestures during communication and from foreign background speech compared to native. These insights into multimodal communication may be valuable to everyone who engages in multimodal communication and especially to a public who often works in public places where competing speech is present in the background.
  • Wirthlin, M., Chang, E. F., Knörnschild, M., Krubitzer, L. A., Mello, C. V., Miller, C. T., Pfenning, A. R., Vernes, S. C., Tchernichovski, O., & Yartsev, M. M. (2019). A modular approach to vocal learning: Disentangling the diversity of a complex behavioral trait. Neuron, 104(1), 87-99. doi:10.1016/j.neuron.2019.09.036.

    Abstract

    Vocal learning is a behavioral trait in which the social and acoustic environment shapes the vocal repertoire of individuals. Over the past century, the study of vocal learning has progressed at the intersection of ecology, physiology, neuroscience, molecular biology, genomics, and evolution. Yet, despite the complexity of this trait, vocal learning is frequently described as a binary trait, with species being classified as either vocal learners or vocal non-learners. As a result, studies have largely focused on a handful of species for which strong evidence for vocal learning exists. Recent studies, however, suggest a continuum in vocal learning capacity across taxa. Here, we further suggest that vocal learning is a multi-component behavioral phenotype comprised of distinct yet interconnected modules. Discretizing the vocal learning phenotype into its constituent modules would facilitate integration of findings across a wider diversity of species, taking advantage of the ways in which each excels in a particular module, or in a specific combination of features. Such comparative studies can improve understanding of the mechanisms and evolutionary origins of vocal learning. We propose an initial set of vocal learning modules supported by behavioral and neurobiological data and highlight the need for diversifying the field in order to disentangle the complexity of the vocal learning phenotype.

  • Wittenburg, P. (2008). Die CLARIN Forschungsinfrastruktur. ÖGAI-journal (Österreichische Gesellschaft für Artificial Intelligence), 27, 10-17.
  • Wnuk, E., Verkerk, A., Levinson, S. C., & Majid, A. (2022). Color technology is not necessary for rich and efficient color language. Cognition, 229: 105223. doi:10.1016/j.cognition.2022.105223.

    Abstract

    The evolution of basic color terms in language is claimed to be stimulated by technological development, involving technological control of color or exposure to artificially colored objects. Accordingly, technologically “simple” non-industrialized societies are expected to have poor lexicalization of color, i.e., only rudimentary lexica of 2, 3 or 4 basic color terms, with unnamed gaps in the color space. While it may indeed be the case that technology stimulates lexical growth of color terms, it is sometimes considered a sine qua non for color salience and lexicalization. We provide novel evidence that this overlooks the role of the natural environment, and people's engagement with the environment, in the evolution of color vocabulary. We introduce the Maniq—nomadic hunter-gatherers with no color technology, but who have a basic color lexicon of 6 or 7 terms, thus of the same order as large languages like Vietnamese and Hausa, and who routinely talk about color. We examine color language in Maniq and compare it to available data in other languages to demonstrate it has remarkably high consensual color term usage, on a par with English, and high coding efficiency. This shows colors can matter even for non-industrialized societies, suggesting technology is not necessary for color language. Instead, factors such as perceptual prominence of color in natural environments, its practical usefulness across communicative contexts, and symbolic importance can all stimulate elaboration of color language.
  • Woensdregt, M., Jara-Ettinger, J., & Rubio-Fernandez, P. (2022). Language universals rely on social cognition: Computational models of the use of this and that to redirect the receiver’s attention. In J. Culbertson, A. Perfors, H. Rabagliati, & V. Ramenzoni (Eds.), Proceedings of the 44th Annual Conference of the Cognitive Science Society (CogSci 2022) (pp. 1382-1388). Toronto, Canada: Cognitive Science Society.

    Abstract

    Demonstratives—simple referential devices like this and that—are linguistic universals, but their meaning varies cross-linguistically. In languages like English and Italian, demonstratives are thought to encode the referent’s distance from the producer (e.g., that one means “the one far away from me”), while in others, like Portuguese and Spanish, they encode relative distance from both producer and receiver (e.g., aquel means “the one far away from both of us”). Here we propose that demonstratives are also sensitive to the receiver’s focus of attention, hence requiring a deeper form of social cognition than previously thought. We provide initial empirical and computational evidence for this idea, suggesting that producers use demonstratives to redirect the receiver’s attention towards the intended referent, rather than only to indicate its physical distance.
  • Wolf, M. C. (2022). Spoken and written word processing: Effects of presentation modality and individual differences in experience to written language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Wolf, M. C., Smith, A. C., Meyer, A. S., & Rowland, C. F. (2019). Modality effects in vocabulary acquisition. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 1212-1218). Montreal, QB: Cognitive Science Society.

    Abstract

    It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely offers explanation for such contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword - novel object pairs, with controls on: modality of test, modality of meaning, duration of exposure and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between subjects design generated four participant groups per experiment 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1 the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities the efficiency with which word form-meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on developing learning mechanisms that learn equally efficiently from both written and spoken materials.
  • Wolf, M. C., Muijselaar, M. M. L., Boonstra, A. M., & De Bree, E. H. (2019). The relationship between reading and listening comprehension: Shared and modality-specific components. Reading and Writing, 32(7), 1747-1767. doi:10.1007/s11145-018-9924-8.

    Abstract

    This study aimed to increase our understanding on the relationship between reading and listening comprehension. Both in comprehension theory and in educational practice, reading and listening comprehension are often seen as interchangeable, overlooking modality-specific aspects of them separately. Three questions were addressed. First, it was examined to what extent reading and listening comprehension comprise modality-specific, distinct skills or an overlapping, domain-general skill in terms of the amount of explained variance in one comprehension type by the opposite comprehension type. Second, general and modality-unique subskills of reading and listening comprehension were sought by assessing the contributions of the foundational skills word reading fluency, vocabulary, memory, attention, and inhibition to both comprehension types. Lastly, the practice of using either listening comprehension or vocabulary as a proxy of general comprehension was investigated. Reading and listening comprehension tasks with the same format were assessed in 85 second and third grade children. Analyses revealed that reading comprehension explained 34% of the variance in listening comprehension, and listening comprehension 40% of reading comprehension. Vocabulary and word reading fluency were found to be shared contributors to both reading and listening comprehension. None of the other cognitive skills contributed significantly to reading or listening comprehension. These results indicate that only part of the comprehension process is indeed domain-general and not influenced by the modality in which the information is provided. Especially vocabulary seems to play a large role in this domain-general part. The findings warrant a more prominent focus of modality-specific aspects of both reading and listening comprehension in research and education.
  • Wolters, G., & Poletiek, F. H. (2008). Beslissen over aangiftes van seksueel misbruik bij kinderen. De Psycholoog, 43, 29-29.
  • Li, X., Yang, Y., & Hagoort, P. (2008). Pitch accent and lexical tone processing in Chinese discourse comprehension: An ERP study. Brain Research, 1222, 192-200. doi:10.1016/j.brainres.2008.05.031.

    Abstract

    In the present study, event-related brain potentials (ERP) were recorded to investigate the role of pitch accent and lexical tone in spoken discourse comprehension. Chinese was used as material to explore the potential difference in the nature and time course of brain responses to sentence meaning as indicated by pitch accent and to lexical meaning as indicated by tone. In both cases, the pitch contour of critical words was varied. The results showed that both inconsistent pitch accent and inconsistent lexical tone yielded N400 effects, and there was no interaction between them. The negativity evoked by inconsistent pitch accent had the same topography as that evoked by inconsistent lexical tone violation, with a maximum over central–parietal electrodes. Furthermore, the effect for the combined violations was the sum of effects for pure pitch accent and pure lexical tone violation. However, the effect for the lexical tone violation appeared approximately 90 ms earlier than the effect of the pitch accent violation. It is suggested that there might be a correspondence between the neural mechanisms underlying pitch accent and lexical meaning processing in context. They both reflect the integration of the current information into a discourse context, independent of whether the current information was sentence meaning indicated by accentuation, or lexical meaning indicated by tone. In addition, lexical meaning was processed earlier than sentence meaning conveyed by pitch accent during spoken language processing.
  • Yang, J. (2022). Discovering the units in language cognition: From empirical evidence to a computational model. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Yang, J., Van den Bosch, A., & Frank, S. L. (2022). Unsupervised text segmentation predicts eye fixations during reading. Frontiers in Artificial Intelligence, 5: 731615. doi:10.3389/frai.2022.731615.

    Abstract

    Words typically form the basis of psycholinguistic and computational linguistic studies about sentence processing. However, recent evidence shows the basic units during reading, i.e., the items in the mental lexicon, are not always words, but could also be sub-word and supra-word units. To recognize these units, human readers require a cognitive mechanism to learn and detect them. In this paper, we assume eye fixations during reading reveal the locations of the cognitive units, and that the cognitive units are analogous with the text units discovered by unsupervised segmentation models. We predict eye fixations by model-segmented units on both English and Dutch text. The results show the model-segmented units predict eye fixations better than word units. This finding suggests that the predictive performance of model-segmented units indicates their plausibility as cognitive units. The Less-is-Better (LiB) model, which finds the units that minimize both long-term and working memory load, offers advantages both in terms of prediction score and efficiency among alternative models. Our results also suggest that modeling the least-effort principle for the management of long-term and working memory can lead to inferring cognitive units. Overall, the study supports the theory that the mental lexicon stores not only words but also smaller and larger units, suggests that fixation locations during reading depend on these units, and shows that unsupervised segmentation models can discover these units.
  • Zavala, R. (2001). Entre consejos, diablos y vendedores de caca, rasgos gramaticales del oluteco en tres de sus cuentos. Tlalocan. Revista de Fuentes para el Conocimiento de las Culturas Indígenas de México, XIII, 335-414.

    Abstract

    The three Olutec stories from Oluta, Veracruz, were narrated by Antonio Asistente Maldonado. Roberto Zavala presents a morpheme-by-morpheme analysis of the texts with a sketch of the major grammatical and typological features of this language. Olutec is spoken by three dozen speakers. The grammatical structure of this language has not been described before. The sketch contains information on verb and noun morphology, verb classes, clause types, inverse/direct patterns, grammaticalization processes, applicatives, incorporation, word order type, and discontinuous expressions. The stories presented here are the first Olutec texts ever published. The motifs of the stories are well known throughout Middle America. The story of "the Rabbit who wants to be big" explains why one of the main protagonists of Middle American folktales acquired long ears. The story of "the Devil who is inebriated by the people of a village" explains how the inhabitants of a village discover the true identity of a man who likes to dance huapango and decide to get rid of him. Finally, the story of "the shit-sellers" presents two compadres, one who is lazy and the other one who works hard. The hard-worker asks the lazy compadre how he survives without working. The latter lies to him that he sells shit in the neighboring village. The hard-working compadre decides to become a shit-seller and in the process realizes that the lazy compadre deceived him. However, he is lucky and meets with the Devil, who offers him money in compensation for having been deceived. When the lazy compadre realizes that the hard-working compadre has become rich, he tries to do the same business but gets beaten in the process.
  • Zeller, J., Bylund, E., & Lewis, A. G. (2022). The parser consults the lexicon in spite of transparent gender marking: EEG evidence from noun class agreement processing in Zulu. Cognition, 226: 105148. doi:10.1016/j.cognition.2022.105148.

    Abstract

    In sentence comprehension, the parser in many languages has the option to use both the morphological form of a noun and its lexical representation when evaluating agreement. The additional step of consulting the lexicon incurs processing costs, and an important question is whether the parser takes that step even when the formal cues alone are sufficiently reliable to evaluate agreement. Our study addressed this question using electrophysiology in Zulu, a language where both grammatical gender and number features are reliably expressed formally by noun class prefixes, but only gender features are lexically specified. We observed reduced, more topographically focal LAN, and more frontally distributed alpha/beta power effects for gender compared to number agreement violations. These differences provide evidence that for gender mismatches, even though the formal cues are reliable, the parser nevertheless takes the additional step of consulting the noun's lexical representation, a step which is not available for number.

  • Zeshan, U., & Perniss, P. M. (2008). Possessive and existential constructions in sign languages. Nijmegen: Ishara Press.
  • Zhang, Q., Zhou, Y., & Lou, H. (2022). The dissociation between age of acquisition and word frequency effects in Chinese spoken picture naming. Psychological Research, 86, 1918-1929. doi:10.1007/s00426-021-01616-0.

    Abstract

    This study aimed to examine the locus of age of acquisition (AoA) and word frequency (WF) effects in Chinese spoken picture naming, using a picture–word interference task. We conducted four experiments manipulating the properties of picture names (AoA in Experiments 1 and 2, while controlling WF; and WF in Experiments 3 and 4, while controlling AoA), and the relations between distractors and targets (semantic or phonological relatedness). Both Experiments 1 and 2 demonstrated AoA effects in picture naming; pictures of early acquired concepts were named faster than those acquired later. There was an interaction between AoA and semantic relatedness, but not between AoA and phonological relatedness, suggesting localisation of AoA effects at the stage of lexical access in picture naming. Experiments 3 and 4 demonstrated WF effects: pictures of high-frequency concepts were named faster than those of low-frequency concepts. WF interacted with both phonological and semantic relatedness, suggesting localisation of WF effects at multiple levels of picture naming, including lexical access and phonological encoding. Our findings show that AoA and WF effects exist in Chinese spoken word production and may arise at related processes of lexical selection.
  • Zhang, Y., & Yu, C. (2022). Examining real-time attention dynamics in parent-infant picture book reading. In J. Culbertson, A. Perfors, H. Rabagliati, & V. Ramenzoni (Eds.), Proceedings of the 44th Annual Conference of the Cognitive Science Society (CogSci 2022) (pp. 1367-1374). Toronto, Canada: Cognitive Science Society.

    Abstract

    Picture book reading is a common word-learning context in which parents repeatedly name objects to their child, and it has been found to facilitate early word learning. To learn the correct word-object mappings in a book-reading context, infants need to be able to link what they see with what they hear. However, given multiple objects on every book page, it is not clear how infants direct their attention to objects named by parents. The aim of the current study is to examine how infants mechanistically discover the correct word-object mappings during book reading in real time. We used head-mounted eye-tracking during parent-infant picture book reading and measured the infant's moment-by-moment visual attention to the named referent. We also examined how gesture cues provided by both the child and the parent may influence infants' attention to the named target. We found that although parents provided many object labels during book reading, infants were not able to attend to the named objects easily. However, their abilities to follow and use gestures to direct the other social partner’s attention increase the chance of looking at the named target during parent naming.
  • Zhang, Y., Chen, C.-h., & Yu, C. (2019). Mechanisms of cross-situational learning: Behavioral and computational evidence. In Advances in Child Development and Behavior (Vol. 56, pp. 37-63).

    Abstract

    Word learning happens in everyday contexts with many words and many potential referents for those words in view at the same time. It is challenging for young learners to find the correct referent upon hearing an unknown word in the moment. This problem of referential uncertainty has been deemed the crux of early word learning (Quine, 1960). Recent empirical and computational studies have found support for a statistical solution to the problem termed cross-situational learning. Cross-situational learning allows learners to acquire word meanings across multiple exposures, even though each individual exposure is referentially uncertain. Recent empirical research shows that infants, children and adults rely on cross-situational learning to learn new words (Smith & Yu, 2008; Suanda, Mugwanya, & Namy, 2014; Yu & Smith, 2007). However, researchers have found evidence supporting two very different theoretical accounts of learning mechanisms: Hypothesis Testing (Gleitman, Cassidy, Nappa, Papafragou, & Trueswell, 2005; Markman, 1992) and Associative Learning (Frank, Goodman, & Tenenbaum, 2009; Yu & Smith, 2007). Hypothesis Testing is generally characterized as a form of learning in which a coherent hypothesis regarding a specific word-object mapping is formed, often in conceptually constrained ways. The hypothesis will then be either accepted or rejected with additional evidence. However, proponents of the Associative Learning framework often characterize learning as aggregating information over time through implicit associative mechanisms. A learner acquires the meaning of a word when the association between the word and the referent becomes relatively strong. In this chapter, we consider these two psychological theories in the context of cross-situational word-referent learning. By reviewing recent empirical and cognitive modeling studies, our goal is to deepen our understanding of the underlying word learning mechanisms by examining and comparing the two theoretical learning accounts.
  • Wu, S., Zhang, D., Li, X., Zhao, J., Sun, X., Shi, L., Mao, Y., Zhang, Y., & Jiang, F. (2022). Siblings and Early Childhood Development: Evidence from a Population-Based Cohort in Preschoolers from Shanghai. International Journal of Environmental Research and Public Health, 19(9): 5739. doi:10.3390/ijerph19095739.

    Abstract

    (1) Background: The current study aims to investigate the association between the presence of a sibling and early childhood development (ECD). (2) Methods: Data were obtained from a large-scale population-based cohort in Shanghai. Children were followed from three to six years old. Based on birth order, the sample was divided into four groups: single child, younger child, elder child, and single-elder transfer (transfer from single-child to elder-child). Psychosocial well-being and school readiness were assessed with the total difficulties score from the Strengths and Difficulties Questionnaire (SDQ) and the overall development score from the early Human Capability Index (eHCI), respectively. A multilevel model was conducted to evaluate the main effect of each sibling group and the group × age interaction effect on psychosocial well-being and school readiness. (3) Results: Across all measures, children in the younger child group presented with lower psychosocial problems (β = −0.96, 95% CI: −1.44, −0.48, p < 0.001) and higher school readiness scores (β = 1.56, 95% CI: 0.61, 2.51, p = 0.001). No significant difference, or marginally significant difference, was found between the elder group and the single-child group. Compared to the single-child group, the single-elder transfer group presented with slower development on both psychosocial well-being (Age × Group: β = 0.37, 95% CI: 0.18, 0.56, p < 0.001) and school readiness (Age × Group: β = −0.75, 95% CI: −1.10, −0.40, p < 0.001). The sibling-ECD effects did not differ between children from families of low versus high socioeconomic status. (4) Conclusion: The current study suggested the presence of a sibling was not associated with worse development outcomes in general. Rather, children with an elder sibling are more likely to present with better ECD.
  • Zhao, J., Yu, Z., Sun, X., Wu, S., Zhang, J., Zhang, D., Zhang, Y., & Jiang, F. (2022). Association between screen time trajectory and early childhood development in children in China. JAMA Pediatrics, 176(8), 768-775. doi:10.1001/jamapediatrics.2022.1630.

    Abstract

    Importance: Screen time has become an integral part of children's daily lives. Nevertheless, the developmental consequences of screen exposure in young children remain unclear.

    Objective: To investigate the screen time trajectory from 6 to 72 months of age and its association with children's development at age 72 months in a prospective birth cohort.

    Design, setting, and participants: Women in Shanghai, China, who were at 34 to 36 gestational weeks and had an expected delivery date between May 2012 and July 2013 were recruited for this cohort study. Their children were followed up at 6, 9, 12, 18, 24, 36, 48, and 72 months of age. Children's screen time was classified into 3 groups at age 6 months: continued low (ie, stable amount of screen time), late increasing (ie, sharp increase in screen time at age 36 months), and early increasing (ie, large amount of screen time in early stages that remained stable after age 36 months). Cognitive development was assessed by specially trained research staff in a research clinic. Of 262 eligible mother-offspring pairs, 152 dyads had complete data regarding all variables of interest and were included in the analyses. Data were analyzed from September 2019 to November 2021.

    Exposures: Mothers reported screen times of children at 6, 9, 12, 18, 24, 36, 48, and 72 months of age.

    Main outcomes and measures: The cognitive development of children was evaluated using the Wechsler Intelligence Scale for Children, 4th edition, at age 72 months. Social-emotional development was measured by the Strengths and Difficulties Questionnaire, which was completed by the child's mother. The study described demographic characteristics, maternal mental health, child's temperament at age 6 months, and mental development at age 12 months by subgroups clustered by a group-based trajectory model. Group difference was examined by analysis of variance.

    Results: A total of 152 mother-offspring dyads were included in this study, including 77 girls (50.7%) and 75 boys (49.3%) (mean [SD] age of the mothers was 29.7 [3.3] years). Children's screen time trajectory from age 6 to 72 months was classified into 3 groups: continued low (110 [72.4%]), late increasing (17 [11.2%]), and early increasing (25 [16.4%]). Compared with the continued low group, the late increasing group had lower scores on the Full-Scale Intelligence Quotient (β coefficient, -8.23; 95% CI, -15.16 to -1.30; P < .05) and the General Ability Index (β coefficient, -6.42; 95% CI, -13.70 to 0.86; P = .08); the early increasing group presented with lower scores on the Full-Scale Intelligence Quotient (β coefficient, -6.68; 95% CI, -12.35 to -1.02; P < .05) and the Cognitive Proficiency Index (β coefficient, -10.56; 95% CI, -17.23 to -3.90; P < .01) and a higher total difficulties score (β coefficient, 2.62; 95% CI, 0.49-4.76; P < .05).

    Conclusions and relevance: This cohort study found that excessive screen time in early years was associated with poor cognitive and social-emotional development. This finding may be helpful in encouraging awareness among parents of the importance of onset and duration of children's screen time.
  • Zheng, X., & Lemhöfer, K. (2019). The “semantic P600” in second language processing: When syntax conflicts with semantics. Neuropsychologia, 127, 131-147. doi:10.1016/j.neuropsychologia.2019.02.010.

    Abstract

    In sentences like “the mouse that chased the cat was hungry”, the syntactically correct interpretation (the mouse chases the cat) is contradicted by semantic and pragmatic knowledge. Previous research has shown that L1 speakers sometimes base sentence interpretation on this type of knowledge (so-called “shallow” or “good-enough” processing). We made use of both behavioural and ERP measurements to investigate whether L2 learners differ from native speakers in the extent to which they engage in “shallow” syntactic processing. German learners of Dutch as well as Dutch native speakers read sentences containing relative clauses (as in the example above) for which the plausible thematic roles were or were not reversed, and made plausibility judgments. The results show that behaviourally, L2 learners had more difficulty than native speakers in discriminating plausible from implausible sentences. In the ERPs, we replicated the previously reported finding of a “semantic P600” for semantic reversal anomalies in native speakers, probably reflecting the effort to resolve the syntax-semantics conflict. In L2 learners, though, this P600 was largely attenuated and surfaced only in those trials that were judged correctly for plausibility. These results generally point to a more prevalent, but not exclusive, occurrence of shallow syntactic processing in L2 learners.
  • Zhu, Z., Bastiaansen, M. C. M., Hakun, J. G., Petersson, K. M., Wang, S., & Hagoort, P. (2019). Semantic unification modulates N400 and BOLD signal change in the brain: A simultaneous EEG-fMRI study. Journal of Neurolinguistics, 52: 100855. doi:10.1016/j.jneuroling.2019.100855.

    Abstract

    Semantic unification during sentence comprehension has been associated with amplitude change of the N400 in event-related potential (ERP) studies, and with activation in the left inferior frontal gyrus (IFG) in functional magnetic resonance imaging (fMRI) studies. However, the specificity of this activation to semantic unification remains unknown. To more closely examine the brain processes involved in semantic unification, we employed simultaneous EEG-fMRI to time-lock the semantic-unification-related N400 change, and integrated trial-by-trial variation in both N400 and BOLD change beyond the condition-level BOLD change difference measured in traditional fMRI analyses. Participants read sentences in which semantic unification load was parametrically manipulated by varying cloze probability. Separately, ERP and fMRI results replicated previous findings, in that semantic unification load parametrically modulated the amplitude of the N400 and cortical activation. Integrated EEG-fMRI analyses revealed a different pattern, in which functional activity in the left IFG and bilateral supramarginal gyrus (SMG) was associated with N400 amplitude, with the left IFG activation and bilateral SMG activation being selective to the condition-level and trial-level of semantic unification load, respectively. By employing the integrated EEG-fMRI analyses, this study is among the first to shed light on how to integrate trial-level variation in language comprehension.
  • Zimianiti, E. (2022). Is semantic memory the winning component in second language teaching with Accelerative Integrated Method (AIM)? LingUU Journal, 6(1), 54-62.

    Abstract

    This paper constitutes a research proposal based on Rousse-Malpalt’s (2019) dissertation, which extensively examines the effectiveness of the Accelerative Integrated Method (AIM) in second language (L2) learning. Although AIM has been found to be highly effective in comparison with non-implicit teaching methods, the reasons behind its success and effectiveness are as yet unknown. As Semantic Memory (SM) is the component of memory responsible for the conceptualization and storage of knowledge, this paper sets out to propose an investigation of its role in the learning process of AIM and to provide insights as to why the embodied experience of learning with AIM is more effective than that of other methods. The tasks proposed for administration take into account the relation between gestures, a learner’s memorization process, and Semantic Memory. Lastly, this paper offers a future research idea about the learning mechanisms of sign languages in people with hearing deficits and in a healthy population, aiming to indicate which brain mechanisms benefit from the teaching method of AIM and to reveal important brain functions for SLA via AIM.
  • Zinn, C., Cablitz, G., Ringersma, J., Kemps-Snijders, M., & Wittenburg, P. (2008). Constructing knowledge spaces from linguistic resources. In Proceedings of the CIL 18 Workshop on Linguistic Studies of Ontology: From lexical semantics to formal ontologies and back.
  • Zinn, C. (2008). Conceptual spaces in ViCoS. In S. Bechhofer, M. Hauswirth, J. Hoffmann, & M. Koubarakis (Eds.), The semantic web: Research and applications (pp. 890-894). Berlin: Springer.

    Abstract

    We describe ViCoS, a tool for constructing and visualising conceptual spaces in the area of language documentation. ViCoS allows users to enrich existing lexical information about the words of a language with conceptual knowledge. Their work towards language-based, informal ontology building must be supported by easy-to-use workflows and supporting software, which we will demonstrate.
  • Zora, H., Riad, T., & Ylinen, S. (2019). Prosodically controlled derivations in the mental lexicon. Journal of Neurolinguistics, 52: 100856. doi:10.1016/j.jneuroling.2019.100856.

    Abstract

    Swedish morphemes are classified as prosodically specified or prosodically unspecified, depending on lexical or phonological stress, respectively. Here, we investigate the allomorphy of the suffix -(i)sk, which indicates the distinction between lexical and phonological stress; if attached to a lexically stressed morpheme, it takes a non-syllabic form (-sk), whereas if attached to a phonologically stressed morpheme, an epenthetic vowel is inserted (-isk). Using mismatch negativity (MMN), we explored the neural processing of this allomorphy across lexically stressed and phonologically stressed morphemes. In an oddball paradigm, participants were occasionally presented with congruent and incongruent derivations, created by the suffix -(i)sk, within the repetitive presentation of their monomorphemic stems. The results indicated that the congruent derivation of the lexically stressed stem elicited a larger MMN than the incongruent sequences of the same stem and the derivational suffix, whereas after the phonologically stressed stem a non-significant tendency towards an opposite pattern was observed. We argue that the significant MMN response to the congruent derivation in the lexical stress condition is in line with lexical MMN, indicating a holistic processing of the sequence of lexically stressed stem and derivational suffix. The enhanced MMN response to the incongruent derivation in the phonological stress condition, on the other hand, is suggested to reflect combinatorial processing of the sequence of phonologically stressed stem and derivational suffix. These findings bring a new aspect to the dual-system approach to neural processing of morphologically complex words, namely the specification of word stress.
  • Zora, H., Gussenhoven, C., Tremblay, A., & Liu, F. (2022). Editorial: Crosstalk between intonation and lexical tones: Linguistic, cognitive and neuroscience perspectives. Frontiers in Psychology, 13: 1101499. doi:10.3389/fpsyg.2022.1101499.

    Abstract

    The interplay between categorical and continuous aspects of the speech signal remains central and yet controversial in the fields of phonetics and phonology. The division between phonological abstractions and phonetic variations has been particularly relevant to the unraveling of diverse communicative functions of pitch in the domain of prosody. Pitch influences vocal communication in two major but fundamentally different ways, and lexical and intonational tones exquisitely capture these functions. Lexical tone contrasts convey lexical meanings as well as derivational meanings at the word level and are grammatically encoded as discrete structures. Intonational tones, on the other hand, signal post-lexical meanings at the phrasal level and typically allow gradient pragmatic variations. Since categorical and gradient uses of pitch are ubiquitous and closely intertwined in their physiological and psychological processes, further research is warranted for a more detailed understanding of their structural and functional characterisations. This Research Topic addresses this matter from a wide range of perspectives, including first and second language acquisition, speech production and perception, and structural and functional diversity, working with distinct languages and experimental measures. In the following, we provide a short overview of the contributions submitted to this topic.

    Additional information

    Also published as a book chapter (2023).
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2019). Slow naming of pictures facilitates memory for their names. Psychonomic Bulletin & Review, 26(5), 1675-1682. doi:10.3758/s13423-019-01620-x.

    Abstract

    Speakers remember their own utterances better than those of their interlocutors, suggesting that language production is beneficial to memory. This may be partly explained by a generation effect: The act of generating a word is known to lead to a memory advantage (Slamecka & Graf, 1978). In earlier work, we showed a generation effect for recognition of images (Zormpa, Brehm, Hoedemaker, & Meyer, 2019). Here, we tested whether the recognition of their names would also benefit from name generation. Testing whether picture naming improves memory for words was our primary aim, as it serves to clarify whether the representations affected by generation are visual or conceptual/lexical. A secondary aim was to assess the influence of processing time on memory. Fifty-one participants named pictures in three conditions: after hearing the picture name (identity condition), backward speech, or an unrelated word. A day later, recognition memory was tested in a yes/no task. Memory in the backward speech and unrelated conditions, which required generation, was superior to memory in the identity condition, which did not require generation. The time taken by participants for naming was a good predictor of memory, such that words that took longer to be retrieved were remembered better. Importantly, that was the case only when generation was required: In the no-generation (identity) condition, processing time was not related to recognition memory performance. This work has shown that generation affects conceptual/lexical representations, making an important contribution to the understanding of the relationship between memory and language.
  • Zormpa, E., Brehm, L., Hoedemaker, R. S., & Meyer, A. S. (2019). The production effect and the generation effect improve memory in picture naming. Memory, 27(3), 340-352. doi:10.1080/09658211.2018.1510966.

    Abstract

    The production effect (better memory for words read aloud than words read silently) and the picture superiority effect (better memory for pictures than words) both improve item memory in a picture naming task (Fawcett, J. M., Quinlan, C. K., & Taylor, T. L. (2012). Interplay of the production and picture superiority effects: A signal detection analysis. Memory (Hove, England), 20(7), 655–666. doi:10.1080/09658211.2012.693510). Because picture naming requires coming up with an appropriate label, the generation effect (better memory for generated than read words) may contribute to the latter effect. In two forced-choice memory experiments, we tested the role of generation in a picture naming task on later recognition memory. In Experiment 1, participants named pictures silently or aloud with the correct name or an unreadable label superimposed. We observed a generation effect, a production effect, and an interaction between the two. In Experiment 2, unreliable labels were included to ensure full picture processing in all conditions. In this experiment, we observed a production and a generation effect but no interaction, implying the effects are dissociable. This research demonstrates the separable roles of generation and production in picture naming and their impact on memory. As such, it informs the link between memory and language production and has implications for memory asymmetries between language production and comprehension.

    Additional information

    pmem_a_1510966_sm9257.pdf
  • Zuidema, W., & Fitz, H. (2019). Key issues and future directions: Models of human language and speech processing. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 353-358). Cambridge, MA: MIT Press.
  • Zwitserlood, I., Ozyurek, A., & Perniss, P. M. (2008). Annotation of sign and gesture cross-linguistically. In O. Crasborn, E. Efthimiou, T. Hanke, E. D. Thoutenhoofd, & I. Zwitserlood (Eds.), Construction and Exploitation of Sign Language Corpora. 3rd Workshop on the Representation and Processing of Sign Languages (pp. 185-190). Paris: ELDA.

    Abstract

    This paper discusses the construction of a cross-linguistic, bimodal corpus containing three modes of expression: expressions from two sign languages, speech and gestural expressions in two spoken languages and pantomimic expressions by users of two spoken languages who are requested to convey information without speaking. We discuss some problems and tentative solutions for the annotation of utterances expressing spatial information about referents in these three modes, suggesting a set of comparable codes for the description of both sign and gesture. Furthermore, we discuss the processing of entered annotations in ELAN, e.g. relating descriptive annotations to analytic annotations in all three modes and performing relational searches across annotations on different tiers.
  • Zwitserlood, I. (2008). Grammatica-vertaalmethode en nederlandse gebarentaal. Levende Talen Magazine, 95(5), 28-29.
  • Zwitserlood, I. (2008). Morphology below the level of the sign - frozen forms and classifier predicates. In J. Quer (Ed.), Proceedings of the 8th Conference on Theoretical Issues in Sign Language Research (TISLR) (pp. 251-272). Hamburg: Signum Verlag.

    Abstract

    The lexicons of many sign languages hold large proportions of “frozen” forms, viz. signs that are generally considered to have been formed productively (as classifier predicates), but that have diachronically undergone processes of lexicalisation. Nederlandse Gebarentaal (Sign Language of the Netherlands; henceforth: NGT) also has many of these signs (Van der Kooij 2002, Zwitserlood 2003). In contrast to the general view on “frozen” forms, a few researchers claim that these signs may be formed according to productive sign formation rules, notably Brennan (1990) for BSL, and Meir (2001, 2002) for ISL. Following these claims, I suggest an analysis of “frozen” NGT signs as morphologically complex, using the framework of Distributed Morphology. The signs in question are derived in a similar way as classifier predicates; hence their similar form (but diverging characteristics). I will indicate how and why the structure and use of classifier predicates and “frozen” forms differ. Although my analysis focuses on NGT, it may also be applicable to other sign languages.