Publications

  • Pouw, W., Wit, J., Bögels, S., Rasenberg, M., Milivojevic, B., & Ozyurek, A. (2021). Semantically related gestures move alike: Towards a distributional semantics of gesture kinematics. In V. G. Duffy (Ed.), Digital human modeling and applications in health, safety, ergonomics and risk management. Human body, motion and behavior: 12th International Conference, DHM 2021, Held as Part of the 23rd HCI International Conference, HCII 2021 (pp. 269-287). Berlin: Springer. doi:10.1007/978-3-030-77817-0_20.
  • Roberts, S. G., Everett, C., & Blasi, D. (2015). Exploring potential climate effects on the evolution of human sound systems. In H. Little (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences [ICPhS 2015] Satellite Event: The Evolution of Phonetic Capabilities: Causes, constraints, consequences (pp. 14-19). Glasgow: ICPhS.

    Abstract

    We suggest that it is now possible to conduct research on a topic which might be called evolutionary geophonetics. The main question is how the climate influences the evolution of language. This involves biological adaptations to the climate that may affect biases in production and perception; cultural evolutionary adaptations of the sounds of a language to climatic conditions; and influences of the climate on language diversity and contact. We discuss these ideas with special reference to a recent hypothesis that lexical tone is not adaptive in dry climates (Everett, Blasi & Roberts, 2015).
  • Rossi, G. (2021). Conversation analysis (CA). In J. Stanlaw (Ed.), The International Encyclopedia of Linguistic Anthropology. Wiley-Blackwell. doi:10.1002/9781118786093.iela0080.

    Abstract

    Conversation analysis (CA) is an approach to the study of language and social interaction that puts at center stage its sequential development. The chain of initiating and responding actions that characterizes any interaction is a source of internal evidence for the meaning of social behavior as it exposes the understandings that participants themselves give of what one another is doing. Such an analysis requires the close and repeated inspection of audio and video recordings of naturally occurring interaction, supported by transcripts and other forms of annotation. Distributional regularities are complemented by a demonstration of participants' orientation to deviant behavior. CA has long maintained a constructive dialogue and reciprocal influence with linguistic anthropology. This includes a recent convergence on the cross-linguistic and cross-cultural study of social interaction.
  • Schiller, N. O., & Verdonschot, R. G. (2015). Accessing words from the mental lexicon. In J. Taylor (Ed.), The Oxford handbook of the word (pp. 481-492). Oxford: Oxford University Press.

    Abstract

    This chapter describes how speakers access words from the mental lexicon. Lexical access is a crucial component in the process of transforming thoughts into speech. Some theories consider lexical access to be strictly serial and discrete, while others view this process as cascading or even interactive, i.e. the different sub-levels influence each other. We discuss some of the evidence in favour of and against these viewpoints, and also present arguments regarding the ongoing debate on how words are selected for production. Another important issue concerns access to morphologically complex words such as derived and inflected words, as well as compounds. Are these accessed as whole entities from the mental lexicon, or are the parts assembled online? This chapter tries to provide an answer to that question as well.
  • Schmidt, J., Scharenborg, O., & Janse, E. (2015). Semantic processing of spoken words under cognitive load in older listeners. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Processing of semantic information in language comprehension has been suggested to be modulated by attentional resources. Consequently, cognitive load would be expected to reduce semantic priming, but studies have yielded inconsistent results. This study investigated whether cognitive load affects semantic activation in speech processing in older adults, and whether this is modulated by individual differences in cognitive and hearing abilities. Older adults participated in an auditory continuous lexical decision task in a low-load and high-load condition. The group analysis showed only a marginally significant reduction of semantic priming in the high-load condition compared to the low-load condition. The individual differences analysis showed that semantic priming was significantly reduced under increased load in participants with poorer attention-switching control. Hence, a resource-demanding secondary task may affect the integration of spoken words into a coherent semantic representation for listeners with poorer attentional skills.
  • Schriefers, H., & Vigliocco, G. (2015). Speech Production, Psychology of [Repr.]. In J. D. Wright (Ed.), International Encyclopedia of the Social & Behavioral Sciences (2nd ed) Vol. 23 (pp. 255-258). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.52022-4.

    Abstract

    This article is reproduced from the previous edition, volume 22, pp. 14879–14882, © 2001, Elsevier Ltd.
  • Schubotz, L., Holler, J., & Ozyurek, A. (2015). Age-related differences in multi-modal audience design: Young, but not old speakers, adapt speech and gestures to their addressee's knowledge. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference (pp. 211-216). Nantes: Université de Nantes.

    Abstract

    Speakers can adapt their speech and co-speech gestures for addressees. Here, we investigate whether this ability is modulated by age. Younger and older adults participated in a comic narration task in which one participant (the speaker) narrated six short comic stories to another participant (the addressee). One half of each story was known to both participants, the other half only to the speaker. Younger but not older speakers used more words and gestures when narrating novel story content as opposed to known content. We discuss cognitive and pragmatic explanations of these findings and relate them to theories of gesture production.
  • Schubotz, L., Oostdijk, N., & Ernestus, M. (2015). Y’know vs. you know: What phonetic reduction can tell us about pragmatic function. In S. Lestrade, P. De Swart, & L. Hogeweg (Eds.), Addenda: Artikelen voor Ad Foolen (pp. 361-380). Nijmegen: Radboud University.
  • Schuerman, W. L., Nagarajan, S., & Houde, J. (2015). Changes in consonant perception driven by adaptation of vowel production to altered auditory feedback. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Adaptation to altered auditory feedback has been shown to induce subsequent shifts in perception. However, it is uncertain whether these perceptual changes may generalize to other speech sounds. In this experiment, we tested whether exposing the production of a vowel to altered auditory feedback affects perceptual categorization of a consonant distinction. In two sessions, participants produced CVC words containing the vowel /i/, while intermittently categorizing stimuli drawn from a continuum between "see" and "she." In the first session feedback was unaltered, while in the second session the formants of the vowel were shifted 20% towards /u/. Adaptation to the altered vowel was found to reduce the proportion of perceived /S/ stimuli. We suggest that this reflects an alteration to the sensorimotor mapping that is shared between vowels and consonants.
  • Senft, G. (2021). A very special letter. In T. Szczerbowski (Ed.), Language "as round as an orange"... In memory of Professor Krystyna Pisarkowa on the 90th anniversary of her birth (p. 367). Kraków: Uniwersytetu Pedagogicznego.
  • Senft, G. (2015). The Trobriand Islanders' concept of karewaga. In S. Lestrade, P. de Swart, & L. Hogeweg (Eds.), Addenda. Artikelen voor Ad Foolen (pp. 381-390). Nijmegen: Radboud University.
  • Seuren, P. A. M. (1974). Autonomous versus semantic syntax. In P. A. M. Seuren (Ed.), Semantic syntax (pp. 96-122). Oxford: Oxford University Press.
  • Seuren, P. A. M. (1974). Introduction. In P. A. M. Seuren (Ed.), Semantic syntax (pp. 1-28). Oxford: Oxford University Press.
  • Seuren, P. A. M. (1974). Negative's travels. In P. A. M. Seuren (Ed.), Semantic syntax (pp. 183-208). Oxford: Oxford University Press.
  • Seuren, P. A. M. (2015). Prestructuralist and structuralist approaches to syntax. In T. Kiss, & A. Alexiadou (Eds.), Syntax--theory and analysis: An international handbook (pp. 134-157). Berlin: Mouton de Gruyter.
  • Seuren, P. A. M. (1974). Pronomi clitici in italiano. In M. Medici, & A. Sangregorio (Eds.), Fenomeni morfologici e sintattici nell'Italiano contemporaneo (pp. 309-327). Roma: Bulzoni.
  • Seuren, P. A. M. (2015). Taal is complexer dan je denkt - recursief. In S. Lestrade, P. De Swart, & L. Hogeweg (Eds.), Addenda. Artikelen voor Ad Foolen (pp. 393-400). Nijmegen: Radboud University.
  • Li, Y., Wu, S., Shi, S., Tong, S., Zhang, Y., & Guo, X. (2021). Enhanced inter-brain connectivity between children and adults during cooperation: A dual EEG study. In Proceedings of the 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) (pp. 6289-6292). doi:10.1109/EMBC46164.2021.9630330.

    Abstract

    Previous fNIRS studies have suggested that adult-child cooperation is accompanied by increased inter-brain synchrony. However, its reflection in the electrophysiological synchrony remains unclear. In this study, we designed a naturalistic and well-controlled adult-child interaction paradigm using a tangram solving video game, and recorded dual-EEG from child and adult dyads during cooperative and individual conditions. By calculating the directed inter-brain connectivity in the theta and alpha bands, we found that the inter-brain frontal network was more densely connected and stronger in strength during the cooperative than the individual condition when the adult was watching the child playing. Moreover, the inter-brain network across different dyads shared more common information flows from the player to the observer during cooperation, but was more individually different in solo play. The results suggest an enhancement in inter-brain EEG interactions during adult-child cooperation. However, the enhancement was not evident in all cooperative cases but partly depended on the role of participants.
  • Slonimska, A., Ozyurek, A., & Campisi, E. (2015). Ostensive signals: Markers of communicative relevance of gesture during demonstration to adults and children. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference (pp. 217-222). Nantes: Université de Nantes.

    Abstract

    Speakers adapt their speech and gestures in various ways for their audience. We investigated further whether they use ostensive signals (eye gaze, ostensive speech (e.g. like this, this), or a combination of both) in relation to their gestures when talking to different addressees, i.e., to another adult or a child, in a multimodal demonstration task. While adults used more eye gaze towards their gestures with other adults than with children, they were more likely to use combined ostensive signals for children than for adults. Thus, speakers mark the communicative relevance of their gestures with different types of ostensive signals, taking different types of addressees into account.
  • Smorenburg, L., Rodd, J., & Chen, A. (2015). The effect of explicit training on the prosodic production of L2 sarcasm by Dutch learners of English. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow, UK: University of Glasgow.

    Abstract

    Previous research [9] suggests that Dutch learners of (British) English are not able to express sarcasm prosodically in their L2. The present study investigates whether explicit training on the prosodic markers of sarcasm in English can improve learners’ realisation of sarcasm. Sarcastic speech was elicited in short simulated telephone conversations between Dutch advanced learners of English and a native British English-speaking ‘friend’ in two sessions, fourteen days apart. Between the two sessions, participants were trained by means of (1) a presentation, (2) directed independent practice, and (3) evaluation of participants’ production and individual feedback in small groups. L1 British English-speaking raters subsequently rated how sarcastic the participants’ responses sounded on a five-point scale. Productions obtained after the training received significantly higher sarcasm ratings than those obtained before the training; explicit training on prosody thus has a positive effect on learners’ production of sarcasm.
  • De Sousa, H., Langella, F., & Enfield, N. J. (2015). Temperature terms in Lao, Southern Zhuang, Southern Pinghua and Cantonese. In M. Koptjevskaja-Tamm (Ed.), The linguistics of temperature (pp. 594-638). Amsterdam: Benjamins.
  • Ten Bosch, L., Boves, L., & Ernestus, M. (2015). DIANA, an end-to-end computational model of human word comprehension. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This paper presents DIANA, a new computational model of human speech processing. It is the first model that simulates the complete processing chain from the on-line processing of an acoustic signal to the execution of a response, including reaction times. Moreover, it assumes minimal modularity. DIANA consists of three components. The activation component computes a probabilistic match between the input acoustic signal and representations in DIANA’s lexicon, resulting in a list of word hypotheses that changes over time as the input unfolds. The decision component operates on this list and selects a word as soon as sufficient evidence is available. Finally, the execution component accounts for the time needed to execute a behavioural action. We show that DIANA simulates the average participant in a word recognition experiment well.
  • Ten Bosch, L., Boves, L., Tucker, B., & Ernestus, M. (2015). DIANA: Towards computational modeling reaction times in lexical decision in North American English. In Proceedings of Interspeech 2015: The 16th Annual Conference of the International Speech Communication Association (pp. 1576-1580).

    Abstract

    DIANA is an end-to-end computational model of speech processing, which takes as input the speech signal, and provides as output the orthographic transcription of the stimulus, a word/non-word judgment and the associated estimated reaction time. So far, the model has only been tested for Dutch. In this paper, we extend DIANA such that it can also process North American English. The model is tested by having it simulate human participants in a large scale North American English lexical decision experiment. The simulations show that DIANA can adequately approximate the reaction times of an average participant (r = 0.45). In addition, they indicate that DIANA does not yet adequately model the cognitive processes that take place after stimulus offset.
  • Terband, H., Rodd, J., & Maas, E. (2015). Simulations of feedforward and feedback control in apraxia of speech (AOS): Effects of noise masking on vowel production in the DIVA model. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015).

    Abstract

    Apraxia of Speech (AOS) is a motor speech disorder whose precise nature is still poorly understood. A recent behavioural experiment featuring a noise masking paradigm suggests that AOS reflects a disruption of feedforward control, whereas feedback control is spared and plays a more prominent role in achieving and maintaining segmental contrasts [10]. In the present study, we set out to validate the interpretation of AOS as a feedforward impairment by means of a series of computational simulations with the DIVA model [6, 7] mimicking the behavioural experiment. Simulation results showed a larger reduction in vowel spacing and a smaller vowel dispersion in the masking condition compared to the no-masking condition for the simulated feedforward deficit, whereas the other groups showed an opposite pattern. These results mimic the patterns observed in the human data, corroborating the notion that AOS can be conceptualized as a deficit in feedforward control.
  • Torreira, F. (2015). Melodic alternations in Spanish. In The Scottish Consortium for ICPhS 2015 (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015) (pp. 946.1-5). Glasgow, UK: The University of Glasgow. Retrieved from http://www.icphs2015.info/pdfs/Papers/ICPHS0946.pdf.

    Abstract

    This article describes how the tonal elements of two common Spanish intonation contours, the falling statement and the low-rising-falling request, align with the segmental string in broad-focus utterances differing in number of prosodic words. Using an imitation-and-completion task, we show (i) that the last stressed syllable of the utterance, traditionally viewed as carrying the ‘nuclear’ accent, associates with either a high or a low tonal element depending on phrase length, (ii) that certain tonal elements can be realized or omitted depending on the availability of specific metrical positions in their intonational phrase, and (iii) that the high tonal element of the request contour associates with either a stressed syllable or an intonational phrase edge depending on phrase length. On the basis of these facts, and in contrast to previous descriptions of Spanish intonation relying on obligatory and constant nuclear contours (e.g., L* L% for all neutral statements), we argue for a less constrained intonational morphology involving tonal units linked to the segmental string via contour-specific principles.
  • Tourtouri, E. N., Delogu, F., & Crocker, M. W. (2015). ERP indices of situated reference in visual contexts. In D. Noelle, R. Dale, A. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 2422-2427). Austin: Cognitive Science Society.

    Abstract

    Violations of the maxims of Quantity occur when utterances provide more (over-specified) or less (under-specified) information than strictly required for referent identification. While behavioural data suggest that under-specified expressions lead to comprehension difficulty and communicative failure, there is no consensus as to whether over-specified expressions are also detrimental to comprehension. In this study we shed light on this debate, providing neurophysiological evidence supporting the view that extra information facilitates comprehension. We further present novel evidence that referential failure due to under-specification is qualitatively different from explicit cases of referential failure, when no matching referential candidate is available in the context.
  • Trilsbeek, P., Broeder, D., Elbers, W., & Moreira, A. (2015). A sustainable archiving software solution for The Language Archive. In Proceedings of the 4th International Conference on Language Documentation and Conservation (ICLDC).
  • Trujillo, J. P., Levinson, S. C., & Holler, J. (2021). Visual information in computer-mediated interaction matters: Investigating the association between the availability of gesture and turn transition timing in conversation. In M. Kurosu (Ed.), Human-Computer Interaction. Design and User Experience Case Studies. HCII 2021 (pp. 643-657). Cham: Springer. doi:10.1007/978-3-030-78468-3_44.

    Abstract

    Natural human interaction involves the fast-paced exchange of speaker turns. Crucially, if a next speaker waited with planning their turn until the current speaker was finished, language production models would predict much longer turn transition times than what we observe. Next speakers must therefore prepare their turn in parallel to listening. Visual signals likely play a role in this process, for example by helping the next speaker to process the ongoing utterance and thus prepare an appropriately-timed response.

    To understand how visual signals contribute to the timing of turn-taking, and to move beyond the mostly qualitative studies of gesture in conversation, we examined unconstrained, computer-mediated conversations between 20 pairs of participants while systematically manipulating speaker visibility. Using motion tracking and manual gesture annotation, we assessed 1) how visibility affected the timing of turn transitions, and 2) whether use of co-speech gestures and 3) the communicative kinematic features of these gestures were associated with changes in turn transition timing.

    We found that 1) decreased visibility was associated with less tightly timed turn transitions, and 2) the presence of gestures was associated with more tightly timed turn transitions across visibility conditions. Finally, 3) structural and salient kinematics contributed to gesture’s facilitatory effect on turn transition times.

    Our findings suggest that speaker visibility (and especially the presence and kinematic form of gestures) during conversation contributes to the temporal coordination of conversational turns in computer-mediated settings. Furthermore, our study demonstrates that it is possible to use naturalistic conversation and still obtain controlled results.
  • Udden, J., & Schoffelen, J.-M. (2015). Mother of all Unification Studies (MOUS). In A. E. Konopka (Ed.), Research Report 2013 | 2014 (pp. 21-22). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.2236748.
  • Van Heugten, M., Bergmann, C., & Cristia, A. (2015). The Effects of Talker Voice and Accent on Young Children's Speech Perception. In S. Fuchs, D. Pape, C. Petrone, & P. Perrier (Eds.), Individual Differences in Speech Production and Perception (pp. 57-88). Bern: Peter Lang.

    Abstract

    Within the first few years of life, children acquire many of the building blocks of their native language. This not only involves knowledge about the linguistic structure of spoken language, but also knowledge about the way in which this linguistic structure surfaces in their speech input. In this chapter, we review how infants and toddlers cope with differences between speakers and accents. Within the context of milestones in early speech perception, we examine how voice and accent characteristics are integrated during language processing, looking closely at the advantages and disadvantages of speaker and accent familiarity, surface-level deviation between two utterances, variability in the input, and prior speaker exposure. We conclude that although deviation from the child’s standard can complicate speech perception early in life, young listeners can overcome these additional challenges. This suggests that early spoken language processing is flexible and adaptive to the listening situation at hand.
  • Verhoef, T., Roberts, S. G., & Dingemanse, M. (2015). Emergence of systematic iconicity: Transmission, interaction and analogy. In D. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 2481-2486). Austin, TX: Cognitive Science Society.

    Abstract

    Languages combine arbitrary and iconic signals. How do iconic signals emerge and when do they persist? We present an experimental study of the role of iconicity in the emergence of structure in an artificial language. Using an iterated communication game in which we control the signalling medium as well as the meaning space, we study the evolution of communicative signals in transmission chains. This sheds light on how affordances of the communication medium shape and constrain the mappability and transmissibility of form-meaning pairs. We find that iconic signals can form the building blocks for wider compositional patterns.
  • Wanrooij, K., De Vos, J., & Boersma, P. (2015). Distributional vowel training may not be effective for Dutch adults. In Scottish consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    Distributional vowel training for adults has been reported as “effective” for Spanish and Bulgarian learners of Dutch vowels, in studies using a behavioural task. A recent study did not yield a similar clear learning effect for Dutch learners of the English vowel contrast /æ/~/ε/, as measured with event-related potentials (ERPs). The present study aimed to examine the possibility that the latter result was related to the method. As in the ERP study, we tested whether distributional training improved Dutch adult learners’ perception of English /æ/~/ε/. However, we measured behaviour instead of ERPs, in a design identical to that used in the previous studies with Spanish learners. The results do not support an effect of distributional training and thus “replicate” the ERP study. We conclude that it remains unclear whether distributional vowel training is effective for Dutch adults.
  • Willems, R. M. (2015). Cognitive neuroscience of natural language use: Introduction. In Cognitive neuroscience of natural language use (pp. 1-7). Cambridge: Cambridge University Press.
  • Zhang, Y., Ding, R., Frassinelli, D., Tuomainen, J., Klavinskis-Whiting, S., & Vigliocco, G. (2021). Electrophysiological signatures of second language multimodal comprehension. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2971-2977). Vienna: Cognitive Science Society.

    Abstract

    Language is multimodal: non-linguistic cues, such as prosody, gestures and mouth movements, are always present in face-to-face communication and interact to support processing. In this paper, we ask whether and how multimodal cues affect L2 processing by recording EEG from highly proficient bilinguals while they watched naturalistic materials. For each word, we quantified surprisal and the informativeness of prosody, gestures, and mouth movements. We found that each cue modulates the N400: prosodic accentuation, meaningful gestures, and informative mouth movements all reduce the N400. Further, the effects of meaningful gestures, but not of mouth informativeness, are enhanced by prosodic accentuation, whereas the effects of mouth movements are enhanced by meaningful gestures but reduced by beat gestures. Compared with L1 participants, L2 participants benefit less from cues and their interactions, except for meaningful gestures and mouth movements. Thus, in real-world language comprehension, L2 comprehenders use multimodal cues just as L1 comprehenders do, albeit to a lesser extent.
  • Zhang, Y., Amatuni, A., Cain, E., Wang, X., Crandall, D., & Yu, C. (2021). Human learners integrate visual and linguistic information in cross-situational verb learning. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2267-2273). Vienna: Cognitive Science Society.

    Abstract

    Learning verbs is challenging because it is difficult to infer the precise meaning of a verb when there are a multitude of relations that one can derive from a single event. To study this verb learning challenge, we used children's egocentric view collected from naturalistic toy-play interaction as learning materials and investigated how visual and linguistic information provided in individual naming moments as well as cross-situational information provided from multiple learning moments can help learners resolve this mapping problem using the Human Simulation Paradigm. Our results show that learners benefit from seeing children's egocentric views compared to third-person observations. In addition, linguistic information can help learners identify the correct verb meaning by eliminating possible meanings that do not belong to the linguistic category. Learners are also able to integrate visual and linguistic information both within and across learning situations to reduce the ambiguity in the space of possible verb meanings.
  • Zhang, Y., Yurovsky, D., & Yu, C. (2015). Statistical word learning is a continuous process: Evidence from the human simulation paradigm. In D. Noelle, R. Dale, A. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 2422-2427). Austin: Cognitive Science Society.

    Abstract

    In the word-learning domain, both adults and young children are able to find the correct referent of a word from highly ambiguous contexts that involve many words and objects by computing distributional statistics across the co-occurrences of words and referents at multiple naming moments (Yu & Smith, 2007; Smith & Yu, 2008). However, there is still debate regarding how learners accumulate distributional information to learn object labels in natural learning environments, and what underlying learning mechanism learners are most likely to adopt. Using the Human Simulation Paradigm (Gillette, Gleitman, Gleitman & Lederer, 1999), we found that participants’ learning performance gradually improved and that their ability to remember and carry over partial knowledge from past learning instances facilitated subsequent learning. These results support the statistical learning model that word learning is a continuous process.
  • Zimianiti, E., Dimitrakopoulou, M., & Tsangalidis, A. (2021). Thematic roles in dementia: The case of psychological verbs. In A. Botinis (Ed.), ExLing 2021: Proceedings of the 12th International Conference of Experimental Linguistics (pp. 269-272). Athens, Greece: ExLing Society.

    Abstract

    This study investigates the difficulty of people with Mild Cognitive Impairment (MCI), mild and moderate Alzheimer’s disease (AD) in the production and comprehension of psychological verbs, as thematic realization may involve both the canonical and non-canonical realization of arguments. More specifically, we aim to examine whether there is a deficit in the mapping of syntactic and semantic representations in psych-predicates regarding Greek-speaking individuals with MCI and AD, and whether the linguistic abilities associated with θ-role assignment decrease as the disease progresses. Moreover, given the decline of cognitive abilities in people with MCI and AD, we explore the effects of components of memory (Semantic, Episodic, and Working Memory) on the assignment of thematic roles in constructions with psychological verbs.
