Presentations

  • Benetti, S., & Ferrari, A. (2023). Towards a neurocognitive model of multisensory processing in face-to-face communication. Talk presented at the 21st International Multisensory Research Forum (IMRF 2023). Brussels, Belgium. 2023-06-23 - 2023-06-30.

    Abstract

    Building on previous calls for the need to study communication in its multimodal manifestation and ecological context, we offer an original perspective that bridges recent advances in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we highlight a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. We conclude that the time is ripe to accept the challenge we, among others before us, have advocated in this perspective and to move beyond the speech-centred perspective dominating research on the neurocognitive mechanisms of human communication and language. Testing our framework represents a novel and promising endeavour for future research.
  • Chen, Y., Ferrari, A., Hagoort, P., Bocanegra, B., & Poletiek, F. H. (2023). Learning hierarchical centre-embedding structures: Influence of distributional properties of the Input. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.

    Abstract

    Nearly all human languages have grammars with complex recursive structures. These structures pose notable learning challenges. Two distributional properties of the input may facilitate learning: the presence of semantic biases (e.g. p(barks|dog) > p(talks|dog)) and the Zipf distribution, with short sentences being far more frequent than longer ones. This project tested the effect of these sources of information on statistical learning of a hierarchical centre-embedding grammar, using an artificial grammar learning paradigm. Semantic biases were represented by variations in transitional probabilities between words, with a biased input (p(barks|dog) > p(talks|dog)) compared to a non-biased input (p(barks|dog) = p(talks|dog)). The Zipf distribution was compared to a flat distribution, with sentences of different lengths occurring equally often. In a 2×2 factorial design, we tested for effects of biased transitional probabilities (biased/non-biased) and the distribution of sequences with varying length (Zipf distribution/flat distribution) on implicit learning and explicit ratings of grammaticality. Preliminary results show that a Zipf-shaped and semantically biased input facilitates grammar learnability. Thus, this project contributes to understanding how we learn complex structures with long-distance dependencies: learning may be sensitive to the specific distributional properties of the linguistic input, mirroring meaningful aspects of the world and favouring short utterances.
  • Mazzi, G., Ferrari, A., Valzolgher, C., Tommasini, M., Pavani, F., & Benetti, S. (2023). Domain-general Bayesian causal inference in multisensory processing of face-to-face interactions. Poster presented at the Workshop on Concepts, Actions and Objects (CAOs 2023), Rovereto, Italy.
