Andrea Ravignani

Publications

  • Ravignani, A., & Thompson, B. (2017). A note on ‘Noam Chomsky – What kind of creatures are we?’. Language in Society, 46(3), 446-447. doi:10.1017/S0047404517000288.
  • Ravignani, A., Honing, H., & Kotz, S. A. (2017). Editorial: The evolution of rhythm cognition: Timing in music and speech. Frontiers in Human Neuroscience, 11: 303. doi:10.3389/fnhum.2017.00303.

    Abstract

    This editorial serves a number of purposes. First, it aims to summarize and discuss the 33 accepted contributions to the special issue “The evolution of rhythm cognition: Timing in music and speech.” The major focus of the issue is the cognitive neuroscience of rhythm, intended as a neurobehavioral trait undergoing an evolutionary process. Second, this editorial provides the interested reader with a guide to navigate the interdisciplinary contributions to this special issue. For this purpose, we have compiled Table 1, where methods, topics, and study species are summarized and related across contributions. Third, we also briefly highlight research relevant to the evolution of rhythm that has appeared in other journals while this special issue was compiled. Altogether, this editorial constitutes a summary of rhythm research in music and speech spanning two years, from mid-2015 until mid-2017.
  • Ravignani, A., & Sonnweber, R. (2017). Chimpanzees process structural isomorphisms across sensory modalities. Cognition, 161, 74-79. doi:10.1016/j.cognition.2017.01.005.
  • Ravignani, A., Gross, S., Garcia, M., Rubio-Garcia, A., & De Boer, B. (2017). How small could a pup sound? The physical bases of signaling body size in harbor seals. Current Zoology, 63(4), 457-465. doi:10.1093/cz/zox026.

    Abstract

    Vocal communication is a crucial aspect of animal behavior. The mechanism which most mammals use to vocalize relies on three anatomical components. First, air overpressure is generated inside the lower vocal tract. Second, as the airstream goes through the glottis, sound is produced via vocal fold vibration. Third, this sound is further filtered by the geometry and length of the upper vocal tract. Evidence from mammalian anatomy and bioacoustics suggests that some of these three components may covary with an animal’s body size. The framework provided by acoustic allometry suggests that, because vocal tract length (VTL) is more strongly constrained by the growth of the body than vocal fold length (VFL), VTL generates more reliable acoustic cues to an animal’s size. This hypothesis is often tested acoustically but rarely anatomically, especially in pinnipeds. Here, we test the anatomical bases of the acoustic allometry hypothesis in harbor seal pups Phoca vitulina. We dissected and measured vocal tract, vocal folds, and other anatomical features of 15 harbor seals post-mortem. We found that, while VTL correlates with body size, VFL does not. This suggests that, while body growth puts anatomical constraints on how vocalizations are filtered by harbor seals’ vocal tract, no such constraints appear to exist on vocal folds, at least during puppyhood. It is particularly interesting to find anatomical constraints on harbor seals’ vocal tracts, the same anatomical region partially enabling pups to produce individually distinctive vocalizations.
  • Ravignani, A., & Norton, P. (2017). Measuring rhythmic complexity: A primer to quantify and compare temporal structure in speech, movement, and animal vocalizations. Journal of Language Evolution, 2(1), 4-19. doi:10.1093/jole/lzx002.

    Abstract

    Research on the evolution of human speech and phonology benefits from the comparative approach: structural, spectral, and temporal features can be extracted and compared across species in an attempt to reconstruct the evolutionary history of human speech. Here we focus on analytical tools to measure and compare temporal structure in human speech and animal vocalizations. We introduce the reader to a range of statistical methods usable, on the one hand, to quantify rhythmic complexity in single vocalizations, and on the other hand, to compare rhythmic structure between multiple vocalizations. These methods include: time series analysis, distributional measures, variability metrics, Fourier transform, auto- and cross-correlation, phase portraits, and circular statistics. Using computer-generated data, we apply a range of techniques, walking the reader through the necessary software and its functions. We describe which techniques are most appropriate to test particular hypotheses on rhythmic structure, and provide possible interpretations of the tests. These techniques can be equally well applied to find rhythmic structure in gesture, movement, and any other behavior developing over time, when the research focus lies on its temporal structure. This introduction to quantitative techniques for rhythm and timing analysis will hopefully spur additional comparative research, and will produce comparable results across all disciplines working on the evolution of speech, ultimately advancing the field.

    Additional information

    lzx002_Supp.docx
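
    Two of the method families this primer surveys, variability metrics and autocorrelation, can be sketched in a few lines. The sketch below is my own illustration with toy data, not code from the article or its supplement: it computes the normalized pairwise variability index (nPVI) of an inter-onset interval (IOI) sequence, and the lag-1 correlation between adjacent intervals.

```python
from statistics import mean

def npvi(iois):
    """Normalized pairwise variability index (nPVI) of an inter-onset
    interval sequence: 0 for perfect isochrony, larger values for more
    contrast between adjacent intervals."""
    return 100 * mean(abs(a - b) / ((a + b) / 2) for a, b in zip(iois, iois[1:]))

def lag1_corr(iois):
    """Pearson correlation between each interval and the next one
    (a lag-1 autocorrelation of the IOI series)."""
    x, y = iois[:-1], iois[1:]
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sd = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return cov / sd

isochronous = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]   # seconds between events
alternating = [0.3, 0.7, 0.3, 0.7, 0.3, 0.7]   # long-short swing pattern

print(npvi(isochronous))       # 0.0: adjacent intervals never differ
print(npvi(alternating))       # ~80: strong adjacent contrast
print(lag1_corr(alternating))  # ~-1: a long interval predicts a short one
```

    Note that lag1_corr is undefined for a perfectly isochronous sequence (zero variance), which is itself a diagnostic of trivially regular timing.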
  • Ravignani, A. (2017). Interdisciplinary debate: Agree on definitions of synchrony [Correspondence]. Nature, 545, 158. doi:10.1038/545158c.
  • Ravignani, A., & Madison, G. (2017). The paradox of isochrony in the evolution of human rhythm. Frontiers in Psychology, 8: 1820. doi:10.3389/fpsyg.2017.01820.

    Abstract

    Isochrony is crucial to the rhythm of human music. Some neural, behavioral and anatomical traits underlying rhythm perception and production are shared with a broad range of species. These may either have a common evolutionary origin, or have evolved into similar traits under different evolutionary pressures. Other traits underlying rhythm are rare across species, only found in humans and few other animals. Isochrony, or stable periodicity, is common to most human music, but isochronous behaviors are also found in many species. It appears paradoxical that humans are particularly good at producing and perceiving isochronous patterns, although this ability does not conceivably confer any evolutionary advantage to modern humans. This article will attempt to solve this conundrum. To this end, we define the concept of isochrony from the present functional perspective of physiology, cognitive neuroscience, signal processing, and interactive behavior, and review available evidence on isochrony in the signals of humans and other animals. We then attempt to resolve the paradox of isochrony by expanding an evolutionary hypothesis about the function that isochronous behavior may have had in early hominids. Finally, we propose avenues for empirical research to examine this hypothesis and to understand the evolutionary origin of isochrony in general.
  • Ravignani, A. (2017). Visualizing and interpreting rhythmic patterns using phase space plots. Music Perception, 34(5), 557-568. doi:10.1525/MP.2017.34.5.557.

    Abstract

    Structure in musical rhythm can be measured using a number of analytical techniques. While some techniques—like circular statistics or grammar induction—rely on strong top-down assumptions, assumption-free techniques can only provide limited insights on higher-order rhythmic structure. I suggest that research in music perception and performance can benefit from systematically adopting phase space plots, a visualization technique originally developed in mathematical physics that overcomes the aforementioned limitations. By jointly plotting adjacent interonset intervals (IOI), the motivic rhythmic structure of musical phrases, if present, is visualized geometrically without making any a priori assumptions concerning isochrony, beat induction, or metrical hierarchies. I provide visual examples and describe how particular features of rhythmic patterns correspond to geometrical shapes in phase space plots. I argue that research on music perception and systematic musicology stands to benefit from this descriptive tool, particularly in comparative analyses of rhythm production. Phase space plots can be employed as an initial assumption-free diagnostic to find higher order structures (i.e., beyond distributional regularities) before proceeding to more specific, theory-driven analyses.
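
    The core construction, pairing each inter-onset interval with its successor, needs no plotting library to demonstrate. A minimal sketch (the function name and toy sequences are mine, not the article's):

```python
def phase_space_points(iois):
    """Pairs of adjacent inter-onset intervals: the (x, y) coordinates
    that a phase space plot would display."""
    return list(zip(iois, iois[1:]))

# A strict long-short motif traces only two points, revealing its
# motivic structure geometrically:
motif = [0.3, 0.7, 0.3, 0.7, 0.3, 0.7]
print(set(phase_space_points(motif)))  # {(0.3, 0.7), (0.7, 0.3)}

# An isochronous pattern collapses onto a single point on the diagonal:
print(set(phase_space_points([0.5] * 5)))  # {(0.5, 0.5)}
```

    Feeding these coordinates to any scatter-plot routine reproduces the plots the article describes; clusters off the diagonal indicate recurring non-isochronous motifs.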
  • Filippi, P., Jadoul, Y., Ravignani, A., Thompson, B., & de Boer, B. (2016). Seeking temporal predictability in speech: Comparing statistical approaches on 18 world languages. Frontiers in Human Neuroscience, 10: 586. doi:10.3389/fnhum.2016.00586.

    Abstract

    Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure—regularities arising in an ordered series of syllable timings—testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint.
  • Geambaşu, A., Ravignani, A., & Levelt, C. C. (2016). Preliminary experiments on human sensitivity to rhythmic structure in a grammar with recursive self-similarity. Frontiers in Neuroscience, 10: 281. doi:10.3389/fnins.2016.00281.

    Abstract

    We present the first rhythm detection experiment using a Lindenmayer grammar, a self-similar recursive grammar shown previously to be learnable by adults using speech stimuli. Results show that learners were unable to correctly accept or reject grammatical and ungrammatical strings at the group level, although five (of 40) participants were able to do so with detailed instructions before the exposure phase.
  • Ravignani, A., Delgado, T., & Kirby, S. (2016). Musical evolution in the lab exhibits rhythmic universals. Nature Human Behaviour, 1: 0007. doi:10.1038/s41562-016-0007.

    Abstract

    Music exhibits some cross-cultural similarities, despite its variety across the world. Evidence from a broad range of human cultures suggests the existence of musical universals [1], here defined as strong regularities emerging across cultures above chance. In particular, humans demonstrate a general proclivity for rhythm [2], although little is known about why music is particularly rhythmic and why the same structural regularities are present in rhythms around the world. We empirically investigate the mechanisms underlying musical universals for rhythm, showing how music can evolve culturally from randomness. Human participants were asked to imitate sets of randomly generated drumming sequences and their imitation attempts became the training set for the next participants in independent transmission chains. By perceiving and imitating drumming sequences from each other, participants turned initially random sequences into rhythmically structured patterns. Drumming patterns developed into rhythms that are more structured, easier to learn, distinctive for each experimental cultural tradition and characterized by all six statistical universals found among world music [1]; the patterns appear to be adapted to human learning, memory and cognition. We conclude that musical rhythm partially arises from the influence of human cognitive and biological biases on the process of cultural evolution.

    Additional information

    Supplementary information
    Raw data
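
    The transmission-chain design can be illustrated with a toy simulation. The model below is my own simplification, not the authors' procedure: each simulated participant copies a sequence of drumming intervals, occasionally snapping an interval to a small "cognitive inventory" of preferred durations, so initially random sequences drift toward regular ones over generations.

```python
import random

def imitate(seq, bias=0.3, inventory=(0.25, 0.5, 1.0)):
    """One simulated participant reproduces a drumming sequence: each
    interval is either copied faithfully or, with probability `bias`,
    snapped to the nearest value in a small inventory (a toy stand-in
    for perceptual and memory regularization)."""
    return [min(inventory, key=lambda v: abs(v - x)) if random.random() < bias else x
            for x in seq]

def transmission_chain(generations=20, length=8):
    """A random seed sequence is passed down a chain of imitators,
    mimicking one experimental 'cultural tradition'."""
    seq = [random.uniform(0.1, 1.2) for _ in range(length)]
    for _ in range(generations):
        seq = imitate(seq)
    return seq

random.seed(1)
# After enough generations, intervals tend to cluster on the inventory
# values, i.e. structure emerges from randomness:
print(transmission_chain())
```

    The real experiment measured learnability and the six statistical universals on human imitations; this sketch only conveys the iterated-learning logic of the design.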
  • Ravignani, A., & Cook, P. F. (2016). The evolutionary biology of dance without frills. Current Biology, 26(19), R878-R879. doi:10.1016/j.cub.2016.07.076.

    Abstract

    Recently psychologists have taken up the question of whether dance is reliant on unique human adaptations, or whether it is rooted in neural and cognitive mechanisms shared with other species [1,2]. In its full cultural complexity, human dance clearly has no direct analog in animal behavior. Most definitions of dance include the consistent production of movement sequences timed to an external rhythm. While not sufficient for dance, modes of auditory-motor timing, such as synchronization and entrainment, are experimentally tractable constructs that may be analyzed and compared between species. In an effort to assess the evolutionary precursors to entrainment and social features of human dance, Laland and colleagues [2] have suggested that dance may be an incidental byproduct of adaptations supporting vocal or motor imitation — referred to here as the ‘imitation and sequencing’ hypothesis. In support of this hypothesis, Laland and colleagues rely on four convergent lines of evidence drawn from behavioral and neurobiological research on dance behavior in humans and rhythmic behavior in other animals. Here, we propose a less cognitive, more parsimonious account for the evolution of dance. Our ‘timing and interaction’ hypothesis suggests that dance is scaffolded off of broadly conserved timing mechanisms allowing both cooperative and antagonistic social coordination.
  • Ravignani, A., Fitch, W. T., Hanke, F. D., Heinrich, T., Hurgitsch, B., Kotz, S. A., Scharff, C., Stoeger, A. S., & de Boer, B. (2016). What pinnipeds have to say about human speech, music, and the evolution of rhythm. Frontiers in Neuroscience, 10: 274. doi:10.3389/fnins.2016.00274.

    Abstract

    Research on the evolution of human speech and music benefits from hypotheses and data generated in a number of disciplines. The purpose of this article is to illustrate the high relevance of pinniped research for the study of speech, musical rhythm, and their origins, bridging and complementing current research on primates and birds. We briefly discuss speech, vocal learning, and rhythm from an evolutionary and comparative perspective. We review the current state of the art on pinniped communication and behavior relevant to the evolution of human speech and music, showing interesting parallels to hypotheses on rhythmic behavior in early hominids. We suggest future research directions in terms of species to test and empirical data needed.
  • Ravignani, A., & Fitch, W. T. (2012). Sonification of experimental parameters as a new method for efficient coding of behavior. In A. Spink, F. Grieco, O. E. Krips, L. W. S. Loijens, L. P. P. J. Noldus, & P. H. Zimmerman (Eds.), Measuring Behavior 2012, 8th International Conference on Methods and Techniques in Behavioral Research (pp. 376-379).

    Abstract

    Cognitive research is often focused on experimental condition-driven reactions. Ethological studies frequently rely on the observation of naturally occurring specific behaviors. In both cases, subjects are filmed during the study, so that afterwards behaviors can be coded on video. Coding should typically be blind to experimental conditions, but often requires more information than that present on video. We introduce a method for blind-coding of behavioral videos that takes care of both issues via three main innovations. First, of particular significance for playback studies, it allows creation of a “soundtrack” of the study, that is, a track composed of synthesized sounds representing different aspects of the experimental conditions, or other events, over time. Second, it facilitates coding behavior using this audio track, together with the possibly muted original video. This enables coding blindly to conditions as required, but not ignoring other relevant events. Third, our method makes use of freely available, multi-platform software, including scripts we developed.
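
    A minimal version of such a condition soundtrack can be synthesized with the Python standard library alone. The sketch below is an illustration under my own assumptions (the event times, condition labels, and pitch mapping are invented, and this is not the authors' script): it renders timestamped condition events as pitched beeps in a WAV file, which could then be paired with the muted video during coding.

```python
import math
import struct
import wave

RATE = 44100  # samples per second

def tone(freq, dur=0.15, amp=0.4):
    """A short sine beep as a list of float samples in [-1, 1]."""
    n = int(RATE * dur)
    return [amp * math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

def sonify(events, total, path="soundtrack.wav"):
    """Render (time_in_seconds, condition) events onto a silent track.
    Each condition gets its own pitch, so a coder can hear which
    condition is active without seeing it on screen."""
    pitches = {}
    samples = [0.0] * int(RATE * total)
    for t, cond in events:
        # Assign a new, distinct pitch the first time a condition appears.
        f = pitches.setdefault(cond, 440 * 2 ** (len(pitches) / 2))
        start = int(RATE * t)
        for i, s in enumerate(tone(f)):
            if start + i < len(samples):
                samples[start + i] += s
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        # Clip to [-1, 1] before converting to 16-bit PCM.
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))

# Hypothetical playback-study timeline: two conditions over five seconds.
sonify([(0.5, "playback_A"), (2.0, "playback_B"), (3.5, "playback_A")],
       total=5.0)
```

    The resulting file can be muxed with the video in any free editor; because the beeps encode events rather than conditions' content, the coder hears timing cues without being unblinded to what is on screen.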
