Advancing Behavioral And Cognitive Understanding of Speech (ABACUS)

Program & Abstracts

Friday 13th January

17:00 Tecumseh Fitch (in SP1 of the Spinoza Building, Radboud Uni)  Cancelled

Saturday 14th January

9:30 Bart de Boer - opening

9:45 Andy Wedel

"Signal evolution within the word"

Languages have been shown to optimize their lexicons over time with respect to the amount of signal allocated to words, relative to their informativity: words that are on average less predictable in context tend to be longer, while those that are on average more predictable tend to be shorter (Piantadosi et al 2011, cf. Zipf 1935). Further, psycholinguistic research has shown that listeners are able to incrementally process words as they are heard, progressively updating inferences about what word is intended as the phonetic signal unfolds in time. As a consequence, phonetic cues early in the signal for a word are more informative about word-identity because they are less constrained by previous segmental context. This suggests that languages should not only optimize the total amount of signal allocated to different words, but optimize the distribution of that information across the word. Specifically, words that are on average less predictable in context should preferentially target highly informative phonetic cues early in the word, while preserving a 'long tail' of redundant cues later in the word. In this talk I will review recent evidence that this is the case in English. Further, languages show a strong tendency to develop phonological patterns which support phonetic cue informativity at the beginnings of words, while reducing cue informativity later in words. I will argue that this typological tendency plausibly arises from this word-level phenomenon.

10:30 Anne Warlaumont 

"Evolution and Development of Syllabic Speech Sounds."

One of the hallmark characteristics of speech is that it combines consonant and vowel elements in syllabic units. The emergence of well-timed consonant-vowel transitions is a salient feature of infant vocal development and on average infants start to regularly produce these sounds at around 6 months of age. I will present a computational model in which oscillations in the activity of a network of spiking motor cortex neurons generate oscillations of vocal tract muscles in a simulated human vocal tract, in turn generating syllabic synthesized vocalizations. Furthermore, a biologically and psychologically plausible learning mechanism (reward-modulated spike timing dependent plasticity) can drive the model to increase its likelihood of producing syllabic sounds. I will discuss the implications of this work for our understanding of how syllabic vocalization likely evolved. I will also discuss the relationship of the model to other computational modeling projects and to behavioral research with human infants.


11:15 Coffee break

11:30 Karin Wanrooij

"Neural correlates of distributional speech sound learning: A literature review"

Distributional speech sound learning is learning speech sound categories from plain exposure to speech, i.e., without feedback or instruction. In linguistic theory and related computer simulations, the mechanism is viewed as a low-level, bottom-up process, possibly related to neuronal tuning in the primary auditory cortex (A1). However, neuroscientific evidence remains scarce. I review possible neural correlates of distributional learning in infancy and adulthood, obtained with various research techniques, which target different levels of analysis (i.e., the population of neurons, the neuron, and the synapse), and which are applied in vivo and in vitro, in humans and non-human animals.

12:15 Tessa Verhoef 

"Experiments investigating the emergence of structure and meaning in language."

Language is an important defining feature of the human species. There is, however, still a lot we do not know about how language evolved. I will discuss recent data collected as part of several studies that mimic language evolution processes by inviting participants to take part in experiments disguised as interactive games. Previous work has shown that language-like signals emerge spontaneously in the laboratory when people are asked to communicate through a medium that is linguistically novel to them. When laboratory languages are transmitted from person to person, features of language structure gradually appear. Such experimental methods provide a window into the mechanisms that were likely involved in the early emergence of human language. The first study I will present investigates the influence of cultural transmission and social coordination on iconic patterning. Participants had to learn, reproduce and communicate with an artificial language, and predictable patterns emerged as these novel sound systems were transmitted. Another study focuses on the role of cognitive biases and social coordination in the emergence of space-time metaphors in language. Pairs of participants used a novel, spatial signaling device to play guessing games about temporal concepts. Rapidly, communication systems were established that mapped systematically between time and space. Finally I will show how Microsoft Kinect, a technology that was designed for video game control, can be used to measure formal changes in gesture as a consequence of conventionalization in interactive games. The results of these experiments contribute to our understanding of the relation between the macro-level patterns we see emerge in languages and the micro-level individual behaviors and cognitive biases that shape them.

13:00 Lunch (provided by ABACUS)

14:15 Marieke Schouwstra 

"The emergence of word order conventions: natural and experimental evidence from the manual modality"

Many of the world’s languages use conventional word order for expressing who did what to whom. But how did these word order conventions come into existence? Recently, researchers have started focusing on linguistic structure in the manual modality (gesture and sign language), to look at how it emerges ‘in the wild’ and in the laboratory. 

Silent gesture, an experimental paradigm in which adult hearing participants describe events using only their hands, has been a valuable tool for investigating the cognitive biases that play a role when no system of conventions is in place yet. Participants showed a language-independent preference for SOV for extensional transitive events (e.g., boy-ball-throw), but preferred SVO for intensional events (e.g., boy-search-ball). This variability, dependent on semantic properties, represents naturalness, reflecting cognitive preferences to put Agents first and more abstract/relational information last. The pattern is not typically found in existing languages, which are instead more regular. 

Understanding the transition from naturalness to conventionalised regularity is a major goal of language evolution research. I will present a new approach to this challenge, extending the silent gesture paradigm, to show how individuals improvise solutions to communicative challenges, how pairs of individuals create conventions through interaction, and how these conventions are transmitted over time through learning.

Finally, to assess how word order conventions from the lab compare to an existing (and recently emerged) language in the manual modality, I will discuss results from a study eliciting intensional and extensional events in Nicaraguan Sign Language, showing that, despite being quite strongly V-final overall, this language shows traces of naturalness. Together, the data suggest a picture of the emergence of word order conventions, starting from a semantically conditioned basis, and becoming more (but not entirely) regular over time.

15:00 Odette Scharenborg 

"The effect of background noise on native and non-native listening"

Most people will have noticed that communication in the presence of background noise is more difficult in a non-native than in the native language - even for those who have a high proficiency in the non-native language involved. Why is that? The main reason for this problem seems obvious: Imperfect knowledge of the non-native language and a degraded speech signal due to the presence of background noise interact strongly to our disadvantage when we listen to a non-native language. In my research, I aim to understand the effect of background noise on the cognitive processes underlying native and non-native spoken-word recognition. I will present results of several experiments investigating the effect of background noise on 1) the flexibility of the perceptual system in native and non-native listening; 2) the multiple activation, competition and recognition processes in native and non-native spoken-word recognition; and 3) the perception of sentence accent in native and non-native listening. The results support the hypothesis that the performance difference between native and non-native listeners in the presence of background noise is, at least partially, caused by a reduced flexibility of the perceptual system during non-native listening and a reduced exploitation of higher-level information during speech processing by non-native listeners.

15:45 Coffee break

16:00 Marco Gamba 

"Primate business and one hell of a song."

Vocal communication plays a critical role in mediating interactions among conspecifics for many animals, especially in the dark, dense tropical rainforests. Among the most interesting acoustic signals, primate songs function efficiently in spacing groups, defending territories, and possibly attracting potential mates. Indris (Indri indri), the only singing lemurs, produce different types of songs that can be differentiated according to their temporal patterns. The most distinctive portions of the songs are "descending phrases", consisting of 2-5 units. We recorded songs in the Eastern rainforests of Madagascar from 2005 to 2015. We recognised individual indris using natural markings, and all the recordings were collected when the recordist was in visual contact with the singing group. We extracted the pitch contour of song units to investigate which individuals overlapped each other most frequently. We then tested whether the structure of the phrases could provide conspecifics with information about sex and individual identity. We also examined whether the structure of the phrases was related to the genetic relatedness of the indris. The results suggested that the reproductive pair overlaps more often than the other members of the social group. The songs have consistent sex-, group-, and individual-specific features. We have also found a significant correlation between genetic distance and acoustic similarity. The descending phrases may be used by the indris to convey information about sex and identity, and genetic relatedness may play a role in determining song similarity to a larger extent than previously assumed.

16:45 Dan Dediu 

"Language and speech do not evolve in a void: how our biology affects linguistic diversity."

In order to properly understand language and its evolution, we must cease to consider it as a purely cultural product somehow detached from the outside world. Instead, I will argue, we must consider the language’s (and its speakers’) wider environment, and the many ways in which this multi-faceted environment can affect the evolution of language, its diversity and (quasi)universal properties. In particular, I will focus on the influence of vocal tract anatomy on cross-linguistic variation in speech sounds (phonetics and phonology), seen as a test case and entry point for understanding the complex and subtle interplay between our biology and our culture (of which language is a fascinating component).

17:30 Closing

Last checked 2017-01-03 by Hannah Little

Street address
Wundtlaan 1
6525 XD Nijmegen
The Netherlands


Mailing address
P.O. Box 310
6500 AH Nijmegen
The Netherlands

Phone:   +31-24-3521911
Fax:        +31-24-3521213
E-mail:   


Public Outreach Officer
Charlotte Horn