This content is archived and may be outdated.
Vidi Grants for two MPI researchers
MPI researchers Dan Dediu and Odette Scharenborg both received Vidi Grants for innovative research from the Netherlands Organisation for Scientific Research (NWO). Dediu will investigate genetic variation and genetic biases in order to explain language diversity, while Scharenborg will study the effect of noise on non-native spoken-word recognition. Both topics have received little attention in research.
June 15, 2012
Across the world people speak to each other in more than 6,000 languages. This bewildering diversity comprises not only their words and grammars but also the actual sounds they use. This diversity, however, is not unbounded: there are universals to which languages tend to conform (e.g., having vowels and consonants), and groups of languages share similarities. In general, languages are similar due to descent from an ancestral language (such as French, Italian and Spanish descending from Vulgar Latin) or because they borrow words, sounds or grammar from each other (such as the recent widespread borrowings from English).
"A source of both universal constraints and differences between languages is represented by the organs we use to produce speech," says Dan Dediu, researcher at MPI's Language and Genetics Department. "Obviously, no human language uses a sound that the vocal tract cannot produce, and few languages will use those that are hard to make, resulting in universal constraints and tendencies across the world's languages."
The study of vocal tract variation and its possible effects on differences between languages has been almost completely neglected, Dediu explains. During the Vidi project, he will investigate the effects of genetically based variation on speech and language using complex and realistic computer models, and apply novel methods inspired by evolutionary biology to the study of such genetic biases. He will also collect high-quality primary data on vocal tract variation within and across languages. "I will make available several public databases bringing together new data and existing information that is currently spread across different scientific literature."

"I really appreciated the good reviews and positive feedback from the NWO committee and take this grant as a personal validation as a researcher. The competition was very strong and many applications were very good, so I'm really glad my application was selected. Now I'll be able to start this research and build and manage a team. I feel quite lucky."
Listening to non-native language
Successful speech recognition is a key factor for social integration and communication. At the same time, ever-increasing numbers of people travel, live, or work in a non-native language environment, and thus communicate in a non-native language. Listening in a non-native language is harder than listening in one's native language, and even more difficult in the presence of background noise, e.g., when listening on an airplane. A common observation is that some people have more difficulty in such situations than others.
"These individual differences in the effect of noise on non-native spoken-word recognition might be modulated by individual differences in attention and proficiency of the non-native language", explains Odette Scharenborg, visiting researcher at MPI's Adaptive Listening Group. Surprisingly, this topic has received little attention in research. In her Vidi project, she will try to find out why non-native word recognition in noise is so much harder than in a native language, and why some listeners can cope with noise better than others.
Developing a new theory
Scharenborg wants to develop a new theory of spoken-word recognition that accounts for individual differences in non-native spoken-word recognition in noise. She also plans to examine the interaction between language processing and cognition (i.e., attention, proficiency), a link generally acknowledged but rarely implemented in spoken-word recognition theories.
"This grant allows me to continue doing what I love best: investigating how we are able to understand words from the speech stream," she says. "I am fascinated by our ability to understand speech in a large variety of listening conditions, from quiet offices to loud parties, from listening in one's native language to listening in a non-native language, and while carrying out other tasks simultaneously, such as listening (and talking) while driving a car. I look forward to collaborating with many inspiring scientists around the world."