Publications

  • Cutler, A., & Butterfield, S. (1986). The perceptual integrity of initial consonant clusters. In R. Lawrence (Ed.), Speech and Hearing: Proceedings of the Institute of Acoustics (pp. 31-36). Edinburgh: Institute of Acoustics.
  • Cutler, A., & Carter, D. (1987). The prosodic structure of initial syllables in English. In J. Laver, & M. Jack (Eds.), Proceedings of the European Conference on Speech Technology: Vol. 1 (pp. 207-210). Edinburgh: IEE.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université de Aix-en-Provence.
  • Cutler, A., & Shanley, J. (2010). Validation of a training method for L2 continuous-speech segmentation. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 1844-1847).

    Abstract

    Recognising continuous speech in a second language is often unexpectedly difficult, as the operation of segmenting speech is so attuned to native-language structure. We report the initial steps in development of a novel training method for second-language listening, focusing on speech segmentation and employing a task designed for studying this: word-spotting. Listeners detect real words in sequences consisting of a word plus a minimal context. The present validation study shows that learners from varying non-English backgrounds successfully perform a version of this task in English, and display appropriate sensitivity to structural factors that also affect segmentation by native English listeners.
  • Dediu, D. (2010). Linguistic and genetic diversity - how and why are they related? In M. Brüne, F. Salter, & W. McGrew (Eds.), Building bridges between anthropology, medicine and human ethology: Tributes to Wulf Schiefenhövel (pp. 169-178). Bochum: Europäischer Universitätsverlag.

    Abstract

    There are some 6000 languages spoken today, classified into approximately 90 linguistic families plus many isolates, and also differing along structural, typological dimensions. Genetically, the human species is remarkably homogeneous, with the existing genetic diversity mostly explained by intra-population differences between individuals, but the remaining inter-population differences have a non-trivial structure. Population splits and contacts influence both languages and genes, in principle allowing them to evolve in parallel ways. The farming/language co-dispersal hypothesis is one well-known such theory, whereby farmers spreading agriculture from its places of origin also spread their genes and languages. A different type of relationship was recently proposed, involving a genetic bias which influences the structural properties of language as it is transmitted across generations. Such a bias was proposed to explain the correlations between the distribution of tone languages and two brain development-related human genes and, if confirmed by experimental studies, it could represent a new factor explaining the distribution of diversity. The present chapter overviews these related topics in the hope that a truly interdisciplinary approach could allow a better understanding of our complex (recent as well as evolutionary) history.
  • Dijkstra, T., & Kempen, G. (1997). Het taalgebruikersmodel. In H. Hulshof, & T. Hendrix (Eds.), De taalcentrale. Amsterdam: Bulkboek.
  • Dimitrova, D. V., Redeker, G., & Hoeks, J. C. J. (2009). Did you say a BLUE banana? The prosody of contrast and abnormality in Bulgarian and Dutch. In 10th Annual Conference of the International Speech Communication Association [Interspeech 2009] (pp. 999-1002). ISCA Archive.

    Abstract

    In a production experiment on Bulgarian that was based on a previous study on Dutch [1], we investigated the role of prosody when linguistic and extra-linguistic information coincide or contradict. Speakers described abnormally colored fruits in conditions where contrastive focus and discourse relations were varied. We found that the coincidence of contrast and abnormality enhances accentuation in Bulgarian as it did in Dutch. Surprisingly, when both factors are in conflict, the prosodic prominence of abnormality often overruled focus accentuation in both Bulgarian and Dutch, though the languages also show marked differences.
  • Dimroth, C., & Narasimhan, B. (2009). Accessibility and topicality in children's use of word order. In J. Chandlee, M. Franchini, S. Lord, & G. M. Rheiner (Eds.), Proceedings of the 33rd annual Boston University Conference on Language Development (BUCLD) (pp. 133-138).
  • Dimroth, C. (2010). The acquisition of negation. In L. R. Horn (Ed.), The expression of negation (pp. 39-73). Berlin/New York: Mouton de Gruyter.
  • Dimroth, C. (2009). Stepping stones and stumbling blocks: Why negation accelerates and additive particles delay the acquisition of finiteness in German. In C. Dimroth, & P. Jordens (Eds.), Functional Categories in Learner Language (pp. 137-170). Berlin: Mouton de Gruyter.
  • Dingemanse, M. (2010). Folk definitions of ideophones. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 24-29). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.529151.

    Abstract

    Ideophones are marked words that depict sensory events, for example English hippety-hoppety ‘in a limping and hobbling manner’ or Siwu mukumuku ‘mouth movements of a toothless person eating’. They typically have special sound patterns and distinct grammatical properties. Ideophones are found in many languages of the world, suggesting a common fascination with detailed sensory depiction, but reliable data on their meaning and use is still very scarce. This task involves video-recording spontaneous, informal explanations (“folk definitions”) of individual ideophones by native speakers, in their own language. The approach facilitates collection of rich primary data in a planned context while ensuring a large amount of spontaneity and freedom.
  • Dingemanse, M. (2009). Ideophones in unexpected places. In P. K. Austin, O. Bond, M. Charette, D. Nathan, & P. Sells (Eds.), Proceedings of the 2nd Conference on Language Documentation and Linguistic Theory (pp. 83-97). London: School of Oriental and African Studies (SOAS).
  • Dingemanse, M. (2023). Ideophones. In E. Van Lier (Ed.), The Oxford handbook of word classes (pp. 466-476). Oxford: Oxford University Press.

    Abstract

    Many of the world’s languages feature an open lexical class of ideophones, words whose marked forms and sensory meanings invite iconic associations. Ideophones (also known as mimetics or expressives) are well-known from languages in Asia, Africa and the Americas, where they often form a class on the same order of magnitude as other major word classes and take up a considerable functional load as modifying expressions or predicates. Across languages, commonalities in the morphosyntactic behaviour of ideophones can be related to their nature and origin as vocal depictions. At the same time there is ample room for linguistic diversity, raising the need for fine-grained grammatical description of ideophone systems. As vocal depictions, ideophones often form a distinct lexical stratum seemingly conjured out of thin air; but as conventionalized words, they inevitably grow roots in local linguistic systems, showing relations to adverbs, adjectives, verbs and other linguistic resources devoted to modification and predication.
  • Dingemanse, M. (2023). Interjections. In E. Van Lier (Ed.), The Oxford handbook of word classes (pp. 477-491). Oxford: Oxford University Press.

    Abstract

    No class of words has better claims to universality than interjections. At the same time, no category has more variable content than this one, traditionally the catch-all basket for linguistic items that bear a complicated relation to sentential syntax. Interjections are a mirror reflecting methodological and theoretical assumptions more than a coherent linguistic category that affords unitary treatment. This chapter focuses on linguistic items that typically function as free-standing utterances, and on some of the conceptual, methodological, and theoretical questions generated by such items. A key move is to study these items in the setting of conversational sequences, rather than from the “flatland” of sequential syntax. This makes visible how some of the most frequent interjections streamline everyday language use and scaffold complex language. Approaching interjections in terms of their sequential positions and interactional functions has the potential to reveal and explain patterns of universality and diversity in interjections.
  • Dittmar, N., & Klein, W. (1975). Untersuchungen zum Pidgin-Deutsch spanischer und italienischer Arbeiter in der Bundesrepublik: Ein Arbeitsbericht. In A. Wierlacher (Ed.), Jahrbuch Deutsch als Fremdsprache (pp. 170-194). Heidelberg: Groos.
  • Dolscheid, S., Shayan, S., Ozturk, O., Majid, A., & Casasanto, D. (2010). Language shapes mental representations of musical pitch: Implications for metaphorical language processing [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (p. 137). York: University of York.

    Abstract

    Speakers often use spatial metaphors to talk about musical pitch (e.g., a low note, a high soprano). Previous experiments suggest that English speakers also think about pitches as high or low in space, even when theyʼre not using language or musical notation (Casasanto, 2010). Do metaphors in language merely reflect pre-existing associations between space and pitch, or might language also shape these non-linguistic metaphorical mappings? To investigate the role of language in pitch representation, we conducted a pair of non-linguistic space-pitch interference experiments in speakers of two languages that use different spatial metaphors. Dutch speakers usually describe pitches as ʻhighʼ (hoog) and ʻlowʼ (laag). Farsi speakers, however, often describe high-frequency pitches as ʻthinʼ (naazok) and low-frequency pitches as ʻthickʼ (koloft). Do Dutch and Farsi speakers mentally represent pitch differently? To find out, we asked participants to reproduce musical pitches that they heard in the presence of irrelevant spatial information (i.e., lines that varied either in height or in thickness). For the Height Interference experiment, horizontal lines bisected a vertical reference line at one of nine different locations. For the Thickness Interference experiment, a vertical line appeared in the middle of the screen in one of nine thicknesses. In each experiment, the nine different lines were crossed with nine different pitches ranging from C4 to G#4 in semitone increments, to produce 81 distinct trials. If Dutch and Farsi speakers mentally represent pitch the way they talk about it, using different kinds of spatial representations, they should show contrasting patterns of cross-dimensional interference: Dutch speakersʼ pitch estimates should be more strongly affected by irrelevant height information, and Farsi speakersʼ by irrelevant thickness information. As predicted, Dutch speakersʼ pitch estimates were significantly modulated by spatial height but not by thickness. Conversely, Farsi speakersʼ pitch estimates were modulated by spatial thickness but not by height (2x2 ANOVA on normalized slopes of the effect of space on pitch: F(1,71)=17.15, p<.001). To determine whether language plays a causal role in shaping pitch representations, we conducted a training experiment. Native Dutch speakers learned to use Farsi-like metaphors, describing pitch relationships in terms of thickness (e.g., a cello sounds ʻthickerʼ than a flute). After training, Dutch speakers showed a significant effect of Thickness interference in the non-linguistic pitch reproduction task, similar to native Farsi speakers: on average, pitches accompanied by thicker lines were reproduced as lower in pitch (effect of thickness on pitch: r=-.22, p=.002). By conducting psychophysical tasks, we tested the ʻWhorfianʼ question without using words. Yet, the results also inform theories of metaphorical language processing. According to psycholinguistic theories (e.g., Bowdle & Gentner, 2005), highly conventional metaphors are processed without any active mapping from the source to the target domain (e.g., from space to pitch). Our data, however, suggest that when people use verbal metaphors they activate a corresponding non-linguistic mapping from either height or thickness to pitch, strengthening this association at the expense of competing associations. As a result, people who use different metaphors in their native languages form correspondingly different representations of musical pitch. Casasanto, D. (2010). Space for thinking. In V. Evans & P. Chilton (Eds.), Language, cognition and space: The state of the art and new directions (pp. 453-478). London: Equinox Publishing. Bowdle, B., & Gentner, D. (2005). The career of metaphor. Psychological Review, 112, 193-216.
  • Drijvers, L., & Mazzini, S. (2023). Neural oscillations in audiovisual language and communication. In Oxford Research Encyclopedia of Neuroscience. Oxford: Oxford University Press. doi:10.1093/acrefore/9780190264086.013.455.

    Abstract

    How do neural oscillations support human audiovisual language and communication? Considering the rhythmic nature of audiovisual language, in which stimuli from different sensory modalities unfold over time, neural oscillations represent an ideal candidate to investigate how audiovisual language is processed in the brain. Modulations of oscillatory phase and power are thought to support audiovisual language and communication in multiple ways. Neural oscillations synchronize by tracking external rhythmic stimuli or by re-setting their phase to presentation of relevant stimuli, resulting in perceptual benefits. In particular, synchronized neural oscillations have been shown to subserve the processing and the integration of auditory speech, visual speech, and hand gestures. Furthermore, synchronized oscillatory modulations have been studied and reported between brains during social interaction, suggesting that their contribution to audiovisual communication goes beyond the processing of single stimuli and applies to natural, face-to-face communication.

    There are still some outstanding questions that need to be answered to reach a better understanding of the neural processes supporting audiovisual language and communication. In particular, it is not entirely clear yet how the multitude of signals encountered during audiovisual communication are combined into a coherent percept and how this is affected during real-world dyadic interactions. In order to address these outstanding questions, it is fundamental to consider language as a multimodal phenomenon, involving the processing of multiple stimuli unfolding at different rhythms over time, and to study language in its natural context: social interaction. Other outstanding questions could be addressed by implementing novel techniques (such as rapid invisible frequency tagging, dual-electroencephalography, or multi-brain stimulation) and analysis methods (e.g., using temporal response functions) to better understand the relationship between oscillatory dynamics and efficient audiovisual communication.
  • Drozd, K. F. (1998). No as a determiner in child English: A summary of categorical evidence. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the Gala '97 Conference on Language Acquisition (pp. 34-39). Edinburgh, UK: Edinburgh University Press.

    Abstract

    This paper summarizes the results of a descriptive syntactic category analysis of child English no which reveals that young children use and represent no as a determiner and negatives like no pen as NPs, contra standard analyses.
  • Dugoujon, J.-M., Larrouy, G., Mazières, S., Brucato, N., Sevin, A., Cassar, O., & Gessain, A. (2010). Histoire et dynamique du peuplement humain en Amazonie: L’exemple de la Guyane. In A. Pavé, & G. Fornet (Eds.), Amazonie: Une aventure scientifique et humaine du CNRS (pp. 128-132). Paris: Galaade Éditions.
  • Düngen, D., Sarfati, M., & Ravignani, A. (2023). Cross-species research in biomusicality: Methods, pitfalls, and prospects. In E. H. Margulis, P. Loui, & D. Loughridge (Eds.), The science-music borderlands: Reckoning with the past and imagining the future (pp. 57-95). Cambridge, MA, USA: The MIT Press. doi:10.7551/mitpress/14186.003.0008.
  • Eibl-Eibesfeldt, I., Senft, B., & Senft, G. (1998). Trobriander (Ost-Neuguinea, Trobriand Inseln, Kaile'una) Fadenspiele 'ninikula'. In Ethnologie - Humanethologische Begleitpublikationen von I. Eibl-Eibesfeldt und Mitarbeitern. Sammelband I, 1985-1987. Göttingen: Institut für den Wissenschaftlichen Film.
  • Eisner, F., Weber, A., & Melinger, A. (2010). Generalization of learning in pre-lexical adjustments to word-final devoicing [Abstract]. Journal of the Acoustical Society of America, 128, 2323.

    Abstract

    Pre-lexical representations of speech sounds have been shown to change dynamically through a mechanism of lexically driven learning [Norris et al. (2003)]. Here we investigated whether this type of learning occurs in native British English (BE) listeners for a word-final stop contrast which is commonly devoiced in Dutch-accented English. Specifically, this study asked whether the change in pre-lexical representation also encodes information about the position of the critical sound within a word. After exposure to a native Dutch speaker's productions of devoiced stops in word-final position (but not in any other positions), BE listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with voiceless final stops (e.g., [si:t], “seat”) facilitated recognition of visual targets with voiced final stops (e.g., “seed”). This learning generalized to test pairs where the critical contrast was in word-initial position, e.g., auditory primes such as [taun] (“town”) facilitated recognition of visual targets like “down”. Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The results suggest that under these exposure conditions, word position is not encoded in the pre-lexical adjustment to the accented phoneme contrast.
  • Ekerdt, C., Takashima, A., & McQueen, J. M. (2023). Memory consolidation in second language neurocognition. In K. Morgan-Short, & J. G. Van Hell (Eds.), The Routledge handbook of second language acquisition and neurolinguistics. Oxfordshire: Routledge.

    Abstract

    Acquiring a second language (L2) requires newly learned information to be integrated with existing knowledge. It has been proposed that several memory systems work together to enable this process of rapidly encoding new information and then slowly incorporating it with existing knowledge, such that it is consolidated and integrated into the language network without catastrophic interference. This chapter focuses on consolidation of L2 vocabulary. First, the complementary learning systems model is outlined, along with the model’s predictions regarding lexical consolidation. Next, word learning studies in first language (L1) that investigate the factors playing a role in consolidation, and the neural mechanisms underlying this, are reviewed. Using the L1 memory consolidation literature as background, the chapter then presents what is currently known about memory consolidation in L2 word learning. Finally, considering what is already known about L1 but not about L2, future research investigating memory consolidation in L2 neurocognition is proposed.
  • Enfield, N. J., Levinson, S. C., De Ruiter, J. P., & Stivers, T. (2010). Building a corpus of multimodal interaction in your field site. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 30-33). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Enfield, N. J. (2009). 'Case relations' in Lao, a radically isolating language. In A. L. Malčukov, & A. Spencer (Eds.), The Oxford handbook of case (pp. 808-819). Oxford: Oxford University Press.
  • Enfield, N. J., & Levinson, S. C. (2010). Metalanguage for speech acts. In Field manual volume 13 (pp. 34-36). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    People of all cultures have some degree of concern with categorizing types of communicative social action. All languages have words with meanings like speak, say, talk, complain, curse, promise, accuse, nod, wink, point and chant. But the exact distinctions they make will differ in both quantity and quality. How is communicative social action categorised across languages and cultures? The goal of this task is to establish a basis for cross-linguistic comparison of native metalanguages for social action.
  • Enfield, N. J., & Levinson, S. C. (2009). Metalanguage for speech acts. In A. Majid (Ed.), Field manual volume 12 (pp. 51-53). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883559.

    Abstract

    People of all cultures have some degree of concern with categorizing types of communicative social action. All languages have words with meanings like speak, say, talk, complain, curse, promise, accuse, nod, wink, point and chant. But the exact distinctions they make will differ in both quantity and quality. How is communicative social action categorised across languages and cultures? The goal of this task is to establish a basis for cross-linguistic comparison of native metalanguages for social action.
  • Enfield, N. J. (2009). Language and culture. In L. Wei, & V. Cook (Eds.), Contemporary Applied Linguistics Volume 2 (pp. 83-97). London: Continuum.
  • Enfield, N. J. (2009). Everyday ritual in the residential world. In G. Senft, & E. B. Basso (Eds.), Ritual communication (pp. 51-80). Oxford: Berg.
  • Enfield, N. J., Levinson, S. C., & Stivers, T. (2009). Social action formulation: A "10-minutes" task. In A. Majid (Ed.), Field manual volume 12 (pp. 54-55). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883564.

    Abstract

    Human actions in the social world – like greeting, requesting, complaining, accusing, asking, confirming, etc. – are recognised through the interpretation of signs. Language is where much of the action is, but gesture, facial expression and other bodily actions matter as well. The goal of this task is to establish a maximally rich description of a representative, good quality piece of conversational interaction, which will serve as a reference point for comparative exploration of the status of social actions and their formulation across languages.
  • Ernestus, M. (2009). The roles of reconstruction and lexical storage in the comprehension of regular pronunciation variants. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 1875-1878). Causal Productions Pty Ltd.

    Abstract

    This paper investigates how listeners process regular pronunciation variants, resulting from simple general reduction processes. Study 1 shows that when listeners are presented with new words, they store the pronunciation variants presented to them, whether these are unreduced or reduced. Listeners thus store information on word-specific pronunciation variation. Study 2 suggests that if participants are presented with regularly reduced pronunciations, they also reconstruct and store the corresponding unreduced pronunciations. These unreduced pronunciations apparently have special status. Together the results support hybrid models of speech processing, assuming roles for both exemplars and abstract representations.
  • Fedor, A., Pléh, C., Brauer, J., Caplan, D., Friederici, A. D., Gulyás, B., Hagoort, P., Nazir, T., & Singer, W. (2009). What are the brain mechanisms underlying syntactic operations? In D. Bickerton, & E. Szathmáry (Eds.), Biological foundations and origin of syntax (pp. 299-324). Cambridge, MA: MIT Press.

    Abstract

    This chapter summarizes the extensive discussions that took place during the Forum and in the subsequent months. It assesses current understanding of the neuronal mechanisms that underlie syntactic structure and processing.... It is posited that to understand the neurobiology of syntax, it might be worthwhile to shift the balance from comprehension to syntactic encoding in language production.
  • Ferré, G. (2023). Pragmatic gestures and prosody. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527215.

    Abstract

    The study presented here focuses on two pragmatic gestures: the hand flip (Ferré, 2011), a gesture of the Palm Up Open Hand/PUOH family (Müller, 2004), and the closed hand, which can be considered the opposite kind of movement to the opening of the hands present in the PUOH gesture. Whereas one of the functions of the hand flip has been described as presenting a new point in speech (Cienki, 2021), the closed hand gesture has, to the best of our knowledge, not yet been described in the literature. It can however be conceived of as having the opposite function of announcing the end of a point in discourse. The object of the present study is therefore to determine, through the study of prosodic features, whether the two gestures are found in the same type of speech units and what their respective scope is. Drawing on a corpus of three TED Talks in French, the prosodic characteristics of the speech that accompanies the two gestures are examined. The hypothesis developed in the present paper is that their scope should be reflected in the prosody of the accompanying speech, especially pitch key, tone, and relative pitch range. The prediction is that hand flips and closing hand gestures are expected to be located at the periphery of Intonation Phrases (IPs), Inter-Pausal Units (IPUs) or more conversational Turn Constructional Units (TCUs), and are likely to co-occur with pauses in speech. But because of the natural slope of intonation in speech, the speech that accompanies early gestures in Intonation Phrases should reveal different features from the speech at the end of intonational units. Tones should be different as well, considering the prosodic structure of spoken French.
  • Fitz, H. (2010). Statistical learning of complex questions. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 2692-2698). Austin, TX: Cognitive Science Society.

    Abstract

    The problem of auxiliary fronting in complex polar questions occupies a prominent position within the nature versus nurture controversy in language acquisition. We employ a model of statistical learning which uses sequential and semantic information to produce utterances from a bag of words. This linear learner is capable of generating grammatical questions without exposure to these structures in its training environment. We also demonstrate that the model outperforms n-gram learners on this task. Implications for nativist theories of language acquisition are discussed.
  • Fitz, H., & Chang, F. (2009). Syntactic generalization in a connectionist model of sentence production. In J. Mayor, N. Ruh, & K. Plunkett (Eds.), Connectionist models of behaviour and cognition II: Proceedings of the 11th Neural Computation and Psychology Workshop (pp. 289-300). River Edge, NJ: World Scientific Publishing.

    Abstract

    We present a neural-symbolic learning model of sentence production which displays strong semantic systematicity and recursive productivity. Using this model, we provide evidence for the data-driven learnability of complex yes/no-questions.
  • Floyd, S. (2009). Nexos históricos, gramaticales y culturales de los números en cha'palaa [Historical, grammatical and cultural connections of Cha'palaa numerals]. In Proceedings of the Conference on Indigenous Languages of Latin America (CILLA) -IV.

    Abstract

    The South American languages have diverse types of numeral systems, from systems of just two or three terms in some Amazonian languages to systems extending into the thousands. A look at the system of the Cha'palaa language of Ecuador demonstrates base-2, base-5, base-10 and base-20 features, linked to different stages of change, development and language contact. Learning about these stages permits us to propose some correlations between them and what we know about the history of cultural contact in the region.
  • Folia, V., Uddén, J., De Vries, M., Forkstam, C., & Petersson, K. M. (2010). Artificial language learning in adults and children. In M. Gullberg, & P. Indefrey (Eds.), The earliest stages of language learning (pp. 188-220). Malden, MA: Wiley-Blackwell.
  • Folia, V., Forkstam, C., Hagoort, P., & Petersson, K. M. (2009). Language comprehension: The interplay between form and content. In N. Taatgen, & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 1686-1691). Austin, TX: Cognitive Science Society.

    Abstract

    In a 2x2 event-related FMRI study we find support for the idea that the inferior frontal cortex, centered on Broca’s region and its homologue, is involved in constructive unification operations during the structure-building process in parsing for comprehension. Tentatively, we provide evidence for a role of the dorsolateral prefrontal cortex centered on BA 9/46 in the control component of the language system. Finally, the left temporo-parietal cortex, in the vicinity of Wernicke’s region, supports the interaction between the syntax of gender agreement and sentence-level semantics.
  • Forkstam, C., Jansson, A., Ingvar, M., & Petersson, K. M. (2009). Modality transfer of acquired structural regularities: A preference for an acoustic route. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.

    Abstract

    Human implicit learning can be investigated with implicit artificial grammar learning, a simple model for aspects of natural language acquisition. In this paper we investigate the remaining effect of modality transfer in the syntactic classification of an acquired grammatical sequence structure after implicit grammar acquisition. Participants practiced either on acoustically presented syllable sequences or on visually presented consonant letter sequences. During classification we independently manipulated the statistical frequency-based and rule-based characteristics of the classification stimuli. Participants performed reliably above chance on the within-modality classification task, although more so for those working on syllable sequence acquisition. These subjects were also the only group that maintained a significant performance level in transfer classification. We speculate that this finding is of particular relevance to the ecological validity of the input signal in the use of artificial grammar learning and in language learning paradigms at large.
  • Francks, C. (2009). 13 - LRRTM1: A maternally suppressed genetic effect on handedness and schizophrenia. In I. E. C. Sommer, & R. S. Kahn (Eds.), Cerebral lateralization and psychosis (pp. 181-196). Cambridge: Cambridge University Press.

    Abstract

    The molecular, developmental, and evolutionary bases of human brain asymmetry are almost completely unknown. Genetic linkage and association mapping have pinpointed a gene called LRRTM1 (leucine-rich repeat transmembrane neuronal 1) that may contribute to variability in human handedness. Here I describe how LRRTM1's involvement in handedness was discovered, and also the latest knowledge of its functions in brain development and disease. The association of LRRTM1 with handedness was derived entirely from the paternally inherited gene, and follow-up analysis of gene expression confirmed that LRRTM1 is one of a small number of genes that are imprinted in the human genome, for which the maternally inherited copy is suppressed. The same variation at LRRTM1 that was associated paternally with mixed-/left-handedness was also over-transmitted paternally to schizophrenic patients in a large family study.
    LRRTM1 is expressed in specific regions of the developing and adult forebrain by post-mitotic neurons, and the protein may be involved in axonal trafficking. Thus LRRTM1 has a probable role in neurodevelopment, and its association with handedness suggests that one of its functions may be in establishing or consolidating human brain asymmetry.
    LRRTM1 is the first gene for which allelic variation has been associated with human handedness. The genetic data also suggest indirectly that the epigenetic regulation of this gene may yet prove more important than DNA sequence variation for influencing brain development and disease.
    Intriguingly, the parent-of-origin activity of LRRTM1 suggests that men and women have had conflicting interests in relation to the outcome of lateralized brain development in their offspring.
  • Friederici, A., & Levelt, W. J. M. (1987). Spatial description in microgravity: Aspects of cognitive adaptation. In P. R. Sahm, R. Jansen, & M. Keller (Eds.), Proceedings of the Norderney Symposium on Scientific Results of the German Spacelab Mission D1 (pp. 518-524). Köln, Germany: Wissenschaftliche Projektführung DI c/o DFVLR.
  • Friederici, A., & Levelt, W. J. M. (1987). Sprache. In K. Immelmann, K. Scherer, & C. Vogel (Eds.), Funkkolleg Psychobiologie (pp. 58-87). Weinheim: Beltz.
  • Furman, R., Ozyurek, A., & Küntay, A. C. (2010). Early language-specificity in Turkish children's caused motion event expressions in speech and gesture. In K. Franich, K. M. Iserman, & L. L. Keil (Eds.), Proceedings of the 34th Boston University Conference on Language Development. Volume 1 (pp. 126-137). Somerville, MA: Cascadilla Press.
  • Gamba, M., Raimondi, T., De Gregorio, C., Valente, D., Carugati, F., Cristiano, W., Ferrario, V., Torti, V., Favaro, L., Friard, O., Giacoma, C., & Ravignani, A. (2023). Rhythmic categories across primate vocal displays. In A. Astolfi, F. Asdrubali, & L. Shtrepi (Eds.), Proceedings of the 10th Convention of the European Acoustics Association Forum Acusticum 2023 (pp. 3971-3974). Torino: European Acoustics Association.

    Abstract

    The last few years have revealed that several species may share the building blocks of musicality with humans. The recognition of these building blocks (e.g., rhythm, frequency variation) was a necessary impetus for a new round of studies investigating rhythmic variation in animal vocal displays. Singing primates are a small group of primate species that produce modulated songs ranging from tens to thousands of vocal units. Previous studies showed that the indri, the only singing lemur, is currently the only known species that performs duets and choruses showing multiple rhythmic categories, as seen in human music. Rhythmic categories occur when temporal intervals between note onsets are not uniformly distributed, and rhythms with a small integer ratio between these intervals are typical of human music. Besides indris, white-handed gibbons and three crested gibbon species showed a prominent rhythmic category corresponding to a single small integer ratio, isochrony. This study reviews previous evidence on the co-occurrence of rhythmic categories in primates and focuses on the prospects for a comparative, multimodal study of rhythmicity in this clade.
  • Garcia, N., Lenkiewicz, P., Freire, M., & Monteiro, P. (2009). A new architecture for optical burst switching networks based on cooperative control. In Proceedings of the 8th IEEE International Symposium on Network Computing and Applications (IEEE NCA09) (pp. 310-313).

    Abstract

    This paper presents a new architecture for optical burst switched networks where the control plane of the network functions in a cooperative manner. Each node interprets the data conveyed by the control packet and forwards it to the next nodes, making the control plane of the network distribute the relevant information to all the nodes in the network. A cooperation transmission tree is used, thus allowing all the nodes to store the information related to the traffic management in the network, and enabling better network resource planning at each node. A model of this network architecture is proposed, and its performance is evaluated.
  • Gentner, D., & Bowerman, M. (2009). Why some spatial semantic categories are harder to learn than others: The typological prevalence hypothesis. In J. Guo, E. Lieven, N. Budwig, S. Ervin-Tripp, K. Nakamura, & S. Ozcaliskan (Eds.), Crosslinguistic approaches to the psychology of language: Research in the tradition of Dan Isaac Slobin (pp. 465-480). New York: Psychology Press.
  • Goldin-Meadow, S., Ozyurek, A., Sancar, B., & Mylander, C. (2009). Making language around the globe: A cross-linguistic study of homesign in the United States, China, and Turkey. In J. Guo, E. Lieven, N. Budwig, S. Ervin-Tripp, K. Nakamura, & S. Ozcaliskan (Eds.), Crosslinguistic approaches to the psychology of language: Research in the tradition of Dan Isaac Slobin (pp. 27-39). New York: Psychology Press.
  • Goldin-Meadow, S., Gentner, D., Ozyurek, A., & Gurcanli, O. (2009). Spatial language supports spatial cognition: Evidence from deaf homesigners [abstract]. Cognitive Processing, 10(Suppl. 2), S133-S134.
  • Goudbeek, M., & Broersma, M. (2010). The Demo/Kemo corpus: A principled approach to the study of cross-cultural differences in the vocal expression and perception of emotion. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC 2010) (pp. 2211-2215). Paris: ELRA.

    Abstract

    This paper presents the Demo/Kemo corpus of Dutch and Korean emotional speech. The corpus has been specifically developed for the purpose of cross-linguistic comparison, and is more balanced than any similar corpus available so far: a) it contains expressions by both Dutch and Korean actors as well as judgments by both Dutch and Korean listeners; b) the same elicitation technique and recording procedure was used for recordings of both languages; c) the same nonsense sentence, which was constructed to be permissible in both languages, was used for recordings of both languages; and d) the emotions present in the corpus are balanced in terms of valence, arousal, and dominance. The corpus contains a comparatively large number of emotions (eight) uttered by a large number of speakers (eight Dutch and eight Korean). The counterbalanced nature of the corpus will enable a stricter investigation of language-specific versus universal aspects of emotional expression than was possible so far. Furthermore, given the carefully controlled phonetic content of the expressions, it allows for analysis of the role of specific phonetic features in emotional expression in Dutch and Korean.
  • Green, K., Osei-Cobbina, C., Perlman, M., & Kita, S. (2023). Infants can create different types of iconic gestures, with and without parental scaffolding. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527188.

    Abstract

    Despite the early emergence of pointing, children are generally not documented to produce iconic gestures until later in development. Although research has described this developmental trajectory and the types of iconic gestures that emerge first, there has been limited focus on iconic gestures within interactional contexts. This study identified the first 10 iconic gestures produced by five monolingual English-speaking children in a naturalistic longitudinal video corpus and analysed the interactional contexts. We found children produced their first iconic gesture between 12 and 20 months and that gestural types varied. Although 34% of gestures could have been imitated or derived from adult or child actions in the preceding context, the majority were produced independently of any observed model. In these cases, adults often led the interaction in a direction where iconic gesture was an appropriate response. Overall, we find infants can represent a referent symbolically and possess a greater capacity for innovation than previously assumed. In order to develop our understanding of how children learn to produce iconic gestures, it is important to consider the immediate interactional context. Conducting naturalistic corpus analyses could be a more ecologically valid approach to understanding how children learn to produce iconic gestures in real life contexts.
  • Gubian, M., Torreira, F., Strik, H., & Boves, L. (2009). Functional data analysis as a tool for analyzing speech dynamics: A case study on the French word c'était. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 2199-2202).

    Abstract

    In this paper we introduce Functional Data Analysis (FDA) as a tool for analyzing dynamic transitions in speech signals. FDA makes it possible to perform statistical analyses of sets of mathematical functions in the same way as classical multivariate analysis treats scalar measurement data. We illustrate the use of FDA with a reduction phenomenon affecting the French word c'était /setε/ 'it was', which can be reduced to [stε] in conversational speech. FDA reveals that the dynamics of the transition from [s] to [t] in fully reduced cases may still be different from the dynamics of [s] - [t] transitions in underlying /st/ clusters such as in the word stage.
  • Gubian, M., Bergmann, C., & Boves, L. (2010). Investigating word learning processes in an artificial agent. In Proceedings of the 9th IEEE International Conference on Development and Learning (ICDL). Ann Arbor, MI, 18-21 Aug. 2010 (pp. 178-184). IEEE.

    Abstract

    Researchers in human language processing and acquisition are making increasing use of computational models. Computer simulations provide a valuable platform to reproduce hypothesised learning mechanisms that are otherwise very difficult, if not impossible, to verify on human subjects. However, computational models come with problems and risks. It is difficult to (automatically) extract essential information about the developing internal representations from a set of simulation runs, and often researchers limit themselves to analysing learning curves based on empirical recognition accuracy through time. The associated risk is to erroneously deem a specific learning behaviour as generalisable to human learners, while it could also be a mere consequence (artifact) of the implementation of the artificial learner or of the input coding scheme. In this paper a set of simulation runs taken from the ACORNS project is investigated. First a look `inside the box' of the learner is provided by employing novel quantitative methods for analysing changing structures in large data sets. Then, the obtained findings are discussed in the perspective of their ecological validity in the field of child language acquisition.
  • Le Guen, O. (2009). Geocentric gestural deixis among Yucatecan Maya (Quintana Roo, México). In 18th IACCP Book of Selected Congress Papers (pp. 123-136). Athens, Greece: Pedio Books Publishing.
  • Le Guen, O. (2009). The ethnography of emotions: A field worker's guide. In A. Majid (Ed.), Field manual volume 12 (pp. 31-34). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.446076.

    Abstract

    The goal of this task is to investigate cross-cultural emotion categories in language and thought. This entry is designed to provide researchers with some guidelines to describe the emotional repertoire of a community from an emic perspective. The first objective is to offer ethnographic tools and a questionnaire in order to understand the semantics of emotional terms and the local conception of emotions. The second objective is to identify the local display rules of emotions in communicative interactions.
  • Gullberg, M., Roberts, L., Dimroth, C., Veroude, K., & Indefrey, P. (2010). Adult language learning after minimal exposure to an unknown natural language. In M. Gullberg, & P. Indefrey (Eds.), The earliest stages of language learning (pp. 5-24). Malden, MA: Wiley-Blackwell.
  • Gullberg, M., De Bot, K., & Volterra, V. (2010). Gestures and some key issues in the study of language development. In M. Gullberg, & K. De Bot (Eds.), Gestures in language development (pp. 3-33). Amsterdam: Benjamins.
  • Gullberg, M., Indefrey, P., & Muysken, P. (2009). Research techniques for the study of code-switching. In B. E. Bullock, & J. A. Toribio (Eds.), The Cambridge handbook on linguistic code-switching (pp. 21-39). Cambridge: Cambridge University Press.

    Abstract

    The aim of this chapter is to provide researchers with a tool kit of semi-experimental and experimental techniques for studying code-switching. It presents an overview of the current off-line and on-line research techniques, ranging from analyses of published bilingual texts of spontaneous conversations, to tightly controlled experiments. A multi-task approach used for studying code-switched sentence production in Papiamento-Dutch bilinguals is also exemplified.
  • Gullberg, M. (2009). Why gestures are relevant to the bilingual mental lexicon. In A. Pavlenko (Ed.), The bilingual mental lexicon: Interdisciplinary approaches (pp. 161-184). Clevedon: Multilingual Matters.

    Abstract

    Gestures, the symbolic movements speakers perform while they speak, are systematically related to speech and language in non-trivial ways. This chapter presents an overview of what gestures can and cannot tell us about the monolingual and the bilingual mental lexicon. Gesture analysis opens up a broader view of the mental lexicon, targeting the interface between conceptual, semantic and syntactic aspects of event construal, and offers new possibilities for examining how languages co-exist and interact in bilinguals beyond the level of surface forms. The first section of this chapter gives a brief introduction to gesture studies and outlines the current views on the relationship between gesture, speech, and language. The second section targets the key questions for the study of the monolingual and bilingual lexicon, and illustrates the methods employed for addressing these questions. It further exemplifies systematic cross-linguistic patterns in gestural behaviour in monolingual and bilingual contexts. The final section discusses some implications of an expanded view of the multilingual lexicon that includes gesture, and outlines directions for future inquiry.

  • Hagoort, P. (2009). The fractionation of spoken language understanding by measuring electrical and magnetic brain signals. In B. C. J. Moore, L. K. Tyler, & W. Marslen-Wilson (Eds.), The perception of speech: From sound to meaning (pp. 223-248). New York: Oxford University Press.
  • Hagoort, P., & Indefrey, P. (1997). De neurale architectuur van het menselijk taalvermogen. In H. Peters (Ed.), Handboek stem-, spraak-, en taalpathologie (pp. 1-36). Houten: Bohn Stafleu Van Loghum.
  • Hagoort, P. (2009). Reflections on the neurobiology of syntax. In D. Bickerton, & E. Szathmáry (Eds.), Biological foundations and origin of syntax (pp. 279-296). Cambridge, MA: MIT Press.

    Abstract

    This contribution focuses on the neural infrastructure for parsing and syntactic encoding. From an anatomical point of view, it is argued that Broca's area is an ill-conceived notion. Functionally, Broca's area and adjacent cortex (together Broca's complex) are relevant for language, but not exclusively for this domain of cognition. Its role can be characterized as providing the necessary infrastructure for unification (syntactic and semantic). A general proposal, but with the required level of computational detail, is discussed to account for the distribution of labor between different components of the language network in the brain. Arguments are provided for the immediacy principle, which denies a privileged status for syntax in sentence processing. The temporal profile of event-related brain potentials (ERPs) is suggested to require predictive processing. Finally, since, next to speed, diversity is a hallmark of human languages, the language readiness of the brain might not depend on a universal, dedicated neural machinery for syntax, but rather on a shaping of the neural infrastructure of more general cognitive systems (e.g., memory, unification) in a direction that made it optimally suited for the purpose of communication through language.
  • Hagoort, P., & Van Turennout, M. (1997). The electrophysiology of speaking: Possibilities of event-related potential research for speech production. In W. Hulstijn, H. Peters, & P. Van Lieshout (Eds.), Speech motor production and fluency disorders: Brain research in speech production (pp. 351-361). Amsterdam: Elsevier.
  • Hagoort, P., Baggio, G., & Willems, R. M. (2009). Semantic unification. In M. S. Gazzaniga (Ed.), The cognitive neurosciences, 4th ed. (pp. 819-836). Cambridge, MA: MIT Press.

    Abstract

    Language and communication are about the exchange of meaning. A key feature of understanding and producing language is the construction of complex meaning from more elementary semantic building blocks. The functional characteristics of this semantic unification process are revealed by studies using event related brain potentials. These studies have found that word meaning is assembled into compound meaning in not more than 500 ms. World knowledge, information about the speaker, co-occurring visual input and discourse all have an immediate impact on semantic unification, and trigger similar electrophysiological responses as sentence-internal semantic information. Neuroimaging studies show that a network of brain areas, including the left inferior frontal gyrus, the left superior/middle temporal cortex, the left inferior parietal cortex and, to a lesser extent, their right hemisphere homologues, are recruited to perform semantic unification.
  • Hagoort, P. (2009). Taalontwikkeling: Meer dan woorden alleen. In M. Evenblij (Ed.), Brein in beeld: Beeldvorming bij hersenonderzoek (pp. 53-57). Den Haag: Stichting Bio-Wetenschappen en Maatschappij.
  • Hagoort, P., & Wassenaar, M. (1997). Taalstoornissen: Van theorie tot therapie. In B. Deelman, P. Eling, E. De Haan, A. Jennekens, & A. Van Zomeren (Eds.), Klinische Neuropsychologie (pp. 232-248). Meppel: Boom.
  • Hagoort, P. (1998). The shadows of lexical meaning in patients with semantic impairments. In B. Stemmer, & H. Whitaker (Eds.), Handbook of neurolinguistics (pp. 235-248). New York: Academic Press.
  • Hagoort, P. (1997). Zonder fosfor geen gedachten: Gagarin, geest en brein. In Brain & Mind (pp. 6-14). Utrecht: Reünistenvereniging Veritas.
  • Hamans, C., & Seuren, P. A. M. (2010). Chomsky in search of a pedigree. In D. A. Kibbee (Ed.), Chomskyan (R)evolutions (pp. 377-394). Amsterdam/Philadelphia: Benjamins.

    Abstract

    This paper follows the changing fortunes of Chomsky’s search for a pedigree in the history of Western thought during the late 1960s. Having achieved a unique position of supremacy in the theory of syntax and having exploited that position far beyond the narrow circles of professional syntacticians, he felt the need to shore up his theory with the authority of history. It is shown that this attempt, resulting mainly in his Cartesian Linguistics of 1966, was widely, and rightly, judged to be a radical failure, even though it led to a sudden revival of interest in the history of linguistics. Ironically, the very upswing in historical studies caused by Cartesian Linguistics ended up showing that the real pedigree belongs to Generative Semantics, developed by the same ‘angry young men’ Chomsky was so bent on destroying.
  • Hammarström, H. (2010). Rarities in numeral systems. In J. Wohlgemuth, & M. Cysouw (Eds.), Rethinking universals. How rarities affect linguistic theory (pp. 11-60). Berlin: De Gruyter.
  • Hanique, I., Schuppler, B., & Ernestus, M. (2010). Morphological and predictability effects on schwa reduction: The case of Dutch word-initial syllables. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 933-936).

    Abstract

    This corpus-based study shows that the presence and duration of schwa in Dutch word-initial syllables are affected by a word’s predictability and its morphological structure. Schwa is less reduced in words that are more predictable given the following word. In addition, schwa may be longer if the syllable forms a prefix, and in prefixes the duration of schwa is positively correlated with the frequency of the word relative to its stem. Our results suggest that the conditions which favor reduced realizations are more complex than one would expect on the basis of the current literature.
  • Hanulikova, A., & Davidson, D. (2009). Inflectional entropy in Slovak. In J. Levicka, & R. Garabik (Eds.), Slovko 2009, NLP, Corpus Linguistics, Corpus Based Grammar Research (pp. 145-151). Bratislava, Slovakia: Slovak Academy of Sciences.
  • Hanulikova, A., & Weber, A. (2009). Experience with foreign accent influences non-native (L2) word recognition: The case of th-substitutions [Abstract]. Journal of the Acoustical Society of America, 125(4), 2762-2762.
  • Hanulikova, A., & Weber, A. (2010). Production of English interdental fricatives by Dutch, German, and English speakers. In K. Dziubalska-Kołaczyk, M. Wrembel, & M. Kul (Eds.), Proceedings of the 6th International Symposium on the Acquisition of Second Language Speech, New Sounds 2010, Poznań, Poland, 1-3 May 2010 (pp. 173-178). Poznan: Adam Mickiewicz University.

    Abstract

    Non-native (L2) speakers of English often experience difficulties in producing English interdental fricatives (e.g. the voiceless [θ]), and this leads to frequent substitutions of these fricatives (e.g. with [t], [s], and [f]). Differences in the choice of [θ]-substitutions across L2 speakers with different native (L1) language backgrounds have been extensively explored. However, even within one foreign accent, more than one substitution choice occurs, but this has been less systematically studied. Furthermore, little is known about whether the substitutions of voiceless [θ] are phonetically clear instances of [t], [s], and [f], as they are often labelled. In this study, we attempted a phonetic approach to examine language-specific preferences for [θ]-substitutions by carrying out acoustic measurements of L1 and L2 realizations of these sounds. To this end, we collected a corpus of spoken English with L1 speakers (UK-English), and Dutch and German L2 speakers. We show a) that the distribution of differential substitutions using identical materials differs between Dutch and German L2 speakers, b) that [t,s,f]-substitutes differ acoustically from intended [t,s,f], and c) that L2 productions of [θ] are acoustically comparable to L1 productions.
  • Hanulikova, A. (2009). The role of syllabification in the lexical segmentation of German and Slovak. In S. Fuchs, H. Loevenbruck, D. Pape, & P. Perrier (Eds.), Some aspects of speech and the brain (pp. 331-361). Frankfurt am Main: Peter Lang.

    Abstract

    Two experiments were carried out to examine the syllable affiliation of intervocalic consonant clusters and their effects on speech segmentation in two different languages. In a syllable reversal task, Slovak and German speakers divided bisyllabic non-words that were presented aurally into two parts, starting with the second syllable. Following the maximal onset principle, intervocalic consonants should be maximally assigned to the onset of the following syllable in conformity with language-specific restrictions, e.g., /du.gru/, /zu.kro:/ (dot indicates a syllable boundary). According to German phonology, syllables require branching rhymes (hence, /zuk.ro:/). In Slovak, both /du.gru/ and /dug.ru/ are possible syllabifications. Experiment 1 showed that German speakers more often closed the first syllable (/zuk.ro:/), following the requirement for a branching rhyme. In Experiment 2, Slovak speakers showed no clear preference; the first syllable was either closed (/dug.ru/) or open (/du.gru/). Correlation analyses on previously conducted word-spotting studies (Hanulíková, in press, 2008) suggest that speech segmentation is unaffected by these syllabification preferences.
  • Harbusch, K., & Kempen, G. (2009). Clausal coordinate ellipsis and its varieties in spoken German: A study with the TüBa-D/S Treebank of the VERBMOBIL corpus. In M. Passarotti, A. Przepiórkowski, S. Raynaud, & F. Van Eynde (Eds.), Proceedings of the The Eighth International Workshop on Treebanks and Linguistic Theories (pp. 83-94). Milano: EDUCatt.
  • Harbusch, K., & Kempen, G. (2009). Generating clausal coordinate ellipsis multilingually: A uniform approach based on postediting. In 12th European Workshop on Natural Language Generation: Proceedings of the Workshop (pp. 138-145). The Association for Computational Linguistics.

    Abstract

    Present-day sentence generators are often incapable of producing a wide variety of well-formed elliptical versions of coordinated clauses, in particular, of combined elliptical phenomena (Gapping, Forward and Backward Conjunction Reduction, etc.). The applicability of the various types of clausal coordinate ellipsis (CCE) presupposes detailed comparisons of the syntactic properties of the coordinated clauses. These nonlocal comparisons argue against approaches based on local rules that treat CCE structures as special cases of clausal coordination. We advocate an alternative approach where CCE rules take the form of postediting rules applicable to nonelliptical structures. The advantage is not only a higher level of modularity but also applicability to languages belonging to different language families. We describe a language-neutral module (called Elleipo; implemented in JAVA) that generates as output all major CCE versions of coordinated clauses. Elleipo takes as input linearly ordered nonelliptical coordinated clauses annotated with lexical identity and coreferentiality relationships between words and word groups in the conjuncts. We demonstrate the feasibility of a single set of postediting rules that attains multilingual coverage.
  • Heeschen, V., Eibl-Eibesfeldt, I., Grammer, K., Schiefenhövel, W., & Senft, G. (1986). Sprachliches Verhalten. In Generalverwaltung der MPG (Ed.), Max-Planck-Gesellschaft Jahrbuch 1986 (pp. 394-396). Göttingen: Vandenhoeck and Ruprecht.
  • Hill, C. (2010). Emergency language documentation teams: The Cape York Peninsula experience. In J. Hobson, K. Lowe, S. Poetsch, & M. Walsh (Eds.), Re-awakening languages: Theory and practice in the revitalisation of Australia’s Indigenous languages (pp. 418-432). Sydney: Sydney University Press.
  • Holler, J. (2010). Speakers’ use of interactive gestures to mark common ground. In S. Kopp, & I. Wachsmuth (Eds.), Gesture in embodied communication and human-computer interaction. 8th International Gesture Workshop, Bielefeld, Germany, 2009; Selected Revised Papers (pp. 11-22). Heidelberg: Springer Verlag.
  • Hulten, A. (2010). Sanan tuottaminen [Word production]. In Kieli ja aivot [Language and the Brain - Textbook series] (pp. 106-116).
  • Hurford, J. R., & Dediu, D. (2009). Diversity in language, genes and the language faculty. In R. Botha, & C. Knight (Eds.), The cradle of language (pp. 167-188). Oxford: Oxford University Press.
  • Indefrey, P. (1997). PET research in language production. In W. Hulstijn, H. F. M. Peters, & P. H. H. M. Van Lieshout (Eds.), Speech production: motor control, brain research and fluency disorders (pp. 269-278). Amsterdam: Elsevier.

    Abstract

    The aim of this paper is to discuss an inherent difficulty of PET (and fMRI) research in language production. On the one hand, language production presupposes some degree of freedom for the subject, on the other hand, interpretability of results presupposes restrictions of this freedom. This difficulty is reflected in the existing PET literature in some neglect of the general principle to design experiments in such a way that the results do not allow for alternative interpretations. It is argued that by narrowing down the scope of experiments a gain in interpretability can be achieved.
  • Indefrey, P., & Gullberg, M. (2010). The earliest stages of language learning: Introduction. In M. Gullberg, & P. Indefrey (Eds.), The earliest stages of language learning (pp. 1-4). Malden, MA: Wiley-Blackwell.
  • Indefrey, P., & Davidson, D. J. (2009). Second language acquisition. In L. R. Squire (Ed.), Encyclopedia of neuroscience (pp. 517-523). London: Academic Press.

    Abstract

    This article reviews neurocognitive evidence on second language (L2) processing at speech sound, word, and sentence levels. Hemodynamic (functional magnetic resonance imaging and positron emission tomography) data suggest that L2s are implemented in the same brain structures as the native language but with quantitative differences in the strength of activation that are modulated by age of L2 acquisition and L2 proficiency. Electrophysiological data show a more complex pattern of first language (L1) and L2 similarities and differences, providing some, although not conclusive, evidence for qualitative differences between L1 and L2 syntactic processing.
  • Jadoul, Y., Düngen, D., & Ravignani, A. (2023). Live-tracking acoustic parameters in animal behavioural experiments: Interactive bioacoustics with parselmouth. In A. Astolfi, F. Asdrubali, & L. Shtrepi (Eds.), Proceedings of the 10th Convention of the European Acoustics Association Forum Acusticum 2023 (pp. 4675-4678). Torino: European Acoustics Association.

    Abstract

    Most bioacoustics software is used to analyse already-collected acoustic data in batch, i.e., after the data-collecting phase of a scientific study. However, experiments based on animal training require immediate and precise reactions from the experimenter, and thus do not easily dovetail with a typical bioacoustics workflow. Bridging this methodological gap, we have developed a custom application to live-monitor the vocal development of harbour seals in a behavioural experiment. In each trial, the application records and automatically detects an animal's call, and immediately measures duration and acoustic measures such as intensity, fundamental frequency, or formant frequencies. It then displays a spectrogram of the recording and the acoustic measurements, allowing the experimenter to instantly evaluate whether or not to reinforce the animal's vocalisation. From a technical perspective, the rapid and easy development of this custom software was made possible by combining multiple open-source software projects. Here, we integrated the acoustic analyses from Parselmouth, a Python library for Praat, together with PyAudio and Matplotlib's recording and plotting functionality, into a custom graphical user interface created with PyQt. This flexible recombination of different open-source Python libraries allows the whole program to be written in a mere couple of hundred lines of code.
  • Janse, E. (2009). Hearing and cognitive measures predict elderly listeners' difficulty ignoring competing speech. In M. Boone (Ed.), Proceedings of the International Conference on Acoustics (pp. 1532-1535).
  • Järvikivi, J., & Pyykkönen, P. (2010). Lauseiden ymmärtäminen [Engl. Sentence comprehension]. In P. Korpilahti, O. Aaltonen, & M. Laine (Eds.), Kieli ja aivot: Kommunikaation perusteet, häiriöt ja kuntoutus (pp. 117-125). Turku: Turku yliopisto.

    Abstract

    When we listen to speech or read text, we immediately begin to construct a coherent interpretation. Unlike in reading, in speech perception the listener can rarely control the rate at which he or she is spoken to. Despite the very rapid input, about 4-7 syllables per second, people are able to interpret speech quite effortlessly. Research on sentence comprehension therefore investigates how such a rapid and usually effortless interpretation process takes place, which cognitive processes participate in real-time interpretation, and what kind of information people exploit at each stage of processing to form a coherent interpretation. This chapter reviews the processes of sentence comprehension and their study. We briefly discuss processing models, the relationship between adult and child language, the interpretation of referential relations within and between sentences, and the role of the sensory environment and motor action in the process of sentence interpretation.
  • Jasmin, K., & Casasanto, D. (2010). Stereotyping: How the QWERTY keyboard shapes the mental lexicon [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 159). York: University of York.
  • Jesse, A., Reinisch, E., & Nygaard, L. C. (2010). Learning of adjectival word meaning through tone of voice [Abstract]. Journal of the Acoustical Society of America, 128, 2475.

    Abstract

    Speakers express word meaning through systematic but non-canonical acoustic variation of tone of voice (ToV), i.e., variation of speaking rate, pitch, vocal effort, or loudness. Words are, for example, pronounced at a higher pitch when referring to small than to big referents. In the present study, we examined whether listeners can use ToV to learn the meaning of novel adjectives (e.g., “blicket”). During training, participants heard sentences such as “Can you find the blicket one?” spoken with ToV representing hot-cold, strong-weak, and big-small. Participants’ eye movements to two simultaneously shown objects with properties representing the relevant two endpoints (e.g., an elephant and an ant for big-small) were monitored. Assignment of novel adjectives to endpoints was counterbalanced across participants. During test, participants heard the sentences spoken with a neutral ToV, while seeing old or novel picture pairs varying along the same dimensions (e.g., a truck and a car for big-small). Participants had to click on the adjective’s referent. As evident from eye movements, participants did not infer the intended meaning during first exposure, but learned the meaning with the help of ToV during training. At test listeners applied this knowledge to old and novel items even in the absence of informative ToV.
  • Jesse, A., & Janse, E. (2009). Visual speech information aids elderly adults in stream segregation. In B.-J. Theobald, & R. Harvey (Eds.), Proceedings of the International Conference on Auditory-Visual Speech Processing 2009 (pp. 22-27). Norwich, UK: School of Computing Sciences, University of East Anglia.

    Abstract

    Listening to a speaker while another speaker is talking is a challenging task for elderly listeners. We show that elderly listeners over the age of 65 with various degrees of age-related hearing loss benefit in this situation from also seeing the speaker they intend to listen to. In a phoneme monitoring task, listeners monitored the speech of a target speaker for either the phoneme /p/ or /k/ while simultaneously hearing a competing speaker. Critically, on some trials, the target speaker was also visible. Elderly listeners benefited in their response times and accuracy levels from seeing the target speaker when monitoring for the less visible /k/, but more so when monitoring for the highly visible /p/. Visual speech therefore aids elderly listeners not only by providing segmental information about the target phoneme, but also by providing more global information that allows for better performance in this adverse listening situation.
  • Jolink, A. (2009). Finiteness in children with SLI: A functional approach. In C. Dimroth, & P. Jordens (Eds.), Functional categories in learner language (pp. 235-260). Berlin: Mouton de Gruyter.
  • Jordanoska, I. (2023). Focus marking and size in some Mande and Atlantic languages. In N. Sumbatova, I. Kapitonov, M. Khachaturyan, S. Oskolskaya, & V. Verhees (Eds.), Songs and Trees: Papers in Memory of Sasha Vydrina (pp. 311-343). St. Petersburg: Institute for Linguistic Studies and Russian Academy of Sciences.

    Abstract

    This paper compares the focus marking systems and the focus size that can be expressed by the different focus markings in four Mande and three Atlantic languages and varieties, namely: Bambara, Dyula, Kakabe, Soninke (Mande), Wolof, Jóola Foñy and Jóola Karon (Atlantic). All of these languages are known to mark focus morphosyntactically, rather than prosodically, as the more well-studied Germanic languages do. However, the Mande languages under discussion use only morphology, in the form of a particle that follows the focus, while the Atlantic ones use a more complex morphosyntactic system in which focus is marked by morphology in the verbal complex and movement of the focused term. It is shown that while there are some syntactic restrictions to how many different focus sizes can be marked in a distinct way, there is also a certain degree of arbitrariness as to which focus sizes are marked in the same way as each other.
  • Jordens, P. (1998). Defaultformen des Präteritums. Zum Erwerb der Vergangenheitsmorphologie im Niederländischen. In H. Wegener (Ed.), Eine zweite Sprache lernen (pp. 61-88). Tübingen, Germany: Verlag Gunter Narr.
  • Jordens, P. (2009). The acquisition of functional categories in child L1 and adult L2 acquisition. In C. Dimroth, & P. Jordens (Eds.), Functional categories in learner language (pp. 45-96). Berlin: Mouton de Gruyter.
  • Junge, C., Hagoort, P., Kooijman, V., & Cutler, A. (2010). Brain potentials for word segmentation at seven months predict later language development. In K. Franich, K. M. Iserman, & L. L. Keil (Eds.), Proceedings of the 34th Annual Boston University Conference on Language Development. Volume 1 (pp. 209-220). Somerville, MA: Cascadilla Press.
  • Junge, C., Cutler, A., & Hagoort, P. (2010). Ability to segment words from speech as a precursor of later language development: Insights from electrophysiological responses in the infant brain. In M. Burgess, J. Davey, C. Don, & T. McMinn (Eds.), Proceedings of 20th International Congress on Acoustics, ICA 2010. Incorporating Proceedings of the 2010 annual conference of the Australian Acoustical Society (pp. 3727-3732). Australian Acoustical Society, NSW Division.
  • Kanakanti, M., Singh, S., & Shrivastava, M. (2023). MultiFacet: A multi-tasking framework for speech-to-sign language generation. In E. André, M. Chetouani, D. Vaufreydaz, G. Lucas, T. Schultz, L.-P. Morency, & A. Vinciarelli (Eds.), ICMI '23 Companion: Companion Publication of the 25th International Conference on Multimodal Interaction (pp. 205-213). New York: ACM. doi:10.1145/3610661.3616550.

    Abstract

    Sign language is a rich form of communication, uniquely conveying meaning through a combination of gestures, facial expressions, and body movements. Existing research in sign language generation has predominantly focused on text-to-sign pose generation, while speech-to-sign pose generation remains relatively underexplored. Speech-to-sign language generation models can facilitate effective communication between the deaf and hearing communities. In this paper, we propose an architecture that utilises prosodic information from speech audio and semantic context from text to generate sign pose sequences. In our approach, we adopt a multi-tasking strategy that involves an additional task of predicting Facial Action Units (FAUs). FAUs capture the intricate facial muscle movements that play a crucial role in conveying specific facial expressions during sign language generation. We train our models on an existing Indian Sign language dataset that contains sign language videos with audio and text translations. To evaluate our models, we report Dynamic Time Warping (DTW) and Probability of Correct Keypoints (PCK) scores. We find that combining prosody and text as input, along with incorporating facial action unit prediction as an additional task, outperforms previous models in both DTW and PCK scores. We also discuss the challenges and limitations of speech-to-sign pose generation models to encourage future research in this domain. We release our models, results and code to foster reproducibility and encourage future research.
  • Kempen, G., Anbeek, G., Desain, P., Konst, L., & De Smedt, K. (1987). Author environments: Fifth generation text processors. In Commission of the European Communities. Directorate-General for Telecommunications, Information Industries, and Innovation (Ed.), Esprit'86: Results and achievements (pp. 365-372). Amsterdam: Elsevier Science Publishers.
  • Kempen, G. (1986). Beyond word processing. In E. Cluff, & G. Bunting (Eds.), Information management yearbook 1986 (pp. 178-181). London: IDPM Publications.