Publications

  • Lei, L., Raviv, L., & Alday, P. M. (2020). Using spatial visualizations and real-world social networks to understand language evolution and change. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 252-254). Nijmegen: The Evolution of Language Conferences.
  • Lenkiewicz, P., Auer, E., Schreer, O., Masneri, S., Schneider, D., & Tschöpe, S. (2012). AVATecH ― automated annotation through audio and video analysis. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 209-214). European Language Resources Association.

    Abstract

    In many fields of the humanities, such as linguistics, psychology, and anthropology, annotations of multimodal resources are a necessary component of the research workflow. However, creating these annotations is a very laborious task, which can take 50 to 100 times the length of the annotated media, or more. This can be significantly improved by applying innovative audio and video processing algorithms, which analyze the recordings and provide automated annotations. This is the aim of the AVATecH project, a collaboration between the Max Planck Institute for Psycholinguistics (MPI) and the Fraunhofer institutes HHI and IAIS. In this paper we present a set of results of automated annotation together with an evaluation of their quality.
  • Lenkiewicz, P., Pereira, M., Freire, M. M., & Fernandes, J. (2009). A new 3D image segmentation method for parallel architectures. In Proceedings of the 2009 IEEE International Conference on Multimedia and Expo [ICME 2009] June 28 – July 3, 2009, New York (pp. 1813-1816).

    Abstract

    This paper presents a novel model for 3D image segmentation and reconstruction. It has been designed with the aim of being implemented on a computer cluster or a multi-core platform. The required features include nearly complete independence between the processes participating in the segmentation task and a distribution of work that is as equal as possible across all participants. As a result, it avoids many drawbacks often encountered when parallelizing an algorithm that was constructed to operate in a sequential manner. Furthermore, the proposed algorithm based on the new segmentation model is efficient and shows very good, nearly linear performance growth with the number of processing units.
  • Lenkiewicz, A., Lis, M., & Lenkiewicz, P. (2012). Linguistic concepts described with Media Query Language for automated annotation. In J. C. Meiser (Ed.), Digital Humanities 2012 Conference Abstracts. University of Hamburg, Germany; July 16–22, 2012 (pp. 477-479).

    Abstract

    Human spoken communication is multimodal, i.e. it encompasses both speech and gesture. Acoustic properties of voice, body movements, facial expression, etc. are an inherent and meaningful part of spoken interaction; they can provide attitudinal, grammatical and semantic information. In recent years interest in audio-visual corpora has risen rapidly, as such corpora enable investigation of different communicative modalities and provide a more holistic view on communication (Kipp et al. 2009). Moreover, for some languages such corpora are the only available resource, as is the case for endangered languages for which no written resources exist.
  • Lenkiewicz, P., Pereira, M., Freire, M., & Fernandes, J. (2009). The dynamic topology changes model for unsupervised image segmentation. In Proceedings of the 11th IEEE International Workshop on Multimedia Signal Processing (MMSP'09) (pp. 1-5).

    Abstract

    Deformable models are a popular family of image segmentation techniques that has gained significant attention in the last two decades, serving both in real-world applications and as the basis for research work. One of the features that deformable models offer, and that is considered highly desirable, is the ability to change their topology during the segmentation process. Using this characteristic it is possible to segment objects with discontinuities in their bodies or to detect an undefined number of objects in the scene. In this paper we present our model for handling topology changes in image segmentation methods based on the Active Volumes solution. This model is capable of performing changes in the structure of objects while the segmentation progresses, which makes it efficient and suitable for implementation on powerful execution environments such as multi-core architectures or computer clusters.
  • Lenkiewicz, P., Pereira, M., Freire, M., & Fernandes, J. (2009). The whole mesh Deformation Model for 2D and 3D image segmentation. In Proceedings of the 2009 IEEE International Conference on Image Processing (ICIP 2009) (pp. 4045-4048).

    Abstract

    In this paper we present a novel approach to image segmentation using Active Nets and Active Volumes. These solutions are based on Deformable Models, with a slight difference in the method for describing the shapes of interest: instead of using a contour or a surface, they represent the segmented objects with a mesh structure, which makes it possible not only to describe the surface of the objects but also to model their interiors. This is obtained by dividing the nodes of the mesh into two categories, internal and external, which are responsible for two different tasks. In our new approach we propose to remove this separation and use only one type of node. With this assumption we manage to significantly shorten the segmentation time while maintaining its quality.
  • Lenkiewicz, P., Van Uytvanck, D., Wittenburg, P., & Drude, S. (2012). Towards automated annotation of audio and video recordings by application of advanced web-services. In Proceedings of INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association (pp. 1880-1883).

    Abstract

    In this paper we describe audio and video processing algorithms developed within the scope of the AVATecH project. The purpose of these algorithms is to shorten the time taken by manual annotation of audio and video recordings by extracting features from media files and creating semi-automated annotations. We show that the use of such supporting algorithms can shorten the annotation time to 30-50% of the time necessary to perform a fully manual annotation of the same kind.
  • Levelt, W. J. M. (1994). Psycholinguistics. In A. M. Colman (Ed.), Companion Encyclopedia of Psychology: Vol. 1 (pp. 319-337). London: Routledge.

    Abstract

    Linguistic skills are primarily tuned to the proper conduct of conversation. The innate ability to converse has provided our species with a capacity to share moods, attitudes, and information of almost any kind, to assemble knowledge and skills, to plan coordinated action, to educate our offspring, in short, to create and transmit culture. In conversation the interlocutors are involved in negotiating meaning. Speaking is our most complex cognitive-motor skill. It involves the conception of an intention, the selection of information whose expression will make that intention recognizable, the selection of appropriate words, the construction of a syntactic framework, the retrieval of the words’ sound forms, and the computation of an articulatory plan for each word and for the utterance as a whole. The question of where communicative intentions come from is a psychodynamic question rather than a psycholinguistic one. Speaking is a form of social action, and it is in the context of action that intentions, goals, and subgoals develop.
  • Levelt, W. J. M. (1989). De connectionistische mode: Symbolische en subsymbolische modellen van het menselijk gedrag. In C. M. Brown, P. Hagoort, & T. Meijering (Eds.), Vensters op de geest: Cognitie op het snijvlak van filosofie en psychologie (pp. 202-219). Utrecht: Stichting Grafiet.
  • Levelt, W. J. M. (1984). Geesteswetenschappelijke theorie als kompas voor de gangbare mening. In S. Dresden, & D. Van de Kaa (Eds.), Wetenschap ten goede en ten kwade (pp. 42-52). Amsterdam: North Holland.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (pp. 55-55).
  • Levelt, W. J. M. (1994). On the skill of speaking: How do we access words? In Proceedings ICSLP 94 (pp. 2253-2258). Yokohama: The Acoustical Society of Japan.
  • Levelt, W. J. M. (1994). Onder woorden brengen: Beschouwingen over het spreekproces. In Haarlemse voordrachten: voordrachten gehouden in de Hollandsche Maatschappij der Wetenschappen te Haarlem. Haarlem: Hollandsche maatschappij der wetenschappen.
  • Levelt, W. J. M. (2020). The alpha and omega of Jerome Bruner's contributions to the Max Planck Institute for Psycholinguistics. In M. E. Poulsen (Ed.), The Jerome Bruner Library: From New York to Nijmegen (pp. 11-18). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    Presentation at the official opening of the Jerome Bruner Library, January 8th, 2020.
  • Levelt, W. J. M. (1984). Some perceptual limitations on talking about space. In A. J. Van Doorn, W. A. Van de Grind, & J. J. Koenderink (Eds.), Limits in perception (pp. 323-358). Utrecht: VNU Science Press.
  • Levelt, W. J. M. (1984). Spontaneous self-repairs in speech: Processes and representations. In M. P. R. Van den Broecke, & A. Cohen (Eds.), Proceedings of the 10th International Congress of Phonetic Sciences (pp. 105-117). Dordrecht: Foris.
  • Levelt, W. J. M. (1994). The skill of speaking. In P. Bertelson, P. Eelen, & G. d'Ydewalle (Eds.), International perspectives on psychological science: Vol. 1. Leading themes (pp. 89-103). Hove: Erlbaum.
  • Levelt, W. J. M. (1983). The speaker's organization of discourse. In Proceedings of the XIIIth International Congress of Linguists (pp. 278-290).
  • Levelt, W. J. M. (1994). What can a theory of normal speaking contribute to AAC? In ISAAC '94 Conference Book and Proceedings. Hoensbroek: IRV.
  • Levelt, W. J. M. (1989). Working models of perception: Five general issues. In B. A. Elsendoorn, & H. Bouma (Eds.), Working models of perception (pp. 489-503). London: Academic Press.
  • Levinson, S. C., & Wilkins, D. P. (2006). Patterns in the data: Towards a semantic typology of spatial description. In S. C. Levinson, & D. P. Wilkins (Eds.), Grammars of space: Explorations in cognitive diversity (pp. 512-552). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2006). On the human "interaction engine". In N. J. Enfield, & S. C. Levinson (Eds.), Roots of human sociality: Culture, cognition and interaction (pp. 39-69). Oxford: Berg.
  • Levinson, S. C., & Wilkins, D. P. (2006). The background to the study of the language of space. In S. C. Levinson, & D. P. Wilkins (Eds.), Grammars of space: Explorations in cognitive diversity (pp. 1-23). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2006). The language of space in Yélî Dnye. In S. C. Levinson, & D. P. Wilkins (Eds.), Grammars of space: Explorations in cognitive diversity (pp. 157-203). Cambridge: Cambridge University Press.
  • Levinson, S. C. (1989). Conversation. In E. Barnouw (Ed.), International encyclopedia of communications (pp. 407-410). New York: Oxford University Press.
  • Levinson, S. C. (1994). Deixis. In R. E. Asher (Ed.), Encyclopedia of language and linguistics (pp. 853-857). Oxford: Pergamon Press.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C. (2009). Cognitive anthropology. In G. Senft, J. O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 50-57). Amsterdam: Benjamins.
  • Levinson, S. C. (2006). Introduction: The evolution of culture in a microcosm. In S. C. Levinson, & P. Jaisson (Eds.), Evolution and culture: A Fyssen Foundation Symposium (pp. 1-41). Cambridge: MIT Press.
  • Levinson, S. C. (2009). Foreword. In J. Liep (Ed.), A Papuan plutocracy: Ranked exchange on Rossel Island (pp. ix-xxiii). Copenhagen: Aarhus University Press.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C. (2009). Language and mind: Let's get the issues straight! In S. D. Blum (Ed.), Making sense of language: Readings in culture and communication (pp. 95-104). Oxford: Oxford University Press.
  • Levinson, S. C. (2012). Interrogative intimations: On a possible social economics of interrogatives. In J. P. De Ruiter (Ed.), Questions: Formal, functional and interactional perspectives (pp. 11-32). New York: Cambridge University Press.
  • Levinson, S. C., & Brown, P. (2012). Put and Take in Yélî Dnye, the Papuan language of Rossel Island. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 273-296). Amsterdam: Benjamins.

    Abstract

    This paper describes the linguistic treatment of placement events in the Rossel Island (Papua New Guinea) language Yélî Dnye. Yélî Dnye is unusual in treating PUT and TAKE events symmetrically with a remarkable consistency. In what follows, we first provide a brief background for the language, then describe the six core PUT/TAKE verbs that were drawn upon by Yélî Dnye speakers to describe the great majority of the PUT/TAKE stimuli clips, along with some of their grammatical properties. In Section 5 we describe alternative verbs usable in particular circumstances and give an indication of the basis for variability in responses across speakers. Section 6 presents some reasons why the Yélî verb pattern for expressing PUT and TAKE events is of broad interest.
  • Levinson, S. C. (2012). Preface. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. xi-xv). Amsterdam: Benjamins.
  • Levinson, S. C., & Majid, A. (2009). Preface and priorities. In A. Majid (Ed.), Field manual volume 12 (pp. III). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C., & Majid, A. (2009). The role of language in mind. In S. Nolen-Hoeksema, B. Fredrickson, G. Loftus, & W. Wagenaar (Eds.), Atkinson and Hilgard's introduction to psychology (15th ed., pp. 352). London: Cengage learning.
  • Levinson, S. C., & Senft, G. (1994). Wie lösen Sprecher von Sprachen mit absoluten und relativen Systemen des räumlichen Verweisens nicht-sprachliche räumliche Aufgaben? In Jahrbuch der Max-Planck-Gesellschaft 1994 (pp. 295-299). München: Generalverwaltung der Max-Planck-Gesellschaft München.
  • Levshina, N. (2020). How tight is your language? A semantic typology based on Mutual Information. In K. Evang, L. Kallmeyer, R. Ehren, S. Petitjean, E. Seyffarth, & D. Seddah (Eds.), Proceedings of the 19th International Workshop on Treebanks and Linguistic Theories (pp. 70-78). Düsseldorf, Germany: Association for Computational Linguistics. doi:10.18653/v1/2020.tlt-1.7.

    Abstract

    Languages differ in the degree of semantic flexibility of their syntactic roles. For example, English and Indonesian are considered more flexible with regard to the semantics of subjects, whereas German and Japanese are less flexible. In Hawkins’ classification, more flexible languages are said to have a loose fit, and less flexible ones are those that have a tight fit. This classification has been based on manual inspection of example sentences. The present paper proposes a new, quantitative approach to deriving the measures of looseness and tightness from corpora. We use corpora of online news from the Leipzig Corpora Collection in thirty typologically and genealogically diverse languages and parse them syntactically with the help of the Universal Dependencies annotation software. Next, we compute Mutual Information scores for each language using the matrices of lexical lemmas and four syntactic dependencies (intransitive subjects, transitive subjects, objects and obliques). The new approach allows us not only to reproduce the results of previous investigations, but also to extend the typology to new languages. We also demonstrate that verb-final languages tend to have a tighter relationship between lexemes and syntactic roles, which helps language users to recognize thematic roles early during comprehension.

    Additional information

    full text via ACL website
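    The core measure in this abstract, Mutual Information between lexical lemmas and syntactic dependency relations, can be sketched in a few lines. The function and toy data below are illustrative assumptions only, not the paper's actual pipeline (which uses UD-parsed Leipzig news corpora):

    ```python
    from collections import Counter
    from math import log2

    def mutual_information(pairs):
        """Mutual Information (in bits) between two categorical variables,
        estimated from a list of (x, y) co-occurrence pairs."""
        n = len(pairs)
        joint = Counter(pairs)                 # joint counts of (x, y)
        px = Counter(x for x, _ in pairs)      # marginal counts of x
        py = Counter(y for _, y in pairs)      # marginal counts of y
        mi = 0.0
        for (x, y), c in joint.items():
            p_xy = c / n
            mi += p_xy * log2(p_xy / ((px[x] / n) * (py[y] / n)))
        return mi

    # Hypothetical (lemma, dependency-relation) pairs as might be extracted
    # from a UD-parsed corpus; higher MI means a tighter lemma-role fit.
    pairs = [("dog", "nsubj"), ("dog", "nsubj"), ("ball", "obj"),
             ("ball", "obj"), ("dog", "obj"), ("park", "obl")]
    print(round(mutual_information(pairs), 3))  # → 1.0
    ```
    
    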
  • Liszkowski, U. (2006). Infant pointing at twelve months: Communicative goals, motives, and social-cognitive abilities. In N. J. Enfield, & S. C. Levinson (Eds.), Roots of human sociality: culture, cognition and interaction (pp. 153-178). New York: Berg.
  • MacDonald, K., Räsänen, O., Casillas, M., & Warlaumont, A. S. (2020). Measuring prosodic predictability in children’s home language environments. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 695-701). Montreal, QC: Cognitive Science Society.

    Abstract

    Children learn language from the speech in their home environment. Recent work shows that more infant-directed speech (IDS) leads to stronger lexical development. But what makes IDS a particularly useful learning signal? Here, we expand on an attention-based account first proposed by Räsänen et al. (2018): that prosodic modifications make IDS less predictable, and thus more interesting. First, we reproduce the critical finding from Räsänen et al.: that lab-recorded IDS pitch is less predictable compared to adult-directed speech (ADS). Next, we show that this result generalizes to the home language environment, finding that IDS in daylong recordings is also less predictable than ADS but that this pattern is much less robust than for IDS recorded in the lab. These results link experimental work on attention and prosodic modifications of IDS to real-world language-learning environments, highlighting some challenges of scaling up analyses of IDS to larger datasets that better capture children’s actual input.
  • Yu, J., Mailhammer, R., & Cutler, A. (2020). Vocabulary structure affects word recognition: Evidence from German listeners. In N. Minematsu, M. Kondo, T. Arai, & R. Hayashi (Eds.), Proceedings of Speech Prosody 2020 (pp. 474-478). Tokyo: ISCA. doi:10.21437/SpeechProsody.2020-97.

    Abstract

    Lexical stress is realised similarly in English, German, and Dutch. On a suprasegmental level, stressed syllables tend to be longer and more acoustically salient than unstressed syllables; segmentally, vowels in unstressed syllables are often reduced. The frequency of unreduced unstressed syllables (where only the suprasegmental cues indicate lack of stress), however, differs across the languages. The present studies test whether listener behaviour is affected by these vocabulary differences, by investigating German listeners’ use of suprasegmental cues to lexical stress in German and English word recognition. In a forced-choice identification task, German listeners correctly assigned single-syllable fragments (e.g., Kon-) to one of two words differing in stress (KONto, konZEPT). Thus, German listeners can exploit suprasegmental information for identifying words. German listeners also performed above chance in a similar task in English (with, e.g., DIver, diVERT), i.e., their sensitivity to these cues also transferred to a non-native language. An English listener group, in contrast, failed in the English fragment task. These findings mirror vocabulary patterns: German has more words with unreduced unstressed syllables than English does.
  • Majid, A., van Leeuwen, T., & Dingemanse, M. (2009). Synaesthesia: A cross-cultural pilot. In A. Majid (Ed.), Field manual volume 12 (pp. 8-13). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883570.

    Abstract

    Synaesthesia is a condition in which stimulation of one sensory modality (e.g. hearing) causes additional experiences in a second, unstimulated modality (e.g. seeing colours). The goal of this task is to explore the types (and incidence) of synaesthesia in different cultures. Two simple tests can ascertain the existence of synaesthesia in your community.

    Additional information

    2009_Synaesthesia_audio_files.zip
  • Majid, A. (2012). A guide to stimulus-based elicitation for semantic categories. In N. Thieberger (Ed.), The Oxford handbook of linguistic fieldwork (pp. 54-71). New York: Oxford University Press.
  • Majid, A. (2012). Taste in twenty cultures [Abstract]. Abstracts from the XXIth Congress of European Chemoreception Research Organization, ECRO-2011. Publ. in Chemical Senses, 37(3), A10.

    Abstract

    Scholars disagree about the extent to which language can tell us about conceptualisation of the world. Some believe that language is a direct window onto concepts: having a word “bird”, “table” or “sour” presupposes the corresponding underlying concept, BIRD, TABLE, SOUR. Others disagree. Words are thought to be uninformative, or worse, misleading about our underlying conceptual representations; after all, our mental worlds are full of ideas that we struggle to express in language. How could this be so, argue sceptics, if language were a direct window on our inner life? In this presentation, I consider what language can tell us about the conceptualisation of taste. By considering linguistic data from twenty unrelated cultures – varying in subsistence mode (hunter-gatherer to industrial), ecological zone (rainforest jungle to desert), dwelling type (rural and urban), and so forth – I argue that any single language is, indeed, impoverished in what it can reveal about taste. But recurrent lexicalisation patterns across languages can provide valuable insights about human taste experience. Moreover, language patterning is part of the data that a good theory of taste perception has to be answerable for. Taste researchers, therefore, cannot ignore the crosslinguistic facts.
  • Malaisé, V., Aroyo, L., Brugman, H., Gazendam, L., De Jong, A., Negru, C., & Schreiber, G. (2006). Evaluating a thesaurus browser for an audio-visual archive. In S. Staab, & V. Svatek (Eds.), Managing knowledge in a world of networks (pp. 272-286). Berlin: Springer.
  • Marti, M., Alhama, R. G., & Recasens, M. (2012). Los avances tecnológicos y la ciencia del lenguaje. In T. Jiménez Juliá, B. López Meirama, V. Vázquez Rozas, & A. Veiga (Eds.), Cum corde et in nova grammatica. Estudios ofrecidos a Guillermo Rojo (pp. 543-553). Santiago de Compostela: Universidade de Santiago de Compostela.

    Abstract

    Modern science arises from the conjunction of theoretical postulates and the development of a technological infrastructure that makes it possible to observe facts adequately, carry out experiments, and verify hypotheses. Since Galileo, science and technology have advanced together. In the Western world, science has evolved from purely speculative proposals (based on aprioristic postulates) to the use of experimental and statistical methods to better explain our observations. Technology goes hand in hand with science, giving the researcher adequate access to the facts to be explained. Thus Galileo, in order to observe the celestial bodies, improved the available optical instruments, which allowed him a more precise approach to his object of study and, consequently, a more solid foundation for his theoretical proposal. Similarly, digital technology has now made possible the massive extraction of data and their statistical analysis to verify initial hypotheses: linguistics could not take the step from pure speculation to the statistical analysis of facts until digital technologies appeared.
  • McDonough, J., Lehnert-LeHouillier, H., & Bardhan, N. P. (2009). The perception of nasalized vowels in American English: An investigation of on-line use of vowel nasalization in lexical access. In Nasal 2009.

    Abstract

    The goal of the presented study was to investigate the use of coarticulatory vowel nasalization in lexical access by native speakers of American English. In particular, we compare the use of coarticulatory place of articulation cues to that of coarticulatory vowel nasalization. Previous research on lexical access has shown that listeners use cues to the place of articulation of a postvocalic stop in the preceding vowel. However, vowel nasalization as a cue to an upcoming nasal consonant has been argued to be a more complex phenomenon. In order to establish whether coarticulatory vowel nasalization aids the process of lexical access in the same way as place of articulation cues do, we conducted two perception experiments: an off-line 2AFC discrimination task and an on-line eyetracking study using the visual world paradigm. The results of our study suggest that listeners are indeed able to use vowel nasalization in similar ways to place of articulation information, and that both types of cues aid lexical access.
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Dilley, L. C. (2020). Prosody and spoken-word recognition. In C. Gussenhoven, & A. Chen (Eds.), The Oxford handbook of language prosody (pp. 509-521). Oxford: Oxford University Press.

    Abstract

    This chapter outlines a Bayesian model of spoken-word recognition and reviews how prosody is part of that model. The review focuses on the information that assists the listener in recognizing the prosodic structure of an utterance and on how spoken-word recognition is also constrained by prior knowledge about prosodic structure. Recognition is argued to be a process of perceptual inference that ensures that listening is robust to variability in the speech signal. In essence, the listener makes inferences about the segmental content of each utterance, about its prosodic structure (simultaneously at different levels in the prosodic hierarchy), and about the words it contains, and uses these inferences to form an utterance interpretation. Four characteristics of the proposed prosody-enriched recognition model are discussed: parallel uptake of different information types, high contextual dependency, adaptive processing, and phonological abstraction. The next steps that should be taken to develop the model are also discussed.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Melinger, A., Schulte im Walde, S., & Weber, A. (2006). Characterizing response types and revealing noun ambiguity in German association norms. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics. Trento: Association for Computational Linguistics.

    Abstract

    This paper presents an analysis of semantic association norms for German nouns. In contrast to prior studies, we not only collected associations elicited by written representations of target objects but also by their pictorial representations. In a first analysis, we identified systematic differences in the type and distribution of associate responses for the two presentation forms. In a second analysis, we applied a soft cluster analysis to the collected target-response pairs. We subsequently used the clustering to predict noun ambiguity and to discriminate senses in our target nouns.
  • Mengede, J., Devanna, P., Hörpel, S. G., Firzla, U., & Vernes, S. C. (2020). Studying the genetic bases of vocal learning in bats. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 280-282). Nijmegen: The Evolution of Language Conferences.
  • Merolla, D., & Ameka, F. K. (2012). Reflections on video fieldwork: The making of Verba Africana IV on the Ewe Hogbetsotso Festival. In D. Merolla, J. Jansen, & K. Nait-Zerrad (Eds.), Multimedia research and documentation of oral genres in Africa - The step forward (pp. 123-132). Münster: Lit.
  • Misersky, J., & Redl, T. (2020). A psycholinguistic view on stereotypical and grammatical gender: The effects and remedies. In C. D. J. Bulten, C. F. Perquin-Deelen, M. H. Sinninghe Damsté, & K. J. Bakker (Eds.), Diversiteit. Een multidisciplinaire terreinverkenning (pp. 237-255). Deventer: Wolters Kluwer.
  • Mitterer, H., & Cutler, A. (2006). Speech perception. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 11) (pp. 770-782). Amsterdam: Elsevier.

    Abstract

    The goal of speech perception is understanding a speaker's message. To achieve this, listeners must recognize the words that comprise a spoken utterance. This in turn implies distinguishing these words from other minimally different words (e.g., word from bird, etc.), and this involves making phonemic distinctions. The article summarizes research on the perception of phonemic distinctions, on how listeners cope with the continuity and variability of speech signals, and on how phonemic information is mapped onto the representations of words. Particular attention is paid to theories of speech perception and word recognition.
  • Mudd, K., Lutzenberger, H., De Vos, C., Fikkert, P., Crasborn, O., & De Boer, B. (2020). How does social structure shape language variation? A case study of the Kata Kolok lexicon. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 302-304). Nijmegen: The Evolution of Language Conferences.
  • Musgrave, S., & Cutfield, S. (2009). Language documentation and an Australian National Corpus. In M. Haugh, K. Burridge, J. Mulder, & P. Peters (Eds.), Selected proceedings of the 2008 HCSNet Workshop on Designing the Australian National Corpus: Mustering Languages (pp. 10-18). Somerville: Cascadilla Proceedings Project.

    Abstract

    Corpus linguistics and language documentation are usually considered separate subdisciplines within linguistics, having developed from different traditions and often operating on different scales, but the authors will suggest that there are commonalities to the two: both aim to represent language use in a community, and both are concerned with managing digital data. The authors propose that the development of the Australian National Corpus (AusNC) be guided by the experience of language documentation in the management of multimodal digital data and its annotation, and in ethical issues pertaining to making the data accessible. This would allow an AusNC that is distributed, multimodal, and multilingual, with holdings of text, audio, and video data distributed across multiple institutions; and including Indigenous, sign, and migrant community languages. An audit of language material held by Australian institutions and individuals is necessary to gauge the diversity and volume of possible content, and to inform common technical standards.
  • Namjoshi, J., Tremblay, A., Broersma, M., Kim, S., & Cho, T. (2012). Influence of recent linguistic exposure on the segmentation of an unfamiliar language [Abstract]. Program abstracts from the 164th Meeting of the Acoustical Society of America published in the Journal of the Acoustical Society of America, 132(3), 1968.

    Abstract

    Studies have shown that listeners segmenting unfamiliar languages transfer native-language (L1) segmentation cues. These studies, however, conflated L1 and recent linguistic exposure. The present study investigates the relative influences of L1 and recent linguistic exposure on the use of prosodic cues for segmenting an artificial language (AL). Participants were L1-French listeners, high-proficiency L2-French L1-English listeners, and L1-English listeners without functional knowledge of French. The prosodic cue assessed was F0 rise, which is word-final in French, but in English tends to be word-initial. 30 participants heard a 20-minute AL speech stream with word-final boundaries marked by F0 rise, and decided in a subsequent listening task which of two words (without word-final F0 rise) had been heard in the speech stream. The analyses revealed a marginally significant effect of L1 (all listeners) and, importantly, a significant effect of recent linguistic exposure (L1-French and L2-French listeners): accuracy increased with decreasing time in the US since the listeners’ last significant (3+ months) stay in a French-speaking environment. Interestingly, no effect of L2 proficiency was found (L2-French listeners).
  • Narasimhan, B., & Brown, P. (2009). Getting the inside story: Learning to talk about containment in Tzeltal and Hindi. In V. C. Mueller-Gathercole (Ed.), Routes to language: Studies in honor of Melissa Bowerman (pp. 97-132). New York: Psychology Press.

    Abstract

    The present study examines young children's uses of semantically specific and general relational containment terms (e.g. in, enter) in Hindi and Tzeltal, and the extent to which their usage patterns are influenced by input frequency. We hypothesize that if children have a preference for relational terms that are semantically specific, this will be reflected in early acquisition of more semantically specific expressions and underextension of semantically general ones, regardless of the distributional patterns of use of these terms in the input. Our findings however show a strong role for input frequency in guiding children's patterns of use of containment terms in the two languages. Yet language-specific lexicalization patterns play a role as well, since object-specific containment verbs are used as early as the semantically general 'enter' verb by children acquiring Tzeltal.
  • Narasimhan, B., Kopecka, A., Bowerman, M., Gullberg, M., & Majid, A. (2012). Putting and taking events: A crosslinguistic perspective. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 1-18). Amsterdam: Benjamins.
  • Narasimhan, B. (2012). Putting and Taking in Tamil and Hindi. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 201-230). Amsterdam: Benjamins.

    Abstract

    Many languages have general or “light” verbs used by speakers to describe a wide range of situations owing to their relatively schematic meanings, e.g., the English verb do that can be used to describe many different kinds of actions, or the verb put that labels a range of types of placement of objects at locations. Such semantically bleached verbs often become grammaticalized and used to encode an extended (set of) meaning(s), e.g., Tamil veyyii ‘put/place’ is used to encode causative meaning in periphrastic causatives (e.g., okkara veyyii ‘make sit’, nikka veyyii ‘make stand’). But do general verbs in different languages have the same kinds of (schematic) meanings and extensional ranges? Or do they reveal different, perhaps even cross-cutting, ways of structuring the same semantic domain in different languages? These questions require detailed crosslinguistic investigation using comparable methods of eliciting data. The present study is a first step in this direction, and focuses on the use of general verbs to describe events of placement and removal in two South Asian languages, Hindi and Tamil.
  • Nas, G., Kempen, G., & Hudson, P. (1984). De rol van spelling en klank bij woordherkenning tijdens het lezen. In A. Thomassen, L. Noordman, & P. Elling (Eds.), Het leesproces. Lisse: Swets & Zeitlinger.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Nordhoff, S., & Hammarström, H. (2012). Glottolog/Langdoc: Increasing the visibility of grey literature for low-density languages. In N. Calzolari (Ed.), Proceedings of the 8th International Conference on Language Resources and Evaluation [LREC 2012], May 23-25, 2012 (pp. 3289-3294). [Paris]: ELRA.

    Abstract

    Language resources can be divided into structural resources treating phonology, morphosyntax, semantics etc. and resources treating the social, demographic, ethnic, political context. A third type are meta-resources, like bibliographies, which provide access to the resources of the first two kinds. This poster will present the Glottolog/Langdoc project, a comprehensive bibliography providing web access to 180k bibliographical records to (mainly) low visibility resources from low-density languages. The resources are annotated for macro-area, content language, and document type and are available in XHTML and RDF.
  • Norris, D., McQueen, J. M., & Cutler, A. (1994). Competition and segmentation in spoken word recognition. In Proceedings of the Third International Conference on Spoken Language Processing: Vol. 1 (pp. 401-404). Yokohama: PACIFICO.

    Abstract

    This paper describes recent experimental evidence which shows that models of spoken word recognition must incorporate both inhibition between competing lexical candidates and a sensitivity to metrical cues to lexical segmentation. A new version of the Shortlist [1][2] model incorporating the Metrical Segmentation Strategy [3] provides a detailed simulation of the data.
  • Nouaouri, N. (2012). The semantics of placement and removal predicates in Moroccan Arabic. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 99-122). Amsterdam: Benjamins.

    Abstract

    This article explores the expression of placement and removal events in Moroccan Arabic, particularly the semantic features of ‘putting’ and ‘taking’ verbs, classified in accordance with their combination with Goal and/or Source NPs. Moroccan Arabic verbs encode a variety of components of placement and removal events, including containment, attachment, features of the figure, and trajectory. Furthermore, accidental events are distinguished from deliberate events either by the inherent semantics of predicates or by syntactic means. The postures of the Figures, in spite of some predicates distinguishing them, are typically not specified as they are in other languages, such as Dutch. Although Ground locations are frequently mentioned in both source-oriented and goal-oriented clauses, they are used more often in goal-oriented clauses.
  • O'Connor, L. (2006). Sobre los predicados complejos en el Chontal de la baja. In A. Oseguera (Ed.), Historia y etnografía entre los Chontales de Oaxaca (pp. 119-161). Oaxaca: Instituto Nacional de Antroplogía e Historia.
  • O’Connor, L. (2012). Take it up, down, and away: Encoding placement and removal in Lowland Chontal. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 297-326). Amsterdam: Benjamins.

    Abstract

    This paper offers a structural and semantic analysis of expressions of caused motion in Lowland Chontal of Oaxaca, an indigenous language of southern Mexico. The data were collected using a video stimulus designed to elicit a wide range of caused motion event descriptions. The most frequent event types in the corpus depict caused motion to and from relations of support and containment, fundamental notions in the description of spatial relations between two entities and critical semantic components of the linguistic encoding of caused motion in this language. Formal features of verbal construction type and argument realization are examined by sorting event descriptions into semantic types of placement and removal, to and from support and to and from containment. Together with typological factors that shape the distribution of spatial semantics and referent expression, separate treatments of support and containment relations serve to clarify notable asymmetries in patterns of predicate type and argument realization.
  • Offenga, F., Broeder, D., Wittenburg, P., Ducret, J., & Romary, L. (2006). Metadata profile in the ISO data category registry. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 1866-1869).
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A. (2020). From hands to brains: How does human body talk, think and interact in face-to-face language use? In K. Truong, D. Heylen, & M. Czerwinski (Eds.), ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 1-2). New York, NY, USA: Association for Computing Machinery. doi:10.1145/3382507.3419442.
  • Ozyurek, A. (2012). Gesture. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign language: An international handbook (pp. 626-646). Berlin: Mouton.

    Abstract

    Gestures are meaningful movements of the body, the hands, and the face during communication, which accompany the production of both spoken and signed utterances. Recent research has shown that gestures are an integral part of language and that they contribute semantic, syntactic, and pragmatic information to the linguistic utterance. Furthermore, they reveal internal representations of the language user during communication in ways that might not be encoded in the verbal part of the utterance. Firstly, this chapter summarizes research on the role of gesture in spoken languages. Subsequently, it gives an overview of how gestural components might manifest themselves in sign languages, that is, in a situation in which both gesture and sign are expressed by the same articulators. Current studies are discussed that address the question of whether gestural components are the same or different in the two language modalities from a semiotic as well as from a cognitive and processing viewpoint. Understanding the role of gesture in both sign and spoken language contributes to our knowledge of the human language faculty as a multimodal communication system.
  • Ozyurek, A. (1994). How children talk about a conversation. In K. Beals, J. Denton, R. Knippen, L. Melnar, H. Suzuki, & E. Zeinfeld (Eds.), Papers from the Thirtieth Regional Meeting of the Chicago Linguistic Society: Main Session (pp. 309-319). Chicago, Ill: Chicago Linguistic Society.
  • Ozyurek, A. (1994). How children talk about conversations: Development of roles and voices. In E. V. Clark (Ed.), Proceedings of the Twenty-Sixth Annual Child Language Research Forum (pp. 197-206). Stanford: CSLI Publications.
  • Pacheco, A., Araújo, S., Faísca, L., Petersson, K. M., & Reis, A. (2009). Profiling dyslexic children: Phonology and visual naming skills. In Abstracts presented at the International Neuropsychological Society, Finnish Neuropsychological Society, Joint Mid-Year Meeting July 29-August 1, 2009. Helsinki, Finland & Tallinn, Estonia (p. 40). Retrieved from http://www.neuropsykologia.fi/ins2009/INS_MY09_Abstract.pdf.
  • Papafragou, A., & Ozturk, O. (2006). The acquisition of epistemic modality. In A. Botinis (Ed.), Proceedings of ITRW on Experimental Linguistics in ExLing-2006 (pp. 201-204). ISCA Archive.

    Abstract

    In this paper we try to contribute to the body of knowledge about the acquisition of English epistemic modal verbs (e.g. Mary may/has to be at school). Semantically, these verbs encode possibility or necessity with respect to available evidence. Pragmatically, the use of epistemic modals often gives rise to scalar conversational inferences (Mary may be at school -> Mary doesn’t have to be at school). The acquisition of epistemic modals is challenging for children on both these levels. In this paper, we present findings from two studies which were conducted with 5-year-old children and adults. Our findings, unlike previous work, show that 5-yr-olds have mastered epistemic modal semantics, including the notions of necessity and possibility. However, they are still in the process of acquiring epistemic modal pragmatics.
  • Paplu, S. H., Mishra, C., & Berns, K. (2020). Pseudo-randomization in automating robot behaviour during human-robot interaction. In 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) (pp. 1-6). Institute of Electrical and Electronics Engineers. doi:10.1109/ICDL-EpiRob48136.2020.9278115.

    Abstract

    Automating robot behavior in a specific situation is an active area of research. There are several approaches available in the robotics literature to cater for the automatic behavior of a robot. However, when it comes to humanoids or human-robot interaction in general, the area has been less explored. In this paper, a pseudo-randomization approach is introduced to automate the gestures and facial expressions of an interactive humanoid robot called ROBIN based on its mental state. A significant number of gestures and facial expressions have been implemented to allow the robot more options to perform a relevant action or reaction based on visual stimuli. The robot displays noticeable differences in behaviour for the same stimuli perceived from an interaction partner. This slight autonomous behavioural change in the robot clearly shows a notion of automation in behaviour. The results from experimental scenarios and human-centered evaluation of the system help validate the approach.

  • Patterson, R. D., & Cutler, A. (1989). Auditory preprocessing and recognition of speech. In A. Baddeley, & N. Bernsen (Eds.), Research directions in cognitive science: A european perspective: Vol. 1. Cognitive psychology (pp. 23-60). London: Erlbaum.
  • Peeters, D., Vanlangendonck, F., & Willems, R. M. (2012). Bestaat er een talenknobbel? Over taal in ons brein. In M. Boogaard, & M. Jansen (Eds.), Alles wat je altijd al had willen weten over taal: De taalcanon (pp. 41-43). Amsterdam: Meulenhoff.

    Abstract

    When someone is good at speaking several languages, we say that such a person has a “talenknobbel” (literally, a “language bump”). Everyone knows this is not meant literally: we do not recognize someone with a gift for languages by a large bump on their head. Yet in the past, people genuinely believed that a literal language bump could develop. A well-developed language faculty was thought to go hand in hand with the growth of the brain region responsible for it. This part of the brain could even become so large that it pressed against the skull from the inside, especially around the eyes. We now know better. But where in the brain, then, is language actually located?
  • Pereiro Estevan, Y., Wan, V., Scharenborg, O., & Gallardo Antolín, A. (2006). Segmentación de fonemas no supervisada basada en métodos kernel de máximo margen. In Proceedings of IV Jornadas en Tecnología del Habla.

    Abstract

    This article develops an automatic method for unsupervised phoneme segmentation. The method uses the maximum margin clustering algorithm [1] to perform phoneme segmentation on continuous speech without requiring a priori information to train the system.
  • Perniss, P. M. (2012). Use of sign space. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign Language: an International Handbook (pp. 412-431). Berlin: Mouton de Gruyter.

    Abstract

    This chapter focuses on the semantic and pragmatic uses of space. The questions addressed concern how sign space (i.e. the area of space in front of the signer’s body) is used for meaning construction, how locations in sign space are associated with discourse referents, and how signers choose to structure sign space for their communicative intents. The chapter gives an overview of linguistic analyses of the use of space, starting with the distinction between syntactic and topographic uses of space and the different types of signs that function to establish referent-location associations, and moving to analyses based on mental spaces and conceptual blending theories. Semantic-pragmatic conventions for organizing sign space are discussed, as well as spatial devices notable in the visual-spatial modality (particularly, classifier predicates and signing perspective), which influence and determine the way meaning is created in sign space. Finally, the special role of simultaneity in sign languages is discussed, focusing on the semantic and discourse-pragmatic functions of simultaneous constructions.
  • Petersen, J. H. (2012). How to put and take in Kalasha. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 349-366). Amsterdam: Benjamins.

    Abstract

    In Kalasha, an Indo-Aryan language spoken in Northwest Pakistan, the linguistic encoding of ‘put’ and ‘take’ events reveals a symmetry between lexical ‘put’ and ‘take’ verbs that implies ‘placement on’ and ‘removal from’ a supporting surface. As regards ‘placement in’ and ‘removal from’ an enclosure, the data reveal a lexical asymmetry as ‘take’ verbs display a larger degree of linguistic elaboration of the Figure-Ground relation and the type of caused motion than ‘put’ verbs. When considering syntactic patterns, more instances of asymmetry between these two event types show up. The analysis presented here supports the proposal that an asymmetry exists in the encoding of goals versus sources as suggested in Nam (2004) and Ikegami (1987), but it calls into question the statement put forward by Regier and Zheng (2007) that endpoints (goals) are more finely differentiated semantically than starting points (sources).
  • Petersson, K. M., & Reis, A. (2006). Characteristics of illiterate and literate cognitive processing: Implications of brain- behavior co-constructivism. In P. B. Baltes, P. Reuter-Lorenz, & F. Rösler (Eds.), Lifespan development and the brain: The perspective of biocultural co-constructivism (pp. 279-305). Cambridge: Cambridge University Press.

    Abstract

    Literacy and education represent essential aspects of contemporary society and subserve important aspects of socialization and cultural transmission. The study of illiterate subjects represents one approach to investigate the interactions between neurobiological and cultural factors in cognitive development, individual learning, and their influence on the functional organization of the brain. In this chapter we review some recent cognitive, neuroanatomic, and functional neuroimaging results indicating that formal education influences important aspects of the human brain. Taken together this provides strong support for the idea that the brain is modulated by literacy and formal education, which in turn change the brains capacity to interact with its environment, including the individual's contemporary culture. In other words, the individual is able to participate in, interact with, and actively contribute to the process of cultural transmission in new ways through acquired cognitive skills.
  • Petersson, K. M., Ingvar, M., & Reis, A. (2009). Language and literacy from a cognitive neuroscience perspective. In D. Olsen, & N. Torrance (Eds.), Cambridge handbook of literacy (pp. 152-181). Cambridge: Cambridge University Press.
  • Pluymaekers, M., Ernestus, M., Baayen, R. H., & Booij, G. (2006). The role of morphology in fine phonetic detail: The case of Dutch -igheid. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 53-54).
  • Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2006). Effects of word frequency on the acoustic durations of affixes. In Proceedings of Interspeech 2006 (pp. 953-956). Pittsburgh: ICSLP.

    Abstract

    This study investigates whether the acoustic durations of derivational affixes in Dutch are affected by the frequency of the word they occur in. In a word naming experiment, subjects were presented with a large number of words containing one of the affixes ge-, ver-, ont-, or -lijk. Their responses were recorded on DAT tapes, and the durations of the affixes were measured using Automatic Speech Recognition technology. To investigate whether frequency also affected durations when speech rate was high, the presentation rate of the stimuli was varied. The results show that a higher frequency of the word as a whole led to shorter acoustic realizations for all affixes. Furthermore, affixes became shorter as the presentation rate of the stimuli increased. There was no interaction between word frequency and presentation rate, suggesting that the frequency effect also applies in situations in which the speed of articulation is very high.
  • Poellmann, K., McQueen, J. M., & Mitterer, H. (2012). How talker-adaptation helps listeners recognize reduced word-forms [Abstract]. Program abstracts from the 164th Meeting of the Acoustical Society of America published in the Journal of the Acoustical Society of America, 132(3), 2053.

    Abstract

    Two eye-tracking experiments tested whether native listeners can adapt to reductions in casual Dutch speech. Listeners were exposed to segmental ([b] > [m]), syllabic (full-vowel-deletion), or no reductions. In a subsequent test phase, all three listener groups were tested on how efficiently they could recognize both types of reduced words. In the first experiment’s exposure phase, the (un)reduced target words were predictable. The segmental reductions were completely consistent (i.e., involved the same input sequences). Learning about them was found to be pattern-specific and generalized in the test phase to new reduced /b/-words. The syllabic reductions were not consistent (i.e., involved variable input sequences). Learning about them was weak and not pattern-specific. Experiment 2 examined effects of word repetition and predictability. The (un)reduced test words appeared in the exposure phase and were not predictable. There was no evidence of learning for the segmental reductions, probably because they were not predictable during exposure. But there was word-specific learning for the vowel-deleted words. The results suggest that learning about reductions is pattern-specific and generalizes to new words if the input is consistent and predictable. With variable input, there is more likely to be adaptation to a general speaking style and word-specific learning.
  • Poletiek, F. H., & Chater, N. (2006). Grammar induction profits from representative stimulus sampling. In R. Sun (Ed.), Proceedings of the 28th Annual Conference of the Cognitive Science Society (CogSci 2006) (pp. 1968-1973). Austin, TX, USA: Cognitive Science Society.
  • Poletiek, F. H. (2006). Natural sampling of stimuli in (artificial) grammar learning. In K. Fiedler, & P. Juslin (Eds.), Information sampling and adaptive cognition (pp. 440-455). Cambridge: Cambridge University Press.
  • Puccini, D., Hassemer, M., Salomo, D., & Liszkowski, U. (2012). The type of shared activity shapes caregiver and infant communication [Reprint]. In J.-M. Colletta, & M. Guidetti (Eds.), Gesture and multimodal development (pp. 157-174). Amsterdam: John Benjamins.

    Abstract

    For the beginning language learner, communicative input is not based on linguistic codes alone. This study investigated two extralinguistic factors which are important for infants’ language development: the type of ongoing shared activity and non-verbal, deictic gestures. The natural interactions of 39 caregivers and their 12-month-old infants were recorded in two semi-natural contexts: a free play situation based on action and manipulation of objects, and a situation based on regard of objects, broadly analogous to an exhibit. Results show that the type of shared activity structures both caregivers’ language usage and caregivers’ and infants’ gesture usage. Further, there is a specific pattern with regard to how caregivers integrate speech with particular deictic gesture types. The findings demonstrate a pervasive influence of shared activities on human communication, even before language has emerged. The type of shared activity and caregivers’ systematic integration of specific forms of deictic gestures with language provide infants with a multimodal scaffold for a usage-based acquisition of language.
  • Rakoczy, H., & Haun, D. B. M. (2012). Vor- und nichtsprachliche Kognition. In W. Schneider, & U. Lindenberger (Eds.), Entwicklungspsychologie. 7. vollständig überarbeitete Auflage (pp. 337-362). Weinheim: Beltz Verlag.
  • Ramus, F., & Fisher, S. E. (2009). Genetics of language. In M. S. Gazzaniga (Ed.), The cognitive neurosciences, 4th ed. (pp. 855-871). Cambridge, MA: MIT Press.

    Abstract

    It has long been hypothesised that the human faculty to acquire a language is in some way encoded in our genetic program. However, only recently has genetic evidence been available to begin to substantiate the presumed genetic basis of language. Here we review the first data from molecular genetic studies showing association between gene variants and language disorders (specific language impairment, speech sound disorder, developmental dyslexia), we discuss the biological function of these genes, and we further speculate on the more general question of how the human genome builds a brain that can learn a language.
  • Rapold, C. J., & Zaugg-Coretti, S. (2009). Exploring the periphery of the central Ethiopian Linguistic area: Data from Yemsa and Benchnon. In J. Crass, & R. Meyer (Eds.), Language contact and language change in Ethiopia (pp. 59-81). Köln: Köppe.
  • Rapold, C. J. (2012). The encoding of placement and removal events in ǂAkhoe Haiǁom. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 79-98). Amsterdam: Benjamins.

    Abstract

    This paper explores the semantics of placement and removal verbs in ǂAkhoe Haiǁom based on event descriptions elicited with a set of video stimuli. After a brief sketch of the morphosyntax of placement/removal constructions in ǂAkhoe Haiǁom, four situation types are identified semantically that cover both placement and removal events. The language exhibits a clear tendency to make more fine-grained semantic distinctions in placement verbs, as opposed to semantically more general removal verbs.
  • Rasenberg, M., Dingemanse, M., & Ozyurek, A. (2020). Lexical and gestural alignment in interaction and the emergence of novel shared symbols. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 356-358). Nijmegen: The Evolution of Language Conferences.
  • Ravignani, A., & Fitch, W. T. (2012). Sonification of experimental parameters as a new method for efficient coding of behavior. In A. Spink, F. Grieco, O. E. Krips, L. W. S. Loijens, L. P. P. J. Noldus, & P. H. Zimmerman (Eds.), Measuring Behavior 2012, 8th International Conference on Methods and Techniques in Behavioral Research (pp. 376-379).

    Abstract

    Cognitive research is often focused on experimental condition-driven reactions. Ethological studies frequently rely on the observation of naturally occurring specific behaviors. In both cases, subjects are filmed during the study, so that afterwards behaviors can be coded on video. Coding should typically be blind to experimental conditions, but often requires more information than that present on video. We introduce a method for blind coding of behavioral videos that takes care of both issues via three main innovations. First, of particular significance for playback studies, it allows creation of a “soundtrack” of the study, that is, a track composed of synthesized sounds representing different aspects of the experimental conditions, or other events, over time. Second, it facilitates coding behavior using this audio track, together with the possibly muted original video. This enables coding blindly to conditions as required, but not ignoring other relevant events. Third, our method makes use of freely available, multi-platform software, including scripts we developed.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2020). Network structure and the cultural evolution of linguistic structure: A group communication experiment. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 359-361). Nijmegen: The Evolution of Language Conferences.