Publications

  • Klein, W. (2009). Concepts of time. In W. Klein, & P. Li (Eds.), The expression of time (pp. 5-38). Berlin: Mouton de Gruyter.
  • Klein, W. (1998). Ein Blick zurück auf die Varietätengrammatik. In U. Ammon, K. Mattheier, & P. Nelde (Eds.), Sociolinguistica: Internationales Jahrbuch für europäische Soziolinguistik (pp. 22-38). Tübingen: Niemeyer.
  • Klein, W. (1998). Assertion and finiteness. In N. Dittmar, & Z. Penner (Eds.), Issues in the theory of language acquisition: Essays in honor of Jürgen Weissenborn (pp. 225-245). Bern: Peter Lang.
  • Klein, W. (2009). Finiteness, universal grammar, and the language faculty. In J. Guo, E. Lieven, N. Budwig, S. Ervin-Tripp, K. Nakamura, & S. Ozcaliskan (Eds.), Crosslinguistic approaches to the psychology of language: Research in the tradition of Dan Isaac Slobin (pp. 333-344). New York: Psychology Press.
  • Klein, W. (2009). How time is encoded. In W. Klein, & P. Li (Eds.), The expression of time (pp. 39-82). Berlin: Mouton de Gruyter.
  • Klein, W. (2013). L'effettivo declino e la crescita potenziale della lessicografia tedesca. In N. Maraschio, D. De Martiono, & G. Stanchina (Eds.), L'italiano dei vocabolari: Atti di La piazza delle lingue 2012 (pp. 11-20). Firenze: Accademia della Crusca.
  • Klein, W. (1989). La variation linguistique. In P. Cadiot, & N. Dittmar (Eds.), La sociolinguistique en pays de langue allemande (pp. 101-124). Lille: Presses Universitaires de Lille.
  • Klein, W., & Li, P. (2009). Introduction. In W. Klein, & P. Li (Eds.), The expression of time (pp. 1-4). Berlin: Mouton de Gruyter.
  • Klein, W. (2013). European Science Foundation (ESF) Project. In P. Robinson (Ed.), The Routledge encyclopedia of second language acquisition (pp. 220-221). New York: Routledge.
  • Klein, W., & Vater, H. (1998). The perfect in English and German. In L. Kulikov, & H. Vater (Eds.), Typology of verbal categories: Papers presented to Vladimir Nedjalkov on the occasion of his 70th birthday (pp. 215-235). Tübingen: Niemeyer.
  • Klein, W., & Perdue, C. (1989). The learner's problem of arranging words. In B. MacWhinney, & E. Bates (Eds.), The crosslinguistic study of sentence processing (pp. 292-327). Cambridge: Cambridge University Press.
  • Klein, W., & Musan, R. (2009). Werden. In W. Eins, & F. Schmoë (Eds.), Wie wir sprechen und schreiben: Festschrift für Helmut Glück zum 60. Geburtstag (pp. 45-61). Wiesbaden: Harrassowitz Verlag.
  • Klein, W., & Dimroth, C. (2009). Untutored second language acquisition. In W. C. Ritchie, & T. K. Bhatia (Eds.), The new handbook of second language acquisition (2nd rev. ed., pp. 503-522). Bingley: Emerald.
  • Klein, W. (2013). Von Reichtum und Armut des deutschen Wortschatzes. In Deutsche Akademie für Sprache und Dichtung, & Union der deutschen Akademien der Wissenschaften (Eds.), Reichtum und Armut der deutschen Sprache (pp. 15-55). Boston: de Gruyter.
  • Koch, X. (2018). Age and hearing loss effects on speech processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Koenig, A., Ringersma, J., & Trilsbeek, P. (2009). The Language Archiving Technology domain. In Z. Vetulani (Ed.), Human Language Technologies as a Challenge for Computer Science and Linguistics (pp. 295-299).

    Abstract

    The Max Planck Institute for Psycholinguistics (MPI) manages an archive of linguistic research data with a current size of almost 20 Terabytes. Apart from in-house researchers, other projects also store their data in the archive, most notably the Documentation of Endangered Languages (DoBeS) projects. The archive is available online and can be accessed by anybody with Internet access. To be able to manage this large amount of data, the MPI's technical group has developed a software suite called Language Archiving Technology (LAT) that on the one hand helps researchers and archive managers to manage the data and on the other hand helps users to enrich their primary data with additional layers. All the MPI software is Java-based and developed according to open source principles (GNU, 2007). All three major operating systems (Windows, Linux, MacOS) are supported and the software works similarly on all of them. As the archive is online, many of the tools, especially the ones for accessing the data, are browser based. Some of these browser-based tools make use of Adobe Flex to create nice-looking GUIs. The LAT suite is a complete set of management and enrichment tools, and given the interaction between the tools, the result is a complete LAT software domain. Over the last 10 years, this domain has proven its functionality and use, and is being deployed to servers in other institutions. This deployment is an important step in getting the archived resources back to the members of the speech communities whose languages are documented. In this paper we give an overview of the tools of the LAT suite and describe their functionality and role in the integrated process of archiving, management and enrichment of linguistic data.
  • Kolipakam, V. (2018). A holistic approach to understanding pre-history. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Kopecka, A. (2009). Continuity and change in the representation of motion events in French. In J. Guo, E. Lieven, N. Budwig, S. Ervin-Tripp, K. Nakamura, & S. Özçaliskan (Eds.), Crosslinguistic approaches to the psychology of language: Research in the tradition of Dan Isaac Slobin (pp. 415-426). New York: Psychology Press.
  • De Kovel, C. G. F., & Fisher, S. E. (2018). Molecular genetic methods. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 330-353). Hoboken: Wiley.
  • Kristoffersen, J. H., Troelsgard, T., & Zwitserlood, I. (2013). Issues in sign language lexicography. In H. Jackson (Ed.), The Bloomsbury companion to lexicography (pp. 259-283). London: Bloomsbury.
  • Kuijpers, C. T., Coolen, R., Houston, D., & Cutler, A. (1998). Using the head-turning technique to explore cross-linguistic performance differences. In C. Rovee-Collier, L. Lipsitt, & H. Hayne (Eds.), Advances in infancy research: Vol. 12 (pp. 205-220). Stamford: Ablex.
  • Kung, C. (2018). Speech comprehension in a tone language: The role of lexical tone, context, and intonation in Cantonese-Chinese. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Kuzla, C. (2009). Prosodic structure in speech production and perception. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ladd, D. R., & Dediu, D. (2013). Genes and linguistic tone. In H. Pashler (Ed.), Encyclopedia of the mind (pp. 372-373). London: Sage Publications.

    Abstract

    It is usually assumed that the language spoken by a human community is independent of the community's genetic makeup, an assumption supported by an overwhelming amount of evidence. However, the possibility that language is influenced by its speakers' genes cannot be ruled out a priori, and a recently discovered correlation between the geographic distribution of tone languages and two human genes seems to point to a genetically influenced bias affecting language. This entry describes this specific correlation and highlights its major implications. Voice pitch has a variety of communicative functions. Some of these are probably universal, such as conveying information about the speaker's sex, age, and emotional state. In many languages, including the European languages, voice pitch also conveys certain sentence-level meanings such as signaling that an utterance is a question or an exclamation; these uses of pitch are known as intonation. Some languages, however, known as tone languages, ...
  • Lai, V. T., & Frajzyngier, Z. (2009). Change of functions of the first person pronouns in Chinese. In M. Dufresne, M. Dupuis, & E. Vocaj (Eds.), Historical Linguistics 2007: Selected papers from the 18th International Conference on Historical Linguistics Montreal, 6-11 August 2007 (pp. 223-232). Amsterdam: John Benjamins.

  • Lattenkamp, E. Z., Vernes, S. C., & Wiegrebe, L. (2018). Mammalian models for the study of vocal learning: A new paradigm in bats. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 235-237). Toruń, Poland: NCU Press. doi:10.12775/3991-1.056.
  • Lausberg, H., & Sloetjes, H. (2013). NEUROGES in combination with the annotation tool ELAN. In H. Lausberg (Ed.), Understanding body movement: A guide to empirical research on nonverbal behaviour with an introduction to the NEUROGES coding system (pp. 199-200). Frankfurt a/M: Lang.
  • Lausberg, H., & Sloetjes, H. (2009). NGCS/ELAN - Coding movement behaviour in psychotherapy [Meeting abstract]. PPmP - Psychotherapie · Psychosomatik · Medizinische Psychologie, 59: A113, 103.

    Abstract

    Individual and interactive movement behaviour (non-verbal behaviour / communication) specifically reflects implicit processes in psychotherapy [1,4,11]. However, thus far, the registration of movement behaviour has been a methodological challenge. We will present a coding system combined with an annotation tool for the analysis of movement behaviour during psychotherapy interviews [9]. The NGCS coding system makes it possible to classify body movements based on their kinetic features alone [5,7]. The theoretical assumption behind the NGCS is that its main kinetic and functional movement categories are differentially associated with specific psychological functions and thus have different neurobiological correlates [5-8]. ELAN is a multimodal annotation tool for digital video media [2,3,12]. The NGCS/ELAN template makes it possible to link any movie to the same coding system and to have different raters independently work on the same file. The potential of movement behaviour analysis as an objective tool for psychotherapy research and for supervision in psychosomatic practice is discussed by giving examples of NGCS/ELAN analyses of psychotherapy sessions. While the quality of kinetic turn-taking and the therapist's (implicit) adoption of the patient's movements may predict therapy outcome, changes in the patient's movement behaviour pattern may indicate changes in cognitive concepts and emotional states and thus may help to identify therapeutically relevant processes [10].
  • Lauscher, A., Eckert, K., Galke, L., Scherp, A., Rizvi, S. T. R., Ahmed, S., Dengel, A., Zumstein, P., & Klein, A. (2018). Linked open citation database: Enabling libraries to contribute to an open and interconnected citation graph. In J. Chen, M. A. Gonçalves, J. M. Allen, E. A. Fox, M.-Y. Kan, & V. Petras (Eds.), JCDL '18: Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries (pp. 109-118). New York: ACM. doi:10.1145/3197026.3197050.

    Abstract

    Citations play a crucial role in the scientific discourse, in information retrieval, and in bibliometrics. Many initiatives are currently promoting the idea of having free and open citation data. Creation of citation data, however, is not part of the cataloging workflow in libraries nowadays.
    In this paper, we present our project Linked Open Citation Database, in which we design distributed processes and a system infrastructure based on linked data technology. The goal is to show that efficiently cataloging citations in libraries using a semi-automatic approach is possible. We specifically describe the current state of the workflow and its implementation. We show that we could significantly improve the automatic reference extraction that is crucial for the subsequent data curation. We further give insights on the curation and linking process and provide evaluation results that not only direct the further development of the project, but also allow us to discuss its overall feasibility.
  • Lefever, E., Hendrickx, I., Croijmans, I., Van den Bosch, A., & Majid, A. (2018). Discovering the language of wine reviews: A text mining account. In N. Calzolari, K. Choukri, C. Cieri, T. Declerck, S. Goggi, K. Hasida, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, S. Piperidis, & T. Tokunaga (Eds.), Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018) (pp. 3297-3302). Paris: LREC.

    Abstract

    It is widely held that smells and flavors are impossible to put into words. In this paper we test this claim by seeking predictive patterns in wine reviews, which ostensibly aim to provide guides to perceptual content. Wine reviews have previously been critiqued as random and meaningless. We collected an English corpus of wine reviews with their structured metadata, and applied machine learning techniques to automatically predict the wine's color, grape variety, and country of origin. To train the three supervised classifiers, three different information sources were incorporated: lexical bag-of-words features, domain-specific terminology features, and semantic word embedding features. In addition, using regression analysis we investigated basic review properties, i.e., review length, average word length, and their relationship to the scalar values of price and review score. Our results show that wine experts do share a common vocabulary to describe wines and they use this in a consistent way, which makes it possible to automatically predict wine characteristics based on the review text alone. This means that odors and flavors may be more expressible in language than typically acknowledged.
  • Lenkiewicz, A., & Drude, S. (2013). Automatic annotation of linguistic 2D and Kinect recordings with the Media Query Language for Elan. In Proceedings of Digital Humanities 2013 (pp. 276-278).

    Abstract

    Research in body language using gesture recognition and speech analysis has gained much attention in recent times, influencing disciplines related to image and speech processing.

    This study aims to design the Media Query Language (MQL) (Lenkiewicz, et al. 2012) combined with the Linguistic Media Query Interface (LMQI) for Elan (Wittenburg, et al. 2006). The system, integrated with new achievements in audio-video recognition, will allow querying media files with predefined gesture phases (or motion primitives) and speech characteristics, as well as combinations of both. For the purpose of this work, the predefined motions and speech characteristics are called patterns for atomic elements and actions for a sequence of patterns. The main assumption is that a user-customized library of patterns and actions and automated media annotation with LMQI will reduce annotation time, hence decreasing the costs of creating annotated corpora. An increase in the amount of annotated data should benefit the speed and scope of research in disciplines in which human multimodal interaction is a subject of interest and where annotated corpora are required.
  • Lenkiewicz, P., Pereira, M., Freire, M. M., & Fernandes, J. (2009). A new 3D image segmentation method for parallel architectures. In Proceedings of the 2009 IEEE International Conference on Multimedia and Expo [ICME 2009] June 28 – July 3, 2009, New York (pp. 1813-1816).

    Abstract

    This paper presents a novel model for 3D image segmentation and reconstruction. It has been designed with the aim of being implemented over a computer cluster or a multi-core platform. The required features include nearly absolute independence between the processes participating in the segmentation task and an amount of work that is as equal as possible for all participants. As a result, it avoids many drawbacks often encountered when parallelizing an algorithm that was constructed to operate in a sequential manner. Furthermore, the proposed algorithm based on the new segmentation model is efficient and shows very good, nearly linear performance growth along with the growing number of processing units.
  • Lenkiewicz, P., Pereira, M., Freire, M., & Fernandes, J. (2009). The dynamic topology changes model for unsupervised image segmentation. In Proceedings of the 11th IEEE International Workshop on Multimedia Signal Processing (MMSP'09) (pp. 1-5).

    Abstract

    Deformable models are a popular family of image segmentation techniques, which has been gaining significant attention over the last two decades, serving both real-world applications and as a basis for research work. One of the features that deformable models offer, and one considered much desired, is the ability to change their topology during the segmentation process. Using this characteristic it is possible to segment objects with discontinuities in their bodies or to detect an undefined number of objects in the scene. In this paper we present our model for handling topology changes in image segmentation methods based on the Active Volumes solution. The model is capable of performing changes in the structure of objects while the segmentation progresses, which makes it efficient and suitable for implementation over powerful execution environments, such as multi-core architectures or computer clusters.
  • Lenkiewicz, P., Pereira, M., Freire, M., & Fernandes, J. (2009). The whole mesh Deformation Model for 2D and 3D image segmentation. In Proceedings of the 2009 IEEE International Conference on Image Processing (ICIP 2009) (pp. 4045-4048).

    Abstract

    In this paper we present a novel approach to image segmentation using Active Nets and Active Volumes. These solutions are based on Deformable Models, with a slight difference in the method for describing the shapes of interest: instead of using a contour or a surface, they represent the segmented objects with a mesh structure, which makes it possible to describe not only the surface of the objects but also to model their interiors. This is obtained by dividing the nodes of the mesh into two categories, namely internal and external ones, which are responsible for two different tasks. In our new approach we propose to remove this separation and use only one type of node. Under this assumption we manage to significantly shorten the segmentation time while maintaining its quality.
  • Levelt, W. J. M. (1989). De connectionistische mode: Symbolische en subsymbolische modellen van het menselijk gedrag. In C. M. Brown, P. Hagoort, & T. Meijering (Eds.), Vensters op de geest: Cognitie op het snijvlak van filosofie en psychologie (pp. 202-219). Utrecht: Stichting Grafiet.
  • Levelt, W. J. M. (1984). Geesteswetenschappelijke theorie als kompas voor de gangbare mening. In S. Dresden, & D. Van de Kaa (Eds.), Wetenschap ten goede en ten kwade (pp. 42-52). Amsterdam: North Holland.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress Acoustics (pp. 55-55).
  • Levelt, W. J. M. (1984). Some perceptual limitations on talking about space. In A. J. Van Doorn, W. A. Van de Grind, & J. J. Koenderink (Eds.), Limits in perception (pp. 323-358). Utrecht: VNU Science Press.
  • Levelt, W. J. M. (1984). Spontaneous self-repairs in speech: Processes and representations. In M. P. R. Van den Broecke, & A. Cohen (Eds.), Proceedings of the 10th International Congress of Phonetic Sciences (pp. 105-117). Dordrecht: Foris.
  • Levelt, W. J. M. (1989). Working models of perception: Five general issues. In B. A. Elsendoorn, & H. Bouma (Eds.), Working models of perception (pp. 489-503). London: Academic Press.
  • Levinson, S. C. (2013). Action formation and ascription. In T. Stivers, & J. Sidnell (Eds.), The handbook of conversation analysis (pp. 103-130). Malden, MA: Wiley-Blackwell. doi:10.1002/9781118325001.ch6.

    Abstract

    Since the core matrix for language use is interaction, the main job of language is not to express propositions or abstract meanings, but to deliver actions. For in order to respond in interaction we have to ascribe to the prior turn a primary ‘action’ – variously thought of as an ‘illocution’, ‘speech act’, ‘move’, etc. – to which we then respond. The analysis of interaction also relies heavily on attributing actions to turns, so that, e.g., sequences can be characterized in terms of actions and responses. Yet the process of action ascription remains largely understudied. We don't know much about how it is done, when it is done, nor even what kind of inventory of possible actions might exist, or the degree to which they are culturally variable. The study of action ascription remains perhaps the primary unfulfilled task in the study of language use, and it needs to be tackled from conversation-analytic, psycholinguistic, cross-linguistic and anthropological perspectives. In this talk I try to take stock of what we know, and derive a set of goals for and constraints on an adequate theory. Such a theory is likely to employ, I will suggest, a top-down plus bottom-up account of action perception, and a multi-level notion of action which may resolve some of the puzzles that have repeatedly arisen.
  • Levinson, S. C. (1989). Conversation. In E. Barnouw (Ed.), International encyclopedia of communications (pp. 407-410). New York: Oxford University Press.
  • Levinson, S. C. (2013). Cross-cultural universals and communication structures. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 67-80). Cambridge, MA: MIT Press.

    Abstract

    Given the diversity of languages, it is unlikely that the human capacity for language resides in rich universal syntactic machinery. More likely, it resides centrally in the capacity for vocal learning combined with a distinctive ethology for communicative interaction, which together (no doubt with other capacities) make diverse languages learnable. This chapter focuses on face-to-face communication, which is characterized by the mapping of sounds and multimodal signals onto speech acts and which can be deeply recursively embedded in interaction structure, suggesting an interactive origin for complex syntax. These actions are recognized through Gricean intention recognition, which is a kind of “mirroring” or simulation distinct from the classic mirror neuron system. The multimodality of conversational interaction makes evident the involvement of body, hand, and mouth, where the burden on these can be shifted, as in the use of speech and gesture, or hands and face in sign languages. Such shifts have taken place during the course of human evolution. All this suggests a slightly different approach to the mystery of music, whose origins should also be sought in joint action, albeit with a shift from turn-taking to simultaneous expression, and with an affective quality that may tap ancient sources residual in primate vocalization. The deep connection of language to music can best be seen in the only universal form of music, namely song.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C. (2009). Cognitive anthropology. In G. Senft, J. O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 50-57). Amsterdam: Benjamins.
  • Levinson, S. C. (2009). Foreword. In J. Liep (Ed.), A Papuan plutocracy: Ranked exchange on Rossel Island (pp. ix-xxiii). Copenhagen: Aarhus University Press.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C. (2009). Language and mind: Let's get the issues straight! In S. D. Blum (Ed.), Making sense of language: Readings in culture and communication (pp. 95-104). Oxford: Oxford University Press.
  • Levinson, S. C. (2018). Introduction: Demonstratives: Patterns in diversity. In S. C. Levinson, S. Cutfield, M. Dunn, N. J. Enfield, & S. Meira (Eds.), Demonstratives in cross-linguistic perspective (pp. 1-42). Cambridge: Cambridge University Press.
  • Levinson, S. C., & Majid, A. (2009). Preface and priorities. In A. Majid (Ed.), Field manual volume 12 (pp. III). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C., & Majid, A. (2009). The role of language in mind. In S. Nolen-Hoeksema, B. Fredrickson, G. Loftus, & W. Wagenaar (Eds.), Atkinson and Hilgard's introduction to psychology (15th ed., pp. 352). London: Cengage learning.
  • Levinson, S. C., & Dediu, D. (2013). The interplay of genetic and cultural factors in ongoing language evolution. In P. J. Richerson, & M. H. Christiansen (Eds.), Cultural evolution: Society, technology, language, and religion. Strüngmann Forum Reports, vol. 12 (pp. 219-232). Cambridge, Mass: MIT Press.
  • Levinson, S. C. (2018). Yélî Dnye: Demonstratives in the language of Rossel Island, Papua New Guinea. In S. C. Levinson, S. Cutfield, M. Dunn, N. J. Enfield, & S. Meira (Eds.), Demonstratives in cross-linguistic perspective (pp. 318-342). Cambridge: Cambridge University Press.
  • Long, M. (2018). The lifelong interplay between language and cognition: From language learning to perspective-taking, new insights into the ageing mind. PhD Thesis, University of Edinburgh, Edinburgh.
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., Nijhof, A., & Willems, R. M. (2018). The Narrative Brain Dataset (NBD), an fMRI dataset for the study of natural language processing in the brain. In B. Devereux, E. Shutova, & C.-R. Huang (Eds.), Proceedings of LREC 2018 Workshop "Linguistic and Neuro-Cognitive Resources (LiNCR) (pp. 8-11). Paris: LREC.

    Abstract

    We present the Narrative Brain Dataset, an fMRI dataset that was collected during spoken presentation of short excerpts of three stories in Dutch. Together with the brain imaging data, the dataset contains the written versions of the stimulation texts. The texts are accompanied with stochastic (perplexity and entropy) and semantic computational linguistic measures. The richness and unconstrained nature of the data allows the study of language processing in the brain in a more naturalistic setting than is common for fMRI studies. We hope that by making NBD available we serve the double purpose of providing useful neural data to researchers interested in natural language processing in the brain and to further stimulate data sharing in the field of neuroscience of language.
  • Lupyan, G., Wendorf, A., Berscia, L. M., & Paul, J. (2018). Core knowledge or language-augmented cognition? The case of geometric reasoning. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 252-254). Toruń, Poland: NCU Press. doi:10.12775/3991-1.062.
  • Mai, F., Galke, L., & Scherp, A. (2018). Using deep learning for title-based semantic subject indexing to reach competitive performance to full-text. In J. Chen, M. A. Gonçalves, J. M. Allen, E. A. Fox, M.-Y. Kan, & V. Petras (Eds.), JCDL '18: Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries (pp. 169-178). New York: ACM.

    Abstract

    For (semi-)automated subject indexing systems in digital libraries, it is often more practical to use metadata such as the title of a publication instead of the full-text or the abstract. Therefore, it is desirable to have good text mining and text classification algorithms that operate well already on the title of a publication. So far, the classification performance on titles is not competitive with the performance on the full-texts if the same number of training samples is used for training. However, it is much easier to obtain title data in large quantities and to use it for training than full-text data. In this paper, we investigate the question how models obtained from training on increasing amounts of title training data compare to models from training on a constant number of full-texts. We evaluate this question on a large-scale dataset from the medical domain (PubMed) and from economics (EconBiz). In these datasets, the titles and annotations of millions of publications are available, and they outnumber the available full-texts by a factor of 20 and 15, respectively. To exploit these large amounts of data to their full potential, we develop three strong deep learning classifiers and evaluate their performance on the two datasets. The results are promising. On the EconBiz dataset, all three classifiers outperform their full-text counterparts by a large margin. The best title-based classifier outperforms the best full-text method by 9.4%. On the PubMed dataset, the best title-based method almost reaches the performance of the best full-text classifier, with a difference of only 2.9%.
  • Mainz, N. (2018). Vocabulary knowledge and learning: Individual differences in adult native speakers. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Majid, A., van Leeuwen, T., & Dingemanse, M. (2009). Synaesthesia: A cross-cultural pilot. In A. Majid (Ed.), Field manual volume 12 (pp. 8-13). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883570.

    Abstract

    Synaesthesia is a condition in which stimulation of one sensory modality (e.g. hearing) causes additional experiences in a second, unstimulated modality (e.g. seeing colours). The goal of this task is to explore the types (and incidence) of synaesthesia in different cultures. Two simple tests can ascertain the existence of synaesthesia in your community.

    Additional information

    2009_Synaesthesia_audio_files.zip
  • Majid, A. (2018). Cultural factors shape olfactory language [Reprint]. In D. Howes (Ed.), Senses and Sensation: Critical and Primary Sources. Volume 3 (pp. 307-310). London: Bloomsbury Publishing.
  • Majid, A. (2018). Language and cognition. In H. Callan (Ed.), The International Encyclopedia of Anthropology. Hoboken: John Wiley & Sons Ltd.

    Abstract

    What is the relationship between the language we speak and the way we think? Researchers working at the interface of language and cognition hope to understand the complex interplay between linguistic structures and the way the mind works. This is thorny territory in anthropology and its closely allied disciplines, such as linguistics and psychology.

  • Majid, A. (2013). Olfactory language and cognition. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th annual meeting of the Cognitive Science Society (CogSci 2013) (pp. 68). Austin,TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0025/index.html.

    Abstract

    Since the cognitive revolution, a widely held assumption has been that—whereas content may vary across cultures—cognitive processes would be universal, especially those on the more basic levels. Even if scholars do not fully subscribe to this assumption, they often conceptualize, or tend to investigate, cognition as if it were universal (Henrich, Heine, & Norenzayan, 2010). The insight that universality must not be presupposed but scrutinized is now gaining ground, and cognitive diversity has become one of the hot (and controversial) topics in the field (Norenzayan & Heine, 2005). We argue that, for scrutinizing the cultural dimension of cognition, taking an anthropological perspective is invaluable, not only for the task itself, but for attenuating the home-field disadvantages that are inescapably linked to cross-cultural research (Medin, Bennis, & Chandler, 2010).
  • Majid, A. (2013). Psycholinguistics. In J. L. Jackson (Ed.), Oxford Bibliographies Online: Anthropology. Oxford: Oxford University Press.
  • Mamus, E., & Karadöller, D. Z. (2018). Anıları Zihinde Canlandırma [Imagery in autobiographical memories]. In S. Gülgöz, B. Ece, & S. Öner (Eds.), Hayatı Hatırlamak: Otobiyografik Belleğe Bilimsel Yaklaşımlar [Remembering Life: Scientific Approaches to Autobiographical Memory] (pp. 185-200). Istanbul, Turkey: Koç University Press.
  • Mani, N., Mishra, R. K., & Huettig, F. (2018). Introduction to 'The Interactive Mind: Language, Vision and Attention'. In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 1-2). Chennai: Macmillan Publishers India.
  • McDonough, J., Lehnert-LeHouillier, H., & Bardhan, N. P. (2009). The perception of nasalized vowels in American English: An investigation of on-line use of vowel nasalization in lexical access. In Nasal 2009.

    Abstract

    The goal of the presented study was to investigate the use of coarticulatory vowel nasalization in lexical access by native speakers of American English. In particular, we compare the use of coarticulatory place of articulation cues to that of coarticulatory vowel nasalization. Previous research on lexical access has shown that listeners use cues to the place of articulation of a postvocalic stop in the preceding vowel. However, vowel nasalization as a cue to an upcoming nasal consonant has been argued to be a more complex phenomenon. In order to establish whether coarticulatory vowel nasalization aids in the process of lexical access in the same way as place of articulation cues do, we conducted two perception experiments: an off-line 2AFC discrimination task and an on-line eyetracking study using the visual world paradigm. The results of our study suggest that listeners are indeed able to use vowel nasalization in similar ways to place of articulation information, and that both types of cues aid in lexical access.
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Merkx, D., & Scharenborg, O. (2018). Articulatory feature classification using convolutional neural networks. In Proceedings of Interspeech 2018 (pp. 2142-2146). doi:10.21437/Interspeech.2018-2275.

    Abstract

    The ultimate goal of our research is to improve an existing speech-based computational model of human speech recognition on the task of simulating the role of fine-grained phonetic information in human speech processing. As part of this work we are investigating articulatory feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. Articulatory feature (AF) modelling of speech has received a considerable amount of attention in automatic speech recognition research. Different approaches have been used to build AF classifiers, most notably multi-layer perceptrons. Recently, deep neural networks have been applied to the task of AF classification. This paper aims to improve AF classification by investigating two different approaches: 1) investigating the usefulness of a deep convolutional neural network (CNN) for AF classification; 2) integrating the Mel filtering operation into the CNN architecture. The results showed a remarkable improvement in classification accuracy of the CNNs over state-of-the-art AF classification results for Dutch, most notably in the minority classes. Integrating the Mel filtering operation into the CNN architecture did not further improve classification performance.
  • Micklos, A., Macuch Silva, V., & Fay, N. (2018). The prevalence of repair in studies of language evolution. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 316-318). Toruń, Poland: NCU Press. doi:10.12775/3991-1.075.
  • Mishra, R. K., Olivers, C. N. L., & Huettig, F. (2013). Spoken language and the decision to move the eyes: To what extent are language-mediated eye movements automatic? In V. S. C. Pammi, & N. Srinivasan (Eds.), Progress in Brain Research: Decision making: Neural and behavioural approaches (pp. 135-149). New York: Elsevier.

    Abstract

    Recent eye-tracking research has revealed that spoken language can guide eye gaze very rapidly (and closely time-locked to the unfolding speech) toward referents in the visual world. We discuss whether, and to what extent, such language-mediated eye movements are automatic rather than subject to conscious and controlled decision-making. We consider whether language-mediated eye movements adhere to four main criteria of automatic behavior, namely, whether they are fast and efficient, unintentional, unconscious, and overlearned (i.e., arrived at through extensive practice). Current evidence indicates that language-driven oculomotor behavior is fast but not necessarily always efficient. It seems largely unintentional though there is also some evidence that participants can actively use the information in working memory to avoid distraction in search. Language-mediated eye movements appear to be for the most part unconscious and have all the hallmarks of an overlearned behavior. These data are suggestive of automatic mechanisms linking language to potentially referred-to visual objects, but more comprehensive and rigorous testing of this hypothesis is needed.
  • Mitterer, H., Brouwer, S., & Huettig, F. (2018). How important is prediction for understanding spontaneous speech? In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 26-40). Chennai: Macmillan Publishers India.
  • Mulder, K., Ten Bosch, L., & Boves, L. (2018). Analyzing EEG Signals in Auditory Speech Comprehension Using Temporal Response Functions and Generalized Additive Models. In Proceedings of Interspeech 2018 (pp. 1452-1456). doi:10.21437/Interspeech.2018-1676.

    Abstract

    Analyzing EEG signals recorded while participants are listening to continuous speech with the purpose of testing linguistic hypotheses is complicated by the fact that the signals simultaneously reflect exogenous acoustic excitation and endogenous linguistic processing. This makes it difficult to trace subtle differences that occur in mid-sentence position. We apply an analysis based on multivariate temporal response functions to uncover subtle mid-sentence effects. This approach is based on a per-stimulus estimate of the response of the neural system to speech input. Analyzing EEG signals predicted on the basis of the response functions might then bring to light condition-specific differences in the filtered signals. We validate this approach by means of an analysis of EEG signals recorded with isolated word stimuli. Then, we apply the validated method to the analysis of the responses to the same words in the middle of meaningful sentences.
  • Mulder, K. (2013). Family and neighbourhood relations in the mental lexicon: A cross-language perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    Every day we read and hear thousands of words, seemingly without any effort. Yet a complex mental process unfolds in our brain, in which numerous words other than the one actually presented also become active. This happens especially when those other words resemble the presented word in spelling, pronunciation, or meaning. This similarity-driven activation even extends to other languages: similar words become active there too. Where are the limits of this activation process? When processing the English word 'steam', do you also activate the Dutch word 'stram' (a so-called 'neighbour')? And with 'clock', do you activate both 'clockwork' and 'klokhuis' (two morphological family members from different languages)? Kimberley Mulder investigated how the reading process of Dutch-English bilinguals is influenced by such relations. In several experimental studies she found that bilinguals activate morphological family members and orthographic neighbours not only from the language they are currently reading in, but also from the other language they know. Reading a word is thus by no means limited to what you actually see, but activates an entire network of words in your brain.

    Additional information

    full text via Radboud Repository
  • Musgrave, S., & Cutfield, S. (2009). Language documentation and an Australian National Corpus. In M. Haugh, K. Burridge, J. Mulder, & P. Peters (Eds.), Selected proceedings of the 2008 HCSNet Workshop on Designing the Australian National Corpus: Mustering Languages (pp. 10-18). Somerville: Cascadilla Proceedings Project.

    Abstract

    Corpus linguistics and language documentation are usually considered separate subdisciplines within linguistics, having developed from different traditions and often operating on different scales, but the authors will suggest that there are commonalities to the two: both aim to represent language use in a community, and both are concerned with managing digital data. The authors propose that the development of the Australian National Corpus (AusNC) be guided by the experience of language documentation in the management of multimodal digital data and its annotation, and in ethical issues pertaining to making the data accessible. This would allow an AusNC that is distributed, multimodal, and multilingual, with holdings of text, audio, and video data distributed across multiple institutions; and including Indigenous, sign, and migrant community languages. An audit of language material held by Australian institutions and individuals is necessary to gauge the diversity and volume of possible content, and to inform common technical standards.
  • Narasimhan, B., & Brown, P. (2009). Getting the inside story: Learning to talk about containment in Tzeltal and Hindi. In V. C. Mueller-Gathercole (Ed.), Routes to language: Studies in honor of Melissa Bowerman (pp. 97-132). New York: Psychology Press.

    Abstract

    The present study examines young children's uses of semantically specific and general relational containment terms (e.g. in, enter) in Hindi and Tzeltal, and the extent to which their usage patterns are influenced by input frequency. We hypothesize that if children have a preference for relational terms that are semantically specific, this will be reflected in early acquisition of more semantically specific expressions and underextension of semantically general ones, regardless of the distributional patterns of use of these terms in the input. Our findings however show a strong role for input frequency in guiding children's patterns of use of containment terms in the two languages. Yet language-specific lexicalization patterns play a role as well, since object-specific containment verbs are used as early as the semantically general 'enter' verb by children acquiring Tzeltal.
  • Nas, G., Kempen, G., & Hudson, P. (1984). De rol van spelling en klank bij woordherkenning tijdens het lezen [The role of spelling and sound in word recognition during reading]. In A. Thomassen, L. Noordman, & P. Elling (Eds.), Het leesproces. Lisse: Swets & Zeitlinger.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Norcliffe, E. (2018). Egophoricity and evidentiality in Guambiano (Nam Trik). In S. Floyd, E. Norcliffe, & L. San Roque (Eds.), Egophoricity (pp. 305-345). Amsterdam: Benjamins.

    Abstract

    Egophoric verbal marking is a typological feature common to Barbacoan languages, but otherwise unknown in the Andean sphere. The verbal systems of three out of the four living Barbacoan languages, Cha’palaa, Tsafiki and Awa Pit, have previously been shown to express egophoric contrasts. The status of Guambiano has, however, remained uncertain. In this chapter, I show that there are in fact two layers of egophoric or egophoric-like marking visible in Guambiano’s grammar. Guambiano patterns with certain other (non-Barbacoan) languages in having ego-categories which function within a broader evidential system. It is additionally possible to detect what is possibly a more archaic layer of egophoric marking in Guambiano’s verbal system. This marking may be inherited from a common Barbacoan system, thus pointing to a potential genealogical basis for the egophoric patterning common to these languages. The multiple formal expressions of egophoricity apparent both within and across the four languages reveal how egophoric contrasts are susceptible to structural renewal, suggesting a pan-Barbacoan preoccupation with the linguistic encoding of self-knowledge.
  • Norcliffe, E. (2009). Head-marking in usage and grammar: A study of variation and change in Yucatec Maya. PhD Thesis, Stanford University, Stanford, CA.

    Abstract

    Many Mayan languages make use of a special dependent verb form (the Agent Focus, or AF verb form), which alternates with the normal transitive verb form (the synthetic verb form) of main clauses when the subject of a transitive verb is focused, questioned or relativized. It has been a centerpiece of research in Mayan morphosyntax over the last forty years, due to its puzzling formal and distributional properties. In this dissertation I show how a usage-oriented approach to the phenomenon can provide important insights into this area of grammar which resists any categorical explanation. I propose that the historical origins of these special verb forms can be traced to the emergence of head marking. Drawing on cross-linguistic and historical data, I argue that the special verbs that occur in A-bar dependencies in Yucatec and a range of head-marking languages are byproducts of the frequency-sensitive grammaticalization process by which independent pronouns become pronominal inflection on verbs. I show that the relatively low frequency of adjacent pronoun-verb combinations in extraction contexts (where gaps are more frequent than resumptive pronouns) can give rise to asymmetric patterns of pronoun grammaticalization, and thus lead to the emergence of these morphological alternations. The asymmetric frequency distributions of gaps and RPs (within and across languages) in turn can be explained by processing preferences. I present three experiments which show that Yucatec speakers are more likely to use the resumptive verb form in embedded environments, and where the antecedent is indefinite.
Specifically, these studies indicate the need to bring discourse-level processing principles into the account of what have often been taken to be autonomously sentence-internal phenomena: factors such as distance and the referential salience of the antecedent have been shown to influence referential form choice in discourse, suggesting that the same cognitive principles lie behind both types of variation. More generally, the Yucatec studies demonstrate that production preferences in Yucatec relative clauses reflect patterns of RP/gap distributions that have been attested across grammars. The Highest Subject Restriction (the ban on subject RPs in local dependencies), which is apparently a categorical constraint in many languages, is reflected probabilistically in Yucatec in terms of production preferences. The definiteness restriction (RPs are obligatory with indefinite antecedents), which has been reported categorically in other languages, is also visible probabilistically in Yucatec production. This lends some statistically robust support to the view that typological patterns can arise via the conventionalization of processing preferences.
  • Ortega, G. (2013). Acquisition of a signed phonological system by hearing adults: The role of sign structure and iconicity. PhD Thesis, University College London, London.

    Abstract

    The phonological system of a sign language comprises meaningless sub-lexical units that define the structure of a sign. A number of studies have examined how learners of a sign language as a first language (L1) acquire these components. However, little is understood about the mechanism by which hearing adults develop visual phonological categories when learning a sign language as a second language (L2). Developmental studies have shown that sign complexity and iconicity, the clear mapping between the form of a sign and its referent, shape in different ways the order of emergence of a visual phonology. The aim of the present dissertation was to investigate how these two factors affect the development of a visual phonology in hearing adults learning a sign language as L2. The empirical data gathered in this dissertation confirm that sign structure and iconicity are important factors that determine L2 phonological development. Non-signers perform better at discriminating the contrastive features of phonologically simple signs than signs with multiple elements. Handshape was the parameter most difficult to learn, followed by movement, then orientation and finally location, which is the same order of acquisition reported in L1 sign acquisition. In addition, the ability to access the iconic properties of signs had a detrimental effect on phonological development because iconic signs were consistently articulated less accurately than arbitrary signs. Participants tended to retain the iconic elements of signs but disregarded their exact phonetic structure. Further, non-signers appeared to process iconic signs as iconic gestures, at least at the early stages of sign language acquisition. The empirical data presented in this dissertation suggest that non-signers exploit their gestural system as scaffolding for the new manual linguistic system and that sign L2 phonological development is strongly influenced by the structural complexity of a sign and its degree of iconicity.
  • Ortega, G., & Ozyurek, A. (2013). Gesture-sign interface in hearing non-signers' first exposure to sign. In Proceedings of the Tilburg Gesture Research Meeting [TiGeR 2013].

    Abstract

    Natural sign languages and gestures are complex communicative systems that allow the incorporation of features of a referent into their structure. They differ, however, in that signs are more conventionalised because they consist of meaningless phonological parameters. There is some evidence that, although non-signers find iconic signs more memorable, they can have more difficulty articulating their exact phonological components. In the present study, hearing non-signers took part in a sign repetition task in which they had to imitate as accurately as possible a set of iconic and arbitrary signs. Their renditions showed that iconic signs were articulated significantly less accurately than arbitrary signs. Participants were recalled six months later to take part in a sign generation task. In this task, participants were shown the English translation of the iconic signs they imitated six months prior. For each word, participants were asked to generate a sign (i.e., an iconic gesture). The handshapes produced in the sign repetition and sign generation tasks were compared to detect instances in which both renditions presented the same configuration. There was a significant correlation between articulation accuracy in the sign repetition task and handshape overlap. These results suggest some form of gestural interference in the production of iconic signs by hearing non-signers. We also suggest that in some instances non-signers may deploy their own conventionalised gesture when producing some iconic signs. These findings are interpreted as evidence that non-signers process iconic signs as gestures and that, in production, only when sign and gesture have overlapping features will they be capable of producing the phonological components of signs accurately.
  • Osswald, R., & Van Valin Jr., R. D. (2013). FrameNet, frame structure and the syntax-semantics interface. In T. Gamerschlag, D. Gerland, R. Osswald, & W. Petersen (Eds.), Frames and concept types: Applications in language and philosophy. Heidelberg: Springer.
  • Ostarek, M. (2018). Envisioning language: An exploration of perceptual processes in language comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ozyurek, A. (2018). Cross-linguistic variation in children’s multimodal utterances. In M. Hickmann, E. Veneziano, & H. Jisa (Eds.), Sources of variation in first language acquisition: Languages, contexts, and learners (pp. 123-138). Amsterdam: Benjamins.

    Abstract

    Our ability to use language is multimodal and requires tight coordination between what is expressed in speech and in gesture, such as pointing or iconic gestures that convey semantic, syntactic and pragmatic information related to speakers’ messages. Interestingly, what is expressed in gesture and how it is coordinated with speech differs in speakers of different languages. This paper discusses recent findings on the development of children’s multimodal expressions taking cross-linguistic variation into account. Although some aspects of speech-gesture development show language-specificity from an early age, it might still take children until nine years of age to exhibit fully adult patterns of cross-linguistic variation. These findings reveal insights about how children coordinate different levels of representations given that their development is constrained by patterns that are specific to their languages.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A. (2018). Role of gesture in language processing: Toward a unified account for production and comprehension. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), Oxford Handbook of Psycholinguistics (2nd ed., pp. 592-607). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198786825.013.25.

    Abstract

    Use of language in face-to-face context is multimodal. Production and perception of speech take place in the context of visual articulators such as lips, face, or hand gestures which convey relevant information to what is expressed in speech at different levels of language. While lips convey information at the phonological level, gestures contribute to semantic, pragmatic, and syntactic information, as well as to discourse cohesion. This chapter overviews recent findings showing that speech and gesture (e.g. a drinking gesture as someone says, “Would you like a drink?”) interact during production and comprehension of language at the behavioral, cognitive, and neural levels. Implications of these findings for current psycholinguistic theories and how they can be expanded to consider the multimodal context of language processing are discussed.
  • Pacheco, A., Araújo, S., Faísca, L., Petersson, K. M., & Reis, A. (2009). Profiling dyslexic children: Phonology and visual naming skills. In Abstracts presented at the International Neuropsychological Society, Finnish Neuropsychological Society, Joint Mid-Year Meeting July 29-August 1, 2009. Helsinki, Finland & Tallinn, Estonia (pp. 40). Retrieved from http://www.neuropsykologia.fi/ins2009/INS_MY09_Abstract.pdf.
  • Patterson, R. D., & Cutler, A. (1989). Auditory preprocessing and recognition of speech. In A. Baddeley, & N. Bernsen (Eds.), Research directions in cognitive science: A european perspective: Vol. 1. Cognitive psychology (pp. 23-60). London: Erlbaum.
  • Pawley, A., & Hammarström, H. (2018). The Trans New Guinea family. In B. Palmer (Ed.), Papuan Languages and Linguistics (pp. 21-196). Berlin: De Gruyter Mouton.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the kinematics of pointing gestures. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1127-1132). Austin, TX: Cognitive Science Society.

    Abstract

    In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer’s communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether communicative intent also shapes the form of pointing gestures. In an experimental setting, twenty-four participants produced pointing gestures identifying a referent for an addressee. The communicative intent of the speaker-gesturer was manipulated by varying the informativeness of the pointing gesture. A second independent variable was the presence or absence of concurrent speech. As a function of their communicative intent and irrespective of the presence of speech, participants varied the durations of the stroke and the post-stroke hold-phase of their gesture. These findings add to our understanding of how the communicative context influences the form that a gesture takes.
  • Petersson, K. M., Ingvar, M., & Reis, A. (2009). Language and literacy from a cognitive neuroscience perspective. In D. Olsen, & N. Torrance (Eds.), Cambridge handbook of literacy (pp. 152-181). Cambridge: Cambridge University Press.
  • Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2013). Distinct patterns of brain activity characterize lexical activation and competition in speech production [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 106.

    Abstract

    A fundamental ability of speakers is to quickly retrieve words from long-term memory. According to a prominent theory, concepts activate multiple associated words, which enter into competition for selection. Previous electrophysiological studies have provided evidence for the activation of multiple alternative words, but did not identify brain responses reflecting competition. We report a magnetoencephalography study examining the timing and neural substrates of lexical activation and competition. The degree of activation of competing words was manipulated by presenting pictures (e.g., dog) simultaneously with distractor words. The distractors were semantically related to the picture name (cat), unrelated (pin), or identical (dog). Semantic distractors are stronger competitors to the picture name, because they receive additional activation from the picture, whereas unrelated distractors do not. Picture naming times were longer with semantic than with unrelated and identical distractors. The patterns of phase-locked and non-phase-locked activity were distinct but temporally overlapping. Phase-locked activity in left middle temporal gyrus, peaking at 400 ms, was larger on unrelated than semantic and identical trials, suggesting differential effort in processing the alternative words activated by the picture-word stimuli. Non-phase-locked activity in the 4-10 Hz range between 400-650 ms in left superior frontal gyrus was larger on semantic than unrelated and identical trials, suggesting different degrees of effort in resolving the competition among the alternative words, as reflected in the naming times. These findings characterize distinct patterns of brain activity associated with lexical activation and competition, respectively, and their temporal relation, supporting the theory that words are selected by competition.
  • Piepers, J., & Redl, T. (2018). Gender-mismatching pronouns in context: The interpretation of possessive pronouns in Dutch and Limburgian. In B. Le Bruyn, & J. Berns (Eds.), Linguistics in the Netherlands 2018 (pp. 97-110). Amsterdam: Benjamins.

    Abstract

    Gender-(mis)matching pronouns have been studied extensively in experiments. However, a phenomenon common to various languages has thus far been overlooked: the systemic use of non-feminine pronouns when referring to female individuals. The present study is the first to provide experimental insights into the interpretation of such a pronoun: Limburgian zien ‘his/its’ and Dutch zijn ‘his/its’ are grammatically ambiguous between masculine and neuter, but while Limburgian zien can refer to women, the Dutch equivalent zijn cannot. Employing an acceptability judgment task, we presented speakers of Limburgian (N = 51) with recordings of sentences in Limburgian featuring zien, and speakers of Dutch (N = 52) with Dutch translations of these sentences featuring zijn. All sentences featured a potential male or female antecedent embedded in a stereotypically male or female context. We found that ratings were higher for sentences in which the pronoun could refer back to the antecedent. For Limburgians, this extended to sentences mentioning female individuals. Context further modulated sentence appreciation. Possible mechanisms regarding the interpretation of zien as coreferential with a female individual will be discussed.
  • Poellmann, K. (2013). The many ways listeners adapt to reductions in casual speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Puccini, D. (2013). The use of deictic versus representational gestures in infancy. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ramus, F., & Fisher, S. E. (2009). Genetics of language. In M. S. Gazzaniga (Ed.), The cognitive neurosciences, 4th ed. (pp. 855-871). Cambridge, MA: MIT Press.

    Abstract

    It has long been hypothesised that the human faculty to acquire a language is in some way encoded in our genetic program. However, only recently has genetic evidence been available to begin to substantiate the presumed genetic basis of language. Here we review the first data from molecular genetic studies showing association between gene variants and language disorders (specific language impairment, speech sound disorder, developmental dyslexia), we discuss the biological function of these genes, and we further speculate on the more general question of how the human genome builds a brain that can learn a language.
  • Rapold, C. J., & Zaugg-Coretti, S. (2009). Exploring the periphery of the central Ethiopian Linguistic area: Data from Yemsa and Benchnon. In J. Crass, & R. Meyer (Eds.), Language contact and language change in Ethiopia (pp. 59-81). Köln: Köppe.
