Publications

Displaying 201 - 300 of 412
  • Koch, X., & Janse, E. (2015). Effects of age and hearing loss on articulatory precision for sibilants. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    This study investigates the effects of adult age and speaker abilities on articulatory precision for sibilant productions. Normal-hearing young adults with better sibilant discrimination have been shown to produce greater spectral sibilant contrasts. As reduced auditory feedback may gradually impact feedforward commands, we investigate whether articulatory precision as indexed by spectral mean for [s] and [ʃ] decreases with age, and more particularly with age-related hearing loss. Younger, middle-aged and older adults read aloud words starting with the sibilants [s] or [ʃ]. Possible effects of cognitive, perceptual, linguistic and sociolinguistic background variables on the sibilants’ acoustics were also investigated. Sibilant contrasts were less pronounced for male than female speakers. Most importantly, for the fricative [s], the spectral mean was modulated by individual high-frequency hearing loss, but not age. These results underscore that even mild hearing loss already affects articulatory precision.
  • Koutamanis, E., Kootstra, G. J., Dijkstra, T., & Unsworth, S. (2021). Lexical priming as evidence for language-nonselective access in the simultaneous bilingual child's lexicon. In D. Dionne, & L.-A. Vidal Covas (Eds.), BUCLD 45: Proceedings of the 45th annual Boston University Conference on Language Development (pp. 413-430). Somerville, MA: Cascadilla Press.
  • De Kovel, C. G. F., & Fisher, S. E. (2018). Molecular genetic methods. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 330-353). Hoboken: Wiley.
  • Kruspe, N., Burenhult, N., & Wnuk, E. (2015). Northern Aslian. In P. Sidwell, & M. Jenny (Eds.), Handbook of Austroasiatic Languages (pp. 419-474). Leiden: Brill.
  • Kuijpers, C. T., Coolen, R., Houston, D., & Cutler, A. (1998). Using the head-turning technique to explore cross-linguistic performance differences. In C. Rovee-Collier, L. Lipsitt, & H. Hayne (Eds.), Advances in infancy research: Vol. 12 (pp. 205-220). Stamford: Ablex.
  • Kupisch, T., Pereira Soares, S. M., Puig-Mayenco, E., & Rothman, J. (2021). Multilingualism and Chomsky's Generative Grammar. In N. Allott (Ed.), A companion to Chomsky (pp. 232-242). doi:10.1002/9781119598732.ch15.

    Abstract

    Just as Einstein's general theory of relativity is concerned with explaining the basics of an observable experience – i.e., gravity – most people take for granted that Chomsky's theory of generative grammar (GG) is concerned with the basic nature of language. This chapter highlights a mere subset of central constructs in GG, showing how they have featured prominently and thus shaped formal linguistic studies in multilingualism. Because multilingualism includes a wide range of nonmonolingual populations, the constructs are divided across child bilingualism and adult third language acquisition for greater coverage. In the case of the former, the chapter examines how poverty of the stimulus has been investigated. Using the nascent field of L3/Ln acquisition as the backdrop, it discusses how the GG constructs of I-language versus E-language sit at the core of debates regarding the very notion of what linguistic transfer and mental representations should be taken to be.
  • Lai, V. T., & Narasimhan, B. (2015). Verb representation and thinking-for-speaking effects in Spanish-English bilinguals. In R. G. De Almeida, & C. Manouilidou (Eds.), Cognitive science perspectives on verb representation and processing (pp. 235-256). Cham: Springer.

    Abstract

    Speakers of English habitually encode motion events using manner-of-motion verbs (e.g., spin, roll, slide) whereas Spanish speakers rely on path-of-motion verbs (e.g., enter, exit, approach). Here, we ask whether the language-specific verb representations used in encoding motion events induce different modes of “thinking-for-speaking” in Spanish–English bilinguals. That is, assuming that the verb encodes the most salient information in the clause, do bilinguals find the path of motion to be more salient than manner of motion if they had previously described the motion event using Spanish versus English? In our study, Spanish–English bilinguals described a set of target motion events in either English or Spanish and then participated in a nonlinguistic similarity judgment task in which they viewed the target motion events individually (e.g., a ball rolling into a cave) followed by two variants (a “same-path” variant such as a ball sliding into a cave or a “same-manner” variant such as a ball rolling away from a cave). Participants had to select one of the two variants that they judged to be more similar to the target event: The event that shared the same path of motion as the target versus the one that shared the same manner of motion. Our findings show that bilingual speakers were more likely to classify two motion events as being similar if they shared the same path of motion and if they had previously described the target motion events in Spanish versus in English. Our study provides further evidence for the “thinking-for-speaking” hypothesis by demonstrating that bilingual speakers can flexibly shift between language-specific construals of the same event “on-the-fly.”
  • Lattenkamp, E. Z., Vernes, S. C., & Wiegrebe, L. (2018). Mammalian models for the study of vocal learning: A new paradigm in bats. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 235-237). Toruń, Poland: NCU Press. doi:10.12775/3991-1.056.
  • Lauscher, A., Eckert, K., Galke, L., Scherp, A., Rizvi, S. T. R., Ahmed, S., Dengel, A., Zumstein, P., & Klein, A. (2018). Linked open citation database: Enabling libraries to contribute to an open and interconnected citation graph. In J. Chen, M. A. Gonçalves, J. M. Allen, E. A. Fox, M.-Y. Kan, & V. Petras (Eds.), JCDL '18: Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries (pp. 109-118). New York: ACM. doi:10.1145/3197026.3197050.

    Abstract

    Citations play a crucial role in the scientific discourse, in information retrieval, and in bibliometrics. Many initiatives are currently promoting the idea of having free and open citation data. Creation of citation data, however, is not part of the cataloging workflow in libraries nowadays.
    In this paper, we present our project Linked Open Citation Database, in which we design distributed processes and a system infrastructure based on linked data technology. The goal is to show that efficiently cataloging citations in libraries using a semi-automatic approach is possible. We specifically describe the current state of the workflow and its implementation. We show that we could significantly improve the automatic reference extraction that is crucial for the subsequent data curation. We further give insights on the curation and linking process and provide evaluation results that not only direct the further development of the project, but also allow us to discuss its overall feasibility.
  • Lee, R., Chambers, C. G., Huettig, F., & Ganea, P. A. (2017). Children’s semantic and world knowledge overrides fictional information during anticipatory linguistic processing. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Meeting of the Cognitive Science Society (CogSci 2017) (pp. 730-735). Austin, TX: Cognitive Science Society.

    Abstract

    Using real-time eye-movement measures, we asked how a fantastical discourse context competes with stored representations of semantic and world knowledge to influence children's and adults' moment-by-moment interpretation of a story. Seven-year-olds were less effective at bypassing stored semantic and world knowledge during real-time interpretation than adults. Nevertheless, an effect of discourse context on comprehension was still apparent.
  • Lefever, E., Hendrickx, I., Croijmans, I., Van den Bosch, A., & Majid, A. (2018). Discovering the language of wine reviews: A text mining account. In N. Calzolari, K. Choukri, C. Cieri, T. Declerck, S. Goggi, K. Hasida, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, S. Piperidis, & T. Tokunaga (Eds.), Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018) (pp. 3297-3302). Paris: LREC.

    Abstract

    It is widely held that smells and flavors are impossible to put into words. In this paper we test this claim by seeking predictive patterns in wine reviews, which ostensibly aim to provide guides to perceptual content. Wine reviews have previously been critiqued as random and meaningless. We collected an English corpus of wine reviews with their structured metadata, and applied machine learning techniques to automatically predict the wine's color, grape variety, and country of origin. To train the three supervised classifiers, three different information sources were incorporated: lexical bag-of-words features, domain-specific terminology features, and semantic word embedding features. In addition, using regression analysis we investigated basic review properties, i.e., review length, average word length, and their relationship to the scalar values of price and review score. Our results show that wine experts do share a common vocabulary to describe wines and they use this in a consistent way, which makes it possible to automatically predict wine characteristics based on the review text alone. This means that odors and flavors may be more expressible in language than typically acknowledged.
  • Lehecka, T. (2015). Collocation and colligation. In J.-O. Östman, & J. Verschueren (Eds.), Handbook of Pragmatics Online. Amsterdam: Benjamins. doi:10.1075/hop.19.col2.
  • Lev-Ari, S. (2015). Adjusting the manner of language processing to the social context: Attention allocation during interactions with non-native speakers. In R. K. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and Vision in Language Processing (pp. 185-195). New York: Springer. doi:10.1007/978-81-322-2443-3_11.
  • Levelt, W. J. M., & Ruijssenaars, A. (1995). Levensbericht Johan Joseph Dumont. In Jaarboek Koninklijke Nederlandse Akademie van Wetenschappen (pp. 31-36).
  • Levelt, W. J. M. (1995). Chapters of psychology: An interview with Wilhelm Wundt. In R. L. Solso, & D. W. Massaro (Eds.), The science of mind: 2001 and beyond (pp. 184-202). Oxford: Oxford University Press.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (pp. 55-55).
  • Levelt, W. J. M. (2015). Levensbericht George Armitage Miller 1920 - 2012. In KNAW levensberichten en herdenkingen 2014 (pp. 38-42). Amsterdam: KNAW.
  • Levelt, W. J. M. (1986). Herdenking van Joseph Maria Franciscus Jaspars (16 maart 1934 - 31 juli 1985). In Jaarboek 1986 Koninklijke Nederlandse Akademie van Wetenschappen (pp. 187-189). Amsterdam: North Holland.
  • Levelt, W. J. M. (1987). Hochleistung in Millisekunden - Sprechen und Sprache verstehen. In Jahrbuch der Max-Planck-Gesellschaft (pp. 61-77). Göttingen: Vandenhoeck & Ruprecht.
  • Levelt, W. J. M. (1995). Psycholinguistics. In C. C. French, & A. M. Colman (Eds.), Cognitive psychology (reprint, pp. 39-57). London: Longman.
  • Levelt, W. J. M. (2015). Sleeping Beauties. In I. Toivonen, P. Csúrii, & E. Van der Zee (Eds.), Structures in the Mind: Essays on Language, Music, and Cognition in Honor of Ray Jackendoff (pp. 235-255). Cambridge, MA: MIT Press.
  • Levelt, W. J. M., & d'Arcais, F. (1987). Snelheid en uniciteit bij lexicale toegang. In H. Crombag, L. Van der Kamp, & C. Vlek (Eds.), De psychologie voorbij: Ontwikkelingen rond model, metriek en methode in de gedragswetenschappen (pp. 55-68). Lisse: Swets & Zeitlinger.
  • Levelt, W. J. M., & Schriefers, H. (1987). Stages of lexical access. In G. A. Kempen (Ed.), Natural language generation: new results in artificial intelligence, psychology and linguistics (pp. 395-404). Dordrecht: Nijhoff.
  • Levelt, W. J. M. (1986). Zur sprachlichen Abbildung des Raumes: Deiktische und intrinsische Perspektive. In H. Bosshardt (Ed.), Perspektiven auf Sprache. Interdisziplinäre Beiträge zum Gedenken an Hans Hörmann (pp. 187-211). Berlin: De Gruyter.
  • Levinson, S. C. (1995). 'Logical' Connectives in Natural Language: A First Questionnaire. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 61-69). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513476.

    Abstract

    It has been hypothesised that human reasoning has a non-linguistic foundation, but is nevertheless influenced by the formal means available in a language. For example, Western logic is transparently related to European sentential connectives (e.g., and, if … then, or, not), some of which cannot be unambiguously expressed in other languages. The questionnaire explores reasoning tools and practices through investigating translation equivalents of English sentential connectives and collecting examples of “reasoned arguments”.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C. (1987). Minimization and conversational inference. In M. Bertuccelli Papi, & J. Verschueren (Eds.), The pragmatic perspective: Selected papers from the 1985 International Pragmatics Conference (pp. 61-129). Benjamins.
  • Levinson, S. C. (2017). Living with Manny's dangerous idea. In G. Raymond, G. H. Lerner, & J. Heritage (Eds.), Enabling human conduct: Studies of talk-in-interaction in honor of Emanuel A. Schegloff (pp. 327-349). Amsterdam: Benjamins.
  • Levinson, S. C. (1995). Interactional biases in human thinking. In E. N. Goody (Ed.), Social intelligence and interaction (pp. 221-260). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2018). Introduction: Demonstratives: Patterns in diversity. In S. C. Levinson, S. Cutfield, M. Dunn, N. J. Enfield, & S. Meira (Eds.), Demonstratives in cross-linguistic perspective (pp. 1-42). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2017). Speech acts. In Y. Huang (Ed.), Oxford handbook of pragmatics (pp. 199-216). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199697960.013.22.

    Abstract

    The essential insight of speech act theory was that when we use language, we perform actions—in a more modern parlance, core language use in interaction is a form of joint action. Over the last thirty years, speech acts have been relatively neglected in linguistic pragmatics, although important work has been done especially in conversation analysis. Here we review the core issues—the identifying characteristics, the degree of universality, the problem of multiple functions, and the puzzle of speech act recognition. Special attention is drawn to the role of conversation structure, probabilistic linguistic cues, and plan or sequence inference in speech act recognition, and to the centrality of deep recursive structures in sequences of speech acts in conversation.

  • Levinson, S. C. (1995). Three levels of meaning. In F. Palmer (Ed.), Grammar and meaning: Essays in honour of Sir John Lyons (pp. 90-115). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2018). Yélî Dnye: Demonstratives in the language of Rossel Island, Papua New Guinea. In S. C. Levinson, S. Cutfield, M. Dunn, N. J. Enfield, & S. Meira (Eds.), Demonstratives in cross-linguistic perspective (pp. 318-342). Cambridge: Cambridge University Press.
  • Levshina, N. (2021). Conditional inference trees and random forests. In M. Paquot, & T. Gries (Eds.), Practical Handbook of Corpus Linguistics (pp. 611-643). New York: Springer.
  • Levshina, N., & Moran, S. (Eds.). (2021). Efficiency in human languages: Corpus evidence for universal principles [Special Issue]. Linguistics Vanguard, 7(s3).
  • Little, H., Eryılmaz, K., & de Boer, B. (2015). A new artificial sign-space proxy for investigating the emergence of structure and categories in speech. In The Scottish Consortium for ICPhS 2015 (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015).
  • Little, H., Eryılmaz, K., & de Boer, B. (2015). Linguistic modality affects the creation of structure and iconicity in signals. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. Jennings, & P. Maglio (Eds.), The 37th annual meeting of the Cognitive Science Society (CogSci 2015) (pp. 1392-1398). Austin, TX: Cognitive Science Society.

    Abstract

    Different linguistic modalities (speech or sign) offer different levels at which signals can iconically represent the world. One hypothesis argues that this iconicity has an effect on how linguistic structure emerges. However, exactly how and why these effects might come about is in need of empirical investigation. In this contribution, we present a signal creation experiment in which both the signalling space and the meaning space are manipulated so that different levels and types of iconicity are available between the signals and meanings. Signals are produced using an infrared sensor that detects the hand position of participants to generate auditory feedback. We find evidence that iconicity may be maladaptive for the discrimination of created signals. Further, we implemented Hidden Markov Models to characterise the structure within signals, which was also used to inform a metric for iconicity.
  • Little, H., Perlman, M., & Eryilmaz, K. (2017). Repeated interactions can lead to more iconic signals. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 760-765). Austin, TX: Cognitive Science Society.

    Abstract

    Previous research has shown that repeated interactions can cause iconicity in signals to reduce. However, data from several recent studies has shown the opposite trend: an increase in iconicity as the result of repeated interactions. Here, we discuss whether signals may become less or more iconic as a result of the modality used to produce them. We review several recent experimental results before presenting new data from multi-modal signals, where visual input creates audio feedback. Our results show that the growth in iconicity present in the audio information may come at a cost to iconicity in the visual information. Our results have implications for how we think about and measure iconicity in artificial signalling experiments. Further, we discuss how iconicity in real world speech may stem from auditory, kinetic or visual information, but iconicity in these different modalities may conflict.
  • Little, H. (Ed.). (2017). Special Issue on the Emergence of Sound Systems [Special Issue]. The Journal of Language Evolution, 2(1).
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., Nijhof, A., & Willems, R. M. (2018). The Narrative Brain Dataset (NBD), an fMRI dataset for the study of natural language processing in the brain. In B. Devereux, E. Shutova, & C.-R. Huang (Eds.), Proceedings of LREC 2018 Workshop "Linguistic and Neuro-Cognitive Resources (LiNCR) (pp. 8-11). Paris: LREC.

    Abstract

    We present the Narrative Brain Dataset, an fMRI dataset that was collected during spoken presentation of short excerpts of three stories in Dutch. Together with the brain imaging data, the dataset contains the written versions of the stimulation texts. The texts are accompanied with stochastic (perplexity and entropy) and semantic computational linguistic measures. The richness and unconstrained nature of the data allows the study of language processing in the brain in a more naturalistic setting than is common for fMRI studies. We hope that by making NBD available we serve the double purpose of providing useful neural data to researchers interested in natural language processing in the brain and to further stimulate data sharing in the field of neuroscience of language.
  • Lupyan, G., Wendorf, A., Berscia, L. M., & Paul, J. (2018). Core knowledge or language-augmented cognition? The case of geometric reasoning. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 252-254). Toruń, Poland: NCU Press. doi:10.12775/3991-1.062.
  • Mai, F., Galke, L., & Scherp, A. (2018). Using deep learning for title-based semantic subject indexing to reach competitive performance to full-text. In J. Chen, M. A. Gonçalves, J. M. Allen, E. A. Fox, M.-Y. Kan, & V. Petras (Eds.), JCDL '18: Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries (pp. 169-178). New York: ACM.

    Abstract

    For (semi-)automated subject indexing systems in digital libraries, it is often more practical to use metadata such as the title of a publication instead of the full-text or the abstract. Therefore, it is desirable to have good text mining and text classification algorithms that operate well already on the title of a publication. So far, the classification performance on titles is not competitive with the performance on the full-texts if the same number of training samples is used for training. However, it is much easier to obtain title data in large quantities and to use it for training than full-text data. In this paper, we investigate the question of how models obtained from training on increasing amounts of title training data compare to models from training on a constant number of full-texts. We evaluate this question on a large-scale dataset from the medical domain (PubMed) and from economics (EconBiz). In these datasets, the titles and annotations of millions of publications are available, and they outnumber the available full-texts by a factor of 20 and 15, respectively. To exploit these large amounts of data to their full potential, we develop three strong deep learning classifiers and evaluate their performance on the two datasets. The results are promising. On the EconBiz dataset, all three classifiers outperform their full-text counterparts by a large margin. The best title-based classifier outperforms the best full-text method by 9.4%. On the PubMed dataset, the best title-based method almost reaches the performance of the best full-text classifier, with a difference of only 2.9%.
  • Majid, A., & Enfield, N. J. (2017). Body. In H. Burkhardt, J. Seibt, G. Imaguire, & S. Gerogiorgakis (Eds.), Handbook of mereology (pp. 100-103). Munich: Philosophia.
  • Majid, A. (2015). Comparing lexicons cross-linguistically. In J. R. Taylor (Ed.), The Oxford Handbook of the Word (pp. 364-379). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199641604.013.020.

    Abstract

    The lexicon is central to the concerns of disparate disciplines and has correspondingly elicited conflicting proposals about some of its foundational properties. Some suppose that word meanings and their associated concepts are largely universal, while others note that local cultural interests infiltrate every category in the lexicon. This chapter reviews research in two semantic domains—perception and the body—in order to illustrate crosslinguistic similarities and differences in semantic fields. Data is considered from a wide array of languages, especially those from small-scale indigenous communities which are often overlooked. In every lexical field we find considerable variation across cultures, raising the question of where this variation comes from. Is it the result of different ecological or environmental niches, cultural practices, or accidents of historical pasts? Current evidence suggests that diverse pressures differentially shape lexical fields.
  • Majid, A. (2018). Cultural factors shape olfactory language [Reprint]. In D. Howes (Ed.), Senses and Sensation: Critical and Primary Sources. Volume 3 (pp. 307-310). London: Bloomsbury Publishing.
  • Majid, A. (2018). Language and cognition. In H. Callan (Ed.), The International Encyclopedia of Anthropology. Hoboken: John Wiley & Sons Ltd.

    Abstract

    What is the relationship between the language we speak and the way we think? Researchers working at the interface of language and cognition hope to understand the complex interplay between linguistic structures and the way the mind works. This is thorny territory in anthropology and its closely allied disciplines, such as linguistics and psychology.

  • Majid, A., Manko, P., & De Valk, J. (2017). Language of the senses. In S. Dekker (Ed.), Scientific breakthroughs in the classroom! (pp. 40-76). Nijmegen: Science Education Hub Radboud University.

    Abstract

    The project that we describe in this chapter has the theme ‘Language of the senses’. This theme is based on the research of Asifa Majid and her team regarding the influence of language and culture on sensory perception. The chapter consists of two sections. Section 2.1 describes how different sensory perceptions are spoken of in different languages. Teachers can use this section as substantive preparation before they launch this theme in the classroom. Section 2.2 describes how teachers can handle this theme in accordance with the seven phases of inquiry-based learning. Chapter 1, in which the general guideline of the seven phases is described, forms the basis for this. We therefore recommend the use of chapter 1 as the starting point for the execution of a project in the classroom. This chapter provides the thematic additions.

  • Majid, A., Jordan, F., & Dunn, M. (Eds.). (2015). Semantic systems in closely related languages [Special Issue]. Language Sciences, 49.
  • Majid, A., Manko, P., & de Valk, J. (2017). Taal der Zintuigen. In S. Dekker, & J. Van Baren-Nawrocka (Eds.), Wetenschappelijke doorbraken de klas in! Molecuulbotsingen, Stress en Taal der Zintuigen (pp. 128-166). Nijmegen: Wetenschapsknooppunt Radboud Universiteit.

    Abstract

    [Translated from Dutch:] Language of the senses concerns the influence of language and culture on sensory perception. How do you describe what you see, feel, taste, or smell? In some cultures there are many different words for color; in other cultures, by contrast, very few. Are we born with these different color categories? And does how you talk about something also determine what you perceive?
  • Mak, M., & Willems, R. M. (2021). Mental simulation during literary reading. In D. Kuiken, & A. M. Jacobs (Eds.), Handbook of empirical literary studies (pp. 63-84). Berlin: De Gruyter.

    Abstract

    Readers experience a number of sensations during reading. They do not – or do not only – process words and sentences in a detached, abstract manner. Instead they “perceive” what they read about. They see descriptions of scenery, feel what characters feel, and hear the sounds in a story. These sensations tend to be grouped under the umbrella terms “mental simulation” and “mental imagery.” This chapter provides an overview of empirical research on the role of mental simulation during literary reading. Our chapter also discusses what mental simulation is and how it relates to mental imagery. Moreover, it explores how mental simulation plays a role in leading models of literary reading and investigates under what circumstances mental simulation occurs during literature reading. Finally, the effect of mental simulation on the literary reader’s experience is discussed, and suggestions and unresolved issues in this field are formulated.
  • Malt, B. C., Gennari, S., Imai, M., Ameel, E., Saji, N., & Majid, A. (2015). Where are the concepts? What words can and can’t reveal. In E. Margolis, & S. Laurence (Eds.), The conceptual Mind: New directions in the study of concepts (pp. 291-326). Cambridge, MA: MIT Press.

    Abstract

    Concepts are so fundamental to human cognition that Fodor declared the heart of a cognitive science to be its theory of concepts. To study concepts, though, cognitive scientists need to be able to identify some. The prevailing assumption has been that they are revealed by words such as triangle, table, and robin. But languages vary dramatically in how they carve up the world with names. Either ordinary concepts must be heavily language dependent, or names cannot be a direct route to concepts. We asked speakers of English, Dutch, Spanish, and Japanese to name a set of 36 video clips of human locomotion and to judge the similarities among them. We investigated what name inventories, name extensions, scaling solutions on name similarity, and scaling solutions on nonlinguistic similarity from the groups, individually and together, suggest about the underlying concepts. Aggregated naming data and similarity solutions converged on results distinct from individual languages.
  • Mamus, E., Speed, L. J., Ozyurek, A., & Majid, A. (2021). Sensory modality of input influences encoding of motion events in speech but not co-speech gestures. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 376-382). Vienna: Cognitive Science Society.

    Abstract

    Visual and auditory channels have different affordances and this is mirrored in what information is available for linguistic encoding. The visual channel has high spatial acuity, whereas the auditory channel has better temporal acuity. These differences may lead to different conceptualizations of events and affect multimodal language production. Previous studies of motion events typically present visual input to elicit speech and gesture. The present study compared events presented as audio-only, visual-only, or multimodal (visual+audio) input and assessed speech and co-speech gesture for path and manner of motion in Turkish. Speakers with audio-only input mentioned path more and manner less in verbal descriptions, compared to speakers who had visual input. There was no difference in the type or frequency of gestures across conditions, and gestures were dominated by path-only gestures. This suggests that input modality influences speakers’ encoding of path and manner of motion events in speech, but not in co-speech gestures.
  • Mamus, E., & Karadöller, D. Z. (2018). Anıları Zihinde Canlandırma [Imagery in autobiographical memories]. In S. Gülgöz, B. Ece, & S. Öner (Eds.), Hayatı Hatırlamak: Otobiyografik Belleğe Bilimsel Yaklaşımlar [Remembering Life: Scientific Approaches to Autobiographical Memory] (pp. 185-200). Istanbul, Turkey: Koç University Press.
  • Mani, N., Mishra, R. K., & Huettig, F. (2018). Introduction to 'The Interactive Mind: Language, Vision and Attention'. In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 1-2). Chennai: Macmillan Publishers India.
  • Martin, R. C., & Tan, Y. (2015). Sentence comprehension deficits: Independence and interaction of syntax, semantics, and working memory. In A. E. Hillis (Ed.), Handbook of adult language disorders (2nd ed., pp. 303-327). Boca Raton: CRC Press.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2017). Whether long-term tracking of speech rate affects perception depends on who is talking. In Proceedings of Interspeech 2017 (pp. 586-590). doi:10.21437/Interspeech.2017-1517.

    Abstract

    Speech rate is known to modulate perception of temporally ambiguous speech sounds. For instance, a vowel may be perceived as short when the immediate speech context is slow, but as long when the context is fast. Yet, effects of long-term tracking of speech rate are largely unexplored. Two experiments tested whether long-term tracking of rate influences perception of the temporal Dutch vowel contrast /ɑ/-/a:/. In Experiment 1, one low-rate group listened to 'neutral' rate speech from talker A and to slow speech from talker B. Another high-rate group was exposed to the same neutral speech from A, but to fast speech from B. Between-group comparison of the 'neutral' trials revealed that the low-rate group reported a higher proportion of /a:/ in A's 'neutral' speech, indicating that A sounded faster when B was slow. Experiment 2 tested whether one's own speech rate also contributes to effects of long-term tracking of rate. Here, talker B's speech was replaced by playback of participants' own fast or slow speech. No evidence was found that one's own voice affected perception of talker A in larger speech contexts. These results carry implications for our understanding of the mechanisms involved in rate-dependent speech perception and of dialogue.
  • Matić, D. (2015). Information structure in linguistics. In J. D. Wright (Ed.), The International Encyclopedia of Social and Behavioral Sciences (2nd ed.) Vol. 12 (pp. 95-99). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.53013-X.

    Abstract

    Information structure is a subfield of linguistic research dealing with the ways speakers encode instructions to the hearer on how to process the message relative to their temporary mental states. To this end, sentences are segmented into parts conveying known and yet-unknown information, usually labeled ‘topic’ and ‘focus.’ Many languages have developed specialized grammatical and lexical means of indicating this segmentation.
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research: Vol. 12 (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Merkx, D., & Frank, S. L. (2021). Human sentence processing: Recurrence or attention? In E. Chersoni, N. Hollenstein, C. Jacobs, Y. Oseki, L. Prévot, & E. Santus (Eds.), Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2021) (pp. 12-22). Stroudsburg, PA, USA: Association for Computational Linguistics (ACL). doi:10.18653/v1/2021.cmcl-1.2.

    Abstract

    Recurrent neural networks (RNNs) have long been an architecture of interest for computational models of human sentence processing. The recently introduced Transformer architecture outperforms RNNs on many natural language processing tasks, but little is known about its ability to model human language processing. We compare Transformer- and RNN-based language models’ ability to account for measures of human reading effort. Our analysis shows Transformers to outperform RNNs in explaining self-paced reading times and neural activity during reading English sentences, challenging the widely held idea that human sentence processing involves recurrent and immediate processing and providing evidence for cue-based retrieval.
  • Merkx, D., Frank, S. L., & Ernestus, M. (2021). Semantic sentence similarity: Size does not always matter. In Proceedings of Interspeech 2021 (pp. 4393-4397). doi:10.21437/Interspeech.2021-1464.

    Abstract

    This study addresses the question whether visually grounded speech recognition (VGS) models learn to capture sentence semantics without access to any prior linguistic knowledge. We produce synthetic and natural spoken versions of a well-known semantic textual similarity database and show that our VGS model produces embeddings that correlate well with human semantic similarity judgements. Our results show that a model trained on a small image-caption database outperforms two models trained on much larger databases, indicating that database size is not all that matters. We also investigate the importance of having multiple captions per image and find that this is indeed helpful even if the total number of images is lower, suggesting that paraphrasing is a valuable learning signal. While the general trend in the field is to create ever larger datasets to train models on, our findings indicate other characteristics of the database can be just as important.
  • Merkx, D., & Scharenborg, O. (2018). Articulatory feature classification using convolutional neural networks. In Proceedings of Interspeech 2018 (pp. 2142-2146). doi:10.21437/Interspeech.2018-2275.

    Abstract

    The ultimate goal of our research is to improve an existing speech-based computational model of human speech recognition on the task of simulating the role of fine-grained phonetic information in human speech processing. As part of this work we are investigating articulatory feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. Articulatory feature (AF) modelling of speech has received a considerable amount of attention in automatic speech recognition research. Different approaches have been used to build AF classifiers, most notably multi-layer perceptrons. Recently, deep neural networks have been applied to the task of AF classification. This paper aims to improve AF classification by investigating two different approaches: 1) investigating the usefulness of a deep convolutional neural network (CNN) for AF classification; 2) integrating the Mel filtering operation into the CNN architecture. The results showed a remarkable improvement in classification accuracy of the CNNs over state-of-the-art AF classification results for Dutch, most notably in the minority classes. Integrating the Mel filtering operation into the CNN architecture did not further improve classification performance.
  • Micklos, A., Macuch Silva, V., & Fay, N. (2018). The prevalence of repair in studies of language evolution. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 316-318). Toruń, Poland: NCU Press. doi:10.12775/3991-1.075.
  • Mitterer, H., Brouwer, S., & Huettig, F. (2018). How important is prediction for understanding spontaneous speech? In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 26-40). Chennai: Macmillan Publishers India.
  • Moers, C., Janse, E., & Meyer, A. S. (2015). Probabilistic reduction in reading aloud: A comparison of younger and older adults. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Frequent and predictable words are generally pronounced with less effort and are therefore acoustically more reduced than less frequent or unpredictable words. Local predictability can be operationalised by Transitional Probability (TP), which indicates how likely a word is to occur given its immediate context. We investigated whether and how probabilistic reduction effects on word durations change with adult age when reading aloud content words embedded in sentences. The results showed equally large frequency effects on verb and noun durations for both younger (Mage = 20 years) and older (Mage = 68 years) adults. Backward TP also affected word duration for younger and older adults alike. Forward TP, however, had no significant effect on word duration in either age group. Our results resemble earlier findings of more robust Backward TP effects compared to Forward TP effects. Furthermore, unlike the often-reported decline in predictive processing with aging, probabilistic reduction effects remain stable across adulthood.
  • Moisik, S. R., & Dediu, D. (2015). Anatomical biasing and clicks: Preliminary biomechanical modelling. In H. Little (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015) Satellite Event: The Evolution of Phonetic Capabilities: Causes, constraints, consequences (pp. 8-13). Glasgow: ICPhS.

    Abstract

    It has been observed by several researchers that the Khoisan palate tends to lack a prominent alveolar ridge. A preliminary biomechanical model of click production was created to examine if these sounds might be subject to an anatomical bias associated with alveolar ridge size. Results suggest the bias is plausible, taking the form of decreased articulatory effort and improved volume change characteristics; however, further modelling and experimental research is required to solidify the claim.
  • Monaghan, P., Brand, J., Frost, R. L. A., & Taylor, G. (2017). Multiple variable cues in the environment promote accurate and robust word learning. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 817-822). Retrieved from https://mindmodeling.org/cogsci2017/papers/0164/index.html.

    Abstract

    Learning how words refer to aspects of the environment is a complex task, but one that is supported by numerous cues within the environment which constrain the possibilities for matching words to their intended referents. In this paper we tested the predictions of a computational model of multiple cue integration for word learning, that predicted variation in the presence of cues provides an optimal learning situation. In a cross-situational learning task with adult participants, we varied the reliability of presence of distributional, prosodic, and gestural cues. We found that the best learning occurred when cues were often present, but not always. The effect of variability increased the salience of individual cues for the learner, but resulted in robust learning that was not vulnerable to individual cues’ presence or absence. Thus, variability of multiple cues in the language-learning environment provided the optimal circumstances for word learning.
  • Morano, L., Ernestus, M., & Ten Bosch, L. (2015). Schwa reduction in low-proficiency L2 speakers: Learning and generalization. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This paper investigated the learnability and generalizability of French schwa alternation by Dutch low-proficiency second language learners. We trained 40 participants on 24 new schwa words by exposing them equally often to the reduced and full forms of these words. We then assessed participants' accuracy and reaction times to these newly learnt words as well as 24 previously encountered schwa words with an auditory lexical decision task. Our results show learning of the new words in both forms. This suggests that lack of exposure is probably the main cause of learners' difficulties with reduced forms. Nevertheless, the full forms were slightly better recognized than the reduced ones, possibly due to phonetic and phonological properties of the reduced forms. We also observed no generalization to previously encountered words, suggesting that our participants stored both of the learnt word forms and did not create a rule that applies to all schwa words.
  • Mudd, K., Lutzenberger, H., De Vos, C., & De Boer, B. (2021). Social structure and lexical uniformity: A case study of gender differences in the Kata Kolok community. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2692-2698). Vienna: Cognitive Science Society.

    Abstract

    Language emergence is characterized by a high degree of lexical variation. It has been suggested that the speed at which lexical conventionalization occurs depends partially on social structure. In large communities, individuals receive input from many sources, creating a pressure for lexical convergence. In small, insular communities, individuals can remember idiolects and share common ground with interlocutors, allowing these communities to retain a high degree of lexical variation. We look at lexical variation in Kata Kolok, a sign language which emerged six generations ago in a Balinese village, where women tend to have more tightly-knit social networks than men. We test if there are differing degrees of lexical uniformity between women and men by reanalyzing a picture description task in Kata Kolok. We find that women’s productions exhibit less lexical uniformity than men’s. One possible explanation of this finding is that women’s more tightly-knit social networks allow for remembering idiolects, alleviating the pressure for lexical convergence, but social network data from the Kata Kolok community is needed to support this explanation.
  • Mulder, K., Ten Bosch, L., & Boves, L. (2018). Analyzing EEG Signals in Auditory Speech Comprehension Using Temporal Response Functions and Generalized Additive Models. In Proceedings of Interspeech 2018 (pp. 1452-1456). doi:10.21437/Interspeech.2018-1676.

    Abstract

    Analyzing EEG signals recorded while participants are listening to continuous speech with the purpose of testing linguistic hypotheses is complicated by the fact that the signals simultaneously reflect exogenous acoustic excitation and endogenous linguistic processing. This makes it difficult to trace subtle differences that occur in mid-sentence position. We apply an analysis based on multivariate temporal response functions to uncover subtle mid-sentence effects. This approach is based on a per-stimulus estimate of the response of the neural system to speech input. Analyzing EEG signals predicted on the basis of the response functions might then bring to light condition-specific differences in the filtered signals. We validate this approach by means of an analysis of EEG signals recorded with isolated word stimuli. Then, we apply the validated method to the analysis of the responses to the same words in the middle of meaningful sentences.
  • Mulder, K., Brekelmans, G., & Ernestus, M. (2015). The processing of schwa reduced cognates and noncognates in non-native listeners of English. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    In speech, words are often reduced rather than fully pronounced (e.g., /ˈsʌmri/ for /ˈsʌməri/, summary). Non-native listeners may have problems in processing these reduced forms, because they have encountered them less often. This paper addresses the question whether this also holds for highly proficient non-natives and for words with similar forms and meanings in the non-natives' mother tongue (i.e., cognates). In an English auditory lexical decision task, natives and highly proficient Dutch non-natives of English listened to cognates and non-cognates that were presented in full or without their post-stress schwa. The data show that highly proficient learners are affected by reduction as much as native speakers. Nevertheless, the two listener groups appear to process reduced forms differently, because non-natives produce more errors on reduced cognates than on non-cognates. While listening to reduced forms, non-natives appear to be hindered by the co-activated lexical representations of cognate forms in their native language.
  • Muysken, P., Hammarström, H., Birchall, J., van Gijn, R., Krasnoukhova, O., & Müller, N. (2015). Linguistic Areas, bottom up or top down? The case of the Guaporé-Mamoré region. In B. Comrie, & L. Golluscio (Eds.), Language Contact and Documentation / Contacto lingüístico y documentación (pp. 205-238). Berlin: De Gruyter.
  • Neger, T. M., Rietveld, T., & Janse, E. (2015). Adult age effects in auditory statistical learning. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Statistical learning plays a key role in language processing, e.g., for speech segmentation. Older adults have been reported to show less statistical learning on the basis of visual input than younger adults. Given age-related changes in perception and cognition, we investigated whether statistical learning is also impaired in the auditory modality in older compared to younger adults and whether individual learning ability is associated with measures of perceptual (i.e., hearing sensitivity) and cognitive functioning in both age groups. Thirty younger and thirty older adults performed an auditory artificial-grammar-learning task to assess their statistical learning ability. In younger adults, perceptual effort came at the cost of processing resources required for learning. Inhibitory control (as indexed by Stroop color-naming performance) did not predict auditory learning. Overall, younger and older adults showed the same amount of auditory learning, indicating that statistical learning ability is preserved over the adult life span.
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2015). Exemplar effects arise in a lexical decision task, but only under adverse listening conditions. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This paper studies the influence of adverse listening conditions on exemplar effects in priming experiments that do not instruct participants to use their episodic memories. We conducted two lexical decision experiments, in which a prime and a target represented the same word type and could be spoken by the same or a different speaker. In Experiment 1, participants listened to clear speech, and showed no exemplar effects: they recognised repetitions by the same speaker as quickly as different speaker repetitions. In Experiment 2, the stimuli contained noise, and exemplar effects did arise. Importantly, Experiment 1 elicited longer average RTs than Experiment 2, a result that contradicts the time-course hypothesis, according to which exemplars only play a role when processing is slow. Instead, our findings support the hypothesis that exemplar effects arise under adverse listening conditions, when participants are stimulated to use their episodic memories in addition to their mental lexicons.
  • Noordman, L. G. M., Vonk, W., Cozijn, R., & Frank, S. (2015). Causal inferences and world knowledge. In E. J. O'Brien, A. E. Cook, & R. F. Lorch (Eds.), Inferences during reading (pp. 260-289). Cambridge, UK: Cambridge University Press.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Noordman, L. G. M., & Vonk, W. (2015). Inferences in Discourse, Psychology of. In J. D. Wright (Ed.), International Encyclopedia of the Social & Behavioral Sciences (2nd ed.) Vol. 12 (pp. 37-44). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.57012-3.

    Abstract

    An inference is defined as the information that is not expressed explicitly by the text but is derived on the basis of the understander's knowledge and is encoded in the mental representation of the text. Inferencing is considered as a central component in discourse understanding. Experimental methods to detect inferences, established findings, and some developments are reviewed. Attention is paid to the relation between inference processes and the brain.
  • Norcliffe, E. (2018). Egophoricity and evidentiality in Guambiano (Nam Trik). In S. Floyd, E. Norcliffe, & L. San Roque (Eds.), Egophoricity (pp. 305-345). Amsterdam: Benjamins.

    Abstract

    Egophoric verbal marking is a typological feature common to Barbacoan languages, but otherwise unknown in the Andean sphere. The verbal systems of three out of the four living Barbacoan languages, Cha’palaa, Tsafiki and Awa Pit, have previously been shown to express egophoric contrasts. The status of Guambiano has, however, remained uncertain. In this chapter, I show that there are in fact two layers of egophoric or egophoric-like marking visible in Guambiano’s grammar. Guambiano patterns with certain other (non-Barbacoan) languages in having ego-categories which function within a broader evidential system. It is additionally possible to detect what is possibly a more archaic layer of egophoric marking in Guambiano’s verbal system. This marking may be inherited from a common Barbacoan system, thus pointing to a potential genealogical basis for the egophoric patterning common to these languages. The multiple formal expressions of egophoricity apparent both within and across the four languages reveal how egophoric contrasts are susceptible to structural renewal, suggesting a pan-Barbacoan preoccupation with the linguistic encoding of self-knowledge.
  • Norcliffe, E., & Konopka, A. E. (2015). Vision and language in cross-linguistic research on sentence production. In R. K. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and vision in language processing (pp. 77-96). New York: Springer. doi:10.1007/978-81-322-2443-3_5.

    Abstract

    To what extent are the planning processes involved in producing sentences fine-tuned to grammatical properties of specific languages? In this chapter we survey the small body of cross-linguistic research that bears on this question, focusing in particular on recent evidence from eye-tracking studies. Because eye-tracking methods provide a very fine-grained temporal measure of how conceptual and linguistic planning unfold in real time, they serve as an important complement to standard psycholinguistic methods. Moreover, the advent of portable eye-trackers in recent years has, for the first time, allowed eye-tracking techniques to be used with language populations that are located far away from university laboratories. This has created the exciting opportunity to extend the typological base of vision-based psycholinguistic research and address key questions in language production with new language comparisons.
  • O'Meara, C., & Majid, A. (2017). El léxico olfativo en la lengua seri. In A. L. M. D. Ruiz, & A. Z. Pérez (Eds.), La Dimensión Sensorial de la Cultura: Diez contribuciones al estudio de los sentidos en México. (pp. 101-118). Mexico City: Universidad Autónoma Metropolitana.
  • Ortega, G., Schiefner, A., & Ozyurek, A. (2017). Speakers’ gestures predict the meaning and perception of iconicity in signs. In G. Gunzelmann, A. Howes, & T. Tenbrink (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 889-894). Austin, TX: Cognitive Science Society.

    Abstract

    Sign languages stand out in that there is a high prevalence of conventionalised linguistic forms that map directly to their referent (i.e., iconic). Hearing adults show low performance when asked to guess the meaning of iconic signs, suggesting that their iconic features are largely inaccessible to them. However, it has not been investigated whether speakers’ gestures, which also share the property of iconicity, may assist non-signers in guessing the meaning of signs. Results from a pantomime generation task (Study 1) show that speakers’ gestures exhibit a high degree of systematicity, and share different degrees of form overlap with signs (full, partial, and no overlap). Study 2 shows that signs with full and partial overlap are more accurately guessed and are assigned higher iconicity ratings than signs with no overlap. Deaf and hearing adults converge in their iconic depictions for some concepts due to the shared conceptual knowledge and manual-visual modality.
  • Otake, T., Davis, S. M., & Cutler, A. (1995). Listeners’ representations of within-word structure: A cross-linguistic and cross-dialectal investigation. In J. Pardo (Ed.), Proceedings of EUROSPEECH 95: Vol. 3 (pp. 1703-1706). Madrid: European Speech Communication Association.

    Abstract

    Japanese, British English and American English listeners were presented with spoken words in their native language, and asked to mark on a written transcript of each word the first natural division point in the word. The results showed clear and strong patterns of consensus, indicating that listeners have available to them conscious representations of within-word structure. Orthography did not play a strongly deciding role in the results. The patterns of response were at variance with results from on-line studies of speech segmentation, suggesting that the present task taps not those representations used in on-line listening, but levels of representation which may involve much richer knowledge of word-internal structure.
  • Ozyurek, A. (2018). Cross-linguistic variation in children’s multimodal utterances. In M. Hickmann, E. Veneziano, & H. Jisa (Eds.), Sources of variation in first language acquisition: Languages, contexts, and learners (pp. 123-138). Amsterdam: Benjamins.

    Abstract

    Our ability to use language is multimodal and requires tight coordination between what is expressed in speech and in gesture, such as pointing or iconic gestures that convey semantic, syntactic and pragmatic information related to speakers’ messages. Interestingly, what is expressed in gesture and how it is coordinated with speech differs in speakers of different languages. This paper discusses recent findings on the development of children’s multimodal expressions taking cross-linguistic variation into account. Although some aspects of speech-gesture development show language-specificity from an early age, it might still take children until nine years of age to exhibit fully adult patterns of cross-linguistic variation. These findings reveal insights about how children coordinate different levels of representations given that their development is constrained by patterns that are specific to their languages.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A. (2017). Function and processing of gesture in the context of language. In R. B. Church, M. W. Alibali, & S. D. Kelly (Eds.), Why gesture? How the hands function in speaking, thinking and communicating (pp. 39-58). Amsterdam: John Benjamins Publishing. doi:10.1075/gs.7.03ozy.

    Abstract

    Most research focuses on the function of gesture independent of its link to the speech it accompanies and the coexpressive functions it has together with speech. This chapter instead approaches gesture in terms of its communicative function in relation to speech, and demonstrates how it is shaped by the linguistic encoding of a speaker’s message. Drawing on crosslinguistic research on iconic/pointing gesture production with adults and children as well as bilinguals, it shows that the specific language speakers use modulates the rate and the shape of the iconic gestures produced for the same events. The findings challenge claims that aim to understand gesture’s function for “thinking only” in adults and during development.
  • Ozyurek, A. (2018). Role of gesture in language processing: Toward a unified account for production and comprehension. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), Oxford Handbook of Psycholinguistics (2nd ed., pp. 592-607). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198786825.013.25.

    Abstract

    Use of language in face-to-face context is multimodal. Production and perception of speech take place in the context of visual articulators such as lips, face, or hand gestures which convey relevant information to what is expressed in speech at different levels of language. While lips convey information at the phonological level, gestures contribute to semantic, pragmatic, and syntactic information, as well as to discourse cohesion. This chapter overviews recent findings showing that speech and gesture (e.g. a drinking gesture as someone says, “Would you like a drink?”) interact during production and comprehension of language at the behavioral, cognitive, and neural levels. Implications of these findings for current psycholinguistic theories and how they can be expanded to consider the multimodal context of language processing are discussed.
  • Pawley, A., & Hammarström, H. (2018). The Trans New Guinea family. In B. Palmer (Ed.), Papuan Languages and Linguistics (pp. 21-196). Berlin: De Gruyter Mouton.
  • Pederson, E. (1995). Questionnaire on event realization. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 54-60). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004359.

    Abstract

    "Event realisation" refers to the normal final state of the affected entity of an activity described by a verb. For example, the sentence John killed the mosquito entails that the mosquito is afterwards dead – this is the full realisation of a killing event. By contrast, a sentence such as John hit the mosquito does not entail the mosquito’s death (even though we might assume this to be a likely result). In using a certain verb, which features of event realisation are entailed and which are just likely? This questionnaire supports cross-linguistic exploration of event realisation for a range of event types.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The role of left inferior frontal gyrus in the integration of pointing gestures and speech. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes: Université de Nantes.

    Abstract

    Comprehension of pointing gestures is fundamental to human communication. However, the neural mechanisms that subserve the integration of pointing gestures and speech in visual contexts in comprehension are unclear. Here we present the results of an fMRI study in which participants watched images of an actor pointing at an object while they listened to her referential speech. The use of a mismatch paradigm revealed that the semantic unification of pointing gesture and speech in a triadic context recruits left inferior frontal gyrus. Complementing previous findings, this suggests that left inferior frontal gyrus semantically integrates information across modalities and semiotic domains.
  • Perlman, M., Paul, J., & Lupyan, G. (2015). Congenitally deaf children generate iconic vocalizations to communicate magnitude. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. R. Maglio (Eds.), Proceedings of the 37th Annual Cognitive Science Society Meeting (CogSci 2015) (pp. 315-320). Austin, TX: Cognitive Science Society.

    Abstract

    From an early age, people exhibit strong links between certain visual (e.g. size) and acoustic (e.g. duration) dimensions. Do people instinctively extend these crossmodal correspondences to vocalization? We examine the ability of congenitally deaf Chinese children and young adults (age M = 12.4 years, SD = 3.7 years) to generate iconic vocalizations to distinguish items with contrasting magnitude (e.g., big vs. small ball). Both deaf and hearing (M = 10.1 years, SD = 0.83 years) participants produced longer, louder vocalizations for greater magnitude items. However, only hearing participants used pitch—higher pitch for greater magnitude – which counters the hypothesized, innate size “frequency code”, but fits with Mandarin language and culture. Thus our results show that the translation of visible magnitude into the duration and intensity of vocalization transcends auditory experience, whereas the use of pitch appears more malleable to linguistic and cultural influence.
  • Perlman, M., Fusaroli, R., Fein, D., & Naigles, L. (2017). The use of iconic words in early child-parent interactions. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 913-918). Austin, TX: Cognitive Science Society.

    Abstract

    This paper examines the use of iconic words in early conversations between children and caregivers. The longitudinal data include a span of six observations of 35 children-parent dyads in the same semi-structured activity. Our findings show that children’s speech initially has a high proportion of iconic words, and over time, these words become diluted by an increase of arbitrary words. Parents’ speech is also initially high in iconic words, with a decrease in the proportion of iconic words over time – in this case driven by the use of fewer iconic words. The level and development of iconicity are related to individual differences in the children’s cognitive skills. Our findings fit with the hypothesis that iconicity facilitates early word learning and may play an important role in learning to produce new words.
  • Perniss, P. M., Ozyurek, A., & Morgan, G. (Eds.). (2015). The influence of the visual modality on language structure and conventionalization: Insights from sign language and gesture [Special Issue]. Topics in Cognitive Science, 7(1). doi:10.1111/tops.12113.
  • Perry, L., Perlman, M., & Lupyan, G. (2015). Iconicity in English vocabulary and its relation to toddlers’ word learning. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. R. Maglio (Eds.), Proceedings of the 37th Annual Cognitive Science Society Meeting (CogSci 2015) (pp. 315-320). Austin, TX: Cognitive Science Society.

    Abstract

    Scholars have documented substantial classes of iconic vocabulary in many non-Indo-European languages. In comparison, Indo-European languages like English are assumed to be arbitrary outside of a small number of onomatopoeic words. In three experiments, we asked English speakers to rate the iconicity of words from the MacArthur-Bates Communicative Developmental Inventory. We found English—contrary to common belief—exhibits iconicity that correlates with age of acquisition and differs across lexical classes. Words judged as most iconic are learned earlier, in accord with findings that iconic words are easier to learn. We also find that adjectives and verbs are more iconic than nouns, supporting the idea that iconicity provides an extra cue in learning more difficult abstract meanings. Our results provide new evidence for a relationship between iconicity and word learning and suggest iconicity may be a more pervasive property of spoken languages than previously thought.
  • Piepers, J., & Redl, T. (2018). Gender-mismatching pronouns in context: The interpretation of possessive pronouns in Dutch and Limburgian. In B. Le Bruyn, & J. Berns (Eds.), Linguistics in the Netherlands 2018 (pp. 97-110). Amsterdam: Benjamins.

    Abstract

    Gender-(mis)matching pronouns have been studied extensively in experiments. However, a phenomenon common to various languages has thus far been overlooked: the systematic use of non-feminine pronouns when referring to female individuals. The present study is the first to provide experimental insights into the interpretation of such a pronoun: Limburgian zien ‘his/its’ and Dutch zijn ‘his/its’ are grammatically ambiguous between masculine and neuter, but while Limburgian zien can refer to women, the Dutch equivalent zijn cannot. Employing an acceptability judgment task, we presented speakers of Limburgian (N = 51) with recordings of sentences in Limburgian featuring zien, and speakers of Dutch (N = 52) with Dutch translations of these sentences featuring zijn. All sentences featured a potential male or female antecedent embedded in a stereotypically male or female context. We found that ratings were higher for sentences in which the pronoun could refer back to the antecedent. For Limburgians, this extended to sentences mentioning female individuals. Context further modulated sentence appreciation. Possible mechanisms regarding the interpretation of zien as coreferential with a female individual are discussed.
  • Popov, V., Ostarek, M., & Tenison, C. (2017). Inferential Pitfalls in Decoding Neural Representations. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 961-966). Austin, TX: Cognitive Science Society.

    Abstract

    A key challenge for cognitive neuroscience is to decipher the representational schemes of the brain. A recent class of decoding algorithms for fMRI data, stimulus-feature-based encoding models, is becoming increasingly popular for inferring the dimensions of neural representational spaces from stimulus-feature spaces. We argue that such inferences are not always valid, because decoding can occur even if the neural representational space and the stimulus-feature space use different representational schemes. This can happen when there is a systematic mapping between them. In a simulation, we successfully decoded the binary representation of numbers from their decimal features. Since binary and decimal number systems use different representations, we cannot conclude that the binary representation encodes decimal features. The same argument applies to the decoding of neural patterns from stimulus-feature spaces and we urge caution in inferring the nature of the neural code from such methods. We discuss ways to overcome these inferential limitations.
  • Pouw, W., Wit, J., Bögels, S., Rasenberg, M., Milivojevic, B., & Ozyurek, A. (2021). Semantically related gestures move alike: Towards a distributional semantics of gesture kinematics. In V. G. Duffy (Ed.), Digital human modeling and applications in health, safety, ergonomics and risk management. human body, motion and behavior:12th International Conference, DHM 2021, Held as Part of the 23rd HCI International Conference, HCII 2021 (pp. 269-287). Berlin: Springer. doi:10.1007/978-3-030-77817-0_20.
  • Pouw, W., Aslanidou, A., Kamermans, K. L., & Paas, F. (2017). Is ambiguity detection in haptic imagery possible? Evidence for Enactive imaginings. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 2925-2930). Austin, TX: Cognitive Science Society.

    Abstract

    A classic discussion about visual imagery is whether it affords reinterpretation, like discovering two interpretations in the duck/rabbit illustration. Recent findings converge on reinterpretation being possible in visual imagery, suggesting functional equivalence with pictorial representations. However, it is unclear whether such reinterpretations are necessarily a visual-pictorial achievement. To assess this, 68 participants were briefly presented with 2-D ambiguous figures. One figure was presented visually, the other via manual touch alone. Afterwards, participants mentally rotated the memorized figures so as to discover a novel interpretation. A portion (20.6%) of the participants detected a novel interpretation in visual imagery, replicating previous research. Strikingly, 23.6% of participants were able to reinterpret figures they had only felt. That reinterpretation truly involved haptic processes was further supported by the fact that some participants performed co-thought gestures on an imagined figure during retrieval. These results are promising for further development of an Enactivist approach to imagination.