Publications

  • Janssen, R., Dediu, D., & Moisik, S. R. (2016). Simple agents are able to replicate speech sounds using 3d vocal tract model. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/97.html.

    Abstract

    Many factors have been proposed to explain why groups of people use different speech sounds in their language. These range from cultural and cognitive to environmental (e.g., Everett et al., 2015) and anatomical factors (e.g., vocal tract (VT) morphology). How could such anatomical properties have led to the similarities and differences in speech sound distributions between human languages?

    It is known that hard palate profile variation can induce different articulatory strategies in speakers (e.g., Brunner et al., 2009). That is, different hard palate profiles might induce a kind of bias on speech sound production, easing some types of sounds while impeding others. In a population of speakers in which a proportion of individuals share certain anatomical properties, even subtle VT biases might become expressed at the population level (through, e.g., bias amplification; Kirby et al., 2007). However, before we look into population-level effects, we should first look at within-individual anatomical factors. For that, we have developed a computer-simulated analogue for a human speaker: an agent. Our agent is designed to replicate speech sounds using a production and cognition module in a computationally tractable manner.

    Previous agent models have often used more abstract (e.g., symbolic) signals (e.g., Kirby et al., 2007). We have equipped our agent with a three-dimensional model of the VT (the production module, based on Birkholz, 2005) to which we made numerous adjustments. Specifically, we used a 4th-order Bezier curve that is able to capture hard palate variation on the mid-sagittal plane (XXX, 2015). Using an evolutionary algorithm, we were able to fit the model to human hard palate MRI tracings, yielding high-accuracy fits with as few as two parameters. Finally, we show that the samples map to well-dispersed locations in the parameter space, demonstrating that the model cannot generate unrealistic profiles. We can thus use this procedure to import palate measurements into our agent’s production module to investigate the effects on acoustics. We can also exaggerate existing biases or introduce novel ones.
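    As a rough, hypothetical sketch of the palate model described above (not the authors' implementation), the following code evaluates a 4th-order (quartic) Bezier curve from five 2D control points; the control-point values are invented to mimic a mid-sagittal palate-like arch, and the evolutionary fitting to MRI tracings is not shown.

```python
import numpy as np

def bezier4(control_points, n=100):
    """Evaluate a 4th-order (quartic) Bezier curve at n points,
    using the Bernstein polynomial form."""
    P = np.asarray(control_points, dtype=float)     # shape (5, 2)
    t = np.linspace(0.0, 1.0, n)[:, None]           # curve parameter
    k = np.arange(5)
    coeff = np.array([1, 4, 6, 4, 1], dtype=float)  # binomial C(4, k)
    basis = coeff * t**k * (1 - t)**(4 - k)         # shape (n, 5)
    return basis @ P                                # shape (n, 2)

# Hypothetical palate-like profile: endpoints pinned, the interior control
# points shaping the arch. With the endpoints fixed, the curve's shape is
# governed by only a handful of free parameters.
profile = bezier4([(0.0, 0.0), (0.25, 0.8), (0.5, 1.0), (0.75, 0.8), (1.0, 0.0)])
```

    A quartic Bezier curve always starts at its first control point and ends at its last, which makes it convenient to pin the anatomical endpoints and fit only the interior shape parameters.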

    Our agent is able to control the VT model using the cognition module.

    Previous research has focused on detailed neurocomputation (e.g., Kröger et al., 2014) that highlights, for example, neurobiological principles or speech recognition performance. However, the brain is not the focus of our current study. Furthermore, present-day computing throughput likely does not allow for large-scale deployment of these architectures, as required by the population model we are developing. Thus, the question of whether a very simple cognition module is able to replicate sounds in a computationally tractable manner, and even generalize over novel stimuli, is one worthy of attention in its own right.

    Our agent’s cognition module is based on running an evolutionary algorithm on a large population of feed-forward neural networks (NNs). As such, (anatomical) bias strength can be thought of as an attractor basin area within the parameter space the agent has to explore. The NN we used consists of a triple-layered (fully connected), directed graph. The input layer (three neurons) receives the formant frequencies of a target sound. The output layer (12 neurons) projects to the articulators in the production module. A hidden layer (seven neurons) enables the network to deal with nonlinear dependencies. The Euclidean distance (over the first three formants) between target and replication is used as the fitness measure. Results show that sound replication is indeed possible, with the Euclidean distance quickly approaching a close-to-zero asymptote.
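    As an illustration of the cognition module just described (emphatically a toy sketch, not the authors' code), the following evolves the weights of a 3-7-12 feed-forward network so that the Euclidean distance between target and replicated formants shrinks. The `produce` function stands in for the vocal tract model, and the population size, mutation scale, and selection scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def forward(W1, W2, formants):
    # 3 input neurons (target formants) -> 7 hidden (tanh) -> 12 articulators
    return np.tanh(W2 @ np.tanh(W1 @ formants))

def evolve(target, produce, pop_size=40, generations=30, sigma=0.1):
    """Evolve network weights; fitness = Euclidean distance between the
    target formants and the formants of the produced (replicated) sound."""
    def score(w):
        return np.linalg.norm(target - produce(forward(*w, target)))

    population = [(rng.normal(size=(7, 3)), rng.normal(size=(12, 7)))
                  for _ in range(pop_size)]
    history = []                                    # best distance per generation
    for _ in range(generations):
        population.sort(key=score)
        history.append(score(population[0]))
        elite = population[:pop_size // 4]          # keep the best quarter
        # Refill the population with mutated copies of the elite...
        population = [(p[0] + rng.normal(scale=sigma, size=(7, 3)),
                       p[1] + rng.normal(scale=sigma, size=(12, 7)))
                      for p in (elite * (pop_size // len(elite) + 1))[:pop_size]]
        # ...but carry the unmutated elite over (elitism), so the best
        # distance found so far can never get worse.
        population[:len(elite)] = elite
    return min(population, key=score), history
```

    Because the unmutated elite is carried over each generation, the best distance in `history` can only decrease or stay flat, mirroring the reported close-to-zero asymptote, at least for a toy stand-in vocal tract.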

    Statistical analysis should reveal if the agent can also: a) Generalize: Can it replicate sounds it was not exposed to during learning? b) Replicate consistently: Do different, isolated agents always converge on the same sounds? c) Deal with consolidation: Can it still learn new sounds after an extended learning phase (‘infancy’) has been terminated? Finally, a comparison with more complex models will be used to demonstrate robustness.
  • Järvikivi, J., & Pyykkönen, P. (2010). Lauseiden ymmärtäminen [Engl. Sentence comprehension]. In P. Korpilahti, O. Aaltonen, & M. Laine (Eds.), Kieli ja aivot: Kommunikaation perusteet, häiriöt ja kuntoutus (pp. 117-125). Turku: Turku yliopisto.

    Abstract

    When we listen to speech or read text, we immediately begin to construct a coherent interpretation. Unlike in reading, in speech perception the listener can rarely control the rate at which they are spoken to. Despite the very rapid input - about 4-7 syllables per second - people are able to interpret speech quite effortlessly. Research on sentence comprehension therefore investigates how this fast and usually effortless interpretation process takes place, which cognitive processes participate in real-time interpretation, and what kind of information people draw on at each stage of processing to construct a coherent interpretation. This chapter is an overview of the processes of sentence comprehension and their study. We briefly discuss processing models, the relationship between adult and child language, the interpretation of referential relations within and between sentences, and the role of the sensory environment and motor action in the process of interpreting sentences.
  • Jasmin, K., & Casasanto, D. (2010). Stereotyping: How the QWERTY keyboard shapes the mental lexicon [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 159). York: University of York.
  • Jeske, J., Kember, H., & Cutler, A. (2016). Native and non-native English speakers' use of prosody to predict sentence endings. In Proceedings of the 16th Australasian International Conference on Speech Science and Technology (SST2016).
  • Jesse, A., & Johnson, E. K. (2008). Audiovisual alignment in child-directed speech facilitates word learning. In Proceedings of the International Conference on Auditory-Visual Speech Processing (pp. 101-106). Adelaide, Aust: Causal Productions.

    Abstract

    Adult-to-child interactions are often characterized by prosodically-exaggerated speech accompanied by visually captivating co-speech gestures. In a series of adult studies, we have shown that these gestures are linked in a sophisticated manner to the prosodic structure of adults' utterances. In the current study, we use the Preferential Looking Paradigm to demonstrate that two-year-olds can use the alignment of these gestures to speech to deduce the meaning of words.
  • Jesse, A., Reinisch, E., & Nygaard, L. C. (2010). Learning of adjectival word meaning through tone of voice [Abstract]. Journal of the Acoustical Society of America, 128, 2475.

    Abstract

    Speakers express word meaning through systematic but non-canonical acoustic variation of tone of voice (ToV), i.e., variation of speaking rate, pitch, vocal effort, or loudness. Words are, for example, pronounced at a higher pitch when referring to small than to big referents. In the present study, we examined whether listeners can use ToV to learn the meaning of novel adjectives (e.g., “blicket”). During training, participants heard sentences such as “Can you find the blicket one?” spoken with ToV representing hot-cold, strong-weak, and big-small. Participants’ eye movements to two simultaneously shown objects with properties representing the relevant two endpoints (e.g., an elephant and an ant for big-small) were monitored. Assignment of novel adjectives to endpoints was counterbalanced across participants. During test, participants heard the sentences spoken with a neutral ToV, while seeing old or novel picture pairs varying along the same dimensions (e.g., a truck and a car for big-small). Participants had to click on the adjective’s referent. As evident from eye movements, participants did not infer the intended meaning during first exposure, but learned the meaning with the help of ToV during training. At test listeners applied this knowledge to old and novel items even in the absence of informative ToV.
  • Jordens, P. (1998). Defaultformen des Präteritums. Zum Erwerb der Vergangenheitsmorphologie im Niederlänidischen. In H. Wegener (Ed.), Eine zweite Sprache lernen (pp. 61-88). Tübingen, Germany: Verlag Gunter Narr.
  • Jordens, P., Matsuo, A., & Perdue, C. (2008). Comparing the acquisition of finiteness: A cross-linguistic approach. In B. Ahrenholz, U. Bredel, W. Klein, M. Rost-Roth, & R. Skiba (Eds.), Empirische Forschung und Theoriebildung: Beiträge aus Soziolinguistik, Gesprochene-Sprache- und Zweitspracherwerbsforschung: Festschrift für Norbert Dittmar (pp. 261-276). Frankfurt am Main: Lang.
  • Junge, C., Hagoort, P., Kooijman, V., & Cutler, A. (2010). Brain potentials for word segmentation at seven months predict later language development. In K. Franich, K. M. Iserman, & L. L. Keil (Eds.), Proceedings of the 34th Annual Boston University Conference on Language Development. Volume 1 (pp. 209-220). Somerville, MA: Cascadilla Press.
  • Junge, C., Cutler, A., & Hagoort, P. (2010). Ability to segment words from speech as a precursor of later language development: Insights from electrophysiological responses in the infant brain. In M. Burgess, J. Davey, C. Don, & T. McMinn (Eds.), Proceedings of 20th International Congress on Acoustics, ICA 2010. Incorporating Proceedings of the 2010 annual conference of the Australian Acoustical Society (pp. 3727-3732). Australian Acoustical Society, NSW Division.
  • Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2021). Prediction in bilingual children: The missing piece of the puzzle. In E. Kaan, & T. Grüter (Eds.), Prediction in Second Language Processing and Learning (pp. 116-137). Amsterdam: Benjamins.

    Abstract

    A wealth of studies has shown that more proficient monolingual speakers are better at predicting upcoming information during language comprehension. Similarly, prediction skills of adult second language (L2) speakers in their L2 have also been argued to be modulated by their L2 proficiency. How exactly language proficiency and prediction are linked, however, is yet to be systematically investigated. One group of language users which has the potential to provide invaluable insights into this link is bilingual children. In this paper, we compare bilingual children’s prediction skills with those of monolingual children and adult L2 speakers, and show how investigating bilingual children’s prediction skills may contribute to our understanding of how predictive processing works.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Ozyurek, A. (2021). Spatial language use predicts spatial memory of children: Evidence from sign, speech, and speech-plus-gesture. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 672-678). Vienna: Cognitive Science Society.

    Abstract

    There is a strong relation between children’s exposure to spatial terms and their later memory accuracy. In the current study, we tested whether the production of spatial terms by children themselves predicts memory accuracy and whether and how the language modality of these encodings modulates memory accuracy differently. Hearing child speakers of Turkish and deaf child signers of Turkish Sign Language described pictures of objects in various spatial relations to each other and were later tested for their memory accuracy of these pictures in a surprise memory task. We found that having described the spatial relation between the objects predicted better memory accuracy. However, the modality of these descriptions in sign, speech, or speech-plus-gesture did not reveal differences in memory accuracy. We discuss the implications of these findings for the relation between spatial language, memory, and the modality of encoding.
  • Kember, H., Choi, J., & Cutler, A. (2016). Processing advantages for focused words in Korean. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 702-705).

    Abstract

    In Korean, focus is expressed in accentual phrasing. To ascertain whether words focused in this manner enjoy a processing advantage analogous to that conferred by focus as expressed in, e.g., English and Dutch, we devised sentences with target words in one of four conditions: prosodic focus, syntactic focus, prosodic + syntactic focus, and no focus as a control. Thirty-two native speakers of Korean listened to blocks of 10 sentences, then were presented visually with words and asked whether or not they had heard them. Overall, words with focus were recognised significantly faster and more accurately than unfocused words. In addition, words with syntactic focus or syntactic + prosodic focus were recognised faster than words with prosodic focus alone. As for other languages, Korean focus confers a processing advantage on the words carrying it. While prosodic focus does provide an advantage, however, syntactic focus appears to provide the greater beneficial effect for recognition memory.
  • Kempen, G., & Harbusch, K. (1998). A 'tree adjoining' grammar without adjoining: The case of scrambling in German. In Fourth International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+4).
  • Kempen, G., & Harbusch, K. (2008). Comparing linguistic judgments and corpus frequencies as windows on grammatical competence: A study of argument linearization in German clauses. In A. Steube (Ed.), The discourse potential of underspecified structures (pp. 179-192). Berlin: Walter de Gruyter.

    Abstract

    We present an overview of several corpus studies we carried out into the frequencies of argument NP orderings in the midfield of subordinate and main clauses of German. Comparing the corpus frequencies with grammaticality ratings published by Keller (2000), we observe a “grammaticality–frequency gap”: Quite a few argument orderings with zero corpus frequency are nevertheless assigned medium-range grammaticality ratings. We propose an explanation in terms of a two-factor theory. First, we hypothesize that the grammatical induction component needs a sufficient number of exposures to a syntactic pattern to incorporate it into its repertoire of more or less stable rules of grammar. Moderately to highly frequent argument NP orderings are likely to have attained this status, but not their zero-frequency counterparts. This is why the latter argument sequences cannot be produced by the grammatical encoder and are absent from the corpora. Second, we assume that an extraneous (nonlinguistic) judgment process biases the ratings of moderately grammatical linear order patterns: Confronted with such structures, the informants produce their own “ideal delivery” variant of the to-be-rated target sentence and evaluate the similarity between the two versions. A high similarity score yielded by this judgment then exerts a positive bias on the resulting rating, which should not be mistaken for an authentic grammaticality rating. We conclude that, at least in the linearization domain studied here, the goal of gaining a clear view of the internal grammar of language users is best served by a combined strategy in which grammar rules are founded on structures that elicit moderate to high grammaticality ratings and attain at least moderate usage frequencies.
  • Kempen, G. (1998). Sentence parsing. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 213-228). Berlin: Springer.
  • Kemps-Snijders, M., Klassmann, A., Zinn, C., Berck, P., Russel, A., & Wittenburg, P. (2008). Exploring and enriching a language resource archive via the web. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    The “download first, then process” paradigm is still the predominant working method among the research community. The web-based paradigm, however, offers many advantages from a tool development and data management perspective, as it allows a quick adaptation to changing research environments. Moreover, new ways of combining tools and data are increasingly becoming available and will eventually enable a true web-based workflow approach, thus challenging the “download first, then process” paradigm. The necessary infrastructure for managing, exploring and enriching language resources via the Web will need to be delivered by projects like CLARIN and DARIAH.
  • Kemps-Snijders, M., Zinn, C., Ringersma, J., & Windhouwer, M. (2008). Ensuring semantic interoperability on lexical resources. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    In this paper, we describe a unifying approach to tackle data heterogeneity issues for lexica and related resources. We present LEXUS, our software that implements the Lexical Markup Framework (LMF) to uniformly describe and manage lexica of different structures. LEXUS also makes use of a central Data Category Registry (DCR) to address terminological issues with regard to linguistic concepts as well as the handling of working and object languages. Finally, we report on ViCoS, a LEXUS extension, providing support for the definition of arbitrary semantic relations between lexical entries or parts thereof.
  • Kemps-Snijders, M., Windhouwer, M., Wittenburg, P., & Wright, S. E. (2008). ISOcat: Corralling data categories in the wild. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    To achieve true interoperability for valuable linguistic resources, different levels of variation need to be addressed. ISO Technical Committee 37, Terminology and other language and content resources, is developing a Data Category Registry. This registry will provide a reusable set of data categories. A new implementation of the registry, dubbed ISOcat, is currently under construction. This paper briefly describes the new data model for data categories that will be introduced in this implementation. It then sketches the standardization process. Completed data categories can be reused by the community, either by making a selection of data categories using the ISOcat web interface, or through other tools which interact with the ISOcat system using one of its various Application Programming Interfaces. Linguistic resources that use data categories from the registry should include persistent references, e.g. in the metadata or schemata of the resource, which point back to their origin. These data category references can then be used to determine if two or more resources share common semantics, thus providing a level of interoperability close to the source data and a promising layer for semantic alignment on higher levels.
  • Kemps-Snijders, M., Koller, T., Sloetjes, H., & Verweij, H. (2010). LAT bridge: Bridging tools for annotation and exploration of rich linguistic data. In N. Calzolari, B. Maegaard, J. Mariani, J. Odjik, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 2648-2651). European Language Resources Association (ELRA).

    Abstract

    We present a software module, the LAT Bridge, which enables bidirectional communication between the annotation and exploration tools developed at the Max Planck Institute for Psycholinguistics as part of our Language Archiving Technology (LAT) tool suite. These existing annotation and exploration tools enable the annotation, enrichment, exploration and archive management of linguistic resources. The user community has expressed the desire to use different combinations of LAT tools in conjunction with each other. The LAT Bridge is designed to cater for a number of basic data interaction scenarios between the LAT annotation and exploration tools. These interaction scenarios (e.g. bootstrapping a wordlist, searching for annotation examples or lexical entries) have been identified in collaboration with researchers at our institute. We had to take into account that the LAT tools for annotation and exploration represent a heterogeneous application scenario with desktop-installed and web-based tools. Additionally, the LAT Bridge has to work in situations where the Internet is not available or only in an unreliable manner (i.e. with a slow connection or with frequent interruptions). As a result, the LAT Bridge’s architecture supports both online and offline communication between the LAT annotation and exploration tools.
  • Khetarpal, N., Majid, A., Malt, B. C., Sloman, S., & Regier, T. (2010). Similarity judgments reflect both language and cross-language tendencies: Evidence from two semantic domains. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 358-363). Austin, TX: Cognitive Science Society.

    Abstract

    Many theories hold that semantic variation in the world’s languages can be explained in terms of a universal conceptual space that is partitioned differently by different languages. Recent work has supported this view in the semantic domain of containers (Malt et al., 1999), and assumed it in the domain of spatial relations (Khetarpal et al., 2009), based in both cases on similarity judgments derived from pile-sorting of stimuli. Here, we reanalyze data from these two studies and find a more complex picture than these earlier studies suggested. In both cases we find that sorting is similar across speakers of different languages (in line with the earlier studies), but nonetheless reflects the sorter’s native language (in contrast with the earlier studies). We conclude that there are cross-culturally shared conceptual tendencies that can be revealed by pile-sorting, but that these tendencies may be modulated to some extent by language. We discuss the implications of these findings for accounts of semantic variation.
  • Kita, S., Ozyurek, A., Allen, S., & Ishizuka, T. (2010). Early links between iconic gestures and sound symbolic words: Evidence for multimodal protolanguage. In A. D. Smith, M. Schouwstra, B. de Boer, & K. Smith (Eds.), Proceedings of the 8th International conference on the Evolution of Language (EVOLANG 8) (pp. 429-430). Singapore: World Scientific.
  • Kita, S., van Gijn, I., & van der Hulst, H. (1998). Movement phases in signs and co-speech gestures, and their transcription by human coders. In Gesture and Sign-Language in Human-Computer Interaction (Lecture Notes in Artificial Intelligence - LNCS Subseries, Vol. 1371) (pp. 23-35). Berlin, Germany: Springer-Verlag.

    Abstract

    The previous literature has suggested that the hand movement in co-speech gestures and signs consists of a series of phases with qualitatively different dynamic characteristics. In this paper, we propose a syntagmatic rule system for movement phases that applies to both co-speech gestures and signs. Descriptive criteria for the rule system were developed for the analysis of video-recorded continuous production of signs and gestures. The analysis involves segmenting a stream of body movement into phases and identifying different phase types. Two human coders used the criteria to analyze signs and co-speech gestures produced in natural discourse. It was found that the criteria yielded good inter-coder reliability. These criteria can be used in the technology of automatic recognition of signs and co-speech gestures to segment continuous production and identify the potentially meaning-bearing phases.
  • Klaas, G. (2008). Hints and recommendations concerning field equipment. In A. Majid (Ed.), Field manual volume 11 (pp. vi-vii). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Klein, W. (2008). Sprache innerhalb und ausserhalb der Schule. In Deutschen Akademie für Sprache und Dichtung (Ed.), Jahrbuch 2007 (pp. 140-150). Darmstadt: Wallstein Verlag.
  • Klein, W. (2008). The topic situation. In B. Ahrenholz, U. Bredel, W. Klein, M. Rost-Roth, & R. Skiba (Eds.), Empirische Forschung und Theoriebildung: Beiträge aus Soziolinguistik, Gesprochene-Sprache- und Zweitspracherwerbsforschung: Festschrift für Norbert Dittmar (pp. 287-305). Frankfurt am Main: Lang.
  • Klein, W. (2008). Time in language, language in time. In P. Indefrey, & M. Gullberg (Eds.), Time to speak: Cognitive and neural prerequisites for time in language (pp. 1-12). Oxford: Blackwell.
  • Klein, W. (2021). Das „Heidelberger Forschungsprojekt Pidgin-Deutsch “und die Folgen. In B. Ahrenholz, & M. Rost-Roth (Eds.), Ein Blick zurück nach vorn: Frühe deutsche Forschung zu Zweitspracherwerb, Migration, Mehrsprachigkeit und zweitsprachbezogener Sprachdidaktik sowie ihre Bedeutung heute (pp. 50-95). Berlin: De Gruyter.
  • Klein, W. (1984). Bühler Ellipse. In C. F. Graumann, & T. Herrmann (Eds.), Karl Bühlers Axiomatik: Fünfzig Jahre Axiomatik der Sprachwissenschaften (pp. 117-141). Frankfurt am Main: Klostermann.
  • Klein, W., & Geyken, A. (2010). Das Digitale Wörterbuch der Deutschen Sprache (DWDS). In U. Heid, S. Schierholz, W. Schweickard, H. E. Wiegand, R. H. Gouws, & W. Wolski (Eds.), Lexicographica: International annual for lexicography (pp. 79-96). Berlin, New York: De Gruyter.

    Abstract

    No area in the study of human languages has a longer history and a higher practical significance than lexicography. The advent of the computer has dramatically changed this discipline in ways which go far beyond the digitisation of materials in combination with efficient search tools, or the transfer of an existing dictionary onto the computer. These developments allow the stepwise elaboration of what is called here Digital Lexical Systems, i.e., computerized systems in which the underlying data - in the form of an extendable corpus - and descriptions of lexical properties on various levels can be efficiently combined. This paper discusses the range of these possibilities and describes the present form of the German „Digital Lexical System of the Academy“, a project of the Berlin-Brandenburg Academy of Sciences (www.dwds.de).
  • Klein, W. (2010). Der mühselige Weg zur Erforschung des Schönen. In S. Walther, G. Staupe, & T. Macho (Eds.), Was ist schön? Begleitbuch zur Ausstellung (pp. 124-131). Göttingen: Wallstein.
  • Klein, W. (1998). Ein Blick zurück auf die Varietätengrammatik. In U. Ammon, K. Mattheier, & P. Nelde (Eds.), Sociolinguistica: Internationales Jahrbuch für europäische Soziolinguistik (pp. 22-38). Tübingen: Niemeyer.
  • Klein, W., & Winkler, S. (Eds.). (2010). Ambiguität [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 40(158).
  • Klein, W. (1998). Assertion and finiteness. In N. Dittmar, & Z. Penner (Eds.), Issues in the theory of language acquisition: Essays in honor of Jürgen Weissenborn (pp. 225-245). Bern: Peter Lang.
  • Klein, W. (2008). Mündliche Textproduktion: Informationsorganisation in Texten. In N. Janich (Ed.), Textlinguistik: 15 Einführungen (pp. 217-235). Tübingen: Narr Verlag.
  • Klein, W., & Schnell, R. (Eds.). (2008). Literaturwissenschaft und Linguistik [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (150).
  • Klein, W. (Ed.). (2008). Ist Schönheit messbar? [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 152.
  • Klein, W. (Ed.). (1998). Kaleidoskop [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (112).
  • Klein, W. (Ed.). (1984). Textverständlichkeit - Textverstehen [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (55).
  • Klein, W., & Vater, H. (1998). The perfect in English and German. In L. Kulikov, & H. Vater (Eds.), Typology of verbal categories: Papers presented to Vladimir Nedjalkov on the occasion of his 70th birthday (pp. 215-235). Tübingen: Niemeyer.
  • Klein, W. (2010). Typen und Konzepte des Spracherwerbs. In H. Ludger (Ed.), Sprachwissenschaft, ein Reader (pp. 902-924). Berlin: De Gruyter Studium.
  • Klein, W. (2010). Über die zwänglerische Befolgung sprachlicher Normen. In P. Eisenberg (Ed.), Der Jugend zuliebe: Literarische Texte, für die Schule verändert (pp. 77-87). Göttingen: Wallstein.
  • Kooijman, V., Johnson, E. K., & Cutler, A. (2008). Reflections on reflections of infant word recognition. In A. D. Friederici, & G. Thierry (Eds.), Early language development: Bridging brain and behaviour (pp. 91-114). Amsterdam: Benjamins.
  • Koutamanis, E., Kootstra, G. J., Dijkstra, T., & Unsworth., S. (2021). Lexical priming as evidence for language-nonselective access in the simultaneous bilingual child's lexicon. In D. Dionne, & L.-A. Vidal Covas (Eds.), BUCLD 45: Proceedings of the 45th annual Boston University Conference on Language Development (pp. 413-430). Sommerville, MA: Cascadilla Press.
  • Kuijpers, C. T., Coolen, R., Houston, D., & Cutler, A. (1998). Using the head-turning technique to explore cross-linguistic performance differences. In C. Rovee-Collier, L. Lipsitt, & H. Hayne (Eds.), Advances in infancy research: Vol. 12 (pp. 205-220). Stamford: Ablex.
  • Kung, C., Chwilla, D. J., Gussenhoven, C., Bögels, S., & Schriefers, H. (2010). What did you say just now, bitterness or wife? An ERP study on the interaction between tone, intonation and context in Cantonese Chinese. In Proceedings of Speech Prosody 2010 (pp. 1-4).

    Abstract

    Previous studies on Cantonese Chinese showed that rising question intonation contours on low-toned words lead to frequent misperceptions of the tones. Here we explored the processing consequences of this interaction between tone and intonation by comparing the processing and identification of monosyllabic critical words at the end of questions and statements, using a tone identification task, and ERPs as an online measure of speech comprehension. Experiment 1 yielded higher error rates for the identification of low tones at the end of questions and a larger N400-P600 pattern, reflecting processing difficulty and reanalysis, compared to other conditions. In Experiment 2, we investigated the effect of immediate lexical context on the tone by intonation interaction. Increasing contextual constraints led to a reduction in errors and the disappearance of the P600 effect. These results indicate that there is an immediate interaction between tone, intonation, and context in online speech comprehension. The difference in performance and activation patterns between the two experiments highlights the significance of context in understanding a tone language like Cantonese Chinese.
  • Kupisch, T., Pereira Soares, S. M., Puig-Mayenco, E., & Rothman, J. (2021). Multilingualism and Chomsky's Generative Grammar. In N. Allott (Ed.), A companion to Chomsky (pp. 232-242). doi:10.1002/9781119598732.ch15.

    Abstract

    Just as Einstein's general theory of relativity is concerned with explaining the basics of an observable experience – i.e., gravity – most people take for granted that Chomsky's theory of generative grammar (GG) is concerned with the basic nature of language. This chapter highlights a mere subset of central constructs in GG, showing how they have featured prominently in, and thus shaped, formal linguistic studies in multilingualism. Because multilingualism includes a wide range of nonmonolingual populations, the constructs are divided across child bilingualism and adult third language acquisition for greater coverage. In the case of the former, the chapter examines how poverty of the stimulus has been investigated. Using the nascent field of L3/Ln acquisition as the backdrop, it discusses how the GG constructs of I-language versus E-language sit at the core of debates regarding the very notion of what linguistic transfer and mental representations should be taken to be.
  • Kuzla, C., Ernestus, M., & Mitterer, H. (2010). Compensation for assimilatory devoicing and prosodic structure in German fricative perception. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory Phonology 10 (pp. 731-757). Berlin: De Gruyter.
  • Lai, J., & Poletiek, F. H. (2010). The impact of starting small on the learnability of recursion. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (CogSci 2010) (pp. 1387-1392). Austin, TX, USA: Cognitive Science Society.
  • Lecumberri, M. L. G., Cooke, M., & Cutler, A. (Eds.). (2010). Non-native speech perception in adverse conditions [Special Issue]. Speech Communication, 52(11/12).
  • Lenkiewicz, P., Pereira, M., Freire, M., & Fernandes, J. (2008). Accelerating 3D medical image segmentation with high performance computing. In Proceedings of the IEEE International Workshops on Image Processing Theory, Tools and Applications - IPT (pp. 1-8).

    Abstract

    Digital processing of medical images has helped physicians and patients in recent years by allowing examination and diagnosis at a very precise level. Today, possibly its greatest contribution to modern healthcare is the use of high-performance computing architectures to handle the huge amounts of data that can be collected by modern acquisition devices. This paper presents a parallel implementation of an image segmentation algorithm that operates on a computer cluster equipped with 10 processing units. Thanks to a well-organized distribution of the workload, we significantly shorten the execution time of the algorithm and reach a performance gain very close to linear.
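The abstract above reports a "performance gain very close to linear" on 10 processing units. As a minimal illustration (not from the paper, and with made-up timing numbers), this is how parallel speedup and efficiency are conventionally computed:

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Speedup S = T1 / Tp: serial runtime over parallel runtime."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, n_units: int) -> float:
    """Parallel efficiency E = S / p; values near 1.0 mean near-linear scaling."""
    return speedup(t_serial, t_parallel) / n_units

# Hypothetical timings for a segmentation job on 1 vs. 10 nodes (invented values):
t1, t10 = 950.0, 100.0   # seconds
print(round(speedup(t1, t10), 2))        # 9.5
print(round(efficiency(t1, t10, 10), 2)) # 0.95 -> "very close to linear"
```

An efficiency of 0.95 on 10 units corresponds to the near-linear gain the authors describe; real values depend on communication overhead and workload balance.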
  • Levelt, W. J. M. (2016). Localism versus holism. Historical origins of studying language in the brain. In R. Rubens, & M. Van Dijk (Eds.), Sartoniana vol. 29 (pp. 37-60). Ghent: Ghent University.
  • Levelt, W. J. M. (2016). The first golden age of psycholinguistics 1865-World War I. In R. Rubens, & M. Van Dyck (Eds.), Sartoniana vol. 29 (pp. 15-36). Ghent: Ghent University.
  • Levelt, W. J. M., & De Swaan, A. (2016). Levensbericht Nico Frijda. In Koninklijke Nederlandse Akademie van Wetenschappen (Ed.), Levensberichten en herdenkingen 2016 (pp. 16-25). Amsterdam: KNAW.
  • Levelt, W. J. M. (1984). Geesteswetenschappelijke theorie als kompas voor de gangbare mening. In S. Dresden, & D. Van de Kaa (Eds.), Wetenschap ten goede en ten kwade (pp. 42-52). Amsterdam: North Holland.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (pp. 55-55).
  • Levelt, W. J. M., & Plomp, R. (1968). The appreciation of musical intervals. In J. M. M. Aler (Ed.), Proceedings of the fifth International Congress of Aesthetics, Amsterdam 1964 (pp. 901-904). The Hague: Mouton.
  • Levelt, W. J. M. (1984). Some perceptual limitations on talking about space. In A. J. Van Doorn, W. A. Van de Grind, & J. J. Koenderink (Eds.), Limits in perception (pp. 323-358). Utrecht: VNU Science Press.
  • Levelt, W. J. M. (1984). Spontaneous self-repairs in speech: Processes and representations. In M. P. R. Van den Broecke, & A. Cohen (Eds.), Proceedings of the 10th International Congress of Phonetic Sciences (pp. 105-117). Dordrecht: Foris.
  • Levelt, W. J. M. (2008). What has become of formal grammars in linguistics and psycholinguistics? [Postscript]. In Formal Grammars in linguistics and psycholinguistics (pp. 1-17). Amsterdam: John Benjamins.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C. (2010). Generalized conversational implicature. In L. Cummings (Ed.), The pragmatics encyclopedia (pp. 201-203). London: Routledge.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C. (2016). Language and mind: Let's get the issues straight! In S. D. Blum (Ed.), Making sense of language: Readings in culture and communication [3rd ed.] (pp. 68-80). Oxford: Oxford University Press.
  • Levinson, S. C. (2016). The countable singulare tantum. In A. Reuneker, R. Boogaart, & S. Lensink (Eds.), Aries netwerk: Een constructicon (pp. 145-146). Leiden: Leiden University.
  • Levinson, S. C., & Majid, A. (2008). Preface and priorities. In A. Majid (Ed.), Field manual volume 11 (pp. iii-iv). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C., Bohnemeyer, J., & Enfield, N. J. (2008). Time and space questionnaire. In A. Majid (Ed.), Field Manual Volume 11 (pp. 42-49). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492955.

    Abstract

    This entry contains: 1. An invitation to think about to what extent the grammar of space and time share lexical and morphosyntactic resources − the suggestions here are only prompts, since it would take a long questionnaire to fully explore this; 2. A suggestion about how to collect gestural data that might show us to what extent the spatial and temporal domains have a psychological continuity. This is really the goal − but you need to do the linguistic work first or in addition. The goal of this task is to explore the extent to which time is conceptualised on a spatial basis.
  • Levshina, N. (2021). Conditional inference trees and random forests. In M. Paquot, & T. Gries (Eds.), Practical Handbook of Corpus Linguistics (pp. 611-643). New York: Springer.
  • Levshina, N., & Moran, S. (Eds.). (2021). Efficiency in human languages: Corpus evidence for universal principles [Special Issue]. Linguistics Vanguard, 7(s3).
  • Liszkowski, U. (2010). Before L1: A differentiated perspective on infant gestures. In M. Gullberg, & K. De Bot (Eds.), Gestures in language development (pp. 35-51). Amsterdam: Benjamins.
  • Little, H., Eryılmaz, K., & De Boer, B. (2016). Emergence of signal structure: Effects of duration constraints. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/25.html.

    Abstract

    Recent work has investigated the emergence of structure in speech using experiments which use artificial continuous signals. Some experiments have had no limit on the duration which signals can have (e.g. Verhoef et al., 2014), and others have had time limitations (e.g. Verhoef et al., 2015). However, the effect of time constraints on the structure in signals has never been experimentally investigated.
  • Little, H., & de Boer, B. (2016). Did the pressure for discrimination trigger the emergence of combinatorial structure? In Proceedings of the 2nd Conference of the International Association for Cognitive Semiotics (pp. 109-110).
  • Little, H., Eryılmaz, K., & De Boer, B. (2016). Differing signal-meaning dimensionalities facilitates the emergence of structure. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/25.html.

    Abstract

    Structure of language is not only caused by cognitive processes, but also by physical aspects of the signalling modality. We test the assumptions surrounding the role which the physical aspects of the signal space will have on the emergence of structure in speech. Here, we use a signal creation task to test whether a signal space and a meaning space having similar dimensionalities will generate an iconic system with signal-meaning mapping and whether, when the topologies differ, the emergence of non-iconic structure is facilitated. In our experiments, signals are created using infrared sensors which use hand position to create audio signals. We find that people take advantage of signal-meaning mappings where possible. Further, we use trajectory probabilities and measures of variance to show that when there is a dimensionality mismatch, more structural strategies are used.
  • Little, H. (2016). Nahran Bhannamz: Language Evolution in an Online Zombie Apocalypse Game. In Createvolang: creativity and innovation in language evolution.
  • Lockwood, G., Hagoort, P., & Dingemanse, M. (2016). Synthesized Size-Sound Sound Symbolism. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1823-1828). Austin, TX: Cognitive Science Society.

    Abstract

    Studies of sound symbolism have shown that people can associate sound and meaning in consistent ways when presented with maximally contrastive stimulus pairs of nonwords such as bouba/kiki (rounded/sharp) or mil/mal (small/big). Recent work has shown the effect extends to antonymic words from natural languages and has proposed a role for shared cross-modal correspondences in biasing form-to-meaning associations. An important open question is how the associations work, and particularly what the role is of sound-symbolic matches versus mismatches. We report on a learning task designed to distinguish between three existing theories by using a spectrum of sound-symbolically matching, mismatching, and neutral (neither matching nor mismatching) stimuli. Synthesized stimuli allow us to control for prosody, and the inclusion of a neutral condition allows a direct test of competing accounts. We find evidence for a sound-symbolic match boost, but not for a mismatch difficulty compared to the neutral condition.
  • Lucas, C., Griffiths, T., Xu, F., & Fawcett, C. (2008). A rational model of preference learning and choice prediction by children. In D. Koller, Y. Bengio, D. Schuurmans, L. Bottou, & A. Culotta (Eds.), Advances in Neural Information Processing Systems.

    Abstract

    Young children demonstrate the ability to make inferences about the preferences of other agents based on their choices. However, there exists no overarching account of what children are doing when they learn about preferences or how they use that knowledge. We use a rational model of preference learning, drawing on ideas from economics and computer science, to explain the behavior of children in several recent experiments. Specifically, we show how a simple econometric model can be extended to capture two- to four-year-olds’ use of statistical information in inferring preferences, and their generalization of these preferences.
  • Macuch Silva, V., & Roberts, S. G. (2016). Language adapts to signal disruption in interaction. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/20.html.

    Abstract

    Linguistic traits are often seen as reflecting cognitive biases and constraints (e.g. Christiansen & Chater, 2008). However, language must also adapt to properties of the channel through which communication between individuals occurs. Perhaps the most basic aspect of any communication channel is noise. Communicative signals can be blocked, degraded or distorted by other sources in the environment. This poses a fundamental problem for communication. On average, channel disruption accompanies problems in conversation every 3 minutes (27% of cases of other-initiated repair, Dingemanse et al., 2015). Linguistic signals must adapt to this harsh environment. While modern language structures are robust to noise (e.g. Piantadosi et al., 2011), we investigate how noise might have shaped the early emergence of structure in language. The obvious adaptation to noise is redundancy. Signals which are maximally different from competitors are harder to render ambiguous by noise. Redundancy can be increased by adding differentiating segments to each signal (increasing the diversity of segments). However, this makes each signal more complex and harder to learn. Under this strategy, holistic languages may emerge. Another strategy is reduplication - repeating parts of the signal so that noise is less likely to disrupt all of the crucial information. This strategy does not increase the difficulty of learning the language - there is only one extra rule which applies to all signals. Therefore, under pressures for learnability, expressivity and redundancy, reduplicated signals are expected to emerge. However, reduplication is not a pervasive feature of words (though it does occur in limited domains like plurals or iconic meanings). We suggest that this is due to the pressure for redundancy being lifted by conversational infrastructure for repair. Receivers can request that senders repeat signals only after a problem occurs. 
That is, robustness is achieved by repeating the signal across conversational turns (when needed) instead of within single utterances. As a proof of concept, we ran two iterated learning chains with pairs of individuals in generations learning and using an artificial language (e.g. Kirby et al., 2015). The meaning space was a structured collection of unfamiliar images (3 shapes x 2 textures x 2 outline types). The initial language for each chain was the same written, unstructured, fully expressive language. Signals produced in each generation formed the training language for the next generation. Within each generation, pairs played an interactive communication game. The director was given a target meaning to describe, and typed a word for the matcher, who guessed the target meaning from a set. With a 50% probability, a contiguous section of 3-5 characters in the typed word was replaced by ‘noise’ characters (#). In one chain, the matcher could initiate repair by requesting that the director type and send another signal. Parallel generations across chains were matched for the number of signals sent (if repair was initiated for a meaning, then it was presented twice in the parallel generation where repair was not possible) and noise (a signal for a given meaning which was affected by noise in one generation was affected by the same amount of noise in the parallel generation). For the final set of signals produced in each generation we measured the signal redundancy (the zip compressibility of the signals), the character diversity (entropy of the characters of the signals) and systematic structure (z-score of the correlation between signal edit distance and meaning hamming distance). 
In the condition without repair, redundancy increased with each generation (r=0.97, p=0.01) and the character diversity decreased (r=-0.99, p=0.001), which is consistent with reduplication (the example signals from the initial and final languages are omitted here). Linear regressions revealed that generations with repair had higher overall systematic structure (main effect of condition, t = 2.5, p < 0.05), increasing character diversity (interaction between condition and generation, t = 3.9, p = 0.01), and redundancy that increased at a slower rate (interaction between condition and generation, t = -2.5, p < 0.05). That is, the ability to repair counteracts the pressure from noise and facilitates the emergence of compositional structure. Therefore, just as systems to repair damage to DNA replication are vital for the evolution of biological species (O’Brien, 2006), conversational repair may regulate replication of linguistic forms in the cultural evolution of language. Future studies should further investigate how evolving linguistic structure is shaped by interaction pressures, drawing on experimental methods and naturalistic studies of emerging languages, both spoken (e.g. Botha, 2006; Roberge, 2008) and signed (e.g. Senghas, Kita, & Ozyurek, 2004; Sandler et al., 2005).
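The three outcome measures named in this abstract are concrete enough to sketch. The toy lexicon and all implementation details below are invented (the abstract does not specify them); this is one plausible reading: redundancy as zip compressibility, character diversity as Shannon entropy, and structure as the correlation between pairwise signal edit distance and pairwise meaning Hamming distance (the abstract's z-scoring against a permutation baseline is omitted).

```python
import math
import zlib
from collections import Counter
from itertools import combinations

def compressibility(signals):
    """Zip-compressibility proxy for redundancy: compressed/raw byte ratio.
    More internal repetition (e.g. reduplication) -> smaller ratio. On tiny
    toy data the zlib header dominates, so absolute values are only
    meaningful when comparing languages of similar size."""
    raw = "".join(signals).encode()
    return len(zlib.compress(raw)) / len(raw)

def char_entropy(signals):
    """Shannon entropy (bits) over all characters: character diversity."""
    counts = Counter("".join(signals))
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def edit_distance(a, b):
    """Plain Levenshtein distance between two signals (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def structure(lexicon):
    """Correlate pairwise signal edit distance with pairwise meaning
    Hamming distance; positive values mean similar meanings get
    similar signals (compositional structure)."""
    pairs = list(combinations(lexicon.items(), 2))
    sig_d = [edit_distance(s1, s2) for (_, s1), (_, s2) in pairs]
    mean_d = [sum(a != b for a, b in zip(m1, m2)) for (m1, _), (m2, _) in pairs]
    return pearson(sig_d, mean_d)

# Invented compositional toy lexicon: meanings are (shape, texture) tuples.
lexicon = {
    ("circle", "striped"): "bastripe",
    ("circle", "dotted"):  "badot",
    ("square", "striped"): "kostripe",
    ("square", "dotted"):  "kodot",
}
print(compressibility(lexicon.values()))
print(char_entropy(lexicon.values()))
print(structure(lexicon))  # positive for this compositional toy lexicon
```

In the study these measures are computed per generation and regressed on generation number and condition; the sketch only shows the per-language measurements.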
  • Magyari, L., & De Ruiter, J. P. (2008). Timing in conversation: The anticipation of turn endings. In J. Ginzburg, P. Healey, & Y. Sato (Eds.), Proceedings of the 12th Workshop on the Semantics and Pragmatics Dialogue (pp. 139-146). London: King's college.

    Abstract

    We examined how communicators can switch between the speaker and listener roles with such accurate timing. During conversations, the majority of role transitions happen with a gap or overlap of only a few hundred milliseconds. This suggests that listeners can predict when the turn of the current speaker is going to end. Our hypothesis is that listeners know when a turn ends because they know how it ends. Anticipating the last words of a turn can help the next speaker predict when the turn will end, and also anticipate the content of the turn, so that an appropriate response can be prepared in advance. We used the stimulus material of an earlier experiment (De Ruiter, Mitterer & Enfield, 2006), in which subjects listened to turns from natural conversations and had to press a button exactly when the turn they were listening to ended. In the present experiment, we investigated whether subjects could complete those turns when only an initial fragment of the turn was presented to them. We found that subjects made better predictions about the last words of those turns that had received more accurate responses in the earlier button-press experiment.
  • Magyari, L. (2008). A mentális lexikon modelljei és a magyar nyelv (Models of mental lexicon and the Hungarian language). In J. Gervain, & C. Pléh (Eds.), A láthatatlan nyelv (Invisible Language). Budapest: Gondolat Kiadó.
  • Majid, A., van Leeuwen, T., & Dingemanse, M. (2008). Synaesthesia: A cross-cultural pilot. In A. Majid (Ed.), Field manual volume 11 (pp. 37-41). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492960.

    Abstract

    This Field Manual entry has been superseded by the 2009 version:
    https://doi.org/10.17617/2.883570

  • Majid, A. (2008). Focal colours. In A. Majid (Ed.), Field Manual Volume 11 (pp. 8-10). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492958.

    Abstract

    In this task we aim to find what the best exemplars or “focal colours” of each basic colour term are in our field languages. This is an important part of the evidence we need in order to understand the colour data collected using ‘The Language of Vision I: Colour’. This task consists of an experiment where participants pick out the best exemplar for the colour terms in their language. The goal is to establish language-specific focal colours.
  • Majid, A. (2016). Was wir von anderen Kulturen über den Geruchsinn lernen können. In Museum Tinguely (Ed.), Belle Haleine – Der Duft der Kunst. Interdisziplinäres Symposium (pp. 73-79). Heidelberg: Kehrer.
  • Majid, A. (2016). What other cultures can tell us about the sense of smell. In Museum Tinguely (Ed.), Belle haleine - the scent of art: interdisciplinary symposium (pp. 72-77). Heidelberg: Kehrer.
  • Majid, A. (2010). Words for parts of the body. In B. C. Malt, & P. Wolff (Eds.), Words and the Mind: How words capture human experience (pp. 58-71). New York: Oxford University Press.
  • Mak, M., & Willems, R. M. (2021). Mental simulation during literary reading. In D. Kuiken, & A. M. Jacobs (Eds.), Handbook of empirical literary studies (pp. 63-84). Berlin: De Gruyter.

    Abstract

    Readers experience a number of sensations during reading. They do not – or do not only – process words and sentences in a detached, abstract manner. Instead they “perceive” what they read about. They see descriptions of scenery, feel what characters feel, and hear the sounds in a story. These sensations tend to be grouped under the umbrella terms “mental simulation” and “mental imagery.” This chapter provides an overview of empirical research on the role of mental simulation during literary reading. Our chapter also discusses what mental simulation is and how it relates to mental imagery. Moreover, it explores how mental simulation plays a role in leading models of literary reading and investigates under what circumstances mental simulation occurs during literature reading. Finally, the effect of mental simulation on the literary reader’s experience is discussed, and suggestions and unresolved issues in this field are formulated.
  • Mamus, E., Speed, L. J., Ozyurek, A., & Majid, A. (2021). Sensory modality of input influences encoding of motion events in speech but not co-speech gestures. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 376-382). Vienna: Cognitive Science Society.

    Abstract

    Visual and auditory channels have different affordances and this is mirrored in what information is available for linguistic encoding. The visual channel has high spatial acuity, whereas the auditory channel has better temporal acuity. These differences may lead to different conceptualizations of events and affect multimodal language production. Previous studies of motion events typically present visual input to elicit speech and gesture. The present study compared events presented as audio-only, visual-only, or multimodal (visual+audio) input and assessed speech and co-speech gesture for path and manner of motion in Turkish. Speakers with audio-only input mentioned path more and manner less in verbal descriptions, compared to speakers who had visual input. There was no difference in the type or frequency of gestures across conditions, and gestures were dominated by path-only gestures. This suggests that input modality influences speakers’ encoding of path and manner of motion events in speech, but not in co-speech gestures.
  • Matic, D. (2010). Discourse and syntax in linguistic change: Decline of postverbal topical subjects in Serbo-Croat. In G. Ferraresi, & R. Lühr (Eds.), Diachronic studies on information structure: Language acquisition and change (pp. 117-142). Berlin: Mouton de Gruyter.
  • Matić, D., Hammond, J., & Van Putten, S. (2016). Left-dislocation, sentences and clauses in Avatime, Tundra Yukaghir and Whitesands. In J. Fleischhauer, A. Latrouite, & R. Osswald (Eds.), Exploring the Syntax-Semantics Interface. Festschrift for Robert D. Van Valin, Jr. (pp. 339-367). Düsseldorf: Düsseldorf University Press.
  • Matić, D. (2016). Tag questions and focus markers: Evidence from the Tompo dialect of Even. In M. M. J. Fernandez-Vest, & R. D. Van Valin Jr. (Eds.), Information structure and spoken language in a cross-linguistic perspective (pp. 167-190). Berlin: Mouton de Gruyter.
  • Mazzone, M., & Campisi, E. (2010). Embodiment, metafore, comunicazione. In G. P. Storari, & E. Gola (Eds.), Forme e formalizzazioni. Atti del XVI congresso nazionale. Cagliari: CUEC.
  • Mazzone, M., & Campisi, E. (2010). Are there communicative intentions? In L. A. Pérez Miranda, & A. I. Madariaga (Eds.), Advances in cognitive science. IWCogSc-10. Proceedings of the ILCLI International Workshop on Cognitive Science (pp. 307-322). Bilbao, Spain: The University of the Basque Country.

    Abstract

    Grice in pragmatics and Levelt in psycholinguistics have proposed models of human communication where the starting point of communicative action is an individual intention. This assumption, though, has to face serious objections with regard to the alleged existence of explicit representations of the communicative goals to be pursued. Here evidence is surveyed which shows that in fact speaking may ordinarily be a quite automatic activity prompted by contextual cues and driven by behavioural schemata abstracted away from social regularities. On the one hand, this means that there could exist no intentions in the sense of explicit representations of communicative goals, following from deliberate reasoning and triggering the communicative action. On the other hand, however, there are reasons to allow for a weaker notion of intention than this, according to which communication is an intentional affair, after all. Communicative action is said to be intentional in this weaker sense to the extent that it is subject to a double mechanism of control, with respect both to present-directed and future-directed intentions.
  • McCafferty, S. G., & Gullberg, M. (Eds.). (2008). Gesture and SLA: Toward an integrated approach [Special Issue]. Studies in Second Language Acquisition, 30(2).
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., & Cutler, A. (2010). Cognitive processes in speech perception. In W. J. Hardcastle, J. Laver, & F. E. Gibbon (Eds.), The handbook of phonetic sciences (2nd ed., pp. 489-520). Oxford: Blackwell.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Merkx, D., & Frank, S. L. (2021). Human sentence processing: Recurrence or attention? In E. Chersoni, N. Hollenstein, C. Jacobs, Y. Oseki, L. Prévot, & E. Santus (Eds.), Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2021) (pp. 12-22). Stroudsburg, PA, USA: Association for Computational Linguistics (ACL). doi:10.18653/v1/2021.cmcl-1.2.

    Abstract

    Recurrent neural networks (RNNs) have long been an architecture of interest for computational models of human sentence processing. The recently introduced Transformer architecture outperforms RNNs on many natural language processing tasks, but little is known about its ability to model human language processing. We compare Transformer- and RNN-based language models’ ability to account for measures of human reading effort. Our analysis shows Transformers to outperform RNNs in explaining self-paced reading times and neural activity during reading of English sentences, challenging the widely held idea that human sentence processing involves recurrent and immediate processing and providing evidence for cue-based retrieval.
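The evaluation logic behind comparisons like this one is commonly surprisal-based: a language model's word surprisal is tested as a predictor of human reading effort. A minimal sketch, with invented probabilities and reading times (the paper itself uses trained Transformer and RNN models and regression analyses):

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: -log2 P(word | context). Less expected words
    carry higher surprisal and typically slower reading."""
    return -math.log2(prob)

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-word probabilities from a hypothetical language model,
# paired with invented self-paced reading times (ms) for the same words.
probs = [0.30, 0.05, 0.60, 0.02, 0.45]
rts = [310, 420, 295, 480, 330]
surp = [surprisal(p) for p in probs]

print(round(pearson(surp, rts), 2))  # strong positive on this toy data
```

A model whose surprisal correlates more strongly with (or explains more variance in) reading times and neural signals is the better cognitive fit; the actual study uses regression rather than a raw correlation.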
  • Merkx, D., Frank, S. L., & Ernestus, M. (2021). Semantic sentence similarity: Size does not always matter. In Proceedings of Interspeech 2021 (pp. 4393-4397). doi:10.21437/Interspeech.2021-1464.

    Abstract

    This study addresses the question of whether visually grounded speech recognition (VGS) models learn to capture sentence semantics without access to any prior linguistic knowledge. We produce synthetic and natural spoken versions of a well-known semantic textual similarity database and show that our VGS model produces embeddings that correlate well with human semantic similarity judgements. Our results show that a model trained on a small image-caption database outperforms two models trained on much larger databases, indicating that database size is not all that matters. We also investigate the importance of having multiple captions per image and find that this is indeed helpful even if the total number of images is lower, suggesting that paraphrasing is a valuable learning signal. While the general trend in the field is to create ever larger datasets to train models on, our findings indicate that other characteristics of the database can be just as important.
  • Meyer, A. S., & Huettig, F. (Eds.). (2016). Speaking and Listening: Relationships Between Language Production and Comprehension [Special Issue]. Journal of Memory and Language, 89.

Share this page