Publications

  • Enfield, N. J. (2008). Common ground as a resource for social affiliation. In I. Kecskes, & J. L. Mey (Eds.), Intention, common ground and the egocentric speaker-hearer (pp. 223-254). Berlin: Mouton de Gruyter.
  • Enfield, N. J. (2008). Lao linguistics in the 20th century and since. In Y. Goudineau, & M. Lorrillard (Eds.), Recherches nouvelles sur le Laos (pp. 435-452). Paris: EFEO.
  • Enfield, N. J., & Levinson, S. C. (2008). Metalanguage for speech acts. In A. Majid (Ed.), Field manual volume 11 (pp. 77-79). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492937.

    Abstract

    People of all cultures have some degree of concern with categorizing types of communicative social action. All languages have words with meanings like speak, say, talk, complain, curse, promise, accuse, nod, wink, point and chant. But the exact distinctions they make will differ in both quantity and quality. How is communicative social action categorised across languages and cultures? The goal of this task is to establish a basis for cross-linguistic comparison of native metalanguages for social action.
  • Enfield, N. J., & Bohnemeyer, J. (2001). Hidden colour-chips task: Demonstratives, attention, and interaction. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 21-28). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874636.

    Abstract

    Demonstratives are typically described as encoding degrees of physical distance between the object referred to, and the speaker or addressee. For example, this in English is used to talk about things that are physically near the speaker, and that for things that are not. But is this how speakers really choose between these words in actual talk? This task aims to generate spontaneous language data concerning deixis, gesture, and demonstratives, and to investigate the significance of different factors (e.g., physical distance, attention) in demonstrative selection. In the presence of one consultant (the “memoriser”), sixteen colour chips are hidden under objects in a specified array. Another consultant enters the area and asks the memoriser to recount the locations of the chips. The task is designed to create a situation where the speaker genuinely attempts to manipulate the addressee’s attention on objects in the immediate physical space.
  • Enfield, N. J. (2001). Linguistic evidence for a Lao perspective on facial expression of emotion. In J. Harkins, & A. Wierzbicka (Eds.), Emotions in crosslinguistic perspective (pp. 149-166). Berlin: Mouton de Gruyter.
  • Enfield, N. J. (2001). On genetic and areal linguistics in Mainland South-East Asia: Parallel polyfunctionality of ‘acquire’. In A. Y. Aikhenvald, & R. M. Dixon (Eds.), Areal diffusion and genetic inheritance: Problems in comparative linguistics (pp. 255-290). Oxford University Press.
  • Enfield, N. J., Levinson, S. C., & Stivers, T. (2008). Social action formulation: A "10-minutes" task. In A. Majid (Ed.), Field manual volume 11 (pp. 80-81). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492939.

    Abstract

    This Field Manual entry has been superseded by the 2009 version: https://doi.org/10.17617/2.883564

  • Enfield, N. J., & Dunn, M. (2001). Supplements to the Wilkins 1999 demonstrative questionnaire. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 82-84). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874638.
  • Ernestus, M. (2016). L'utilisation des corpus oraux pour la recherche en (psycho)linguistique. In M. Kilani-Schoch, C. Surcouf, & A. Xanthos (Eds.), Nouvelles technologies et standards méthodologiques en linguistique (pp. 65-93). Lausanne: Université de Lausanne.
  • Eryilmaz, K., Little, H., & De Boer, B. (2016). Using HMMs To Attribute Structure To Artificial Languages. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/125.html.

    Abstract

    We investigated the use of Hidden Markov Models (HMMs) as a way of representing repertoires of continuous signals in order to infer their building blocks. We tested the idea on a dataset from an artificial language experiment. The study demonstrates that using HMMs for this purpose is viable, but also that there is considerable room for refinement, such as explicit duration modeling, incorporation of autoregressive elements, and relaxation of the Markovian assumption, in order to accommodate specific details.
  • Falk, J. J., Zhang, Y., Scheutz, M., & Yu, C. (2021). Parents adaptively use anaphora during parent-child social interaction. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 1472-1478). Vienna: Cognitive Science Society.

    Abstract

    Anaphora, a ubiquitous feature of natural language, poses a particular challenge to young children as they first learn language due to its referential ambiguity. In spite of this, parents and caregivers use anaphora frequently in child-directed speech, potentially presenting a risk to effective communication if children do not yet have the linguistic capabilities of resolving anaphora successfully. Through an eye-tracking study in a naturalistic free-play context, we examine the strategies that parents employ to calibrate their use of anaphora to their child's linguistic development level. We show that, in this way, parents are able to intuitively scaffold the complexity of their speech such that greater referential ambiguity does not hurt overall communication success.
  • Fernald, A., McRoberts, G. W., & Swingley, D. (2001). Infants' developing competence in recognizing and understanding words in fluent speech. In J. Weissenborn, & B. Höhle (Eds.), Approaches to Bootstrapping: Phonological, lexical, syntactic and neurophysiological aspects of early language acquisition. Volume 1 (pp. 97-123). Amsterdam: Benjamins.
  • Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S., Pašukonis, A., Hoeschele, M., Ocklenburg, S., de Boer, B., Sturdy, C. B., Newen, A., & Güntürkün, O. (2016). Humans Recognize Vocal Expressions Of Emotional States Universally Across Species. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/91.html.

    Abstract

    The perception of danger in the environment can induce physiological responses (such as a heightened state of arousal) in animals, which may cause measurable changes in the prosodic modulation of the voice (Briefer, 2012). The ability to interpret the prosodic features of animal calls as an indicator of emotional arousal may have provided the first hominins with an adaptive advantage, enabling, for instance, the recognition of a threat in the surroundings. This ability might have paved the way for the ability to process meaningful prosodic modulations in the emerging linguistic utterances.
  • Filippi, P., Ocklenburg, S., Bowling, D. L., Heege, L., Newen, A., Güntürkün, O., & de Boer, B. (2016). Multimodal Processing Of Emotional Meanings: A Hypothesis On The Adaptive Value Of Prosody. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/90.html.

    Abstract

    Humans combine multiple sources of information to comprehend meanings. These sources can be characterized as linguistic (i.e., lexical units and/or sentences) or paralinguistic (e.g. body posture, facial expression, voice intonation, pragmatic context). Emotion communication is a special case in which linguistic and paralinguistic dimensions can simultaneously denote the same, or multiple incongruous, referential meanings. Think, for instance, about when someone says “I’m sad!”, but does so with happy intonation and a happy facial expression. Here, the communicative channels express very specific (although conflicting) emotional states as denotations. In such cases of intermodal incongruence, are we involuntarily biased to respond to information in one channel over the other? We hypothesize that humans are involuntarily biased to respond to prosody over verbal content and facial expression, since the ability to communicate socially relevant information such as basic emotional states through prosodic modulation of the voice might have provided early hominins with an adaptive advantage that preceded the emergence of segmental speech (Darwin, 1871; Mithen, 2005). To address this hypothesis, we examined the interaction between multiple communicative channels in recruiting attentional resources within a Stroop interference task (i.e. a task in which different channels give conflicting information; Stroop, 1935). In experiment 1, we used synonyms of “happy” and “sad” spoken with happy and sad prosody. Participants were asked to identify the emotion expressed by the verbal content while ignoring prosody (Word task) or vice versa (Prosody task). Participants responded faster and more accurately in the Prosody task. Within the Word task, incongruent stimuli were responded to more slowly and less accurately than congruent stimuli. In experiment 2, we adopted synonyms of “happy” and “sad” spoken in happy and sad prosody, while a happy or sad face was displayed. Participants were asked to identify the emotion expressed by the verbal content while ignoring prosody and face (Word task), to identify the emotion expressed by prosody while ignoring verbal content and face (Prosody task), or to identify the emotion expressed by the face while ignoring prosody and verbal content (Face task). Participants responded faster in the Face task, and less accurately when the two non-focused channels expressed an emotion incongruent with the focused one, as compared with the condition where all the channels were congruent. In addition, in the Word task, accuracy was lower when prosody was incongruent with verbal content and face, as compared with the condition where all the channels were congruent. Our data suggest that prosody interferes with emotion word processing, eliciting automatic responses even when conflicting with both verbal content and facial expressions at the same time. In contrast, although processed significantly faster than prosody and verbal content, faces alone are not sufficient to interfere in emotion processing within a three-dimensional Stroop task. Our findings align with the hypothesis that the ability to communicate emotions through prosodic modulation of the voice – which seems to be dominant over verbal content – is evolutionarily older than the emergence of segmental articulation (Mithen, 2005; Fitch, 2010). This hypothesis fits with quantitative data suggesting that prosody has a vital role in the perception of well-formed words (Johnson & Jusczyk, 2001), in the ability to map sounds to referential meanings (Filippi et al., 2014), and in syntactic disambiguation (Soderstrom et al., 2003). This research could complement studies on iconic communication within visual and auditory domains, providing new insights for models of language evolution. Further work on how emotional cues from different modalities are simultaneously integrated will improve our understanding of how humans interpret multimodal emotional meanings in real-life interactions.
  • Fisher, S. E. (2016). A molecular genetic perspective on speech and language. In G. Hickok, & S. Small (Eds.), Neurobiology of Language (pp. 13-24). Amsterdam: Elsevier. doi:10.1016/B978-0-12-407794-2.00002-X.

    Abstract

    The rise of genomic technologies has yielded exciting new routes for studying the biological foundations of language. Researchers have begun to identify genes implicated in neurodevelopmental disorders that disrupt speech and language skills. This chapter illustrates how such work can provide powerful entry points into the critical neural pathways using FOXP2 as an example. Rare mutations of this gene cause problems with learning to sequence mouth movements during speech, accompanied by wide-ranging impairments in language production and comprehension. FOXP2 encodes a regulatory protein, a hub in a network of other genes, several of which have also been associated with language-related impairments. Versions of FOXP2 are found in similar form in many vertebrate species; indeed, studies of animals and birds suggest conserved roles in the development and plasticity of certain sets of neural circuits. Thus, the contributions of this gene to human speech and language involve modifications of evolutionarily ancient functions.
  • Fisher, S. E., & Smith, S. (2001). Progress towards the identification of genes influencing developmental dyslexia. In A. Fawcett (Ed.), Dyslexia: Theory and good practice (pp. 39-64). London: Whurr.
  • Fitz, H., & Chang, F. (2008). The role of the input in a connectionist model of the accessibility hierarchy in development. In H. Chan, H. Jacob, & E. Kapia (Eds.), Proceedings from the 32nd Annual Boston University Conference on Language Development [BUCLD 32] (pp. 120-131). Somerville, Mass.: Cascadilla Press.
  • Floyd, S. (2016). Insubordination in Interaction: The Cha’palaa counter-assertive. In N. Evans, & H. Watanabe (Eds.), Dynamics of Insubordination (pp. 341-366). Amsterdam: John Benjamins.

    Abstract

    In the Cha’palaa language of Ecuador the main-clause use of the otherwise non-finite morpheme -ba can be accounted for by a specific interactive practice: the ‘counter-assertion’ of a statement or implicature of a previous conversational turn. Attention to the ways in which different constructions are deployed in such recurrent conversational contexts reveals a plausible account for how this type of dependent clause has come to be one of the options for finite clauses. After giving some background on Cha’palaa and placing ba clauses within a larger ecology of insubordination constructions in the language, this chapter uses examples from a video corpus of informal conversation to illustrate how interactive data provides answers that may otherwise be elusive for understanding how the different grammatical options for Cha’palaa finite verb constructions have been structured by insubordination.
  • Floyd, S., & Norcliffe, E. (2016). Switch reference systems in the Barbacoan languages and their neighbors. In R. Van Gijn, & J. Hammond (Eds.), Switch Reference 2.0 (pp. 207-230). Amsterdam: Benjamins.

    Abstract

    This chapter surveys the available data on Barbacoan languages and their neighbors to explore a case study of switch reference within a single language family and in a situation of areal contact. To the extent possible given the available data, we weigh accounts appealing to common inheritance and areal convergence to ask what combination of factors led to the current state of these languages. We discuss the areal distribution of switch reference systems in the northwest Andean region, the different types of systems and degrees of complexity observed, and scenarios of contact and convergence, particularly in the case of Barbacoan and Ecuadorian Quechua. We then cover each of the Barbacoan languages’ systems (with the exception of Totoró, represented by its close relative Guambiano), identifying limited formal cognates, primarily between closely-related Tsafiki and Cha’palaa, as well as broader functional similarities, particularly in terms of interactions with topic/focus markers. We account for the current state of affairs with a complex scenario of areal prevalence of switch reference combined with deep structural family inheritance and formal re-structuring of the systems over time.
  • Frost, R. L. A., & Casillas, M. (2021). Investigating statistical learning of nonadjacent dependencies: Running statistical learning tasks in non-WEIRD populations. In SAGE Research Methods Cases. doi:10.4135/9781529759181.

    Abstract

    Language acquisition is complex. However, one thing that has been suggested to help learning is the way that information is distributed throughout language; co-occurrences among particular items (e.g., syllables and words) have been shown to help learners discover the words that a language contains and figure out how those words are used. Humans’ ability to draw on this information—“statistical learning”—has been demonstrated across a broad range of studies. However, evidence from non-WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies is critically lacking, which limits theorizing on the universality of this skill. We extended work on statistical language learning to a new, non-WEIRD linguistic population: speakers of Yélî Dnye, who live on a remote island off mainland Papua New Guinea (Rossel Island). We performed a replication of an existing statistical learning study, training adults on an artificial language with statistically defined words, then examining what they had learnt using a two-alternative forced-choice test. Crucially, we implemented several key amendments to the original study to ensure the replication was suitable for remote field-site testing with speakers of Yélî Dnye. We made critical changes to the stimuli and materials (to test speakers of Yélî Dnye, rather than English), the instructions (we re-worked these significantly, and added practice tasks to optimize participants’ understanding), and the study format (shifting from a lab-based to a portable tablet-based setup). We discuss the requirement for acute sensitivity to linguistic, cultural, and environmental factors when adapting studies to test new populations.

  • Frost, R. L. A., Monaghan, P., & Christiansen, M. H. (2016). Using Statistics to Learn Words and Grammatical Categories: How High Frequency Words Assist Language Acquisition. In A. Papafragou, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 81-86). Austin, Tx: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2016/papers/0027/index.html.

    Abstract

    Recent studies suggest that high-frequency words may benefit speech segmentation (Bortfeld, Morgan, Golinkoff, & Rathbun, 2005) and grammatical categorisation (Monaghan, Christiansen, & Chater, 2007). To date, these tasks have been examined separately, but not together. We familiarised adults with continuous speech comprising repetitions of target words, and compared learning to a language in which targets appeared alongside high-frequency marker words. Marker words reliably preceded targets, and distinguished them into two otherwise unidentifiable categories. Participants completed a 2AFC segmentation test, and a similarity judgement categorisation test. We tested transfer to a word-picture mapping task, where words from each category were used either consistently or inconsistently to label actions/objects. Participants segmented the speech successfully, but only demonstrated effective categorisation when speech contained high-frequency marker words. The advantage of marker words extended to the early stages of the transfer task. Findings indicate the same high-frequency words may assist speech segmentation and grammatical categorisation.
  • Galke, L., Franke, B., Zielke, T., & Scherp, A. (2021). Lifelong learning of graph neural networks for open-world node classification. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN). Piscataway, NJ: IEEE. doi:10.1109/IJCNN52387.2021.9533412.

    Abstract

    Graph neural networks (GNNs) have emerged as the standard method for numerous tasks on graph-structured data such as node classification. However, real-world graphs often evolve over time and even new classes may arise. We model these challenges as an instance of lifelong learning, in which a learner faces a sequence of tasks and may take over knowledge acquired in past tasks. Such knowledge may be stored explicitly as historic data or implicitly within model parameters. In this work, we systematically analyze the influence of implicit and explicit knowledge. To this end, we present an incremental training method for lifelong learning on graphs and introduce a new measure based on k-neighborhood time differences to address variances in the historic data. We apply our training method to five representative GNN architectures and evaluate them on three new lifelong node classification datasets. Our results show that no more than 50% of the GNN's receptive field is necessary to retain at least 95% accuracy compared to training over the complete history of the graph data. Furthermore, our experiments confirm that implicit knowledge becomes more important when less explicit knowledge is available.
  • Galke, L., Seidlmayer, E., Lüdemann, G., Langnickel, L., Melnychuk, T., Förstner, K. U., Tochtermann, K., & Schultz, C. (2021). COVID-19++: A citation-aware Covid-19 dataset for the analysis of research dynamics. In Y. Chen, H. Ludwig, Y. Tu, U. Fayyad, X. Zhu, X. Hu, S. Byna, X. Liu, J. Zhang, S. Pan, V. Papalexakis, J. Wang, A. Cuzzocrea, & C. Ordonez (Eds.), Proceedings of the 2021 IEEE International Conference on Big Data (pp. 4350-4355). Piscataway, NJ: IEEE.

    Abstract

    COVID-19 research datasets are crucial for analyzing research dynamics. Most collections of COVID-19 research items do not include cited works and do not have annotations from a controlled vocabulary. Starting with ZB MED KE data on COVID-19, which comprises CORD-19, we assemble a new dataset that includes cited work and MeSH annotations for all records. Furthermore, we conduct experiments on the analysis of research dynamics, in which we investigate predicting links in a co-annotation graph created on the basis of the new dataset. Surprisingly, we find that simple heuristic methods are better at predicting future links than more sophisticated approaches such as graph neural networks.
  • Gannon, E., He, J., Gao, X., & Chaparro, B. (2016). RSVP Reading on a Smart Watch. In Proceedings of the Human Factors and Ergonomics Society 2016 Annual Meeting (pp. 1130-1134).

    Abstract

    Reading with Rapid Serial Visual Presentation (RSVP) has shown promise for optimizing screen space and increasing reading speed without compromising comprehension. Given the wide use of small-screen devices, the present study compared RSVP and traditional reading on three types of reading comprehension, reading speed, and subjective measures on a smart watch. Results confirm previous studies that show faster reading speed with RSVP without detracting from comprehension. Subjective data indicate that traditional reading is strongly preferred to RSVP as a primary reading method. Given the optimal use of screen space, increased speed, and comparable comprehension, future studies should focus on making RSVP a more comfortable format.
  • García Lecumberri, M. L., Cooke, M., Cutugno, F., Giurgiu, M., Meyer, B. T., Scharenborg, O., Van Dommelen, W., & Volin, J. (2008). The non-native consonant challenge for European languages. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1781-1784). ISCA Archive.

    Abstract

    This paper reports on a multilingual investigation into the effects of different masker types on native and non-native perception in a VCV consonant recognition task. Native listeners outperformed 7 other language groups, but all groups showed a similar ranking of maskers. Strong first language (L1) interference was observed, both from the sound system and from the L1 orthography. Universal acoustic-perceptual tendencies are also at work in both native and non-native sound identifications in noise. The effect of linguistic distance, however, was less clear: in large multilingual studies, listener variables may overpower other factors.
  • Gerwien, J., & Flecken, M. (2016). First things first? Top-down influences on event apprehension. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 2633-2638). Austin, TX: Cognitive Science Society.

    Abstract

    Not much is known about event apprehension, the earliest stage of information processing in elicited language production studies using pictorial stimuli. A reason for our lack of knowledge on this process is that apprehension happens very rapidly (<350 ms after stimulus onset, Griffin & Bock 2000), making it difficult to measure the process directly. To broaden our understanding of apprehension, we analyzed landing positions and onset latencies of first fixations on visual stimuli (pictures of real-world events) given short stimulus presentation times, presupposing that the first fixation directly results from information processing during apprehension.
  • Gordon, P. C., Lowder, M. W., & Hoedemaker, R. S. (2016). Reading in normally aging adults. In H. Wright (Ed.), Cognitive-Linguistic Processes and Aging (pp. 165-192). Amsterdam: Benjamins. doi:10.1075/z.200.07gor.

    Abstract

    The activity of reading raises fundamental theoretical and practical questions about healthy cognitive aging. Reading relies greatly on knowledge of patterns of language and of meaning at the level of words and topics of text. Further, this knowledge must be rapidly accessed so that it can be coordinated with processes of perception, attention, memory and motor control that sustain skilled reading at rates of four-to-five words a second. As such, reading depends both on crystallized semantic intelligence which grows or is maintained through healthy aging, and on components of fluid intelligence which decline with age. Reading is important to older adults because it facilitates completion of everyday tasks that are essential to independent living. In addition, it entails the kind of active mental engagement that can preserve and deepen the cognitive reserve that may mitigate the negative consequences of age-related changes in the brain. This chapter reviews research on the front end of reading (word recognition) and on the back end of reading (text memory) because both of these abilities are surprisingly robust to declines associated with cognitive aging. For word recognition, that robustness is surprising because rapid processing of the sort found in reading is usually impaired by aging; for text memory, it is surprising because other types of episodic memory performance (e.g., paired associates) substantially decline in aging. These two otherwise quite different levels of reading comprehension remain robust because they draw on the knowledge of language that older adults gain through a life-time of experience with language.
  • Le Guen, O., Senft, G., & Sicoli, M. A. (2008). Language of perception: Views from anthropology. In A. Majid (Ed.), Field Manual Volume 11 (pp. 29-36). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.446079.

    Abstract

    To understand the underlying principles of categorisation and classification of sensory input, semantic analyses must be based on both language and culture. The senses are not only physiological phenomena, but they are also linguistic, cultural, and social. The goal of this task is to explore and describe sociocultural patterns relating language of perception, ideologies of perception, and perceptual practice in our speech communities.
  • Gullberg, M. (2008). A helping hand? Gestures, L2 learners, and grammar. In S. G. McCafferty, & G. Stam (Eds.), Gesture: Second language acquisition and classroom research (pp. 185-210). New York: Routledge.

    Abstract

    This chapter explores what L2 learners' gestures reveal about L2 grammar. The focus is on learners’ difficulties with maintaining reference in discourse caused by their incomplete mastery of pronouns. The study highlights the systematic parallels between properties of L2 speech and gesture, and the parallel effects of grammatical development in both modalities. The validity of a communicative account of interlanguage grammar in this domain is tested by taking the cohesive properties of the gesture-speech ensemble into account. Specifically, I investigate whether learners use gestures to compensate for and to license over-explicit reference in speech. The results rule out a communicative account for the spoken variety of maintained reference. In contrast, cohesive gestures are found to be multi-functional. While the presence of cohesive gestures is not communicatively motivated, their spatial realisation is. It is suggested that gestures are exploited as a grammatical communication strategy to disambiguate speech wherever possible, but that they may also be doing speaker-internal work. The methodological importance of considering L2 gestures when studying grammar is also discussed.
  • Gullberg, M., & Indefrey, P. (2008). Cognitive and neural prerequisites for time in language: Any answers? In P. Indefrey, & M. Gullberg (Eds.), Time to speak: Cognitive and neural prerequisites for time in language (pp. 207-216). Oxford: Blackwell.
  • Gullberg, M. (2008). Gestures and second language acquisition. In P. Robinson, & N. C. Ellis (Eds.), Handbook of cognitive linguistics and second language acquisition (pp. 276-305). New York: Routledge.

    Abstract

    Gestures, the symbolic movements speakers perform while they speak, are systematically related to speech and language at multiple levels, and reflect cognitive and linguistic activities in non-trivial ways. This chapter presents an overview of what gestures can tell us about the processes of second language acquisition. It focuses on two key aspects, (a) gestures and the developing language system and (b) gestures and learning, and discusses some implications of an expanded view of language acquisition that takes gestures into account.
  • Gullberg, M., & Holmqvist, K. (2001). Eye tracking and the perception of gestures in face-to-face interaction vs on screen. In C. Cavé, I. Guaïtella, & S. Santi (Eds.), Oralité et gestualité (2001) (pp. 381-384). Paris, France: Editions Harmattan.
  • Hagoort, P., & Ramsey, N. (2001). De gereedschapskist van de cognitieve neurowetenschap. In F. Wijnen, & F. Verstraten (Eds.), Het brein te kijk (pp. 39-67). Lisse: Swets & Zeitlinger.
  • Hagoort, P. (2001). De verbeelding aan de macht: Hoe het menselijk taalvermogen zichtbaar wordt in de (beeld) analyse van hersenactiviteit. In J. Joosse (Ed.), Biologie en psychologie: Naar vruchtbare kruisbestuivingen (pp. 41-60). Amsterdam: Koninklijke Nederlandse Akademie van Wetenschappen.
  • Hagoort, P., Ramsey, N. F., & Jensen, O. (2008). De gereedschapskist van de cognitieve neurowetenschap. In F. Wijnen, & F. Verstraten (Eds.), Het brein te kijk: Verkenning van de cognitieve neurowetenschap (pp. 41-75). Amsterdam: Harcourt Assessment.
  • Hagoort, P. (2016). MUC (Memory, Unification, Control): A Model on the Neurobiology of Language Beyond Single Word Processing. In G. Hickok, & S. Small (Eds.), Neurobiology of language (pp. 339-347). Amsterdam: Elsevier. doi:10.1016/B978-0-12-407794-2.00028-6.

    Abstract

    A neurobiological model of language is discussed that overcomes the shortcomings of the classical Wernicke-Lichtheim-Geschwind model. It is based on a subdivision of language processing into three components: Memory, Unification, and Control. The functional components as well as the neurobiological underpinnings of the model are discussed. In addition, the need for extension beyond the classical core regions for language is shown. Attentional networks as well as networks for inferential processing are crucial to realize language comprehension beyond single word processing and beyond decoding propositional content.
  • Hagoort, P. (2016). Zij zijn ons brein. In J. Brockman (Ed.), Machines die denken: Invloedrijke denkers over de komst van kunstmatige intelligentie (pp. 184-186). Amsterdam: Maven Publishing.
  • Hagoort, P. (2008). Über Broca, Gehirn und Bindung. In Jahrbuch 2008: Tätigkeitsberichte der Institute. München: Generalverwaltung der Max-Planck-Gesellschaft. Retrieved from http://www.mpg.de/306524/forschungsSchwerpunkt1?c=166434.

    Abstract

    In speaking and in language comprehension, word meanings are retrieved from memory and combined into larger units (unification). Such unification operations take place at different levels of language processing. This contribution proposes a framework in which psycholinguistic models are linked with the neurobiology of language. According to this proposal, the left inferior frontal gyrus (LIFG) plays an important role in unification.
  • Hanulikova, A. (2008). Word recognition in possible word contexts. In M. Kokkonidis (Ed.), Proceedings of LingO 2007 (pp. 92-99). Oxford: Faculty of Linguistics, Philology, and Phonetics, University of Oxford.

    Abstract

    The Possible-Word Constraint (PWC; Norris, McQueen, Cutler, and Butterfield 1997) suggests that segmentation of continuous speech operates with a universal constraint that feasible words should contain a vowel. Single consonants, because they do not constitute syllables, are treated as non-viable residues. Two word-spotting experiments are reported that investigate whether the PWC really is a language-universal principle. According to the PWC, Slovak listeners should, just like Germans, be slower at spotting words in single consonant contexts (not feasible words) as compared to syllable contexts (feasible words)—even if single consonants can be words in Slovak. The results confirm the PWC in German but not in Slovak.
  • Hanulikova, A., & Dietrich, R. (2008). Die variable Coda in der slowakisch-deutschen Interimsprache. In M. Tarvas (Ed.), Tradition und Geschichte im literarischen und sprachwissenschaftlichen Kontext (pp. 119-130). Bern: Peter Lang.
  • Harbusch, K., Kempen, G., & Vosse, T. (2008). A natural-language paraphrase generator for on-line monitoring and commenting incremental sentence construction by L2 learners of German. In Proceedings of WorldCALL 2008.

    Abstract

    Certain categories of language learners need feedback on the grammatical structure of sentences they wish to produce. In contrast with the usual NLP approach to this problem—parsing student-generated texts—we propose a generation-based approach aiming at preventing errors (“scaffolding”). In our ICALL system, students construct sentences by composing syntactic trees out of lexically anchored “treelets” via a graphical drag&drop user interface. A natural-language generator computes all possible grammatically well-formed sentences entailed by the student-composed tree, and intervenes immediately when the latter tree does not belong to the set of well-formed alternatives. Feedback is based on comparisons between the student-composed tree and the well-formed set. Frequently occurring errors are handled in terms of “malrules.” The system (implemented in JAVA and C++) currently focuses on constituent order in German as L2.
  • Harmon, Z., Barak, L., Shafto, P., Edwards, J., & Feldman, N. H. (2021). Making heads or tails of it: A competition–compensation account of morphological deficits in language impairment. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 1872-1878). Vienna: Cognitive Science Society.

    Abstract

    Children with developmental language disorder (DLD) regularly use the base form of verbs (e.g., dance) instead of inflected forms (e.g., danced). We propose an account of this behavior in which children with DLD have difficulty processing novel inflected verbs in their input. This leads the inflected form to face stronger competition from alternatives. Competition is resolved by the production of a more accessible alternative with high semantic overlap with the inflected form: in English, the bare form. We test our account computationally by training a nonparametric Bayesian model that infers the productivity of the inflectional suffix (-ed). We systematically vary the number of novel types of inflected verbs in the input to simulate the input as processed by children with and without DLD. Modeling results are consistent with our hypothesis, suggesting that children’s inconsistent use of inflectional morphemes could stem from inferences they make on the basis of impoverished data.
  • Harmon, Z., & Kapatsinski, V. (2016). Fuse to be used: A weak cue’s guide to attracting attention. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 520-525). Austin, TX: Cognitive Science Society.

    Abstract

    Several studies examined cue competition in human learning by testing learners on a combination of conflicting cues rooting for different outcomes, with each cue perfectly predicting its outcome. A common result has been that learners faced with cue conflict choose the outcome associated with the rare cue (the Inverse Base Rate Effect, IBRE). Here, we investigate cue competition including IBRE with sentences containing cues to meanings in a visual world. We do not observe IBRE. Instead we find that position in the sentence strongly influences cue salience. Faced with conflict between an initial cue and a non-initial cue, learners choose the outcome associated with the initial cue, whether frequent or rare. However, a frequent configuration of non-initial cues that are not sufficiently salient on their own can overcome a competing salient initial cue rooting for a different meaning. This provides a possible explanation for certain recurring patterns in language change.
  • Harmon, Z., & Kapatsinski, V. (2016). Determinants of lengths of repetition disfluencies: Probabilistic syntactic constituency in speech production. In R. Burkholder, C. Cisneros, E. R. Coppess, J. Grove, E. A. Hanink, H. McMahan, C. Meyer, N. Pavlou, Ö. Sarıgül, A. R. Singerman, & A. Zhang (Eds.), Proceedings of the Fiftieth Annual Meeting of the Chicago Linguistic Society (pp. 237-248). Chicago: Chicago Linguistic Society.
  • Heeschen, V., Eibl-Eibesfeldt, I., Grammer, K., Schiefenhövel, W., & Senft, G. (1986). Sprachliches Verhalten. In Generalverwaltung der MPG (Ed.), Max-Planck-Gesellschaft Jahrbuch 1986 (pp. 394-396). Göttingen: Vandenhoeck and Ruprecht.
  • Hellwig, B., Defina, R., Kidd, E., Allen, S. E. M., Davidson, L., & Kelly, B. F. (2021). Child language documentation: The sketch acquisition project. In G. Haig, S. Schnell, & F. Seifart (Eds.), Doing corpus-based typology with spoken language data: State of the art (pp. 29-58). Honolulu, HI: University of Hawai'i Press.

    Abstract

    This paper reports on an on-going project designed to collect comparable corpus data on child language and child-directed language in under-researched languages. Despite a long history of cross-linguistic research, there is a severe empirical bias within language acquisition research: Data is available for less than 2% of the world's languages, heavily skewed towards the larger and better-described languages. As a result, theories of language development tend to be grounded in a non-representative sample, and we know little about the acquisition of typologically-diverse languages from different families, regions, or sociocultural contexts. It is very likely that the reasons are to be found in the forbidding methodological challenges of constructing child language corpora under fieldwork conditions with their strict requirements on participant selection, sampling intervals, and amounts of data. There is thus an urgent need for proposals that facilitate and encourage language acquisition research across a wide variety of languages. Adopting a language documentation perspective, we illustrate an approach that combines the construction of manageable corpora of natural interaction with and between children with a sketch description of the corpus data – resulting in a set of comparable corpora and comparable sketches that form the basis for cross-linguistic comparisons.
  • Hellwig, F. M., & Lüpke, F. (2001). Caused positions. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 126-128). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874644.

    Abstract

    What kinds of resources do languages have for describing location and position? For some languages, verbs have an important role to play in describing different kinds of situations (e.g., whether a bottle is standing or lying on the table). This task is designed to examine the use of positional verbs in locative constructions, with respect to the presence or absence of a human “positioner”. Participants are asked to describe video clips showing locative states that occur spontaneously, or because of active interference from a person. The task follows on from two earlier tools for the elicitation of static locative descriptions (BowPed and the Ameka picture book task). A number of additional variables (e.g. canonical vs. non-canonical orientation of the figure) are also targeted in the stimulus set.

    Additional information

    2001_Caused_positions.zip
  • Hendrickx, I., Lefever, E., Croijmans, I., Majid, A., & Van den Bosch, A. (2016). Very quaffable and great fun: Applying NLP to wine reviews. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Vol 2 (pp. 306-312). Stroudsburg, PA: Association for Computational Linguistics.

    Abstract

    We automatically predict properties of wines on the basis of smell and flavor descriptions from experts’ wine reviews. We show wine experts are capable of describing their smell and flavor experiences in wine reviews in a sufficiently consistent manner, such that we can use their descriptions to predict properties of a wine based solely on language. The experimental results show promising F-scores when using lexical and semantic information to predict the color, grape variety, country of origin, and price of a wine. This demonstrates, contrary to popular opinion, that wine experts’ reviews really are informative.
  • Hintz, F., Voeten, C. C., McQueen, J. M., & Scharenborg, O. (2021). The effects of onset and offset masking on the time course of non-native spoken-word recognition in noise. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 133-139). Vienna: Cognitive Science Society.

    Abstract

    Using the visual-world paradigm, the present study investigated the effects of word onset and offset masking on the time course of non-native spoken-word recognition in the presence of background noise. In two experiments, Dutch non-native listeners heard English target words, preceded by carrier sentences that were noise-free (Experiment 1) or contained intermittent noise (Experiment 2). Target words were either onset- or offset-masked or not masked at all. Results showed that onset masking delayed target word recognition more than offset masking did, suggesting that, similar to native listeners, non-native listeners strongly rely on word onset information during word recognition in noise.

    Additional information

    Link to Preprint on BioRxiv
  • Hintz, F., & Scharenborg, O. (2016). Neighbourhood density influences word recognition in native and non-native speech recognition in noise. In H. Van den Heuvel, B. Cranen, & S. Mattys (Eds.), Proceedings of the Speech Processing in Realistic Environments (SPIRE) workshop (pp. 46-47). Groningen.
  • Hintz, F., & Scharenborg, O. (2016). The effect of background noise on the activation of phonological and semantic information during spoken-word recognition. In Proceedings of Interspeech 2016: The 17th Annual Conference of the International Speech Communication Association (pp. 2816-2820).

    Abstract

    During spoken-word recognition, listeners experience phonological competition between multiple word candidates, which increases, relative to optimal listening conditions, when speech is masked by noise. Moreover, listeners activate semantic word knowledge during the word’s unfolding. Here, we replicated the effect of background noise on phonological competition and investigated to what extent noise affects the activation of semantic information in phonological competitors. Participants’ eye movements were recorded when they listened to sentences containing a target word and looked at three types of displays. The displays either contained a picture of the target word, or a picture of a phonological onset competitor, or a picture of a word semantically related to the onset competitor, each along with three unrelated distractors. The analyses revealed that, in noise, fixations to the target and to the phonological onset competitor were delayed and smaller in magnitude compared to the clean listening condition, most likely reflecting enhanced phonological competition. No evidence for the activation of semantic information in the phonological competitors was observed in noise and, surprisingly, also not in the clean condition. We discuss the implications of the lack of an effect and differences between the present and earlier studies.
  • Irvine, E., & Roberts, S. G. (2016). Deictic tools can limit the emergence of referential symbol systems. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/99.html.

    Abstract

    Previous experiments and models show that the pressure to communicate can lead to the emergence of symbols in specific tasks. The experiment presented here suggests that the ability to use deictic gestures can reduce the pressure for symbols to emerge in co-operative tasks. In the 'gesture-only' condition, pairs built a structure together in 'Minecraft', and could only communicate using a small range of gestures. In the 'gesture-plus' condition, pairs could also use sound to develop a symbol system if they wished. All pairs were taught a pointing convention. None of the pairs we tested developed a symbol system, and performance was no different across the two conditions. We therefore suggest that deictic gestures, and non-referential means of organising activity sequences, are often sufficient for communication. This suggests that the emergence of linguistic symbols in early hominids may have been late and patchy with symbols only emerging in contexts where they could significantly improve task success or efficiency. Given the communicative power of pointing however, these contexts may be fewer than usually supposed. An approach for identifying these situations is outlined.
  • Isaac, A., Matthezing, H., Van der Meij, L., Schlobach, S., Wang, S., & Zinn, C. (2008). Putting ontology alignment in context: Usage, scenarios, deployment and evaluation in a library case. In S. Bechhofer, M. Hauswirth, J. Hoffmann, & M. Koubarakis (Eds.), The semantic web: Research and applications (pp. 402-417). Berlin: Springer.

    Abstract

    Thesaurus alignment plays an important role in realising efficient access to heterogeneous Cultural Heritage data. Current ontology alignment techniques, however, provide only limited value for such access as they consider little if any requirements from realistic use cases or application scenarios. In this paper, we focus on two real-world scenarios in a library context: thesaurus merging and book re-indexing. We identify their particular requirements and describe our approach of deploying and evaluating thesaurus alignment techniques in this context. We have applied our approach in the Ontology Alignment Evaluation Initiative, and report on the performance evaluation of participants’ tools with respect to the application scenario at hand. It shows that evaluating tools requires significant effort but, when done carefully, brings many benefits.
  • Janse, E. (2001). Comparing word-level intelligibility after linear vs. non-linear time-compression. In Proceedings of the VIIth European Conference on Speech Communication and Technology Eurospeech (pp. 1407-1410).
  • Janssen, R., Winter, B., Dediu, D., Moisik, S. R., & Roberts, S. G. (2016). Nonlinear biases in articulation constrain the design space of language. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/86.html.

    Abstract

    In Iterated Learning (IL) experiments, a participant’s learned output serves as the next participant’s learning input (Kirby et al., 2014). IL can be used to model cultural transmission and has indicated that weak biases can be amplified through repeated cultural transmission (Kirby et al., 2007). So, for example, structural language properties can emerge over time because languages come to reflect the cognitive constraints in the individuals that learn and produce the language. Similarly, we propose that languages may also reflect certain anatomical biases. Do sound systems adapt to the affordances of the articulation space induced by the vocal tract?
    The human vocal tract has inherent nonlinearities which might derive from acoustics and aerodynamics (cf. quantal theory, see Stevens, 1989) or biomechanics (cf. Gick & Moisik, 2015). For instance, moving the tongue anteriorly along the hard palate to produce a fricative does not result in large changes in acoustics in most cases, but for a small range there is an abrupt change from a perceived palato-alveolar [ʃ] to alveolar [s] sound (Perkell, 2012). Nonlinearities such as these might bias all human speakers to converge on a very limited set of phonetic categories, and might even be a basis for combinatoriality or phonemic ‘universals’.
    While IL typically uses discrete symbols, Verhoef et al. (2014) have used slide whistles to produce a continuous signal. We conducted an IL experiment with human subjects who communicated using a digital slide whistle for which the degree of nonlinearity is controlled. A single parameter (α) changes the mapping from slide whistle position (the ‘articulator’) to the acoustics. With α=0, the position of the slide whistle maps Bark-linearly to the acoustics. As α approaches 1, the mapping gets more double-sigmoidal, creating three plateaus where large ranges of positions map to similar frequencies. In more abstract terms, α represents the strength of a nonlinear (anatomical) bias in the vocal tract.
    Six chains (138 participants) of dyads were tested, each chain with a different, fixed α. Participants had to communicate four meanings by producing a continuous signal using the slide-whistle in a ‘director-matcher’ game, alternating roles (cf. Garrod et al., 2007).
    Results show that for high αs, subjects quickly converged on the plateaus. This quick convergence is indicative of a strong bias, repelling subjects away from unstable regions already within-subject. Furthermore, high αs lead to the emergence of signals that oscillate between two (out of three) plateaus. Because the sigmoidal spaces are spatially constrained, participants increasingly used the sequential/temporal dimension. As a result of this, the average duration of signals with high α was ~100ms longer than with low α. These oscillations could be an expression of a basis for phonemic combinatoriality.
    We have shown that it is possible to manipulate the magnitude of an articulator-induced non-linear bias in a slide whistle IL framework. The results suggest that anatomical biases might indeed constrain the design space of language. In particular, the signaling systems in our study quickly converged (within-subject) on the use of stable regions. While these conclusions were drawn from experiments using slide whistles with a relatively strong bias, weaker biases could possibly be amplified over time by repeated cultural transmission, and likely lead to similar outcomes.
  • Janssen, R., Dediu, D., & Moisik, S. R. (2016). Simple agents are able to replicate speech sounds using 3d vocal tract model. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/97.html.

    Abstract

    Many factors have been proposed to explain why groups of people use different speech sounds in their language. These range from cultural, cognitive, and environmental (e.g., Everett et al., 2015) to anatomical (e.g., vocal tract (VT) morphology). How could such anatomical properties have led to the similarities and differences in speech sound distributions between human languages?

    It is known that hard palate profile variation can induce different articulatory strategies in speakers (e.g., Brunner et al., 2009). That is, different hard palate profiles might induce a kind of bias on speech sound production, easing some types of sounds while impeding others. In a population of speakers in which a proportion of individuals share certain anatomical properties, even subtle VT biases might become expressed at the population level (through, e.g., bias amplification; Kirby et al., 2007). However, before we look into population-level effects, we should first look at within-individual anatomical factors. For that, we have developed a computer-simulated analogue for a human speaker: an agent. Our agent is designed to replicate speech sounds using a production and cognition module in a computationally tractable manner.

    Previous agent models have often used more abstract (e.g., symbolic) signals (e.g., Kirby et al., 2007). We have equipped our agent with a three-dimensional model of the VT (the production module, based on Birkholz, 2005), to which we made numerous adjustments. Specifically, we used a 4th-order Bezier curve that is able to capture hard palate variation on the mid-sagittal plane (XXX, 2015). Using an evolutionary algorithm, we were able to fit the model to human hard palate MRI tracings, yielding high-accuracy fits using as few as two parameters. Finally, we show that the samples map well-dispersed to the parameter space, demonstrating that the model cannot generate unrealistic profiles. We can thus use this procedure to import palate measurements into our agent’s production module to investigate the effects on acoustics. We can also exaggerate/introduce novel biases.

    Our agent is able to control the VT model using the cognition module.

    Previous research has focused on detailed neurocomputation (e.g., Kröger et al., 2014) that highlights e.g., neurobiological principles or speech recognition performance. However, the brain is not the focus of our current study. Furthermore, present-day computing throughput likely does not allow for large-scale deployment of these architectures, as required by the population model we are developing. Thus, the question whether a very simple cognition module is able to replicate sounds in a computationally tractable manner, and even generalize over novel stimuli, is one worthy of attention in its own right.

    Our agent’s cognition module is based on running an evolutionary algorithm on a large population of feed-forward neural networks (NNs). As such, (anatomical) bias strength can be thought of as an attractor basin area within the parameter-space the agent has to explore. The NN we used consists of a triple-layered (fully-connected), directed graph. The input layer (three neurons) receives the formant frequencies of a target sound. The output layer (12 neurons) projects to the articulators in the production module. A hidden layer (seven neurons) enables the network to deal with nonlinear dependencies. The Euclidean distance (first three formants) between target and replication is used as the fitness measure. Results show that sound replication is indeed possible, with Euclidean distance quickly approaching a close-to-zero asymptote.

    Statistical analysis should reveal if the agent can also: a) Generalize: Can it replicate sounds it was not exposed to during learning? b) Replicate consistently: Do different, isolated agents always converge on the same sounds? c) Deal with consolidation: Can it still learn new sounds after an extended learning phase (‘infancy’) has been terminated? Finally, a comparison with more complex models will be used to demonstrate robustness.
  • Jeske, J., Kember, H., & Cutler, A. (2016). Native and non-native English speakers' use of prosody to predict sentence endings. In Proceedings of the 16th Australasian International Conference on Speech Science and Technology (SST2016).
  • Jesse, A., & Johnson, E. K. (2008). Audiovisual alignment in child-directed speech facilitates word learning. In Proceedings of the International Conference on Auditory-Visual Speech Processing (pp. 101-106). Adelaide, Aust: Causal Productions.

    Abstract

    Adult-to-child interactions are often characterized by prosodically-exaggerated speech accompanied by visually captivating co-speech gestures. In a series of adult studies, we have shown that these gestures are linked in a sophisticated manner to the prosodic structure of adults' utterances. In the current study, we use the Preferential Looking Paradigm to demonstrate that two-year-olds can use the alignment of these gestures to speech to deduce the meaning of words.
  • Jordens, P., Matsuo, A., & Perdue, C. (2008). Comparing the acquisition of finiteness: A cross-linguistic approach. In B. Ahrenholz, U. Bredel, W. Klein, M. Rost-Roth, & R. Skiba (Eds.), Empirische Forschung und Theoriebildung: Beiträge aus Soziolinguistik, Gesprochene-Sprache- und Zweitspracherwerbsforschung: Festschrift für Norbert Dittmar (pp. 261-276). Frankfurt am Main: Lang.
  • Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2021). Prediction in bilingual children: The missing piece of the puzzle. In E. Kaan, & T. Grüter (Eds.), Prediction in Second Language Processing and Learning (pp. 116-137). Amsterdam: Benjamins.

    Abstract

    A wealth of studies has shown that more proficient monolingual speakers are better at predicting upcoming information during language comprehension. Similarly, prediction skills of adult second language (L2) speakers in their L2 have also been argued to be modulated by their L2 proficiency. How exactly language proficiency and prediction are linked, however, is yet to be systematically investigated. One group of language users which has the potential to provide invaluable insights into this link is bilingual children. In this paper, we compare bilingual children’s prediction skills with those of monolingual children and adult L2 speakers, and show how investigating bilingual children’s prediction skills may contribute to our understanding of how predictive processing works.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Ozyurek, A. (2021). Spatial language use predicts spatial memory of children: Evidence from sign, speech, and speech-plus-gesture. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 672-678). Vienna: Cognitive Science Society.

    Abstract

    There is a strong relation between children’s exposure to spatial terms and their later memory accuracy. In the current study, we tested whether the production of spatial terms by children themselves predicts memory accuracy and whether and how language modality of these encodings modulates memory accuracy differently. Hearing child speakers of Turkish and deaf child signers of Turkish Sign Language described pictures of objects in various spatial relations to each other and were later tested for their memory accuracy of these pictures in a surprise memory task. We found that having described the spatial relation between the objects predicted better memory accuracy. However, the modality of these descriptions in sign, speech, or speech-plus-gesture did not reveal differences in memory accuracy. We discuss the implications of these findings for the relation between spatial language, memory, and the modality of encoding.
  • Kember, H., Choi, J., & Cutler, A. (2016). Processing advantages for focused words in Korean. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 702-705).

    Abstract

    In Korean, focus is expressed in accentual phrasing. To ascertain whether words focused in this manner enjoy a processing advantage analogous to that conferred by focus as expressed in, e.g., English and Dutch, we devised sentences with target words in one of four conditions: prosodic focus, syntactic focus, prosodic + syntactic focus, and no focus as a control. Thirty-two native speakers of Korean listened to blocks of 10 sentences, then were presented visually with words and asked whether or not they had heard them. Overall, words with focus were recognised significantly faster and more accurately than unfocused words. In addition, words with syntactic focus or syntactic + prosodic focus were recognised faster than words with prosodic focus alone. As in other languages, Korean focus confers a processing advantage on the words carrying it. While prosodic focus does provide an advantage, however, syntactic focus appears to provide the greater beneficial effect for recognition memory.
  • Kempen, G. (1986). Beyond word processing. In E. Cluff, & G. Bunting (Eds.), Information management yearbook 1986 (pp. 178-181). London: IDPM Publications.
  • Kempen, G., & Harbusch, K. (2008). Comparing linguistic judgments and corpus frequencies as windows on grammatical competence: A study of argument linearization in German clauses. In A. Steube (Ed.), The discourse potential of underspecified structures (pp. 179-192). Berlin: Walter de Gruyter.

    Abstract

    We present an overview of several corpus studies we carried out into the frequencies of argument NP orderings in the midfield of subordinate and main clauses of German. Comparing the corpus frequencies with grammaticality ratings published by Keller (2000), we observe a “grammaticality–frequency gap”: Quite a few argument orderings with zero corpus frequency are nevertheless assigned medium-range grammaticality ratings. We propose an explanation in terms of a two-factor theory. First, we hypothesize that the grammatical induction component needs a sufficient number of exposures to a syntactic pattern to incorporate it into its repertoire of more or less stable rules of grammar. Moderately to highly frequent argument NP orderings are likely to have attained this status, but not their zero-frequency counterparts. This is why the latter argument sequences cannot be produced by the grammatical encoder and are absent from the corpora. Secondly, we assume that an extraneous (nonlinguistic) judgment process biases the ratings of moderately grammatical linear order patterns: Confronted with such structures, the informants produce their own “ideal delivery” variant of the to-be-rated target sentence and evaluate the similarity between the two versions. A high similarity score yielded by this judgment then exerts a positive bias on the grammaticality rating—a score that should not be mistaken for an authentic grammaticality rating. We conclude that, at least in the linearization domain studied here, the goal of gaining a clear view of the internal grammar of language users is best served by a combined strategy in which grammar rules are founded on structures that elicit moderate to high grammaticality ratings and attain at least moderate usage frequencies.
  • Kempen, G. (1986). Kunstmatige intelligentie en gezond verstand. In P. Hagoort, & R. Maessen (Eds.), Geest, computer, kunst (pp. 118-123). Utrecht: Stichting Grafiet.
  • Kempen, G. (1981). Taalpsychologie. In H. Duijker, & P. Vroon (Eds.), Codex Psychologicus (pp. 205-221). Amsterdam: Elsevier.
  • Kemps-Snijders, M., Klassmann, A., Zinn, C., Berck, P., Russel, A., & Wittenburg, P. (2008). Exploring and enriching a language resource archive via the web. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    The “download first, then process” paradigm is still the predominant working method amongst the research community. The web-based paradigm, however, offers many advantages from a tool development and data management perspective, as it allows quick adaptation to changing research environments. Moreover, new ways of combining tools and data are increasingly becoming available and will eventually enable a true web-based workflow approach, thus challenging the “download first, then process” paradigm. The necessary infrastructure for managing, exploring and enriching language resources via the Web will need to be delivered by projects like CLARIN and DARIAH.
  • Kemps-Snijders, M., Zinn, C., Ringersma, J., & Windhouwer, M. (2008). Ensuring semantic interoperability on lexical resources. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    In this paper, we describe a unifying approach to tackle data heterogeneity issues for lexica and related resources. We present LEXUS, our software that implements the Lexical Markup Framework (LMF) to uniformly describe and manage lexica of different structures. LEXUS also makes use of a central Data Category Registry (DCR) to address terminological issues with regard to linguistic concepts as well as the handling of working and object languages. Finally, we report on ViCoS, a LEXUS extension, providing support for the definition of arbitrary semantic relations between lexical entries or parts thereof.
  • Kemps-Snijders, M., Windhouwer, M., Wittenburg, P., & Wright, S. E. (2008). ISOcat: Corralling data categories in the wild. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    To achieve true interoperability for valuable linguistic resources, different levels of variation need to be addressed. ISO Technical Committee 37, Terminology and other language and content resources, is developing a Data Category Registry. This registry will provide a reusable set of data categories. A new implementation of the registry, dubbed ISOcat, is currently under construction. This paper briefly describes the new data model for data categories that will be introduced in this implementation, and then sketches the standardization process. Completed data categories can be reused by the community, either by selecting data categories through the ISOcat web interface, or via other tools that interact with the ISOcat system through one of its various Application Programming Interfaces. Linguistic resources that use data categories from the registry should include persistent references, e.g. in the metadata or schemata of the resource, which point back to their origin. These data category references can then be used to determine whether two or more resources share common semantics, thus providing a level of interoperability close to the source data and a promising layer for semantic alignment on higher levels.
  • Kidd, E., Bavin, E. L., & Rhodes, B. (2001). Two-year-olds' knowledge of verbs and argument structures. In M. Almgren, A. Barreña, M.-J. Ezeuzabarrena, I. Idiazabal, & B. MacWhinney (Eds.), Research on child language acquisition: Proceedings of the 8th Conference of the International Association for the Study of Child language (pp. 1368-1382). Somerville, MA: Cascadilla Press.
  • Kita, S., Danziger, E., & Stolz, C. (2001). Cultural specificity of spatial schemas, as manifested in spontaneous gestures. In M. Gattis (Ed.), Spatial Schemas and Abstract Thought (pp. 115-146). Cambridge, MA, USA: MIT Press.
  • Kita, S. (2001). Locally-anchored spatial gestures, version 2: Historical description of the local environment as a gesture elicitation task. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 132-135). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874647.

    Abstract

    Gesture is an integral part of face-to-face communication, and provides a rich area for cross-cultural comparison. “Locally-anchored spatial gestures” are gestures that are roughly oriented to the actual geographical direction of referents. For example, such gestures may point to a location or a thing, trace the shape of a path, or indicate the direction of a particular area. The goal of this task is to elicit locally-anchored spatial gestures across different cultures. The task follows an interview format, where one participant prompts another to talk in detail about a specific area that the main speaker knows well. The data can be used for additional purposes such as the investigation of demonstratives.
  • Kita, S. (2001). Recording recommendations for gesture studies. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 130-131). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Klaas, G. (2008). Hints and recommendations concerning field equipment. In A. Majid (Ed.), Field manual volume 11 (pp. vi-vii). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Klein, W. (2008). Sprache innerhalb und ausserhalb der Schule. In Deutschen Akademie für Sprache und Dichtung (Ed.), Jahrbuch 2007 (pp. 140-150). Darmstadt: Wallstein Verlag.
  • Klein, W. (2008). The topic situation. In B. Ahrenholz, U. Bredel, W. Klein, M. Rost-Roth, & R. Skiba (Eds.), Empirische Forschung und Theoriebildung: Beiträge aus Soziolinguistik, Gesprochene-Sprache- und Zweitspracherwerbsforschung: Festschrift für Norbert Dittmar (pp. 287-305). Frankfurt am Main: Lang.
  • Klein, W. (2008). Time in language, language in time. In P. Indefrey, & M. Gullberg (Eds.), Time to speak: Cognitive and neural prerequisites for time in language (pp. 1-12). Oxford: Blackwell.
  • Klein, W. (2021). Das „Heidelberger Forschungsprojekt Pidgin-Deutsch “und die Folgen. In B. Ahrenholz, & M. Rost-Roth (Eds.), Ein Blick zurück nach vorn: Frühe deutsche Forschung zu Zweitspracherwerb, Migration, Mehrsprachigkeit und zweitsprachbezogener Sprachdidaktik sowie ihre Bedeutung heute (pp. 50-95). Berlin: De Gruyter.
  • Klein, W., & Rath, R. (1981). Automatische Lemmatisierung deutscher Flexionsformen. In R. Herzog (Ed.), Computer in der Übersetzungswissenschaft (pp. 94-142). Frankfurt am Main, Bern: Verlag Peter Lang.
  • Klein, W. (2001). Das Ende vor Augen: Deutsch als Wissenschaftssprache. In F. Debus, F. Kollmann, & U. Pörken (Eds.), Deutsch als Wissenschaftssprache im 20. Jahrhundert (pp. 289-293). Mainz: Akademie der Wissenschaften und der Literatur.
  • Klein, W. (2001). Deiktische Orientierung. In M. Haspelmath, E. König, W. Oesterreicher, & W. Raible (Eds.), Sprachtypologie und sprachliche Universalien: Vol. 1/1 (pp. 575-590). Berlin: de Gruyter.
  • Klein, W. (1981). Eine kommentierte Bibliographie zur Computerlinguistik. In R. Herzog (Ed.), Computer in der Übersetzungswissenschaft (pp. 95-142). Frankfurt am Main: Lang.
  • Klein, W. (2001). Elementary forms of linguistic organisation. In S. Ward, & J. Trabant (Eds.), The origins of language (pp. 81-102). Berlin: Mouton de Gruyter.
  • Klein, W. (2001). Die Linguistik ist anders geworden. In S. Anschütz, S. Kanngießer, & G. Rickheit (Eds.), A Festschrift for Manfred Briegel: Spektren der Linguistik (pp. 51-72). Wiesbaden: Deutscher Universitätsverlag.
  • Klein, W., & Perdue, C. (1986). Comment résoudre une tâche verbale complexe avec peu de moyens linguistiques? In A. Giacomi, & D. Véronique (Eds.), Acquisition d'une langue étrangère (pp. 306-330). Aix-en-Provence: Service des Publications de l'Université de Provence.
  • Klein, W. (2008). Mündliche Textproduktion: Informationsorganisation in Texten. In N. Janich (Ed.), Textlinguistik: 15 Einführungen (pp. 217-235). Tübingen: Narr Verlag.
  • Klein, W. (1981). Knowing a language and knowing to communicate: A case study in foreign workers' communication. In A. Vermeer (Ed.), Language problems of minority groups (pp. 75-95). Tilburg: Tilburg University.
  • Klein, W. (2001). Lexicology and lexicography. In N. Smelser, & P. Baltes (Eds.), International encyclopedia of the social & behavioral sciences: Vol. 13 (pp. 8764-8768). Amsterdam: Elsevier Science.
  • Klein, W. (1981). Logik der Argumentation. In Institut für deutsche Sprache (Ed.), Dialogforschung: Jahrbuch 1980 des Instituts für deutsche Sprache (pp. 226-264). Düsseldorf: Schwann.
  • Klein, W. (1986). Intonation und Satzmodalität in einfachen Fällen: Einige Beobachtungen. In E. Slembek (Ed.), Miteinander sprechen und handeln: Festschrift für Hellmut Geissner (pp. 161-177). Königstein Ts.: Scriptor.
  • Klein, W. (1981). Some rules of regular ellipsis in German. In W. Klein, & W. J. M. Levelt (Eds.), Crossing the boundaries in linguistics: Studies presented to Manfred Bierwisch (pp. 51-78). Dordrecht: Reidel.
  • Klein, W. (2001). Second language acquisition. In N. Smelser, & P. Baltes (Eds.), International encyclopedia of the social & behavioral sciences: Vol. 20 (pp. 13768-13771). Amsterdam: Elsevier science.
  • Klein, W. (2001). Time and again. In C. Féry, & W. Sternefeld (Eds.), Audiatur vox sapientiae: A festschrift for Arnim von Stechow (pp. 267-286). Berlin: Akademie Verlag.
  • Klein, W. (2001). Typen und Konzepte des Spracherwerbs. In L. Götze, G. Helbig, G. Henrici, & H. Krumm (Eds.), Deutsch als Fremdsprache (pp. 604-616). Berlin: de Gruyter.
  • Kooijman, V., Johnson, E. K., & Cutler, A. (2008). Reflections on reflections of infant word recognition. In A. D. Friederici, & G. Thierry (Eds.), Early language development: Bridging brain and behaviour (pp. 91-114). Amsterdam: Benjamins.
  • Koutamanis, E., Kootstra, G. J., Dijkstra, T., & Unsworth, S. (2021). Lexical priming as evidence for language-nonselective access in the simultaneous bilingual child's lexicon. In D. Dionne, & L.-A. Vidal Covas (Eds.), BUCLD 45: Proceedings of the 45th annual Boston University Conference on Language Development (pp. 413-430). Somerville, MA: Cascadilla Press.
  • Kupisch, T., Pereira Soares, S. M., Puig-Mayenco, E., & Rothman, J. (2021). Multilingualism and Chomsky's Generative Grammar. In N. Allott (Ed.), A companion to Chomsky (pp. 232-242). doi:10.1002/9781119598732.ch15.

    Abstract

    Just as Einstein's general theory of relativity is concerned with explaining the basics of an observable experience, i.e., gravity, most people take for granted that Chomsky's theory of generative grammar (GG) is concerned with the basic nature of language. This chapter highlights a subset of central constructs in GG, showing how they have featured prominently in, and thus shaped, formal linguistic studies in multilingualism. Because multilingualism includes a wide range of nonmonolingual populations, the constructs are divided across child bilingualism and adult third language acquisition for greater coverage. In the case of the former, the chapter examines how poverty of the stimulus has been investigated. Using the nascent field of L3/Ln acquisition as the backdrop, it discusses how the GG constructs of I-language versus E-language sit at the core of debates regarding the very notion of what linguistic transfer and mental representations should be taken to be.
  • Lausberg, H., & Kita, S. (2001). Hemispheric specialization in nonverbal gesticulation investigated in patients with callosal disconnection. In C. Cavé, I. Guaïtella, & S. Santi (Eds.), Oralité et gestualité: Interactions et comportements multimodaux dans la communication. Actes du colloque ORAGE 2001 (pp. 266-270). Paris, France: Éditions L'Harmattan.
  • Lenkiewicz, P., Pereira, M., Freire, M., & Fernandes, J. (2008). Accelerating 3D medical image segmentation with high performance computing. In Proceedings of the IEEE International Workshops on Image Processing Theory, Tools and Applications - IPT (pp. 1-8).

    Abstract

    Digital processing of medical images has helped physicians and patients in recent years by allowing examination and diagnosis at a very precise level. Today, possibly its greatest contribution to modern healthcare is the use of high-performance computing architectures to process the huge amounts of data that modern acquisition devices can collect. This paper presents a parallel implementation of an image segmentation algorithm that operates on a computer cluster equipped with 10 processing units. Thanks to a well-organized distribution of the workload, we significantly shorten the execution time of the developed algorithm and reach a performance gain very close to linear.
  • Levelt, W. J. M. (2016). Localism versus holism. Historical origins of studying language in the brain. In R. Rubens, & M. Van Dijk (Eds.), Sartoniana vol. 29 (pp. 37-60). Ghent: Ghent University.

Share this page