Publications

  • McQueen, J. M., & Meyer, A. S. (2019). Key issues and future directions: Towards a comprehensive cognitive architecture for language use. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 85-96). Cambridge, MA: MIT Press.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Merkx, D., Frank, S., & Ernestus, M. (2019). Language learning using speech to image retrieval. In Proceedings of Interspeech 2019 (pp. 1841-1845). doi:10.21437/Interspeech.2019-3067.

    Abstract

    Humans learn language by interaction with their environment and listening to other humans. It should also be possible for computational models to learn language directly from speech, but so far most approaches require text. We improve on existing neural network approaches to create visually grounded embeddings for spoken utterances. Using a combination of a multi-layer GRU, importance sampling, cyclic learning rates, ensembling and vectorial self-attention, our results show a remarkable increase in image-caption retrieval performance over previous work. Furthermore, we investigate which layers in the model learn to recognise words in the input. We find that deeper network layers are better at encoding word presence, although the final layer has slightly lower performance. This shows that our visually grounded sentence encoder learns to recognise words from the input even though it is not explicitly trained for word recognition.
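
    Code sketch

    The encoder described in this abstract (a multi-layer GRU over acoustic frames with vectorial self-attention pooling) can be illustrated with a short PyTorch sketch. This is a hedged illustration under assumptions, not the authors' implementation: the layer sizes, the module and parameter names (SpeechEncoder, att, proj, n_mel), and the mel-spectrogram input are invented, and the training techniques mentioned in the abstract (importance sampling, cyclic learning rates, ensembling) are omitted.

    # Minimal sketch (not the authors' code) of a visually grounded speech encoder:
    # a multi-layer GRU over acoustic frames followed by vectorial self-attention
    # pooling, producing an utterance embedding that can be matched to image
    # embeddings with a ranking loss. All sizes and names are illustrative.
    import torch
    import torch.nn as nn

    class SpeechEncoder(nn.Module):
        def __init__(self, n_mel=40, hidden=512, layers=4, embed=1024):
            super().__init__()
            self.gru = nn.GRU(n_mel, hidden, num_layers=layers,
                              batch_first=True, bidirectional=True)
            # vectorial self-attention: one weight per time step *and* per channel
            self.att = nn.Sequential(nn.Linear(2 * hidden, 128), nn.Tanh(),
                                     nn.Linear(128, 2 * hidden))
            self.proj = nn.Linear(2 * hidden, embed)

        def forward(self, mel):                     # mel: (batch, frames, n_mel)
            states, _ = self.gru(mel)               # (batch, frames, 2*hidden)
            weights = torch.softmax(self.att(states), dim=1)
            pooled = (weights * states).sum(dim=1)  # attention-weighted pooling
            return nn.functional.normalize(self.proj(pooled), dim=-1)

    # Usage: cosine similarity between utterance and image embeddings would drive
    # image-caption retrieval (e.g., with a triplet/margin ranking loss).
    enc = SpeechEncoder()
    emb = enc(torch.randn(8, 300, 40))              # 8 utterances, 300 frames each
    print(emb.shape)                                # torch.Size([8, 1024])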
  • Merolla, D., & Ameka, F. K. (2012). Reflections on video fieldwork: The making of Verba Africana IV on the Ewe Hogbetsotso Festival. In D. Merolla, J. Jansen, & K. Nait-Zerrad (Eds.), Multimedia research and documentation of oral genres in Africa - The step forward (pp. 123-132). Münster: Lit.
  • Mitterer, H. (Ed.). (2012). Ecological aspects of speech perception [Research topic] [Special Issue]. Frontiers in Cognition.

    Abstract

    Our knowledge of speech perception is largely based on experiments conducted with carefully recorded clear speech presented under good listening conditions to undistracted listeners - a near-ideal situation, in other words. But everyday reality poses a different set of challenges. First of all, listeners may need to divide their attention between speech comprehension and another task (e.g., driving). Outside the laboratory, the speech signal is often slurred by less than careful pronunciation and the listener has to deal with background noise. Moreover, in a globalized world, listeners need to understand speech in more than their native language. Relatedly, the speakers we listen to often have a different language background, so we have to deal with a foreign or regional accent we are not familiar with. Finally, outside the laboratory, speech perception is not an end in itself, but rather a means of contributing to a conversation. Listeners not only need to understand the speech they are hearing, they also need to use this information to plan and time their own responses. For this special topic, we invite papers that address any of these ecological aspects of speech perception.
  • Moisik, S. R., Zhi Yun, D. P., & Dediu, D. (2019). Active adjustment of the cervical spine during pitch production compensates for shape: The ArtiVarK study. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 864-868). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    The anterior lordosis of the cervical spine is thought to contribute to pitch (fo) production by influencing cricoid rotation as a function of larynx height. This study examines the matter of inter-individual variation in cervical spine shape and whether this has an influence on how fo is produced along increasing or decreasing scales, using the ArtiVarK dataset, which contains real-time MRI pitch production data. We find that the cervical spine actively participates in fo production, but the amount of displacement depends on individual shape. In general, anterior spine motion (tending toward cervical lordosis) occurs for low fo, while posterior movement (tending towards cervical kyphosis) occurs for high fo.
  • Namjoshi, J., Tremblay, A., Broersma, M., Kim, S., & Cho, T. (2012). Influence of recent linguistic exposure on the segmentation of an unfamiliar language [Abstract]. Program abstracts from the 164th Meeting of the Acoustical Society of America published in the Journal of the Acoustical Society of America, 132(3), 1968.

    Abstract

    Studies have shown that listeners segmenting unfamiliar languages transfer native-language (L1) segmentation cues. These studies, however, conflated L1 and recent linguistic exposure. The present study investigates the relative influences of L1 and recent linguistic exposure on the use of prosodic cues for segmenting an artificial language (AL). Participants were L1-French listeners, high-proficiency L2-French L1-English listeners, and L1-English listeners without functional knowledge of French. The prosodic cue assessed was F0 rise, which is word-final in French, but in English tends to be word-initial. 30 participants heard a 20-minute AL speech stream with word-final boundaries marked by F0 rise, and decided in a subsequent listening task which of two words (without word-final F0 rise) had been heard in the speech stream. The analyses revealed a marginally significant effect of L1 (all listeners) and, importantly, a significant effect of recent linguistic exposure (L1-French and L2-French listeners): accuracy increased with decreasing time in the US since the listeners’ last significant (3+ months) stay in a French-speaking environment. Interestingly, no effect of L2 proficiency was found (L2-French listeners).
  • Narasimhan, B., Kopecka, A., Bowerman, M., Gullberg, M., & Majid, A. (2012). Putting and taking events: A crosslinguistic perspective. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 1-18). Amsterdam: Benjamins.
  • Narasimhan, B. (2012). Putting and Taking in Tamil and Hindi. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 201-230). Amsterdam: Benjamins.

    Abstract

    Many languages have general or “light” verbs used by speakers to describe a wide range of situations owing to their relatively schematic meanings, e.g., the English verb do that can be used to describe many different kinds of actions, or the verb put that labels a range of types of placement of objects at locations. Such semantically bleached verbs often become grammaticalized and used to encode an extended (set of) meaning(s), e.g., Tamil veyyii ‘put/place’ is used to encode causative meaning in periphrastic causatives (e.g., okkara veyyii ‘make sit’, nikka veyyii ‘make stand’). But do general verbs in different languages have the same kinds of (schematic) meanings and extensional ranges? Or do they reveal different, perhaps even cross-cutting, ways of structuring the same semantic domain in different languages? These questions require detailed crosslinguistic investigation using comparable methods of eliciting data. The present study is a first step in this direction, and focuses on the use of general verbs to describe events of placement and removal in two South Asian languages, Hindi and Tamil.
  • Nas, G., Kempen, G., & Hudson, P. (1984). De rol van spelling en klank bij woordherkenning tijdens het lezen. In A. Thomassen, L. Noordman, & P. Elling (Eds.), Het leesproces. Lisse: Swets & Zeitlinger.
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2019). ERP signal analysis with temporal resolution using a time window bank. In Proceedings of Interspeech 2019 (pp. 1208-1212). doi:10.21437/Interspeech.2019-2729.

    Abstract

    In order to study the cognitive processes underlying speech comprehension, neuro-physiological measures (e.g., EEG and MEG), or behavioural measures (e.g., reaction times and response accuracy) can be applied. Compared to behavioural measures, EEG signals can provide a more fine-grained and complementary view of the processes that take place during the unfolding of an auditory stimulus.

    EEG signals are often analysed after having chosen specific time windows, which are usually based on the temporal structure of ERP components expected to be sensitive to the experimental manipulation. However, as the timing of ERP components may vary between experiments, trials, and participants, such a priori defined analysis time windows may significantly hamper the exploratory power of the analysis of components of interest. In this paper, we explore a wide-window analysis method applied to EEG signals collected in an auditory repetition priming experiment.

    This approach is based on a bank of temporal filters arranged along the time axis in combination with linear mixed effects modelling. Crucially, it permits a temporal decomposition of effects in a single comprehensive statistical model which captures the entire EEG trace.
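
    Code sketch

    As an illustration of the time-window-bank idea (overlapping windows tiling the epoch, with one comprehensive mixed-effects model over the whole trace), here is a hedged Python sketch using simulated data. The window width and spacing, the column names, and the model formula are illustrative assumptions, not the authors' exact analysis pipeline.

    # Hedged sketch of a "time window bank" analysis: overlapping windows tile the
    # epoch, the mean EEG amplitude per window becomes the response, and window,
    # condition, and their interaction enter one mixed-effects model with random
    # intercepts per participant.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    sfreq, epoch_len = 250, 1.0                      # Hz, seconds
    times = np.arange(0, epoch_len, 1 / sfreq)
    win_centers = np.arange(0.05, epoch_len, 0.05)   # bank of windows every 50 ms
    win_half = 0.05                                  # 100-ms-wide windows

    rows = []
    for subj in range(10):
        for cond in ("primed", "unprimed"):
            # toy EEG epoch: a condition effect around 400 ms plus noise
            eeg = rng.normal(0, 1, times.size)
            if cond == "primed":
                eeg += 0.8 * np.exp(-((times - 0.4) ** 2) / 0.005)
            for c in win_centers:
                sel = (times >= c - win_half) & (times < c + win_half)
                rows.append(dict(subject=subj, condition=cond,
                                 window=float(c), amplitude=eeg[sel].mean()))

    df = pd.DataFrame(rows)
    # One comprehensive model over the whole trace: condition effect per window.
    model = smf.mixedlm("amplitude ~ C(condition) * C(window)", df,
                        groups=df["subject"]).fit()
    print(model.summary())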
  • Nijveld, A. (2019). The role of exemplars in speech comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Nordhoff, S., & Hammarström, H. (2012). Glottolog/Langdoc: Increasing the visibility of grey literature for low-density languages. In N. Calzolari (Ed.), Proceedings of the 8th International Conference on Language Resources and Evaluation [LREC 2012], May 23-25, 2012 (pp. 3289-3294). [Paris]: ELRA.

    Abstract

    Language resources can be divided into structural resources treating phonology, morphosyntax, semantics etc. and resources treating the social, demographic, ethnic, political context. A third type consists of meta-resources, like bibliographies, which provide access to the resources of the first two kinds. This poster will present the Glottolog/Langdoc project, a comprehensive bibliography providing web access to 180k bibliographical records for (mainly) low-visibility resources from low-density languages. The resources are annotated for macro-area, content language, and document type and are available in XHTML and RDF.
  • Nouaouri, N. (2012). The semantics of placement and removal predicates in Moroccan Arabic. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 99-122). Amsterdam: Benjamins.

    Abstract

    This article explores the expression of placement and removal events in Moroccan Arabic, particularly the semantic features of ‘putting’ and ‘taking’ verbs, classified in accordance with their combination with Goal and/or Source NPs. Moroccan Arabic verbs encode a variety of components of placement and removal events, including containment, attachment, features of the figure, and trajectory. Furthermore, accidental events are distinguished from deliberate events either by the inherent semantics of the predicates or by syntactic means. The postures of the Figures, in spite of some predicates distinguishing them, are typically not specified as they are in other languages, such as Dutch. Although Ground locations are frequently mentioned in both source-oriented and goal-oriented clauses, they are used more often in goal-oriented clauses.
  • O’Connor, L. (2012). Take it up, down, and away: Encoding placement and removal in Lowland Chontal. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 297-326). Amsterdam: Benjamins.

    Abstract

    This paper offers a structural and semantic analysis of expressions of caused motion in Lowland Chontal of Oaxaca, an indigenous language of southern Mexico. The data were collected using a video stimulus designed to elicit a wide range of caused motion event descriptions. The most frequent event types in the corpus depict caused motion to and from relations of support and containment, fundamental notions in the description of spatial relations between two entities and critical semantic components of the linguistic encoding of caused motion in this language. Formal features of verbal construction type and argument realization are examined by sorting event descriptions into semantic types of placement and removal, to and from support and to and from containment. Together with typological factors that shape the distribution of spatial semantics and referent expression, separate treatments of support and containment relations serve to clarify notable asymmetries in patterns of predicate type and argument realization.
  • O'Meara, C., Speed, L. J., San Roque, L., & Majid, A. (2019). Perception Metaphors: A view from diversity. In L. J. Speed, C. O'Meara, L. San Roque, & A. Majid (Eds.), Perception Metaphors (pp. 1-16). Amsterdam: Benjamins.

    Abstract

    Our bodily experiences play an important role in the way that we think and speak. Abstract language is, however, difficult to reconcile with this body-centred view, unless we appreciate the role metaphors play. To explore the role of the senses across semantic domains, we focus on perception metaphors, and examine their realisation across diverse languages, methods, and approaches. To what extent do mappings in perception metaphor adhere to predictions based on our biological propensities; and to what extent is there space for cross-linguistic and cross-cultural variation? We find that while some metaphors have widespread commonality, there is more diversity attested than should be comfortable for universalist accounts.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A. (2012). Gesture. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign language: An international handbook (pp. 626-646). Berlin: Mouton.

    Abstract

    Gestures are meaningful movements of the body, the hands, and the face during communication, which accompany the production of both spoken and signed utterances. Recent research has shown that gestures are an integral part of language and that they contribute semantic, syntactic, and pragmatic information to the linguistic utterance. Furthermore, they reveal internal representations of the language user during communication in ways that might not be encoded in the verbal part of the utterance. Firstly, this chapter summarizes research on the role of gesture in spoken languages. Subsequently, it gives an overview of how gestural components might manifest themselves in sign languages, that is, in a situation in which both gesture and sign are expressed by the same articulators. Current studies are discussed that address the question of whether gestural components are the same or different in the two language modalities from a semiotic as well as from a cognitive and processing viewpoint. Understanding the role of gesture in both sign and spoken language contributes to our knowledge of the human language faculty as a multimodal communication system.
  • Ozyurek, A., & Woll, B. (2019). Language in the visual modality: Cospeech gesture and sign language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 67-83). Cambridge, MA: MIT Press.
  • Parhammer*, S. I., Ebersberg*, M., Tippmann*, J., Stärk*, K., Opitz, A., Hinger, B., & Rossi, S. (2019). The influence of distraction on speech processing: How selective is selective attention? In Proceedings of Interspeech 2019 (pp. 3093-3097). doi:10.21437/Interspeech.2019-2699.

    Abstract

    (* indicates shared first authorship)
    The present study investigated the effects of selective attention on the processing of morphosyntactic errors in unattended parts of speech. Two groups of German native (L1) speakers participated in the present study. Participants listened to sentences in which irregular verbs were manipulated in three different conditions (correct, incorrect but attested ablaut pattern, incorrect and crosslinguistically unattested ablaut pattern). In order to track fast dynamic neural reactions to the stimuli, electroencephalography was used. After each sentence, participants in Experiment 1 performed a semantic judgement task, which deliberately distracted the participants from the syntactic manipulations and directed their attention to the semantic content of the sentence. In Experiment 2, participants carried out a syntactic judgement task, which put their attention on the critical stimuli. The use of two different attentional tasks allowed for investigating the impact of selective attention on speech processing and whether morphosyntactic processing steps are performed automatically. In Experiment 2, the incorrect attested condition elicited a larger N400 component compared to the correct condition, whereas in Experiment 1 no differences between conditions were found. These results suggest that the processing of morphosyntactic violations in irregular verbs is not entirely automatic but seems to be strongly affected by selective attention.
  • Peeters, D., Vanlangendonck, F., & Willems, R. M. (2012). Bestaat er een talenknobbel? Over taal in ons brein. In M. Boogaard, & M. Jansen (Eds.), Alles wat je altijd al had willen weten over taal: De taalcanon (pp. 41-43). Amsterdam: Meulenhoff.

    Abstract

    When someone is good at speaking several languages, it is often said that such a person has a 'talenknobbel' - literally, a bump for languages. Everyone knows this is not meant literally: we do not recognise someone with a gift for languages by a large bump on their head. Yet people once genuinely believed that a literal language bump could develop. A well-developed language faculty was thought to go hand in hand with growth of the brain region responsible for it. This part of the brain could even become so large that it pressed against the skull from the inside, particularly around the eyes. We now know better. But where in the brain, then, is language actually located?
  • Perniss, P. M. (2012). Use of sign space. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign Language: an International Handbook (pp. 412-431). Berlin: Mouton de Gruyter.

    Abstract

    This chapter focuses on the semantic and pragmatic uses of space. The questions addressed concern how sign space (i.e. the area of space in front of the signer’s body) is used for meaning construction, how locations in sign space are associated with discourse referents, and how signers choose to structure sign space for their communicative intents. The chapter gives an overview of linguistic analyses of the use of space, starting with the distinction between syntactic and topographic uses of space and the different types of signs that function to establish referent-location associations, and moving to analyses based on mental spaces and conceptual blending theories. Semantic-pragmatic conventions for organizing sign space are discussed, as well as spatial devices notable in the visual-spatial modality (particularly, classifier predicates and signing perspective), which influence and determine the way meaning is created in sign space. Finally, the special role of simultaneity in sign languages is discussed, focusing on the semantic and discourse-pragmatic functions of simultaneous constructions.
  • Petersen, J. H. (2012). How to put and take in Kalasha. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 349-366). Amsterdam: Benjamins.

    Abstract

    In Kalasha, an Indo-Aryan language spoken in Northwest Pakistan, the linguistic encoding of ‘put’ and ‘take’ events reveals a symmetry between lexical ‘put’ and ‘take’ verbs that implies ‘placement on’ and ‘removal from’ a supporting surface. As regards ‘placement in’ and ‘removal from’ an enclosure, the data reveal a lexical asymmetry as ‘take’ verbs display a larger degree of linguistic elaboration of the Figure-Ground relation and the type of caused motion than ‘put’ verbs. When considering syntactic patterns, more instances of asymmetry between these two event types show up. The analysis presented here supports the proposal that an asymmetry exists in the encoding of goals versus sources as suggested in Nam (2004) and Ikegami (1987), but it calls into question the statement put forward by Regier and Zheng (2007) that endpoints (goals) are more finely differentiated semantically than starting points (sources).
  • Piai, V., & Zheng, X. (2019). Speaking waves: Neuronal oscillations in language production. In K. D. Federmeier (Ed.), Psychology of Learning and Motivation (pp. 265-302). Elsevier.

    Abstract

    Language production involves the retrieval of information from memory, the planning of an articulatory program, and executive control and self-monitoring. These processes can be related to the domains of long-term memory, motor control, and executive control. Here, we argue that studying neuronal oscillations provides an important opportunity to understand how general neuronal computational principles support language production, also helping elucidate relationships between language and other domains of cognition. For each relevant domain, we provide a brief review of the findings in the literature with respect to neuronal oscillations. Then, we show how similar patterns are found in the domain of language production, both through review of previous literature and novel findings. We conclude that neurophysiological mechanisms, as reflected in modulations of neuronal oscillations, may act as a fundamental basis for bringing together and enriching the fields of language and cognition.
  • Poellmann, K., McQueen, J. M., & Mitterer, H. (2012). How talker-adaptation helps listeners recognize reduced word-forms [Abstract]. Program abstracts from the 164th Meeting of the Acoustical Society of America published in the Journal of the Acoustical Society of America, 132(3), 2053.

    Abstract

    Two eye-tracking experiments tested whether native listeners can adapt to reductions in casual Dutch speech. Listeners were exposed to segmental ([b] > [m]), syllabic (full-vowel-deletion), or no reductions. In a subsequent test phase, all three listener groups were tested on how efficiently they could recognize both types of reduced words. In the first experiment’s exposure phase, the (un)reduced target words were predictable. The segmental reductions were completely consistent (i.e., involved the same input sequences). Learning about them was found to be pattern-specific and generalized in the test phase to new reduced /b/-words. The syllabic reductions were not consistent (i.e., involved variable input sequences). Learning about them was weak and not pattern-specific. Experiment 2 examined effects of word repetition and predictability. The (un)reduced test words appeared in the exposure phase and were not predictable. There was no evidence of learning for the segmental reductions, probably because they were not predictable during exposure. But there was word-specific learning for the vowel-deleted words. The results suggest that learning about reductions is pattern-specific and generalizes to new words if the input is consistent and predictable. With variable input, adaptation to a general speaking style and word-specific learning are more likely.
  • Poort, E. D. (2019). The representation of cognates and interlingual homographs in the bilingual lexicon. PhD Thesis, University College London, London, UK.

    Abstract

    Cognates and interlingual homographs are words that exist in multiple languages. Cognates, like “wolf” in Dutch and English, also carry the same meaning. Interlingual homographs do not: the word “angel” in English refers to a spiritual being, but in Dutch to the sting of a bee. The six experiments included in this thesis examined how these words are represented in the bilingual mental lexicon. Experiment 1 and 2 investigated the issue of task effects on the processing of cognates. Bilinguals often process cognates more quickly than single-language control words (like “carrot”, which exists in English but not Dutch). These experiments showed that the size of this cognate facilitation effect depends on the other types of stimuli included in the task. These task effects were most likely due to response competition, indicating that cognates are subject to processes of facilitation and inhibition both within the lexicon and at the level of decision making. Experiment 3 and 4 examined whether seeing a cognate or interlingual homograph in one’s native language affects subsequent processing in one’s second language. This method was used to determine whether non-identical cognates share a form representation. These experiments were inconclusive: they revealed no effect of cross-lingual long-term priming. Most likely this was because a lexical decision task was used to probe an effect that is largely semantic in nature. Given these caveats to using lexical decision tasks, two final experiments used a semantic relatedness task instead. Both experiments revealed evidence for an interlingual homograph inhibition effect but no cognate facilitation effect. Furthermore, the second experiment found evidence for a small effect of cross-lingual long-term priming. After comparing these findings to the monolingual literature on semantic ambiguity resolution, this thesis concludes that it is necessary to explore the viability of a distributed connectionist account of the bilingual mental lexicon.

    Additional information

    full text via UCL
  • Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2019). Acoustic specification of upper limb movement in voicing. In A. Grimminger (Ed.), Proceedings of the 6th Gesture and Speech in Interaction – GESPIN 6 (pp. 68-74). Paderborn: Universitaetsbibliothek Paderborn. doi:10.17619/UNIPB/1-812.
  • Pouw, W., & Dixon, J. A. (2019). Quantifying gesture-speech synchrony. In A. Grimminger (Ed.), Proceedings of the 6th Gesture and Speech in Interaction – GESPIN 6 (pp. 75-80). Paderborn: Universitaetsbibliothek Paderborn. doi:10.17619/UNIPB/1-812.

    Abstract

    Spontaneously occurring speech is often seamlessly accompanied by hand gestures. Detailed observations of video data suggest that speech and gesture are tightly synchronized in time, consistent with a dynamic interplay between body and mind. However, spontaneous gesture-speech synchrony has rarely been objectively quantified beyond analyses of video data, which do not allow for identification of kinematic properties of gestures. Consequently, the point in gesture which is held to couple with speech, the so-called moment of “maximum effort”, has been variably equated with the peak velocity, peak acceleration, peak deceleration, or the onset of the gesture. In the current exploratory report, we provide novel evidence from motion-tracking and acoustic data that peak velocity is closely aligned with, and shortly leads, the peak pitch (F0) of speech.

    Additional information

    https://osf.io/9843h/
  • Puccini, D., Hassemer, M., Salomo, D., & Liszkowski, U. (2012). The type of shared activity shapes caregiver and infant communication [Reprint]. In J.-M. Colletta, & M. Guidetti (Eds.), Gesture and multimodal development (pp. 157-174). Amsterdam: John Benjamins.

    Abstract

    For the beginning language learner, communicative input is not based on linguistic codes alone. This study investigated two extralinguistic factors which are important for infants’ language development: the type of ongoing shared activity and non-verbal, deictic gestures. The natural interactions of 39 caregivers and their 12-month-old infants were recorded in two semi-natural contexts: a free play situation based on action and manipulation of objects, and a situation based on regard of objects, broadly analogous to an exhibit. Results show that the type of shared activity structures both caregivers’ language usage and caregivers’ and infants’ gesture usage. Further, there is a specific pattern with regard to how caregivers integrate speech with particular deictic gesture types. The findings demonstrate a pervasive influence of shared activities on human communication, even before language has emerged. The type of shared activity and caregivers’ systematic integration of specific forms of deictic gestures with language provide infants with a multimodal scaffold for a usage-based acquisition of language.
  • Rakoczy, H., & Haun, D. B. M. (2012). Vor- und nichtsprachliche Kognition. In W. Schneider, & U. Lindenberger (Eds.), Entwicklungspsychologie. 7. vollständig überarbeitete Auflage (pp. 337-362). Weinheim: Beltz Verlag.
  • Rapold, C. J. (2012). The encoding of placement and removal events in ǂAkhoe Haiǁom. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 79-98). Amsterdam: Benjamins.

    Abstract

    This paper explores the semantics of placement and removal verbs in ǂAkhoe Haiǁom based on event descriptions elicited with a set of video stimuli. After a brief sketch of the morphosyntax of placement/removal constructions in ǂAkhoe Haiǁom, four situation types are identified semantically that cover both placement and removal events. The language exhibits a clear tendency to make more fine-grained semantic distinctions in placement verbs, as opposed to semantically more general removal verbs.
  • Ravignani, A., & Fitch, W. T. (2012). Sonification of experimental parameters as a new method for efficient coding of behavior. In A. Spink, F. Grieco, O. E. Krips, L. W. S. Loijens, L. P. P. J. Noldus, & P. H. Zimmerman (Eds.), Measuring Behavior 2012, 8th International Conference on Methods and Techniques in Behavioral Research (pp. 376-379).

    Abstract

    Cognitive research is often focused on experimental condition-driven reactions. Ethological studies frequently rely on the observation of naturally occurring specific behaviors. In both cases, subjects are filmed during the study, so that afterwards behaviors can be coded on video. Coding should typically be blind to experimental conditions, but often requires more information than that present on video. We introduce a method for blind coding of behavioral videos that takes care of both issues via three main innovations. First, of particular significance for playback studies, it allows creation of a “soundtrack” of the study, that is, a track composed of synthesized sounds representing different aspects of the experimental conditions, or other events, over time. Second, it facilitates coding behavior using this audio track, together with the possibly muted original video. This enables coding blindly to conditions as required, but not ignoring other relevant events. Third, our method makes use of freely available, multi-platform software, including scripts we developed.
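
    Code sketch

    The core sonification idea (synthesized tones marking experimental events over time, so that videos can be coded blind to the visible conditions) can be sketched in a few lines of Python using only the standard library and NumPy. The event log, tone frequencies, and output filename below are invented for illustration; this is not the authors' released scripts.

    # Hedged sketch of the sonification idea: each experimental event gets a
    # distinctive synthesized tone at its onset time, and the resulting
    # "soundtrack" can be muxed with the muted video so coders hear events
    # without seeing condition labels.
    import wave
    import numpy as np

    SR = 44100  # samples per second

    def tone(freq_hz, dur_s, amp=0.4):
        t = np.arange(int(SR * dur_s)) / SR
        return amp * np.sin(2 * np.pi * freq_hz * t)

    # (onset in seconds, event label) -- illustrative event log
    events = [(2.0, "playback_A"), (5.5, "playback_B"), (9.0, "playback_A")]
    event_tones = {"playback_A": 440.0, "playback_B": 880.0}  # one pitch per event type

    track = np.zeros(int(SR * 12.0))                 # 12-second soundtrack
    for onset, label in events:
        snd = tone(event_tones[label], dur_s=0.5)
        start = int(SR * onset)
        track[start:start + snd.size] += snd

    pcm = (np.clip(track, -1, 1) * 32767).astype(np.int16)
    with wave.open("soundtrack.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)          # 16-bit PCM
        w.setframerate(SR)
        w.writeframes(pcm.tobytes())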
  • Ravignani, A., Chiandetti, C., & Kotz, S. (2019). Rhythm and music in animal signals. In J. Choe (Ed.), Encyclopedia of Animal Behavior (vol. 1) (2nd ed., pp. 615-622). Amsterdam: Elsevier.
  • Rissman, L., & Majid, A. (2019). Agency drives category structure in instrumental events. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2661-2667). Montreal, QB: Cognitive Science Society.

    Abstract

    Thematic roles such as Agent and Instrument have a long-standing place in theories of event representation. Nonetheless, the structure of these categories has been difficult to determine. We investigated how instrumental events, such as someone slicing bread with a knife, are categorized in English. Speakers described a variety of typical and atypical instrumental events, and we determined the similarity structure of their descriptions using correspondence analysis. We found that events where the instrument is an extension of an intentional agent were most likely to elicit similar language, highlighting the importance of agency in structuring instrumental categories.
  • Roberts, L., & Meyer, A. S. (Eds.). (2012). Individual differences in second language acquisition [Special Issue]. Language Learning, 62(Supplement S2).
  • Roberts, L. (2012). Sentence and discourse processing in second language comprehension. In C. A. Chapelle (Ed.), Encyclopedia of Applied Linguistics. Chicester: Wiley-Blackwell. doi:10.1002/9781405198431.wbeal1063.

    Abstract

    In applied linguistics (AL), researchers have always been concerned with second language (L2) learners' knowledge of the target language (TL), investigating the development of TL grammar, vocabulary, and phonology, for instance.
  • Rojas-Berscia, L. M. (2019). From Kawapanan to Shawi: Topics in language variation and change. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Rojas-Berscia, L. M. (2019). Nominalization in Shawi/Chayahuita. In R. Zariquiey, M. Shibatani, & D. W. Fleck (Eds.), Nominalization in languages of the Americas (pp. 491-514). Amsterdam: Benjamins.

    Abstract

    This paper deals with the Shawi nominalizing suffixes -su’~-ru’~-nu’ ‘general nominalizer’, -napi/-te’/-tun ‘performer/agent nominalizer’, -pi’ ‘patient nominalizer’, and -nan ‘instrument nominalizer’. The goal of this article is to provide a description of nominalization in Shawi. Throughout this paper I apply the Generalized Scale Model (GSM) (Malchukov, 2006) to Shawi verbal nominalizations, with the intention of presenting a formal representation that will provide a basis for future areal and typological studies of nominalization. In addition, I dialogue with Shibatani’s model to see how the loss or gain of categories correlates with the lexical or grammatical nature of nominalizations: strong nominalization in Shawi correlates with lexical nominalization, whereas weak nominalizations correlate with grammatical nominalization. A typology which takes into account the productivity of the nominalizers is also discussed.
  • Rossano, F. (2012). Gaze behavior in face-to-face interaction. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    What do our eyes do when we talk with other people? In his dissertation, Federico Rossano describes how people use their eyes during face-to-face interaction. Our gaze behaviour turns out to be remarkably orderly and predictable: it is possible, for example, to elicit a response using only the eyes when the conversational partner does not react immediately. Participants also coordinate their gaze in specific ways when, for instance, a question-answer sequence comes to an end. Furthermore, listening to a story and listening to a question have different implications for gaze behaviour. The dissertation therefore contains important information for experts in artificial intelligence and computer science: the predictability and reproducibility of natural gaze behaviour can be used, among other things, in the development of robots or avatars.

    Additional information

    full text via Radboud Repository
  • Rowland, C. F., & Kidd, E. (2019). Key issues and future directions: How do children acquire language? In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 181-185). Cambridge, MA: MIT Press.
  • Rubio-Fernández, P. (2019). Theory of mind. In C. Cummins, & N. Katsos (Eds.), The Handbook of Experimental Semantics and Pragmatics (pp. 524-536). Oxford: Oxford University Press.
  • De Ruiter, J. P. (1998). Gesture and speech production. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057686.
  • De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Newman-Norlund, R., Hagoort, P., Levinson, S. C., & Toni, I. (2012). Exploring the cognitive infrastructure of communication. In B. Galantucci, & S. Garrod (Eds.), Experimental Semiotics: Studies on the emergence and evolution of human communication (pp. 51-78). Amsterdam: Benjamins.

    Abstract

    Human communication is often thought about in terms of transmitted messages in a conventional code like a language. But communication requires a specialized interactive intelligence. Senders have to be able to perform recipient design, while receivers need to be able to do intention recognition, knowing that recipient design has taken place. To study this interactive intelligence in the lab, we developed a new task that taps directly into the underlying abilities to communicate in the absence of a conventional code. We show that subjects are remarkably successful communicators under these conditions, especially when senders get feedback from receivers. Signaling is accomplished by the manner in which an instrumental action is performed, such that instrumentally dysfunctional components of an action are used to convey communicative intentions. The findings have important implications for the nature of the human communicative infrastructure, and the task opens up a line of experimentation on human communication.

  • Scharenborg, O., Witteman, M. J., & Weber, A. (2012). Computational modelling of the recognition of foreign-accented speech. In Proceedings of INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association (pp. 882 -885).

    Abstract

    In foreign-accented speech, pronunciation typically deviates from the canonical form to some degree. For native listeners, it has been shown that word recognition is more difficult for strongly-accented words than for less strongly-accented words. Furthermore, recognition of strongly-accented words becomes easier with additional exposure to the foreign accent. In this paper, listeners’ behaviour was simulated with Fine-Tracker, a computational model of word recognition that uses real speech as input. The simulations showed that, in line with human listeners, 1) Fine-Tracker’s recognition outcome is modulated by the degree of accentedness and 2) it improves slightly after brief exposure to the accent. On the level of individual words, however, Fine-Tracker failed to correctly simulate listeners’ behaviour, possibly due to differences in overall familiarity with the chosen accent (German-accented Dutch) between human listeners and Fine-Tracker.
  • Scharenborg, O., & Janse, E. (2012). Hearing loss and the use of acoustic cues in phonetic categorisation of fricatives. In Proceedings of INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association (pp. 1458-1461).

    Abstract

    Aging often affects sensitivity to the higher frequencies, which results in the loss of sensitivity to phonetic detail in speech. Hearing loss may therefore interfere with the categorisation of two consonants that have most information to differentiate between them in those higher frequencies and less in the lower frequencies, e.g., /f/ and /s/. We investigate two acoustic cues, i.e., formant transitions and fricative intensity, that older listeners might use to differentiate between /f/ and /s/. The results of two phonetic categorisation tasks on 38 older listeners (aged 60+) with varying degrees of hearing loss indicate that older listeners seem to use formant transitions as a cue to distinguish /s/ from /f/. Moreover, this ability is not impacted by hearing loss. On the other hand, listeners with increased hearing loss seem to rely more on intensity for fricative identification. Thus, progressive hearing loss may lead to gradual changes in perceptual cue weighting.
  • Scharenborg, O., Janse, E., & Weber, A. (2012). Perceptual learning of /f/-/s/ by older listeners. In Proceedings of INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association (pp. 398-401).

    Abstract

    Young listeners can quickly modify their interpretation of a speech sound when a talker produces the sound ambiguously. Young Dutch listeners rely mainly on the higher frequencies to distinguish between /f/ and /s/, but these higher frequencies are particularly vulnerable to age-related hearing loss. We therefore tested whether older Dutch listeners can show perceptual retuning given an ambiguous pronunciation in between /f/ and /s/. Results of a lexically-guided perceptual learning experiment showed that older Dutch listeners are still able to learn non-standard pronunciations of /f/ and /s/. Possibly, the older listeners have learned to rely on other acoustic cues, such as formant transitions, to distinguish between /f/ and /s/. However, the size and duration of the perceptual effect is influenced by hearing loss, with listeners with poorer hearing showing a smaller and a shorter-lived learning effect.
  • Schimke, S., Verhagen, J., & Turco, G. (2012). The different role of additive and negative particles in the development of finiteness in early adult L2 German and L2 Dutch. In M. Watorek, S. Benazzo, & M. Hickmann (Eds.), Comparative perspectives on language acquisition: A tribute to Clive Perdue (pp. 73-91). Bristol: Multilingual Matters.
  • Schoenmakers, G.-J., & De Swart, P. (2019). Adverbial hurdles in Dutch scrambling. In A. Gattnar, R. Hörnig, M. Störzer, & S. Featherston (Eds.), Proceedings of Linguistic Evidence 2018: Experimental Data Drives Linguistic Theory (pp. 124-145). Tübingen: University of Tübingen.

    Abstract

    This paper addresses the role of the adverb in Dutch direct object scrambling constructions. We report four experiments in which we investigate whether the structural position and the scope sensitivity of the adverb affect acceptability judgments of scrambling constructions and native speakers' tendency to scramble definite objects. We conclude that the type of adverb plays a key role in Dutch word ordering preferences.
  • Schuerman, W. L., McQueen, J. M., & Meyer, A. S. (2019). Speaker statistical averageness modulates word recognition in adverse listening conditions. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1203-1207). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    We tested whether statistical averageness (SA) at the level of the individual speaker could predict a speaker’s intelligibility. 28 female and 21 male speakers of Dutch were recorded producing 336 sentences, each containing two target nouns. Recordings were compared to those of all other same-sex speakers using dynamic time warping (DTW). For each sentence, the DTW distance constituted a metric of phonetic distance from one speaker to all other speakers. SA comprised the average of these distances. Later, the same participants performed a word recognition task on the target nouns in the same sentences, under three degraded listening conditions. In all three conditions, accuracy increased with SA. This held even when participants listened to their own utterances. These findings suggest that listeners process speech with respect to the statistical properties of the language spoken in their community, rather than using their own speech as a reference.
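
    Code sketch

    The statistical averageness (SA) measure described above is straightforward to illustrate: compute a DTW distance between one speaker's rendition of a sentence and every other speaker's rendition, then average. The hedged Python sketch below uses a plain textbook DTW over placeholder feature matrices; the study's actual feature extraction, normalisation, and same-sex grouping are not reproduced.

    # Hedged sketch of the SA computation: for one sentence, each speaker's
    # acoustic feature matrix (e.g., MFCC frames) is compared to every other
    # speaker's rendition with dynamic time warping, and SA is the mean of
    # those distances (lower = more "average" speaker).
    import numpy as np

    def dtw_distance(a, b):
        """DTW alignment cost between feature matrices a (n,d) and b (m,d)."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])    # local frame distance
                cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                     cost[i, j - 1],       # deletion
                                     cost[i - 1, j - 1])   # match
        return cost[n, m] / (n + m)                        # length-normalised

    def statistical_averageness(features_by_speaker):
        """Mean DTW distance from each speaker to all other speakers."""
        speakers = list(features_by_speaker)
        sa = {}
        for s in speakers:
            others = [dtw_distance(features_by_speaker[s], features_by_speaker[o])
                      for o in speakers if o != s]
            sa[s] = float(np.mean(others))
        return sa

    # Toy usage: random "MFCC" matrices standing in for one sentence per speaker.
    rng = np.random.default_rng(1)
    feats = {f"spk{i}": rng.normal(size=(rng.integers(80, 120), 13)) for i in range(5)}
    print(statistical_averageness(feats))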
  • Segaert, K. (2012). Structuring language: Contributions to the neurocognition of syntax. PhD Thesis, Radboud University, Nijmegen, the Netherlands.

    Abstract

    Speakers have a strong tendency to reuse syntactic structures in new sentences. When we describe a situation with a passive sentence, for example 'The woman is greeted by the man' ('De vrouw wordt begroet door de man'), we will more readily use a passive sentence again to describe a new situation. The tendency to reuse is especially strong for difficult syntactic structures; for easy sentence constructions it is weaker, but when these are reused nonetheless, reuse goes together with faster initiation of the description. In the brain, too, repeating syntactic structures makes their processing easier: certain brain regions involved in processing syntactic structures, one in the frontal lobe and one in the temporal lobe, are highly active the first time a syntactic structure is processed and less active the second time. Strikingly, these regions support the processing of syntactic structures both during speaking and during listening.

    Additional information

    full text via Radboud Repository
  • Seidlmayer, E., Galke, L., Melnychuk, T., Schultz, C., Tochtermann, K., & Förstner, K. U. (2019). Take it personally - A Python library for data enrichment for infometrical applications. In M. Alam, R. Usbeck, T. Pellegrini, H. Sack, & Y. Sure-Vetter (Eds.), Proceedings of the Posters and Demo Track of the 15th International Conference on Semantic Systems co-located with 15th International Conference on Semantic Systems (SEMANTiCS 2019).

    Abstract

    Like every other social sphere, science is influenced by individual characteristics of researchers. However, for investigations on scientific networks, only little data about the social background of researchers, e.g. social origin, gender, affiliation etc., is available. This paper introduces “Take it personally - TIP”, a conceptual model and library currently under development, which aims to support the semantic enrichment of publication databases with semantically related background information which resides elsewhere in the (semantic) web, such as Wikidata. The supplementary information enriches the original information in the publication databases and thus facilitates the creation of complex scientific knowledge graphs. Such enrichment helps to improve the scientometric analysis of scientific publications, as it can take the social backgrounds of researchers into account, and helps in understanding the social structure of research communities.
  • Seijdel, N., Sakmakidis, N., De Haan, E. H. F., Bohte, S. M., & Scholte, H. S. (2019). Implicit scene segmentation in deeper convolutional neural networks. In Proceedings of the 2019 Conference on Cognitive Computational Neuroscience (pp. 1059-1062). doi:10.32470/CCN.2019.1149-0.

    Abstract

    Feedforward deep convolutional neural networks (DCNNs) are matching and even surpassing human performance on object recognition. This performance suggests that activation of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Recent findings in humans, however, suggest that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations ('routines') that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to the performance of DCNNs with increasing depth, we here explored if, how, and when object information is differentiated from the backgrounds objects appear on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them, by adding noise, manipulating background congruence and systematically occluding parts of the image. Results indicated less distinction between object and background features for more shallow networks. For those networks, we observed a benefit of training on segmented objects (as compared to unsegmented objects). Overall, deeper networks trained on natural (unsegmented) scenes seem to perform implicit 'segmentation' of the objects from their background, possibly by improved selection of relevant features.
  • Senft, G. (2012). Das Erlernen von Fremdsprachen als Voraussetzung für erfolgreiche Feldforschung. In J. Kruse, S. Bethmann, D. Niermann, & C. Schmieder (Eds.), Qualitative Interviewforschung in und mit fremden Sprachen: Eine Einführung in Theorie und Praxis (pp. 121-135). Weinheim: Beltz Juventa.
  • Senft, G. (1998). 'Noble Savages' and the 'Islands of Love': Trobriand Islanders in 'Popular Publications'. In J. Wassmann (Ed.), Pacific answers to Western hegemony: Cultural practices of identity construction (pp. 119-140). Oxford: Berg Publishers.
  • Senft, G. (2012). 67 Wörter + 1 Foto für Roland Posner. In E. Fricke, & M. Voss (Eds.), 68 Zeichen für Roland Posner - Ein semiotisches Mosaik / 68 signs for Roland Posner - A semiotic mosaic (pp. 473-474). Tübingen: Stauffenberg Verlag.
  • Senft, G. (2012). Ethnolinguistik. In B. Beer, & H. Fischer (Eds.), Ethnologie - Einführung und Überblick. 7. überarbeitete und erweiterte Auflage (pp. 271-286). Berlin: Reimer.
  • Senft, G. (2012). Referring to colour and taste in Kilivila: Stability and change in two lexical domains of sensual perception. In A. C. Schalley (Ed.), Practical theories and empirical practice (pp. 71-98). Amsterdam: John Benjamins.

    Abstract

    This chapter first compares data collected on Kilivila colour terms in 1983 with data collected in 2008. The Kilivila lexicon has changed from a typical stage IIIb into a stage VII colour term lexicon (Berlin and Kay 1969). The chapter then compares data on the Kilivila taste vocabulary collected in 1982/83 with data collected in 2008. No substantial change was found. Finally the chapter compares the 2008 results on taste terms with a paper on the taste vocabulary of the Torres Strait Islanders published in 1904 by Charles S. Myers. Kilivila provides evidence that traditional terms used for talking about colour and terms used to refer to tastes have remained relatively stable over time.
  • Senft, G. (2019). Rituelle Kommunikation. In F. Liedtke, & A. Tuchen (Eds.), Handbuch Pragmatik (pp. 423-430). Stuttgart: J. B. Metzler. doi:10.1007/978-3-476-04624-6_41.

    Abstract

    Linguistics has taken over the term and the concept of 'ritual communication' from comparative behavioural research. Human ethologists distinguish a number of so-called 'expressive movements', which are manifested in facial expression, gesture, interpersonal distance (proxemics), and body posture (kinesics). Many of these expressive movements have developed into specific signals. Ethologists define ritualization as the modification of behaviours in the service of signal formation. Behaviours that have been ritualized into signals are rituals. In principle, any behaviour can become a signal, either in the course of evolution or through conventions that hold in a particular community which has culturally developed such signals and whose members pass them on and learn them.
  • Senft, G. (1998). Zeichenkonzeptionen in Ozeanien. In R. Posner, T. Robering, & T. Sebeok (Eds.), Semiotics: A handbook on the sign-theoretic foundations of nature and culture (Vol. 2) (pp. 1971-1976). Berlin: de Gruyter.
  • Seuren, P. A. M. (2012). Does a leaking O-corner save the square? In J.-Y. Béziau, & D. Jacquette (Eds.), Around and beyond the square of opposition (pp. 129-138). Basel: Springer.

    Abstract

    It has been known at least since Abelard (12th century) that the classic Square of Opposition suffers from so-called undue existential import (UEI) in that this system of predicate logic collapses when the class denoted by the restrictor predicate is empty. It is usually thought that this mistake was made by Aristotle himself, but it has now become clear that this is not so: Aristotle did not have the Conversions but only one-way entailments, which ‘saves’ the Square. The error of UEI was introduced by his later commentators, especially Apuleius and Boethius. Abelard restored Aristotle’s original logic. After Abelard, some 14th- and 15th-century philosophers (mainly Buridan and Ockham) meant to save the Square by declaring the O-corner true when the restrictor class is empty. This ‘leaking O-corner analysis’, or LOCA, was taken up again around 1950 by some American philosopher-logicians, who now have a fairly large following. LOCA does indeed save the Square from logical disaster, but modern analysis shows that this makes it impossible to give a uniform semantic definition of the quantifiers, which thus become ambiguous—an intolerable state of affairs in logic. Klima (Ars Artium, Essays in Philosophical Semantics, Medieval and Modern, Institute of Philosophy, Hungarian Academy of Sciences, Budapest, 1988) and Parsons (in Zalta (ed.), The Stanford Encyclopedia of Philosophy, http://plato.standford.edu/entries/square/, 2006; Logica Univers. 2:3–11, 2008) have tried to circumvent this problem by introducing a ‘zero’ element into the ontology, standing for non-existing entities and yielding falsity when used for variable substitution. LOCA, both without and with the zero element, is critically discussed and rejected on internal logical and external ontological grounds.
  • Seuren, P. A. M. (1976). Echo, een studie in negatie. In G. Koefoed, & A. Evers (Eds.), Lijnen van taaltheoretisch onderzoek: Een bundel oorspronkelijke artikelen aangeboden aan prof. dr. H. Schultink (pp. 160-184). Groningen: Tjeenk Willink.
  • Seuren, P. A. M. (1986). Anaphora resolution. In T. Myers, K. Brown, & B. McGonigle (Eds.), Reasoning and discourse processes (pp. 187-207). London: Academic Press.
  • Seuren, P. A. M. (1984). Logic and truth-values in language. In F. Landman, & F. Veltman (Eds.), Varieties of formal semantics: Proceedings of the fourth Amsterdam colloquium (pp. 343-364). Dordrecht: Foris.
  • Seuren, P. A. M., & Wekker, H. (1986). Semantic transparency as a factor in Creole genesis. In P. Muysken, & N. Smith (Eds.), Substrata versus universals in Creole genesis: Papers from the Amsterdam Creole Workshop, April 1985 (pp. 57-70). Amsterdam: Benjamins.
  • Seuren, P. A. M. (1998). Towards a discourse-semantic account of donkey anaphora. In S. Botley, & T. McEnery (Eds.), New Approaches to Discourse Anaphora: Proceedings of the Second Colloquium on Discourse Anaphora and Anaphor Resolution (DAARC2) (pp. 212-220). Lancaster: Universiy Centre for Computer Corpus Research on Language, Lancaster University.
  • Shen, C., & Janse, E. (2019). Articulatory control in speech production. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 2533-2537). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
  • Shen, C., Cooke, M., & Janse, E. (2019). Individual articulatory control in speech enrichment. In M. Ochmann, M. Vorländer, & J. Fels (Eds.), Proceedings of the 23rd International Congress on Acoustics (pp. 5726-5730). Berlin: Deutsche Gesellschaft für Akustik.

    Abstract

    Individual talkers may use various strategies to enrich their speech while speaking in noise (i.e., Lombard speech) to improve their intelligibility. The resulting acoustic-phonetic changes in Lombard speech vary amongst different speakers, but it is unclear what causes these talker differences, and what impact these differences have on intelligibility. This study investigates the potential role of articulatory control in talkers' Lombard speech enrichment success. Seventy-eight speakers read out sentences in both their habitual style and in a condition where they were instructed to speak clearly while hearing loud speech-shaped noise. A diadochokinetic (DDK) speech task, which requires speakers to repetitively produce word or non-word sequences as accurately and as rapidly as possible, was used to quantify their articulatory control. Individuals' predicted intelligibility in both speaking styles (presented at -5 dB SNR) was measured using an acoustic glimpse-based metric: the High-Energy Glimpse Proportion (HEGP). Speakers' HEGP scores show a clear effect of speaking condition (better HEGP scores in the Lombard than habitual condition), but no simple effect of articulatory control on HEGP, nor an interaction between speaking condition and articulatory control. This indicates that individuals' speech enrichment success as measured by the HEGP metric was not predicted by DDK performance.
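    The glimpsing idea behind the HEGP metric can be made concrete with a small sketch. This is not the authors' implementation of HEGP; it computes a plain glimpse proportion (the fraction of time-frequency cells in which the speech exceeds the masker by a local SNR threshold), which is the general principle such metrics build on. The filterbank settings, the 3 dB threshold and the toy signals are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

def glimpse_proportion(speech, noise, fs, snr_threshold_db=3.0):
    """Fraction of time-frequency cells where speech exceeds the noise
    by at least `snr_threshold_db` (a generic glimpse metric, not HEGP itself)."""
    # Short-time spectra of the clean speech and of the masking noise.
    _, _, S = stft(speech, fs=fs, nperseg=512, noverlap=384)
    _, _, N = stft(noise, fs=fs, nperseg=512, noverlap=384)
    frames = min(S.shape[1], N.shape[1])
    speech_db = 20 * np.log10(np.abs(S[:, :frames]) + 1e-12)
    noise_db = 20 * np.log10(np.abs(N[:, :frames]) + 1e-12)
    glimpsed = (speech_db - noise_db) >= snr_threshold_db
    return glimpsed.mean()

# Toy example: a "sentence" presented at -5 dB SNR relative to the masker.
fs = 16000
rng = np.random.default_rng(0)
speech = rng.standard_normal(fs * 2)                   # stand-in for a recorded sentence
noise = rng.standard_normal(fs * 2) * 10 ** (5 / 20)   # masker 5 dB above the speech
print(f"glimpse proportion: {glimpse_proportion(speech, noise, fs):.3f}")
```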
  • Sjerps, M. J., McQueen, J. M., & Mitterer, H. (2012). Extrinsic normalization for vocal tracts depends on the signal, not on attention. In Proceedings of INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association (pp. 394-397).

    Abstract

    When perceiving vowels, listeners adjust to speaker-specific vocal-tract characteristics (such as F1) through "extrinsic vowel normalization". This effect is observed as a shift in the location of categorization boundaries of vowel continua. Similar effects have been found with non-speech. Non-speech materials, however, have consistently led to smaller effect-sizes, perhaps because of a lack of attention to non-speech. The present study investigated this possibility. Non-speech materials that had previously been shown to elicit reduced normalization effects were tested again, with the addition of an attention manipulation. The results show that increased attention does not lead to increased normalization effects, suggesting that vowel normalization is mainly determined by bottom-up signal characteristics.
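    The boundary-shift measure referred to in this abstract can be illustrated with a short, hypothetical sketch (not taken from the paper): fit a logistic psychometric function to the responses from each context condition and compare the 50% crossover points. The response proportions below are invented for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, boundary, slope):
    """Logistic function giving P(one response category) along a vowel continuum."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

steps = np.arange(1, 8)  # 7-step vowel continuum
# Invented response proportions after two different precursor contexts.
p_context_a = np.array([0.02, 0.05, 0.15, 0.45, 0.80, 0.95, 0.99])
p_context_b = np.array([0.01, 0.03, 0.08, 0.25, 0.60, 0.90, 0.98])

(b_a, _), _ = curve_fit(psychometric, steps, p_context_a, p0=[4.0, 1.0])
(b_b, _), _ = curve_fit(psychometric, steps, p_context_b, p0=[4.0, 1.0])
print(f"category boundary shift: {b_b - b_a:.2f} continuum steps")
```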
  • Sjerps, M. J., & Chang, E. F. (2019). The cortical processing of speech sounds in the temporal lobe. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 361-379). Cambridge, MA: MIT Press.
  • Sloetjes, H., & Somasundaram, A. (2012). ELAN development, keeping pace with communities' needs. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 219-223). European Language Resources Association (ELRA).

    Abstract

    ELAN is a versatile multimedia annotation tool that is being developed at the Max Planck Institute for Psycholinguistics. About a decade ago it emerged out of a number of corpus tools and utilities and it has been extended ever since. This paper focuses on the efforts made to ensure that the application keeps up with the growing needs of that era in linguistics and multimodality research; growing needs in terms of length and resolution of recordings, the number of recordings made and transcribed and the number of levels of annotation per transcription.
  • Sollis, E. (2019). A network of interacting proteins disrupted in language-related disorders. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Stassen, H., & Levelt, W. J. M. (1976). Systemen, automaten en grammatica's. In J. Michon, E. Eijkman, & L. De Klerk (Eds.), Handboek der psychonomie (pp. 100-127). Deventer: Van Loghum Slaterus.
  • Stehouwer, H., Durco, M., Auer, E., & Broeder, D. (2012). Federated search: Towards a common search infrastructure. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 3255-3259). European Language Resources Association (ELRA).

    Abstract

    Within scientific institutes there exist many language resources. These resources are often quite specialized and relatively unknown. The current infrastructural initiatives try to tackle this issue by collecting metadata about the resources and establishing centers with stable repositories to ensure the availability of the resources. It would be beneficial if the researcher could, by means of a simple query, determine which resources and which centers contain information useful to his or her research, or even work on a set of distributed resources as a virtual corpus. In this article we propose an architecture for a distributed search environment allowing researchers to perform searches in a set of distributed language resources.
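    A minimal sketch of the fan-out idea behind such a distributed search environment is given below. It is not the architecture proposed in the paper; the endpoint URLs and the JSON response shape are placeholder assumptions. The point is simply that one query is sent to several centres in parallel and the hit lists are merged into a single result set.

```python
import concurrent.futures
import requests

# Hypothetical search endpoints of three language-resource centres.
ENDPOINTS = [
    "https://centre-a.example.org/search",
    "https://centre-b.example.org/search",
    "https://centre-c.example.org/search",
]

def query_endpoint(url, query):
    """Send the query to one centre and return its hits (assumed JSON list)."""
    try:
        resp = requests.get(url, params={"q": query}, timeout=10)
        resp.raise_for_status()
        return [{"centre": url, **hit} for hit in resp.json().get("hits", [])]
    except requests.RequestException:
        return []  # a failing centre should not break the federated search

def federated_search(query):
    """Fan the query out to all endpoints in parallel and merge the results."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(ENDPOINTS)) as pool:
        result_lists = pool.map(lambda url: query_endpoint(url, query), ENDPOINTS)
    return [hit for hits in result_lists for hit in hits]

print(len(federated_search("lexical decision Dutch")))
```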
  • Stivers, T., & Rossano, F. (2012). Mobilizing response in interaction: A compositional view of questions. In J. P. De Ruiter (Ed.), Questions: Formal, functional and interactional perspectives (pp. 58-80). New York: Cambridge University Press.
  • Stivers, T. (2012). Language socialization in children’s medical encounters. In A. Duranti, E. Ochs, & B. Schieffelin (Eds.), The handbook of language socialization (pp. 247-268). Malden, MA: Wiley-Blackwell.

    Abstract

    Research on child language socialization has its roots in understanding the ways that adults and other caregivers interact with children in mundane social life and how these practices might enculturate the child into local communicative norms and ways of thinking (Brown 1998; Clancy 1999; Danziger 1971; de León 1998; Garrett and Baquedano-López 2002; Heath 1983; Ochs and Schieffelin 1983, 1984). A second primary area of interest has been the effect of different socialization practices on more formal educational settings (Heath 1983; Howard 2004; Michaels 1981; Moore 2006, this volume; Philips 1983; Rogoff et al. 2003). However, as discussed in other contributions to this volume, language socialization extends into many other facets of life. Just as being a member of a cultural group or being a student requires socialization into the associated rights and obligations, so too does the role of medical patient or client. For instance, patients must understand how to explain their problems (Halkowski 2006; Heritage and Robinson 2006); what information they should know about their bodies, their treatment, their life, and their medical history; and where to look during examinations (Heath 1986), to name but a few of the norm-governed aspects of medical interaction. Physicians play an important role in a child's socialization into the patient role by providing
  • Stolker, C. J. J. M., & Poletiek, F. H. (1998). Smartengeld - Wat zijn we eigenlijk aan het doen? Naar een juridische en psychologische evaluatie. In F. Stadermann (Ed.), Bewijs en letselschade (pp. 71-86). Lelystad, The Netherlands: Koninklijke Vermande.
  • Sumer, B., Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2012). Development of locative expressions by Turkish deaf and hearing children: Are there modality effects? In A. K. Biller, E. Y. Chung, & A. E. Kimball (Eds.), Proceedings of the 36th Annual Boston University Conference on Language Development (BUCLD 36) (pp. 568-580). Boston: Cascadilla Press.
  • Suppes, P., Böttner, M., & Liang, L. (1998). Machine Learning of Physics Word Problems: A Preliminary Report. In A. Aliseda, R. van Glabbeek, & D. Westerståhl (Eds.), Computing Natural Language (pp. 141-154). Stanford, CA, USA: CSLI Publications.
  • Svantesson, J.-O., Burenhult, N., Holmer, A., Karlsson, A., & Lundström, H. (Eds.). (2012). Humanities of the lesser-known: New directions in the description, documentation and typology of endangered languages and musics [Special Issue]. Language Documentation and Description, 10.
  • Ten Bosch, L., Mulder, K., & Boves, L. (2019). Phase synchronization between EEG signals as a function of differences between stimuli characteristics. In Proceedings of Interspeech 2019 (pp. 1213-1217). doi:10.21437/Interspeech.2019-2443.

    Abstract

    The neural processing of speech leads to specific patterns in the brain which can be measured as, e.g., EEG signals. When properly aligned with the speech input and averaged over many tokens, the Event Related Potential (ERP) signal is able to differentiate specific contrasts between speech signals. Well-known effects relate to the difference between expected and unexpected words, in particular in the N400, while effects in N100 and P200 are related to attention and acoustic onset effects. Most EEG studies deal with the amplitude of EEG signals over time, sidestepping the effect of phase and phase synchronization. This paper investigates the relation between phase in the EEG signals measured in an auditory lexical decision task by Dutch participants listening to full and reduced English word forms. We show that phase synchronization takes place across stimulus conditions, and that the so-called circular variance is narrowly related to the type of contrast between stimuli.
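    The phase-synchronization measure described here can be sketched compactly. The code below illustrates the standard approach rather than the authors' exact pipeline: take the instantaneous phase of (band-passed) signals via the Hilbert transform, form phase differences, and summarize them with the circular variance, i.e. one minus the length of the mean resultant vector. The toy 10 Hz signals are invented for illustration.

```python
import numpy as np
from scipy.signal import hilbert

def circular_variance(phase_diffs):
    """Circular variance of phase differences: 1 - |mean resultant vector|.
    0 means perfect phase locking, 1 means uniformly scattered phases."""
    return 1.0 - np.abs(np.mean(np.exp(1j * phase_diffs)))

def phase_locking(sig_a, sig_b):
    """Phase synchronization between two (already band-passed) signals."""
    phase_a = np.angle(hilbert(sig_a))
    phase_b = np.angle(hilbert(sig_b))
    return circular_variance(phase_a - phase_b)

# Toy example: two noisy signals sharing a 10 Hz component.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)
a = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
b = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.standard_normal(t.size)
print(f"circular variance: {phase_locking(a, b):.3f}")  # low value = strong locking
```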
  • Ten Bosch, L., & Scharenborg, O. (2012). Modeling cue trading in human word recognition. In Proceedings of INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association (pp. 2003-2006).

    Abstract

    Classical phonetic studies have shown that acoustic-articulatory cues can be interchanged without affecting the resulting phoneme percept (‘cue trading’). Cue trading has so far mainly been investigated in the context of phoneme identification. In this study, we investigate cue trading in word recognition, because words are the units of speech through which we communicate. This paper aims to provide a method to quantify cue trading effects by using a computational model of human word recognition. This model takes the acoustic signal as input and represents speech using articulatory feature streams. Importantly, it allows cue trading and underspecification. Its set-up is inspired by the functionality of Fine-Tracker, a recent computational model of human word recognition. This approach makes it possible, for the first time, to quantify cue trading in terms of a trade-off between features and to investigate cue trading in the context of a word recognition task.
  • Ter Bekke, M., Ozyurek, A., & Ünal, E. (2019). Speaking but not gesturing predicts motion event memory within and across languages. In A. Goel, C. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2940-2946). Montreal, QB: Cognitive Science Society.

    Abstract

    In everyday life, people see, describe and remember motion events. We tested whether the type of motion event information (path or manner) encoded in speech and gesture predicts which information is remembered and if this varies across speakers of typologically different languages. We focus on intransitive motion events (e.g., a woman running to a tree) that are described differently in speech and co-speech gesture across languages, based on how these languages typologically encode manner and path information (Kita & Özyürek, 2003; Talmy, 1985). Speakers of Dutch (n = 19) and Turkish (n = 22) watched and described motion events. With a surprise (i.e. unexpected) recognition memory task, memory for manner and path components of these events was measured. Neither Dutch nor Turkish speakers’ memory for manner went above chance levels. However, we found a positive relation between path speech and path change detection: participants who described the path during encoding were more accurate at detecting changes to the path of an event during the memory task. In addition, the relation between path speech and path memory changed with native language: for Dutch speakers encoding path in speech was related to improved path memory, but for Turkish speakers no such relation existed. For both languages, co-speech gesture did not predict memory. We discuss the implications of these findings for our understanding of the relations between speech, gesture, type of encoding in language and memory.
  • Thomassen, A., & Kempen, G. (1976). Geheugen. In J. A. Michon, E. Eijkman, & L. F. De Klerk (Eds.), Handboek der Psychonomie (pp. 354-387). Deventer: Van Loghum Slaterus.
  • Thomaz, A. L., Lieven, E., Cakmak, M., Chai, J. Y., Garrod, S., Gray, W. D., Levinson, S. C., Paiva, A., & Russwinkel, N. (2019). Interaction for task instruction and learning. In K. A. Gluck, & J. E. Laird (Eds.), Interactive task learning: Humans, robots, and agents acquiring new tasks through natural interactions (pp. 91-110). Cambridge, MA: MIT Press.
  • Troncoso Ruiz, A., Ernestus, M., & Broersma, M. (2019). Learning to produce difficult L2 vowels: The effects of awareness-raising, exposure and feedback. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1094-1098). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
  • Turco, G., & Gubian, M. (2012). L1 Prosodic transfer and priming effects: A quantitative study on semi-spontaneous dialogues. In Q. Ma, H. Ding, & D. Hirst (Eds.), Proceedings of the 6th International Conference on Speech Prosody (pp. 386-389). International Speech Communication Association (ISCA).

    Abstract

    This paper represents a pilot investigation of primed accentuation patterns produced by advanced Dutch speakers of Italian as a second language (L2). Contrastive accent patterns within prepositional phrases were elicited in a semi-spontaneous dialogue entertained with a confederate native speaker of Italian. The aim of the analysis was to compare learners' contrastive accentual configurations induced by the confederate speaker's prime against those produced by Italian and Dutch natives in the same testing conditions. F0 and speech rate data were analysed by applying powerful data-driven techniques available in the Functional Data Analysis statistical framework. Results reveal different accentual configurations in L1 and L2 Italian in response to the confederate's prime. We conclude that learners' accentual patterns mirror those produced by their L1 control group (prosodic-transfer hypothesis), although the hypothesis of a transient priming effect on learners' choice of contrastive patterns cannot be completely ruled out.
  • Udden, J. (2012). Language as structured sequences: a causal role of Broca's region in sequence processing. PhD Thesis, Karolinska Institutet, Stockholm.

    Abstract

    In this thesis I approach language as a neurobiological system. I defend a sequence processing perspective on language and on the function of Broca's region in the left inferior frontal gyrus (LIFG). This perspective provides a way to express common structural aspects of language, music and action, which all engage the LIFG. It also facilitates the comparison of human language and structured sequence processing in animals. Research on infants, song-birds and non-human primates suggests an interesting role for non-adjacent dependencies in language acquisition and the evolution of language. In a series of experimental studies using a sequence processing paradigm called artificial grammar learning (AGL), we have investigated sequences with adjacent and non-adjacent dependencies. Our behavioral and transcranial magnetic stimulation (TMS) studies show that healthy subjects successfully discriminate between grammatical and non-grammatical sequences after having acquired aspects of a grammar with nested or crossed non-adjacent dependencies implicitly. There were no indications of separate acquisition/processing mechanisms for sequence processing of adjacent and non-adjacent dependencies, although acquisition of non-adjacent dependencies takes more time. In addition, we studied the causal role of Broca's region in processing artificial syntax. Although syntactic processing has already been robustly correlated with activity in Broca's region, the causal role of Broca's region in syntactic processing, in particular syntactic comprehension, has been unclear. Previous lesion studies have shown that a lesion in Broca's region is neither a necessary nor sufficient condition to induce e.g. syntactic deficits. Subsequent to transcranial magnetic stimulation of Broca's region, discrimination of grammatical sequences with non-adjacent dependencies from non-grammatical sequences was impaired, compared to when a language-irrelevant control region (vertex) was stimulated. Two additional experiments show perturbation of discrimination performance for grammars with adjacent dependencies after stimulation of Broca's region. Together, these results support the view that Broca's region plays a causal role in implicit structured sequence processing.
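    The artificial-grammar-learning paradigm mentioned in this abstract can be illustrated with a toy stimulus generator. The grammar below is not the one used in the thesis; it is a minimal sketch of sequences with nested or crossed non-adjacent dependencies between two word classes, plus matched violation sequences.

```python
import random

# Illustrative AGL stimulus generator: each 'a' element predicts one 'b' element
# at a non-adjacent position later in the sequence (a toy grammar, for illustration).
PAIRS = {"a1": "b1", "a2": "b2", "a3": "b3"}

def grammatical(n_deps, structure="nested"):
    """Build a sequence whose a-b dependencies are nested (a1 a2 b2 b1)
    or crossed (a1 a2 b1 b2)."""
    heads = random.choices(list(PAIRS), k=n_deps)
    tails = [PAIRS[h] for h in heads]
    if structure == "nested":
        tails = tails[::-1]
    return heads + tails

def ungrammatical(n_deps, structure="nested"):
    """Violate the grammar by swapping one dependent for a non-matching one."""
    seq = grammatical(n_deps, structure)
    i = random.randrange(n_deps, 2 * n_deps)
    seq[i] = random.choice([b for b in PAIRS.values() if b != seq[i]])
    return seq

random.seed(3)
print("grammatical (nested):  ", grammatical(2))
print("ungrammatical (nested):", ungrammatical(2))
```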
  • Van Dooren, A., Tulling, M., Cournane, A., & Hacquard, V. (2019). Discovering modal polysemy: Lexical aspect might help. In M. Brown, & B. Dailey (Eds.), BUCLD 43: Proceedings of the 43rd annual Boston University Conference on Language Development (pp. 203-216). Somerville, MA: Cascadilla Press.
  • Van Valin Jr., R. D., & Guerrero, L. (2012). De sujetos, pivotes y controladores: El argumento sintácticamente privilegiado. In R. Marial, L. Guerrero, & C. González Vergara (Eds.), El funcionalismo en la teoría lingüística: La gramática del papel y la referencia (pp. 247-267). Madrid: Akal.

    Abstract

    Translated and expanded version of 'Privileged syntactic arguments, pivots and controllers'.
  • Van Berkum, J. J. A., & Nieuwland, M. S. (2019). A cognitive neuroscience perspective on language comprehension in context. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 429-442). Cambridge, MA: MIT Press.
  • Van Geenhoven, V. (1998). On the Argument Structure of some Noun Incorporating Verbs in West Greenlandic. In M. Butt, & W. Geuder (Eds.), The Projection of Arguments - Lexical and Compositional Factors (pp. 225-263). Stanford, CA, USA: CSLI Publications.
  • Van Valin Jr., R. D. (1998). The acquisition of WH-questions and the mechanisms of language acquisition. In M. Tomasello (Ed.), The new psychology of language: Cognitive and functional approaches to language structure (pp. 221-249). Mahwah, New Jersey: Erlbaum.
  • Van Berkum, J. J. A. (2012). The electrophysiology of discourse and conversation. In M. J. Spivey, K. McRae, & M. F. Joanisse (Eds.), The Cambridge handbook of psycholinguistics (pp. 589-614). New York: Cambridge University Press.

    Abstract

    Introduction: What’s happening in the brains of two people having a conversation? One reasonable guess is that in the fMRI scanner we’d see most of their brains light up. Another is that their EEG will be a total mess, reflecting dozens of interacting neuronal systems. Conversation recruits all of the basic language systems reviewed in this book. It also heavily taxes cognitive systems more likely to be found in handbooks of memory, attention and control, or social cognition (Brownell & Friedman, 2001). With most conversations going beyond the single utterance, for instance, they place a heavy load on episodic memory, as well as on the systems that allow us to reallocate cognitive resources to meet the demands of a dynamically changing situation. Furthermore, conversation is a deeply social and collaborative enterprise (Clark, 1996; this volume), in which interlocutors have to keep track of each other's state of mind and coordinate on such things as taking turns, establishing common ground, and the goals of the conversation.
  • Van Valin Jr., R. D. (2012). Some issues in the linking between syntax and semantics in relative clauses. In B. Comrie, & Z. Estrada-Fernández (Eds.), Relative Clauses in languages of the Americas: A typological overview (pp. 47-64). Amsterdam: Benjamins.

    Abstract

    Relative clauses present an interesting challenge for theories of the syntax-semantics interface, because one element functions simultaneously in the matrix and relative clauses. The exact nature of the challenge depends on whether the relative clause is externally-headed or internally-headed. Standard analyses of relative clauses are grounded in the analysis of English-type externally-headed constructions involving a relative pronoun, e.g. The horse which the man bought was a good horse, despite its typological rarity, and such accounts typically involve movement rules, both overt and covert, and phonologically null elements. The analysis of internally-headed relative clauses often involves the positing of an abstract structure including a null external head, with covert movement of the internal head to that position. The purpose of this paper is to show that the essential features of both types of relative clause can be captured in a syntactic theory that eschews movement rules and phonologically null elements, Role and Reference Grammar. It will be argued that a single set of linking principles can handle the syntax-to-semantics linking for both types.

    Keywords: externally-headed relative clauses; internally-headed relative clauses; Role and Reference Grammar; linking syntax and semantics
  • Van Uytvanck, D., Stehouwer, H., & Lampen, L. (2012). Semantic metadata mapping in practice: The Virtual Language Observatory. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 1029-1034). European Language Resources Association (ELRA).

    Abstract

    In this paper we present the Virtual Language Observatory (VLO), a metadata-based portal for language resources. It is completely based on the Component Metadata (CMDI) and ISOcat standards. This approach allows for the use of heterogeneous metadata schemas while maintaining the semantic compatibility. We describe the metadata harvesting process, based on OAI-PMH, and the conversion from several formats (OLAC, IMDI and the CLARIN LRT inventory) to their CMDI counterpart profiles. Then we focus on some post-processing steps to polish the harvested records. Next, the ingestion of the CMDI files into the VLO facet browser is described. We also include an overview of the changes since the first version of the VLO, based on user feedback from the CLARIN community. Finally there is an overview of additional ideas and improvements for future versions of the VLO.
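    The OAI-PMH harvesting step mentioned here follows the protocol's standard ListRecords/resumption-token loop. The sketch below is a generic illustration of that loop, not the VLO harvester itself; the endpoint URL in the usage example and the 'cmdi' metadata prefix are placeholder assumptions.

```python
import requests
import xml.etree.ElementTree as ET

OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

def harvest(base_url, metadata_prefix="cmdi"):
    """Harvest all records from an OAI-PMH endpoint, following resumption tokens."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        root = ET.fromstring(requests.get(base_url, params=params, timeout=30).content)
        for record in root.iterfind(".//oai:record", OAI_NS):
            yield record
        token = root.find(".//oai:resumptionToken", OAI_NS)
        if token is None or not (token.text or "").strip():
            break  # no more pages
        # Subsequent requests carry only the verb and the resumption token.
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

# Usage (placeholder endpoint):
# for rec in harvest("https://repository.example.org/oai", metadata_prefix="cmdi"):
#     process(rec)
```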
  • Van Rhijn, J. R. (2019). The role of FoxP2 in striatal circuitry. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Vernes, S. C. (2019). Neuromolecular approaches to the study of language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 577-593). Cambridge, MA: MIT Press.
  • Viebahn, M. C., Ernestus, M., & McQueen, J. M. (2012). Co-occurrence of reduced word forms in natural speech. In Proceedings of INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association (pp. 2019-2022).

    Abstract

    This paper presents a corpus study that investigates the co-occurrence of reduced word forms in natural speech. We extracted Dutch past participles from three different speech registers and investigated the influence of several predictor variables on the presence and duration of schwas in prefixes and /t/s in suffixes. Our results suggest that reduced word forms tend to co-occur even if we partial out the effect of speech rate. The implications of our findings for episodic and abstractionist models of lexical representation are discussed.
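    The 'partialling out the effect of speech rate' step described in this abstract amounts to adding speech rate as a covariate when predicting one reduction from another. The sketch below is not the authors' analysis; it fits a logistic regression on invented toy data in which /t/ absence is predicted from schwa absence with speech rate as a covariate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented toy data: one row per past-participle token.
rng = np.random.default_rng(2)
n = 500
speech_rate = rng.normal(5.0, 1.0, n)        # syllables per second
schwa_present = rng.binomial(1, 0.6, n)      # schwa realized in the prefix?
# Simulate co-occurrence: /t/ reduction more likely when the schwa is also reduced,
# over and above a shared effect of speech rate.
logit = -1.0 + 0.8 * (1 - schwa_present) + 0.4 * (speech_rate - 5.0)
t_absent = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"t_absent": t_absent,
                   "schwa_absent": 1 - schwa_present,
                   "speech_rate": speech_rate})

# Does schwa reduction predict /t/ reduction once speech rate is partialled out?
model = smf.logit("t_absent ~ schwa_absent + speech_rate", data=df).fit(disp=False)
print(model.summary())
```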
  • De Vos, C. (2012). Sign-spatiality in Kata Kolok: How a village sign language in Bali inscribes its signing space. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    In a small village in the north of Bali called Bengkala, relatively many people inherit deafness. The Balinese therefore refer to this village as Desa Kolok, which means 'deaf village'. Connie de Vos studied Kata Kolok, the sign language of this village, and the ways in which the language recruits space to talk about both spatial and non-spatial matters. The small village community Bengkala in the north of Bali has almost 3,000 inhabitants. Of all the inhabitants, 57% use sign language, with varying degrees of fluency. But of this signing community (between 1,200 and 1,800 signers, depending on your definition of 'signer'), only 4% are deaf. So, not only do the deaf people of Bengkala use the sign language Kata Kolok, but also the majority of the hearing population.
    "I've worked with deaf people from all over Asia, Europe, and also some signers in America," says Connie de Vos of MPI's Language and Cognition Department, and Centre for Language Studies (RU). "What sets apart this particular deaf village is that deaf individuals are highly integrated within the village clans. There is really a huge proportion of hearing signers." The sign language currently functions in all major aspects of village life and has been acquired from birth by multiple generations of deaf, native signers. According to De Vos, Kata Kolok is a fully-fledged sign language in every sense of the word. As a collaborative project, she has initiated inclusive deaf education within the village and now Kata Kolok is used as the primary language of instruction. De Vos' primary finding is that Kata Kolok discourse uses a different system of referring to space than other sign languages. Spatial relations are represented by a so-called "absolute frame of reference", based on geographic locations and wind directions. "All sign languages, as we know, use relative constructions for spatial relations. They use signs comparable to words like 'left' and 'right' instead of 'east' and 'west'. Kata Kolok does the latter. Kata Kolok signers appear to have an internal compass to continually register their position in space."De Vos is the first sign linguist who has documented Kata Kolok extensively. She spent more than a year in the village and collected over a hundred hours of video material of spontaneous conversations. "One of the things I've noticed is that language doesn't really emerge out of nothing," she says. "Signers adopt a local gesture system and transform it into a new and much more systematic sign language. A lot of the signs refer to concepts they're familiar with. That's why hearing signers have no difficulties in picking up Kata Kolok. Kata Kolok unites the hearing and the deaf.

